God I hate Take Two.
He really got that dawg in him
On the low end, yearly OS upgrade compliance.
On the high end, dealing with the Kafkaesque whims of the App and Play stores randomly deciding to nuke your app (and thus business) from orbit as an “oopsie”
That uses a similar approach to the wake word technology, but slightly differently applied.
I am not a computer or ML scientist but this is the gist of how it was explained to me:
Your smartphone has a low-powered chip that connects to the microphone when the phone is idle and runs a local AI model (this is how it works offline) that asks one thing: is this music, or is it not? Once that model decides it's music, it wakes up the main CPU, which compares a snippet of that audio against a database of audio snippets corresponding to popular/likely songs, and then displays a song match.
To answer your questions about how it’s different:
- The song ID runs with system-level access, so it doesn't go through the normal audio permission system and thus doesn't trigger the microphone access notification.
- Because it uses a low-powered detection system rather than keeping the microphone always on, it runs with much less battery usage.
- As I understand it, it's a lot easier to tell whether audio seems like music than whether it's a specific intelligible word you may or may not be looking for, which you then have to process into language that's linked to metadata, etc.
- The on-device database starts out fairly small: what gets downloaded is a selection of audio patterns that the snippet is compared against. This set is rotated over time, and song ID apps often also let you send your audio snippet to the online megadatabases (Apple's music library/Google's music library) for better detection, but overall the data transfer isn't very noticeable. Searching for arbitrary hot words can't be nearly as optimized as assistant activations or music detection, especially if it's not built into the system.
And that’s about it…for now.
All of this is built on current knowledge of researchers analysing data traffic, OS functions, ML audio detection, mobile computation capabilities, and traditional mobile assistants. It’s possible that this may change radically in the near future, where arbitrary audio detection/collection somehow becomes much cheaper computationally, or generative AI makes it easy to extrapolate conversations from low quality audio snippets, or something else I don’t know yet.
This has gotta be the stupidest (and honestly ugliest) alternative to Reddit awards.
Thanks for the crudely gold-colored brass upvote, kind stranger.
They harsh the female vibes so bad, dude (gender neutral)
404 sprang up from ex-Motherboard writers when Vice Media went bankrupt this year. I think their articles are alright, because paying the bills as a journalist is very hard.
Dark Brandon is Awake.
I wish he was around more often than Sleepy Joe. 😔
Scale that up to a 4 ton production-ready consumer vehicle without introducing defects, I imagine
Bold move, Cotton…but idk how many more “bold” moves he can afford.
FaceDeer stop being an inhuman techbro about ai for 5 minutes challenge
It technically has a seekbar, but it’s very hard to find and/or might be obscured by their capricious A/B randomized testing
“We will block Cloudflare as they protect piracy sites and must be shut down!!!” - Austrian copyright court, clueless
I literally swore off Firefox for half a decade because they removed and broke Panorama with their engine rewrite, so yes.
They were gaming racists.