Enjoyed this? Also watch Mark trying and failing to answer a video call on the thing.
All in all, it’s a shame.
The ‘texting’ demo they did with Mark writing a text message via “pen motions” was pretty neat.
It feels like Google’s “Glass” vision finally coming to fruition; problem is, we’re not in 2013 anymore.
A product like this demands high trust in the company behind it to work, and Google, Facebook, etc., have evaporated any trust I had in them.
It’s a sad state of affairs; a product like this felt like the future back then. It used to be so exciting, but nowadays I can’t be excited for this stuff anymore… 👎

The AI being hyped right now is not AI at all. It’s really important that we all acknowledge this, that the world is selling itself a multi-billion-dollar lemon: predictive text engines that have nothing intelligent about them. They’re giant sorting machines, which is why they’re so good at identifying patterns in scientific research, and could genuinely advance medicine in wonderful ways. But what they cannot do is think, and as such, it’s a collective mass delusion that these systems have any use in our day-to-day lives beyond plagiarism.
Goddamn, a gaming outlet saying what the serious grown-up press should have been saying from the start!
I’m an old fart - I got my degree in CS in 1985, and I’ve been paying attention to the predictions and advancements in AI for a very long time. I have at least as much issue with the way people think and talk about it as the author, but probably less of an issue with it being called AI. Remember that for decades, the informal working definition of AI was “A computer doing anything that usually requires a human.” So for ages, they said we’d have AI if a computer could read a page of printed text out loud in English. That seemed almost unattainable when it was first talked about, but now it’s so trivial that no one would consider it AI.
People have tried to make definitions that are crisper than that, but few if any of those definitions require anything we’d call “thinking.” The frustrating thing is that the general public talks all the time about AI as if it’s conscious. Even when we’re talking about its flaws, we use words like “hallucinating,” which is something only thinking beings can do.
To me, LLMs are the worst of the bunch because to so many people they seem like they are (or could be) thinking entities. They respond to questions in a lifelike manner and can construct (extrapolate?) somewhat novel responses. But they’re also the least useful to us as a society. I’m much more interested in the Machine Learning applications for distilling gobs of data to develop new medicines or identify critical items in images that humans don’t have the mental bandwidth for. But LLMs get all the press.
No arguments with what you wrote.
I’d add that LLMs are increasingly the only way I can find useful technical information on anything anymore.
Of course this is solving a problem that shouldn’t fucking exist in the first place, and I still need to take that information back to a search engine to verify it and do actual research, which may be the point.
…
Search is so. Fucking. Broken.
Not that I’m taking a dig, but what are you looking for that only AI can find?
It’s less about only being able to find it with AI and more about being able to nail down definitions and understand the relationships between concepts well enough to know what to actually search for online.
This applies to anything remotely technical. Search results produce hot fucking garbage and the only way to find what you’re looking for these days is to know exactly what you’re looking for before you search. And even then it’s a crapshoot whether you’ll surface anything useful.
Ahh, I see what you mean. Yes, I run into that as well. Usually my first try is to search technical forums directly as opposed to Google, but AI can do the job as well.
Hey guys! Want a way to do everything you can do on your cell phone but 100 times worse and more expensive? Well check out this piece of shit!