
I’ve spent quite a bit of time with both the Ray-Ban Meta Gen 1 and Gen 2 smart glasses over the past couple of years, and honestly, there’s a lot to love about them. One of my favorite features is the open-ear audio — it’s an absolute game-changer for taking calls or listening to music while biking, walking, or just moving through your day without tuning out the world. It feels natural, freeing even. The cameras aren’t half bad either; sure, you won’t be snapping DSLR-quality shots, but for casual, everyday moments, they get the job done nicely. Overall, it’s genuinely good tech — sleek, fun, and surprisingly practical.
But then there’s the one part Meta keeps trying to hype as revolutionary: the AI.
Unfortunately, the so-called “AI” element of Meta’s smart glasses remains the weakest link in the experience. While the glasses nail the basics — hands-free calls, effortless messaging, navigation prompts, and subtle notifications — none of those depend heavily on AI. They’re features that feel useful, intuitive, and well-executed on their own. It’s when you actually engage with Meta’s much-touted artificial intelligence that the whole experience starts to wobble.
Let’s be real — voice assistants have been around for over a decade, and they still haven’t evolved much beyond being moderately helpful. Whether it’s Alexa, Siri, or Google Assistant, they all stumble over natural conversation, context, and real-world understanding. Even with the promise of “next-gen” AI updates, they’re still mostly glorified shortcuts for playing music, setting reminders, or turning your lights on and off. And Meta AI — the voice assistant built into these Ray-Ban smart glasses — isn’t exactly breaking that cycle. It occasionally nails a query, but more often than not, it misses the mark or gives responses so vague they’re barely useful.
And that’s just the voice side of the story. The computer vision aspect — arguably the flashier AI feature — is even more disappointing. In theory, it’s a brilliant idea: point your glasses at something, ask a question, and let Meta AI identify or interpret what you’re seeing. For accessibility, like helping low-vision users read text or identify objects, this could be transformative — if it worked consistently. But for the average user, the results are underwhelming at best and hilariously off-target at worst.
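Meta hasn’t published what actually runs behind that point-and-ask flow, but the general shape is standard visual question answering: grab a camera frame, pair it with the transcribed question, and feed both to a multimodal model. As a rough illustration only, here’s a minimal sketch using an open-source VQA model (BLIP, via Hugging Face’s transformers library) as a stand-in; the model choice and the photo are my assumptions, not anything Meta has confirmed.

```python
# pip install transformers torch pillow
# Illustrative stand-in for a "point and ask" pipeline; Meta has not
# disclosed its actual models, so BLIP here is purely an assumption.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Hypothetical frame captured by the glasses' camera.
image = Image.open("beach_find.jpg").convert("RGB")
question = "What is this object?"

# Encode the image/question pair and generate a short free-text answer.
inputs = processor(image, question, return_tensors="pt")
output_ids = model.generate(**inputs)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Even in this stripped-down form, the model will happily return a confident one-word answer whether it’s right or not, which is essentially the failure mode I keep running into on the glasses.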
When testing Meta AI, I often find myself stretching to come up with situations where it might actually be useful. I’ll glance at a random object and ask, “Hey Meta, what’s that?” only to get a response like, “That’s a scooter!” Thanks, Meta — groundbreaking stuff. Worse are the moments when it confidently gets things wrong, like the time it insisted every shell I picked up at the beach was a shark tooth. Let’s just say the technology still has a long way to go before it’s reliable or genuinely impressive.
To be fair, Meta isn’t the only company banking heavily on AI-powered smart glasses. Google’s upcoming Android XR prototypes lean into computer vision too, hinting at a world where your glasses’ camera is always on — constantly scanning, analyzing, and “learning.” From a privacy standpoint, that sounds like a nightmare waiting to happen. Magic Leap and other competitors seem to be following a similar path, chasing the same overhyped AI dream rather than focusing on features that actually enhance everyday life.
Don’t get me wrong — I’m not anti-AI. Smart glasses have incredible potential. They could make information more ambient, communication more seamless, and experiences more immersive. But the obsession with slapping “AI” on the product label misses the point entirely. What users really need is not glasses that pretend to think, but glasses that actually help. Until tech companies shift focus from buzzwords to real-world functionality, AI in smart glasses will remain more gimmick than game-changer — and Meta’s latest attempt is proof of that.