So Galaxy’s ‘real-time interpreter’ claims to use AI, but honestly, it just feels like a basic speech-to-text system hooked up to Google Translate. That’s hardly new; Google Translate has had voice input forever.
Am I missing something here? Is there actually something innovative in this, or is it just Samsung throwing around ‘AI’ to hype up something that’s already been around for ages?
Samsung, like a lot of companies now, is playing on how confused people are about what AI really means. They can get away with calling almost anything AI since most people don’t fully get it.
Their ‘real-time interpreter’ doesn’t seem to need AI at all. It looks like a speech recognition system whose output just gets piped into Google Translate.
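Just to make the point concrete, the pipeline being described is basically two steps chained together. This is a hypothetical sketch, not Samsung’s actual implementation; the function names and canned outputs are made up for illustration, and a real system would call an on-device ASR model and a translation service instead of these stubs.

```python
def speech_to_text(audio_chunk):
    # Stand-in for an automatic speech recognition (ASR) step.
    # A real system would run an acoustic/language model on the audio here.
    return "hello, how are you?"

def translate(text, source="en", target="ko"):
    # Stand-in for a machine translation step (e.g., a Google Translate call).
    # Canned lookup purely for illustration; romanized Korean for readability.
    canned = {"hello, how are you?": "annyeonghaseyo, jal jinaeseyo?"}
    return canned.get(text, text)

def interpret_call_audio(audio_chunk):
    # The whole "interpreter": ASR output piped straight into translation.
    return translate(speech_to_text(audio_chunk))

print(interpret_call_audio(b"\x00\x01"))
```

The real engineering work would be in latency and turn-taking during a live call, not in the pipeline shape itself.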
Actually, Google Translate itself uses machine learning, which is a form of AI. But it’s not the same as what ChatGPT or other language models do. Maybe Samsung’s voice recognition uses AI too, but it’s probably not a fancy language model or anything.
This isn’t really groundbreaking innovation, since speech-to-text has existed for quite a long time (even Google Translate has had a voice input function for who knows how long).
Yeah, it’s not earth-shattering, but the twist here is the real-time part for calls. Google Translate has a conversation mode, so it’s more about connecting it to calls directly and maybe making it faster.
You know, if you listen closely, they call Galaxy AI ‘Advanced Intelligence’ instead of ‘Artificial Intelligence.’ Some of it actually uses generative AI, like for chat or image editing.