Gemini Now Knows You, But Old Problems Persist
Google's Gemini AI now remembers your past conversations and data through its Personal Intelligence feature. But is this truly a revolution, or just a new chapter in a familiar story?
When Google announced the activation of Gemini's Personal Intelligence feature last week, I thought they were taking a victory lap. After all, in the AI world they had gotten ahead of OpenAI, pushed image generation to a frighteningly persuasive level, and even managed to catch Apple's attention. This new feature seemed like the icing on the cake of their recent successes.
However, the reality is a bit more complex. Personal Intelligence allows Gemini to access your past conversations and data from other Google services like Gmail, Calendar, Photos, and your search history without asking for specific permission each time. Of course, it's entirely opt-in, and you decide which apps it can access. It's currently only in beta and available to AI Pro and Ultra subscribers.
Sounds Familiar, Doesn't It?
If this sounds familiar, you're right. Gemini already offered the option to connect to your Workspace apps, but that required more effort from the user: I always had to tell it explicitly when I wanted it to check something in my email or calendar. Now, if answering your question requires finding an email about a concert ticket, it can act on its own.
This is actually a pretty big step. If you have to specify each request individually and babysit the AI, it's no more useful than the timer-setting robot assistants we've been using for a decade.
After enabling Personal Intelligence, Gemini offers some suggestions to try, such as recommending books you might like based on your interests. The books it suggested to me were annoyingly accurate. Another conversation starter led to a long chat about how to deal with the grass in my backyard, which I hate and which the crows have torn to shreds.
Is Saying "Add to Calendar" Enough?
Gemini listed some native plant options I could consider, added reminders to my calendar based on the plan we agreed on, and compiled a shopping list in Keep that I could take to the hardware store.
Just a few months ago, when I asked Gemini to complete tasks like "Add this to my calendar," it routinely failed. So this is notable progress. But here's where the issue begins. Despite all this personalization and contextual understanding, Gemini still can't stop making basic mistakes, fabricating information, and stumbling on simple logic tests.
So what does this mean? You can have an assistant that accesses your personal data more deeply and knows you better. But this assistant may continue to give you answers that don't quite align with reality, or even mislead you. The reliability problem, though wrapped in fancy new features, remains firmly in place.
What's remarkable is this: Google has made a significant move in the race to personalize the user experience. But moves like this by tech giants also amplify users' concerns about privacy and data control. The phrase "entirely opt-in" sounds reassuring, but it's debatable how many users grant these permissions with a full understanding of what they're sharing.
So what do we end up with? A smarter, more contextual, and sometimes even more useful AI assistant. But also a tool that forces us to rethink the balance of trust, accuracy, and control we've had to renegotiate countless times before. Gemini's new capability is also an indicator of where technology is headed: the better machines know us, the more important the question of how much we can trust them becomes.