OpenAI is taking a step toward its AI knowing more about you. For travelers, that could be crucial to an assistant that anticipates needs.
Summary
Big technology companies developing generative AI have said they envision their technology as a daily personal assistant, which includes handling travel tasks.
Google, Microsoft, and Apple have all had advantages over OpenAI because they also make many of their own devices. (Apple’s iPhone 16, for example, has a side button to activate an AI assistant.)
OpenAI, on the other hand, is mostly limited to its own app. But it wants to change that. The company is building its own device business with the help of Jony Ive, who led the design of the iPhone and other Apple devices. OpenAI recently said it plans to acquire the design firm Ive co-founded for $6.5 billion.
To win the race, each tech company is trying to make its AI fit into users’ lives as naturally and effortlessly as possible. That means building AI into phones, laptops, watches, TVs, augmented reality headsets, glasses, and more.
When devices are connected, a single digital assistant could use multiple sources—seeing what you see, hearing what you hear, and learning from your actions.
For travelers, the idea is that the digital assistant can act as a travel agent that anticipates needs rather than just following commands. Paired with fast-developing technology that allows AI to search and book on a user’s behalf, AI assistants could become the main way travelers purchase trips.
Maybe you took a virtual tour of Machu Picchu on your VR headset, so later you get an alert on your phone about discounted trips to Peru.
Maybe your smart glasses saw you lingering in front of a Monet painting at the museum, so you get an alert on your phone a few weeks later about a new impressionist exhibit.
Maybe the AI notices an outdoor tour scheduled on your calendar, so your watch warns you that it will rain, with suggestions for an indoor activity it knows you like based on past searches.
It’s easy to see how Apple might take this next step. The button on the side of the iPhone 16 activates the camera, and the AI can pull information from the internet about what it sees. And Apple has already developed the technology to connect devices through the Apple ID.
Google is also moving toward this vision. The company is reimagining Search, powered by AI with integrations across the user’s apps. Its Gemini AI is coming to cars, watches, TVs, smart glasses, and augmented reality headsets. The glasses should be able to search for nearby restaurants and provide walking directions through voice commands alone.
Apple and Google may not be able to move as fast as they would like, though. It’s difficult to fully overhaul an established lineup of products.
OpenAI, by contrast, has the opportunity to completely rethink how devices operate, building them from the ground up with AI at their core. But it also faces the challenge of ensuring its hardware can handle the computing power that AI requires.
Just because OpenAI launches products doesn’t guarantee they will succeed, however. Microsoft abandoned its phone and Zune media player last decade when faced with competition. And Google discontinued its first attempt at smart glasses, Google Glass, in 2023 after several redesigns.
OpenAI has only said that it is planning a “family of products,” and the team hopes to share more details next year.