Saturday, April 27, 2024

How the Industry is Killing AI for Me


During the first three days of the Mobile World Congress, my wearable device clocked me at approximately 35 kilometers, an almost threefold increase from my average workday movement. In other words, I walked a lot, saw many things, and heard many people.

The first day of MWC for the media is Sunday, when journalists and members of the industry attend exclusive events, including the release of new devices. My first event was a down-to-earth one with OnePlus and its new Watch 2 (hands-on). And maybe that’s why my second launch event of the day, which felt far less tangible, left me a bit bitter.

Honor and its intent-based interface

Honor presented the Magic 6 Pro, a smartphone I had previously tested. Unfortunately, it didn’t arrive in Europe with its most intriguing feature: the Intent-based UI. At the keynote, Honor’s CEO George Zhao showcased all the features my Magic 6 Pro sample lacked, all of which are built on a language model developed by the company.

The name “Intent-based” hints at the technology’s nature: the system uses a model that learns from your actions, interpreting inputs from various phone sensors, such as eye tracking and display touch. I am not an expert in artificial intelligence, but I would consider it among the most advanced AI technologies currently available on a mobile device.

A person demonstrates the Eye Tracking feature on the Honor Magic 6 Pro by using their gaze to open a notification, during a demo at MWC 2024. / © nextpit

I didn’t have the opportunity to use these AI-powered features while testing the phone before its release, but I did see a demonstration of the technology at Honor’s booth. The concept is intriguing because it aims to save time by quickly fulfilling our needs with minimal interaction.

However, there’s an AI layer added to this equation, and that’s why I couldn’t test it on my Magic 6 Pro sample: the feature isn’t regulated in Europe yet, although it is already available in China, Honor’s home country.

But why is this problematic? Initially, I didn’t understand, or rather, I oversimplified it as privacy concerns. While not incorrect, this perspective doesn’t fully address the main issue. What Honor is presenting isn’t just machine programming; it’s a form of intelligence capable of learning on its own, a language model.

Of course, this isn’t a super-intelligence that will hijack your phone and impersonate you—at least, not yet. However, it is artificial intelligence built upon our behaviors, mimicking our thought processes and essentially emulating us.

So, when a company promotes an “Intent-based user interface” that anticipates our needs, consider this: you’re looking at an address in a chat message, and suddenly, the system suggests opening it in a map app. This convenience replicates human behavior, aiming to save us time and effort. 
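To make that example more concrete, here is a minimal, purely hypothetical sketch in Kotlin of how a hard-coded version of that suggestion could work. The Suggestion type, the regex, and the function names are all illustrative assumptions of mine; Honor’s actual Intent-based UI relies on an on-device model rather than a fixed pattern like this.

// Hypothetical sketch, not Honor's implementation: a hard-coded version of the
// "address in a chat, suggest opening it in Maps" behavior described above.
data class Suggestion(val label: String, val uri: String)

// Naive pattern for street addresses such as "221 Baker Street"; a real
// intent-based system would infer this with an on-device model, not a regex.
val addressPattern = Regex("""\d+\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\s+(Street|Avenue|Road)""")

fun suggestForMessage(message: String): Suggestion? {
    val address = addressPattern.find(message)?.value ?: return null
    // geo: URIs are the standard way to hand a place query to a maps app on Android.
    return Suggestion(
        label = "Open \"$address\" in Maps",
        uri = "geo:0,0?q=" + java.net.URLEncoder.encode(address, "UTF-8")
    )
}

fun main() {
    // Prints: Suggestion(label=Open "221 Baker Street" in Maps, uri=geo:0,0?q=221+Baker+Street)
    println(suggestForMessage("Let's meet at 221 Baker Street around noon."))
}

The point of the sketch is the contrast: a rule like this only ever reacts to a fixed pattern, while an intent-based model generalizes from your behavior, which is exactly where my questions begin.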

Yet, this leads me to ponder the importance of our unique choices. If a system can predict and act on my preferences, even introducing me to options I hadn’t considered, what does that say about my individuality? Are we losing a part of ourselves when technology anticipates our desires?

Furthermore, when companies are not threatening our individuality, they are attempting to sell us things.

Honor is not the only company using seemingly innocuous, time-saving examples to showcase its AI implementations. Take, for instance, the “Circle to Search” feature, developed by Google, which debuted with Samsung’s Galaxy AI on the Galaxy S24 series and on Google’s own Pixel phones.

At MWC 2024, Google had the largest presence I’ve ever seen from the company at the Barcelona fair, with the classic Android Island plus two big, closed booths for Android and Google Cloud. Crossing the Android Island, visitors were immediately greeted by a huge banner showcasing the “benefits” of Circle to Search: buying a green purse.

Several emerging trends and practices in the rapidly evolving AI industry warrant critical examination. Notably, the commodification of AI technologies, as illustrated by Circle to Search, points to a concerning shift towards consumerism.

At Android Island, where Google showcased some of Android’s new features, the Circle to Search wall was full of items available for purchase on social media pages. / © nextpit

In simpler terms, and beyond this one example, the industry struggles with ethical problems. Companies don’t always focus enough on protecting people’s privacy or on treating everyone fairly when building and deploying AI. Many also don’t explain how their systems reach decisions or what they do with the information they collect.

Looking at Google, for instance, its rules say users shouldn’t submit private or confidential information to its services. But this gets tricky now that AI lives on our devices, which we use for both private and work activities. It shouldn’t be an all-or-nothing situation.

Additionally, the overhype of AI capabilities often leads to unrealistic expectations and can overshadow the genuine benefits of these technologies.

From my own experience covering the AI field, I’ve witnessed firsthand the tension between innovation and ethics. The “Circle to Search” feature, for example, reflects a broader industry trend of leveraging AI to simplify tasks and predict user needs.

While these advancements can offer convenience, they also raise critical questions about privacy, autonomy, and the role of AI in our lives. My observations at MWC 2024, particularly the heavy use of AI to steer consumer purchases and the futuristic promises of Honor’s Intent-based UI, serve as a microcosm of the industry’s larger challenges.

It’s clear that AI can improve our lives, but it needs to be developed in a way that puts ethical considerations, transparency, and the well-being of society as a whole first.

Intelligent entities: a blessing or a curse?

At the Deutsche Telekom booth, I tried the AI Phone, a future concept that replaces apps with an AI digital assistant. Created with Qualcomm and Brain.ai, the assistant handles tasks like trip planning and shopping through voice or text. At the demo, once again, the emphasis was on getting me to purchase something.

Don’t get me wrong, I love saving time on overwhelming tasks and investing that freed-up time in myself, my friends, and my family. I hate booking flights, but not because travel itself is a chore; quite the opposite, I love to travel. My disdain comes from the process being chaotic: each company has its own system, its own ambiguous language, and far too many service offerings.

My dilemma is this: I would happily delegate the task of reserving a flight to, say, an AI-powered digital assistant, but I don’t want to give up the fun of arranging a special trip. These are also memories I create through the process.

I think AI should make our experiences better without taking away our chance to create those experiences ourselves. I envision an AI solution that can interact with me and help me buy my flight ticket, but not one that learns from my previous behavior and applies it to mimic my own choices. Can you see the difference between those two kinds of programs?

Companies have a way of introducing features and technologies that we might not initially desire, gradually embedding them into our lives until they become the new standard. It’s akin to the proverbial frog in gradually heated water, not realizing the change until it’s too late.

What now?

At the Mobile World Congress, my excitement for artificial intelligence was overshadowed by concerns about how it’s currently being developed. Walking miles through the event, I witnessed firsthand the industry’s focus on convenience and consumerism, neglecting the ethical and personal aspects that make AI truly inspiring.

The use of AI in products like Honor’s Intent-based UI and Google’s “Circle to Search” highlighted a trend towards making technology that predicts our needs but at the expense of our privacy and individuality. This approach risks turning AI into a tool for selling rather than a means to enhance our human experience.

In order to improve the direction of AI development, we need transparency, responsibility, and a focus on innovation that respects our autonomy. Companies should aim to create AI that complements human creativity rather than replacing it, right?

As we reflect on the future of AI, we’re faced with a choice: Do we let AI continue down a path focused on consumerism, or do we guide it towards enriching our lives and solving meaningful problems?



