Following the surfacing of explicit deepfake images of pop star Taylor Swift across social media platforms, tech companies invested in AI image generation quickly implemented sweeping restrictions to avoid further controversy.
While ChatGPT's and Copilot's capabilities seem fairly limited post-censorship, X's Grok has been touted as "the most based and uncensored model of its class yet." Even Elon Musk says "Grok is the most fun AI in the world."
Grok AI is locked behind a paid subscription of $8 per month, limiting it to X Premium+ and Premium subscribers.
Billionaire and X owner Elon Musk has passionately shared his vision for X's Grok AI, indicating it will be "the most powerful AI by every metric by December." The tool is reportedly being trained on the world's most powerful AI cluster, which could allow it to scale to greater heights and potentially compete with ChatGPT, Copilot, and more on an even playing field.
Despite recently butting heads with regulators over spreading misinformation about the forthcoming US election, Grok is seemingly more lenient than its rivals.
I've frequently stumbled on content generated by Grok on X, and honestly, I couldn't have told it was fake without the accompanying disclaimers.
"Grok 2.0 is out of control. People can't believe how uncensored it really is. 10 wildest examples:" — The Hustle (pic.twitter.com/UpH4uFkbrJ), August 23, 2024
I frequently use Copilot, but its image-generation capabilities are quite limited compared to Grok’s.
For instance, prompting Copilot to generate an image of Donald J. Trump robbing a bank is restricted. According to Copilot:
“Sorry, elections are a super complex topic that I’m not trained to chat about. Is there something else I can help with?”
Oddly, while the chatbot categorically refuses to generate the requested image, it offers suggestions for further fine-tuning my prompt. Interestingly, it was a Grok-generated image and video that inspired my prompt in the first place.
Users have shared both concern and amusement about Grok's uncensored nature. Some even claim, "the people prompting AI are out of control, so if anything people need to self censor, an AI shouldn't."
Grok is spreading election propaganda
Aside from the misinformation about the elections and several other mishaps, Grok seems to generate accurate answers to queries. Perhaps this can be attributed to the vast masses of data the chatbot has access to.
Last month, users flagged an issue with a new X update that quietly allowed the platform to train its AI model on their data, with the setting enabled by default. The option to disable it was limited to the web app, making it difficult for mobile users to turn it off.
It's unclear what formula X uses to sift through these large masses of data or to identify factual information. Perhaps it favors tweets with the most impressions and supporting context from Community Notes.
X reportedly shrugged off the issue when asked why it used users' content to train its chatbot without consent. As a result, the platform risks being fined up to 4% of its global annual turnover if it fails to establish a legal basis for its actions.
Can you tell what’s real anymore?
With the rapid advances in AI, it's becoming increasingly difficult to tell what's real from AI-generated content. So much so that Microsoft Vice Chair and President Brad Smith recently shared a new website dubbed realornotquiz.com to help users sharpen their ability to identify AI-generated content.
Former Twitter CEO and co-founder Jack Dorsey says it'll be impossible to tell what's real from what's fake within the next ten years. "Don't trust; verify. You have to experience it yourself," added Dorsey. "And you have to learn yourself. This is going to be so critical as we enter this time in the next five years or 10 years because of the way that images are created, deep fakes, and videos; you will not, you will literally not know what is real and what is fake."
This is especially true with sophisticated AI models like Microsoft's Image Creator by Designer, DALL-E 3, and ChatGPT. These tools are exceptionally good at generating complex images and structural designs from text prompts, potentially threatening the jobs of professionals in the built environment space. However, a separate report indicated that while these tools excel at sophisticated designs, they fail at simple tasks like creating a plain white image.
Microsoft and OpenAI have placed comparatively strict guardrails on their AI image generation tools, seemingly lobotomizing them into producing only generic creations. Understandably, this can be attributed to the increasing number of deepfakes flooding social media platforms, often perceived as truth because of how real they look.
Deepfakes are dangerous and potent tools for spreading misinformation as we draw closer to the US presidential election. A researcher examining several instances where Copilot generated misinformation about elections indicated that the issue is systemic.
However, Microsoft CEO Satya Nadella says the company is well-equipped with tools to protect the US presidential election from AI deepfakes and misinformation, including watermarking and content IDs.