What you need to know
- Bing is Microsoft’s scarcely-used search engine, built into Windows and available at Bing.com.
- Microsoft has integrated some of OpenAI’s generative AI systems into Bing in an attempt to court users away from Google.
- While it hasn’t helped Bing’s search volume, it has generated some additional interest in the platform.
- Recently, Bing’s AI image creator tool got a big upgrade, moving to the Dall-E 3 algorithm.
- The powerful tool can create realistic images from simple prompts, but after some controversies, Microsoft has layered in heaps of censorship. Even Microsoft’s own “random” image generator button censors itself.
Bing has gotten a lot more useful these days, and not necessarily for search.
Microsoft has made no secret of the fact that Bing trails Google in search volume, holding roughly 3% of the global market. Despite being baked into Windows, users seek the greener pastures of Google, which indisputably produces more accurate, up-to-date results in most scenarios. Bing is fine for basic search queries, however, and remains a solid option if only for its generous Microsoft Rewards program, which offers vouchers in exchange for using Bing. Generative AI has also given Bing a bit of a boost recently.
Microsoft signed a huge partnership with OpenAI to bake ChatGPT conversational language tools and Dall-E image creation systems right into the search engine. Dall-E is also coming to Microsoft Paint in the future, and ChatGPT-powered assistance has arrived directly in Windows 11 with Windows Copilot.
Read more: Why Microsoft won’t be the company that mainstreams AI
Bing’s Image Creator got a huge boost in power recently, thanks to the new Dall-E 3 algorithm. The quality of the generated pictures is dramatically better than in previous versions, although the upgrade has come with some controversies.
Disney was recently asked for comment after Yahoo! ran a story on how Bing was able to generate images of “Mickey Mouse causing 9/11.” Indeed, the first few days of Dall-E 3 on Bing produced exactly the kind of chaos that has become typical of this type of tech. Microsoft is no stranger to this sort of controversy, either: the firm previously landed in hot water when an earlier chatbot was manipulated by users into making racist remarks.
Guardrails are important for this type of tech, which has the potential to generate not just offensive images, but also defamatory, misleading, or even illegal material. However, some users think that Microsoft may have gone just a little bit too far.
Bing censors itself
While writing this article (wholly by myself and without ChatGPT, tyvm), I tried to generate a banner with the prompt “man breaks server rack with a sledgehammer,” but Bing decided that such an image was in violation of its policies. Last week, I was able to generate Halloween images of popular copyrighted characters in violent zombie apocalypse scenarios. You could argue both of these prompts have some violent context that Microsoft would prefer to do without, but users are finding that even innocuous prompts are being censored.
Bing Image Creator has a “surprise me” randomizer button, which generates an image from a prompt of Bing’s own choosing. However, Bing Image Creator is also censoring its own creations. I was able to reproduce the situation myself quite easily, roughly 30% of the time.
> “I clicked ‘Surprise me’ and this is what I gotten. Too bad.” — via r/OpenAI
Another user was locked out after requesting “a cat with a cowboy hat and boots,” which Bing now considers offensive, for some reason. Users have also reported being banned for requesting ridiculous, albeit safe-for-work image manipulations of celebrities, such as “Dolly Parton performing at a goth sewer rave.”
As of writing, Bing is giving me a “Thank you for your patience. The team is working hard to fix the problem. Please try again later” message, suggesting that the service is either overloaded or being tweaked further.
Balancing fun, function, and filters
One of the biggest challenges Microsoft will face with its AI technology tools is filtration. It’s something Microsoft will have to nail if it wants to be one of the companies that brings AI to the mainstream.
Right now, it’s arguable that Bing and OpenAI have gone too far with censorship when truly innocuous prompts return negative feedback. Last week, I was able to generate a range of cartoony zombie apocalypse fan art, but this week, that’s too “controversial” for Bing, resulting in blocked prompts. If you get too many warnings, you can even be banned from the service, which seems silly in and of itself when the guidelines are fairly opaque and vague.
If Bing, and Windows Copilot by extension, can only generate sanitized results, it defeats the point of the toolkit. Human society and life aren’t always “brand safe,” and Microsoft’s squeamish attitude toward even the vaguest hints of controversy will undermine its efforts to mainstream this sort of technology. You can’t revise history, sadly, if you want to maintain accuracy. It’ll be interesting to see how Microsoft and its competitors seek to balance fun and functionality with filtration — and how potential bad actors will see opportunities in jailbroken versions of this sort of tech.
You can try Bing Image Creator yourself, right here.