Google enables free sharing of custom Gemini AI bots and it’s a huge deal | Digital Trends

The big news from Google earlier today was the arrival of Gemini within the Chrome browser, with a promise of tab awareness and a sidebar assistant that can take steps on users’ behalf. But another important development that flew under the radar was the ability to share custom versions of the Gemini chatbot with other users.

What’s the big picture?

Imagine creating an AI agent that performs a specific task for you. I’ve created a custom version of Gemini called Email Assistant that helps me answer emails. All I have to do is paste in the contents of an email and type “Yes” or “No,” and it drafts a polite response accordingly.


How to turn off Google AI Overviews | Mashable

I’ll get straight to the point: there’s no “off” button for Google’s new AI Overviews feature. There is instead a “Web” button, buried in the “More” section of Google’s familiar row of buttons that look sort of like folder tabs, alongside things like “Images” and “News.”

Searching the new “Web” tab will get you the sort of results page you’re used to, with no AI-written summary of Google’s findings — just some links.

There’s a more elaborate but much more complete solution to your problem as well, which I’ll go into below. Using the Web option, however, is the simplest way out of the mess you’ve found yourself in.
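As an aside, the “Web” tab described above corresponds to a widely reported Google Search URL parameter, `udm=14`, which returns a links-only results page. A minimal sketch of building such a URL, assuming Google continues to honor the parameter:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google Search URL that opens the links-only 'Web' view,
    skipping AI Overviews, via the widely reported udm=14 parameter."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("turn off ai overviews"))
```

Some users set a browser’s default search engine to a URL template containing `udm=14` to get this behavior on every search.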


OpenAI and Google’s latest AI announcements make one thing clear: They’re officially rivals. | Mashable

At Google I/O earlier this week, generative AI was unsurprisingly a major focal point.

In fact, Google CEO Sundar Pichai pointed out that “AI” was said 122 times, plus two more times by Pichai as he closed out the event.

The tech giant has injected AI features into seemingly all of its products and services, including Search, Workspace, and creative tools for videos, photos, and music. But arguably the biggest news of the day was how Google’s announcements compared to those from OpenAI. Just a day before Google I/O, OpenAI unveiled GPT-4o, a “natively multimodal” model that can process visuals and audio in real-time, which ostensibly ramped up the burgeoning rivalry.


This Week in AI: Addressing racism in AI image generators | TechCrunch

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after a segment of users complained about historical inaccuracies. Told to depict “a Roman legion,” for instance, Gemini would show an anachronistic, cartoonish group of racially diverse foot soldiers while rendering “Zulu warriors” as Black.

It appears that Google — like some other AI vendors, including OpenAI — had implemented clumsy hardcoding under the hood to attempt to “correct” for biases in its model. In response to prompts like “show me images of only women” or “show me images of only men,” Gemini would refuse, asserting such images could “contribute to the exclusion and marginalization of other genders.” Gemini was also loath to generate images of people identified solely by their race — e.g. “white people” or “black people” — out of ostensible concern for “reducing individuals to their physical characteristics.”


Google’s best Gemini demo was faked | TechCrunch

Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.

A video called “Hands-on with Gemini: Interacting with multimodal AI” hit a million views over the last day, and it’s not hard to see why. The impressive demo “highlights some of our favorite interactions with Gemini,” showing how the multimodal model (i.e., one that combines language and visual understanding) can be flexible and responsive to a variety of inputs.

To begin with, it narrates an evolving sketch of a duck from a squiggle to a completed drawing, which it says is an unrealistic color, then evinces surprise (“What the quack!”) when seeing a toy blue duck. It then responds to various voice queries about that toy, then the demo moves on to other show-off moves, like tracking a ball in a cup-switching game, recognizing shadow puppet gestures, reordering sketches of planets, and so on.
