From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft with Copilot, Google with Gemini, and OpenAI with GPT-4o, are making AI chatbot technology that was once restricted to test labs accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
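That "vast autocomplete" framing can be made concrete with a toy example. The sketch below is purely illustrative and bears no resemblance to how GPT-style models are actually built (they use neural networks over subword tokens, not word-pair counts), but it shows the core objective they share: predict the next word from the statistics of what came before.

```python
from collections import Counter, defaultdict

# Toy "autocomplete" language model: count which word follows which in a
# tiny corpus, then predict the statistically likeliest next word.
corpus = (
    "the cat sat on the mat . "
    "the cat saw the dog . "
    "the cat ran ."
).split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on"
```

Note that the model has no notion of truth, only frequency, which is exactly why plausible-sounding output is no guarantee of factuality.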

There are many more pieces of the AI landscape coming into play (and so many name changes; remember when we were talking about Bing and Bard before those tools were rebranded?), and you can be sure to see it all unfold here on The Verge.

  • When AI models are past their prime.

    A recent study found that when a coding problem posed to ChatGPT (running GPT-3.5) existed on the coding practice site LeetCode before the model’s 2021 training data cutoff, the chatbot did a very good job of generating functional solutions, IEEE Spectrum writes.

    But when the problem was added after 2021, the chatbot sometimes didn’t even understand the question, and its success rate fell off a cliff, underscoring how much these models depend on having seen similar problems in their training data.


  • Cloudflare is offering to block AI crawlers that scrape sites for training data.

    Tech giants are rewriting the rules on web scraping, blaming unnamed third parties for disregarding robots.txt, and seemingly claiming the right to reuse anything posted anywhere for AI.

    Now, Cloudflare is telling customers on its CDN that it can find and block AI bots that try to get around the rules.

    As Cloudflare puts it: “The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.”


    Chart: the most popular AI bots seen on Cloudflare’s network by request volume over the past year. (Image: Cloudflare)
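For sites that want to opt out the polite way, the advisory half of this picture is robots.txt. The snippet below lists a few documented AI crawler tokens (OpenAI’s GPTBot, Common Crawl’s CCBot, and Google’s Google-Extended). Note that robots.txt is purely an honor system, which is exactly why Cloudflare is pitching enforcement at the network level instead.

```
# Ask known AI training crawlers to stay out of the whole site.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```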
  • Perplexity’s ‘Pro Search’ AI upgrade makes it better at math and research


    Perplexity has launched a major upgrade to its Pro Search AI tool, which it says “understands when a question requires planning, works through goals step-by-step, and synthesizes in-depth answers with greater efficiency.”

    Examples on Perplexity’s website of what Pro Search can do include a query asking the best time to see the northern lights in Iceland or Finland. It breaks down its research process into three searches: the best times to see the northern lights in Iceland and Finland; the top viewing locations in Iceland; and the top viewing locations in Finland. It then provides a detailed answer addressing all aspects of the question, including when to view the northern lights in either country and where.
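The loop Perplexity describes can be sketched as plan, search, synthesize. Everything below is hypothetical, since Perplexity hasn’t published its implementation: the planner is a hardcoded stand-in for the model’s step-by-step planning, and the search function is a stub.

```python
# Illustrative "plan, search, synthesize" pattern. In a real system the
# planner and synthesizer would both be LLM calls and `search` would hit
# a live web index; here they are stand-ins.
def plan(question):
    # Stand-in for model-generated sub-queries.
    return [
        "best times to see the northern lights in Iceland and Finland",
        "top northern lights viewing locations in Iceland",
        "top northern lights viewing locations in Finland",
    ]

def search(query):
    return f"<results for: {query}>"  # stub for a web search

def answer(question):
    sub_queries = plan(question)
    findings = [search(q) for q in sub_queries]
    # Stand-in for LLM synthesis of the findings into one answer.
    return "\n".join(findings)

print(answer("When is the best time to see the northern lights in Iceland or Finland?"))
```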

  • Figma pulls AI tool after criticism that it ripped off Apple’s design


    Figma’s new tool Make Designs lets users quickly mock up apps using generative AI. Now, it’s been pulled after the tool drafted designs that looked strikingly similar to Apple’s iOS weather app. Figma CEO Dylan Field posted a thread on X early Tuesday morning detailing the removal, putting the blame on himself for pushing the team to meet a deadline, and defending the company’s approach to developing its AI tools.

    In posts on X, Andy Allen, CEO of Not Boring Software, showed just how closely Figma’s Make Designs tool made near-replicas of Apple’s weather app. “Just a heads up to any designers using the new Make Designs feature that you may want to thoroughly check existing apps or modify the results heavily so that you don’t unknowingly land yourself in legal trouble,” Allen wrote.

  • Google’s carbon footprint balloons in its Gemini AI era


    Google’s greenhouse gas emissions have ballooned, according to the company’s latest environmental report, showing how much harder it’ll be for the company to meet its climate goals as it prioritizes AI.

    Google has a goal of cutting its planet-heating pollution in half by 2030 compared to a 2019 baseline. But its total greenhouse gas emissions have actually grown by 48 percent since 2019. Last year alone, it produced 14.3 million metric tons of carbon dioxide pollution — a 13 percent year-over-year increase from the year before and roughly equivalent to the amount of CO2 that 38 gas-fired power plants might release annually.
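A quick back-of-the-envelope check, assuming both percentages refer to the same total-emissions measure, shows what those figures imply for the earlier years:

```python
# Arithmetic check on the reported figures.
total_2023 = 14.3  # million metric tons CO2e, per the report

baseline_2019 = total_2023 / 1.48  # implied by 48% growth since 2019
prior_year = total_2023 / 1.13     # implied by the 13% year-over-year rise

print(f"Implied 2019 baseline: {baseline_2019:.1f} Mt")
print(f"Implied prior-year total: {prior_year:.1f} Mt")
```

Halving the roughly 9.7 Mt baseline by 2030 would mean getting under 5 Mt, nearly a two-thirds cut from the current 14.3 Mt, which is why the AI buildout makes the target so much harder.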

  • Meta shows off ‘3D Gen’ AI tool that creates textured models faster than ever.

    Meta’s AI research team has a new system that creates or retextures 3D objects from a text prompt. It combines text-to-3D and text-to-texture generation models to go beyond AI-generated emoji or still images.

    The team’s paper (pdf) claims 3D Gen’s output is “3× to 60× faster” than alternatives and preferred by professional artists.


  • Instagram’s ‘Made with AI’ label swapped out for ‘AI info’ after photographers’ complaints


    On Monday, Meta announced that it is “updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information,” after people complained that their pictures had the tag applied incorrectly. Former White House photographer Pete Souza pointed out the tag popping up on an upload of a photo originally taken on film during a basketball game 40 years ago, speculating that using Adobe’s cropping tool and flattening images might have triggered it.

    “As we’ve said from the beginning, we’re consistently improving our AI products, and we are working closely with our industry partners on our approach to AI labeling,” said Meta spokesperson Kate McLaughlin. The new label is supposed to more accurately represent that the content may simply be modified rather than making it seem like it is entirely AI-generated.

  • The Center for Investigative Reporting is suing OpenAI and Microsoft


    The Center for Investigative Reporting (CIR), the nonprofit that produces Mother Jones and Reveal, announced on Thursday that it’s suing Microsoft and OpenAI over alleged copyright infringement, following similar actions by The New York Times and several other media outlets.

    “OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” Monika Bauerlein, CEO of the Center for Investigative Reporting, said in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” 

  • The RIAA versus AI, explained


    Udio and Suno are not, despite their names, the hottest new restaurants on the Lower East Side. They’re AI startups that let people generate impressively real-sounding songs — complete with instrumentation and vocal performances — from prompts. And on Monday, a group of major record labels sued them, alleging copyright infringement “on an almost unimaginable scale,” claiming that the companies can only do this because they illegally ingested huge amounts of copyrighted music to train their AI models. 

    These two lawsuits contribute to a mounting pile of legal headaches for the AI industry. Some of the most successful firms in the space have trained their models with data acquired via the unsanctioned scraping of massive amounts of information from the internet. ChatGPT, for example, was initially trained on millions of documents collected from links posted to Reddit.

  • ChatGPT’s Mac app is here, but its flirty advanced voice mode has been delayed


    The advanced voice mode for ChatGPT that sparked a tussle with Scarlett Johansson was an important element of OpenAI’s Spring Update event, where it also revealed a desktop app for ChatGPT.

    Now, OpenAI says it will “need one more month to reach our bar to launch” an alpha version of the new voice mode to a small group of ChatGPT Plus subscribers, with plans to allow access for all Plus customers in the fall. One specific area that OpenAI says it’s improving is the ability to “detect and refuse certain content.”

  • Apple has talked about AI partnerships with Meta and a few others.

    At WWDC, Apple announced a deal with OpenAI to make ChatGPT available for certain tasks on iPhones with iOS 18 and other devices (as long as you aren’t in the EU). Execs also mentioned Google Gemini, but the list doesn’t end there, according to the Wall Street Journal.

    In addition to Google and Meta, AI startups Anthropic and Perplexity have also been in discussions with Apple about bringing their generative AI to Apple Intelligence, according to people familiar with the talks.


  • OpenAI exec: “Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place.”

    During a recent talk at Dartmouth’s school of engineering, OpenAI CTO Mira Murati said the quiet part out loud. I’ll let you watch and be the judge:


  • OpenAI’s first acquisition is an enterprise data startup


    OpenAI has acquired Rockset, an enterprise analytics startup, to “power our retrieval infrastructure across products,” according to a Friday blog post.

    This is OpenAI’s first acquisition in which it will integrate both another company’s technology and its team, a spokesperson tells Bloomberg. The two companies didn’t share the terms of the deal. Rockset has raised $105 million in funding to date.

  • An AI video tool just launched, and it’s already copying Disney’s IP

    Last week, AI startup Luma posted a series of videos created using its new video-generating tool Dream Machine, which the company describes as a “highly scalable and efficient transformer model trained directly on videos.”

    The only problem? At about 57 seconds in, the Dream Machine-generated trailer for Monster Camp — an animated story about furry creatures journeying to a sleepaway camp — features a slightly AI-smudged but still recognizable Mike Wazowski from Pixar’s Monsters, Inc. Many people noticed that multiple characters and the overall aesthetic looked borrowed from the franchise, and the questions quickly started pouring in.

  • AIs are coming for social networks


    So far, generative AI has been mostly confined to chatbots like ChatGPT. Startups like Character.AI and Replika are seeing early traction by making chatbots more like companions. But what happens when you dump a bunch of AI characters into something that looks like Instagram and let them talk to each other?

    That’s the idea behind Butterflies, one of the most provocative — and, at times, unsettling — takes on social media that I’ve seen in quite a while. After a private beta period with tens of thousands of users, the app is now available for free in the Apple App Store and Google Play Store. There’s no short-term pressure on Butterflies to make money; the six-month-old startup just raised $4.8 million from tech investors Coatue, SV Angel, and others.

  • Google still recommends glue for your pizza


    You may remember we all had a fun little laugh at Google’s AI search results telling us to put glue in our pizza. Internet legend Katie Notopoulos made and ate a glue pizza. A good time was had by all! Except, whoopsie, Google’s AI is training on our good time.

    I will grant the query “how much glue to add to pizza” is an unusual one — but not that unusual given the recent uproar around glue pizza. As spotted by Colin McMillen on Bluesky, if you ask Google how much glue to add to your pizza, the right answer — none! — does not appear. Instead, it cites our girl Katie suggesting you add an eighth of a cup. Whoops!

  • Google’s June Pixel update brings Gemini AI to cheaper phones

    Google’s latest feature drop for Pixel devices is a big one for people who want to run its AI tech on cheaper phones, folks who constantly misplace their phones, and photographers who want a little more control.

    The latest update, which starts rolling out today, will make the mobile-ready Gemini Nano model that was already available to Pixel 8 Pro owners available as an option on the Pixel 8 and Pixel 8A phones, too. Apple just announced a slew of new AI features for its platforms, but similar to Google’s initial announcement that it eventually walked back, Apple has restricted Apple Intelligence to people with the latest iPhone 15 Pro.

  • Tim Cook is ‘not 100 percent’ sure Apple can stop AI hallucinations

    Emma Roth | Jun 11


    Even Apple CEO Tim Cook isn’t sure the company can fully stop AI hallucinations. In an interview with The Washington Post, Cook said he would “never claim” with 100 percent confidence that the company’s new Apple Intelligence system won’t generate false or misleading information.

    “I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in,” Cook says. “So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”

  • All of Apple’s big AI news from WWDC 2024.

    Siri meets generative AI, genmoji, image generation, and some big upgrades for Apple Photos. If you just want the CliffsNotes version of Apple’s artificial intelligence plans, we’ve got you covered.


  • Say hi, Gemini.

    Google’s new ad showcases the many things its Gemini large language model can do — from the Gemini chatbot to Circle to Search and AI Overviews.

    I can’t believe we’ve entered the LLM ad phase of this reality, but with ChatGPT becoming a household name and “Apple Intelligence” around the corner, the branding efforts will only increase.


  • ‘Apple Intelligence’ will automatically choose between on-device and cloud-powered AI


    Apple is gearing up to reveal a new AI system on the iPhone, iPad, and Mac next week at WWDC 2024 — and it will be called Apple Intelligence, according to a report from Bloomberg. In addition to providing new “beta” AI features across Apple’s platforms and apps, it will reportedly offer access to a new ChatGPT-like chatbot powered by OpenAI.

    Apple reportedly won’t focus on buzzy AI features like image or video generation and will instead focus on adding AI-powered summarizations, reply suggestions, and an AI overhaul for Siri that could give it more control over apps while chasing applications with “broad appeal.”

  • Where did the viral “All eyes on Rafah” image come from?

    Two people from Malaysia both say they used Microsoft Image Creator to produce the graphic in support of Palestinians.

    The image has been shared over 50 million times, and NPR has spoken to both of them: Zila Abka, who months ago posted the version found by 404 Media on Facebook, and Amirul Shah, who shared the now-viral Instagram template.


    Composite image of two AI-generated images with the words “All Eyes on Rafah” surrounded by tents.
    Image: Zila Abka (left), Amirul Shah (right)
  • How to make bad iPhone food pics with Midjourney.

    This Reddit user’s Midjourney images in the style of bad photos from Yelp reviews are surprisingly on point. The prompt they say they used:

    iPhone photo of (food name) with many raisins on top. At a (type of) restaurant (or other location). --ar 3:4 --style raw --s 75

    PLUS --sref of some bad food photos you find on Yelp! :)

    Others gave it a shot on X.


  • ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt


    ElevenLabs already offers AI-generated versions of human voices and music. Now, it will let people create sound effects for podcasts, movies, or games, too. The new Sound Effects tool can generate up to 22 seconds of sounds based on user prompts that can be combined with the company’s voice and music platform, and it gives users at least four downloadable audio clip options.

    The company says it worked with the stock media platform Shutterstock to build a library and train its model on its audio clips. Shutterstock has licensed its content libraries to many AI companies, including OpenAI, Meta, and Google.

  • OpenAI is making ChatGPT cheaper for schools and nonprofits

    Emma Roth | May 31


    OpenAI is making ChatGPT more accessible to schools and nonprofit organizations. In a pair of blog posts, the company shared that it’s launching a version of ChatGPT for universities, along with a program that lets nonprofits access ChatGPT at a discounted rate.

    OpenAI says ChatGPT Edu will allow universities to “responsibly deploy AI to students, faculty, researchers, and campus operations.” It’s built on its faster GPT-4o model, which offers improved multimodal capabilities across text, vision, and audio.
