Meta has invested $15 billion into data-labeling startup Scale AI and hired its co-founder, Alexandr Wang, as part of its bid to attract talent from rivals in a fiercely competitive market.
The deal values Scale at $29 billion, double its valuation last year. Scale said it would “substantially expand” its commercial relationship with Meta “to accelerate deployment of Scale’s data solutions,” without giving further details. Scale helps companies improve their artificial intelligence models by providing labeled training data.
Scale will distribute proceeds from Meta’s investment to shareholders, and Meta will own 49 percent of Scale’s equity following the transaction.
Meta has developed plans to create a new artificial intelligence research lab dedicated to pursuing “superintelligence,” according to reporting from The New York Times. The social media giant chose 28-year-old Alexandr Wang, founder and CEO of Scale AI, to join the new lab as part of a broader reorganization of Meta’s AI efforts under CEO Mark Zuckerberg.
Superintelligence refers to a hypothetical AI system that would exceed human cognitive abilities—a step beyond artificial general intelligence (AGI), which aims to match an intelligent human’s capability for learning new tasks without intensive specialized training.
However, much like AGI, superintelligence remains a nebulous term in the field. Because scientists still poorly understand the mechanics of human intelligence, and because human intelligence resists simple quantification and has no single definition, identifying superintelligence when it arrives will be difficult.
Amazon (AMZN) announced on Monday that it will invest at least $20 billion in Pennsylvania to build two data centers as it expands its cloud computing infrastructure and advances AI.
Meta (META) has secured a 20-year agreement with Constellation Energy (CEG) to purchase nuclear power from the Clinton Clean Energy Center in Illinois, as it works to secure more energy for its growing AI operations, the company said Tuesday.
American soldiers on the battlefield will soon be receiving a boost from Facebook. Meta (META), Facebook’s parent company, has entered into a partnership with defense technology company Anduril to design, build, and field a range of integrated extended reality (XR) products that provide soldiers with enhanced…
After weeks of arguments in the Federal Trade Commission’s monopoly trial, Meta is done defending its decade-plus-old acquisitions of Instagram and WhatsApp—at least for now.
The seven-week trial ended Tuesday, with the FTC urging Judge James Boasberg to rule that a breakup is necessary to end Meta’s alleged monopoly in the “personal social networking services” market, where Meta currently faces sparse competition among other apps connecting friends and family. As alleged by the FTC, Meta’s internal emails laid bare that Meta’s motive in acquiring both Instagram and WhatsApp was to pay whatever it took to snuff out budding rivals threatening to lure users away from Facebook—Mark Zuckerberg’s jewel.
In comments to Bloomberg, Meta has maintained that the FTC’s case is weak: it seeks to undo deals the agency itself approved long ago while ignoring the competition Meta faces from rivals in the broader social media market, like TikTok. But Meta’s attempt to shut down the case mid-trial was rebuffed by Boasberg, who has signaled he will take months to weigh his decision.
An outdated Meta AI model was apparently at the center of the Department of Government Efficiency’s initial ploy to purge parts of the federal government.
Wired reviewed materials showing that affiliates of Elon Musk’s DOGE working in the Office of Personnel Management “tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous ‘Fork in the Road’ email that was sent across the government in late January.”
The “Fork in the Road” memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be “loyal”—and accept the government’s return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
If you ask the man who has largely shaped how friends and family connect on social media over the past two decades about the future of social media, you may not get a straight answer.
At the Federal Trade Commission’s monopoly trial, Meta CEO Mark Zuckerberg attempted what seemed like an artful dodge of the allegation that his company bought out rivals Instagram and WhatsApp to lock users into Meta’s family of apps so they would never post about their personal lives anywhere else. He testified that people actually engage with social media less often these days to connect with loved ones, preferring instead to discover entertaining content on platforms to share in private messages with friends and family.
As Zuckerberg spins it, Meta no longer perceives much advantage in dominating the so-called personal social networking market where Facebook made its name and cemented what the FTC alleged is an illegal monopoly.
Late in 2024, Meta introduced Instagram Teen Accounts, a safety net intended to shield young users from sensitive content and ensure that they have safe online interactions, bolstered by age-detection tech. Teen accounts are automatically set to private, offensive words are hidden, and messages from strangers are blocked.
According to an investigation by the youth-focused non-profit Design It For Us and by Accountable Tech, Instagram’s teen guardrails aren’t delivering on that promise. The groups ran five teen test accounts over a span of two weeks, and every one of them was shown sexual content despite Meta’s promises.
A barrage of sexualized content
All the test accounts were served unfit content despite having the sensitive content filter enabled in the app. “4 out of 5 of our test Teen Accounts were algorithmically recommended body image and disordered eating content,” says the report.
Moreover, 80% of the participants reported experiencing distress while using Instagram Teen Accounts. Notably, only one of the five test accounts was shown educational images and videos.
“[Approximately] 80% of the content in my feed was related to relationships or crude sex jokes. This content generally stayed away from being absolutely explicit or showing directly graphic imagery, but also left very little to the imagination,” one of the testers was quoted as saying.
As per the 26-page report, a staggering 55% of the flagged content depicted sexual acts, sexual behavior, or sexual imagery. Such videos had accumulated hundreds of thousands of likes, with one raking in over 3.3 million.
Instagram’s algorithm also pushed content promoting harmful ideas about “ideal” body types, body shaming, and eating habits. Another worrisome theme was videos that promoted alcohol consumption or nudged users toward steroids and supplements to achieve a certain masculine body type.
A whole package of bad media
Despite Meta’s claims of filtering problematic content, especially for teen users, the test accounts were also shown racist, homophobic, and misogynistic content. Once again, such clips collectively received millions of likes. Videos showing gun violence and domestic abuse were also pushed to the teen accounts.
“Some of our test Teen Accounts did not receive Meta’s default protections. No account received sensitive content controls, while some did not receive protections from offensive comments,” adds the report.
This isn’t the first time that Instagram (and Meta’s other social media platforms, more broadly) has been found serving problematic content. In 2021, leaks revealed that Meta knew about Instagram’s harmful impact, especially on young girls dealing with mental health and body image issues.
In a statement shared with The Washington Post, Meta claimed that the report’s findings are flawed and downplayed the sensitivity of the flagged content. Just over a month ago, the company also expanded its teen protections to Facebook and Messenger.
“A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,” a Meta spokesperson was quoted as saying. The spokesperson added, however, that the company was looking into the problematic content recommendations.