David Reger, founder and CEO of Munich-based Neura Robotics, is building humanoid robots — and a reputation with a touch of déjà vu. In the German press, he’s sometimes dubbed the “Young Elon Musk.” It’s a nickname Reger embraces, despite all of the controversy surrounding the world’s richest man. “For me, it’s a positive, not a negative,” he told TNW in an interview. “I respect how Musk builds companies, how successful he is, how fearless he is to drive things further.” Musk’s politics, Reger continues, aren’t the focus of his admiration. “I’m just thinking about technological advancement and how to…
A week ago we saw Tesla’s Optimus robot showing off some nifty dance moves. This week, you can watch it performing a bunch of mundane tasks, though admittedly with a great deal of skill — for a humanoid robot.
Instructed via natural language prompts, the so-called “Tesla bot” is shown in a new video dumping trash in a bin, cleaning food off a table with a dustpan and brush, tearing off a sheet of paper towel, stirring a pot of food, and vacuuming the floor, among other tasks.
The performance may not shake the world of humanoid robotics to its core, but it nevertheless shows the kind of steady progress that Tesla engineers are making, with the bot’s actions and movements becoming ever more complex.
Commenting on the latest clip, Optimus team boss Milan Kovac said in a post on X: “One of our goals is to have Optimus learn straight from internet videos of humans doing tasks.” Just to be clear, that doesn’t mean the robot will literally watch videos like a human. Instead, it suggests that the robot will learn from the vast amount of data available in those videos, such as demonstrations of tasks, movements, or behaviors.
Kovac said that his team recently had a “significant breakthrough” that means it can now transfer “a big chunk of the learning directly from human videos to the bots (1st-person views for now),” explaining that this allows his team to bootstrap new tasks much more quickly than with teleoperated bot data alone.
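Tesla hasn’t published details of that video-to-robot transfer, but the general recipe Kovac is gesturing at (predicting actions from first-person video frames) resembles behavior cloning. Below is a minimal sketch of that idea; the tiny network, the 8-dimensional action space, and the random stand-in labels are illustrative assumptions, not Tesla’s pipeline.

```python
# Minimal behavior-cloning sketch: map first-person video frames to
# action targets. Everything here (network, shapes, labels) is an
# illustrative assumption; Tesla's actual training pipeline is unpublished.
import torch
import torch.nn as nn

class FrameToAction(nn.Module):
    """Tiny CNN mapping one RGB frame to a continuous action vector."""
    def __init__(self, n_actions: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(n_actions),  # infers the flattened size on first call
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

model = FrameToAction()
frames = torch.rand(4, 3, 128, 128)  # a batch of first-person video frames
actions = torch.rand(4, 8)           # action targets mined from the same videos
loss = nn.functional.mse_loss(model(frames), actions)
loss.backward()                      # one supervised "imitate the human" step
print(float(loss))
```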
Next, the plan is to make Optimus more reliable by getting it to practice tasks on its own — either in the real world or in simulations — using reinforcement learning, a method that improves actions through trial and error.
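For a concrete picture of what “trial and error” means here, the toy Q-learning loop below captures the skeleton: try actions, observe rewards, and gradually prefer what works. The three-state task and all constants are invented for illustration; real robot training uses far richer simulators and policies.

```python
# Toy Q-learning loop illustrating reinforcement learning's trial-and-error
# idea. The 3-state "task" and all constants are invented for illustration.
import random

N_STATES, N_ACTIONS = 3, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # Q-value table

def step(state, action):
    """Toy dynamics: action 1 advances toward the goal state; action 0 doesn't."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
for _ in range(500):
    state = 0
    for _ in range(10):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        next_state, reward = step(state, action)
        # Nudge the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # after training, action 1 dominates in the non-goal states
```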
Tesla boss Elon Musk, who has spoken enthusiastically of Optimus ever since the company first announced it in 2021, has claimed that “thousands” of the robots may one day be deployed alongside human staff at Tesla factories, taking care of “dangerous, repetitive, [and] boring tasks.”
The company, better known for making electric cars than humanoid robots, is racing against a growing number of tech firms globally that are intent on commercializing their humanoid robots, whether for the workplace, home, or perhaps some entirely new human-robot ecosystems yet to be imagined.
German defence tech startup ARX Robotics has secured €31mn to ramp up production of its autonomous battlefield robots, which look like mini tanks — minus the guns. ARX — backed by NATO’s Innovation Fund — will also use the fresh capital to advance its operating system, Mithras OS. The software is designed to modernise existing military vehicles through AI, sensor systems, and autonomous driving capabilities. The company estimates it can retrofit 50,000 NATO vehicles with the tech. The co-founder and CEO of ARX, Marc Wietfeld — who will speak at TNW Conference and the Assembly in June — wants his…
We’ve recently seen humanoid robots that can cartwheel, kung-fu kick, and front flip, but such attention-grabbing stunts aren’t the goal of California-based Figure AI.
Instead, its team of roboticists is focusing on designing an AI-powered bot that can move quickly and reliably and get things done.
In a video shared on X on Tuesday, Figure showed its humanoid 02 robot performing “learned, natural walking.”
Figure’s footage demonstrates the scale of the improvement in walking ability achieved by the team behind the robot. As you can see, the original 01 robot had more of a waddle about it, similar to how you might move if you were desperate for the bathroom. The latest 02 design, however, has a more relaxed walking style, with more realistic strides that help it to move more quickly — important for when the bipedal bot is deployed in the workplace or the home.
Indeed, Figure said in a post on X on Tuesday that this year is set to be “a big one” as it’s “launching into production manufacturing, scaling up robots at our commercial customers, and working on launching robots into the home.”
A few weeks ago, Figure CEO Brett Adcock revealed that Helix — the AI model that Figure uses to power its humanoid robot — was advancing more quickly than expected, enabling the team to accelerate its timeline for home deployment by two years, meaning that testing will begin sometime this year.
Figure’s impressive 02 robot stands 5 feet 6 inches (168 centimeters) tall, tips the scales at 154 pounds (70 kilograms), and can function for about five hours on a single charge.
The company has already completed a trial deployment of its humanoid robot at a BMW facility in South Carolina in which a number of its robots were used to place sheet metal parts into specific fixtures that were then assembled as part of a vehicle’s chassis.
Figure says its overall ambition is “to develop general purpose humanoids that make a positive impact on humanity and create a better life for future generations,” adding that its AI-powered designs “can eliminate the need for unsafe and undesirable jobs — ultimately allowing us to live happier, more purposeful lives.”
On her first visit to orbit, NASA astronaut Nichole Ayers has just introduced herself to three robots stationed aboard the International Space Station (ISS).
“We hit the ground running (or floating??) here on the space station,” Ayers, who arrived at the ISS just over a week ago, wrote in a post on X. “In addition to data collection for one of the studies, I got to help load some software onto the Astrobees. This is Bumble!”
As Ayers said in her post, the Astrobee robots are a technology demonstration and are designed to assist the ISS astronauts with routine duties that include taking inventory, documenting experiments, or moving cargo, freeing up the astronauts to take care of tasks “that require a human touch.”
The compact, cube-shaped flying robots were developed and built at NASA’s Ames Research Center in California’s Silicon Valley, and were sent to the orbital outpost in 2019.
NASA astronaut Anne McClain unpacked the first Astrobee robot — Bumble — on the ISS in 2019.
The floating bots include various cameras and sensors for navigation, and also come with a touchscreen, speaker, and microphone. They even have a mechanical arm to which various tools can be attached.
Rather than simply drifting, the Astrobee robots use a fan-based propulsion system to move in a specific direction, with power for the fans and the rest of the robot provided by an onboard battery. When power runs low, the robot automatically navigates to a nearby dock to recharge.
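That dock-when-low behavior boils down to a simple watchdog rule. Here’s a hypothetical sketch of it; the 20% threshold and every function below are invented for illustration, and NASA’s actual flight software is of course far more involved.

```python
# Hypothetical sketch of the auto-docking behavior described above:
# when battery drops below a threshold, head for the nearest dock.
# The cutoff and all functions are illustrative assumptions.
LOW_BATTERY = 0.20  # assumed 20% cutoff; the real threshold isn't public

def read_battery_level() -> float:
    """Stub: would query the robot's power subsystem (returns 0.0 to 1.0)."""
    return 0.15

def navigate_to(target: str) -> None:
    """Stub: would steer the fan-based propulsion toward a stored pose."""
    print(f"Navigating to {target}")

def watchdog_tick() -> None:
    # The core rule from the article: low battery triggers a return to dock.
    if read_battery_level() < LOW_BATTERY:
        navigate_to("nearest charging dock")

watchdog_tick()  # prints the navigation message, since 0.15 < 0.20
```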
“Robots will play a significant part in the agency’s mission to return to the moon as well as other deep space missions,” NASA says on its website. “Robots such as Astrobee have the capacity to become caretakers for future spacecraft, working to monitor and keep systems operating smoothly while crew are away.”
One of the space station’s best known robotic devices is the Canadarm2, a 17.6-meter-long robotic arm that’s been attached to the exterior of the ISS since 2001, performing tasks such as moving supplies and equipment, and assisting with spacewalks. Earlier this year, NASA shared video footage of astronaut Suni Williams taking a ride on the Canadarm2 during a spacewalk 250 miles over London, England.
Nvidia spoke about robotics quite a lot during GTC 2025, including its new Isaac GR00T N1 model, but this demonstration was all I needed to get on board. In collaboration with Disney and Google, Nvidia debuted the cutest little Star Wars droids, which run on real-time simulation and respond to commands. Move over, humanoid robots — give me more droids instead.
One of the droids, called Blue, joined Nvidia CEO Jensen Huang on stage during GTC. The robot was highly interactive and appeared to respond — albeit in its own language — when asked questions. It also responded to commands, such as being told to stand in a certain place.
Huang revealed that Blue was developed as a joint effort between Nvidia, Disney Research, and Google DeepMind, and the droid runs on two Nvidia computers. No word on the specs, though. (We need a DIY version to buy and own at home, please and thank you.)
Similar droids (which are BDX droids, as seen in The Mandalorian) also appeared on the GTC show floor, and it’s safe to say they were a huge hit. Reuters was able to speak to one of Disney’s researchers, Moritz Baecher, to find out more about the little droids.
“They [the robots] learn through reinforcement learning. So, we use reinforcement learning that is similar to how we, humans, learn to walk. We bring these droids into a simulation environment,” said Baecher. “At the beginning, they fall all the time if they try to take their first steps, but we ask them to imitate the artist-provided motion. So, over time, they learn how to walk in a stylized way that is authentic to the character.”
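Baecher’s description matches a standard recipe in robot learning: reward the simulated droid for staying upright, plus a bonus for tracking the artist’s keyframed motion. A rough sketch of such a reward follows; the weights, names, and distance metric are all invented for illustration.

```python
# Rough sketch of a motion-imitation reward of the kind Baecher describes:
# penalize falling, reward staying upright and tracking an artist-provided
# reference pose. All weights and formulas are illustrative assumptions.
import numpy as np

def imitation_reward(joint_angles, ref_angles, base_height, fell):
    if fell:
        return -1.0  # falling ends the attempt with a penalty
    # Closer tracking of the artist's keyframed pose yields higher reward.
    tracking = np.exp(-2.0 * np.mean((joint_angles - ref_angles) ** 2))
    upright = 1.0 if base_height > 0.25 else 0.0  # stay-standing bonus
    return 0.8 * tracking + 0.2 * upright

# Example: a pose close to the reference scores near the maximum of 1.0.
ref = np.zeros(12)  # 12 hypothetical joint targets from the animation
print(imitation_reward(ref + 0.05, ref, base_height=0.3, fell=False))
```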
When interacting with people, the robots appear shy or excited at times, and Baecher confirms that impression: “It [the robot] can express happiness, it dances, but it can also express shyness. If you come too close, it can express that it’s angry. It’s sort of a fully complete character that we were able to build.”
In a world where it’s all “AI this, AI that,” it’s easy to grow numb to all these new developments, but as a huge Star Wars fan, I wish I could’ve met these in person. Let’s get a real-life C-3PO next, shall we?
Nvidia’s GTC conference has become a central point in the calendar for the ever-expanding AI industry.
Nvidia’s GTC conference in San Jose, California, featured CEO Jensen Huang’s keynote on AI advances.
Huang’s speech highlighted Nvidia’s new AI partnerships, software tools, and chip architectures.
With crowded sessions and a bustling exhibition floor, Nvidia’s immense growth was on display.
The party started as so many do — with pancakes in a parking lot.
I attended Nvidia’s GTC conference, which has taken over downtown San Jose, California, this week. Tuesday was the biggest day for the AI juggernaut. At 10 a.m. Nvidia CEO Jensen Huang began his keynote address, which lasted more than two and a half hours.
But first, breakfast.
The legendary Denny’s breakfast
It was a chilly early morning in San Jose. The “pregame” started at 6:30 a.m. with breakfast from Denny’s, the restaurant where Huang came up with the idea for Nvidia. I needed to know who would show up more than three hours early for a speech about computer chips.
When I arrived just before 7 a.m., the line was already substantial. A massive red mobile Denny’s kitchen was cooking up “Nvidia bytes” — essentially sausages and pancakes. Diners were encouraged to wrap up their bytes like a taco and add syrup on top, like Huang does.
Conference-goers line up outside a Denny’s pop-up restaurant outside Nvidia’s GTC AI event.
I chatted with some of the early birds. Some were die-hard Nvidia fans. Some were jet-lagged, having flown in the day before from London or Toronto, so they were up anyway. Some wanted to get into the SAP Center as soon as the stadium doors opened to avoid the massive lines that would form the hour before the speech. Some heard a rumor that Huang himself might stop by the tailgate.
And sure enough, by 7:25 a.m., muscled men in suits with earpieces started multiplying. With no fanfare, Huang walked out from behind the registration tent wearing his signature uniform: all black with a leather moto jacket. The bleary-eyed crowd sprang into action — phones up for photos.
Nvidia CEO Jensen Huang made an appearance at the company’s GTC AI conference Denny’s breakfast pop-up.
Huang donned an apron and went inside the food truck to make some pancakes, as he had as a 15-year-old Denny’s employee.
“At this pace, I’d run the company out of business. I used to be a lot faster,” he said of his chef skills after emerging from the kitchen and immediately meeting CNBC reporter Kristina Partsinevelos and a camera crew.
Partsinevelos tried to ground the conversation, but Huang was all jokes.
“You’re talking about the stock? I’m talking about Denny’s!” he said.
By 8:15 a.m., Huang disappeared into the SAP Center, where he turned up on the pre-show panel airing live inside the stadium.
Nvidia CEO Jensen Huang served breakfast to the panel on Nvidia’s pregame show before his GTC keynote speech.
As I reached my floor seat, the panel was giving a reverent retrospective of the company — including its many brushes with failure before AI changed everything.
Huang ‘without a net’
Leading up to the speech, Nvidia’s partner companies were eager to find out if they would garner a mention on one of tech’s brightest stages. One Nvidia employee told me that up to the last minute, a local war room of Nvidia employees was tweaking the company’s dozens of announcements.
Once the speech started, it was all in Huang’s hands.
He kicked off by firing T-shirts into the crowd from an Nvidia-green T-shirt cannon.
“I just want you to know that I’m up here without a net. There are no scripts, there’s no teleprompter, and I’ve got a lot of things to cover. So let’s get started,” Huang said.
Nvidia CEO Jensen Huang started his 2025 GTC keynote address by firing T-shirts into the crowd.
The 62-year-old CEO proceeded to blow through his scheduled two hours.
He focused on Nvidia’s advancements: a flurry of new partnerships and software tools for AI developers, and coming chip architectures that could underpin the computation speed and efficiency needed to create new industries. These are already powering what Huang calls “AI Factories.”
In his keynote address, Nvidia CEO Jensen Huang took the audience on a virtual tour of Nvidia HQ as he moved from subject to subject.
The world of computing has reached a “tipping point,” and the “platform shift” to accelerated computing is well underway, he said.
The crowd stayed rapt, although a little antsy at the two-hour mark. But the final video clip reenergized the room. A Disney-designed robot named Blue, which looked like part of the Star Wars universe, toddled through a desert and then ascended — for real — from below the stage.
Then the crowd jumped to their feet and raised their phones.
“Have a great GTC! Thank you! Hey, Blue, let’s go home. Good job,” said Huang.
Nvidia CEO Jensen Huang talks to the Disney robot “Blue,” which was controlled by Disney Imagineers off-stage at his 2025 GTC keynote.
‘We’re going to have to grow San Jose’
After the speech, thousands of attendees streamed into the downtown San Jose streets. The SAP Center, which holds 17,500, had only a few empty seats; 25,000 people were expected at this year’s event overall.
The crowds made their way back to Plaza de Cesar Chavez, temporarily renamed GTC Park, to find lunch at the procession of food trucks on-site daily. Attendees again had to wait in long lines.
Nvidia’s GTC took over downtown San Jose this week.
The lunch lines were just one of many signs that GTC has outgrown its traditional home. Lines to get into the San Jose Convention Center’s conference sessions snaked through the hallways.
Nvidia still calls GTC a developer conference, though the evolution from technical developer confab to serious dealmaking destination was on display at a swanky building next to the convention center dedicated only to business meetings. The elevators couldn’t handle the volume of people constantly coming in and out.
Massive queues formed outside the building designated for business meetings at GTC as attendees waited for elevators.
Even Nvidia team members arriving just behind me balked at the lines and relocated. Getting from the sidewalk to a meeting room inside the building took 35 minutes.
“The only way to hold more people at GTC is we’re going to have to grow San Jose, and we’re working on it,” said Huang during the keynote.
Nvidia’s robotic future
Logistics aside, I soon met with Kimberly Powell, Nvidia’s vice president of healthcare, who detailed the many ways Nvidia’s accelerated computing is changing how doctors and hospitals work.
She said it could be decades before robots can actually perform surgeries without human assistance. But companies like Moon Surgical are already creating surgical assistance robots to hold cameras and tools with arms that never tire. Nvidia also works with da Vinci robots, which can suture wounds, among other tasks.
Robotics assistants for surgery are on the way, according to Nvidia.
I then headed back to the convention center to walk the exhibition floor during happy hour, where I saw some of the technology Powell championed on display. Because Nvidia’s impact spans many industries, the floor showcased cars, vacuuming robots, simulated human bodies ready for surgery, and all the biggest names in cloud computing.
Robots from JotBot, Agility Robotics, Unitree, and more were on display at Nvidia GTC.
I also passed the Nvidia gear store, which was booming. A worker there told me the 2025 GTC T-shirt and puffer vests were the biggest sellers.
Nvidia’s store was busy throughout the GTC conference.
My 12-hour Tuesday at the conference ended at the GTC Night Market back in the park. The setup was an homage to Huang’s love of Taiwan’s night markets, with live music, drinks, local food like bao buns and yakitori, cupcakes, and a punnily named “juice” bar sponsored by GPU cloud provider CoreWeave.
Nvidia’s Night Market is inspired by CEO Jensen Huang’s childhood in Taiwan.
If Nvidia has its way, AI is going to continue to do a lot of hard work for us going forward. But 12-hour days are here to stay, at least for a while. On my way back to my hotel — via San Jose bike share past a now-silent SAP Center — I thought of these two I had spotted inside the convention center:
Nvidia’s GTC conference is a marathon, not a sprint.
The tech wizards at Boston Dynamics have been hard at work, according to an astonishing new video released by the Massachusetts-based company on Wednesday.
“In this video, Atlas is demonstrating policies developed using reinforcement learning with references from human motion capture and animation,” reads the somewhat dry description accompanying the footage.
Instead of “demonstrating policies,” a more accurate account would surely be “busting a slew of jaw-dropping moves that’ll have you sitting straight up in your seat hollering, ‘Whoa, did that robot really just do that?!’”
Atlas has been impressing us for years. Who can forget the first time we saw it performing a backflip, or doing parkour — all very impressive stuff. But since last year, when Atlas was relaunched as a fully electric humanoid robot with AI and machine-learning tools, the team of engineers at Boston Dynamics has taken its capabilities up a notch, to the point where it’s moving just like a human, in a totally natural way.
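One concrete ingredient of the “references from human motion capture” approach is turning a recorded clip into per-timestep joint targets that the learned policy is rewarded for tracking. Here’s a minimal sketch under assumed conventions (a frames-by-joints clip array and linear interpolation); Boston Dynamics hasn’t published its pipeline.

```python
# Minimal sketch: sample a mocap/animation reference clip at an arbitrary
# time to get joint targets for an RL tracking reward. The clip format,
# joint count, and interpolation scheme are assumptions for illustration.
import numpy as np

def reference_pose(clip: np.ndarray, fps: float, t: float) -> np.ndarray:
    """Linearly interpolate a (frames x joints) mocap clip at time t seconds."""
    frame = t * fps
    i = int(np.clip(np.floor(frame), 0, len(clip) - 2))
    u = frame - i  # fractional position between frame i and i + 1
    return (1 - u) * clip[i] + u * clip[i + 1]

# Toy clip: 4 frames of 3 joint angles, sampled at 30 fps.
clip = np.array([[0.0, 0.0, 0.0],
                 [0.1, 0.2, 0.0],
                 [0.2, 0.4, 0.1],
                 [0.3, 0.6, 0.2]])
print(reference_pose(clip, fps=30.0, t=0.05))  # halfway between frames 1 and 2
```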
But that’s not all.
The footage also shows Atlas doing a spot of breakdancing, which, let’s be honest, if most of us tried would likely end with a slipped disc, a scream of pain, and a trip to the hospital. But Atlas performs like a pro.
But it’s the finale that really impresses. Not even bothering with a run-up, Atlas does a perfect cartwheel. It’s the kind of gleeful move you can imagine these robots making after subjugating the masses.
This latest work was done as part of a research partnership between Boston Dynamics and the Massachusetts-based Robotics and AI Institute (RAI Institute), the company said.
Boston Dynamics has described the latest iteration of Atlas as “one of the most advanced humanoid robots ever built,” adding that it’s now stronger, more dexterous, and more agile, and “able to move in ways that exceed human capabilities.”
The goal is to make Atlas fit for manufacturing scenarios, helping to perform mundane tasks in a more efficient manner, freeing up human workers for more meaningful jobs. And we’re kind of hoping that’s all it does.
On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants.
It’s worth noting that even though hardware for robot platforms appears to be advancing at a steady pace (well, maybe not always), creating a capable AI model that can pilot these robots autonomously through novel scenarios with safety and precision has proven elusive. What the industry calls “embodied AI” is a moonshot goal of Nvidia, for example, and it remains a holy grail that could potentially turn robotics into general-use laborers in the physical world.
Along those lines, Google’s new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls “vision-language-action” (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on “embodied reasoning” with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems.
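Google hasn’t released a public API for these models, but a vision-language-action interface of the kind described boils down to: camera image and language instruction in, motor commands out. The sketch below is hypothetical; every type and method is an assumption for illustration, not Google’s actual interface.

```python
# Hypothetical vision-language-action (VLA) interface: image + instruction
# in, motor commands out. Nothing here is Google's API; all names and
# shapes are assumptions for illustration.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ActionChunk:
    joint_velocities: np.ndarray  # one command per actuated joint
    gripper_open: bool

class ToyVLAPolicy:
    """Stand-in for a trained VLA model; returns a fixed placeholder action."""
    def act(self, rgb_image: np.ndarray, instruction: str) -> List[ActionChunk]:
        # A real model would fuse vision and language; we fake a single step.
        n_joints = 7
        return [ActionChunk(np.zeros(n_joints), gripper_open="pick" not in instruction)]

policy = ToyVLAPolicy()
frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder camera frame
chunk = policy.act(frame, "pick up the red block")[0]
print(chunk.gripper_open)  # False: the gripper closes for a "pick" instruction
```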
Putting artificial intelligence into robots was always the inevitable next step, and that future might be closer than you think. Google today announced Gemini Robotics, an initiative to bring the world closer than ever to “truly general purpose robots.”
Google says AI-powered robots have to meet three principal criteria. First, they should be able to adapt on the fly to different situations. Second, they must be able to not only understand but also respond to changing environments. Finally, the robots have to be dexterous enough to perform the same kinds of tasks that humans can with their hands and fingers.
Google writes, “While our previous work demonstrated progress in these areas, Gemini Robotics represents a substantial step in performance on all three axes, getting us closer to truly general purpose robots.”
Gemini Robotics: Dynamic interactions
Robotics research has advanced by leaps and bounds over the past few years. Boston Dynamics is particularly well known for its bots that can walk and navigate in public, and you’ve no doubt seen footage of its robot dogs on TikTok. If Gemini Robotics fulfills its mission statement, we could be on the cusp of true household assistants that can do everything from cleaning the house to packing your lunch.
The earliest tests of Gemini Robotics used mounted arms to perform tasks like playing tic-tac-toe, packing a lunchbox, and even playing cards, removing individual cards from a hand without bending them. It does this by incorporating an advanced vision-language model dubbed Gemini Robotics-ER (Embodied Reasoning).
Google is combining cutting-edge research from each of these areas of robotics into one system, governed by a set of safety guidelines that dictate robotic behavior. In January 2024, Google suggested a “Robot Constitution” to govern how these machines behave, based largely on Isaac Asimov’s Three Laws of Robotics. The team has since expanded that into the ASIMOV dataset, which will let researchers better measure and test robot behavior in the real world.
Gemini Robotics: Dexterous skills
Google published a paper that’s free to read, but fair warning: it’s highly technical and complex. However, if you’re interested in the world of robotics and what implications these projects hold for the future of the world, it’s worth a read.