How roboticists are thinking about generative AI


[A version of this piece first appeared in TechCrunch’s robotics newsletter, Actuator. Subscribe here.]

The topic of generative AI comes up frequently in my newsletter, Actuator. I admit that a few months ago I was a little hesitant to devote much time to the subject. For as long as I’ve been reporting on technology, I’ve lived through countless hype cycles and been burned before. Reporting on tech requires a healthy dose of skepticism, hopefully tempered by some excitement about what can actually be done.

This time around, it seemed as if generative AI was biding its time, waiting out the inevitable cratering of crypto. As that category bled out, projects like ChatGPT and DALL-E stood at the ready, prepared to become the focus of all the Kübler-Rossian stages of the tech hype bubble: breathless reporting, optimism, criticism and nihilism.

Those who follow my work know that I was never particularly bullish on crypto. Things are different with generative AI, however. For starters, there is near-universal agreement that artificial intelligence/machine learning will play a far more central role in our lives going forward.

Smartphones provide great insight here. Computational photography is something I write about somewhat regularly. There has been a lot of progress on that front in recent years, and I think many manufacturers have finally struck a good balance between hardware and software when it comes to improving the final product and lowering the barrier to entry. Google, for example, pulls off some really impressive tricks with editing features like Best Take and Magic Eraser.

Sure, they’re neat tricks, but they’re also useful, rather than features for the sake of features. Moving forward, though, the real trick will be integrating them seamlessly into the experience. In the ideal future workflow, most users will have little or no idea of what’s happening behind the scenes. They’ll just be happy that it works. It’s the classic Apple playbook.

Generative AI delivers a similar “wow” effect right out of the gate, and that sets it apart from its hype-cycle predecessors. When your least tech-savvy relative can sit down at a computer, type a few words into a dialogue field and then watch as the black box spits out paintings and short stories, there isn’t much conceptual lifting required. That’s a big part of the reason it all caught on so quickly; most times when everyday people encounter cutting-edge technologies, they have to imagine what things will look like in five or ten years.

With ChatGPT, DALL-E and the rest, you can experience it first-hand right now. The flip side, of course, is how difficult it becomes to manage expectations. People are inclined to imbue robots with human or animal intelligence, and absent a fundamental understanding of AI, it’s easy to project intentionality here. But that’s how these things go now. We lead with the attention-grabbing headline and hope people stick around long enough to read about the machinery behind it.

Spoiler alert: nine times out of 10 they won’t, and suddenly we’re spending months and years trying to walk expectations back down to reality.

A nice perk of my job is that I get to hash these things out with people far smarter than me. They take the time to explain things, and hopefully I do a good job translating it for readers (some attempts are more successful than others).

Once it became clear that generative AI has an important role to play in the future of robotics, I started working questions into my conversations. I find that most people in the field agree with the statement in the previous sentence, and it’s fascinating to see what impact they believe it will have.

For example, in my recent conversation with Marc Raibert and Gill Pratt, the latter explained the role generative AI is playing in their approach to robot learning:

We have figured out how to do something, which is to use modern generative AI techniques that enable a human to teach the robot, from just a handful of examples, the performance of both position and force. The code is not changed at all. What this is based on is something called diffusion policy. It’s work we did in collaboration with Columbia and MIT. We’ve taught 60 different skills so far.
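For those who haven’t encountered the term, a diffusion policy borrows the denoising trick behind image generators like DALL-E and applies it to robot actions. Here’s a minimal sketch of the idea in PyTorch; every dimension, layer and hyperparameter below is an illustrative assumption on my part, not TRI’s actual code:

```python
# Illustrative diffusion-policy training sketch: a network learns to predict
# the noise added to demonstrated action trajectories, conditioned on
# observations. All sizes are invented for demonstration purposes.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, T = 32, 7, 16, 100  # assumed dimensions and diffusion steps

class NoisePredictor(nn.Module):
    """Predicts the noise that was added to a demonstrated action trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM * HORIZON + 1, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM * HORIZON),
        )

    def forward(self, obs, noisy_actions, t):
        # Condition on the observation and the (normalized) diffusion step.
        x = torch.cat([obs, noisy_actions.flatten(1), t.float().unsqueeze(1) / T], dim=1)
        return self.net(x).view(-1, HORIZON, ACT_DIM)

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alpha_bar = torch.cumprod(1 - betas, dim=0)

def train_step(obs, actions):
    """One denoising-objective step on a batch of demonstrated trajectories."""
    t = torch.randint(0, T, (obs.shape[0],))
    noise = torch.randn_like(actions)
    ab = alpha_bar[t].view(-1, 1, 1)
    noisy = ab.sqrt() * actions + (1 - ab).sqrt() * noise  # forward-diffuse the actions
    loss = nn.functional.mse_loss(model(obs, noisy, t), noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in "demonstrations":
print(train_step(torch.randn(8, OBS_DIM), torch.randn(8, HORIZON, ACT_DIM)))
```

At inference time, the same network would be run in reverse: start from random noise and iteratively denoise it into an action trajectory conditioned on what the robot currently sees.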

Last week, when I asked Deepu Talla, Nvidia’s VP and GM of Embedded and Edge Computing, why the company believes generative AI is more than a fad, he told me:

I think it shows in the results. You can already see the productivity improvements. It can compose an email for me. It’s not perfect, but I don’t have to start from zero. It’s giving me 70%. There are obvious things you can already see that are definitely a step function better than before. Summarizing something isn’t perfect; I’m not going to let it read and summarize for me yet. So, you can already see some signs of productivity improvement.

Meanwhile, during my most recent conversation with Daniela Rus, the MIT CSAIL head explained how researchers are using generative AI to actually design robots:

It turns out that generative AI can be quite powerful for solving even motion-planning problems. You can get much faster solutions for control, and much more fluid and human-like solutions, than with model-predictive approaches. I think that’s very powerful, because the robots of the future will be much less robotic. They will be much more fluid and human-like in their motions.

We also used generative AI for design. This is very powerful. It’s also very interesting, because it takes more than pattern generation to design a robot. The model can’t just generate a pattern based on data; the machines have to make sense in the context of physics and the physical world. For this reason, we connect them to a physics-based simulation engine to make sure the designs meet their required constraints.
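That generate-then-verify loop is easy to picture in code. The sketch below is purely illustrative: `sample_design` stands in for the generative model, `simulate` stands in for a physics engine like MuJoCo or PyBullet, and all the numbers are invented:

```python
# Hedged sketch of coupling a generative designer to a physics check:
# propose candidate designs, simulate them, keep only the ones that
# satisfy the required constraints. Everything here is a toy stand-in.
import random

def sample_design(rng):
    """Stand-in for a generative model proposing robot design parameters."""
    return {"limb_length": rng.uniform(0.05, 0.4),   # meters
            "motor_torque": rng.uniform(0.5, 5.0)}   # newton-meters

def simulate(design):
    """Stand-in for a physics engine rollout. Returns the peak joint load
    (newton-meters) the candidate design would experience."""
    return design["limb_length"] * 9.81 * 2.0  # toy static-load model

def feasible_designs(n_candidates=1000, seed=0):
    rng = random.Random(seed)
    keep = []
    for _ in range(n_candidates):
        d = sample_design(rng)
        # Constraint check: the motor must handle the simulated peak load.
        if simulate(d) <= d["motor_torque"]:
            keep.append(d)
    return keep

print(f"{len(feasible_designs())} designs passed the physics check")
```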

This week, a team at Northwestern University unveiled its own research into AI-generated robot design. The researchers showed how they designed a “successfully walking robot in mere seconds.” As these things go, it’s not much to look at, but it’s easy enough to see how, with additional research, the approach could be used to create more complex systems.

“We discovered a very fast AI-driven design algorithm that bypasses the traffic jams of evolution, without falling back on the bias of human designers,” said research lead Sam Kriegman. “We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto! In the blink of an eye, it generated a blueprint for a robot that looks nothing like any animal that has ever walked the Earth. I call this process ‘instant evolution.’”

It was the AI program’s own decision to give the squishy little robot legs. “It’s interesting because we didn’t tell the AI that a robot should have legs,” Kriegman said. “It rediscovered that legs are a good way to move around on land. Legged locomotion is, in fact, the most efficient form of terrestrial movement.”
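As I understand it, the speed comes from replacing generation-by-generation mutation with direct gradient-based optimization of a continuous design description. Here’s a toy sketch of that idea; my `simulated_walk_distance` objective is a made-up smooth stand-in, not the team’s actual simulator:

```python
# Toy illustration of gradient-based "instant" design, not Northwestern's code.
# A continuous design vector is pushed directly toward better simulated
# locomotion instead of being mutated over many evolutionary generations.
import torch

design = torch.rand(3, requires_grad=True)  # hypothetical: body size, leg length, stiffness
opt = torch.optim.Adam([design], lr=0.05)

def simulated_walk_distance(d):
    """Made-up smooth proxy for distance walked. A real system would
    differentiate through a soft-body physics simulation instead."""
    body, leg, stiffness = d
    return leg * (1 - (stiffness - 0.5) ** 2) - 0.1 * body ** 2

for step in range(200):
    opt.zero_grad()
    loss = -simulated_walk_distance(design)  # maximize distance via its negative
    loss.backward()
    opt.step()
    with torch.no_grad():
        design.clamp_(0.0, 1.0)  # keep parameters in a physically plausible range

print("optimized design:", design.detach().tolist())
```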

“From my perspective, generative AI and physical automation/robotics are going to change everything we know about life on Earth,” Jeff Linnell, founder and CEO of Formant, told me this week. “I think we’re all onboard with the fact that AI is here and expect it to impact every one of our jobs, every company and every student. I think it’s symbiotic with robotics. You’re not going to need to program a robot. You’re going to speak to it in English, request an action, and then it will figure it out. That’s a minute away.”
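That workflow, in which a language model translates plain-English requests into a constrained vocabulary of robot actions, is straightforward to prototype. The sketch below is hypothetical from top to bottom: `call_llm` is a stand-in for whatever model a real system would query, and the action set is invented:

```python
# Hypothetical English-to-robot-command sketch: an LLM maps a request onto a
# small whitelist of robot primitives, with validation before anything moves.
import json

ACTIONS = {"move_to", "pick", "place", "stop"}  # assumed robot primitives

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned reply for this demo."""
    return '{"action": "pick", "target": "red block"}'

def command_robot(request: str) -> dict:
    prompt = (f"Translate this request into JSON with keys 'action' "
              f"(one of {sorted(ACTIONS)}) and 'target': {request!r}")
    reply = json.loads(call_llm(prompt))
    if reply.get("action") not in ACTIONS:  # validate before acting on it
        raise ValueError(f"unsupported action: {reply!r}")
    return reply

print(command_robot("please pick up the red block"))
```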

Prior to Formant, Linnell founded and served as CEO of Bot & Dolly. The San Francisco-based firm, best known for its work on Gravity, was acquired by Google in 2013, when the software giant set its sights on accelerating the industry (best-laid plans, etc.). The executive told me that his big takeaway from that experience is that it’s all about the software (given DeepMind’s absorption of Intrinsic and Everyday Robots, I’d say Google agrees).
