Recently, while announcing its artificial intelligence-powered “universal speech translator”, Meta highlighted the role that artificial intelligence can play in the Metaverse. Mark Zuckerberg believes that artificial intelligence will play a critical role in the development of the metaverse.
Facebook AI leader Jérôme Pesenti and Facebook AI Research executive director Joelle Pineau shared how artificial intelligence will impact the Metaverse. Pesenti emphasized that artificial intelligence is one of the keys to the Metaverse, explaining that Meta AI’s mission is to advance artificial intelligence through research breakthroughs and to bring those advances into Meta’s products.
Meta AI’s work on language and imagery
According to Pesenti, Meta AI has made notable advances in embodiment and robotics, creativity, and self-supervised learning. Noting that machine learning has so far been carried out under human supervision and on a task-by-task basis, Pesenti argued that this approach restricts machine learning to narrow “tasks”. It also means that people’s biases find their way into the learning process.
Self-supervised learning, by contrast, enables an artificial intelligence system to learn from the data itself, without human-labeled supervision. In language research, for example, words are hidden from the input text and the system must predict them, extracting patterns from the words that surround the hidden ones.
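The masked-word objective described above can be illustrated with a deliberately tiny sketch. Real systems (BERT-style models, for instance) learn dense neural representations; the toy corpus, the neighbour-counting model, and all names here are illustrative assumptions, not Meta AI's method — the point is only that the "label" (the hidden word) comes from the data itself.

```python
from collections import Counter, defaultdict

# Toy illustration of the masked-word objective: hide a word, then
# predict it from the words around it, using only co-occurrence counts
# gathered from unlabeled text.

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "a cat slept on the mat",
]

# Map (left_word, right_word) -> counts of the word seen between them.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context_counts[(words[i - 1], words[i + 1])][words[i]] += 1

def predict_masked(left, right):
    """Predict the hidden word from its immediate neighbours."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

# "the cat [MASK] on the mat" -> "sat" is the most frequent fit.
print(predict_masked("cat", "on"))  # prints "sat"
```

No human ever labeled these sentences; the supervision signal is manufactured by masking, which is what makes the approach scale to raw internet-sized text.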
It should be noted that Meta AI applies this self-supervised technique to images as well as words. Researchers break an image into small patches, show the AI a random 80 percent of them, and ask it to reconstruct the full image.
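The patch-masking setup can be sketched as follows. The patch size, the toy "image", and the exact masking mechanics are assumptions for illustration, not Meta AI's actual pipeline; the sketch only prepares the masked input a reconstruction model would train on.

```python
import numpy as np

# Split a toy image into patches, keep a random 80% visible, and blank
# the rest. A reconstruction model would take `masked` as input and be
# trained to predict the original patches it cannot see.

rng = np.random.default_rng(0)
image = rng.random((32, 32))           # toy 32x32 grayscale image
patch = 8                              # 8x8 patches -> 4x4 = 16 patches

# Reshape the image into a flat list of 16 patches of shape (8, 8).
patches = (image.reshape(4, patch, 4, patch)
                .swapaxes(1, 2)
                .reshape(16, patch, patch))

keep_ratio = 0.8                       # show 80% of patches, hide 20%
num_keep = int(len(patches) * keep_ratio)
visible_idx = rng.choice(len(patches), size=num_keep, replace=False)

masked = np.zeros_like(patches)        # hidden patches are blanked out
masked[visible_idx] = patches[visible_idx]

hidden = sorted(set(range(len(patches))) - set(visible_idx.tolist()))
print(f"{num_keep} visible patches, {len(hidden)} hidden")
```

As with the masked words, the "label" (the hidden patches) is generated from the image itself, so no human annotation is needed.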
Meta AI researchers also want their models to perform multiple tasks at the same time. Under this vision, an artificial intelligence could listen to audio to improve its speech recognition while simultaneously lip-reading. On the social network, it could analyze all the components of a post while detecting whether that post violates the platform’s policies.
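A common way to realize this multi-task idea is a single shared encoder feeding several task-specific "heads". The sketch below is a minimal assumed architecture — the names, shapes, and random weights are illustrative, not Meta AI's design; in a real system the encoder and heads would be trained jointly so each task benefits from the shared representation.

```python
import numpy as np

# One shared encoder, two task heads: the shared representation is
# computed once and consumed by both tasks.

rng = np.random.default_rng(1)
W_shared = rng.standard_normal((16, 8))   # shared encoder weights
W_speech = rng.standard_normal((8, 4))    # head 1: e.g. speech recognition
W_policy = rng.standard_normal((8, 2))    # head 2: e.g. policy violation

def encode(x):
    """Shared representation used by every task."""
    return np.tanh(x @ W_shared)

def multitask_forward(x):
    h = encode(x)                          # computed once...
    return h @ W_speech, h @ W_policy      # ...used by both heads

x = rng.standard_normal(16)                # one input feature vector
speech_out, policy_out = multitask_forward(x)
print(speech_out.shape, policy_out.shape)  # (4,) (2,)
```

The design choice is that the expensive shared encoding is amortized across tasks, which is what allows one model to, say, lip-read and listen at the same time.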
Metaverse and artificial intelligence
According to Joelle Pineau, the progress made in artificial intelligence so far has been grounded in the internet environment. Pineau noted that the greatest progress has come in speech, language, and imaging, and declared that these are the internet’s native data modalities.
AI, AR and VR
Pineau stated that the experiences offered by AR and VR technologies are changing this situation. With hand movements and facial movements, she said, the whole body becomes a giant vector for receiving and giving information. This brings new opportunities and calls for new kinds of artificial intelligence models.
The “world model” in artificial intelligence
Sharing the company’s goal of developing unified models, Pineau explained that such models are not yet mature. She stated that it is important to develop functional unified models that work across the world, and added that artificial intelligence researchers have been discussing the idea of a “model of the world” for years.
The idea behind a world model, she said, goes beyond making predictions about the future: it is to create a rich representation that can be used to compare alternative options for action and intervention. Pineau summed up world models, self-supervised learning, and artificial intelligence in AR and VR in these sentences:
As we move toward building AI tools that can work fluidly across true reality, augmented reality, and virtual reality, our models of the world will need to be trained on a stream of interactive experiences, in addition to being driven by the mix of pre-recorded static data that powers supervised models.
Embodiment and robotics
Noting that the algorithms and methods we will encounter in the coming years remain unclear for now, Pineau described two lines of research that could shape the future.
The first of these is research on embodiment and robotics. According to Pineau, Meta AI is pursuing the concept of “limitless robotics”: taking robots out of the laboratory or factory so they can interact naturally with people and objects at home or in the office.
According to Pineau, building robots that learn from rich interaction will be one of the most important steps the company will take.
Communication with robots via avatars
In this context, Meta AI, in collaboration with Carnegie Mellon University and MIT, is developing new touch sensors. Using artificial intelligence techniques, these sensors can determine the location of contact. In addition, they can estimate contact force from images recorded by a camera inside the sensor.
Just as Boston Dynamics robots merge with the metaverse under the “metamobility” concept, Meta also aims to communicate with robots through avatars. The company believes this will bridge the gap between virtual reality and the real world. In short, Meta is working on artificial intelligence for an embodied, interactive metaverse.