GTA V: Intel uses AI and machine learning to make the game life-like

Intel has made improvements in the fields of Computer Vision and Pattern Recognition with this project. Let’s check all the details around the AI-powered Intel experiment that turned a GTA V clip into photo-real footage.

Semiconductor giant Intel has just posted information about new AI-powered research it has been conducting behind closed doors. The project advances the company’s work in Computer Vision and Pattern Recognition; these are not new fields for Intel, but applying AI and machine learning to GTA V is intriguing. Taking a closer look, the researchers used the technology at their disposal to convert the game’s rendition of Los Angeles streets into a photo-realistic version that is closer to reality, and they also made the in-game textures more realistic as part of the conversion.

Intel converts GTA V clip into a photo-real snippet; details

An Intel Labs research team shared an 8-minute, 33-second video to showcase its work on photorealism, presenting both the final product and the details of how it achieved the feat. The team of researchers, including Stephan R. Richter, Hassan Abu Alhaija, and Vladlen Koltun, posted the video under the title “Enhancing Photorealism Enhancement”. The most important aspect of the work is that the enhancement method can be “run at interactive rates”, which means it could be used in real-time games in the future. This is especially notable considering that video game graphics have yet to reach the level of realism Intel achieved here.

The team also shared information on the difficulties it faced while working on the project, along with details of how the technology works. The sample video outlined how the AI-powered system applies realistic textures, lighting behavior, reflections, and more. Intel first shared details of the research, together with a research paper, in a keynote address at Eurographics 2021. Following this, it published the code, white paper, and video comparisons through a dedicated website, GitHub, and arXiv, the preprint repository hosted by Cornell University.
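For readers curious what “applying realistic textures and lighting” can look like in code, below is a minimal, hypothetical sketch of an image-enhancement pass of this general kind, written in PyTorch. It is not Intel’s published implementation: the network name EnhancementNet, the layer sizes, and the choice of six auxiliary render-buffer channels are assumptions made purely for illustration; the actual system is described in the team’s paper and GitHub repository.

# Illustrative sketch only, NOT Intel's code. A small network takes a rendered
# frame plus auxiliary render buffers (G-buffers) and predicts a correction.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):  # hypothetical name for illustration
    def __init__(self, frame_channels=3, gbuffer_channels=6):
        super().__init__()
        in_ch = frame_channels + gbuffer_channels
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, frame, gbuffers):
        # Concatenate the rendered frame with auxiliary buffers (e.g. normals,
        # depth, material information) and predict a residual correction.
        x = torch.cat([frame, gbuffers], dim=1)
        return torch.clamp(frame + self.body(x), 0.0, 1.0)

# Toy usage: one 256x256 rendered frame plus six auxiliary channels.
net = EnhancementNet()
frame = torch.rand(1, 3, 256, 256)
gbuffers = torch.rand(1, 6, 256, 256)
enhanced = net(frame, gbuffers)
print(enhanced.shape)  # torch.Size([1, 3, 256, 256])

Predicting a residual on top of the original frame, rather than generating a whole new image, is a common way in such systems to keep the enhanced output faithful to the game’s content.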

Intel used “The Cityscapes Dataset” to train its neural network, which yields vibrant, high-resolution results. According to the comparison website, the project rebuilt the roads present in the scene while removing “distant haze”, and the grass in the scenes became “more voluminous”. Because of the source of the images used to train the neural network, the final product takes on similar color and white balance characteristics. Finally, the Intel research team noted that it will share more “soon” about the project.
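To see why training data leaves such a color fingerprint, here is a toy, self-contained sketch that simply matches a frame’s per-channel color statistics to a set of reference photos. This is not Intel’s method; the random arrays below merely stand in for real Cityscapes photographs and a rendered GTA V frame.

# Toy illustration: match a game frame's per-channel mean and standard
# deviation to a set of reference photos, nudging it toward their color
# and white balance. The arrays are random stand-ins, not real data.
import numpy as np

rng = np.random.default_rng(0)
reference_photos = rng.random((8, 256, 256, 3))   # stand-in for Cityscapes images
game_frame = rng.random((256, 256, 3))            # stand-in for a rendered frame

ref_mean = reference_photos.mean(axis=(0, 1, 2))  # per-channel statistics
ref_std = reference_photos.std(axis=(0, 1, 2))
frame_mean = game_frame.mean(axis=(0, 1))
frame_std = game_frame.std(axis=(0, 1))

# Shift and rescale each channel so its statistics match the reference set.
matched = (game_frame - frame_mean) / (frame_std + 1e-8) * ref_std + ref_mean
matched = np.clip(matched, 0.0, 1.0)
print(matched.mean(axis=(0, 1)), ref_mean)  # the channel means now roughly agree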