
AMD Advancing AI 2024 Event Shows PC AI Making Progress, But Still Needs Time to Cook

When you purchase through links on our articles, Future and its syndication partners may earn a commission.

    The AMD Ryzen AI logo on a screen at the AMD Advancing AI 2024 event.

Credit: Future / John Loeffler

This week in San Francisco, AMD hosted its Advancing AI 2024 event for press, vendors, and industry analysts. AMD made a series of announcements, including data center products such as the new AMD EPYC and AMD Instinct lineups, as well as the new AMD Ryzen AI Pro 300 series processors aimed at professional users.

But I was particularly interested in the software on display. AI has been full of promise, but so far it has failed to give users software that would justify buying a new laptop with a dedicated NPU (neural processing unit), the so-called AI PCs that so many people have been talking about in 2024.

Much of what I saw is largely what we’ve all seen over the past year: lots of image generation, chatbots, and video conferencing tools for background replacement or blur. These are not exactly the killer applications that could make the AI PC a game-changer.

But I also saw some interesting AI tools and demos that show we’re starting to move beyond these basic use cases and into more creative AI territory, and that makes me think there might be something to all this AI talk after all, though it will still take some time for the technology to really bear fruit.

Something different for a change

An AI demo at the AMD Advancing AI 2024 event

At the AMD Advancing AI 2024 event, a few notable AI tools caught my eye as I walked around the demo room after Dr. Lisa Su’s morning keynote.

The first was a series of demonstrations of intelligent agents in a 3D gaming environment, the kind of simple 3D space that will be familiar to anyone who has ever used Unity, Unreal Engine, or similar game development platforms.

In one case, the demo video showed an AI playing multiple instances of 3D Pong, which wouldn’t be that impressive on its own, except that the simple paddles controlled by the AI were given only a single rule for the game: the ball could not be allowed to hit the wall behind them.

This may not seem like a big deal, but remember that traditional computer programs have to define all kinds of rules you might take for granted in a case like this. For example, does an AI agent know it is supposed to move the paddle to block the ball? Does it know it’s supposed to hit the ball past the opposing paddle to score points? Does it know how the ball’s physics should work? These are all things that normally have to be coded into a game for its AI to play by a set of rules; the AI agents controlling the paddles in the demo knew none of this, yet they still learned the rules and played the game the way it was meant to be played.
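As a rough illustration of the idea, here is a toy tabular Q-learning sketch of my own, not AMD’s demo code (AMD didn’t say what training method the demo used, so Q-learning is an assumption): a paddle agent is told nothing about the game except that letting the ball reach the back wall is penalized, and from that single signal it learns to intercept.

```python
import random

random.seed(0)

WIDTH, DROP_STEPS = 5, 4          # 5 columns; the ball reaches the paddle row in 4 steps
ACTIONS = (-1, 0, 1)              # move paddle left, stay, move right

def run_episode(q, epsilon, alpha=0.5, gamma=0.9):
    """One ball drop. The only feedback is a -1 penalty if the ball gets past."""
    ball = random.randrange(WIDTH)
    paddle = WIDTH // 2
    for steps_left in range(DROP_STEPS, 0, -1):
        state = (ball - paddle, steps_left)
        if random.random() < epsilon:                        # explore
            action = random.choice(ACTIONS)
        else:                                                # exploit learned values
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        paddle = min(WIDTH - 1, max(0, paddle + action))
        if steps_left == 1:
            target = -1.0 if paddle != ball else 0.0         # the single "rule"
        else:
            nxt = (ball - paddle, steps_left - 1)
            target = gamma * max(q.get((nxt, a), 0.0) for a in ACTIONS)
        key = (state, action)
        q[key] = q.get(key, 0.0) + alpha * (target - q.get(key, 0.0))
    return paddle == ball

q = {}
for _ in range(5000):                                        # training with exploration
    run_episode(q, epsilon=0.3)

catches = sum(run_episode(q, epsilon=0.0) for _ in range(100))
print(f"caught {catches}/100 balls after training")
```

The agent never sees an explicit “move toward the ball” instruction; avoiding the penalty is enough for that behavior to emerge, which is the point the demo was making at much larger scale.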

Another demo in the same environment involved a game of tag between four humanoid agents in a room full of obstacles, with three agents having to track down and “tag” the fourth. None of the agents were given rules at the start, nor were any waypoints programmed to route them around the obstacles. Released into the environment, the three red agents quickly identified the blue agent and set off in pursuit, while the blue agent did its best to evade its pursuers.

There were similar demonstrations showing agents dropped into an environment and learning the rules of the space as they went. Of course, these demos only showed the agents ultimately succeeding at what they were supposed to do, so I didn’t get to see how the agents were trained, what their training data was, or how long it took them to get good at their tasks. Still, when it comes to AI use cases, smarter AI agents in games would be quite a development.

Gallery: AI demos at the AMD Advancing AI 2024 event

Other use cases I saw included Autodesk 3DS Max 2025 with a tyDiffusion plugin capable of generating a rudimentary 3D scene from a text prompt. The rendered result wasn’t going to win any design awards, but for engineers and designers I can absolutely see something like this serving as a rough sketch they could develop further into something detailed and professional.

However, the demo that interested me the most featured local video playback that could be controlled (i.e. paused) using only facial movements. For people with disabilities, this type of AI application could be revolutionary. Again, this was a simple proof of concept, but it’s much more compelling than using yet another image generator to create weird memes or videos for social media.
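Stripped of the face-tracking model itself, a control like that reduces to a small debounced state machine. The sketch below is purely my own illustration: the `GestureToggle` class, its threshold, and the per-frame `score` are all hypothetical stand-ins for the confidence value a real (presumably NPU-accelerated) facial-landmark model would emit each frame.

```python
class GestureToggle:
    """Debounced toggle: fires once when the gesture score crosses a threshold,
    then ignores further frames for a cooldown period so one sustained gesture
    doesn't pause and un-pause the video repeatedly."""

    def __init__(self, threshold=0.6, cooldown=10):
        self.threshold = threshold
        self.cooldown = cooldown   # frames to ignore after a trigger
        self.wait = 0
        self.playing = True

    def update(self, score):
        if self.wait > 0:
            self.wait -= 1
        elif score >= self.threshold:
            self.playing = not self.playing   # pause or resume playback
            self.wait = self.cooldown
        return self.playing

toggle = GestureToggle()
# Synthetic per-frame gesture scores: neutral face, one brief gesture, neutral.
scores = [0.1] * 5 + [0.9] * 3 + [0.1] * 5
states = [toggle.update(s) for s in scores]
print(states)   # playback runs for 5 frames, then pauses once and stays paused
```

The cooldown is the design choice that matters for accessibility: a gesture held across several frames should register as one command, not three.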

Media Generation Remains the Fallback for Consumer AI

Gallery: AI demos at the AMD Advancing AI 2024 event

I saw a few other interesting demos showcasing AI-based tools, but the most developed tools on display were image generators, data aggregators, and the like. One mature app I saw did a better job of presenting a summary of sports news than Google Gemini, but building a multimedia digest of content produced elsewhere is only useful as long as there is original content to aggregate, and these kinds of AI search summaries are not a viable long-term application.

Summaries of the work of writers, journalists, photographers, and other creators cut off essential revenue from the very people who produce the content that AI tools like Gemini need in order to work. The fear with this type of AI application is that it will put many creators out of work, reducing the quantity, and quality, of the source material these apps draw on to answer queries, ultimately degrading their results to the point that people will no longer trust or use them.

This type of AI product is ultimately a dead end, but it currently offers the most visually compelling example of how AI can and does work, so companies will keep rushing to build these products to ride the AI wave. Fortunately, I also saw enough examples of how AI can do something different, something better, that over time could make AI, and the AI PCs that power it, something worth investing in.

Soon, but not yet.
