Scientists work to build artificial intelligence into smart hardware gadgets

(TNS) — In a Fort Mason workshop with a sweeping view of the Golden Gate Bridge, the DIY enthusiasts rarely look up to take in the stunning scenery.

Instead, on the second floor of the Founders Inc. technology incubator, a dozen young inventors sit hunched over lines of code or plug wires into circuit boards, hoping to build the next great electronic gadget powered by artificial intelligence.

Scattered throughout a lab strewn with wires, industrial drills and battery components are devices in various stages of creation or repair, from smart goggles that could interpret the world for the blind to a handheld device that could help people navigate doomsday scenarios.

Going beyond software chatbots that talk like humans, answer questions or conjure images out of thin air, these experiments aim to cram artificial intelligence into hardware, creating smart gadgets that interact with the physical world in ways never before possible.

Founders Inc., which finds and finances promising innovations and start-ups, invited a dozen tinkerers from around the world to spend six weeks, starting in mid-April, in its spacious hardware laboratory at Fort Mason. The best ideas can count on six-figure seed investments.

No wonder entrepreneurs and financiers are chasing the opportunity. Market research firm Precedence Research estimates that the market for AI-enabled electronics has already topped $50 billion this year – including applications such as computer chips, storage and networking across all end markets – and could approach $500 billion within a decade.

Some early attempts at AI-powered consumer devices, such as the Ai Pin from San Francisco-based Humane Inc., failed – probably because they were rushed to market. Founders Inc. CEO Safwaan Khan looks instead for sustainable innovation.

“It’s not just about building the hardware,” he said. The residency encourages builders to learn from the smart people around them to speed up problem-solving.

One of the inventors still getting to grips with his machine was Shubh Mittal, easily recognizable by his thick black glasses and the built-in camera perched on the bridge of his nose above a wide smile.

When paired with artificial intelligence programs such as OpenAI’s ChatGPT and Anthropic’s Claude, Mittal’s camera can scan images and text and describe them in a computerized voice in about three seconds. “The idea is to help blind people become more independent” by using the glasses to read menus and describe their surroundings, Mittal said.
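
Mittal’s own code isn’t public, but the pipeline described here (camera frame in, model description out, answer spoken aloud) can be sketched in a few lines. What follows is a minimal illustration assuming OpenAI’s Python SDK; the model choices, prompt and file paths are placeholders, not details of his glasses.

```python
# A minimal sketch of the camera-to-voice pipeline described above: grab a
# frame, ask a vision-capable model what it shows, and speak the answer.
# Model names, prompt and file paths are illustrative assumptions only;
# this is not Mittal's actual software.
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def describe_frame(image_path: str) -> str:
    """Send one camera frame to a vision model and return a short description."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Briefly describe what is in front of me."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def speak(text: str, out_path: str = "answer.mp3") -> None:
    """Turn the description into audio with a text-to-speech model."""
    audio = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    audio.write_to_file(out_path)  # play out_path through the glasses' speaker


speak(describe_frame("frame.jpg"))
```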

However, ensuring that complex AI software works well with evolving hardware can be extremely difficult.

Case in point: When Mittal recently demonstrated his latest software update to a gathered group – as part of a Friday ritual designed to highlight the week’s progress – the results were mixed.

When Mittal asked the glasses to identify the pink stuffed elephant in front of him, he at first got no response. Then the program denied that the elephant existed, describing instead a forest scene entirely unrelated to the hardware lab.

A somewhat sheepish Mittal explained that moving his head had blurred the image and confused the program. Later, the glasses described the laboratory with impressive accuracy, identifying it as a research or industrial space.

The Founders Inc. lab and Mittal aren’t the only ones working on smart glasses. Engineers at SRI International’s research center in Menlo Park have poured millions of dollars in government grants into head-mounted artificial intelligence technology that could one day help untrained soldiers tend wounds on the battlefield or repair a damaged engine. The goal is a headset that uses artificial intelligence to walk users through a task step by step with a visual display and spoken instructions.

Those plans are still far off. For now, it can help you make lunch.

Wearing a Microsoft HoloLens headset connected to an array of computers, SRI researcher Bob Price recently demonstrated how the device could teach someone to make a tortilla.

The headset has the user press virtual buttons in the air, shown on its heads-up display. As a robotic voice guides the user through the steps, built-in cameras reliably detect when the tortilla has been spread with peanut butter and jelly, rolled and sliced.

Project manager Charles Ortiz said the hardest part is training the system to recognize errors and respond appropriately. Even the user’s hands getting in the way can confuse the machine.
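
The guidance loop Ortiz describes can be thought of as a simple state machine: announce a step, watch camera frames until a perception check confirms it is done, then advance. The sketch below illustrates only that control flow; the step list, done() checks and speak() stub are assumptions for illustration, not SRI’s actual system.

```python
# Illustrative control flow for step-by-step task guidance, in the spirit of
# SRI's demo: announce each step, then watch camera frames until a perception
# check says the step is complete. All names here are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable, Iterable


@dataclass
class Step:
    instruction: str              # what the voice prompt says
    done: Callable[[Any], bool]   # perception check on one camera frame


def speak(text: str) -> None:
    print("VOICE:", text)         # stand-in for real text-to-speech


def run_task(steps: list[Step], frames: Iterable[Any]) -> None:
    frame_stream = iter(frames)
    for step in steps:
        speak(step.instruction)           # announce the step once
        for frame in frame_stream:        # wait until the cameras confirm it
            if step.done(frame):
                break
    speak("Task complete.")


# Example: a two-step task with trivial stand-in perception checks.
run_task(
    [Step("Spread peanut butter on the tortilla.", lambda f: f == "spread"),
     Step("Roll and slice the tortilla.", lambda f: f == "sliced")],
    ["idle", "spread", "rolling", "sliced"],
)
```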

A few miles away, in a Stanford University laboratory, another project is underway. A team led by Gordon Wetzstein, an electrical engineering professor who runs the school’s computational imaging lab, is working to shrink glasses that display 3D images into a form lighter and sleeker than the bulky HoloLens.

Wetzstein used algorithms and laser etching to model how light cast from a sideways source would reflect off the lenses and enter the user’s eye, creating a 3D hologram effect.

Wetzstein hopes that instead of headsets that display flat images, the AI goggles of the future will look more like his lighter device, which uses lighting tricks to create augmented reality.

“The convergence of AI and hardware is really making new things possible,” he said.

While glasses and headsets are obvious early applications of AI hardware, tinkerers in the Founders’ lab are chasing even more esoteric possibilities.

Engineer Adam Cohen Hillel is perfecting a product he calls the Ark, a portable device designed to help users survive Armageddon.

Built for a scenario in which networks and communications have been destroyed, the Ark comes preloaded with maps and uses Meta’s Llama 3 artificial intelligence model to tell users where to find water or shelter, how to administer first aid or how to start a fire.
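
Running Llama 3 entirely offline is plausible with open-source tooling. The sketch below assumes the llama-cpp-python bindings and a quantized copy of the weights stored locally; the file name and prompts are illustrative guesses, not the Ark’s actual firmware.

```python
# A rough sketch of fully offline question answering with a local Llama 3
# model, using the open-source llama-cpp-python bindings. The weights file
# and prompts here are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # quantized weights on local storage
    n_ctx=2048,  # modest context window to fit constrained hardware
)

answer = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are an offline survival assistant. Be brief and practical."},
        {"role": "user", "content": "How do I purify water with no power?"},
    ],
    max_tokens=200,
)
print(answer["choices"][0]["message"]["content"])
```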

Currently, it is a screen on a printed circuit board equipped with a stylus, which – as Cohen Hillel jokes – looks a bit like a makeshift bomb.

During his demonstration to the group, the map scrolled smoothly, but the voice chat function stopped working. Cohen Hillel hopes to eventually lose the stylus so users can operate the device solely with voice commands, and to add a solar panel to power the Ark when the grid goes down. He also plans a special casing to withstand the electromagnetic pulse of a nuclear bomb.

“Putting that much data onto a small board is the hardest part,” he said. The challenge is making it respond quickly enough.

But when they do work, devices like Mittal’s and Cohen Hillel’s offer a glimpse into the future.

Getting to the “magic moment” where something works is the easy part, said Hubert Thieblot, general partner at Founders Inc. Getting something to work every time is much more difficult.

© 2024 San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.