By Charly SHELTON
While consumer-facing AI tools like ChatGPT and Claude have taken center stage in popular culture, scientists at NASA’s Jet Propulsion Laboratory (JPL) have been quietly and effectively leveraging artificial intelligence for years to advance space exploration and Earth science.
Mars exploration is one of many efforts using advanced AI, demonstrating how smart systems can help scientists do more in space. Raymond Francis, a science operations engineer at JPL, explained how autonomous systems have drastically increased the scientific return from the Mars rovers.
“We have software that autonomously does target selection for the instruments on the rovers,” Francis said. “It takes photographs with the rover’s cameras and then on board the rover, with the rover’s computer, analyzes those photographs, identifies where in the image there are rocks and not just regolith and sand and things … then it picks the rock that is most like what the science team asked it to find.”
This system, known as AEGIS (Autonomous Exploration for Gathering Increased Science), allows the rovers to independently analyze images, identify promising rock targets and direct their instruments to make measurements – all without waiting for instructions from Earth, maximizing the rover's working time.
This system fills a critical time gap: when a rover arrives at a new location on Mars, it typically must wait for images to be sent back to Earth and reviewed by scientists, then for new instructions to be crafted and transmitted back. That process can take hours or even days, during which the rover would otherwise sit idle. With AEGIS, however, the rover can begin analyzing its surroundings immediately.
“With AEGIS, we’ve now measured almost 800 rocks that we never could have measured before using time that would not have been available because the rover was waiting around for humans to get in the loop. So that’s an increase in the amount of science we could do from the autonomous targeting,” Francis noted.
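To make the idea concrete, here is a minimal sketch of the kind of target-ranking logic Francis describes: detect candidate rocks in an image, score each one against criteria supplied by the science team, and point the instrument at the best match. The candidate structure, scoring weights and criteria below are hypothetical illustrations, not JPL's actual AEGIS software.

```python
# Simplified illustration of autonomous target selection in the spirit of AEGIS
# (hypothetical code, not flight software). The rover scores each detected rock
# against science-team criteria and aims at the best match, with no round trip
# to Earth required.

from dataclasses import dataclass

@dataclass
class RockCandidate:
    x: int          # image column of the candidate's centroid
    y: int          # image row of the candidate's centroid
    size_px: float  # apparent size in pixels
    albedo: float   # mean brightness, 0.0 (dark) to 1.0 (bright)

def score(rock: RockCandidate, criteria: dict) -> float:
    """Rank a candidate by how closely it matches the requested profile."""
    size_error = abs(rock.size_px - criteria["target_size_px"]) / criteria["target_size_px"]
    albedo_error = abs(rock.albedo - criteria["target_albedo"])
    return 1.0 / (1.0 + size_error + albedo_error)  # higher is better

def pick_target(candidates: list[RockCandidate], criteria: dict) -> RockCandidate | None:
    """Select the rock 'most like what the science team asked it to find'."""
    return max(candidates, key=lambda r: score(r, criteria), default=None)

# Example: prefer medium-sized, relatively bright rocks
criteria = {"target_size_px": 40, "target_albedo": 0.6}
candidates = [
    RockCandidate(x=120, y=340, size_px=35, albedo=0.55),
    RockCandidate(x=410, y=280, size_px=90, albedo=0.30),
]
best = pick_target(candidates, criteria)
if best is not None:
    print(f"Pointing instrument at ({best.x}, {best.y})")
```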
Beyond target selection, AI contributes to several other mission-critical functions. Rovers have long included a degree of autonomous navigation, but the newer AutoNav system on the Perseverance rover significantly expands the capability, allowing the vehicle to traverse complex terrain without direct human input.
Instruments like the PIXL X-ray spectrometer also utilize onboard AI to decide, in real time, which parts of a rock to analyze more closely based on predefined scientific criteria.
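The adaptive-sampling idea can be sketched in a few lines: take a quick reading at each point in a scan and spend extra integration time only where the signal meets predefined criteria. The sensor interface, threshold and dwell times below are assumptions for illustration, not the PIXL flight code.

```python
# Hypothetical sketch of adaptive sampling: dwell longer only where a quick
# measurement meets predefined scientific criteria (e.g., a strong signal for
# an element of interest).

def quick_reading(point):
    """Stand-in for a short measurement at one scan point (assumed sensor API)."""
    return point["signal"]

def adaptive_scan(points, threshold=0.8, long_dwell_s=30, short_dwell_s=2):
    plan = []
    for p in points:
        dwell = long_dwell_s if quick_reading(p) >= threshold else short_dwell_s
        plan.append((p["id"], dwell))
    return plan

scan_points = [{"id": "A1", "signal": 0.91}, {"id": "A2", "signal": 0.35}]
print(adaptive_scan(scan_points))  # [('A1', 30), ('A2', 2)]
```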
Importantly, these systems are built for efficiency.
“The AI that we use on board spacecraft like this is very different from the things that people are seeing in the media with ChatGPT. That’s an example of a large language model, and then the operative [word] there is large – that thing requires huge computing resources that we can’t fit on board spacecraft so we have to design our things very differently and they work very differently than those kinds of LLM systems,” Francis said. Instead, JPL engineers develop leaner, specialized systems tailored for the unique constraints of space missions.
On Earth, AI is just as impactful. Lukas Mandrake, program manager of Earth Data Science and Technology at JPL, described AI’s dual mission: democratizing access to scientific data and enhancing researchers’ capabilities.
“[We want] these systems and processes to work with scientists … to let them go faster or see deeper,” he said.
One such application is the use of AI to approximate the results of large-scale physics simulations. These models, often used for climate and atmospheric analysis, can be too time- and resource-intensive for broad-scale application. AI can produce faster, cost-effective approximations, allowing scientists and policymakers to run hypothetical scenarios in near real time.
“If I could have something running 10,000 times faster, I could set those [simulations] up and ask questions so fast that it’s almost like real time,” Mandrake said. “And now as a human, I’m unlocked and I can explore lots of different ideas. Some of them don’t pan out, and some of them really do. And I wouldn’t even have been able to start that investigation before.”
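The pattern Mandrake describes is often called an emulator or surrogate model: run the expensive simulation a limited number of times, fit a cheap statistical model to those results, then query the cheap model thousands of times in near real time. The sketch below uses a toy function and a simple polynomial fit as stand-ins; it illustrates the concept under those assumptions, not JPL's actual climate tooling.

```python
# Minimal surrogate-model sketch: sample a slow simulation sparsely, fit a fast
# approximation, then explore "what if" questions almost instantly.

import numpy as np

def slow_simulation(x: float) -> float:
    """Toy stand-in for a simulation that might take hours per run."""
    return np.sin(3 * x) + 0.5 * x**2

# 1. Sample the expensive model a limited number of times.
train_x = np.linspace(0, 2, 25)
train_y = np.array([slow_simulation(x) for x in train_x])

# 2. Fit a fast surrogate (here, a simple polynomial fit).
surrogate = np.poly1d(np.polyfit(train_x, train_y, deg=6))

# 3. Query the surrogate densely, in near real time.
query_x = np.linspace(0, 2, 10_000)
approx = surrogate(query_x)   # 10,000 evaluations in milliseconds
print(f"Max error on training points: {np.max(np.abs(surrogate(train_x) - train_y)):.4f}")
```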
Mandrake also emphasized the importance of trust and transparency. Unlike consumer chatbots, JPL’s AI systems are trained on numeric data and designed to communicate confidence levels with their results. This allows scientists to know when to trust the output – or when to proceed with caution.
“We train [our model] to look for the things that we want to find, to ignore the things we want to ignore, to characterize anomalies and describe how often they occur and when and why, and we train it to be confident, to communicate that confidence to us when it gets an answer,” Mandrake said. “That’s fundamental to us. Imagine if you were using an LLM [chatbot] and it said, ‘Here’s the answer to your question, but I’m actually not certain. I saw many different answers, so I don’t know if you should trust me right now.’ We need all of our systems to have that behavior so that people can trust it to be able to tell us when it’s not trustable.”
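One common way to get that behavior is to report an uncertainty estimate alongside every answer. The sketch below uses a small ensemble of models trained on resampled data: where the members disagree, the reported uncertainty grows and the result is flagged as untrusted. This is an illustrative pattern under assumed toy data, not JPL's production systems.

```python
# Hedged sketch of a model that reports both an answer and how much to trust
# it, using disagreement within a bootstrap ensemble as the confidence signal.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + rng.normal(0, 0.1, size=x.size)   # toy numeric "science data"

# Train an ensemble of simple models on bootstrap resamples of the data.
members = []
for _ in range(20):
    idx = rng.integers(0, x.size, x.size)
    members.append(np.poly1d(np.polyfit(x[idx], y[idx], deg=3)))

def predict_with_confidence(x_new: float, max_std: float = 0.15):
    """Return (answer, uncertainty, trusted) for a new input."""
    preds = np.array([m(x_new) for m in members])
    mean, std = preds.mean(), preds.std()
    return mean, std, std <= max_std

# Querying far outside the training range makes the ensemble disagree,
# so the system can say "don't trust me right now."
value, spread, ok = predict_with_confidence(1.5)
print(f"answer={value:.2f}, uncertainty={spread:.2f}, trust={'yes' if ok else 'no'}")
```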
Looking forward, Mandrake sees potential for onboard AI to determine which data from spacecraft is worth transmitting back to Earth, helping overcome bandwidth limitations. The lab is also exploring the concept of a “digital twin” of Earth – a system that combines observations, simulations and AI to create an interactive model for public and professional use.
Together, these initiatives show that AI at JPL is not about replacing humans but empowering them. Whether it’s helping a rover choose its next target on Mars or helping a researcher run climate models in a fraction of the time, AI is proving to be an essential partner in scientific discovery.