This AI could predict 10 years of scientific priorities—if we let it
Every 10 years, US astronomers have to make some tough decisions. These decisions, outlined in the Decadal Survey on Astronomy and Astrophysics, a set of studies produced by the National Academies of Sciences, Engineering, and Medicine, determine the next decade’s scientific priorities for the field.
The Decadal Survey has set the stage for big leaps in space exploration since the early 1960s. The seventh report, called Astro2020, is expected at the end of this month. Scientific communities, funding institutions, and even Congress refer to these reports to make decisions about where to invest time and money.
Previous reports have announced major projects, including the construction and launch of large space telescopes and the study of extreme phenomena like supernovas and black holes. The last report, dubbed Astro2010, even delved into the nature of dark energy.
Because the Decadal Survey is a consensus study, researchers who want their project to be considered must submit their proposals more than a year in advance. All proposals are considered, and all of them (numbering more than 500 this time) are available to the public.
This year, the topics being discussed range from exploring Jupiter’s moons to forging planetary defense strategies against once-in-1,000-year events like the flyby of a large asteroid named Apophis. Meanwhile, some researchers want to take a closer look at our own pale blue dot.
The survey committee, which receives input from a host of smaller panels, takes into account a gargantuan amount of information to create research strategies. Although the Academies won’t release the committee’s final recommendation to NASA for a few more weeks, scientists are itching to know which of their questions will make it in, and which will be left out.
“The Decadal Survey really helps NASA decide how they’re going to lead the future of human discovery in space, so it’s really important that they’re well informed,” says Brant Robertson, a professor of astronomy and astrophysics at UC Santa Cruz.
One team of researchers wants to use artificial intelligence to make this process easier. Their proposal isn’t for a specific mission or line of questioning; rather, they say, their AI can help scientists make tough decisions about which other proposals to prioritize.
The idea is that by training an AI to spot research areas that are either growing or declining rapidly, the tool could make it easier for survey committees and panels to decide what should make the list.
“What we wanted was to have a system that would do a lot of the work that the Decadal Survey does, and let the scientists working on the Decadal Survey do what they will do best,” says Harley Thronson, a retired senior scientist at NASA’s Goddard Space Flight Center and lead author of the proposal.
Although members of each committee are chosen for their expertise in their respective fields, it’s impossible for every member to grasp the nuance of every scientific theme. The number of astrophysics publications grows by about 5% every year, according to the authors. That’s a lot for anyone to process.
That’s where Thronson’s AI comes in.
It took just over a year to build, but eventually Thronson’s team trained it on more than 400,000 research papers published in the decade leading up to the Astro2010 survey. The team also taught the AI to sift through thousands of abstracts and identify both low- and high-impact research areas from two- and three-word topic phrases like “planetary system” or “extrasolar planet.”
According to the researchers’ white paper, the AI successfully “backcasted” six popular research themes of the last 10 years, including a meteoric rise in exoplanet research and observation of galaxies.
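The white paper doesn’t spell out the system’s internals, but the core idea, tracking how often topic phrases appear in abstracts over time and flagging the ones that are rising, can be sketched in a few lines. This is a minimal illustration with a made-up toy corpus, not the researchers’ actual method:

```python
from collections import defaultdict

# Hypothetical toy corpus of (year, abstract) pairs, standing in for the
# ~400,000 papers the real system was trained on.
corpus = [
    (2001, "a survey of galaxy clusters and dark matter"),
    (2003, "new methods for detecting an extrasolar planet"),
    (2005, "extrasolar planet atmospheres and transit timing"),
    (2007, "extrasolar planet census from radial velocity data"),
    (2009, "extrasolar planet demographics and galaxy surveys"),
]

# Two-word topic phrases to track, like those named in the proposal.
topics = ["extrasolar planet", "galaxy clusters"]

def yearly_counts(corpus, topic):
    """Count how many abstracts mention the topic phrase in each year."""
    counts = defaultdict(int)
    for year, abstract in corpus:
        if topic in abstract:
            counts[year] += 1
    return counts

def trend(corpus, topic):
    """Crude growth signal: mentions in the later half of the decade
    minus mentions in the earlier half. Positive means a rising topic."""
    counts = yearly_counts(corpus, topic)
    years = sorted({year for year, _ in corpus})
    midpoint = years[len(years) // 2]
    early = sum(c for y, c in counts.items() if y < midpoint)
    late = sum(c for y, c in counts.items() if y >= midpoint)
    return late - early

# Rank topics so the fastest-rising one comes first.
ranked = sorted(topics, key=lambda t: trend(corpus, t), reverse=True)
print(ranked[0])  # in this toy corpus: "extrasolar planet"
```

On this toy data the exoplanet topic rises sharply while galaxy clusters fade, mirroring the kind of “backcast” trend the team reported for the decade before Astro2010.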
“One of the challenging aspects of artificial intelligence is that they sometimes will predict, or come up with, or analyze things that are completely surprising to the humans,” says Thronson. “And we saw this a lot.”
Thronson and his collaborators think the steering committee should use their AI to help review and summarize the vast amounts of text the panel must sift through, leaving human experts to make the final call.
Their research isn’t the first to try to use AI to analyze and shape scientific literature. Other AIs have already been used to help scientists peer-review their colleagues’ work.
But could it be trusted with a task as important and influential as the Decadal Survey?
Robertson at UC Santa Cruz agrees that astronomy’s massive amount of research should be catalogued in some way. But he says that while the idea of using AI to assist with the Decadal Survey is interesting, it’s too early to tell if it’s something scientists should rely on.
“I do think that there are some important caveats about how we leverage machine learning,” says Robertson. One of the biggest issues with any AI is how well humans understand the algorithm and its results. In this case, could the team tell why its AI had made the choice between two separate but similar topics?
And could humans have come to the same conclusion?
“As scientists, we develop reputations about whether or not our work is accurate or correct. And so I think it’s reasonable for people to apply those same kinds of criteria for the results from these sophisticated machine-learning algorithms,” Robertson says.
Thronson and his team have not tried to predict the results of this year’s survey. Instead, they’re focusing on identifying the next big areas in astronomy.
Automated tools likely won’t be used in the Decadal Surveys for some years to come. But if the survey committee does decide to integrate AI into its process, that will represent a new way for scientists to reach agreement on their own goals.
For now, Thronson, Robertson, and thousands of other astronomers will just have to wait to see what’s next—the old-fashioned way.