Human-AI Coworking
Although artificial intelligence reduces human error in experimentation, human experts still outperform AI at identifying causation and at working with small data sets.
To capitalize on the strengths of both AI and human researchers, ORNL scientists, in collaboration with colleagues at National Cheng Kung University in Taiwan and the University of Tennessee, Knoxville, developed a human-AI collaborative recommender system to improve experimental performance.
During experiments, the system’s machine learning algorithms, described in npj Computational Materials, display preliminary observations for human review. Researchers vote on the data, telling the AI to show similar information or to change direction, much as a streaming service suggests content based on a user’s viewing history. After this initial guidance, the algorithms improve enough to surface relevant data with little human input.
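The voting loop can be pictured with a small sketch. The snippet below is an illustrative assumption of how such human feedback might steer recommendations, not the authors’ published algorithm: an upvote boosts the scores of candidates similar to an approved suggestion, and a downvote suppresses them, so later recommendations drift toward regions the researcher prefers. All names, the similarity kernel, and the scoring scheme are hypothetical.

```python
# Minimal sketch of a human-in-the-loop recommender, assuming a simple
# similarity-weighted scoring scheme (illustrative only, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of candidate measurement locations in a 2-D parameter space.
candidates = rng.uniform(0.0, 1.0, size=(200, 2))
weights = np.zeros(len(candidates))  # running preference score per candidate


def recommend(k=3):
    """Return the indices of the k current top-scored candidates."""
    return np.argsort(weights)[-k:]


def vote(idx, approve, bandwidth=0.1):
    """Propagate a human up/down vote to nearby candidates.

    An upvote raises the scores of candidates similar to the chosen one
    (show more like this); a downvote lowers them (change direction),
    akin to rating content on a streaming service.
    """
    dist = np.linalg.norm(candidates - candidates[idx], axis=1)
    influence = np.exp(-((dist / bandwidth) ** 2))  # Gaussian similarity kernel
    weights[:] += influence if approve else -influence


# Example session: the researcher approves one suggestion and rejects another;
# the next round of recommendations shifts toward the approved region.
first = recommend()
vote(first[0], approve=True)
vote(first[1], approve=False)
print("next suggestions:", candidates[recommend()])
```

After a handful of such votes, the similarity weighting alone concentrates suggestions near approved points, which mirrors the article’s description of the algorithms needing little human input once initially guided.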
“The foundation of this research is basically not the quantity of the data but the quality of the data that we are aiming for,” ORNL’s Arpan Biswas said.
The experiments and autonomous workflows were supported by the DOE-funded Center for Nanophase Materials Sciences, and algorithm development was supported by the DOE-funded MLExchange project, an effort to expand machine learning development across the national laboratories.