By Madhulika Srikumar*
The United States White House released a report (and an R&D plan) last month on preparing for a future where Artificial Intelligence (AI) will play an increasing role across critical sectors. The report addresses the current state of AI, its applications for the public good, and the public policy and regulatory questions that AI raises. It arrives amid growing optimism about the ability of Machine Learning, a subset of AI, to drastically improve people's standard of living.
The progress of AI has been attributed to three main factors: the availability of Big Data from sources such as social media, e-commerce and government; improved machine learning approaches and algorithms; and more powerful computers capable of processing this data. The White House report has triggered a debate in the United States on developing AI-based applications, though it does little to address potential user privacy and hacking concerns.
The document was developed by the National Science and Technology Council's (NSTC) subcommittee on Machine Learning and AI, which included representation from several federal departments and agencies, such as the Departments of Education, Labor and Defense. The NSTC Committee's recommendations seem largely guided by the following principles:
- That current policy should be directed towards Narrow AI, i.e. AI employed to perform specific tasks (e.g. self-driving vehicles, image recognition), and not towards General AI, where AI would display the cognitive capabilities of humans (think super-intelligent machines),
- That AI's short-term economic effect will lie in the automation of tasks, leading to the loss of some jobs while increasing demand for other skills and for jobs that can augment AI, and
- That AI practitioners need to adopt interdisciplinary approaches to build ethical AI; further, fluency in data science will be crucial to participate in policy debates surrounding AI.
Existing institutions should adapt to arrival of AI
The report recognises that AI has potential applications in several crucial areas such as health, education, energy and the environment, and recommends that existing sector-specific regulations be adapted as necessary to account for the effects of AI. For example, the regulation of autonomous vehicles in the future should be carried out within the current structure of vehicle safety regulation.
Regulation, according to the report, will serve two main purposes: protecting the public from harm and ensuring fairness in economic competition. The report identifies the challenge of developing extensive "training sets", highlighting the need for federal actors to gather rich sets of data, consistent with consumer privacy, that can better inform policymaking as technologies mature. Devising standards of information sharing that the private sector is comfortable with, given its intellectual property and competition concerns, will be a task for regulators. Other recommendations on AI regulation include hiring technical expertise at the senior level and fostering a federal workforce with more diverse perspectives when setting policy for AI-enabled products.
Prioritise AI research and develop a skilled workforce
The NSTC Committee places considerable emphasis on the important role that government has to play in the growth of AI through investment in R&D and the development of a skilled, diverse workforce. The United States is currently the leader in AI R&D, and the US Government is seeking to maintain this lead through a dedicated strategy directed at recognising areas of opportunity, coordinating R&D to maximise benefit, and using AI in government to improve services. The Committee calls for building a "data-literate citizenry" by imparting AI education to school students, empowering the next generation to participate in future policy debates on AI.
Make AI-based processes fair, accountable and safe
Recognising that the purpose of AI-based applications is to automate tasks and reduce human intervention, the Committee suggests that the rationale and ethics behind AI-based processes be made accountable to stakeholders. These processes must account for justice, fairness and safety, and must not allow developers' biases, whether intentional or not, to creep in. The report notes that data must be complete and unbiased if machine learning processes are to produce just and fair outcomes. The Committee calls for AI practitioners to adapt best practices from other safety-critical industries, such as aircraft, power plants and vehicles, and to integrate AI methods with safety engineering.
Preparing for AI-enabled societies
The rest of the world would do well to heed the lessons from the US approach to the future of AI. Governments can wait and watch, or they can be proactive and invest in AI R&D to find indigenous applications. An important takeaway from the US approach is that states should coordinate efforts across government departments in developing a strategic plan to train researchers and discover uses for AI across sectors such as education and criminal justice. States should also try to move the focus away from using AI purely for consumer goods.
As for AI's possible impact on jobs and the economy: while jobs will be lost to automation (even without AI playing a role), future AI systems could also create new jobs where human-machine collaboration is required. It is up to governments to monitor developments in AI as they occur, evolve methods to assess the safety and fairness of AI applications, and finally set standards and frameworks to regulate AI, in collaboration with industry and civil society.
The author is a fellow at ORF.