A paper titled “Six Human-Centered Artificial Intelligence Grand Challenges” was published in the International Journal of Human-Computer Interaction.
The study’s principal investigator was Ozlem Garibay, an assistant professor in the department of industrial engineering and management systems at UCF (’01 MS, ’08 Ph.D.). According to her, technology has become more prevalent in many facets of our lives, but it has also created numerous problems that need to be researched.
For instance, according to Garibay, who studies how AI is applied in the design and discovery of materials and drugs as well as how it affects social systems, the broad adoption of artificial intelligence could have a tremendous impact on human life in ways that are not yet fully understood.
The six issues that Garibay and her group of researchers identified are as follows:
Challenge 1: Human Well-Being. AI should be able to identify opportunities for implementation that will improve human well-being. When people interact with AI, it should also be attuned to supporting the user’s well-being.
Challenge 2: Responsible AI. Responsible AI is the idea of prioritizing human and societal welfare throughout the AI lifecycle. Doing so reduces the risk of unintended consequences or ethical violations while ensuring that the potential benefits of AI are realized in a way that is consistent with human values and goals.
Challenge 3: Privacy. The collection, use, and dissemination of data in AI systems should be carefully evaluated to preserve individual privacy and prevent unfair use against specific people or groups.
Challenge 4: Human-Centered Design. AI systems should be built according to human-centered design principles that can guide practitioners. This framework would distinguish between AI that poses extremely low risk, AI that requires no additional precautions, AI that poses extremely high risk, and AI that should not be permitted at all.
Challenge 5: Governance and Oversight. Addressing this challenge requires a governance structure that accounts for the entire AI lifecycle, from conception through development to deployment.
Challenge 6: Human-AI Interaction. To develop an ethical and equitable relationship between people and AI systems, interactions must be grounded in the core premise of respecting human cognition. In particular, humans must retain full control over, and accountability for, the actions and outcomes of AI systems.
The study was conducted over 20 months and incorporates the views of 26 international experts with diverse backgrounds in AI technology.
These challenges call for the creation of artificial intelligence systems that are ethical, fair, and focused on improving human welfare, according to Garibay.
Addressing them demands a human-centered approach that incorporates ethical design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful engagement with human cognitive capacities.
Overall, she believes, these difficulties should serve as a call to action for the scientific community to create and use artificial intelligence technologies that put humans first.
The group of 26 experts comprises members of the National Academy of Engineering and researchers from North America, Europe, and Asia with extensive backgrounds in academia, industry, and government. Their educational backgrounds likewise range widely, from psychology and medicine to computer science and engineering.