Ethics provides the moral guidelines that help us distinguish right from wrong. AI ethics is the set of principles that guide how AI should be built and what it should be allowed to do. People carry all kinds of cognitive biases, such as recency and confirmation bias, and these biases show up in our actions and, as a result, in our data.
Several books focus on ethics and bias in AI, helping readers learn more about these issues and understand AI better.
Mark Coeckelbergh examines influential narratives about AI, such as transhumanism and the technological singularity. He explores key philosophical debates, including questions about the fundamental differences between humans and machines and arguments about the moral status of AI. He surveys the different ways AI can be used, with a focus on machine learning and data science, and gives an overview of critical ethical issues such as privacy, responsibility and the delegation of decision-making, transparency, and bias at every stage of the data science process. He also considers how work will change in an AI economy. Finally, he reviews various policy proposals and discusses the problems policymakers face, arguing for ethical practices that build values into the design process and include a vision of the good life and the good society.
This book, part of MIT Press's Essential Knowledge series, summarises these issues. AI Ethics, written by a philosopher of technology, goes beyond the usual hype and nightmare scenarios to address fundamental questions.
This book draws on economics, new technologies, and positive psychology to offer the first values-driven approach to algorithmic living: a plan to help people live well in the present and shape their future for the better. Each chapter opens with a fictional vignette to help readers imagine how they would react in different AI situations. The book paints a vivid picture of what our lives might look like in a dystopia where robots and corporations rule, or in a utopia where people use technology to enhance their natural abilities and become a long-lived, super-smart, and kind species.
The book opens by imagining a world where AI has surpassed human intelligence and become ubiquitous. Tegmark then traces the stages of life from its beginnings: he calls humanity's biological origins “Life 1.0,” its cultural development “Life 2.0,” and its technological age “Life 3.0.” The book focuses mainly on “Life 3.0” and emerging technologies such as artificial general intelligence, which may one day be able to learn and to redesign both its own hardware and its internal structure.
James Barrat weaves together explanations of AI concepts, the history of AI, and interviews with well-known AI researchers such as Eliezer Yudkowsky and Ray Kurzweil. The book describes how an artificial general intelligence could improve itself repeatedly until it becomes an artificial superintelligence. Barrat strikes a warning tone throughout, focusing on the dangers that artificial superintelligence poses to human life, and he stresses how hard it would be to control, or even predict, the actions of something many times smarter than the most intelligent humans.
This book helps us understand how technology works and where its limits lie, and it explains why we should not assume that computers always get things right. The author does a great job of raising issues of algorithmic bias, accountability, and representation in a male-dominated tech field. The book offers a detailed look at AI's social, legal, and cultural effects on the public, along with a call to design and deploy technologies that benefit everyone.
The book's authors argue that moral judgment must be built into robots to ensure our safety. Even though full moral agency for machines is still far off, they contend, it is already necessary to develop a functional morality in which artificial moral agents have some basic ethical sensitivity. They make this case through a quick tour of philosophical ethics and AI, showing that conventional ethical theories appear insufficient and that more socially aware, ethically sensitive robots are needed. Finally, the authors demonstrate that efforts are already underway to create machines that can distinguish between right and wrong.
Nick Bostrom, a Swedish philosopher at the University of Oxford, wrote the 2014 book Superintelligence: Paths, Dangers, Strategies. It argues that if machine brains surpass human brains in general intelligence, this new superintelligence could replace humans as the dominant species on Earth. Moreover, intelligent machines could improve their own capabilities faster than human computer scientists could, which could spell disaster for humanity at a fundamental level.
Furthermore, no one knows whether human-level AI will arrive in a few years, later this century, or not until the 22nd century. However long it takes, once a machine reaches human-level intelligence, a “superintelligent” system, one that exceeds human cognitive performance in almost all domains of interest, would follow surprisingly quickly, if not immediately. Such a superintelligence would be hard to control or stop.
In his book Ethical Machines, Reid Blackman tells you everything you need to know about AI ethics as a risk management challenge. He will help you build, buy, and use AI ethically and safely, protecting your company's reputation, legal standing, and regulatory compliance, and he will help you do this at scale. Don't worry, though: the book's purpose is to help you get work done, not to make you wrestle with deep, existential questions about ethics and technology. Blackman's writing is clear and accessible, demystifying a complicated and often misunderstood subject like ethics.
Most importantly, Blackman makes ethics actionable by addressing AI's three most significant ethical risks, bias, explainability, and privacy, and telling you what to do (and what not to do) to manage them. Ethical Machines is the only book you need to ensure your AI advances your company's goals instead of undermining them. It shows you how to write a strong statement of AI ethics principles and build teams that can evaluate ethical risks effectively.
Source: indiaai.gov.in