AI Ethics: Ethical Concerns About AI, and Who Makes Sure AI Won't Become Humanity's Enemy
What is AI Ethics?
AI ethics refers to the moral principles and values that guide the development, deployment, and use of artificial intelligence systems.
What are the ethical concerns about AI, and who is working on them?
For sure, you have read many articles and watched trend-jacking vloggers talking about how robots are going to take over the world. Well, it turns out that AI ethics is a topic that is way different from just bad robots trying to revive Skynet and destroy humanity. In fact, many companies are taking AI ethics seriously to make sure that their artificial intelligence systems are fair, transparent, and accountable. By the way, speaking of accountability, who could forget Elon Musk's famous warnings about the dangers of AI? Let's dig into what's going on in the world of AI ethics, starting with the concerns themselves.
What are Ethical Concerns about AI?
- Bias and Discrimination - The humans who build AI algorithms have their own biases and prejudices, and the data used to train these systems often carries biases too. The result can be AI that favors some kinds of people over others, which is deeply unfair to those on the losing end. We need to make sure that AI systems treat everyone fairly and don't leave anyone out.
- Privacy - AI can gather and examine gigantic amounts of information about people, which may be used for targeted advertisements or even political propaganda. This raises a serious privacy issue, and the possibility of personal data being misused is knocking on the door. Just like what happened to… just kidding, no name-drops in this article, lol.
- Accountability and Responsibility - As AI becomes more independent, it's like a teenager wanting to do things on their own without being told what to do. But if something goes wrong, it's hard to figure out who's responsible for the mess. It's the classic teenage move of blaming someone else when things go awry. But with AI, it's not just a broken vase or a spilled drink; it could be something much more severe!
- Lack of Empathy and Emotion - You know how some people can be totally heartless and cold? Well, AI takes it to a whole new level: it literally cannot feel empathy or emotion. That means it will do exactly what it's told, no matter the situation. Unlike humans, who might bend the rules in an emergency, AI doesn't have that luxury (unless someone specifically programmed it to). It's like trying to teach a robot how to love; it's just not going to happen anytime soon.
- Transparency - AI algorithms are so complex that understanding them is like trying to figure out your partner's mood swings (if you have one). This lack of transparency makes it challenging to understand how these algorithms work, let alone identify any biases or issues that might pop up. It's like trying to fix a broken television with no instructions and no toolbox.
- Job Loss - As AI advances and becomes more capable, it's starting to do things that only humans could do before. But while robots can flip burgers, fold clothes, and even write articles (cough, like me, cough), there's a downside: robots are also taking people's jobs. So while you might have an AI writing your next thesis or documentation, your local fast-food joint might have no human workers because the robots are running the show.
- Safety - Autonomous AI systems that control physical devices must be designed to ensure safety for all users. One example is a self-driving car. It's like having your own personal chauffeur who doesn't need to sleep or take a bathroom break! But here's the catch: any malfunction or error in these systems could lead to serious danger. So let's hope those AI engineers tick every box on the safety checklist.
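To make the bias concern above a bit more concrete, here's a minimal sketch of one common fairness check, demographic parity, which asks whether a system approves different groups of people at roughly the same rate. All names and data here are hypothetical, and real fairness audits involve much more than a single metric:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: (applicant group, approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
print(selection_rates(decisions))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions)) # 0.5
```

A gap of 0.5 like the one above would be a loud warning sign that the system treats group B very differently from group A, even if nobody intended it.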
It’s really important for the people who design and build AI to consider the ethical concerns listed above. Acting on these concerns will help ensure that AI is used in ways that benefit society, with minimal risk of destroying humanity or other horrible disasters.
Who Is Working on the Ethical Concerns of AI?
Now let's discuss some major companies leading the way in AI ethics. Google actually formed an AI ethics board that included computer science experts, philosophers, and public policy professionals, tasked with guiding the development of Google's AI technology in an ethical and responsible way. The sad thing is that Google dissolved it about a week after forming it. Read about it here: https://www.theverge.com/2019/4/4/18296113/google-ai-ethics-board-ends-controversy-kay-coles-james-heritage-foundation
Another major company is Microsoft, which established the Aether Committee (AI, Ethics, and Effects in Engineering and Research) to advise on the responsible development and release of its AI products.
But it's not just big tech companies that are thinking about AI ethics. Startups like FairFrame and Fiddler Labs are also focusing on ensuring that AI is used ethically and equitably. FairFrame, for example, provides a platform for companies to evaluate their hiring processes for fairness and to identify and address any biases that may be present. Fiddler Labs provides an AI transparency and monitoring platform, which allows companies to understand how their AI systems are making decisions and to identify any biases or ethical concerns.
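The core idea behind transparency and monitoring platforms like the ones described above can be sketched very simply (this is purely illustrative and not any vendor's actual API): wrap a model so that every prediction is logged along with its inputs, creating an audit trail that reviewers can later inspect for biased or surprising decisions.

```python
import time

def with_audit_log(model_fn, log):
    """Wrap a prediction function so every call is recorded for later review."""
    def audited(features):
        prediction = model_fn(features)
        log.append({
            "timestamp": time.time(),
            "features": features,
            "prediction": prediction,
        })
        return prediction
    return audited

# Hypothetical toy "model": approve a loan if income is at least twice the debt.
def toy_model(features):
    return features["income"] / max(features["debt"], 1) >= 2.0

log = []
model = with_audit_log(toy_model, log)
model({"income": 80_000, "debt": 30_000})

print(len(log))               # 1 entry recorded
print(log[0]["prediction"])   # True
```

Real monitoring systems add a lot on top of this (drift detection, per-group statistics, explanations), but the audit trail is the foundation that makes any of that analysis possible.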
Of course, our topic would not be complete without mentioning Elon Musk. The Tesla and SpaceX CEO has long been warning the public about the dangers of AI and has even called it humanity's "biggest existential threat." Musk has argued that AI needs to be regulated to prevent it from becoming too powerful and causing harm to humans. While some may dismiss Musk's warnings as alarmist, there's no denying that he's brought attention to the issue of AI ethics.
So, what does all of this mean for the future of AI? Well, it means that we're starting to take the ethical implications of AI seriously. As AI becomes more ubiquitous in our lives, it's important to ensure that it's being used in a way that's fair, transparent, and accountable. Companies like Google and Microsoft are leading the way in developing AI systems that are ethical and responsible, and startups like FairFrame and Fiddler Labs are providing tools to help companies evaluate and monitor their AI systems. And while Elon Musk may be a polarizing figure, his warnings about the dangers of AI have helped to bring attention to the issue and to spur action.
In conclusion, AI ethics is a serious issue, but that doesn't mean we can't have a little fun with it. After all, who says ethics has to be boring? By taking a smart, witty, and friendly approach to the topic, we can engage more people in the conversation and work together to ensure that AI is used in a way that benefits everyone.