(This post is part of a weekly series for Know Thyself 2019, a 365 day journal project. Start here!)
This week’s journal topic: AI and Tech Ethics
Technology is advancing at unstoppable speed. Consider how we’ve gone from black-and-white TVs to smartphones in 70 years. In 2019, even if you’ve got the latest smartphone and home gadgets and frequently read tech news, you’re still behind, as I am. At the cutting edge of AI research, where the work is certainly far ahead of anything published in tech news, who is making up the ethical rules? Not to disparage computer science researchers whatsoever, but are people who spend most of their time with computers philosophically equipped to make decisions concerning the creation and use of technology?
Labs in America and China are racing to make AI ready for consumers, for economic growth, and for warfare. For example, it’s clear that 21st century warfare will be fought with smart drones instead of humans. In the economy, AI robots are being sent on suicide missions to do jobs that humans do not want to do. For you and me, it doesn’t feel so bad to be one iPhone behind, but how does it feel to know that your next intelligent, self-driving car is programmed to kill either you or a pedestrian when it encounters an imminent collision? This is the tip of the iceberg of ethical issues – issues that human beings have never had to think about before.
Are you content to let someone else make ethical decisions that will certainly impact your lifestyle and well-being in the future?
I’m no expert, so who am I to judge?
I’m not the most tech-savvy person. Let me put it this way: people buy phones for me because it pains them to see me using such old phones. I use my computer for the internet and word-processing – that’s all. My first computer game was Hunt the Wumpus for the TI-99, and the last video game I played was Duck Hunt for Nintendo. I’ve tried to improve – I bought a PS3 – but I’ve never used it.
Given my practical ineptness, I sometimes worry that I’m the wrong person to think about AI and tech ethics. Maybe you feel the same. But even if our combined expertise is elementary compared with the experts’, the fact is that AI will continue to operate in our lives in increasingly obvious ways. It’s here already. Staying in control of our own lives requires us to make ourselves aware of AI ethics. If we don’t make these decisions, the scientists – or worse, the computers – will make them for us.
Use IT and Abuse IT
AI is celebrated for simplifying human lives and decision-making. When AI can be programmed to analyze and sort data, and thereafter provide only relevant information to the user, it allows the user to bypass many small sorting decisions. As AI can be programmed to understand goals and weigh options, it simplifies the user’s life even further by suggesting solutions and facilitating decisions. AI can even carry out the resulting actions, as finely tuned robots can accomplish many physical tasks. Self-driving cars are a great example of helpful tech.
The abuse of tech can help humans behave badly and avoid ethical responsibility. In warfare, for example, drones are used as spies, programmed to record audio and video illegally. Computer programs analyze data and transfer it illegally. Robots can be programmed to identify targets and deploy weapons. Robots capable of “learning” can be taught to perform acts that humans find abhorrent, such as killing or torture.
Worst of all, drones and AI provide deniability in warfare. Deniability means that the technology can be abandoned and is difficult to trace back to an owner. It allows militaries to avoid responsibility for acts committed by their tech.
Moral Status of AI
Should AI be given moral status or personhood under the law? Some robots are capable of “feeling” and showing empathy. Customizable robotic dolls have become life-companions in love-starved cultures, fulfilling human social needs when partners aren’t available. Robots can understand, weigh options, decide and perform independent actions just like humans. They can learn and develop “personalities”.
As we begin to create an approximation of human consciousness, should AI receive rights against being used in suicide missions? What about a robot who is poorly treated by her owner – is she a victim of “rape”? A robot with a bad attitude gets sent in for reprogramming – what of his freedom of thought and belief? The quest to develop ever closer approximations of human emotion, will, and consciousness implies that, at some point, we will be forced to evaluate what sorts of beings deserve moral status. We might need to draw an arbitrary line between “natural” and “created” consciousness, or perhaps we shouldn’t. Who will make this decision?
Below are the journal questions for this week! Technology is bound to get more advanced and more accessible – it’s not far-fetched to think that we will each have our own robots. By working through these questions, you’re participating in the ethics of the future. If you’re into AI and computers, these questions will make you a leader in your industry, simply because many people in computer science are not even aware of the ethical issues in technology. If you’re a tech user, you will make more informed decisions about, for example, whether you are prepared to support the use of technology in warfare.
- What roles of friendship does technology serve for you? That is, what do you do with technology that people once needed friends for?
- How much time do you spend interacting with technology each day, and how?
- Robots are created by humans, but does this mean they always have a duty to serve humans?
- Robots have been developed to be sexual and/or romantic partners – do you see any benefits or harms for the robot, the individual, and society?
- If you were charged with a crime and had a choice between a human judge and an AI judge (supposedly fair and logical), which would you prefer?
- Does a robot that has emotions and consciousness deserve dignity and personhood? What might dignity for AI mean?
- Is it right for warfare drones to be untraceable – meaning that we cannot find and hold their owners ethically and legally responsible, no matter how horrific their actions?
How to Get the Most out of Know Thyself 2019:
Don’t rush through the questions. Try to do only one question every morning, leaving space to add thoughts that might come up later during the day. The journal is designed to help you develop a consistent, daily practice of self-reflection.
If you liked this week’s topic, please offer me a like or a comment! I put time and effort into creating interesting topics and critical thinking content. A little “Like” goes a long way for my spirit! Thank you!