Thank you for a well-written article. I agree with you that there are a lot of concerns when it comes to machine learning and AI in general. However, I have to wonder if most of the fears and concerns regarding AIs and their capability (or lack thereof) for compassion, as well as their ethical and moral programming, all stem from the fact that we do not have anything that can truly be considered artificial intelligence. All we have is a bunch of very clever algorithms performing their tasks blindly (only within the scope of their programming), not artificial intelligence.
We all love to use the term AI, but in most cases it is misleading to apply it to a program or a set of algorithms governing a machine. At this point in history, our version of "AI" is nothing but a body of very clever programming: contingencies and endless "if ..., then ..." conditions and loops make up the vast majority of what we consider an artificial mind.
However, the issue you bring up is legitimate and the concerns are real. In the very near future we will have true artificial intelligence. I think it is important to have these conversations now instead of waiting until the dawn of true AI is upon us. If we don't find a way to teach artificial intelligence to understand human emotion and morality, our greatest fears may come to pass before we even realize it.
The issue, as I stated, is that AI has to understand each of our individual priorities as well as societal interests. It won't be easy, because you cannot hard-code morality as merely a set of rules. Hard coding is actually the wrong way to think about morals or value alignment, because preferences evolve, values change, and an AI has to be able to stay in alignment continuously. In essence, the AI has to know and understand us better than we know ourselves, which is possible, but not something you can code in; it has to be trained.
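The contrast between hard-coding and training can be made concrete with a toy sketch. This is purely illustrative, not anyone's actual alignment method; all names here (`PreferenceModel`, `observe`, `score`) are hypothetical. A fixed rule table is frozen at write time, while a simple learned estimate keeps tracking feedback even after the underlying preference shifts:

```python
class PreferenceModel:
    """Toy model: keeps a running estimate of how much a user values each action."""

    def __init__(self, learning_rate=0.2):
        self.values = {}          # action -> estimated value
        self.lr = learning_rate

    def observe(self, action, feedback):
        # Move the estimate toward the latest feedback signal,
        # so the estimate follows preferences even as they drift.
        current = self.values.get(action, 0.0)
        self.values[action] = current + self.lr * (feedback - current)

    def score(self, action):
        return self.values.get(action, 0.0)


# A hard-coded "morality" table is frozen at write time:
HARD_CODED = {"share_data": -1.0, "keep_private": +1.0}

model = PreferenceModel()
# Early feedback matches the hard-coded rule...
for _ in range(10):
    model.observe("share_data", -1.0)
# ...but later the user's preference shifts, and only the trained model follows:
for _ in range(30):
    model.observe("share_data", +1.0)

print(HARD_CODED["share_data"])               # stays frozen at -1.0
print(round(model.score("share_data"), 2))    # now close to +1.0
```

The point of the sketch is only the asymmetry: the rule table cannot respond to drift at all, while even a crude learner stays in rough alignment as long as the feedback keeps flowing.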
As far as emotions are concerned, I don't think emotions really have much to do with morality; values do. What you value as an individual might be based on how you feel about different things, but an AI is not going to understand (nor should it) the feelings aspect. It can, however, know what humans value.
It's true that hard-coding morality is likely the wrong approach to the problem, but we are going to have to find a way to teach our AIs how to tell right from wrong, and why something is right or wrong in the first place.
It's a difficult issue to address with programming, but I believe teaching a machine to understand emotions is the first step to understanding morality. Emotional and social consequences help drive the rules of social morality. If a machine were able to feel hurtful emotions, then, perhaps, it could find value in morality and ethics.
While there is no universal right and wrong, if you have values and the AI knows your particular values, then it can learn your expected "right and wrong". This would have to include all sorts of things, from culture to social norms to position in society to expected outcomes.
My own approach would be to focus on producing the best outcomes for the individual human and for humanity as a whole, while taking into account the values humans hold dear. There aren't really universal values held by all humans, but there are values held by the consensus of humans interacting with the AI. So, for example, if Google search results were training an AI, then the AI might be able to infer something like the subconscious of humanity from the search patterns, though even this is speculative.
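The "consensus values" idea can be sketched very simply. This is a hypothetical illustration, not a real alignment technique: assume each user's interaction patterns have somehow been reduced to a dict of topic-to-value signals (the hard, unsolved part), and aggregate them with a median to get one crude shared estimate:

```python
from collections import defaultdict
from statistics import median

def consensus_values(user_signals):
    """user_signals: list of dicts mapping topic -> how strongly that user
    appears to value it (hypothetically inferred from interaction patterns).
    Returns the per-topic median as a crude 'consensus' value."""
    per_topic = defaultdict(list)
    for signals in user_signals:
        for topic, value in signals.items():
            per_topic[topic].append(value)
    # Median rather than mean, so a few extreme users don't dominate.
    return {topic: median(vals) for topic, vals in per_topic.items()}

users = [
    {"privacy": 0.9, "convenience": 0.4},
    {"privacy": 0.7, "convenience": 0.8},
    {"privacy": 0.8, "convenience": 0.6},
]
print(consensus_values(users))  # -> {'privacy': 0.8, 'convenience': 0.6}
```

Even in this toy form, the hard questions the thread raises remain: whose signals count, how to extract value estimates from raw behavior, and whether a statistical aggregate reflects anything people would actually endorse.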
EDIT: It won't, because by the time we have our brains plugged into the cloud so that AI can really "care", we will have had to give away our uniqueness, what it means to be human. AI is a binary system... humans are trinary. There is no compromise. AI systems only copy and mimic life. We created it, and that is why it needed us, but its takeover will mean that we become binary in turn.