GHOST IN THE MACHINE – Pt. I
The issue with Artificial Intelligence (AI) is that in many cases it is not possible to tell WHY a decision was made. If an autonomous car, once a collision has become inevitable, decides to collide with the young woman on the street instead of slamming into the bus, it is not possible to tell WHY.
We have to move forward and keep developing AI in order to bring humanity to the next level.
But since it is no longer possible to look into the code and find the "IF collisionInevitable THEN collideWithYoungWoman" part, we must be aware of the greater challenge that awaits.
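To make the contrast concrete, here is a hypothetical sketch (all names, numbers, and weights are invented for illustration): a classic rule-based decision is readable in the source, while a learned model's decision emerges from fitted weights that say nothing about WHY.

```python
import math

def rule_based_decision(collision_inevitable: bool) -> str:
    # The dreaded rule sits right here in the source, readable by anyone.
    if collision_inevitable:
        return "swerve"
    return "brake"

def learned_decision(features, weights, bias) -> str:
    # A single artificial neuron: the "rule" is smeared across numbers
    # fitted to training data; nothing in this code explains WHY.
    z = sum(f * w for f, w in zip(features, weights)) + bias
    p = 1.0 / (1.0 + math.exp(-z))  # logistic activation
    return "swerve" if p > 0.5 else "brake"

# The first decision can be audited by reading the IF statement;
# the second can only be probed by feeding in inputs and observing outputs.
print(rule_based_decision(True))
print(learned_decision([0.8, 0.1], [1.7, -2.3], 0.2))
```

Real self-driving systems use networks with millions of such weights, which is precisely why no one can point to the line of code that chose the victim.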
We need to tackle questions of morals, ethics and responsibility. How many rights should be granted to artificial minds?
There will be a moment in time at which a machine passes the Turing Test and we suddenly believe that there are feelings inside it. Perhaps there are. How will we proceed from there?