Can a machine think? The question that Leibniz and Turing asked themselves remains open even today, and neurophysiology, computer science, cognitive science and philosophy still move between scientific theories and hypotheses, with contrasting implications and uneven contours.
To use a language that combines philosophy and technology, we can say that both at the level of the brain, our hardware, and at the level of the mind, which corresponds to the concept of software, man turns out to be a rather complex machine, one to which the binary logic on which computers and artificial intelligence are based cannot be applied, or at least is not exhaustive.
Referring to the central role of action in intelligence (that is, the concept of practical reasoning in Aristotle's Nicomachean Ethics) and quoting von Wright: not all of our actions, indeed, can be traced back to a practical syllogism such as "A wants to bring about p; A believes he can do this if he does b; therefore A sets himself to do b." This is because we are often faced with complicated choices involving elaborate plans and strategies that engage our theoretical rationality as well as ethical and moral emotions, beliefs and intentions. But building on this last point: how is it possible to instill in machines an ethical criterion on which to base their actions?
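To make the rigidity of the syllogistic scheme concrete, here is a minimal sketch in Python. It is purely illustrative, not drawn from von Wright: the `Agent` class and its `wants` and `believes_means` fields are hypothetical names chosen for this example.

```python
# Hypothetical sketch of von Wright's practical syllogism as a naive decision rule.
# Agent, wants and believes_means are illustrative names, not from any library.
from dataclasses import dataclass, field

@dataclass
class Agent:
    wants: set = field(default_factory=set)            # goals: "A wants to bring about p"
    believes_means: dict = field(default_factory=dict)  # beliefs: doing b brings about p

    def act(self):
        """If A wants p and believes doing b brings about p, A sets himself to do b."""
        return [b for p, b in self.believes_means.items() if p in self.wants]

a = Agent(wants={"p"}, believes_means={"p": "b"})
print(a.act())  # ['b']: the syllogism mechanically licenses doing b, and nothing more
```

Everything the paragraph above lists as missing (competing goals, elaborate plans, moral emotions, intentions) falls outside this rule, which is precisely the point: real deliberation cannot be reduced to it.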
Let us first take a step back: ethics can be defined, first of all, as the search for a rational process for understanding what is good and what is bad. And again: for legal theory, as for moral philosophy of Aristotelian descent, a means is that element in a course of actions which contributes to causing an event but exercises no autonomy or discretion; an agent, instead, is the one who initiates that course of actions, thereby causing the event. Technology has always belonged to the category of means, that is, the tools placed in the hands of an agent to allow him to obtain a specific result. In this conceptual horizon, the person responsible for an event is whoever decided the flow of actions that caused the event itself; obviously, his responsibility (in a moral and legal sense) can be graduated, even to the point of disappearing, depending on his degree of awareness and will in the circumstance that occurred. But there is no doubt that, while until a few years ago no one would ever have thought of holding a motor vehicle responsible for the death of a man, because "technically" it was the means that, by hitting the person, caused the death, today, faced with driverless cars and autonomously guided weapons, such questions are becoming much more complex.
With reference to artificial intelligence, more and more often we ask ourselves about its ethics, trying to apply the same rational process and therefore to define and understand, deontologically, how an AI should behave. When artificial intelligence makes decisions that affect the lives of human beings, from deciding whether to grant a loan, to hiring someone, up to establishing autonomously whether or not to shoot a human being, as in the case of AWS (Autonomous Weapon Systems), there is a need to agree on an ethics according to which it acts. But what are the parameters on which an ethics can be defined? What is good and what is evil, and to what extent? First of all, we will have to agree on shared guidelines which, if not yet prescriptive, at least indicate which direction to take in the future. Law and norms have a fundamental importance, but they are necessarily the result of work that must be done beforehand, precisely in order to structure them. And it is here that complex work across different but, in this case, complementary fields, the philosophical, the programming, the sociological and the deontological, will have to meet and cooperate in order to reach a common goal: safeguarding the human race and the res publica.
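One way such an agreed ethics could concretely enter a system is as a hard constraint layered over a learned decision rule. The sketch below is a hypothetical illustration, not an existing API: `score_model`, `decide_loan` and `PROTECTED_ATTRIBUTES` are invented names, and the loan scenario stands in for any of the decisions mentioned above.

```python
# Hypothetical sketch: a deontological constraint gating an automated decision.
# All names and thresholds are illustrative, not drawn from a real system.
PROTECTED_ATTRIBUTES = {"gender", "ethnicity", "religion"}

def score_model(applicant: dict) -> float:
    # Stand-in for a trained model; here, a toy income-based score in [0, 1].
    return min(applicant["income"] / 100_000, 1.0)

def decide_loan(applicant: dict, threshold: float = 0.5) -> bool:
    # The ethical layer refuses to decide at all if the input contains
    # attributes the agreed-upon norms declare off-limits.
    if PROTECTED_ATTRIBUTES & applicant.keys():
        raise ValueError("decision blocked: protected attribute in input")
    return score_model(applicant) >= threshold

print(decide_loan({"income": 62_000}))  # True under this toy threshold
```

The design choice the sketch dramatizes is the one the paragraph raises: the constraint itself (which attributes are off-limits, where the threshold sits) cannot come from the code; it has to come from the prior philosophical, legal and sociological work of defining the parameters of good and evil.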