The Sunday appointment: an in-depth analysis by Paola Liberace, scientific coordinator of the Institute for the Culture of Innovation
It is hardly news anymore: artificial intelligence, after taking on the most varied repetitive, automatable and therefore "low-level" tasks, has gradually ascended to more complex and articulated activities, from medical diagnoses to driving vehicles, and even to intellectually refined ones, to the point of writing texts that are not only meaningful but pleasant to read, even indistinguishable from those composed by a human author. Finally, AI has learned to program, translating natural language into code: officially since August of last year, when OpenAI, the AI research company co-founded by Elon Musk (to which we owe, among other things, GPT-3, the artificial intelligence that produces the texts just mentioned), released Codex.

So why bother with AlphaCode, the new "automatic" coding system whose release DeepMind has just announced? Because the program devised by Alphabet/Google's artificial intelligence company not only generates working code, but apparently does so by exercising problem-solving and critical-thinking skills. In presenting AlphaCode, DeepMind highlighted the system's ability to compete, placing around the middle of the table, in programming competitions, which are widespread and popular among developers and in which reusing already written code is not enough to stand out.

One more step, therefore, beyond the patchwork of GPT-3, which draws words and phrases from the Web and combines them in a sensible, even elegant way; but also beyond that of Codex, which produces lines of code learned from GitHub repositories in response to natural-language commands. To be honest, a half step: as CNBC has pointed out, some argue that AlphaCode's placement is explained by the presence in the rankings of students and other competitors who lack, in whole or in part, the rudiments of programming, and so it would not be so flattering.
Without wishing to ungenerously minimize the result obtained, however, we are once again faced with an excellent solution to specific problems, not a system capable of taking responsibility for the operation of the software it has programmed. To answer a request for software to be written, AlphaCode needs a question formulated in an extremely correct and precise way, as well as an exorbitant number of candidate programs to generate, test and discard: up to 10^60 for a 200-line school exercise, according to Ernest Davis, a professor of computer science at New York University. Rather than a return to the origins of the path that led to today's artificial intelligence, starting from the problem of symbolic reasoning, this seems closer to the well-known infinite monkey theorem: monkeys pressing keys at random on a keyboard for an infinite time would almost surely compose any text, even Hamlet. Many attempts, many errors, and among these inevitably some solutions: a description that is not too inaccurate for systems that support human work, such as AlphaCode, but ungenerous for human intelligence, including that of programmers, who can always be helped by these systems, but never replaced.
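The generate-test-discard loop described above can be sketched in a few lines of Python. This is a deliberately toy illustration, not AlphaCode's actual pipeline: here the "candidates" are tiny arithmetic expressions enumerated by brute force, whereas AlphaCode samples candidate programs from a large language model at a vastly bigger scale. Only the filtering step, keeping the candidates that pass every input/output example, is mirrored.

```python
# Toy generate-and-filter sketch. The hidden target function is
# f(x) = 2*x + 1, known to us only through input/output examples,
# much like the test cases of a programming competition problem.
EXAMPLES = [(1, 3), (2, 5), (5, 11)]

def candidates():
    # Blindly enumerate small candidate "programs": linear
    # expressions in x with small integer coefficients.
    for a in range(5):
        for b in range(5):
            yield f"x * {a} + {b}"

def passes(expr):
    # A candidate survives only if it reproduces every example.
    return all(eval(expr, {"x": x}) == y for x, y in EXAMPLES)

# Many attempts, many errors, and among these a solution:
solutions = [e for e in candidates() if passes(e)]
print(solutions)  # only "x * 2 + 1" fits all three examples
```

The brute-force enumeration makes the combinatorial explosion tangible: with only two coefficients bounded by 5 there are 25 candidates, while a 200-line program, by Davis's estimate, would require filtering up to 10^60.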