A panel of experts was tasked with preparing Germany for the use of artificial intelligence. The final report is still pending, but its findings are already drawing criticism.
Düsseldorf, Munich, Berlin. It is expected to control machines, steer cars and develop new drugs: artificial intelligence (AI) is one of the most important key technologies for the German government. The goal: Germany is to become a leader in both the research and the application of AI. To this end, the federal government launched a national AI strategy, and a commission of inquiry of the German Bundestag has been examining the technology since autumn 2018.
Members of the Bundestag and outside experts have discussed at length how industry and society should adapt to the age of self-learning machines. The expert committee's final report, scheduled for the end of October, is expected to run to almost 1,000 pages. Yet even many members of the commission are dissatisfied with its findings.
“We have had far too little continuous and well-founded discussion,” criticizes Green MP Tabea Rößner. The result is often “a minimal consensus that does not get us much further”. Instead of a trend-setting report, says FDP politician Mario Brandenburg, something like a “basic pension of innovation” has emerged. In parts of the report, the commission “lost sight of the core of the AI topic”, regrets expert Tina Klüwer, who heads the Berlin-based company Parlamind.
The commission’s findings are symptomatic of the many obstacles slowing AI progress in Germany: politicians focus primarily on the theoretical risks rather than the practical opportunities of the new technology; AI research funds that have already been approved are going unspent; science and business communicate too little; and companies prefer to keep their data to themselves rather than share it in the interests of AI progress.
While U.S. and Chinese providers are rapidly developing sensitive applications such as automated facial recognition, German politicians are wearing themselves out in debate – and unsettling entrepreneurs in the process. From the IT industry association Bitkom to the Federal Association of German Start-ups and the German Economic Institute (IW), associations and experts are warning against overregulation.
Marco Junk, Managing Director of the Federal Association of the Digital Economy (BVDW), is one of them: “As a country of machine builders, we must realize that in the future, value creation will no longer lie solely in the machines themselves, but in the AI-based services running on and with our machines.” What is now being decided, he says, is “whether in future we will merely be a supplier to the providers of AI services with our machines, or whether we will integrate these services ourselves”.
On Tuesday, the IW and the BVDW will present a first-of-its-kind report on the use of artificial intelligence in the German economy and society. The “AI Monitor”, which is to track the technology’s development and framework conditions annually from now on, is already available to Handelsblatt.
“The result is alarming,” says BVDW President Matthias Wahl. Companies are demonstrably pushing ahead with the use of learning systems. But: “The political framework conditions for the key technology of the 21st century have worsened compared to 2019.”
Ethical standards could help
The main reason for the negative trend in the AI Monitor is a decline in collaboration between scientists and companies. Although the number of computer science students and scientific publications is growing, for example on neural networks and deep learning, the findings rarely make their way into practical application.
The authors, led by IW economist Vera Demary, identify three central obstacles: unresolved legal questions around liability, intellectual property and ethical standards. These open questions, they observe, breed uncertainty. “Many small and medium-sized companies do not know who owns the data in their company and what they are allowed to do with it,” says Demary.
She gives examples: it has not yet been regulated whether the vehicle manufacturer, the software provider or the sensor supplier is liable in the case of a self-driving car. And: “If artificial intelligence is used to generate art, such as pictures or pieces of music, the parties involved have to regulate contractually who receives how much of the proceeds.”
Ethical standards could also help, because “there is still a lot of skepticism in society,” says Demary. At the same time, she warns: “We have to be careful now that there is no overregulation.”