Last week, the French Competition Authority fined Google for training its artificial intelligence model on editorial content without prior permission. The New York Times, meanwhile, has sued OpenAI for allegedly training GPT-4 (the engine behind ChatGPT) on its articles.

As humans, we must be able to protect the digital Socrates and Galileos of the future without making them drink the hemlock prematurely, writes Andre Vltchek. He argues that we are often unaware of the double standard we apply when it comes to Artificial Intelligence. We demand that AI be bias-free. We are willing to accept that a human makes mistakes and tolerate it, but we aren't willing to accept that a machine makes them, he says. The same is true for cars, he writes. Instead of demanding utopian neutrality, we should consider whether there could be different AIs with different thoughts and points of view, Vltchek says. After all, differences in ideas and opinions are fundamental to our progress as a society.