There is currently a lively debate among legal experts, economists and policy-makers on the topic of algorithmic collusion. This occurs when self-learning tools such as pricing algorithms work together - or collude - to set prices that undermine competition and potentially violate competition law.

This emerging issue is what prompted Amsterdam Business School researchers Janusz Meylahn and Arnoud den Boer to conduct research and submit their paper Learning to collude in a pricing duopoly for publication in a top-ranked journal.

Janusz Meylahn: ‘The debate, more specifically, is about whether it’s possible for pricing algorithms to learn to set prices that are higher than a competitive price when playing against one another. If that’s the case, it could be detrimental to consumer welfare. Suppose the pricing algorithms of three competing retailers - Amazon, Bol.com and Coolblue - “learn” without human intervention that it is more profitable for each of them to maximise their joint profit from selling an iPhone 12 than for each to maximise its own profit individually. The result is that consumers pay too much for the product.’
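The gap between individual and joint profit maximisation can be illustrated with a toy duopoly model. The linear demand function and all numbers below are our own illustration, not taken from the researchers’ paper:

```python
# Toy symmetric duopoly, purely illustrative: firm i sells
# q_i = a - b*p_i + c*p_j units at price p_i, with zero cost.
a, b, c = 10.0, 2.0, 1.0
grid = [i / 100 for i in range(1, 1001)]  # candidate prices 0.01 .. 10.00

def profit(p_own, p_other):
    return p_own * max(0.0, a - b * p_own + c * p_other)

# Competitive price: each firm best-responds to the other until prices settle.
p_comp = 1.0
for _ in range(50):
    p_comp = max(grid, key=lambda x: profit(x, p_comp))

# Collusive price: both firms charge the single price maximising joint profit.
p_coll = max(grid, key=lambda x: 2 * profit(x, x))

print(p_comp, p_coll)  # the collusive price is well above the competitive one
```

In this toy model the competitive (Nash) price settles near a/(2b − c) ≈ 3.33, while the joint-profit price is a/(2(b − c)) = 5.00: every consumer pays roughly 50% more when both firms maximise joint profit.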

Guaranteed collusion

There is little empirical evidence of algorithmic collusion at the moment because it’s difficult to detect. Most examples are based on simulations in a virtual environment, but the findings indicate that collusion can occur in a duopoly (a market with 2 competing firms). The key word here is ‘can’, and this is where Meylahn and Den Boer’s work comes in.


They have designed an algorithm for a duopoly that guarantees collusion profitable to both parties. Meylahn explains: ‘The key to our algorithm is that it is actually composed of 2 algorithms. The first estimates the competitive price and the second estimates the collusive price (maximising the joint profit). Then, the algorithm decides which of the 2 performed better and continues with that algorithm for some time. This process is repeated, but the amount of time spent using each of the algorithms is chosen in such a way as to optimise the performance. This “guarantee of collusion” makes it more probable that firms will use such algorithms in the future.’
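The compare-and-continue scheme Meylahn describes can be sketched roughly as follows. The price levels, the profit function and the doubling schedule are illustrative assumptions of ours, not the authors’ actual algorithm:

```python
def profit(p_mine, p_other):
    """Toy linear-demand profit per period (illustrative, not the paper's model)."""
    return p_mine * max(0.0, 10 - 2 * p_mine + p_other)

# Prices produced by the two sub-algorithms: one estimates the competitive
# price, the other the joint-profit (collusive) price.
P_COMPETITIVE, P_COLLUSIVE = 3.33, 5.00

# Step 1: both firms try each candidate price and observe per-period profit.
profit_comp = profit(P_COMPETITIVE, P_COMPETITIVE)  # both price competitively
profit_coll = profit(P_COLLUSIVE, P_COLLUSIVE)      # both price collusively

# Step 2: continue with whichever price performed better. In the full scheme
# this compare-and-continue cycle repeats, with the time split chosen to
# optimise performance; here each phase simply doubles in length.
chosen = P_COLLUSIVE if profit_coll > profit_comp else P_COMPETITIVE
phase_length, history = 1, []
for cycle in range(5):
    history.extend([chosen] * phase_length)
    phase_length *= 2

print(chosen)  # both firms settle on the joint-profit price
```

Because both firms earn more per period when both charge the joint-profit price, the comparison step tips each of them toward it, and the growing phase lengths mean that almost all trading periods end up at the collusive price.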

Marc Salomon, Dean of the Amsterdam Business School, is very enthusiastic about this new type of ground-breaking research on algorithmic development. ‘I hope that in the future we can help national and international regulators and competition authorities develop algorithms and tools to protect consumers against this kind of malpractice, because it may ultimately harm consumers who will end up paying too much.’

Legal dilemmas

What makes their algorithm particularly interesting is that it could be legally employed by firms today. With machine learning and artificial intelligence playing an increasingly important role in people’s lives, are these developments something we should be concerned about?

The topic raises all kinds of fascinating questions about the legality of algorithmic collusion. The researchers’ work is relevant to policy makers and legal experts alike in terms of anti-trust legislation. For example, if algorithmic collusion results in joint profit maximisation (both firms benefit), should this be made illegal? But this gives rise to other issues: proof of communication is needed to determine liability. Can this proof be found? Have the algorithms simply spontaneously learned to collude? Who should be held responsible if this is the case?

Now that Meylahn and Den Boer have shown that algorithmic collusion is theoretically possible, more research is needed to analyse market data and find ways to detect it in practice. After recently presenting their work at 2 international conferences (INFORMS 2020 in November and the Algorithmic Collusion: working mechanisms conference in December), their next step will be to expand the results to more than 2 firms in a market.