Flying has become safer and safer in the past few decades. Human error, however, continues to be a key factor. If we are to learn more quickly from our failures, it is important that aviation specialists can investigate incidents and accidents more efficiently. Researchers at the Amsterdam Business School offer help by applying AI models to large amounts of information.

Georgios Sidiropoulos is doing his PhD research at the Amsterdam Business School. He strongly believes his work could make the investigation of incidents more efficient. He is participating in a project called SAFEMODE, an initiative aimed at safety in aviation and the maritime sector. Sidiropoulos focuses on techniques that make the information gleaned from large numbers of incident reports available to investigators more quickly. 'It’s interesting to be active in a domain where the use of machine learning techniques cannot yet be taken for granted.'

Natural Language Processing

Sidiropoulos obtained a bachelor’s degree in computer science from the Athens University of Economics and Business. He then worked for the Athena research centre in the same city, where he primarily addressed the subject of digital archiving. For six years now, he has been living in Amsterdam, where he completed a master’s degree in AI. Currently a PhD student, he is writing a thesis about improving the accessibility of information using what are known as 'information retrieval models' for large volumes of structured and unstructured data. He has examined a variety of knowledge bases, including Freebase, which is part of Wikidata. 'It’s a huge challenge for users to retrieve the right information with a query in these enormous data sets. AI can be of use here. We work with deep-learning models, which are complex enough to capture all of this information. An example is BERT, an advanced model for natural language processing.'
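As a rough illustration of the kind of information retrieval Sidiropoulos describes, the sketch below ranks a handful of toy 'incident report' snippets against a query using a small pre-trained BERT-style sentence encoder. The model name, the example texts and the use of the sentence-transformers library are illustrative assumptions, not the models or data used in the SAFEMODE project.

```python
# Illustrative sketch only: ranking documents against a query with a
# BERT-style encoder. Model name and texts are placeholders, not the
# SAFEMODE project's actual models or data.
from sentence_transformers import SentenceTransformer, util

# Load a small pre-trained sentence encoder (assumed installed via pip).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy "incident report" snippets standing in for a large document collection.
documents = [
    "The crew reported severe fatigue after exceeding duty-time limits.",
    "Engine number two showed abnormal vibration during climb.",
    "Radio communication with the tower was lost for two minutes.",
]
query = "Was pilot fatigue a factor in the incident?"

# Encode the query and documents into dense vectors, then rank by cosine similarity.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

for score, doc in sorted(zip(scores.tolist(), documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```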

The Greek researcher applies the knowledge gained from this research to his work for the SAFEMODE project. The most important goal of the project is the development of a new framework that helps identify, collect and assess data about so-called 'human factors'. 'Was a pilot extremely tired during an incident, for instance? Crews very often put in more hours than can actually be justified. But it’s also possible that an engine malfunctioned or communication broke down.' Observations by researchers working on SAFEMODE should help improve the safety of systems and procedures in aviation and shipping. The project is a collaboration involving various European universities, large companies and knowledge centres such as the Koninklijk Nederlands Lucht- en Ruimtevaartcentrum or NLR (Netherlands Aerospace Centre).

Applying knowledge where it is needed most

'We assist specialists who investigate incidents and accidents in building a system that helps them arrive more quickly at an initial assessment of the causes,' Sidiropoulos explains. 'They study reports and statements from different sources, such as the crew involved in an incident. There’s a lot of data available, but it’s often rough, unstructured and not labelled. Our focus is on reducing their workload.' What the ABS researcher finds really appealing is not only that the application of machine learning in this field is in its infancy, but that the problems he works on come from the 'real' world. 'As an academic, you sometimes tackle problems you never come across in real life. This project is an opportunity to apply knowledge where it’s needed most.'

A challenging aspect of the research was that Sidiropoulos had to brush up on his knowledge of aviation. 'I had to learn a lot about documentation procedures and methods. Fortunately, we often work together with NLR.' It quickly transpired that the collaboration between academics and aviation specialists brought with it a culture clash, partly because of a lack of knowledge about machine learning. 'When we set out, there were different expectations, especially of what we could achieve with the information we got. As an example, they might give us 100 reports, but you can’t use that to build a model. We noticed that we had to clarify the limits of what was possible with the data we received. We also had to learn better ways of presenting our research findings. This was essential because, at first, these were not perceived as very convincing.'

Research ready in summer of 2022

The project is now more or less finished. 'The initial plan was to be ready in May 2022, but the project has been extended because of the COVID-19 pandemic. Our involvement in SAFEMODE will now end in the summer of 2022. Our final contribution will be delivered in June.' Sidiropoulos has already moved on to his next study. 'We’re trying to make information retrieval models more robust. So far, these models have mostly been tested and evaluated on clean data sets. The queries used in such studies also never contain typos or other mistakes, such as an incorrect formulation. A model’s performance is considerably poorer when these do occur. That’s something we want to keep working on.'
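As a minimal sketch of the kind of robustness test described here (our own illustration, not the project’s code), the snippet below injects character-level typos into a query; scoring the clean and noisy versions against the same document collection, for instance with the encoder sketched earlier, shows how much retrieval quality degrades with noisy input.

```python
# Minimal sketch: perturb a query with random adjacent-character swaps to
# probe how sensitive a retrieval model is to noisy, real-world input.
import random

def add_typos(query: str, n_typos: int = 2, seed: int = 0) -> str:
    """Return a copy of `query` with `n_typos` random adjacent-character swaps."""
    rng = random.Random(seed)
    chars = list(query)
    for _ in range(n_typos):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

clean = "Was pilot fatigue a factor in the incident?"
noisy = add_typos(clean)
print(clean)
print(noisy)
# Ranking documents against both versions and comparing the results gives a
# simple measure of how robust the retrieval model is to typos.
```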