Generative AI applications such as ChatGPT, GitHub Copilot, Bard, and Midjourney have created worldwide buzz and excitement due to their ease of use, broad utility, and perceived capabilities. This talk will introduce two ongoing research projects, both first attempts to understand the impact of ChatGPT on analytics.
In the first part, I will introduce ChatSQC, an innovative chatbot system that combines the power of OpenAI’s Large Language Models (LLMs) with a specific knowledge base in Statistical Quality Control (SQC). Our research focuses on enhancing LLMs with specific SQC references, shedding light on how data preprocessing parameters and LLM selection affect the quality of the generated responses.
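The abstract does not detail how ChatSQC is implemented; the following is only a minimal sketch of the general idea of grounding an LLM in SQC reference material, assuming a simple TF-IDF retriever over reference passages and OpenAI's chat completions API. The corpus, chunking, and model name are illustrative assumptions, not the actual ChatSQC configuration.

```python
# Illustrative sketch only: grounding an LLM answer in SQC reference text.
# Not the actual ChatSQC implementation; model name and passages are placeholders.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical SQC reference passages (in practice, chunks of SQC texts).
sqc_chunks = [
    "A Shewhart X-bar chart signals when a point falls outside the 3-sigma limits.",
    "The capability index Cpk compares the specification spread to the process spread.",
    "An EWMA chart weights recent observations more heavily than older ones.",
]

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k reference chunks most similar to the question (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer()
    chunk_matrix = vectorizer.fit_transform(chunks)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, chunk_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in top]

def answer(question: str) -> str:
    """Ask the LLM, constraining it to the retrieved SQC context."""
    context = "\n".join(retrieve(question, sqc_chunks))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Answer using only the provided SQC context; say so if it is insufficient."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("When does an X-bar chart signal an out-of-control condition?"))
```

In such a setup, the choices highlighted in the abstract, such as how the reference material is preprocessed into chunks and which LLM is queried, directly shape the quality of the grounded responses.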
In the second part, I will share ongoing work on defining quality metrics to evaluate Generative AI’s analytics capabilities. Currently, Generative AI systems are evaluated mainly on the design and training of the LLMs that generate output in various forms depending on the user’s request. The models are not, however, universally evaluated on the quality of that output, that is, its fitness for use by the user. We therefore define user-oriented quality metrics and evaluate, from a user perspective, the output LLMs generate across a variety of analytics tasks.
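The abstract does not specify the metrics themselves; purely as an illustration of the fitness-for-use idea, a user-oriented score might aggregate user-facing criteria into a weighted rubric, as in the sketch below. The criteria names, weights, and scale are assumptions, not the metrics presented in the talk.

```python
# Illustrative sketch of a user-oriented "fitness for use" score for an LLM answer.
# Criteria, weights, and scale are assumptions, not the metrics from the talk.
from dataclasses import dataclass

@dataclass
class RubricScore:
    """Ratings on a 1-5 scale, judged by the user of the analytics output."""
    correctness: int    # is the analysis technically right?
    completeness: int   # does it cover what the user asked for?
    actionability: int  # can the user act on it directly?

    def fitness_for_use(self, weights=(0.5, 0.3, 0.2)) -> float:
        """Weighted average of the ratings, rescaled to [0, 1]."""
        parts = (self.correctness, self.completeness, self.actionability)
        return sum(w * p for w, p in zip(weights, parts)) / 5

# Example: scoring one LLM-generated answer to an analytics task.
score = RubricScore(correctness=4, completeness=3, actionability=5)
print(f"Fitness for use: {score.fitness_for_use():.2f}")  # 0.78
```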
This seminar will be organised in a hybrid setup. If you are interested in joining this seminar, please send an email to the secretariat of Amsterdam Business School at secbs-abs@uva.nl.