Large Language Models (LLMs) as decision makers: A student-run workshop at the STAI CDT Annual Retreat 2024

29th August 2024 | Student News

During the STAI CDT Annual Retreat, STAI CDT PhD students had the opportunity to engage in discussions led by their fellow students. Peter Tisnikar and Lennart Wachowiak led a workshop on Large Language Models (LLMs) as decision makers.

Peter and Lennart organised the workshop because the hype surrounding LLMs in recent years has been hard to overlook. Over the past two years, these text-generation tools have made their way from research labs around the world into the public eye and into industry. At the same time, LLMs are plagued by a plethora of issues: they are prone to making up facts (confabulations) and defending them adamantly, fail at simple reasoning tasks, propagate biases, and incur extreme financial and energy costs during training. The workshop was an opportunity for CDT students to discuss the use, dangers, and potential of LLMs in a variety of decision-making applications.

During the session, students debated the contexts in which LLMs should and shouldn't be used. Interestingly, students didn't always agree: some, for instance, strictly opposed any use in law because of confabulations and weak reasoning capabilities, while others saw potential value in using LLMs to access and summarise documents.

LLMs are now studied and used across many disciplines, and the workshop reflected this by having students discuss a wide range of case studies in smaller groups. Topics ranged from LLMs making strategic, domain-specific decisions in areas such as law, healthcare, and finance to LLMs controlling the behaviour of robots and video game agents. As part of the workshop, Peter and Lennart also quantitatively surveyed the students' opinions on LLMs.

As seen in the image below, most students thought that LLMs are currently overhyped, while at the same time, many students would like to increase their engagement in LLM research. Looking at LLMs’ biggest issues, a consensus seemed to form around their inability to reason and plan, as well as their tendency to confabulate.

Workshop organisers Peter and Lennart are conducting research on LLMs as part of their PhD studies. Lennart recently published a paper on the use of vision–language models in human–robot interaction at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), as well as a paper evaluating the spatial language understanding capabilities of LLMs at ACL. Peter is interested in using LLMs as "world models" in human–robot interaction scenarios, allowing robots to better understand their human partners.