Dynamic Decision Making Lab Yearly Review

The Dynamic Decision Making Laboratory has issued its yearly review for 2024, summarizing the lab’s work over the past year. It is available online here. I have copied my own section of the review below, and encourage anyone interested in my work to check out the full annual summary. It should also interest those who would like a sense of the work that goes into running a relatively large research lab in the decision sciences.

This was my second year as a postdoctoral researcher in the DDMLab. At the beginning of this year, I created and ran an undergraduate course called ‘Human and Machine Decisions from Experience’, which taught students to use the Python programming language and the PyIBL package for Instance-Based Learning (IBL). This was a great opportunity to share the PyIBL library with students interested in decision science research. For the final assignment of the course, students built IBL models to predict the learning and decision making of participants in a cybersecurity task, identifying emails as either legitimate or phishing. I hope to teach this class again this fall, during my third and final year in the lab.
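For readers unfamiliar with PyIBL, a minimal sketch of the agent’s choose/respond cycle looks something like the following. This is not the course assignment itself: the option labels, reward scheme, and simulated ground truth are illustrative stand-ins, and a full model of the phishing task would also attach email features as instance attributes.

```python
# Minimal sketch of PyIBL's instance-based learning loop.
# The task setup below is a hypothetical stand-in, not the actual course materials.
import random
from pyibl import Agent

agent = Agent(name="PhishingClassifier")
agent.default_utility = 5.0  # optimistic prior utility so the agent explores early on

N_TRIALS = 100
correct = 0
for trial in range(N_TRIALS):
    truth = random.choice(["phishing", "legitimate"])    # stand-in for a real email stimulus
    decision = agent.choose(["phishing", "legitimate"])   # IBL blends past instances to pick an option
    reward = 1 if decision == truth else 0                # feedback is stored as a new instance
    agent.respond(reward)
    correct += reward

print(f"Accuracy over {N_TRIALS} simulated trials: {correct / N_TRIALS:.2f}")
```

The core idea is that each choose/respond pair stores an instance in memory, and future choices are made by blending the remembered outcomes of similar past instances.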

Our research integrating cognitive models with artificial intelligence methods has expanded thanks to a grant from the Microsoft Accelerating Foundation Models Research Program, which included $20,000 in Azure credits that we could use to prompt GPT-4 and DALL-E, or to run Azure servers and services. We used these resources to generate phishing emails and test people’s ability to tell whether an email generated by a large language model is safe or dangerous. This research gave us valuable insight into the potential risks associated with the misuse of AI methods, and into how best to train the public to be aware of these risks. The grant has already contributed to a publication in Frontiers in Psychology and to several submissions currently under review elsewhere.

In May I attended the Multi-University Research Initiative (MURI) meeting at the CMU Silicon Valley campus, where I presented some of our recent work integrating AI methods and cognitive models, as well as our work in cybersecurity more broadly. There I met with other researchers investigating various social and technological applications of cognitive modeling and artificial intelligence. I received valuable feedback on our planned study of participants’ ability to identify phishing emails generated by LLMs, as well as on our work in the CybORG environment with the Individual and Teaming Defense Game tasks. I plan to continue this research into real-world applications of cognitive modeling and AI in cybersecurity.