Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Dynamic Decision Making Lab Yearly Review

2 minute read

Published:

The Dynamic Decision Making Laboratory has issued its yearly review for 2024, which summarizes the work from the lab during the past year. It is available online here. I have copied my own section of the yearly review below, and encourage people interested in my work to check out the annual summary. It will also be of interest to those who would like a sense of the work that goes into running a relatively large research lab in the decision sciences.

Decisions From Description and Experience

8 minute read

Published:

One of the interesting lines of research that I have recently become more involved with is the distinction between making decisions based on description and making them based on experience. In this research, these are referred to as Decisions from Experience (DfE) and Decisions from Description (DfD). In the past, I have studied differences and similarities between learning and decision making, which is a similar area of research in cognitive psychology. In the future I will be exploring whether these differences between decisions made from description and experience also occur in large language models, but this post will focus on the background of these differences.

Multi-University Research Initiative

26 minute read

Published:

This blog post contains my unedited notes from attending the Multi-University Research Initiative Program Review this year. This was an interesting meeting where I presented my work on cognitive models applied to large language models to improve their usefulness in educational settings, specifically for phishing email education.

The Future of Cyber Deception

17 minute read

Published:

Workshop on the Future of Cyber Deception, presented by the Army Research Office.

Applying LLMs in Cognitive Models

4 minute read

Published:

Two submissions our lab has been working on were recently accepted to the AAAI 2023 Fall Symposium Series on the Integration of Cognitive Architectures and Generative Models. This was an exciting series for me to see, since I have written multiple blog posts on this site about LLMs but hadn’t thought deeply about how to apply them to cognitive models. LLMs have definitely been an interest of mine, but I hadn’t had a good reason to push for their integration into the models that we work with.

IEEE S&P 2023

22 minute read

Published:

The following are the notes I took during the IEEE European Symposium on Security and Privacy. It was a great experience and I tried to attend every lecture and find the papers that were discussed during them. Some of the

Security and Human Behaviour 2023

9 minute read

Published:

The following are my notes taken during the SHB workshop. I have included the names of the presenters, but it should be noted that these are my own general thoughts and reactions while attending the presentations, and not necessarily the views and opinions of the presenters.

Learning to Learn

5 minute read

Published:

One interesting development in my research interests is the unexpected focus on ‘learning to learn’ in many of my recent papers and projects. During my PhD, I shied away from the more complex topics in cognitive modeling, such as meta-learning. This decision was motivated in part by my interest in understanding the most basic and fundamental aspects of human and machine learning, and in part by the increased difficulty and potential problems associated with designing more complex tasks.

Job Security

7 minute read

Published:

I recently completed the last meeting with a small group of mentees from my alma mater, UBC, in a program put together to give undergraduate students at different points in their careers some insight into life after undergrad. I think it was a great experience for me, as I had little previous experience as a direct mentor, and hopefully the mentees felt the same way. The remainder of this blog post collects some reflections from my experience as a mentor and the life lessons I thought of and tried to discuss with them.

Sentience and World Representations in LLMs

4 minute read

Published:

In my last blog post I discussed a few of the presentations given at NeurIPS 2022 that I found particularly interesting. I didn’t get a chance to write much on another presentation given by David Chalmers, a condensed version of his earlier talk “Are Large Language Models Sentient?”. In this talk Chalmers discussed several possible positions on the question of sentience in large language models, systematically looking at how each position would define sentience and whether or not it is possible for LLMs like ChatGPT to exhibit those properties.

Narrowing The Gap With LLMs

7 minute read

Published:

This will be my third blog post in a row (see the first, and second) on the topic of large language models. While this area of research has been in the news a great deal recently, it is not exactly my area. However, there was a talk at the University of Pittsburgh philosophy department, given by professor Colin Allen, that was at least partially presented as a refutation of the position of David Chalmers that I discussed previously. Since I talked about Chalmers’ ideas in my previous blog post, I thought this would serve as a nice contrast to those ideas, and hopefully wrap up my thinking on the topic for the time being.

NeurIPS 2022

5 minute read

Published:

This year I attended the Conference on Neural Information Processing Systems for the first time to present a poster alongside a paper that was accepted at the workshop on Information Theoretic Principles in Cognitive Systems.

PhD Defense

5 minute read

Published:

It has been a while since I made a blog post, mostly because I have been working hard on my PhD dissertation and practicing for my defense, a practice run-through of which you can watch at this link. Since my last post I have defended my dissertation, and I am now in the final stages of my work at Rensselaer. I have had an amazing time working at RPI with my mentor Chris R. Sims, as well as working in collaboration with the Reinforcement Learning research group at IBM, particularly with my mentor Tim Klinger. The next big step for me is starting my postdoctoral fellowship research position working with Coty Gonzalez at Carnegie Mellon in December of this year.

What’s in a Representation?

4 minute read

Published:

As I begin the final stages of my PhD thesis, I was recently surprised by one topic that jumped out as being more relevant than I had originally thought when I began work on the project. Broadly, the topic is in the realm of cognitive modelling, specifically modelling and predicting the behaviour of humans performing a learning and decision making task based on visual information. Traditional approaches to predicting how humans learn and make decisions have abstracted away much of the complexity of the task presented to humans, modelling their behaviour as some simple function of the features of the task, such as a soft-max distribution over the utilities associated with the options presented.
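The soft-max choice rule mentioned here can be sketched in a few lines. This is an illustrative sketch only; the utilities and temperature values below are made up for the example, not parameters from any model discussed in the post.

```python
import math

def softmax_choice_probs(utilities, temperature=1.0):
    """Convert option utilities into choice probabilities via soft-max.

    Lower temperature makes choices more deterministic (the highest-utility
    option dominates); higher temperature makes choices more random.
    """
    # Subtract the max utility before exponentiating, for numerical stability.
    m = max(utilities)
    exps = [math.exp((u - m) / temperature) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Example: three options with utilities 1.0, 2.0, and 3.0.
probs = softmax_choice_probs([1.0, 2.0, 3.0])
```

Here the probabilities sum to one and increase monotonically with utility, which is what makes the soft-max a convenient stand-in for noisy utility-based choice.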

Visual Sciences Society Conference 2022

4 minute read

Published:

Earlier this year I attended the Visual Sciences Society 2022 conference, where I gave a talk titled “A Beta-Variational Auto-Encoder Model of Human Visual Representation Formation in Utility-Based Learning”. The full abstract I submitted is copied at the bottom of this blog post if you would like to read it.

Beyond Reward

6 minute read

Published:

In a previous post I mentioned an interesting paper that claimed much of human intelligence could be viewed through the lens of reward maximization; you can see that blog post here. This point of view may not be the most common among either psychologists or computer scientists, but it would be great news for reinforcement learning researchers who are interested in making very smart systems by training them to maximize reward. However, I previously discussed the training aspect of reward maximization and what I believe it can achieve. Instead of discussing the ways that we can train artificial intelligence, specifically reinforcement learning agents, in this blog post I am going to talk about the assessment of RL agents.

RL Web Security

6 minute read

Published:

Since my partner is a web security expert, I often end up having long discussions about internet security, even though my own knowledge and research are in a very different area. That has recently gotten us thinking about the intersection of reinforcement learning and web security. Though at first these may seem like two disparate areas, as anyone who has talked at length with another person about their specific research area will know, many commonalities can eventually be found between the two.

Wordle with Reinforcement Learning

6 minute read

Published:

This post will briefly discuss the possibility of constructing a reinforcement learning algorithm to play the game Wordle. Language-based applications of reinforcement learning are somewhat common, though perhaps not the first examples RL researchers think of. However, Wordle is a single-player game with a discrete set of actions and states, the proverbial bread and butter of RL algorithms such as one of the first successful game players, TD-gammon, which played backgammon.
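To make the "discrete states and actions" point concrete, here is a minimal tabular Q-learning sketch on a toy one-step guessing game. The game, hyperparameters, and reward scheme are stand-ins I chose purely for illustration; this is not an actual Wordle environment, whose state and action spaces are far larger.

```python
import random
from collections import defaultdict

# Toy environment: guess the hidden letter in one step.
# (A stand-in for a discrete-action game, not real Wordle.)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = ["a", "b", "c"]
HIDDEN = "b"

Q = defaultdict(float)  # maps (state, action) -> estimated value

def step(state, action):
    # Reward 1 for guessing the hidden letter, 0 otherwise; episode ends.
    return 1.0 if action == HIDDEN else 0.0

def choose(state):
    # Epsilon-greedy selection over the discrete action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for episode in range(200):
    state = "start"
    action = choose(state)
    reward = step(state, action)
    # One-step Q-learning update; the game is terminal, so there is
    # no bootstrapped next-state term.
    Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

best = max(ACTIONS, key=lambda a: Q[("start", a)])
```

A real Wordle agent would need a state encoding the guess/feedback history and an action per legal five-letter word, but the same update rule applies; the tabular approach is exactly what breaks down as those spaces grow.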

Is Reward Enough

5 minute read

Published:

In this post I provide a review and opinion on the paper “Reward Is Enough” by D Silver, S Singh, D Precup, and RS Sutton. In this work, the authors provide a broad perspective on reinforcement learning research and put forward the opinion that much of the behavior that interests cognitive science and artificial intelligence researchers can be viewed in relation to reward. Specifically, they propose that many cognitive faculties such as perception, language, generalization, imitation, and even general intelligence can be achieved through reward maximization and experience in an environment. Alongside these claims they describe a hypothesis essentially stated in the short title: that reward is enough to learn these types of complex behaviors. The following figure, borrowed from the paper, describes several phenomena which could hypothetically be trained through reward-based, reinforcement-style learning.

New Year 2022

3 minute read

Published:

Rather than look back at previous research, as the earlier posts on this blog have done, this post looks forward to my hopes for 2022 and new research ideas I am interested in. Firstly, the major plans for this year include completing the website hosting my thesis project, submitting a paper based on my work in Theory of Mind for reinforcement learning, and completing my PhD thesis. Aside from that, on a more personal level, I will be taking some time this year to look for possible post-doc positions or research-centric positions in other areas. As part of that, I hope to continue with this blog and go back to some of my previous posts to add a bit of information. Additionally, I hope to expand my background knowledge of other areas of cognitive science, psychology, and machine learning.

Masters Thesis 2020

3 minute read

Published:

This post is another retrospective, but instead of a conference or journal paper it looks at my masters thesis, titled “Modelling Learning and Decision Making Under Information Processing Constraints”. This blog post will go through the beginning stages of the project and how the focus of the thesis narrowed into what it eventually became.

AAAI 2021

2 minute read

Published:

This is the second post in a series of retrospectives on previous work that shaped my PhD and relates to my future research goals. If you would like to read the paper, you can find it on my ResearchGate.

Predicting Human Choice

2 minute read

Published:

This is the first post in a series of retrospectives looking back at papers and conferences I have attended. Now that I am entering the final year of my PhD, I am beginning this chronicle of the projects, papers, and conferences that influenced my time as a PhD student.

cv

portfolio

publications

Modeling Capacity-Limited Decision Making Using a Variational Autoencoder

Published in Proceedings of the Annual Meeting of the Cognitive Science Society, 2021

Use Google Scholar for full citation

Recommended citation: Tyler Malloy, Tim Klinger, Miao Liu, Gerald Tesauro, Matthew Riemer, Chris Sims, "Modeling Capacity-Limited Decision Making Using a Variational Autoencoder." In Proceedings of the Annual Meeting of the Cognitive Science Society, 2021.

Modelling Visual Decision Making Using a Variational Autoencoder

Published in Proceedings of the 19th International Conference on Cognitive Modelling, 2021

Use Google Scholar for full citation

Recommended citation: Tyler Malloy, Chris Sims, "Modelling Visual Decision Making Using a Variational Autoencoder." In Proceedings of the 19th International Conference on Cognitive Modelling, 2021.

Learning in Factored Domains with Information-Constrained Visual Representations

Published in NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems, 2022

Use Google Scholar for full citation

Recommended citation: Tyler Malloy, Chris Sims, Tim Klinger, Matthew Riemer, Miao Liu, Gerald Tesauro, "Learning in Factored Domains with Information-Constrained Visual Representations." In NeurIPS 2022 Workshop on Information-Theoretic Principles in Cognitive Systems, 2022.

Accounting for Transfer of Learning in Human Behavior Models

Published in Proceedings of Human Computation and Crowdsourcing, 2023

Use Google Scholar for full citation

Recommended citation: Tyler Malloy, Du Yinuo, Fang Fei, Gonzalez Cleotilde, "Accounting for Transfer of Learning in Human Behavior Models." In the proceedings of Human Computation and Crowdsourcing, 2023.

Generative Environment-Representation Instance-Based Learning: A Cognitive Model

Published in AAAI Symposium on Integration of Cognitive Architectures and Generative Models, 2023

Use Google Scholar for full citation

Recommended citation: Tyler Malloy, Yinuo Du, Fei Fang, Cleotilde Gonzalez, "Generative Environment-Representation Instance-Based Learning: A Cognitive Model." In AAAI Symposium on Integration of Cognitive Architectures and Generative Models, 2023.

talks

teaching

year-archive