IEEE EuroS&P 2023

The following are the notes I took during the IEEE European Symposium on Security and Privacy. It was a great experience, and I tried to attend every lecture and track down the papers discussed in them.

Keynote: Bart Preneel (KU Leuven)

Discussion on the goals and realities of privacy policy implementations with examples from history of the 3 major ‘privacy wars’, their causes and impacts on public policy of privacy and security.

Session: Phishing/fraud/scams

Forward Pass: On the Security Implications of Email Forwarding Mechanism and Policy
Enze Liu (University of California, San Diego), Gautam Akiwate (Stanford University), Mattijs Jonker (University of Twente), Ariana Mirian (University of California, San Diego), Grant Ho (University of California, San Diego), Geoffrey M. Voelker (University of California, San Diego), Stefan Savage (University of California, San Diego)

Email forwarding undermines security mechanisms like SPF that are designed to ensure emails are sent from authorized servers. Additionally, SPF data is currently often sourced from aggregated lists, such as those used by Outlook.

  • Open forwarding: forwarding emails to an address without verifying that you own it.
  • Whitelisting: spam filters edited to allow emails from certain domains. This interacts badly with forwarding: if you spoof an email as coming from a whitelisted domain, this can be combined with email forwarding to deliver mail spoofing arbitrary domains.

Attack description: whitelist state.gov in your Outlook account, spoof an email as coming from state.gov and send it to yourself, then open-forward the message to the target address so that it passes SPF.
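The forwarding weakness above can be sketched with a toy SPF check (illustrative only: real SPF resolves DNS TXT records and handles include/redirect mechanisms, and the domains and IPs here are hypothetical):

```python
# Hypothetical published SPF data: each domain authorizes the IPs that
# may send mail on its behalf.
SPF_RECORDS = {
    "state.gov": {"203.0.113.5"},
    "outlook.com": {"198.51.100.7"},
}

def spf_check(envelope_from_domain: str, connecting_ip: str) -> str:
    allowed = SPF_RECORDS.get(envelope_from_domain, set())
    return "pass" if connecting_ip in allowed else "fail"

# A direct spoof from an attacker-controlled server fails SPF, but once
# the message is re-sent by a forwarding provider, the receiver judges
# the forwarder's (legitimate) IP instead of the attacker's.
```

The point of the attack is precisely that the last hop the victim sees is a trusted provider's server, so the check above passes.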

Android, notify me when it is time to go phishing
Antonio Ruggia (University of Genoa), Andrea Possemato (EURECOM), Alessio Merlo (University of Genoa), Dario Nisi (EURECOM), Simone Aonzo (EURECOM)

Exploits inotify for state inference, specifically to tell when the target application has been started, so that a malicious look-alike version of the application can be launched simultaneously to collect user information.

Active Countermeasures for Email Fraud
Wentao Chen (University of Bristol), Fuzhou Wang (City University of Hong Kong), Matthew Edwards (University of Bristol)

Social engineering active defense (SEAD): degrade the attacker's ability to victimize by using social engineering against them, e.g., scam-baiting (scamming scammers, humiliating them, or wasting their time). The issue is that it requires volunteers, a much smaller community than the target population, so it doesn't scale.

Automatic scam-baiting

  • Trained on open-source scam-baiter data.
  • GPT-Neo compared against template statements.
  • Concurrent engagement of scammers by these automated systems.
  • ChatGPT performs very well.

Question: Would it be useful to detect if the scammer’s conversation is being generated by a different automatic chatter?

Session: Crypto + formal methods I

Multi-Factor Credential Hashing for Asymmetric Brute-Force Attack Resistance
Vivek Nair (UC Berkeley), Dawn Song (UC Berkeley)

MFKDF website

Hashmob: a community/competition for cracking passwords after data breaches; cracking typically occurs within 12-15 months after a breach. Salted passwords have lower cracking rates, as do adaptive hashing functions.

Bcrypt and other adaptive hashing functions slow down cracking, but the resistance is symmetric: it demands more time/resources from the legitimate user as well.
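As a rough illustration of the symmetric-cost point, here is PBKDF2 (from Python's standard library) standing in for bcrypt; the iteration count is the adaptive work factor, and raising it slows the attacker and the legitimate user by exactly the same amount:

```python
import hashlib

def slow_hash(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA256 as a stand-in for bcrypt/scrypt; the iteration
    # count is the tunable work factor.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Doubling the work factor doubles the cost of every attacker guess, but
# every legitimate login pays the same doubled cost: the resistance is
# symmetric, which is the problem MFCHF targets.
```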

MFA factors are normally verified separately, using different storage. MFCHF proposes folding the MFA information into the password hash itself, which requires incorporating TOTP codes into static hashes.
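MFCHF itself does considerably more than this, but the TOTP codes it must fold into a static hash are derived in the standard RFC 6238 way, which can be sketched with the standard library alone:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    # Standard RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    # dynamically truncated to a short numeric code.
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"
```

The moving counter is what makes incorporation into a *static* hash non-trivial, since the code changes every 30 seconds while the stored hash cannot.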

CHEX-MIX: Combining Homomorphic Encryption with Trusted Execution Environments for Oblivious Inference in the Cloud
Deepika Natarajan (University of Michigan-Ann Arbor), Andrew Loveless (University of Michigan-Ann Arbor), Wei Dai (Microsoft Research), Ron Dreslinski (University of Michigan-Ann Arbor)

Cloud-hosted ML applications, such as voice assistants, have security weaknesses: model parameters can be reverse engineered through them, and details of the model can be used to de-anonymize user data.

The adversary model assumes a rational model provider and a malicious client and cloud service provider. The goal is oblivious inference in the cloud. Homomorphic encryption alone doesn't achieve this: it enables data privacy, but clients can still craft malicious inputs. TEEs like SGX enclaves alone don't work either, because attestation would require checking the ML code for vulnerabilities.

A Generic Obfuscation Framework for Preventing ML-Attacks on Strong-PUFs through Exploitation of DRAM-PUFs
Owen Millwood (University of Sheffield), Meltem Kurt Pehlivanoğlu (Kocaeli University), Jack Miskelly (Queen’s University Belfast), Aryan Mohammadi Pasikhani (University of Sheffield), Prosanta Gope (University of Sheffield), Elif Bilge Kavun (University of Passau)

Physically unclonable functions (PUFs) should be unique and secure. ML modeling attacks try to predict new challenge-response pairs to compromise that security. Existing obfuscation techniques prevent ML modeling attacks (ML-MA) but are too specific to particular PUF designs, and attack countermeasures are too costly. A solution could be to supplement cryptography with PUFs.

Automatic verification of transparency protocols
Vincent Cheval (INRIA Paris, France), José Moreira (Valory AG, Switzerland), Mark Ryan (University of Birmingham)

Transparency protocols allow actions to be publicly monitored by observers. An example is transparent decryption, which prevents encrypted information from being secretly decrypted. ProVerif is used to enable automatic verification of the ledger's proofs, by adding lemmas/axioms/restrictions to the ledger verification.
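As a concrete example of the kind of ledger proof such protocols rely on, here is a sketch of a standard Merkle inclusion proof (this is the generic construction, not the paper's ProVerif model):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, idx):
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))  # (sibling hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    # An observer needs only the root and a logarithmic-size proof to
    # check that a leaf (e.g. a decryption event) really is on the ledger.
    acc = h(leaf)
    for sib, is_left in proof:
        acc = h(sib + acc) if is_left else h(acc + sib)
    return acc == root
```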

Session: Security and AI

Protecting Voice-Controllable Devices Against Self-Issued Voice Commands
Sergio Esposito (Royal Holloway University of London), Daniele Sgandurra (Royal Holloway University of London), Giampaolo Bella (Università degli Studi di Catania)

Electronically generated voices should be able to control smart devices, e.g., for disabled users. Current methods can't tell whether a voice command came from the device itself. The solution is a twin ANN that compares the recorded audio with the original.

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch (Vector Institute), Adam Dziedzic (University of Toronto and Vector Institute), Roei Schuster (Vector Institute), Ali Shahin Shamsabadi (Vector Institute and The Alan Turing Institute), Ilia Shumailov (Vector Institute), Nicolas Papernot (University of Toronto and Vector Institute)

Active attack on federated learning's shared models using trap weights, which induce a state where most input data has zero gradient because ReLU zeroes out negative pre-activations. Even passive, honest-but-curious querying can leave ~5% of neurons activated by only one data point, which allows that point to be extracted from the weight update. More neurons and smaller mini-batches allow more extraction. Possible mitigations include adding noisy data during training or testing.
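A minimal illustration of the ReLU mechanism behind this (toy numbers, single neuron; a real attack shapes the trap weights far more carefully):

```python
def relu_grad(x):
    # Gradient of ReLU: zero for negative pre-activations.
    return 1.0 if x > 0 else 0.0

# Hypothetical single neuron with "trap" weights chosen so that the
# pre-activation is negative for almost every input in the batch.
w, b = [0.5, -0.5], -10.0
batch = [[1.0, 2.0], [0.2, 0.1], [30.0, 1.0]]

pre = [sum(wi * xi for wi, xi in zip(w, x)) + b for x in batch]
active = [i for i, p in enumerate(pre) if relu_grad(p) > 0]

# Assuming a unit upstream gradient, the weight gradient summed over the
# whole batch equals the single activating input, so that data point can
# be read off the aggregated update directly.
grad_w = [sum(relu_grad(p) * x[j] for p, x in zip(pre, batch)) for j in range(2)]
```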

SoK: Explainable Machine Learning for Computer Security Applications
Azqa Nadeem (Delft University of Technology), Daniël Vos (Delft University of Technology), Clinton Cao (Delft University of Technology), Luca Pajola (University of Padua), Simon Dieck (Delft University of Technology), Robert Baumgartner (Delft University of Technology), Sicco Verwer (Delft University of Technology)

XAI within Cybersecurity is broken into the model user, model designer, and adversary. Explanations should be tailored to who the explanation is for, user/designer. Explanation systems can be manipulated by adversaries to produce false explanations, or utilized by them to aid in the exploitation of AI services.

User studies of XAI typically don't include model users and don't treat explanation understandability as a target for XAI. There are also cases where explanations can violate privacy. Example: a Gradient Boosting Machine trained on NetFlow data showed that more interpretable models can be used in well-defined cases without hurting performance.

Session: Side Channels and Transient Execution

SoK: Analysis of Root Causes and Defense Strategies for Attacks on Microarchitectural Optimizations
Nadja Ramhöj Holtryd (Chalmers University of Technology), Madhavan Manivannan (Chalmers University of Technology), Per Stenström (Chalmers University of Technology)

What are the similarities and differences across attacks on microarchitectural optimizations, and are there common root causes behind timing-based and side-channel attacks? Four recurring features:

  • Determinism
  • Sharing
  • Access violation
  • Information flow

The paper analyzes common attacks in terms of these features, and models attacks as a finite state machine that transitions between states through: setup, interaction, transmission, reception, and decoding.
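The phase model can be sketched as a tiny state machine over the five phases named in the talk (the phase names are from the talk; this encoding is my own):

```python
# Valid attack traces advance strictly through the five phases in order.
PHASES = ["setup", "interaction", "transmission", "reception", "decoding"]
TRANSITIONS = {a: b for a, b in zip(PHASES, PHASES[1:])}

def run(events, state="setup"):
    # Consume a trace of phase events; return the final state, or None
    # if the trace skips a phase (an invalid attack model trace).
    for e in events:
        if TRANSITIONS.get(state) == e:
            state = e
        else:
            return None
    return state
```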

MicroProfiler: Principled Side-Channel Mitigation through Microarchitectural Profiling
Marton Bognar (KU Leuven), Hans Winderix (KU Leuven), Jo Van Bulck (KU Leuven), Frank Piessens (KU Leuven)

Timing attacks can be interrupted by adding dummy nop instructions after a failed check, so that the attacker can't tell which path was taken. However, some platforms allow timing of the individual instructions, which would differ for the nop-padded path.
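A software analogue of the nop-padding idea is the classic early-exit versus balanced comparison (illustrative only; the paper's setting is interrupt-driven embedded code, not string comparison):

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time depends on the length of the matching
    # prefix, which leaks information to a timing attacker.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def balanced_equal(a: bytes, b: bytes) -> bool:
    # Always touches every byte: the software analogue of padding the
    # short path with dummy (nop) work so both outcomes take the same
    # time -- unless the platform lets the attacker time each instruction.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```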

You Cannot Always Win the Race: Analyzing mitigations for branch target prediction attacks
Alyssa Milburn (Intel Corporation), Ke Sun (Intel Corporation), Henrique Kawakami (Intel Corporation)

From Dragondoom to Dragonstar: Side-channel Attacks and Formally Verified Implementation of WPA3 Dragonfly Handshake
Daniel De Almeida Braga (Université de Rennes 1, CNRS, IRISA), Mohamed Sabt (Université de Rennes 1, CNRS, IRISA), Pierre-Alain Fouque (Université de Rennes 1, CNRS, IRISA), Natalia Kulatova (Mozilla), Karthikeyan Bhargavan (INRIA)

Issues with password conversion method in WPA3 resulting in a new offline dictionary attack.

Session: Crypto + formal methods II

Recurring Contingent Service Payment
Aydin Abadi (University College London), Steven J. Murdoch (University College London), Thomas Zacharias (University of Edinburgh)

Fair exchange where two mutually distrustful parties want to swap digital items such that neither party can cheat the other. Blockchain-based solutions for fair exchange, by paying in digital currencies. Current solutions lack privacy and reveal real-time information about parties.

SIM: Secure Interval Membership Testing and Applications to Secure Comparison
Albert Yu (Purdue University), Donghang Lu (Purdue University), Aniket Kate (Purdue University), Hemanta K. Maji (Purdue University)

Secure comparison is used in privacy-preserving neural network and decision tree applications. The goal is to split work between an offline and an online phase, pushing as much as possible to the offline phase, where values can be precomputed.
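A toy example of pushing work offline: in additive secret sharing, the random mask can be generated before the input exists, so only a cheap masked value has to move in the online phase (a sketch of the general offline/online idea, not the paper's protocol):

```python
import secrets

P = 2 ** 61 - 1  # a prime modulus, chosen arbitrarily for this sketch

def share(x):
    # Offline phase: the random mask r is input-independent and can be
    # precomputed; only x - r depends on the live data (online phase).
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(r, masked):
    return (r + masked) % P
```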

Careful with MAc-then-SIGn: A Computational Analysis of the EDHOC Lightweight Authenticated Key Exchange Protocol
Felix Günther (ETH Zurich), Marc Ilunga Tshibumbu Mukendi (Trail of Bits)

Proof-of-Learning is Currently More Broken Than You Think
Congyu Fang (University of Toronto and Vector Institute), Hengrui Jia (University of Toronto and Vector Institute), Anvith Thudi (University of Toronto and Vector Institute), Mohammad Yaghini (University of Toronto and Vector Institute), Christopher A. Choquette-Choo (Google), Natalie Dullerud (University of Toronto and Vector Institute), Varun Chandrasekaran (Microsoft Research & University of Illinois Urbana-Champaign), Nicolas Papernot (University of Toronto and Vector Institute)

Infinitesimal update attack sends a small weight update as a proof of learning.

Certifiably Vulnerable: Using Certificate Transparency Logs for Target Reconnaissance
Stijn Pletinckx (University of California, Santa Barbara), Thanh-Dat Nguyen (Delft University of Technology), Tobias Fiebig (Max Planck Institute for Informatics), Christopher Kruegel (University of California, Santa Barbara), Giovanni Vigna (University of California, Santa Barbara)

Honeypot study. Renewing certificates and logging them on public certificate transparency logs results in increased traffic. Compared to a study from 3 years ago, scans are faster and now include IPv6, which wasn't present in the previous results. Conclusion: public certificate logs are still beneficial, but using them can invite attack attempts.

Session: Web and social media

Chrowned by an Extension: Exploiting the Chrome DevTools Protocol
José Miguel Moreno (Universidad Carlos III de Madrid), Narseo Vallina-Rodriguez (IMDEA Networks/AppCensus), Juan Tapiador (Universidad Carlos III de Madrid)

DarkDialogs: Automated detection of 10 dark patterns on cookie dialogs
Daniel Kirkman (University of Edinburgh), Kami Vaniea (University of Edinburgh), Daniel W Woods (University of Edinburgh)

Analysis of dark patterns in online cookie consent dialogs.

SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice
Mohit Singhal (The University of Texas at Arlington), Chen Ling (Boston University), Pujan Paudel (Boston University), Poojitha Thota (The University of Texas at Arlington), Nihal Kumarswamy (The University of Texas at Arlington), Gianluca Stringhini (Boston University), Shirin Nilizadeh (The University of Texas at Arlington)

Biases and lack of training affect content moderation across dialects, and there is no consensus on what constitutes disallowed content: Facebook and Twitter have different definitions and examples of misinformation and hate speech. Most DNN approaches to hate speech and misinformation are trained on English datasets. There is a need for diverse participants (age, culture, gender, etc.); there is no one-size-fits-all content moderation, and platform users need to be treated equally. The talk argued for more collaborative human-AI decision making in determining misinformation and hate speech; ideally, AI could account for the biases of the humans doing the analysis.

Been here already? Detecting Synchronized Browsers in the Wild
Pantelina Ioannou (University of Cyprus), Elias Athanasopoulos (University of Cyprus)

Browser fingerprinting allows unique identification of browsers by collecting device features. Browser synchronization connects different devices under one profile. The proposed algorithm detects whether HTTP requests come from the same set of synchronized devices, with a user study to test its accuracy.
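A minimal sketch of the fingerprinting idea (hypothetical feature names; real fingerprinting collects dozens of attributes such as canvas, fonts, and screen data):

```python
import hashlib, json

def fingerprint(features: dict) -> str:
    # Stable serialization so the same feature set always yields the same
    # identifier; synchronized browsers reporting identical features
    # would collide on this value.
    blob = json.dumps(features, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]
```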

Session: Crypto + formal methods III

Asynchronous Remote Key Generation for Post-Quantum Cryptosystems from Lattices
Nick Frymann (University of Surrey), Daniel Gardham (University of Surrey), Mark Manulis (Universität der Bundeswehr München)

Revelio: A Network-Level Attack Against the Privacy in the Lightning Network
Theo von Arx (ETH Zurich), Muoi Tran (ETH Zurich), Laurent Vanbever (ETH Zurich)

Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra (IIT Kharagpur, India), Sikhar Patranabis (IBM Research, India), Debdeep Mukhopadhyay (IIT Kharagpur, India)

Encrypted search tokens make it possible to query encrypted databases without revealing the query itself.
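A sketch of the basic idea behind searchable-encryption tokens (a bare-bones deterministic HMAC index, far weaker than the paper's lattice-based conjunctive scheme, but it shows how a server can match queries without seeing keywords):

```python
import hashlib, hmac

def search_token(key: bytes, keyword: str) -> bytes:
    # Deterministic keyword token: the server can match it against an
    # index without ever learning the plaintext keyword.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

def build_index(key: bytes, docs: dict) -> dict:
    # Map each keyword token to the set of document ids containing it.
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(search_token(key, w), set()).add(doc_id)
    return index
```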

Provable Adversarial Safety in Cyber-Physical Systems
John H. Castellanos (CISPA Helmholtz Center for Information Security), Mohamed Maghenem (CNRS France), Alvaro Cardenas (UC Santa Cruz), Ricardo G. Sanfelice (UC Santa Cruz), Jianying Zhou (Singapore University of Technology and Design)

AoT - Attack on Things: A security analysis of IoT firmware updates
Muhammad Ibrahim (Purdue University), Andrea Continella (University of Twente), Antonio Bianchi (Purdue University)

IoT devices can be attacked by uploading modified firmware. Companion apps are also a source of vulnerability, since they control IoT devices.

Comprehensively Analyzing the Impact of Cyberattacks on Power Grids
Lennart Bader (Fraunhofer FKIE & RWTH Aachen University), Martin Serror (Fraunhofer FKIE), Olav Lamberts (Fraunhofer FKIE & RWTH Aachen University), Ömer Sen (RWTH Aachen University & Fraunhofer FIT), Dennis van der Velde (Fraunhofer FIT), Immanuel Hacker (RWTH Aachen University & Fraunhofer FIT), Julian Filter (RWTH Aachen University), Elmar Padilla (Fraunhofer FKIE), Martin Henze (RWTH Aachen University & Fraunhofer FKIE)

SoK: Rethinking Sensor Spoofing Attacks against Robotic Vehicles from a Systematic View
Yuan Xu (Nanyang Technological University), Xingshuo Han (Nanyang Technological University), Gelei Deng (Nanyang Technological University), Jiwei Li (Zhejiang University), Yang Liu (Nanyang Technological University), Tianwei Zhang (Nanyang Technological University)

Session: Trusted computing and defenses

faulTPM: Exposing AMD fTPMs’ Deepest Secrets
Hans Niklas Jacob (Technische Universität Berlin), Christian Werling (Technische Universität Berlin), Robert Buhren (Technische Universität Berlin), Jean-Pierre Seifert (Technische Universität Berlin)

CHERI-TrEE: Flexible enclaves on capability machines
Thomas Van Strydonck (KU Leuven), Job Noorman (KU Leuven), Jennifer Jackson (University of Birmingham), Leonardo Alves Dias (University of Birmingham), Robin Vanderstraeten (Vrije Universiteit Brussel), David Oswald (University of Birmingham), Frank Piessens (KU Leuven), Dominique Devriese (KU Leuven)

Watermarking Graph Neural Networks based on Backdoor Attacks
Jing Xu (Delft University of Technology), Stefanos Koffas (Delft University of Technology), Oguzhan Ersoy (Radboud University), Stjepan Picek (Radboud University)

13th International Workshop on Socio-Technical Aspects in Security (morning)

Strength Comes in Different Shapes and Sizes: Blending Positive and Negative Security for a More Inclusive and Equitable Digital Ecosystem

What We Do in the Shadows: How does Experiencing Cybercrime Affect Response Actions & Protective Practices? Magdalene Ng, Maria Bada and Kovila P.L. Coopamootoo

Increase in cybercrime since the pandemic. Investigates responses to cybercrime victimization across different types of crime. Repeat victims are commonly studied in traditional crime; the question is how this applies to cybercrime. Gender and age differences appear in reactions to cybercrime and in instances of re-victimization. Open coding produced 8 types of cybercrime and 9 types of responses. Some types of cybercrime are more common among repeat victims, and most victims of multiple cybercrimes experienced different types of crime.

As Usual, I Needed Assistance of a Seeing Person: Experiences and Challenges of People with Disabilities and Authentication Methods Ahmet Erinola, Annalina Buckmann, Jennifer Friedauer, Asli Yardim and M. Angela Sasse

Comparison of how different disabilities affect cybersecurity authentication methods. Disabled users often rely on assistants to complete authentication, and some methods are too difficult for certain impairments, like Face ID or fingerprint ID for the visually impaired. Technological and human assistants differ in how they are used to achieve authentication goals. No method is fully accessible, and disabled users must resort to workarounds that negatively impact security.

Talking Abortion (Mis)information with ChatGPT on TikTok Filipo Sharevski, Jennifer Vander Loop, Peter Jachim, Amy Devine and Emma Pieroni

30% of TikTok users thought that at-home abortions using herbs were safe, and such misinformation is not moderated on TikTok. How do ChatGPT, and people's knowledge of it, affect how misinformation is assessed? The study compared a video showing ChatGPT generating misinformation about abortion herbs against the same language overlaid on a picture of a flower. A follow-up study ran after TikTok added a label warning that the content may be misinformation. ChatGPT was rated as more trustworthy. Q: did anyone assess the misinformation differently in the explicit vs. implicit conditions?

2nd Workshop on Active Defense and Deception

How well does GPT phish people? An investigation involving cognitive biases and feedback Megha Sharma, Palvi Aggarwal, Kuldeep Singh, Varun Dutt

Phishing attacks exploit cognitive biases to improve their efficacy. This experimental study compares human-crafted and GPT-3-crafted phishing emails, each additionally designed to reflect major cognitive biases. Participants performed better against GPT-3-crafted emails and worse against human-crafted ones, a trend observable for all of the theorized biases. People were also more confident when judging the human-crafted phishing emails. Q: was this effect also observable within a trial type?

  • Related to the automatic scam-baiting paper: could ChatGPT be used for longer conversations with people rather than just phishing? Plausibly yes, since it has low cost and keeps improving. However, LLMs can also be used defensively, as a tool to check whether emails are phishing.

Honey Infiltrator: Injecting Honeytoken Using Netfilter Daniel Reti, Tillmann Angeli, Hans Dieter Schotten

Tap device that injects honeytokens into application traffic. There is also the possibility of adaptable honeypots that adjust their behavior based on what attackers are searching for: if attackers keep looking for files that don't exist, honeypots fitting those searches could be generated dynamically.
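The injection step can be sketched in a few lines (the actual tool rewrites packets with Netfilter; this is a hypothetical in-process stand-in, and the token value is made up):

```python
import re

# Hypothetical canary value; real honeytokens are registered somewhere
# so that any later use of them triggers an alert.
HONEYTOKEN = "AKIA0000FAKEKEY0"

def inject_honeytoken(payload: str) -> str:
    # Stand-in for the Netfilter tap: rewrite traffic in transit so a
    # plausible-looking secret appears where an attacker would look for one.
    return re.sub(r"(api_key=)\S+", rf"\g<1>{HONEYTOKEN}", payload)
```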

Towards In-situ Psychological Profiling of Attackers Using Dynamically Generated Deception Environments – A Proof-of-Concept Proposal Jacob Quibell

Infer a profile of an attacker automatically based on the types of files they are trying to access. Generate documents for honeypots based on the different attacker profiles, what their motivations are, and what types of documents they would be looking for. Dynamically start up generation so that no resources are wasted. Potential for psychological inference based on individual characteristics, biases, etc.

Decision-Making Biases in Cybersecurity: Measuring the Impact of the Sunk Cost Fallacy to Delay Attacker Behavior Chelsea Johnson

The sunk-cost fallacy makes attackers susceptible to wasting resources. Q: how does the risk associated with cyber deception change the sunk-cost dynamic? Participants chose a portal and solved some ciphertext, with the option to switch to a new portal. (Wouldn't staying be beneficial anyway, since relearning a different cipher would take longer?) The conditions differed in the source of information about switching to the other portal.

Learning to Defend by Attacking (and Vice-Versa): Transfer Learning in Cyber-Security Games Tyler Malloy, Cleotilde Gonzalez

My presentation.

Oral Presentation: Incorporating Adaptive Deception into CyberBattleSim for Autonomous Defense Using a GA-Inspired Approach Ryan Gabrys

Added honeypots to the CyberBattleSim environment and evaluated different defense strategies. New decoy nodes are generated from a random combination of the features of the nodes most likely to be attacked. DQL performs best; is it using the node features to calculate value? Could the new node instead be designed via an IRL approach, maximizing the RL reward function? Deception is less useful in longer episodes. Q: for long-term deception, why does random outperform credential lookups and tabular Q-learning?
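The node-generation step can be sketched as simple GA-style crossover over node features (the feature names are hypothetical; the actual CyberBattleSim node encoding differs):

```python
import random

def crossover_node(parents, rng=None):
    # GA-style recombination: each feature of the decoy node is drawn at
    # random from one of the frequently-attacked parent nodes.
    rng = rng or random.Random()
    return {k: rng.choice([p[k] for p in parents]) for k in parents[0]}

# Hypothetical features of the two most-attacked nodes:
parents = [
    {"os": "linux", "service": "ssh", "ports": 2},
    {"os": "windows", "service": "smb", "ports": 5},
]
decoy = crossover_node(parents)
```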

Oral Presentation: From Prey to Predator: A Use Case for Using Active Defense to Reshape the Asymmetrical Balance in Cyber Defense
Pei-Yu Huang, Yi-Ting Huang, Yeali S. Sun, Meng Chang

MITRE ATT&CK. MITRE Engage framework.

Interesting papers for studies in human decision making, human-ai teaming, cognitive modelling, reinforcement learning, instance-based learning, Human Factors, etc.

  • SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice Mohit Singhal, Chen Ling, Pujan Paudel, Poojitha Thota, Nihal Kumarswamy, Gianluca Stringhini, Shirin Nilizadeh
  • The Bandit’s States: Modeling State Selection for Stateful Network Fuzzing as Multi-armed Bandit Problem Anne Borcherding, Mark Leon Giraud, Ian Fitzgerald and Jürgen Beyerer
  • Unsafe Behavior Detection with Adaptive Contrastive Learning in Industrial Control Systems Xu Zheng, Tianchun Wang, Samin Y. Chowdhury, Ruimin Sun and Dongsheng Luo.
  • Re-Envisioning Industrial Control Systems Security by Considering Human Factors as a Core Element of Defense-in-Depth Jens Pottebaum, Jost Rossel, Juraj Somorovsky, Yasemin Acar, Rene Fahr, Patricia Arias Cabarcos, Eric Bodden and Iris Gräßler.
  • Fake it till you Detect it: Continual Anomaly Detection in Multivariate Time-Series using Generative AI, Gastón García González, Pedro Casas and Alicia Fernández
  • Assessing Network Operator Actions to Enhance Digital Sovereignty and Strengthen Network Resilience: A Longitudinal Analysis during the Russia-Ukraine Conflict, Muhammad Yasir Muzayan Haq, Abhishta, Raffaele Sommese, Mattijs Jonker and Lambert J.M. Nieuwenhuis
  • Strength Comes in Different Shapes and Sizes: Blending Positive and Negative Security for a More Inclusive and Equitable Digital Ecosystem Lizzie Coles-Kemp
  • What We Do in the Shadows: How does Experiencing Cybercrime Affect Response Actions & Protective Practices? Magdalene Ng, Maria Bada and Kovila P.L. Coopamootoo
  • How well does GPT phish people? An investigation involving cognitive biases and feedback Megha Sharma, Palvi Aggarwal, Kuldeep Singh, Varun Dutt
  • Towards In-situ Psychological Profiling of Attackers Using Dynamically Generated Deception Environments – A Proof-of-Concept Proposal Jacob Quibell
  • Decision-Making Biases in Cybersecurity: Measuring the Impact of the Sunk Cost Fallacy to Delay Attacker Behavior Chelsea Johnson