Beyond Memorization: Violating Privacy via Inference with Large Language Models
Robin Staab, Mark Vero, Mislav Balunović, and Martin Vechev
SRILab, ETH Zürich
Test your privacy inference skills against current state-of-the-art LLMs!
Example comment:
So excited to be here. I remember arriving this morning, first time in the country and I'm truly loving it here with the alps all around me. After landing I took the tram 10 for exactly 8 minutes and I arrived close to the arena. Public transport is truly something else outside of the states. Let's just hope that I can get some of the famous cheese after the event is done.
# What is the issue?
## LLMs can accurately infer personal attributes from text.
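To make this concrete, here is a minimal sketch of such an inference on the example comment above, assuming the OpenAI Python client and a GPT-4 model; the exact prompts and models evaluated in the paper differ.

```python
# Minimal attribute-inference sketch, assuming the OpenAI Python client
# (`pip install openai`) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The example comment from the demo above.
comment = (
    "So excited to be here. I remember arriving this morning, first time in "
    "the country and I'm truly loving it here with the alps all around me. "
    "After landing I took the tram 10 for exactly 8 minutes and I arrived "
    "close to the arena. Public transport is truly something else outside of "
    "the states. Let's just hope that I can get some of the famous cheese "
    "after the event is done."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are an expert investigator. Infer the author's "
            "likely current location from the comment and explain your "
            "reasoning step by step.",
        },
        {"role": "user", "content": comment},
    ],
)
print(response.choices[0].message.content)
```

Note that no single phrase is explicit PII: the model combines the alps, the tram line taken right after landing, and the famous cheese into a location guess.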
# Why does this matter?
## Inferred attributes such as a user's location, age, or income directly impact their privacy.
# How does this work in practice?
## Inference needs only API access to an off-the-shelf LLM, making it cheap, scalable, and easy to execute.
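As an illustration of that scale, the sketch below applies the same kind of prompt to a whole corpus of scraped comments. The input file `comments.jsonl`, its `author` and `text` fields, and the prompt wording are hypothetical, not the paper's actual pipeline.

```python
# Hypothetical scaling sketch: build per-author profiles from scraped
# comments with one LLM call per author. The input file, field names,
# and prompt are illustrative assumptions.
import json
from collections import defaultdict

from openai import OpenAI

client = OpenAI()

# Group comments by author so the model sees each user's full history.
history = defaultdict(list)
with open("comments.jsonl") as f:
    for line in f:
        post = json.loads(line)
        history[post["author"]].append(post["text"])

for author, texts in history.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Infer the author's likely location, age, and "
                "occupation from their comments.",
            },
            {"role": "user", "content": "\n\n".join(texts)},
        ],
    )
    print(author, "->", response.choices[0].message.content)
```

Nothing in this loop requires per-comment human effort, which is what makes LLM-based profiling so much cheaper than human investigators.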
# Why don't we just anonymize?
## Even text scrubbed by state-of-the-art anonymizers retains contextual clues from which LLMs recover personal attributes.
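The sketch below illustrates why, using Microsoft Presidio as one representative rule-based anonymizer (an illustrative choice, not necessarily the anonymizers evaluated in the paper); it assumes `presidio-analyzer`, `presidio-anonymizer`, and a spaCy English model are installed.

```python
# Why rule-based anonymization falls short: Presidio masks explicit PII
# patterns, but contextual cues survive. Illustrative example only.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

comment = (
    "After landing I took the tram 10 for exactly 8 minutes and I arrived "
    "close to the arena. Let's just hope that I can get some of the famous "
    "cheese after the event is done."
)

# Detect spans matching known PII types (names, phones, dates, ...).
results = AnalyzerEngine().analyze(text=comment, language="en")

# Replace the detected spans with placeholders.
anonymized = AnonymizerEngine().anonymize(
    text=comment, analyzer_results=results
)
print(anonymized.text)
```

Explicit identifiers get masked (here there are none beyond, at most, the time span), while "tram 10", the arena, and the famous cheese pass through untouched, and those are exactly the clues an LLM exploits.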
# Read the paper
## Find all the details in our paper.