This talk is about how, in the world of contactless payments, many added features, new options and various country-specific regulations all interact in ways that can lead to security flaws. It is based on work presented at IEEE S&P 2022 and work to appear at USENIX Security 2025.
Ioana Boureanu is a Professor of Secure Systems at the University of Surrey and the Director of the Surrey Centre for Cyber Security. Her research focuses on (automatic) analysis of security using mainly logic-based formalisms, as well as on provable security and applied cryptography. She received her PhD on these topics from Imperial College London in 2011. Before joining Surrey, she worked as a researcher and professor in Switzerland, as well as a cryptography consultant in industry.
Many end-to-end verifiable schemes have been proposed, and these typically require voters to perform some form of ballot audit in order to achieve cast-as-intended assurance. Such audits typically take the form of a cut-and-choose procedure, often in sequential form, where the voters should randomly decide at each step whether to audit or to cast (Benaloh challenges). These are generally thought to be awkward from a usability point of view and hard to understand. In practice, few voters perform even one audit before casting, and virtually none perform more than one. Here we propose an in-person scheme in which ballot audits are outsourced to independent auditors and observers. There is no longer any need or expectation that voters perform ballot audits (though the option to audit can still be offered to voters). All ballots are audited, including the ones used to cast votes, so no cut-and-choose is needed, resulting in much higher levels of ballot assurance. This contributes to improved usability and acceptability. It also means that assurance in the outcome is no longer reliant on a good proportion of voters performing ballot audits sufficiently diligently. The outcome can be deemed verified even if no voters perform ballot audits. Furthermore, ballot audits take the form of verifying (malleable) signatures rather than verifying encryptions, avoiding the need to reveal randomisations etc. The ballots incorporate a trivial encryption of the vote, so the vote is displayed in the clear to the voter. Dispute resolution is also improved compared to many other ballot auditing methods, such as Benaloh challenges, as disputes are now public and universally verifiable.
Peter Y A Ryan has been full Professor of Applied Security at the University of Luxembourg since 2009. Since joining the University of Luxembourg he has grown the APSIA (Applied Security and Information Assurance) group, which is now around 20 strong. He has around 30 years of experience in cryptography, information assurance and formal verification. He pioneered the application of process calculi to the modelling and analysis of secure systems, firstly the characterization of non-interference and later the analysis of crypto protocols. He initiated and led the “Modelling and Analysis of Security Protocols” project, in collaboration with researchers in Oxford and Royal Holloway, which pioneered the application of process algebra (CSP) and model-checking tools (FDR) to the analysis of security protocols. He has published extensively on cryptography, cryptographic protocols, security policies, mathematical models of computer security and, most recently, voter-verifiable election systems. He is the (co-)creator of several innovative, verifiable voting schemes: Prêt à Voter, Pretty Good Democracy, Caveat Coercitor, Selene, Electryo and Hyperion. With Feng Hao, he also developed the OpenVote boardroom voting scheme and the J-PAKE password-based authenticated key establishment protocol. He and his team also work on quantum and “post-quantum” crypto and information assurance and the socio-technical aspects of security and trust. Prior to taking up the Chair in Luxembourg, he held a Chair in Computing Science at the University of Newcastle. Before that he worked at various organisations known by three- and four-letter acronyms: the Government Communications HQ (GCHQ), CESG, the Defence Research Agency (DRA) Malvern, the Stanford Research Institute (SRI) in Cambridge, UK, and the Software Engineering Institute (SEI), CMU, Pittsburgh. He was awarded a PhD in theoretical physics from the University of London in 1982.
He has sat on the program committees of numerous prestigious security conferences, notably: IEEE Security and Privacy, the IEEE Computer Security Foundations Workshop/Symposium (CSF), the European Symposium on Research in Computer Security (ESORICS), and the Workshop on Issues in the Theory of Security (WITS). He has (co-)chaired various editions of WITS and ESORICS, Frontiers of Electronic Elections, the Workshop on Trustworthy Elections (WOTE) 2007, and E-Vote-ID. He was General Chair of ESORICS 2019, which was hosted in Luxembourg. In 2016 he founded the Verifiable Voting workshops held in association with Financial Crypto. From 1999 to 2007 he was the President of the ESORICS Steering Committee. In 2013 he was awarded the ESORICS Outstanding Contributions Award. He is a Visiting Professor at Surrey University and the ENS Paris.
The threat model and setting suggested by the Bitcoin blockchain have brought to light an array of interesting questions in distributed systems, such as how to handle consensus in the presence of dynamic server availability, or how a protocol can self-heal from adversarial spikes. In this talk I give an overview of this area, covering definitions and modelling efforts developed over a decade, and I describe motivating real-world objectives such as the "software only launch" problem for proof-of-work (PoW) based protocols and the "key airdrop launch" problem for proof-of-stake (PoS) based protocols. The benefits that the PoS approach can bring to ledger consensus design are highlighted, both in terms of energy efficiency and in facilitating a "best of both worlds" operation: the system launches in a dynamic-availability mode that incentivizes participation, and once participation exceeds a threshold, a higher-performance asynchronous consensus operation can offer further benefits such as partition tolerance and responsiveness. The backdrop of the talk is my research on the Ouroboros blockchain protocol, which has spanned almost a decade; I cover design challenges faced, lessons learned and future research directions.
Aggelos Kiayias is chair in Cyber Security and Privacy and director of the Blockchain Technology Laboratory at the University of Edinburgh. He is also the Chief Scientist at the blockchain technology company Input Output. His research interests are in computer security, information security, applied cryptography and the foundations of cryptography, with a particular emphasis on blockchain technologies and distributed systems, e-voting and secure multiparty protocols, as well as privacy and identity management. He has received an ERC Starting Grant, a Marie Curie fellowship, an NSF CAREER Award, and a Fulbright Fellowship. He holds a Ph.D. from the City University of New York and is a graduate of the Mathematics department of the University of Athens. He has over 200 publications in journals and conference proceedings in the area. He has served as the program chair of the Cryptographers’ Track of the RSA conference in 2011 and the Financial Cryptography and Data Security conference in 2017, as well as the general chair of Eurocrypt 2013. He also served as the program chair of the Real World Crypto Symposium 2020 and the Public-Key Cryptography Conference 2020. In 2021 he was elected fellow of the Royal Society of Edinburgh and in 2024 he received the Lovelace Medal from the British Computer Society.
Researchers have published thousands of papers demonstrating the security and privacy risks of Machine Learning (ML) models, unfortunately without any definitive solution for robustness. Yet, ML models are being widely deployed commercially, with the latest examples being Large Language Models and Generative AI. In this talk, we will reflect on the gaps between academic research and industry practice, to understand the key elements that lead to the deployment of ML models despite the risks identified by academia. A first, prominent aspect is a misalignment in threat models: academics tend to focus on the security of a single ML component, whereas industry operates much more complex pipelines that also include traditional security solutions such as signatures and manual inspections. After discussing the main gaps, we will examine trends in research that aim to overcome these differences, and discuss how we could develop more impactful research and deploy safer systems.
Dr. Fabio Pierazzi is an Associate Professor in Information Security at UCL Computer Science. His research interests are at the intersection of systems security and machine learning, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity, adaptive attackers). He has worked extensively on network modeling for intrusion detection, on malware detection techniques based on different machine learning abstractions, and on understanding how to overcome the limitations of applying machine learning to identifying and explaining sophisticated attacks. Home page: https://fabio.pierazzi.com
Scholars in International Political Sociology (IPS) engaging with issues relating to mobility grapple with how various dynamics “escape” nation states and their borders. These approaches challenge understandings of the international that are often locked in conceptions of space and the political based around territory and states. Yet the spatial and material implications of concepts such as the “space of exception”, or of spaces that materially challenge the logics of territory (cyberspace, the high seas, refugee camps), have been left undertheorised. These spaces, however, are not “negative space” outside territory but rather sites with their own material and political specificity. Through a focus on mobility as a global challenge, this paper explores this friction, to consider how the exceptional and international become diffused in ordinary living and everyday practices. We offer two distinct examples through which to analyse these dynamics: the ability of humanitarian practice to consign spaces to the realm of exception, and the ways in which commercial logics of maritime transport have carved out the space of the sea as beyond legal oversight and protection. These show the crucial frictions behind the production of the exceptional through the everyday, as both a conceptual and methodological tool.
Dr Hannah Owens is an interdisciplinary scholar working across International Relations, Security Studies, Migration Studies and Geopolitics. After completing a PhD at QMUL in 2023, Hannah worked as a Lecturer in PIRP, RHUL, before starting at the University of Hertfordshire as a Lecturer in Politics and IR. Inspired by decolonial, race and gender theory, Hannah’s research explores how migration, ruralisation and social justice shape everyday politics and security in the Middle East. Through a qualitative multi-methods approach, including ethnography, interviews, policy and discourse analysis, and visual and mobile methods, they research the role of state and non-state actors, aid organisations and civil society networks, drawing out the identity politics and protection practices of non-camp refugees and rural host communities.
An enormous percentage of paper drafts die in someone's email inbox. Lots of conversations with potential collaborators go around a non-terminating loop of "Hey we should write that paper!" Joe has spent a decade getting teenagers, people convicted of violent crimes, addicts, actors, undergrads, and schoolchildren on four continents to create written materials together. Joe will present a selection of lessons learned that he believes also apply to academic writing collaborations.
Joe is a part-time, teaching-focused lecturer in the ISG at Royal Holloway. He runs the creative writing project https://whitewaterwriters.com/
Security against chosen-ciphertext attacks (CCA) concerns privacy of messages even if the adversary has access to the decryption oracle. While the classical notion of CCA security seems to be strong enough to capture many attack scenarios, it falls short of preserving the privacy of messages in the presence of quantum decryption queries, i.e., when an adversary can query a superposition of ciphertexts.
Boneh and Zhandry (CRYPTO 2013) defined the notion of quantum CCA (qCCA) security to guarantee privacy of messages in the presence of quantum decryption queries. However, their construction is based on an exotic cryptographic primitive (namely, identity-based encryption with security against quantum queries), for which only one instantiation is known. In this work, we comprehensively study qCCA security for public-key encryption (PKE) based on both generic cryptographic primitives and concrete mathematical assumptions, yielding the following results:
- We show that key-dependent message secure encryption (along with PKE) is sufficient to realize qCCA-secure PKE. This yields the first construction of qCCA-secure PKE from the LPN assumption.
- We prove that hash proof systems imply qCCA-secure PKE, which results in the first instantiation of PKE with qCCA security from (isogeny-based) group actions.
- We extend the notion of adaptive TDFs (ATDFs) to the quantum setting by introducing quantum ATDFs, and we prove that quantum ATDFs are sufficient to realize qCCA-secure PKE. We also show how to instantiate quantum ATDFs from the LWE assumption.
- We show that a single-bit qCCA-secure PKE is sufficient to realize a multi-bit qCCA-secure PKE by extending the completeness of bit encryption for CCA security to the quantum setting.
This is joint work with Navid Alamati (VISA Research).
Varun Maram is a postdoc in the Cybersecurity Group at SandboxAQ. His current research interests lie in quantum-resistant cryptography, with an emphasis on provable post-quantum security of real-world cryptographic systems. He obtained his PhD at ETH Zurich in 2023 where he was part of the Applied Cryptography Group. Varun's research has been recognized with best paper awards at PKC 2023 and PKC 2024 (the latter was awarded to the above work). He is also a co-submitter of "Classic McEliece", a key-establishment scheme which is currently in contention in the fourth round of NIST’s post-quantum cryptography standardization project.
Watermarking generative models consists of planting a statistical signal (watermark) in a model's output so that it can be later verified that the output was generated by the given model. A strong watermarking scheme satisfies the property that a computationally bounded attacker cannot erase the watermark without causing significant quality degradation. In this paper, we study the (im)possibility of strong watermarking schemes. We prove that, under well-specified and natural assumptions, strong watermarking is impossible to achieve. This holds even in the private detection algorithm setting, where the watermark insertion and detection algorithms share a secret key, unknown to the attacker. To prove this result, we introduce a generic efficient watermark attack; the attacker is not required to know the private key of the scheme or even which scheme is used. Our attack is based on two assumptions: (1) The attacker has access to a "quality oracle" that can evaluate whether a candidate output is a high-quality response to a prompt, and (2) The attacker has access to a "perturbation oracle" which can modify an output with a nontrivial probability of maintaining quality, and which induces an efficiently mixing random walk on high-quality outputs. We argue that both assumptions can be satisfied in practice by an attacker with weaker computational capabilities than the watermarked model itself, to which the attacker has only black-box access. Furthermore, our assumptions will likely only be easier to satisfy over time as models grow in capabilities and modalities. We demonstrate the feasibility of our attack by instantiating it to attack three existing watermarking schemes for large language models: Kirchenbauer et al. (2023), Kuditipudi et al. (2023), and Zhao et al. (2023). The same attack successfully removes the watermarks planted by all three schemes, with only minor quality degradation.
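The generic attack described above can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: the oracles here are hypothetical stand-ins (in the paper's setting, the quality oracle would be, e.g., a strong evaluator model, and the perturbation oracle a local edit such as a span replacement), and the "outputs" are plain lists rather than text.

```python
import random

def remove_watermark(output, quality_oracle, perturbation_oracle, steps=200):
    """Generic watermark-removal attack (sketch): a random walk over
    high-quality outputs. The attacker needs no knowledge of the
    watermarking scheme or its secret key, only the two oracles."""
    current = output
    for _ in range(steps):
        candidate = perturbation_oracle(current)
        # Only accept perturbations that preserve quality, so the walk
        # stays on the set of high-quality outputs; after enough mixing
        # steps, the statistical watermark signal is washed out.
        if quality_oracle(candidate):
            current = candidate
    return current

# Toy instantiation: an "output" is a list of ints, "quality" means the
# entries sum to 10, and a perturbation moves a unit between positions
# (which always preserves the sum, hence quality).
def toy_quality(xs):
    return sum(xs) == 10

def toy_perturb(xs):
    ys = list(xs)
    i = random.randrange(len(ys))
    j = random.randrange(len(ys))
    ys[i] += 1
    ys[j] -= 1
    return ys

result = remove_watermark([10, 0, 0], toy_quality, toy_perturb)
assert toy_quality(result)  # quality preserved while the walk mixes
```

The key point the sketch makes concrete is that the loop is scheme-agnostic: the same rejection-sampling random walk was instantiated against all three watermarking schemes mentioned above.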
Danilo Francati is a Lecturer in the Department of Information Security at Royal Holloway, University of London. Before joining Royal Holloway, he was a Postdoctoral Researcher at George Mason University (2024) and Aarhus University (2021–2024). He earned his Ph.D. in September 2021 from Stevens Institute of Technology, where he conducted his research under the guidance of Giuseppe Ateniese. His research focuses on theoretical and applied cryptography, particularly advanced public-key primitives, blockchain, space-based primitives, and privacy-preserving machine learning. His work on Proof of Space has received support from Protocol Labs. Danilo regularly publishes in leading security conferences, including CRYPTO, EUROCRYPT, CCS, and IEEE S&P.
Smaller organisations can face many of the same cyber security challenges as their larger counterparts, but often lack the knowledge and resources to support themselves in addressing the problems. Drawing from findings from the ongoing CyCOS project, the presentation examines the diverse range of sources that smaller organisations may encounter when seeking cyber security guidance, as well as the inconsistent and potentially confusing coverage they may see as a result. It also presents views collected directly from SMEs and those providing them with cyber security support, illustrating that while some action may be taken, it is often limited in scope, and rarely proactive in nature. The findings support the desirability of a new community-based approach to support, bringing small businesses and advisors together in a more accessible context, with the hope of encouraging and assisting their engagement with cyber security issues.
Prof. Steven Furnell is Professor of Cyber Security in the School of Computer Science at the University of Nottingham. His research interests include security management and culture, usability of security and privacy, and technologies for user authentication and intrusion detection. He has authored over 390 papers in refereed international journals and conference proceedings, as well as various books, book chapters, and industry reports. Steve is the UK representative to Technical Committee 11 (security and privacy) within the International Federation for Information Processing, a board member of the Chartered Institute of Information Security, a member of the Steering Group for the Cyber Security Body of Knowledge (CyBOK), and a member of the Careers and Learning Working Group within the UK Cyber Security Council. Steve is also the Principal Investigator on the CyCOS project, looking at enhancing cyber security support for small organisations.
Cybersecurity has introduced a unique challenge to centuries-old practices of international relations: the problem of attribution. All kinds of mechanisms, institutions, and processes have evolved to support the maintenance of international order and stability, including treaties, sovereignty, laws about conduct in war, etc. But despite the range of problems we have been confronted with in international relations, never before has there been such a lack of clarity around who is acting. While it may be technically possible to identify a malicious state actor, this problem goes well beyond the technical: it is often politically impossible to share evidence in these cases, because doing so would reveal too much about the accusing state's intelligence networks. This undermines attribution of state-based cyber attacks and provides 'plausible deniability' for those who want it. In this project, we are exploring the implications of using zero-knowledge proofs to address the problem of attribution in state-based cyber attacks.
Madeline Carr is Professor of Global Politics and Cybersecurity at University College London. Her research focuses on the implications of emerging technology for national and global security, international order, and corporate governance. Professor Carr has published on cyber norms, multi-stakeholder Internet governance, the future of the insurance sector in the IoT, cybersecurity and international law, the public/private partnership in national cyber security strategies, and the ways in which boards approach cyber risk. Professor Carr is a member of the World Economic Forum Global Council on the Connected World, where she chairs a cross-sectoral group working on the cybersecurity of the Internet of Things. She is also the Co-Director of an interdisciplinary Centre for Doctoral Training in Cybersecurity at UCL and Deputy Director of the REPHRAIN Protecting Citizens Online research hub. From 2018 to 2022, Carr was the Director of the UK-wide Research Institute for Sociotechnical Cyber Security and developed a research programme on cybersecurity in local government. Board appointments include NED for Talion and the Advisory Board for the £70M Digital Security by Design project.
An access control system is generally responsible for managing accesses from requesters to protected resources, using an access control policy. In many cases, the system might deviate from the expected behaviour, for instance by denying an access that the requester expected to be granted, by granting an access to a resource that a stakeholder was not expecting, or even by ignoring some part of the policy. This work-in-progress, in collaboration with Nicola Zannone (TU Eindhoven), explores the problem of generating and assessing explanations in Access Control, following and taking inspiration from recent approaches in Explainable AI and Explainable Security.
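To make the kind of explanation at stake concrete, here is a minimal sketch, not the authors' model: a hypothetical policy is an ordered list of (effect, condition) rules, and evaluation returns the decision together with the rule that produced it, a rudimentary explanation of why an access was granted or denied.

```python
# Minimal sketch of decision-with-explanation in access control.
# The rule format and first-applicable strategy are illustrative
# assumptions, not a specific access control standard.
def evaluate(policy, request):
    """Return (decision, explanation) for a request against an
    ordered list of (effect, condition) rules; default deny."""
    for idx, (effect, condition) in enumerate(policy):
        if condition(request):
            return effect, f"rule {idx} matched"
    return "deny", "no rule matched (default deny)"

policy = [
    ("permit", lambda r: r["role"] == "doctor" and r["resource"] == "record"),
    ("deny",   lambda r: r["role"] == "guest"),
]

decision, why = evaluate(policy, {"role": "guest", "resource": "record"})
# `why` identifies the rule responsible for the decision, which is the
# seed of an explanation a stakeholder could assess or contest.
```

Even in this toy form, the explanation lets a requester distinguish "denied by rule 1" from "denied because no rule applied", which is exactly the sort of deviation-from-expectation the abstract describes.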
Dr Charles Morisset is an Associate Professor in Computer Science at Durham University. He was previously an academic in the School of Computing at Newcastle University, after holding postdoctoral positions at UNU-IIST, Royal Holloway, CNR-IIT and Newcastle, and obtaining his PhD from Paris 6 (now Sorbonne Université). Charles has a broad interest in security and privacy, with a long-standing interest in Access Control, and he was recently involved in the PETRAS Centre of Excellence, investigating the security and privacy of smart buildings.