You can measure how different changes to an AI system affect participants' trust on a quantitative scale. But you'll miss a lot!
Our ethnography of NASA engineers building and using an AI tool to build mission-critical space software finds that trust has as much to do with a tool's location within a social and organizational context as with aspects of the tool itself. This study also questions the boundaries between AI "builders" and "users", as well as between "prototype" and "final" systems, as we watched the tool be built, used, and then built and used some more, often by the same people. We propose that as AI tools proliferate into knowledge work teams, AI trust is best conceived as an aspect of the human-AI collaboration ("collaborative trust") as the two work together and with a wider team to produce creative output.
In this presentation, I will share findings from our CHI 2021 paper, briefly contrasted with recent work on an open-source deepfake tool-building community, in the hope that this will provoke an engaging discussion of trust (and other oft-vague descriptors) in AI.
Link to NASA CHI 2021 paper, Link to Deepfake FAccT 2022 paper. Please email dwidder@cmu.edu if there is any way I can remove accessibility barriers to your participation in this talk and discussion.
Note that this is a Wednesday seminar.
David Gray Widder (he/him) is a Doctoral Student researching AI Ethics in the School of Computer Science at Carnegie Mellon University, and previously at Intel Labs, Microsoft Research, and NASA's Jet Propulsion Laboratory. He was born in Tillamook, Oregon and raised in Berlin and Singapore. He maintains a conceptual-realist artistic practice, advocates against police terror and pervasive surveillance, and enjoys distance running.
Commercial organisations continue to face a growing and evolving threat of data breaches and system compromises, making their cyber-security function critically important. Many organisations employ a Chief Information Security Officer (CISO) to lead such a function. In this talk, based on a paper to be presented later this year at CSCW'22, I discuss findings from in-depth, semi-structured interviews with 15 CISOs and six senior organisational leaders, representing 18 different commercial businesses. This work draws on broader security scholarship related to ontological security and sociological notions of identity work to provide an interpretative analysis of the CISO role in organisations. The findings reveal that the CISO is an interpreter of something mystical, unknown and fearful to the uninitiated. They show how the fearful nature of cyber security contributes to it being considered an ontological threat by the organisation, while responding to that threat contributes to the organisation's overall identity. I further discuss how cyber security is analogous to a belief system and how one of the roles of the CISO is akin to that of a modern-day soothsayer for senior management; this role is precarious and, at the same time, superior, leading to alienation within the organisation. The study also highlights that the CISO identity of protector-from-threat, linked to this precarious position, motivates self-serving actions, termed 'cyber sophistry'. It also discusses a series of implications for both organisations and CISOs.
Joseph is a PhD researcher within the Information Security Group at Royal Holloway, University of London and is performing multidisciplinary research into the purpose of Chief Information Security Officers (CISOs) and cyber-security functions within commercial organisations. He is interested in the broader social dimensions of cyber security and risk management and how they are used to influence society through power and control. Joseph currently works full-time as a CISO.
Witness Encryption was introduced by Garg, Gentry, Sahai and Waters in 2013 and has attracted great research interest since then. The reasons for this are twofold: firstly, witness encryption can be viewed as a generalisation of public key encryption (and its variants) but with "single-use keys", and secondly, it can be used as a building block for many other cryptographic primitives. This talk provides a brief introduction to definitions and constructions of witness encryption, as well as some open questions.
Saqib Kakvi is a lecturer in the Information Security Group. His research focuses on bridging the gaps between the theory and practice of cryptography. He has published results on digital signature schemes, most recently on standardised signature schemes. Recently, he has also been interested in advanced encryption primitives such as time-lock encryption and witness encryption.
Saqib received his doctorate at the Ruhr-University Bochum under the supervision of Prof. Dr. Eike Kiltz. He then took a postdoc position at the University of Bristol in the group of Prof. Nigel Smart. Following that, he was a postdoc at Paderborn University and the University of Wuppertal in the group of Prof. Dr.-Ing. Tibor Jager.
In this talk, I will first motivate circuit-based private set intersection (PSI) as a promising tool for jointly conducting privacy-preserving analyses over databases of two or more parties. Then, I will give an overview of constructions that substantially improved the performance of such cryptographic protocols over the last decade. Finally, I will touch on circuit compilation for secure computation, which is essential to automatically generate representations of analytic functions that can be efficiently evaluated on "encrypted" data.
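To make the target functionality concrete, here is a minimal plaintext sketch of the kind of joint analysis circuit-based PSI enables; the function name and toy data are my own illustrations, and in a real protocol this logic is evaluated as a circuit under secure computation, so neither party ever sees the other's raw set.

```python
# Ideal functionality behind circuit-based PSI analytics (plaintext sketch).
# Real protocols compute this inside secure two-party computation; only the
# final aggregate (here: intersection size and payload sum) is revealed.

def psi_cardinality_and_sum(set_a: dict, set_b: set):
    """set_a maps items to payloads (e.g., purchase amounts); set_b is the
    other party's set of items."""
    common = set_a.keys() & set_b
    return len(common), sum(set_a[x] for x in common)

if __name__ == "__main__":
    a = {"alice@x.com": 10, "bob@y.com": 25, "carol@z.com": 7}
    b = {"bob@y.com", "carol@z.com", "dave@w.com"}
    print(psi_cardinality_and_sum(a, b))  # -> (2, 32)
```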
Christian Weinert is a lecturer in the Information Security Group (ISG) at Royal Holloway, University of London. Before that, he was a doctoral researcher (2016/09 – 2021/08) and postdoctoral researcher (2021/09 – 2022/02) in the Cryptography and Privacy Engineering Group (ENCRYPTO) at the Department of Computer Science of TU Darmstadt, Germany. His research focuses on the design, implementation, and evaluation of privacy-preserving protocols at large scale. During his bachelor's and master's studies, he worked in the area of long-term storage.
Software is often viewed as a means to program devices. However, software is also a medium of communication between developers. This communication occurs through meaningful identifiers and comments in source code. State-of-the-art Software Analysis tools ignore this communication and use an intermediate representation of software that is devoid of any Natural Language tokens. In this talk, I will present a novel software representation called Name-Flow Graphs (NFGs) that improves traditional forms of Software Analysis by augmenting software representations with identifiers. I will demonstrate how NFGs can be used to identify more precise and consequently, more secure types for variables. I will also show how NFGs can be used to auto-decompose software by using it to separate conflated commits into individual concerns.
Santanu Dash is a Lecturer in the Information Security Group at Royal Holloway, University of London. He is interested in applications of Software Analysis to the maintenance and security of large software ecosystems, such as the Android Open Source Project. His work on Bimodal Software Analysis, which combines symbolic and probabilistic techniques in a unified framework, has led to publications in flagship venues (ESEC/FSE '20 and ESEC/FSE '18). He has recently been awarded a 3-year research grant by EPSRC to apply Bimodal Software Analysis to automated software maintenance. Santanu was previously a Lecturer at the University of Surrey and a post-doctoral researcher in the Systems Software Engineering Group at University College London and in the Information Security Group at Royal Holloway. He holds a PhD in Type-driven Software Security from the University of Hertfordshire.
This talk will cover recent research in distributed key generation (DKG) protocols, focusing on its definitions of security and on aggregable DKG, in which the parties can produce an aggregated and publicly verifiable transcript. It will also explore the applications of DKG and, if time permits, how DKG can be achieved in asynchronous environments.
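For readers new to the area, here is a toy sketch of the basic DKG idea the talk builds on: each party Shamir-shares a random contribution, and the group secret is implicitly the sum of all contributions, so no single party ever learns it. This is only an insecure illustration under honest behaviour; verifiability (e.g., Feldman or Pedersen commitments) and the aggregatable transcripts discussed in the talk are omitted, and the field size is a toy parameter.

```python
# Toy additive DKG via Shamir sharing (honest parties, no verifiability).
import secrets

P = 2**127 - 1  # toy prime field

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n parties."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

t, n = 3, 5
contribs = [secrets.randbelow(P) for _ in range(n)]   # each party's randomness
dealt = [share(c, t, n) for c in contribs]            # party j deals its shares
my_shares = [sum(dealt[j][i] for j in range(n)) % P for i in range(n)]
group_secret = sum(contribs) % P                      # never held by any one party
assert reconstruct([(i + 1, my_shares[i]) for i in range(t)]) == group_secret
```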
Sarah is a Professor in Cryptography and Security at University College London (UCL) and a Staff Research Scientist at Google. At UCL, she is affiliated with the Information Security Group in the Computer Science department, and at Google she is a member of the Certificate Transparency team. She is also an Associate Director of the Initiative for Cryptocurrencies and Contracts (IC3).
Based on the 2020 publication "Cryptic Commonalities", Adrienne will speak about how research commonalities were carved out among mathematicians, engineers and anthropologists in an interdisciplinary research project about advances in a cryptographic technique called Secure Multiparty Computation (MPC). Cryptography, a sub-genre of mathematics and an often-invisible infrastructure enabling secure digital communication, has received less attention. The article argues that the ubiquity of digital computing in our lives necessitates the creation of socio-mathematical vocabularies. Such vocabularies have the potential to lead to new situated data security practices based on local perceptions of rights and protection. STS scholars and anthropologists are uniquely situated to do this work. The article follows three anthropologists in their endeavors to find "cryptic commonalities" by "tacking back and forth" (cf. Helmreich 2009) between mathematicians', engineers' and their own scientific vocabularies. Despite these attempts, however, the parties often "talk past each other". Instead of shying away from the awkwardness that such moments produce, the authors embrace "epistemic disconcertment" (cf. Verran 2013a), carving out a space in which they can communicate productively with each other. Growing out of these insights, Adrienne will also present collaborative work from a transdisciplinary workshop and interactive exhibition called "Cryptic Commons" held in 2021.
Note that this seminar will be in person.
Adrienne Mannov is an assistant professor at the Department of Anthropology at Aarhus University (Moesgaard), Denmark. She has a PhD in anthropology from the University of Copenhagen, where she did an industrial PhD in collaboration with Sea Health & Welfare. In her PhD, she examined merchant seafarers' perceptions of maritime piracy as an occupational risk. Adrienne's research areas address security more generally. She has conducted fieldwork on mapping and security in Israel, investigated the effects of automation on job security in the transportation sector at the World Maritime University (a UN institution under the IMO) and, more recently, was a postdoc in an interdisciplinary research project with engineers, mathematicians and anthropologists about AI, data security and cryptography at Aalborg University. In addition to her teaching responsibilities at Moesgaard, she is research project leader for MANTRA's co-financed project with SEGES about migrants' workplace safety on Danish farms.
In this talk we will give a brief introduction to oblivious pseudorandom functions (OPRFs) and their applications. Then, we present a cryptanalysis of the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. We demonstrate an attack on one assumption, the auxiliary one-more assumption, underlying the security of the scheme. This leads to an attack on the oblivious PRF itself. The attack allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. This breaks the pseudorandomness of the OPRF. We first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second, subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.
Joint work with Andrea Basso, Péter Kutas, Christophe Petit and Antonio Sanso.
Simon is a PhD student at Royal Holloway, University of London, in the Information Security Group. His research interests span various aspects of post-quantum cryptography, with a special focus on cryptanalysis and isogeny-based cryptography. More generally, Simon is interested in various applications of pure mathematics to cryptography.
Will cryptography survive quantum adversaries? Public-key cryptosystems based on post-quantum assumptions provide part of the answer. But what about the security of the many other cryptographic protocols and primitives? While some of these primitives directly inherit the post-quantum security of the underlying assumptions, many classical cryptosystems are proved secure by "rewinding" an interactive adversary to record its responses to multiple different challenges. Unfortunately, this technique is inapplicable if the adversary is running a quantum algorithm, since measuring the response can irreversibly disturb the adversary's state.
In this talk, I will present a new quantum rewinding technique that enables recording the adversary's responses on any number of challenges. This opens the door to quantum security for many tasks. Our primary application here is to prove that Kilian's four-message succinct argument system for NP is secure against quantum attack (assuming the post-quantum hardness of the Learning with Errors problem).
Joint work with Alessandro Chiesa, Fermi Ma, and Mark Zhandry.
Note the changed time.
Nick is an Assistant Professor of Computer Science at the University of Warwick, and a member of the Theory and Foundations group. His work focuses on post-quantum cryptographic proof systems, and his interests include interactive proof systems and zero knowledge in general, post-quantum cryptography, quantum information, coding theory and computational complexity. Previously, Nick was a postdoc at Boston University, and he received his PhD from UC Berkeley.
The web constitutes a complex infrastructure, with entities such as DNS servers, web servers, and web browsers interacting and communicating using diverse technologies. In light of the numerous reported attacks on the web infrastructure and web applications, and the growing complexity of the web, rigorous and systematic security and privacy analyses of the web infrastructure, web standards, and web applications are essential.
To carry out such analyses, we created the most comprehensive formal model of the web infrastructure to date. This model, called Web Infrastructure Model (WIM), allows us to precisely state and prove security and privacy properties of web standards and applications. Using this holistic approach, we can identify vulnerabilities and develop fixes. Moreover, we can even exclude unknown classes of attacks on the systems we analyze.
We successfully applied the WIM to formally analyze several crucial and high-risk web protocols, including the very popular single sign-on and authorization standards OAuth and OpenID Connect (used, e.g., by Google, Facebook, Microsoft, and Amazon). Our analyses uncovered several severe attacks; we suggested fixes, and for the first time proved security and privacy properties for the fixed standards. Several protocol specifications have been revised according to our proposals. Regarding privacy in single sign-on, our results, however, show that no existing protocol provides this property. To fill this gap, we used the WIM to design a new single sign-on system, SPRESSO (Secure Privacy-Respecting Single Sign-On), which provably provides strong security and privacy guarantees.
In an ongoing project, we are currently mechanizing the WIM, which enables us to extract verified code for usage in practice and eases the reuse of proofs for future, modular analyses. Our approach builds upon F*, an ecosystem consisting of a functional programming language with a rich dependent type system and an SMT-based theorem prover. We have already applied the first version of our new mechanized framework to prove security of the Signal protocol (used, e.g., by WhatsApp and Skype) and the ACME standard (used, e.g., by Let's Encrypt).
Guido Schmitz graduated in Computer Science at the University of Trier (Germany) and obtained his doctorate in 2019 at the University of Stuttgart. He became a lecturer in the Information Security Group at Royal Holloway in December 2021.
Women in Saudi Arabia were until 2018 banned from driving. This study used the theory of connective action to explore the role of social media in the women's campaign for the right to drive, and looked at how activists used digital media platforms to get their messages across to the Saudi publics and the international community. Findings showed how both connective action and collective action offer tactics that can complement each other in an online movement.
Leysan Khakimova Storie is an assistant professor in the Department of Strategic Communication at Lund University, Sweden. After getting her MA degree in intercultural communication and conflict resolution from the University of Kansas, she worked and studied in the PhD program at the University of Maryland. Her dissertation focused on networked public diplomacy. Prior to working at Lund University, she worked as Assistant Professor in strategic communication at Zayed University in the United Arab Emirates. Leysan Storie's current research interests relate to global strategic communication, public diplomacy, and women's contributions to communication. Her previous studies have been published in the New Media and Society journal, the International Journal of Press/Politics, Journal of Public Relations Research, Public Relations Review, and the International Journal of Strategic Communication.
Ali Khalil is assistant professor at the Department of International Affairs and Social Sciences at Zayed University in the United Arab Emirates. He holds a PhD and MA in Middle East Politics from Durham University and a BA in Journalism from the Lebanese University in Beirut. He has more than 15 years of experience in journalism, including 13 years with the global newswire Agence France-Presse (AFP) as Gulf correspondent based in Dubai, and with the pan-Arab newspaper Asharq Al-Awsat in London. He also served as adjunct faculty at the Department of Mass Communication at the American University of Sharjah from 2012 to 2016. His current research interests are interdisciplinary, combining media studies with the social sciences, with particular focus on social movement activism, women's rights and gender equality.
Accessible and powerful machine learning has its downsides. A recent New York Times article profiled clearview.ai, an unregulated facial recognition service that has downloaded over 3 billion photos of people from the Internet and used them to build facial recognition models for citizens without their knowledge or permission. Clearview.ai demonstrates just how easy it is to build invasive tools for monitoring and tracking using deep learning. So how do we protect ourselves against unauthorized third parties building facial recognition models that recognize us wherever we may go?
In this talk, I will present our system Fawkes, an algorithm and software tool that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes "poisons" models that try to learn what you look like by putting hidden changes into your photos. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media or sending them to friends, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the "cloaked" images will teach the model a highly distorted version of what makes you look like you, thus protecting your privacy.
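As a rough illustration of the cloaking idea, the sketch below runs a PGD-style perturbation that pulls an image's embedding toward a different identity under an imperceptibility budget. This is a minimal sketch, not the authors' algorithm: resnet18 stands in for a real face-embedding model, and the tensors are random placeholders for actual photos.

```python
# PGD-style "cloaking" sketch (illustration only, not the Fawkes code).
# Nudges an image so a feature extractor embeds it near a *different*
# identity, while an L_inf budget keeps the change invisible to humans.
import torch
import torchvision.models as models

extractor = models.resnet18(weights=None)  # stand-in for a face-embedding model
extractor.eval()

def cloak(image, target, eps=0.03, steps=40, alpha=0.005):
    """Pull image's embedding toward target's embedding, ||delta||_inf <= eps."""
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = extractor(target).detach()
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(extractor(image + delta), target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # step toward the decoy identity
            delta.clamp_(-eps, eps)             # keep the cloak imperceptible
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Placeholder tensors; in practice these would be a user photo and a decoy face.
cloaked = cloak(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```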
Note the changed time.
Shawn Shan is a Ph.D. student at the University of Chicago. He works in the SAND Lab, co-advised by Professor Ben Y. Zhao and Professor Heather Zheng. His research lies at the intersection of machine learning and security and privacy, exploring the limitations, vulnerabilities, and privacy implications of neural networks. Shawn received his Bachelor of Science in computer science from the University of Chicago in 2020. He has also spent two summers at Facebook as a software engineer on the privacy team.
Perceptual hashing is widely used to search for similar images in digital forensics and cybercrime study. Unfortunately, the robustness of perceptual hashing algorithms is not well understood in these contexts. In this work, we examine the robustness of perceptual hashing and the security applications that depend on it, both experimentally and empirically.
We develop a series of attack algorithms to subvert perceptual-hashing-based image search. This is done by generating attack images that effectively enlarge the hash distance to the original image while introducing minimal visual changes, so that the original image is not returned, or is ranked low, when searching with the attack image. We design the attack algorithms under a black-box setting, augmented with novel designs (e.g., grayscale initialization) to improve efficiency and transferability. We evaluate our attack against the standard pHash as well as its robust variants. We then empirically test against real-world reverse image search engines including TinEye, Google, Microsoft Bing, and Yandex. We find that our attack is highly successful on TinEye and Bing, and is moderately successful on Google and Yandex.
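To make the attack goal concrete, here is a toy black-box sketch that randomly perturbs an image, within a per-pixel budget, until its pHash distance from the original exceeds a threshold. It uses the open-source imagehash library's pHash as a stand-in target; the paper's actual algorithms are far more query-efficient, and the file name below is hypothetical.

```python
# Toy black-box evasion of pHash-based image search (illustration only).
import numpy as np
from PIL import Image
import imagehash

def greedy_evade(img, budget=8, target=16, trials=2000, seed=0):
    """Randomly perturb `img` (L_inf <= budget per pixel) until its pHash
    Hamming distance from the original reaches `target` bits."""
    rng = np.random.default_rng(seed)
    orig_hash = imagehash.phash(img)
    base = np.asarray(img, dtype=np.int16)
    best, best_dist = base.copy(), 0
    for _ in range(trials):
        cand = np.clip(best + rng.integers(-2, 3, size=base.shape),
                       base - budget, base + budget)      # stay in pixel budget
        cand_img = Image.fromarray(np.clip(cand, 0, 255).astype(np.uint8))
        d = orig_hash - imagehash.phash(cand_img)          # black-box hash query
        if d > best_dist:                                  # keep steps that help
            best, best_dist = cand, d
            if best_dist >= target:
                break
    return Image.fromarray(np.clip(best, 0, 255).astype(np.uint8))

# attack = greedy_evade(Image.open("photo.jpg").convert("RGB"))  # hypothetical file
```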
Qingying Hao is currently a CS PhD student at the University of Illinois Urbana-Champaign. Her research focuses on the intersection of security and machine learning. Her recent work explores developing robust ML systems for effective detection of and defense against online attacks.
For more than a decade, the United States military has conceptualized and discussed the internet and related systems as "cyberspace," understood as a "domain" of conflict like land, sea, air, and outer space. How and why did this concept become entrenched in U.S. doctrine? What are its effects? Focusing on the emergence and consolidation of this terminology, this article makes three arguments about the role of language in cybersecurity policy. First, I propose a new, politically consequential category of metaphor: foundational metaphors, implied by using particular labels rather than stated outright. These metaphors support specific ways to understand complex issues, provide discursive resources to some arguments over others, and shape policy contestation and outcomes. Second, I present a detailed empirical study of U.S. military strategy and doctrine that traces the emergence and consolidation of terminology built on the "cyberspace domain." This concept supported implicit metaphorical correspondences between the internet and physical space, yielding specific analogies and arguments for understanding the internet and its effects. Third, I focus on the rhetorical effects of this terminology to reveal two important institutional consequences: this language has been essential to expanding the military's role in cybersecurity, and specific interests within the Department of Defense have used this framework to support the creation of U.S. Cyber Command. These linguistic effects in the United States also have implications for how other states approach cybersecurity, for how international law is applied to cyber operations, and for how International Relations understands language and technological change.
Note the changed time.
Jordan Branch is Assistant Professor of Government at Claremont McKenna College. He is a former fellow at the American Council of Learned Societies, and has held positions at Brown University and the University of Southern California. His publications include The Cartographic State: Maps, Territory, and the Origins of Sovereignty (2014, Cambridge) and articles in International Organization, International Studies Quarterly, the European Journal of International Relations, International Theory, and Territory, Politics, Governance.
Money creation is typically misrepresented in economics textbooks, and as a result also in popular understanding. The two most common misrepresentations are: (1) That banks receive deposits when households save, and then lend them out. Instead, new bank loans create new deposits, and repayment of bank loans extinguishes deposits. (2) That the central bank either fixes the amount of money in circulation, or that a given amount of central bank money is multiplied up into more loans and deposits. Instead, the quantity of money in circulation is mainly determined by the commercial decisions of banks, but with monetary policy and prudential regulation acting as constraints on private money creation. The author will discuss the mechanics of money creation in the modern economy, with an application to CBDC, a potential new form of central bank money.
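As a worked illustration of the first point, here is a toy double-entry sketch (with purely illustrative numbers) of the claim that a new loan creates a new deposit, and repayment extinguishes it, with no pre-existing savings being "lent out".

```python
# Toy bank balance sheet: lending creates deposits, repayment destroys them.
bank = {"assets": {"reserves": 100, "loans": 0},
        "liabilities": {"deposits": 100}}

def make_loan(amount):
    bank["assets"]["loans"] += amount          # new asset: the borrower's IOU
    bank["liabilities"]["deposits"] += amount  # new money: the borrower's deposit

def repay_loan(amount):
    bank["assets"]["loans"] -= amount
    bank["liabilities"]["deposits"] -= amount  # deposit (money) is extinguished

make_loan(50)
print(bank)   # deposits rose 100 -> 150, with no change in reserves
repay_loan(50)
print(bank)   # deposits back to 100: repayment destroyed the money
```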
Michael Kumhof is Senior Research Advisor in the Research Hub of the Bank of England. He is responsible for co-leading this unit, and for helping to formulate its research agenda. His previous position was Deputy Division Chief, Economic Modeling Division, IMF, where his responsibilities included the development of the IMF's global DSGE simulation model, GIMF. His main research interests are monetary reform (including central bank digital currencies and full reserve banking), the macroeconomic implications of the fact that banks are creators of money rather than intermediaries of savings, the role of economic inequality in causing imbalances and crises, and the macroeconomic effects of fossil fuel depletion. Michael taught economics at Stanford University from 1998 to 2004. He worked in corporate banking, for Barclays Bank PLC, from 1988 to 1993. His work has been published by AER, JME, AEJ Macro, JIE, JEDC, JMCB, EER, and the Journal of Macroeconomics, among others. Dr. Kumhof is a citizen of Germany.
We present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attacks are both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. The attacks have been implemented and tested against all major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib and PALISADE, when computing several functions that often arise in applications of the CKKS scheme to machine learning on encrypted data, such as mean and variance computations and approximations of the logistic and exponential functions by their Maclaurin series.
The attacks show that the traditional formulation of IND-CPA security (indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes. We provide a solid theoretical basis for the security evaluation of homomorphic encryption on approximate numbers (against passive attacks) by proposing new definitions that naturally extend the traditional notion of IND-CPA security to the approximate computation setting. We then discuss implications and separations among different definitional variants, and possible methods to modify the CKKS scheme to avoid our attack and provably achieve our stronger security definition.
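The core algebraic observation can be shown in one dimension: in an approximate scheme, decryption returns the message plus the noise exactly, so anyone who sees both a ciphertext and its decrypted result can cancel the noise and solve for the secret. The toy example below is a scalar illustration only; real CKKS works over polynomial rings.

```python
# Scalar toy version of the key-recovery idea behind the CKKS attacks.
import random

q = 2**31 - 1                              # toy prime modulus
s = random.randrange(q)                    # secret key
m, e = 123456, random.randrange(-50, 51)   # message and small noise

a = random.randrange(1, q)
b = (a * s + m + e) % q                    # "ciphertext" (a, b)

y = (m + e) % q                            # what approximate decryption reveals
recovered_s = ((b - y) * pow(a, -1, q)) % q  # cancel m + e, solve for s
assert recovered_s == s                    # full key recovery from one result
```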
Daniele Micciancio received his PhD in Computer Science from MIT in 1998, and in 1999 he joined the faculty of the Computer Science and Engineering department at UC San Diego, where he has been a full professor since 2009. He has worked in many areas of theoretical computer science and cryptography, but he is best known for his pioneering work on the foundations of lattice-based cryptography. Among his best-known results are the proof that the Shortest Vector Problem is NP-hard to approximate within some constant factor (FOCS 1998), the first (deterministic) single exponential time algorithm to solve the closest vector problem (with Voulgaris, STOC 2010), and the development of Gaussian techniques for the analysis of lattice cryptography (with Regev, FOCS 2004). But the result that he is most proud of is the first efficient cryptographic construction provably secure based on the worst-case hardness of algebraic lattices (FOCS 2002), which opened the door to the development of efficient lattice-based cryptography, including the design of the SWIFFT hash function (with Lyubashevsky, Peikert and Rosen, FSE 2008) and the FHEW fully homomorphic encryption scheme (with Ducas, Eurocrypt 2015). His work has been recognized by several awards, including the Machtey best paper award (FOCS 1998), the Sprowls PhD thesis award (1999), an NSF CAREER award (2001), a Hellman Fellowship (2001), and an Alfred P. Sloan Fellowship (2003). He was an invited speaker at PKC 2010 and Eurocrypt 2019, and Beeger lecturer at the Netherlands mathematical congress in 2014. He served as program chair of TCC 2010, Crypto 2019 and Crypto 2020, general chair of TCC 2014, and associate editor of Information and Computation, SIAM Journal on Computing and the Journal of Cryptology. In 2019 he was named a Fellow of the International Association for Cryptologic Research for his many research and service contributions.
Political activism is a worldwide force in geopolitical change and has, historically, helped lead to greater justice, equality, and stopping human rights abuses. A modern revolution---an extreme form of political activism---pits activists, who rely on technology for critical operational tasks, against a resource-rich government that controls the very telecommunications network they must use to operationalize, putting the technology they use under extreme stress. Our work presents insights about activists' technological defense strategies from interviews with 13 political activists who were active during the 2018-2019 Sudanese revolution. We find that politics and society are driving factors of security and privacy behavior and app adoption. Moreover, a social media blockade can trigger a series of anti-censorship approaches at scale, while a complete internet blackout can cripple activists' use of technology. Even though the activists' technological defenses against the threats of surveillance, arrest and physical device seizure were low tech, they were largely sufficient against their adversary. Through these results, we surface key design principles, but we observe that the generalization of design recommendations often runs into fundamental tensions between the security and usability needs of different user groups. Thus, we provide a set of structured questions in an attempt to turn these tensions into opportunities for technology designers and policy makers.
Note the changed time.
Alaa Daffalla recently completed her Master's degree in Computer Science at the University of Kansas. Her interests are broadly in usable security and privacy; specifically, she is interested in surfacing the security and privacy practices and behaviors of non-western populations.
I will discuss how to break GEA-1. The attack is based on a very particular relation between two of the three LFSRs internally used by GEA-1. As this relation is very unlikely to happen by chance, there is a strong indication that the security of GEA-1 was deliberately weakened to 40 bits in order to fulfil European export restrictions. I will also briefly explain how to construct corresponding sets of LFSRs.
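For background, a linear-feedback shift register (LFSR) is the building block in question: GEA-1 derives its keystream from three such registers, and the attack exploits that two of them can jointly reach only about 2^40 of their nominal initial states after key loading. Below is a generic Fibonacci LFSR sketch to fix ideas; the width and tap positions are illustrative, not GEA-1's actual parameters.

```python
# Generic Fibonacci LFSR keystream generator (illustrative taps, not GEA-1's).
def lfsr(state: int, taps: tuple, nbits: int):
    """Yield output bits; feedback is the XOR of the tapped state bits."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1           # XOR together the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

ks = lfsr(state=0x1A2B3C4, taps=(0, 1, 3, 5), nbits=31)
print([next(ks) for _ in range(16)])
```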
Gregor Leander received his diploma degree in mathematics in 2001 from the University of Bremen, Germany. In 2004 he received his Ph.D. degree, focusing on the study of Boolean functions, from the Ruhr University Bochum, Germany, under the supervision of Hans Dobbertin. From 2006 to 2007 he spent one year as a post-doctoral researcher at the University of Toulon in France. From 2008 to 2012 he was an associate professor at the Technical University of Denmark in Lyngby, before he returned to the Ruhr University Bochum in 2013. Since 2015 he has been a professor at the Ruhr University Bochum. His research interests include the study of Boolean functions and the design and analysis of symmetric cryptography.
We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T·polylog(T) and space S·polylog(T), and the verifier runs in time n·polylog(T), where n is the input length. Our protocol relies on hidden order groups, which can be instantiated, assuming a trusted setup, from the hardness of factoring (products of safe primes), or, without a trusted setup, using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform.
Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we:
– Identify a significant gap in the proof of security of DARK.
– Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way.
– Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption); a toy sketch of the base protocol appears below.
In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
This is joint work with Alex Block, Justin Holmgren, Alon Rosen and Ron Rothblum, and will appear at CRYPTO '21. The full version of the paper is available here: https://eprint.iacr.org/2021/358.
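For readers unfamiliar with Pietrzak's PoE, the sketch below shows the base interactive protocol that the last item above generalizes: the prover convinces a verifier that y = x^(2^T) in a group of unknown order by repeatedly halving the exponent. The honest prover and the verifier are folded into one loop, the modulus is a toy RSA-style value, and T is assumed to be a power of two.

```python
# Pietrzak-style proof of exponentiation (toy sketch, honest prover folded in).
import random

def poe_verify(x, y, T, N):
    """Check y = x^(2^T) mod N via interactive halving."""
    x, y = x % N, y % N
    while T > 1:
        mu = pow(x, pow(2, T // 2), N)   # prover's message: mu = x^(2^(T/2))
        r = random.randrange(1, 2**32)   # verifier's random challenge
        x = (pow(x, r, N) * mu) % N      # fold: x' = x^r * mu
        y = (pow(mu, r, N) * y) % N      #       y' = mu^r * y
        T //= 2                          # invariant: y = x^(2^T) if honest
    return y == pow(x, 2, N)             # base case: one squaring

N = 10007 * 10009                        # toy modulus; real use needs ~2048 bits
x, T = 5, 64
y = pow(x, pow(2, T), N)
assert poe_verify(x, y, T, N)
```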
Note the changed time.
Pratik is a Postdoctoral Research Fellow at Carnegie Mellon University working with Prof. Vipul Goyal. His research interests broadly lie in the theory of cryptography, and more specifically in non-malleability, zero-knowledge and secure multi-party computation. Prior to this, he received his Ph.D. from the University of California Santa Barbara under Prof. Stefano Tessaro and Prof. Huijia Lin.
This work investigates how a population of end-users with especially salient security and privacy risks, sex workers, conceptualizes and manages their digital safety. The commercial sex industry is increasingly Internet-mediated. As such, sex workers are facing new challenges in protecting their digital privacy and security and avoiding serious consequences such as stalking, blackmail, and social exclusion. Through interviews (n=29) and a survey (n=65) with sex workers in European countries where sex work is legal and regulated, we find that sex workers have well-defined safety goals and clear awareness of the risks to their safety: clients, deficient legal protections, and hostile digital platforms. In response to these risks, our participants share sophisticated strategies for protecting their safety, but use few tools specifically designed for security and privacy. Our results suggest that if even high-risk users with clear risk conceptions view existing tools as insufficiently effective to merit the cost of use, these tools are not actually addressing their real security needs. Our findings underscore the importance of more holistic design of security tools to address both online and offline axes of safety.
Note the changed time.
Allison is a PhD candidate in computer science at the University of Michigan. Her research sits in the intersection between technology and society, particularly in the areas of privacy and security. Her recent work has focused on how technology exacerbates marginalization and impacts digital safety. Allison is also a Research Fellow at the Center on Privacy & Technology at Georgetown Law, where she contributes to research on immigrant surveillance.
We show that a constant factor approximation of the shortest and closest lattice vector problem in any norm can be computed in time $2^{0.802\, n}$. This contrasts with the $2^n$ (gap)-SETH based lower bound that already applies for some tiny, but constant, approximation factor. We achieve this result by reducing to the Euclidean case by means of geometric ideas related to high-dimensional coverings. This is joint work with Fritz Eisenbrand and Thomas Rothvoss, respectively.
Note the changed time.
I am a Ph.D. student at École Polytechnique Fédérale de Lausanne (EPFL), supervised by Fritz Eisenbrand. My work is mostly on algorithmic questions related to lattice problems and integer programming.
Digital personality tracking, affective computing, and emotional artificial intelligence (AI) are major growth areas in computer science. Here, I argue that the design of contemporary clinical and popular technologies for tracking human emotions, such as smartphone apps or wearable devices, is grounded in a longer, often circuitous history of diverse fields including physiology, clinical psychology, cybernetics, and communication studies. Through this historical analysis, I explain the ongoing impact of the psychological sciences on contemporary Big Data and AI research and on social media platform design and experimentation, and I critique the racial, gendered, and cultural assumptions at work in ostensibly neutral contemporary AI systems for measuring and analyzing emotional data.
Note the changed time.
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University.
April 2018: the Russian Internet (RuNet) watchdog Roskomnadzor (RKN) orders the blocking of the popular Telegram messenger. RuNet users respond with an unprecedented wave of actions, ranging from satirical memes to flashmobs and rallies. The movement for the defense of Telegram, quickly baptized "Digital resistance", has a rich "e-repertoire of contention", inspiring a burst of technical creativity, with dozens of new obfuscation and circumvention protocols, proxies and VPNs designed by tech-savvy users -- and by the Telegram team itself -- in order to help bypass governmental censorship.
Drawing on perspectives from science and technology studies (STS), infrastructure studies in particular, and relying on a qualitative approach including interviews with Russian technical experts, ISPs, and Internet freedom activists, this presentation analyzes the Telegram ban in Russia as a socio-technical controversy that unveils the tensions between the governmental narrative of a "sovereign Internet" based on Russian-made censorship and filtering technologies, and the transnational character of global Internet infrastructures.
The original research paper that serves as a material for this seminar was co-authored by Ksenia Ermoshina and Francesca Musiani from the Center for Internet and Society and can be accessed here: https://firstmonday.org/ojs/index.php/fm/article/view/11704
Ksenia Ermoshina is a researcher at the Center for Internet and Society of the CNRS, France. She holds a PhD in socio-economy of innovation from the Mines ParisTech High School of Engineering and also works as a UX designer at the Delta Chat messenger. Her research interests touch upon surveillance and censorship studies, social studies of encryption, civic hacking, infrastructure studies, conflict and crisis communications and information control practices in at-risk areas. Besides her main geographical zone of interest (Russia and Eastern Europe), she focuses on the international circulation of Internet regulation norms and technologies. She develops a multidisciplinary approach to information control studies, combining innovative methods such as network measurements and traffic analysis with web-ethnography and qualitative interviews.
ASCON authenticated encryption with associated data (AEAD) is a CAESAR standard for lightweight applications and a round 2 candidate in the NIST lightweight cryptography standardization competition. ASCON has been in the literature for a while; however, there has been no AEAD construction that is secure and at the same time lighter than ASCON. In this talk, we show how to overcome the challenge of constructing a permutation that is lighter than the ASCON permutation while ensuring similar performance. Using this permutation we achieve a lightweight AEAD, which we call Sycon. Hardware implementation results show that the Sycon permutation has a 5.35% reduced area compared to the ASCON permutation. This leads to a remarkable area reduction for Sycon AEAD, of about 14.95% compared to ASCON AEAD. We regard Sycon as a new milestone, as it is the lightest among all the AEADs belonging to the ASCON family.
Sumanta Sarkar is a Senior Research Fellow at the University of Warwick. He has both industry and academic experience. He completed his PhD in 2008 at the Indian Statistical Institute. He was a postdoctoral researcher at INRIA, France and University of Calgary, Canada. Before joining Warwick, he spent five years as a research scientist in the R&D division of Tata Consultancy Services in India. His current research topics are post-quantum cryptography, lightweight cryptography and IoT security.
The Internet isn't gender equitable. In over two-thirds of countries worldwide, there are more male than female users online. And, in India, only 29% of Internet users were reported to be women in 2017. In this talk, I will share findings on how safety & privacy threats limit women's access and free expression online, drawn from our gender equity research in seven countries, spanning nearly 2 years. I will present novel and chilling abuse threats enabled by pervasive social media platforms, resulting in cyberstalking, impersonation and personal data leakages, and how our participants experienced and coped with the threats. I will also share how inadequate privacy on devices led participants to create privacy-preserving practices while sharing phones, such as locks, deleting traces, and avoiding specific digital activities. I will then discuss design implications towards a safer, more private Internet and how they have been applied to various Google products.
Paper sources: https://research.google/pubs/pub47247/ https://research.google/pubs/pub47721/
Note the changed time.
Nithya Sambasivan is a Staff Researcher at PAIR and leads the HCI-AI group at Google Research India. Her research focuses on designing responsible AI systems by centering marginalized communities in the Global South. Sambasivan is an affiliate faculty at the Paul G. Allen Center for CS & Engineering at the University of Washington. She publishes in the areas of HCI, ICTD, and Privacy/Security. She graduated with a Ph.D. from University of California, Irvine and an MS from Georgia Tech, focusing on HCI and under-represented communities in India.
We present an ethnographic study of secure software development processes in a software company using the anthropological research method of participant observation. Two PhD students in computer science trained in qualitative methods were embedded in a software company for 1.5 years of total research time. The researchers participated in everyday work activities such as coding and meetings, and observed software (in)security phenomena both through investigating historical data (code repositories and ticketing system records), and through pen-testing the developed software and observing developers' and management's reactions to the discovered vulnerabilities. Our study found that 1) security vulnerabilities are sometimes intentionally introduced and/or overlooked due to the difficulty of managing the various stakeholders' responsibilities in an economic ecosystem, and cannot simply be blamed on developers' lack of knowledge or skills; and 2) accidental vulnerabilities discovered in the pen-testing process produce different reactions in the development team, often contrary to what a security researcher would predict. These findings highlight the nuanced nature of the root causes of software vulnerabilities and indicate the need to take into account a significant amount of contextual information to understand how and why software vulnerabilities emerge during software development. Rather than simply addressing deficits in developer knowledge or practice, this research sheds light on at-times forgotten human factors that significantly impact the security of software developed by actual companies. Our analysis also shows that improving software security in the development process can benefit from a co-creation model, where security experts work side by side with software developers to better identify security concerns and provide tools that are readily applicable within the specific context of the software development workflow.
Note the changed time.
Armin Ziaie Tabari is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of South Florida, under the supervision of Prof. Xinming Ou. His research focuses on usable security and privacy, Internet of Things security, and honeypots.
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this talk, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP:
– A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. This tradeoff, which ranges roughly from enumeration to sieving, is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
– A quantum algorithm that runs in time 2^{0.9535n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [ADRSD15], which has time and space complexity 2^{n+o(n)}.
– A classical algorithm for SVP that runs in time 2^{1.741n+o(n)} and space 2^{0.5n+o(n)}. This improves over an algorithm of [CCL18] that has the same space complexity.
The time complexities of our classical and quantum algorithms are obtained using a known upper bound on a quantity related to the kissing number of a lattice, namely 2^{0.402n}. In practice this quantity is much smaller and is often 2^{o(n)} for most lattices; in that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.
Yixin Shen is a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. She received her PhD from Université de Paris under the supervision of Frédéric Magniez. Her research focuses on classical and quantum algorithms for lattice-based cryptanalysis.
WebAuthn, forming part of FIDO2, is a W3C standard for strong authentication, which employs digital signatures to authenticate web users whilst preserving their privacy. Owned by users, WebAuthn authenticators generate attested and unlinkable public-key credentials for each web service to authenticate users. Since the loss of authenticators prevents users from accessing web services, usable recovery solutions preserving the original WebAuthn design choices and security objectives are urgently needed.
We examine Yubico's recent proposal for recovering from the loss of a WebAuthn authenticator by using a secondary backup authenticator. We analyse the cryptographic core of their proposal by modelling a new primitive, called Asynchronous Remote Key Generation (ARKG), which allows some primary authenticator to generate unlinkable public keys for which the backup authenticator may later recover corresponding private keys. Both processes occur asynchronously without the need for authenticators to export or share secrets, adhering to WebAuthn's attestation requirements. We prove that Yubico's proposal achieves our ARKG security properties under the discrete logarithm and PRF-ODH assumptions in the random oracle model. To prove that recovered private keys can be used securely by other cryptographic schemes, such as digital signatures or encryption schemes, we model compositional security of ARKG using composable games, extended to the case of arbitrary public-key protocols. As well as being more general, our results show that private keys generated by ARKG may be used securely to produce unforgeable signatures for challenge-response protocols, as used in WebAuthn. We conclude our analysis by discussing concrete instantiations behind Yubico's ARKG protocol, its integration with the WebAuthn standard, performance, and usability aspects.
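To give a feel for how such asynchronous derivation can work, here is a hedged sketch of a discrete-log-style ARKG instantiation over a toy Schnorr group: the primary derives a fresh, unlinkable public key for the backup from a Diffie-Hellman shared secret, and the backup later recovers the matching private key from the stored credential alone. Real instantiations use elliptic curves and add MAC/KDF domain separation that this sketch omits; all parameters here are toy-sized and illustrative.

```python
# ARKG-style key derivation over a toy Schnorr group (illustration only).
import hashlib, secrets

p, q, g = 1019, 509, 4                       # g generates the order-q subgroup mod p

def kdf(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

# Backup authenticator's long-term key pair (created once; device then offline).
sk_b = secrets.randbelow(q)
pk_b = pow(g, sk_b, p)

# Primary authenticator derives a fresh public key without contacting the backup.
e = secrets.randbelow(q)
cred = pow(g, e, p)                          # credential stored with the account
ck = kdf(pow(pk_b, e, p))                    # DH shared secret -> key offset
derived_pk = (pk_b * pow(g, ck, p)) % p      # derived pk = g^(sk_b + ck)

# Later: the backup recovers the private key from the credential alone.
sk_recovered = (sk_b + kdf(pow(cred, sk_b, p))) % q
assert pow(g, sk_recovered, p) == derived_pk
```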
Daniel is a postdoc in the Information Security Group at Royal Holloway, University of London. He recently graduated from his PhD at the University of Surrey. His research interests include privacy-preserving protocols and the primitives that comprise them, particularly those based on lattices.
According to the United States Department of Justice, every 73 seconds, an American is sexually assaulted. However, sexual assault is under-reported. Globally, 95% of sexual assault cases are unreported, and at most, 5 out of every 1,000 perpetrators end up in prison. Online anonymous third-party reporting systems (O-TPRSs) are being developed to encourage reporting of sexual assaults and to apprehend serial offenders. We conducted focus groups and interviews with participants who are sexual assault survivors, support workers, or both. We asked questions related to participants' concerns with trusting an O-TPRS. Our results suggest that participants had technological and emotional concerns that are related to survivors' security and privacy. In this talk, I will discuss survivors' concerns with trusting and using an O-TPRS and provide insights into the challenges of designing O-TPRSs to increase the reporting of sexual assault.
Note the changed time.
Borke is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of British Columbia. Borke obtained her Master's degree in Computer Science from Carleton University. Her research interests are usable security and privacy, human-computer interaction, and designing for people, and she has published papers in top-tier HCI and usable security and privacy conferences. Borke's current research focuses on how technological solutions can be designed to provide support for sexual assault survivors without facilitating re-victimization.
Due to the no-cloning principle, quantum states appear to be very useful in cryptography. But this very same property also has drawbacks: when receiving a quantum state, it is nearly impossible for the receiver to efficiently check non-trivial properties of that state without destroying it.
In this talk (which does not assume prior knowledge of quantum or post-quantum cryptography), I will introduce Non-Destructive Zero-Knowledge Proofs on Quantum States. Our method binds a quantum state to a classical encryption of that quantum state (whose security reduces to the hardness of the Learning With Errors problem). That way, the receiver can obtain guarantees on the quantum state by asking the sender to prove properties directly on this classical encryption. This method is therefore non-destructive, and it is possible to verify a very large class of properties that would be impossible to verify with a more standard quantum channel. For instance, we can force the sender to send different categories of states depending on whether they know a classical secret or not.
I will also explain how to extend this method to the multi-party setting, and how it can prove useful to distribute a GHZ state between different parties. The protocol ensures that only parties knowing a secret can be part of this GHZ, and that the identity of the parties that are part of the GHZ remains hidden to any malicious party. A direct application would be to allow a server to create a secret sharing of a qubit between unknown parties, authorized for example by a third party Certification Authority.
After obtaining a Master in Theoretical Computer Science at the École Normale Supérieure Paris-Saclay, Léo Colisson started his PhD in quantum cryptography in 2018 at LIP6, Sorbonne University (France), supervised by Prof. Elham Kashefi and Prof. Antoine Joux. Being fascinated by both classical and quantum cryptography, he has spent most of his academic time trying to improve quantum cryptography using tools coming from classical cryptography. More specifically, his main research interests are related to classical-client blind quantum computing, remote state preparation, composable security, and lattice-based cryptography.
Isogeny-based cryptography is a relatively new branch of post-quantum cryptography that makes use of elliptic curves and maps between them. Compared to other post-quantum schemes it is fairly slow (while still being practical for most applications) but uses very little memory; for some schemes the memory requirements are even on par with low-memory classical (meaning not-post-quantum) cryptography. We will look at various failed attempts to cryptanalyze the isogeny-based NIST submission 'SIKE'. We have observed many people rediscovering unsuccessful but natural attacks on this scheme and decided to write up a list of our own failed attempts to save others the time that we invested in these ideas. We will first introduce the mathematical background necessary to understand this scheme and then give an overview of the attack avenues that we have tried. This is joint work with Dr Lorenz Panny.
Dr Chloe Martindale is a Lecturer in Cryptography at the University of Bristol. She obtained her PhD in algebraic number theory in 2018 from both Leiden University and Bordeaux University before moving to Eindhoven University of Technology to do a postdoc in cryptography with Prof. dr. Tanja Lange. Her research now centres on post-quantum cryptography, with a special focus on isogeny-based cryptography.
Digital technologies, such as mobile devices and social networks, play an increasingly significant role in intimate partner violence (IPV) settings, including domestic abuse, stalking, and surveillance of victims of abusive partners. IPV survivors increasingly report that abusers install spyware on devices, track locations, monitor communications, and cause emotional and physical harm. In collaboration with the IPV Tech Research Group and the NYC Mayor's Office to End Domestic and Gender-Based Violence, I develop tools and technologies to improve the safety, security, and privacy of survivors of IPV. In this talk, I will discuss my ongoing research in helping survivors understand, navigate, and address technology abuse and how this work led to establishing the Clinic to End Tech Abuse (CETA) to help survivors determine whether their abusers are using technology as a tool to surveil and harm them and to mitigate this abuse. I will also discuss our recent work transitioning to a remote clinic to help survivors address technology abuse as well as the privacy and security challenges that have surfaced during quarantine due to COVID-19.
Note the changed time.
Diana is a PhD candidate in the Department of Computer and Information Science at Cornell Tech. She uses computational and social science methods to develop new tools, technologies, and theories to detect and mitigate digital harms and inequities. In collaboration with the Mayor's Office to End Domestic and Gender-Based Violence (ENDGBV) in New York City, she is working on advancing the current understanding of how digital technologies are used as tools of abuse in the context of IPV and adolescent interpersonal relationships. She is a 2020-22 Facebook Fellow, a 2019-20 Digital Life Doctoral Fellow, a recipient of the 2020 Cornell Serve in Place Grant, a recipient of the 2016 Engaged Cornell Graduate Student Grant, a 2015-18 NYU Visiting Scholar, and an Affiliate and 2015-16 Fellow at the Data and Society Research Institute. Diana is a graduate of NYU-ITP and Columbia University.
Transgender people are marginalized, facing specific privacy concerns and high risk of online and offline harassment, discrimination, and violence. They are also known to use technology more than other groups for critical purposes such as finding accepting friends and community (which may be absent in their real lives) and gathering information on topics such as health and sexuality. In this talk, I'll discuss our recent research studying American transgender people's computer security and privacy experiences. While our questions were broadly construed, participants frequently returned to themes of activism and prosocial behavior, such as protest organization, political speech, and role-modeling transgender identities, so we focused our analysis on these themes. I'll discuss models of risk that participants described as influencing many of their security and privacy decisions, and the ways that these risk perceptions may heavily influence transgender people's defensive behaviors and self-efficacy, jeopardizing their ability to defend themselves or gain technology's benefits. I'll then discuss currently underway NSF-funded follow-on research aiming to quantify the prevalence of these trends at a population level, discover which of these trends also apply to other marginalized groups, and design new technology that can support the needs of transgender people and other groups in light of these findings.
Note the changed time.
Ada Lerner (pronouns: she/her or they/them) is a computer scientist who focuses their research on the area of Inclusive Security and Privacy, which they define as the study of the security and privacy needs of groups which are marginalized (such as queer and trans folks) or which are critical to the functioning of our free society (such as lawyers, journalists, and activists). Their work incorporates web and network measurements, qualitative methods, and quantitative methods with interdisciplinary perspectives from psychology, feminist and queer theory, and the law. They live in Boston and have a dog named Matrix, who knows the command "transpose".
This seminar builds on our 2019 CHI paper "'I make up a silly name': Understanding Children's Perception of Privacy Risks Online": https://arxiv.org/pdf/1901.10245.pdf. The paper discussed how children (aged 6-10) could identify and articulate certain privacy risks well, such as information oversharing or revealing real identities online; however, they had less awareness of other risks, such as online tracking or game promotions. Our findings offer promising directions for supporting children's awareness of cyber risks and their ability to protect themselves online. The talk will also discuss how our research has progressed since 2019 and how it relates to the latest developments in children's data protection regulation in the UK.
Dr Jun Zhao is a Senior Researcher in the Department of Computer Science at Oxford University. Her research focuses on investigating the impact of algorithm-based decision-making on our everyday lives, especially for families and young children. For this, she takes a human-centric approach, focusing on understanding real users' needs in order to design technologies that can make a real impact. Currently, she is leading the KOALA project and the ReEnTrust project. She works closely with schools, children, families as well as technologists for children, to understand the technological, societal and regulatory challenges that we face, to inform national and international policymakers, technology designers and families. She is also part of the 100 Brilliant Women in AI and Ethics global initiative, which promotes diversity and equality in this critical research area.
The talk proposes to conceive of cybersecurity through the lens of care, a notion taken from feminist Science and Technology Studies. Caring for cybersecurity emphasizes the invisible, morally charged, and experimental practices of doing cybersecurity. Cybersecurity as a "super-wicked problem" does not reside in isolated factors or normative frameworks; it requires tacit work and uneasy decisions. It calls for an analysis of the concrete practices of cybersecurity rather than their evaluation or judgment. I propose that cybersecurity research deals with practices of care and long-term commitment rather than fixing and moving on, and that this must resonate more strongly in research and policy.
The talk mobilizes findings from a 13-month ethnographic study in two German critical infrastructure companies. Both companies are in the midst of creating new data infrastructures accommodating data science and big data. Cybersecurity was unsettled in the process and became a core concern - which is rarely the case. This aided me as an ethnographer in observing cybersecurity "in action", but more than that, controversies drew me in and called for anthropology-informed ways of dealing with conflicts between developers and security officers, security officers and management, or developers and other developers. I argue that the notion of care proved helpful in these conflicts too, as it emphasized mutual understanding and compromise rather than following the rule book.
I take inspiration from feminist and post-Actor Network Theory approaches that place less emphasis on compliance and conformity and more on fluidity and multiplicity. I find these approaches intriguing for cybersecurity research because they offer more nuanced understandings of conflicting accountabilities, tension and non-normativity, as much as commitment and care.
Laura Kocksch studied cultural anthropology, sociology, and political science. During her undergraduate years she developed an interest in studying social media technologies as reconfiguring forms of locality and presence in political controversies. In the following years, she began studying technologies less as tools or mediators and more as themselves social and political. From this grew her fascination with cybersecurity as a mode of governing technologies and humans alike. In the interdisciplinary PhD programme SecHuman - Security for Humans in Cyberspace, she grew frustrated with "factors research" in cybersecurity that reduces human, technological or organizational action to quasi-mathematical factors in a "system". From her background in Anthropology and Science and Technology Studies, she found approaches that focus on practices, ways of interrelating and hybridity more convincing than separating the world into isolated areas and their "factors". Laura is currently finishing her dissertation, "Fragile Relations - On Cybersecurity Practices in German Critical Infrastructures", at the Ruhr University Bochum. She is a founding member of the Ruhr University Science and Technology Studies lab, where she explores participatory methodologies for the study of cybersecurity and environmental controversies.
This seminar reflects on our 2009 CHI paper Ethnography Considered Harmful: http://www.cs.nott.ac.uk/~pszaxc/work/CHI09.pdf. The paper reviewed the status of ethnography in systems design at the time, focusing particularly on new approaches to and understandings of ethnography that emerged as the computer moved out of the workplace. These approaches sought to implement a different kind of ethnographic study. In doing so, they reconfigured the relationship ethnography has to systems design, replacing detailed empirical studies of situated action with studies that provide cultural interpretations of action and critiques of the design process itself. We held these new approaches to and understandings of ethnography in design up to scrutiny, with the purpose of enabling designers to appreciate the differences between new and existing approaches to ethnography in systems design and the practical implications this might have for design. The paper was further elaborated in the book Deconstructing Ethnography: Towards a Social Methodology for Interactive and Ubiquitous Systems Design: https://www.springer.com/gp/book/9783319219530
Andy Crabtree is Professor of Computer Science at the University of Nottingham. A sociologist by background and training, he has worked in an interdisciplinary context for over 25 years, sensitising IT research and systems design to the social character of computing across a broad range of sectors. He was the first ethnographer to be awarded a Senior Fellowship by the EPSRC, focused on privacy and accountability in the Internet of Things. He has published over 150 peer-reviewed scientific works and 3 textbooks on Design Ethnography, and is a member of the EPSRC Strategic Advisory Network and the Strategic Priorities Fund Evaluation Advisory Group.
We will review recent work on Quantum Machine Learning and discuss the prospects and challenges of applying this exciting new computing paradigm to machine learning applications.
Iordanis Kerenidis (CNRS and QC Ware) received his Ph.D. from the Computer Science Department at the University of California, Berkeley, in 2004. After a two-year postdoctoral position at the Massachusetts Institute of Technology, he joined the Centre National de Recherche Scientifique in Paris as a permanent researcher. He has been the coordinator of a number of EU-funded projects including an ERC Grant, and he is the founder and director of the Paris Centre for Quantum Computing. His research is focused on quantum algorithms for machine learning and optimization, including work on recommendation systems, classification and clustering. He is currently working as the Head of Algorithms Int. at QC Ware Corp.
What is deniability? Although the question might sound trivial, it has sparked a series of debates in the privacy and security community, ranging from legal to technical perspectives. In the context of secure communications and channels, the question is notoriously difficult to approach and analyze. To answer it, one needs to look at the broader picture in which deniability applies. In this talk, we will look at how a notion of deniability can be attained by making more explicit the definitions given in the work of Canetti et al., Unger, and Walfish.
Given these prior notions, can deniability be applied in the context of secure communication and channels? In this talk, we will try to clarify what deniability means in terms of communication, and specify how it can be implemented (and what it needs) in the real world.
Sofía Celi is a cryptography researcher and implementer at Cloudflare. She spends her time implementing in C and Go, and thinking about OTR, post-quantum algorithms, anonymous credentials and TLS.
Type-two constructions abound in cryptography: adversaries for encryption and authentication schemes, if active, are modeled as algorithms having access to oracles, i.e. as second-order algorithms. But what about making cryptographic schemes themselves higher-order? This paper gives an answer to this question, by first describing why higher-order cryptography is interesting as an object of study, then showing how the concept of probabilistic polynomial-time algorithm can be generalized so as to encompass algorithms of order strictly higher than two, and finally proving some positive and negative results about the existence of higher-order cryptographic primitives, namely authentication schemes and pseudorandom functions.
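As a rough illustration of the order hierarchy involved (a sketch of ours, not the paper's formalism), the Python types below show how an oracle adversary is a second-order object, and how a scheme consuming such objects would be third-order:

    from typing import Callable

    # Order 1: an ordinary algorithm from strings to strings.
    Order1 = Callable[[bytes], bytes]
    # Order 2: a functional consuming an order-1 oracle -- the usual
    # model of an active adversary with oracle access.
    Order2 = Callable[[Order1], bytes]
    # Order 3: a "higher-order" scheme whose inputs are themselves
    # oracle-using algorithms.
    Order3 = Callable[[Order2], bytes]

    def toy_oracle(msg: bytes) -> bytes:      # order 1 (a toy tag function)
        return bytes(b ^ 0x5A for b in msg)

    def adversary(oracle: Order1) -> bytes:   # order 2: queries its oracle
        return oracle(b"chosen message")

    print(adversary(toy_oracle))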
Note the changed time.
See the attached URL for a bio. The talk will be given jointly with Ugo Dal Lago (University of Bologna & INRIA, see http://www.cs.unibo.it/~dallago/ ).
Fine-grained cryptography is concerned with adversaries that are only moderately more powerful than the honest parties. We will survey recent results in this relatively underdeveloped area of study and examine whether the time is ripe for further advances in it.
Alon Rosen is a full professor at the School of Computer Science at the Herzliya Interdisciplinary Center.
His areas of expertise are in theoretical computer science and cryptography. He has made contributions to the foundational and practical study of zero-knowledge protocols, as well as fast lattice-based cryptography, most notably in the context of collision resistant hashing and pseudo-random functions. In this context he co-introduced the ring-SIS problem and related SWIFFT hash function, as well as the Learning with Rounding problem. More recently he has been focusing on the study of the cryptographic hardness of finding a Nash equilibrium and on fine-grained cryptography.
Alon earned his PhD from the Weizmann Institute of Science (Israel) in 2003, and was a Postdoctoral Fellow at MIT (USA) from 2003 to 2005 and at Harvard University (USA) from 2005 to 2007. He has been a faculty member at IDC since 2007.
The notion of zero knowledge proofs underlies many of the mechanisms for obtaining verifiable, privacy-preserving delegation of computation. Loosely speaking, zero knowledge proofs are interactive proof systems that reveal nothing other than the validity of the assertion being proven.
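To make the "reveals nothing other than validity" idea concrete, here is the textbook graph-isomorphism protocol as a runnable Python toy (an illustration of ours; the talk's construction is far more involved). In each round the prover can answer either challenge without ever exposing the secret isomorphism:

    import random

    def permute(graph, pi):
        # Apply the vertex relabelling pi to a set of undirected edges.
        out = set()
        for e in graph:
            u, v = tuple(e)
            out.add(frozenset({pi[u], pi[v]}))
        return frozenset(out)

    def compose(outer, inner):
        # (outer o inner)(x) = outer(inner(x))
        return {x: outer[inner[x]] for x in inner}

    n = 4
    G0 = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])
    pi = {0: 2, 1: 0, 2: 3, 3: 1}   # prover's secret: an isomorphism G0 -> G1
    G1 = permute(G0, pi)            # both graphs are public

    def one_round():
        sigma = dict(zip(range(n), random.sample(range(n), n)))
        H = permute(G1, sigma)      # commitment: a fresh relabelling of G1
        b = random.randint(0, 1)    # verifier's random challenge
        rho = sigma if b == 1 else compose(sigma, pi)
        return permute(G1 if b == 1 else G0, rho) == H  # verifier's check

    # A cheating prover survives each round with probability at most 1/2.
    assert all(one_round() for _ in range(128))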
With the rise of quantum information and the growing evidence that small-to-medium scale quantum computers may be possible in the near future, we have ample reason to understand what possibilities quantum computing offers for privacy-preserving delegation of computation, as well as what security threats it poses.
In this talk, I will present the first construction of zero-knowledge proofs that are sound against quantum-entangled adversaries. The talk will be self-contained, and no preliminary knowledge in quantum computing is necessary.
Based on joint work with Alessandro Chiesa, Michael Forbes, and Nicholas Spooner (JACM 2021), as well as ongoing work.
Tom Gur is an associate professor in the Department of Computer Science at the University of Warwick, and a UKRI Future Leaders Fellow. He received his Ph.D. in 2017 from the Weizmann Institute of Science, under the supervision of Oded Goldreich, and spent two years at UC Berkeley before joining the University of Warwick. He was awarded the Shimon Even Prize in Theoretical Computer Science. His research interests are primarily in the foundations of computer science and combinatorics. Specific interests include sublinear-time algorithms, complexity theory, coding theory, cryptography, quantum computing, and more.
Many modern processors expose privileged software interfaces for dynamically modifying frequency and voltage. These interfaces were introduced to cope with the ever-growing power consumption of modern computers. In this talk we show how these privileged interfaces can be exploited to undermine the system's security. We present the Plundervolt attack, demonstrating how we can corrupt the integrity of Intel SGX computations. We also investigate whether Intel's mitigations have worked.
Kit is currently pursuing a PhD in Cyber Security at the University of Birmingham. Her research interests include hardware- and software-based fault injection on embedded devices. Kit is also researching the reverse engineering of hardware faults through software emulation. Kit currently leads the University's ethical hacking club, AFNOM, which encourages students to learn offensive security in a friendly, informal environment.
Cryptography underpins a multitude of critical security- and privacy-enhancing technologies. Recent advances in modern cryptography promise to revolutionize finance, cloud computing and data analytics. But cryptography does not affect everyone in the same way. In this talk, I will discuss how cryptography benefits some and not others and how cryptography research supports the powerful but not the disenfranchised.
Note the changed time.
Seny Kamara is an Associate Professor of Computer Science at Brown University and Chief Scientist at Aroki Systems. Before joining Brown, he was a researcher at Microsoft Research.
His research is in cryptography and is driven by real-world problems from privacy, security, and surveillance. He has worked extensively on the design and cryptanalysis of encrypted search algorithms, which are efficient algorithms to search on end-to-end encrypted data. He maintains interests in various aspects of theory and systems, including applied and theoretical cryptography, data structures and algorithms, databases, networking, game theory, and technology policy.
This talk builds and expands on the findings from the 2017 USENIX Security paper, "When the Weakest Link is Strong: Secure Collaboration in the Case of the Panama Papers", to explore when and how security practices can be, and are, successfully applied and adopted by groups at risk. The paper's findings suggest that the sociocultural context in which security measures are introduced has an enormous impact on their effectiveness - in this case study, transforming the users from the "weakest link" into the strongest.
Note the changed time.
Susan McGregor is an Associate Research Scholar at Columbia University's Data Science Institute, where she also co-chairs its Center for Data, Media & Society. McGregor's research is centered on security and privacy issues affecting journalists and media organizations. Her current projects include NSF-funded work to provide readers with stronger guarantees about digital media by integrating cryptographic signatures into digital publishing workflows, an effort to develop novel classifiers for detecting abusive and harassing speech targeting journalists on Twitter, and using artificial intelligence and computer vision to help journalists recognize unfamiliar political graphics when reporting in the field. She is a member of the World Economic Forum's Global Future Council on Media, Entertainment & Sport, and is the author of two forthcoming books: Information Security Essentials: A Guide for Reporters, Editors and Newsroom Leaders is due out from Columbia University Press in early 2021; Practical Python Data Wrangling and Data Quality will be published by O'Reilly Media in summer 2021.
We are increasingly surrounded by simple (and not so simple) devices with computational and communication capability, which assist us in everyday tasks and together comprise the idea of an Internet-of-Things. To perform their duties these devices are often required to set up ad-hoc connections to interact, often with another device or system with which no prior trust relationship exists. Establishing a secure connection between two devices in such an unstructured environment presents some interesting research problems. Unfortunately, not all of these problems can be solved with conventional cryptographic mechanisms alone, and we need to look at alternative ways to reinforce existing security mechanisms by incorporating the physical context of a device into security protocols. Distance-bounding protocols allow a verifier to both authenticate a prover and evaluate whether the latter is located in their vicinity. These protocols are of particular interest in contactless systems, e.g., electronic payment or access control systems, which are vulnerable to distance-based frauds. This talk briefly introduces the use of physical context in security mechanisms before providing an overview of distance-bounding protocols.
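As a back-of-the-envelope illustration of the timing principle behind distance bounding (the numbers below are assumptions of ours, not from the talk), the verifier can bound the prover's distance from the measured round-trip time of a challenge-response exchange:

    # Speed-of-light bound on prover distance from a timed challenge-response.
    C = 299_792_458      # speed of light in m/s
    t_rtt = 20e-9        # measured round-trip time: 20 ns (assumed)
    t_proc = 5e-9        # prover's processing delay: 5 ns (assumed)
    d_max = C * (t_rtt - t_proc) / 2
    print(f"prover is at most {d_max:.2f} m away")   # about 2.25 m

In principle a prover can only add delay, so it can appear farther away but not closer; the protocols are designed so that answering the challenge early requires guessing it.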
Gerhard Hancke received B.Eng and M.Eng degrees in Computer Engineering from the University of Pretoria, South Africa, in 2002 and 2003. He received a PhD in Computer Science from the University of Cambridge, United Kingdom, in 2009 and an LLB from the University of South Africa, South Africa, in 2014. He joined City University of Hong Kong as faculty in 2013, where he is currently an Associate Professor. Prior to this, he worked as a researcher with the Smart Card and IoT Security Centre and as a teaching fellow with the Department of Information Security at Royal Holloway, University of London (RHUL). His research interests are system security and reliable communication and distributed sensing for the industrial Internet-of-Things. In 2019 he was awarded the J. David Irwin Early Career Award for "research and educational contributions and impact on secure and reliable technology for the Industrial Internet-of-Things" by IEEE IES. He is currently also an Associate Editor of the IEEE Transactions on Industrial Informatics, the IEEE Open Journal of the Industrial Electronics Society, Elsevier Ad Hoc Networks and IET Smart Cities.
Algebraic structure lies at the heart of much of Cryptomania as we know it. An interesting question is the following: instead of building (Cryptomania) primitives from concrete assumptions, can we build them from simple Minicrypt primitives endowed with additional algebraic structure? In this work, we affirmatively answer this question by adding algebraic structure to the following Minicrypt primitives: one-way functions, weak unpredictable functions and weak pseudorandom functions. The algebraic structure that we consider is group homomorphism over the input/output spaces of these primitives. We show that these structured primitives can be used to construct several Cryptomania primitives in a generic manner.
Our results make it substantially easier to show the feasibility of building many cryptosystems from novel assumptions in the future. In particular, we show how to realize any CDH/DDH-based protocol with certain properties in a generic manner from input-homomorphic weak unpredictable/pseudorandom functions, and hence, from any concrete assumption that implies the existence of these structured primitives.
Our results also allow us to categorize many cryptographic protocols based on which structured Minicrypt primitive implies them. In particular, endowing Minicrypt primitives with increasingly richer algebraic structure allows us to gradually build a wider class of cryptoprimitives. This seemingly provides a hierarchical classification of many Cryptomania primitives based on the "amount" of structure inherently necessary for realizing them.
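To make the kind of structure at play concrete, here is a minimal Python sketch (an illustration of ours, not the paper's construction): exponentiation by a fixed secret key is homomorphic over its input, and in a suitable prime-order group this map is the classic candidate weak PRF under DDH. The parameters below are toy-sized and insecure, chosen only to exhibit the homomorphism:

    import random

    p = 2**127 - 1                      # toy modulus; NOT a secure parameter choice
    k = random.randrange(2, p - 1)      # secret key
    F = lambda x: pow(x, k, p)          # the structured primitive: x -> x^k

    x = random.randrange(2, p)
    y = random.randrange(2, p)
    assert F(x * y % p) == (F(x) * F(y)) % p   # homomorphic over the input group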
Note the changed time.
Sikhar Patranabis has been a postdoc at ETH Zurich in the Applied Cryptography group headed by Prof. Kenny Paterson since November 2019. Prior to that, he received his PhD from IIT Kharagpur, India. His research interests span all aspects of cryptography, with special focus on cryptographic complexity, database encryption, and secure implementations of cryptographic algorithms.
Cities are experimenting with new kinds of digitally augmented street furniture that recombine urban forms, like benches, pay phones and advertising billboards, with digital technologies such as free wi-fi, sensors and digital screens (Wessels & Humphry et al., 2020). The offer of free digital services is a key selling-point for local governments confronting urban digital inequalities, ageing public utilities and state withdrawal from public infrastructure investments. This talk reports on findings from two studies: the first on LinkNYC, a city-wide implementation of smart kiosks in New York City; the second on the design, use and governance of InLinkUK smart kiosks in Glasgow and Strawberry Energy smart benches in London, conducted as part of an international collaboration with the University of Glasgow. These studies found that precariously connected media users, such as the street homeless, students and gig workers, rely on these services to maintain access but are exposed to new kinds of security and safety risks at the point of connection. These asymmetrical connections include 'insecure connections', owing to the lack of support for wireless encryption in low-end Android devices; 'reduced physical safety', as kiosk and bench services are accessed in open public spaces; 'greater exposure to commercial data exploitation'; and greater exposure 'to police and legal enforcement', since even with privacy protections in place there is potential for data and footage to be shared with third parties, including enforcement agencies. Despite users making strategic trade-offs in their engagement with these urban objects, these are not enough to overcome the asymmetries encoded in their design.
Note the changed time.
Justine Humphry is a Lecturer in Digital Cultures in the Department of Media and Communications at the University of Sydney. She researches the cultures and politics of digital media and emerging technologies with a focus on the social consequences of mobile, smart and data-driven technologies. Her research addresses the materialisation of smart cities and the datafication of urban life with a focus on the mediation of home and urban space through smart street furniture, smart voice assistants and robotics.
In this talk I will present a quantum reduction from the problem of sampling short vectors in a code to the problem of decoding its dual. Usually codes are endowed with the Hamming metric, but as I will show, the reduction works for a large class of metrics (including the rank and Lee metrics). Furthermore, in many cases, we are able to show that given a quantum computer solving the decoding problem at distance t, we can find short codewords of weight f(t) for some explicit function f. Surprisingly, for codes equipped with the Hamming metric, the function f is related to the first linear programming bound of McEliece-Rodemich-Rumsey-Welch. This result is rather intriguing and does not seem to have a simple interpretation for now.
The main technical tool used in the proof is the discrete Fourier transform. The proof follows the same ideas as Regev's quantum reduction between the Closest Vector Problem and sampling short lattice vectors. More precisely, the proof I will present applies to codes the same techniques as Stehlé, Steinfeld, Tanaka and Xagawa's re-interpretation of Regev's proof.
This is joint work with Maxime Remaud and Jean-Pierre Tillich.
Thomas Debris-Alazard is a research scientist (chargé de recherche) at Inria in the Grace project-team. He was previously a postdoctoral research assistant in the Information Security Group under the supervision of Martin R. Albrecht. He received his PhD from Inria under the supervision of Jean-Pierre Tillich.
In this talk, I will revisit a paper Arun Kundnani, Joris van Hoboken and I started writing in 2014 and published in 2016. In the backdrop of our writing sessions in New York were the Black Lives Matter protests that started in Ferguson and spread nationwide, and human rights advocates disputing surveillance programs targeting Muslim communities in New York and New Jersey. While counter-surveillance was at the heart of all these developments, they flourished in communities and spoke to constituencies that were mostly distinct from another group that some of us were circling in: privacy advocates, progressive security engineers, and policy makers who, following Edward Snowden's revelations of US and UK surveillance programs, had been seeking to win majority support for countering surveillance. The paper studies this discrepancy by taking a closer look at the activities, discourse and solutions proposed by the latter group. It describes the ways in which advocates of privacy framed the problem as the replacement of targeted surveillance with mass surveillance programs, and identified solutions that were predominantly technical, involving the use of encryption, or 'crypto', as a defense mechanism. The paper further illustrated that raising the specter of an Orwellian system of mass surveillance, shifting the discussion to the technical domain, and couching that shift in economic terms undermined a political reading that would attend to the racial, gendered, classed, and colonial aspects of the US and UK surveillance programs. We asked then: how can this specific discursive framing of counter-surveillance be re-politicized and broadened to enable a wider societal debate informed by the experiences of those subjected to targeted surveillance and associated state violence? During the talk, I hope we can revisit this question anew, given how in 2020 COVID-19 has come to normalize surveillance in the name of public health, replacing the "war on terror" with the "war on the virus", and we see the rise of a fresh wave of global protests around Black Lives Matter.
Seda is currently an Associate Professor in the Department of Multi-Actor Systems at TU Delft, in the Faculty of Technology, Policy and Management, and an affiliate at the COSIC Group at the Department of Electrical Engineering (ESAT), KU Leuven. She is also a member of the Institute for Technology in the Public Interest and the arts initiative Constant. Her work focuses on privacy enhancing and protective optimization technologies (PETs and POTs), privacy engineering, as well as questions around software infrastructures, social justice and political economy as they intersect with computer science.
Recent high-profile attacks on the Internet of Things (IoT) have brought to the forefront the vulnerability of 'smart' devices. This has resulted in IoT technologies and end devices being subjected to numerous security analyses. One source that has the potential to provide rich and definitive information about an IoT device is the IoT firmware itself. However, analysing IoT firmware is notoriously difficult, as peripheral firmware files are predominantly available as stripped binaries, without the debugging symbols that would simplify reverse engineering. In this talk, we will present an open-source tool, argXtract, that extracts configuration information from Supervisor Calls within a stripped ARM Cortex-M binary file. Through a combination of generic ARM assembly analysis and vendor-specific configurations, argXtract is able to generate call trace chains and statically 'execute' a firmware file in order to retrieve and process arguments to Supervisor Calls. This enables automated bulk analysis of firmware files, to derive statistical security information. We will also present a real-world test case in which we configure argXtract to obtain Bluetooth Low Energy security configurations from Nordic Semiconductor firmware files, and execute it against a dataset of 246 firmware binaries. The results demonstrate that privacy and security vulnerabilities are prevalent in IoT.
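To give a flavour of the starting point for this kind of analysis, here is a minimal Python sketch of ours (not argXtract's actual implementation): a Thumb-mode 'svc #imm8' instruction carries 0xDF in its high byte, so even a stripped Cortex-M image can be scanned for candidate Supervisor Calls; recovering their arguments then requires the register tracing described in the talk.

    # Locate candidate Thumb 'svc' instructions in a raw Cortex-M image.
    # A Thumb 'svc #imm8' encodes as 0xDF in the high byte of a 16-bit
    # halfword, with the call number in the low byte (stored little-endian).
    def find_svcs(firmware: bytes):
        for off in range(0, len(firmware) - 1, 2):   # Thumb code is 2-byte aligned
            if firmware[off + 1] == 0xDF:
                yield off, firmware[off]             # (offset, SVC number)

    # movs r0, #1 ; svc 0x7C ; bx lr  (hand-assembled toy blob)
    blob = bytes.fromhex("01207cdf7047")
    print(list(find_svcs(blob)))                     # [(2, 124)]

A real tool must of course also distinguish code from data and handle 32-bit Thumb-2 encodings, which is part of what makes the problem hard.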
Pallavi Sivakumaran is a final-year CDT student with the Information Security Group at Royal Holloway, University of London. Her research focuses on security and privacy concerns associated with Bluetooth Low Energy, which is a key enabling technology for the Internet-of-Things (IoT).
Recent research efforts on adversarial ML have begun to investigate problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., malware). However, the design, comparison, and real-world implications of problem-space attacks remain underexplored.
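As a toy illustration of why that missing inverse mapping matters (a sketch of ours, not the paper's formulation): with count-based features, an unconstrained feature-space step can demand an object that cannot exist, while a problem-space move is restricted to feasible transformations such as adding code.

    import numpy as np

    w = np.array([+2.0, +1.5, -0.5])  # toy linear detector; score > 0 means "malware"
    x = np.array([3.0, 0.0, 4.0])     # counts of three API calls in an app
    print(w @ x)                       # 4.0 -> flagged

    x_feat = x - 1.0 * w               # unconstrained feature-space gradient step
    print(x_feat)                      # [ 1.  -1.5  4.5] -> a negative count: no real app maps here

    x_prob = x + np.array([0, 0, 10])  # problem-space move: only ADD benign-looking calls
    print(w @ x_prob)                  # -1.0 -> evades while remaining a realizable program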
In this talk, I will present two major contributions from our recent IEEE S&P 2020 paper [1]. First, I will present our novel reformulation of adversarial ML evasion attacks in the problem-space (also known as realizable attacks). This requires us to consider and reason about additional constraints that feature-space attacks ignore, which shed light on the relationship between feature-space and problem-space attacks. Second, building on our reformulation, I will present a novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale, while evading state-of-the-art defenses.
[1] Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, Lorenzo Cavallaro. "Intriguing Properties of Adversarial ML Attacks in the Problem Space". IEEE Symp. Security & Privacy (Oakland), 2020.
Note the changed time.
Feargus is a PhD cybersecurity student in the Information Security Group at Royal Holloway, University of London and a Visiting Scholar at the Systems Security Research Lab at King's College London. His research explores the limitations of machine learning when applied to security settings.
Feargus was recently a visiting student at The Alan Turing Institute, the UK's national institute for data science and artificial intelligence, and has twice interned at Facebook, with the Abusive Accounts and Compromised Accounts teams respectively, where he developed novel techniques for detecting and measuring harmful behaviour on social media platforms.
He is also the author and maintainer of TESSERACT, a framework and Python library for performing sound ML-based evaluations without experimental bias, and a core author and maintainer of TRANSCEND, a framework for detecting concept drift using conformal evaluation.
The summer of 2020 saw the largest anti-racist protests in British history. In order to understand how deeply entrenched racism is in British policing, we must look beyond the British mainland to the colonial policing which shaped racial governance. It is in this context that we can better explore how policing and racism are resisted in 21st-century Britain. This postcolonial lens will help analyse spontaneous resistance and organised radical campaigns which envision a world in which state punishment and violence are replaced with solidarity, care and co-operation.
Adam Elliott-Cooper is a research associate in sociology at the University of Greenwich. He has previously worked as a researcher in the Department of Philosophy at UCL, as a teaching fellow in the Department of Sociology at the University of Warwick and as a research associate in the Department of Geography at King's College London.
Over the last several years, numerous journalists and news organizations have reported incidents in which their communications have been hacked, intercepted, or retrieved. In 2014, Google security experts found that 21 of the world's 25 most popular media outlets were targets of state-sponsored hacking attempts, and many journalists have watched helplessly as hackers took control of their social media accounts, targeting confidential information in their internal servers. When journalists' digital accounts are vulnerable to hacks or surveillance, news organizations, journalists, and their sources are at risk, and journalists' ability to carry out their newsmaking function is reduced. Yet, some journalists do not believe that hacking and surveillance are significant threats, and they are not adopting information security measures to protect their data, themselves, or their sources. This research study includes 19 interviews with journalists, developers, and digital security trainers to shed light on journalists' perceptions of information security technologies, including motivations to adopt and barriers to adoption. The findings show that motivations to adopt information security technologies hinge on the idea of protection: protection of self, story, and the journalist's role, more so than protection of the source, contrary to contemporary discourse about why journalists need to adopt such technologies.
Note the changed time.
Jennifer R. Henrichsen is a Ph.D. Candidate at the Annenberg School for Communication at the University of Pennsylvania. She has received fellowships from Columbia University, the Knight Foundation and First Look Media, and has twice been a consultant to UNESCO. A Fulbright Research Scholar, Jennifer holds master's degrees from the University of Pennsylvania and the University of Geneva. In 2011, she co-wrote the book War on Words: Who Should Protect Journalists? (Praeger). She co-edited the book Journalism After Snowden: The Future of the Free Press in the Surveillance State (Columbia University Press, 2017) and is currently co-editing a book on national security and journalism for Oxford University Press.
How do technologies mediate security practices and how can we study them? In this talk I draw on my fieldwork in banks to show how banks rely on technologies to detect suspicious transactions that may be connected to money laundering or terrorism financing. Inspired by work at the intersection of (critical) security studies and science and technology studies, I foreground the role of digital technologies in the production of security expertise by compliance officers and intelligence analysts in banks. The research is based on multi-sited ethnography centred around 'sites of experimentation'.
Esmé Bosma is a PhD candidate at the Department of Political Science of the University of Amsterdam and a member of project FOLLOW: Following the Money from Transaction to Trial, funded by the European Research Council (ERC) (www.projectfollow.org). For her research project she has conducted field research inside and around banks in Europe to analyse counter-terrorism financing practices by financial institutions. Her research lies at the intersection of (critical) security studies and science and technology studies. She holds a master's degree in Political Science from the University of Amsterdam. She is co-editor of the book Secrecy and Methods in Security Research: A Guide to Qualitative Fieldwork (Routledge, 2019).
In this talk I'll give an overview of my research on the security and privacy experiences of at-risk users. The talk will center on two studies with different populations: women in South Asia [1] and survivors of intimate partner abuse [2].
[1] "'They Don't Leave Us Alone Anywhere We Go': Gender and Digital Abuse in South Asia." Nithya Sambasivan, Amna Batool, Nova Ahmed, Tara Matthews, Kurt Thomas, Sane Gaytán, David Nemer, Elie Bursztein, Elizabeth Churchill, Sunny Consolvo. CHI 2019 (Best Paper)
[2] "Stories from survivors: Privacy & security practices when coping with intimate partner abuse." Tara Matthews, Kathleen O'Leary, Anna Turner, Manya Sleeper, Jill Palzkill Woelfer, Martin Shelton, Cori Manthorne, Elizabeth F Churchill, Sunny Consolvo. CHI 2017 (Best Paper)
Note the changed time.
Tara Matthews is a consultant working on security and privacy user experience issues with tech companies. Previously, she was a Senior User Experience Researcher in Google's Security & Privacy Research & Design Group for nearly 4 years. She was also a manager and team lead. Prior to joining Google in June 2014, Tara was a Research Scientist at IBM Research - Almaden for nearly 7 years, studying and improving the design of workplace collaboration and social software. Tara earned her Ph.D. in Computer Science from the University of California, Berkeley in 2007. Her major was Human-Computer Interaction and her dissertation work informed the design and evaluation of glanceable (low attention) information visualizations.
In the 1990s the US government feared the emergence of encryption technologies that would prevent it from conducting lawful interception and signals intelligence.
To preserve its capabilities, whilst at the same time providing public key encryption to citizens, the government developed a key escrow technology, the Clipper Chip, which would allow warranted recovery of suspects' encryption keys. In parallel, the government used export regulations in an attempt to prevent strong encryption escaping its borders and reaching foreign adversaries.
Opposing government policies were the digital privacy activists, including the Cypherpunks, a group of borderline anarchist technologists. The digital privacy activists developed and disseminated encryption technologies such as PGP to undermine government policies. The privacy activists also challenged the government export regulations in the courts in an attempt to have them declared unconstitutional.
This seminar will explore the main events of the battle between the government and digital privacy activists during the 1990s.
Craig is currently studying for a PhD in History & Information Security at RHUL.
Craig's research explores why the US administrations of the 1990s chose to regulate cryptography, with this being a proxy for privacy in the digital age, and how digital privacy activists such as the Cypherpunks opposed government policies.
Before studying at RHUL, Craig held the post of Chief Technology Officer at DXC Security. Craig holds Master's degrees in Cyber Security, International Security, and Classical Music.
Craig's first book, 'CryptoWars: The Fight for Privacy in the Digital Age: A Political History of Digital Encryption', will be released by Taylor and Francis in December 2020.
In this work, we consider the computer security and privacy practices and needs of recently resettled refugees in the United States. We ask: How do refugees use and rely on technology as they settle in the US? What computer security and privacy practices do they have, and what barriers do they face that may put them at risk? And how are their computer security mental models and practices shaped by the advice they receive? We study these questions through in-depth qualitative interviews with case managers and teachers who work with refugees at a local NGO, as well as through focus groups with refugees themselves. We find that refugees must rely heavily on technology (e.g., email) as they attempt to establish their lives and find jobs; that they also rely heavily on their case managers and teachers for help with those technologies; and that these pressures can push security practices into the background or make common security 'best practices' infeasible. At the same time, we identify fundamental challenges to computer security and privacy for refugees, including barriers due to limited technical expertise, language skills, and cultural knowledge. For example, we find that scams as a threat are a new concept for many of the refugees we studied, and that many common security practices (e.g., password creation techniques and security questions) rely on US cultural knowledge. From these and other findings, we distill recommendations for the computer security community to better serve the computer security and privacy needs and constraints of refugees, a potentially vulnerable population that has not previously been studied in this context.
Note the changed time.
Lucy is a PhD student in Computer Science & Engineering at the University of Washington. Her research focuses on the security and privacy-related needs and practices of understudied or underserved populations.
In this talk, I will revisit a qualitative research project examining how digital activists navigate risks posed to them in online environments. I examined how a group of activists across ten different non-Western countries adapted and responded to threats posed by two types of powerful actors: the state, and the technology companies that run the social media platforms on which many activists rely to conduct their advocacy. Through a series of interviews, I examined how resistance against censorship and surveillance manifested in everyday practices: not just the use of encryption and circumvention technologies, but also the choice to use commercial social media platforms to their advantage despite considerable ambivalence about the risks they pose. Much has changed in the digital landscape since I first conducted this work: in the discussion I plan to engage with how these findings prefigured larger concerns about misinformation and digital surveillance, and illustrate the importance of balancing locally contingent interpretations of risk against the larger geopolitical backdrop in which technology companies now play an important role.
Note the changed time.
Sarah Myers West is a postdoctoral researcher at the AI Now Institute, where her research engages with the culture, politics and practices of technology developers, and incorporates both historical and ethnographic methods. Her current projects explore themes of power and resistance in the history of AI. She holds a doctorate from the Annenberg School for Communication and Journalism at the University of Southern California, where her dissertation examined the cultural history and politics of encryption technologies from the 1960s to the present day. Her work is published in journals such as New Media & Society, the International Journal of Communication, and Policy & Internet.
In this talk we investigate the problem of automating the development of adaptive chosen ciphertext attacks on systems that contain vulnerable format oracles. Rather than simply automate the execution of known attacks, we consider a more challenging problem: to programmatically derive a novel attack strategy, given only a machine-readable description of the plaintext verification function and the malleability characteristics of the encryption scheme. We present a new set of algorithms that use SAT and SMT solvers to reason deeply over the design of the system, producing an automated attack strategy that can entirely decrypt protected messages.
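For readers unfamiliar with the attack class, here is a textbook one-byte format-oracle example in Python (a hand-derived toy of ours; the point of the talk is deriving such strategies automatically from a machine-readable description). The cipher's malleability plus a single leaked format bit suffice to decrypt:

    import os

    key = os.urandom(16)
    msg = b"attack at dawn\x00\x00"                  # assume zero-padded plaintext
    ct = bytes(m ^ k for m, k in zip(msg, key))      # stream-cipher-style encryption

    def format_oracle(c):
        # The receiver leaks one bit: does the last byte decrypt to zero padding?
        return (c[-1] ^ key[-1]) == 0

    # Malleate the last byte until the format check passes: the XOR guess
    # that makes the check succeed IS the last plaintext byte.
    mauled = bytearray(ct)
    for guess in range(256):
        mauled[-1] = ct[-1] ^ guess
        if format_oracle(bytes(mauled)):
            print("recovered last plaintext byte:", guess)   # prints 0 here
            break

Iterating the same idea byte by byte recovers the whole message; the systems described in the talk search for such strategies automatically over far richer format checks.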
Note the changed time.
Matthew D. Green is an Associate Professor at Johns Hopkins University. He works on topics in applied cryptography, including the design of privacy-preserving protocols and attacks on deployed cryptographic systems.
Since the 1960s we have been told that new computing technologies are ushering in a new era: the Computer Revolution and Knowledge Economy (1962), the Global Village and One-Dimensional Man (1964), the Third (1975) and the Fourth (2015) Industrial Revolutions. There was never a consensus on which kinds of computational techniques were behind the change. Then in the 1990s a number of humanities academics discovered the Internet. They used their understanding of its technical architecture to define the relevant properties of computation that were behind our emerging network society. Canonical scholarship emerged (Castells 1996) alongside cyberlibertarian visions (Barlow 1996) that told much the same story: computer networks were not only naturally decentralized and liberating, they were welcome solvents on the old, centralized order. When cryptography burst into public (and humanist) consciousness, it could only make things even better by further empowering the individual.
In my talk I want to offer a different characterization of the relationship between computer networks, cryptography, and their consequences for society. To do so, I go back to one of the beginnings: Paul Baran's Distributed Adaptive Message Block Network, as outlined in his canonical On Distributed Communications series, drawing in particular on its formerly classified twelfth volume. Rather than envision networks as naturally distributed and open, Baran can help us characterize them as naturally closed, encrypted, and at odds with individual liberty. I will discuss other reasons for this claim, as well as its consequences, offering an outline of what networks mean when we build cryptography into their identity and function.
I am a historian of computing, and use historical analysis to improve outcomes for STEM and tech policy organizations and research projects. I specialize in the evolution of computer network protocols, architectures, security, and technical management. I work as an Assistant Professor at the Stevens Institute of Technology, in the Science, Technology, and Society Program. I have projects underway for Google, Lockheed Martin, the National Science Foundation, ICANN, and MIT Press. Previously I was a researcher with the UCLA Computer Science Department.
Drawing on the experiences of a novel collaborative project between sociologists and computer scientists, this talk identifies a set of challenges for fieldwork that are generated by this 'wild interdisciplinarity'. Public Access Wi-Fi Service was a project funded by an 'in-the-wild' research programme, involving the study of digital technologies within a marginalised community, with the goal of addressing digital exclusion. I argue that similar forms of research, in which social scientists are involved in the deployment of experimental technologies within real-world settings, are becoming increasingly prevalent. The fieldwork for the project was highly problematic, with the result that few users of the system were successfully enrolled. I'll analyse why this was the case, identifying three sets of issues which emerge in the juxtaposition of interdisciplinary collaboration and wild setting. I conclude with a set of recommendations for projects involving technologists and social scientists.
Murray Goulden is Assistant Professor of Sociology at the University of Nottingham, and an alumnus of the Horizon Digital Research Institute. He has worked extensively on research applying novel digital technologies to real-world settings. This includes Co-I on the EPSRC TIPS2 Internet of Things project 'Defence Against the Dark Artefacts', and earlier Researcher Co-I roles on two EPSRC-funded projects, 'Public Access WiFi Service' and 'Creating the Energy for Change'. These projects span his interests in networking, digital data, and smart energy, their role in everyday life through the reconfiguring of associated social practices, and the implications for policy making and design. He is currently the recipient of a 3-year Nottingham Research Fellowship, focused on the implications of Internet of Things technologies for patterns of life within the home.
This talk will revisit joint work with Harry Halpin and Ksenia Ermoshina, conducted in the frame of the H2020 European project NEXTLEAP (2016-2018, nextleap.eu). Due to the increased and varied deployment of secure messaging protocols, differences between what developers 'believe' are the needs of their users and their actual needs can have very tangible and potentially problematic consequences. Based on 90 interviews with both high- and low-risk users, as well as with several developers, of popular secure messaging applications, we mapped the design choices made by developers to the threat models of both high-risk and low-risk users. Our research revealed interesting and sometimes surprising results, among which: high-risk users often consider client device seizures to be more dangerous than compromised servers; key verification is important to high-risk users, but they often do not engage in cryptographic key verification, instead using other 'out of band' means; high-risk users, unlike low-risk users, often need pseudonyms and are heavily concerned over metadata collection. Developers tend to value open standards, open source, and decentralization, but high-risk users often find these aspects less urgent given their more pressing concerns; and while, for developers, avoiding trusted third parties is an important concern, several high-risk users are in fact happy to rely on trusted third parties 'protected' by specific geo-political situations. We conclude by suggesting that work still needs to be done for secure messaging protocols to be aligned with real user needs, including those of high-risk users, and with real-world threat models.
Francesca Musiani (PhD, socio-economics of innovation, MINES ParisTech, 2012) has been an associate research professor at the French National Center for Scientific Research (CNRS) since 2014. She is Deputy Director of the Center for Internet and Society of CNRS, which she co-founded with Mélanie Dulong de Rosnay in 2019. She is also an associate researcher at the Center for the sociology of innovation (i3/MINES ParisTech) and a Global Fellow at the Internet Governance Lab, American University in Washington, DC. Since 2006, Francesca's research work has focused on Internet governance, in an interdisciplinary perspective merging information and communication sciences, science and technology studies (STS) and international law. Her most recent research explores, or has explored, the development and use of encryption technologies in secure messaging (H2020 European project NEXTLEAP, 2016-2018), 'digital resistances' to censorship and surveillance in the Russian Internet (ANR project ResisTIC, 2018-2021), and the governance of Web archives (ANR project Web90, 2014-2017 and CNRS Attentats-Recherche project ASAP, 2016). Francesca's theoretical work explores STS approaches to Internet governance, with particular attention paid to socio-technical controversies and to governance 'by architecture' and 'by infrastructure'. Francesca is the author of several journal articles and books, including Nains sans géants. Architecture décentralisée et services Internet (Dwarfs Without Giants: Decentralized Architecture and Internet Services, Presses des Mines, 2015), recipient of the French Privacy and Data Protection Commission's Prix Informatique et Libertés 2013.
In just a few years, Fully Homomorphic Encryption (FHE) has gone from a theoretical 'holy grail' of cryptography to a commercial product. This is in part due to the development of Machine Learning as a Service, and the fact that our society has evolved to be data-driven. As a consequence, secure computation has become more valuable and has seen some great advances. In this talk, we will discuss some of these improvements in FHE, as well as some of the latest implementation results. We will finish by discussing one of the main challenges in FHE: the analysis of noise growth in an FHE ciphertext.
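As a caricature of that noise issue (a sketch of ours; real schemes are of course far subtler), one can track just the magnitude of the error term a ciphertext carries: it starts small, grows roughly multiplicatively under homomorphic multiplication, and decryption fails once it crosses a parameter-dependent threshold.

    import random

    B = 2**40                                      # toy correctness threshold set by the parameters
    fresh_noise = lambda: random.randint(2, 16)    # noise magnitude of a fresh ciphertext

    e, depth = fresh_noise(), 0
    while True:
        e_next = e * fresh_noise()                 # noise growth under one multiplication
        if e_next >= B:                            # past the threshold, decryption would fail
            break
        e, depth = e_next, depth + 1
    print("multiplicative depth supported in this run:", depth)

Bounding this growth tightly, rather than with worst-case estimates, is what lets implementations pick smaller, faster parameters.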
I recently joined the ISG at Royal Holloway as a postdoctoral researcher. Previously, I spent a year at Intel as a research scientist, working on Privacy-Preserving Machine Learning (PPML). Before that, I was a PhD student at the University of Bristol, where I obtained my PhD in 2018. I work on privacy-preserving machine learning, fully homomorphic encryption and, more broadly, computing on encrypted data, lattice-based and post-quantum cryptography.
It is well known that older adults continue to lag behind younger adults in the breadth of their uptake of digital technologies, the amount and quality of their engagement with these tools, and their ability to critically engage with the online world. Can these differences be explained by older adults' distrust of digital technologies? Is trust, therefore, a critical design consideration for appealing to older adults? In this talk I will argue that while distrust is not, in fact, determinative of non-use and therefore does not explain these differences in tech usage, it is nonetheless key for designers to understand older adult distrust in developing socially responsible technologies.
Bran is a lecturer in the Data Science Institute at Lancaster University. Her research explores the social impacts of computing, with a particular interest in trust, privacy, and ethics. Her recent work has explored these issues at both ends of the age spectrum, with projects such as IoT4Kids, looking at the privacy, security and ethical issues of enabling children to program IoT devices; and Mobile Age, looking at developing mobile apps for older adults. Bran currently serves as a member of the ACM Europe Technology Policy Committee.
Traditionally, 'provable security' was tied in the minds of cryptographers to public-key cryptography, asymptotic analyses, number-theoretic primitives, and proof-of-concept designs. In this talk I survey some of the work that I have done (much of it joint with Mihir Bellare) that has helped to erode these associations. I will use the story of practice-oriented provable security as the backdrop against which to make the case for what might be called a 'social constructionist' view of our field. This view entails the claim that the body of work our community has produced is less the inevitable consequence of what we aim to study than the contingent consequence of sensibilities and assumptions within our disciplinary culture.
Note the changed time.
I'm a professor of Computer Science at the University of California, Davis, USA. My research has focused on obtaining provably-good solutions to practical protocol problems. I did my undergrad work at UCD and my Ph.D. at MIT. I came to UCD in 1994, but have spent some of those years on leaves and sabbaticals, most often in Thailand. In recent years I've been increasingly concerned about ethical and social problems connected to technology, and the majority of my teaching is now on that.
Distance bounding protocols constitute a special class of authentication protocol, in which participants must verify not only the identity of their partner, but also their physical location. They are important for systems such as contactless card payments or electronic doors, to avoid scenarios in which an attacker might relay messages over a longer distance than intended. This is typically achieved by using a time-sensitive challenge-response phase, where the verifying agent estimates distance by calculating the round trip time of their challenge messages. There are some difficulties in applying traditional security verification approaches to this family of protocols. Symbolic approaches, which aim to abstract away details (such as the nature of the cryptographic primitives used), must deal with the fact that many attack scenarios are intrinsically linked with the location and timing of messages.
In this talk, we present a model for analysing distance bounding protocols. The model of Basin et al., which uses a bespoke implementation in Isabelle/HOL, is adapted to remove speed-of-light calculations for message timings. Instead, a (provably) equivalent security claim is developed that focuses on the precise ordering of actions during a protocol execution. This approach enables an embedding into the Tamarin prover tool, allowing for rapid automated verification. Further, we discuss extensions to the model to analyse so-called "dishonest" agents, who generally follow their specification but are willing to temporarily deviate in order to collaborate with the network adversary. Such agents are particularly relevant for modelling "terrorist fraud" attacks, where an adversary can be (illegally) granted a one-time key. Finally, the results of an extensive literature survey are presented, discussing common pitfalls in protocol design.
Zach is a PhD candidate at the University of Luxembourg in the field of computer security. His focus is on the development of formal models for security protocols, in order to define precise security requirements. Research interests include security for RFID and IoT devices, as well as multiparty protocols. His other interests include game development, swing dance, and locking himself inside to write his PhD thesis.
In this talk, I discuss the ethical challenges and dilemmas that arise as a result of state involvement in academic research on 'terrorism' and 'extremism'. I suggest that researchers and research institutions need to be more attentive to the possibilities of co-option, compromise, conflicts of interest and other ethical issues. I empirically examine the relationship between academic researchers and the security state. I highlight three key ways in which ethical and professional standards in social scientific research can be compromised: (1) interference with the evidence base (through a lack of transparency on data and conflicts of interest); (2) collaboration on research supporting deception by the state, which undermines the ability of citizens to participate in democratic processes; and (3) collaboration on research legitimating human rights abuses and other coercive state practices. These issues are widespread, but neglected, across the literature on 'terrorism' and 'extremism', the literature on research ethics, and practical ethical safeguards and procedures within research institutions. In order to address these issues more effectively, I propose that any assessment of research ethics must consider the broader power relations that shape knowledge production, as well as the societal impact of research. In focusing on the centrality of states, the most powerful actors in the field of 'terrorism' and 'extremism', our approach moves beyond the rather narrow procedural approaches that currently predominate. I argue that more attention to the power of the state in research ethics will not only help to make visible, and combat, ethically problematic issues, but will also help to protect the evidence base from contamination. I conclude by proposing a series of practical measures to address the problems highlighted.
Narzanin is a Lecturer in Criminology at the University of Exeter. Her research focuses on racism, social movements and counter-terrorism. She is currently working on a study researching the impact of counter-terrorism policy and practice on UK higher education. She is co-editor of the book What is Islamophobia? Racism, Social Movements and the State (Pluto Press, 2017) and author of Muslim Women, Social Movements and the 'War on Terror' (Palgrave Macmillan, 2015).
In this talk, I will briefly present the EasyCrypt interactive proof assistant, whose focus is on the formalization of game-based cryptographic security proofs, before discussing its application to the SHA-3 standard. In combination with the Jasmin language, an "assembly-in-the-head" language with formalized semantics and a certified compiler, our proof is used to produce a complete high-assurance standard, with machine-checked proofs, verified reference implementations, and a verified optimized implementation for a specific platform.
I will discuss some of the challenges encountered in formalizing the security proof, and the techniques afforded by the combined use of "interactive first" technologies such as Jasmin and EasyCrypt, which allow us to produce highly-efficient, yet fully verified, implementations. Some future perspectives may also be discussed.
I am a Senior Lecturer in the Cryptography Group and Department of Computer Science at the University of Bristol (UK). My research revolves around proving cryptographic and side-channel security properties of concrete realizations and implementations of cryptographic primitives and protocols, sometimes in the presence of partial compromise. This involves tackling problems in modelling adversaries and systems, designing and applying proof methodologies and verification tools, and generally finding less tedious ways of verifying complex properties of large (but not vast) systems and code bases.
Hybrid Authenticated Key Exchange (AKE) protocols combine keying material from different sources (for instance, post-quantum and classical secure key exchange primitives) to build protocols that are resilient to catastrophic failures of the individual components. In this talk, I will present the results of recent work with Torben Hansen and Kenny Paterson: a new hybrid key exchange protocol called Muckle, a simple one-round-trip key exchange protocol that combines preshared keys, post-quantum and classical key encapsulation mechanisms, and quantum key distribution protocols. I will also discuss HAKE, a general framework for the analysis of hybrid AKE protocols, and demonstrate the security of our approach with respect to a powerful attacker capable of fine-grained compromise of different cryptographic components. HAKE is broad enough to allow us to capture forward secrecy, multi-stage key exchange security, and post-compromise security. I will present an implementation of our Muckle protocol, instantiating our generic construction with classical and post-quantum Diffie-Hellman-based algorithmic choices, and discuss the results of benchmarking exercises on our implementation.
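As a rough sketch of the hybrid idea (not Muckle's actual key schedule, which the paper specifies; the HMAC-based chaining below is my own simplification), keying material from each component can be folded into a single chain so that, under suitable PRF assumptions, the session key stays secure as long as any one input does:

    import hashlib
    import hmac

    def fold(chain_key: bytes, secret: bytes, label: bytes) -> bytes:
        # Mix one component secret into the running chain, using HMAC
        # as a PRF keyed on the chain so far.
        return hmac.new(chain_key, label + secret, hashlib.sha256).digest()

    def hybrid_session_key(psk: bytes, classical_ss: bytes,
                           pq_ss: bytes, qkd_key: bytes) -> bytes:
        """Combine a preshared key, a classical KEM shared secret, a
        post-quantum KEM shared secret, and QKD keying material into
        one session key (illustrative simplification only)."""
        k = fold(psk, classical_ss, b"classical")
        k = fold(k, pq_ss, b"post-quantum")
        k = fold(k, qkd_key, b"qkd")
        return k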
Ben Dowling has been a postdoc at ETH Zurich, in the Applied Cryptography group headed by Prof. Kenny Paterson, since July 2019, and was previously a postdoc in the Information Security Group at Royal Holloway, University of London from January 2017. His research interests focus primarily on the provable security of real-world cryptographic protocols, in particular on expanding the frameworks used in the analysis of security protocols to cover novel properties and dependencies not currently examined in the literature.
Mobile sensors have already proven helpful in many aspects of people's everyday lives, such as fitness, gaming, and navigation. However, illegitimate access to these sensors gives a malicious program an exploit path. While users benefit from richer and more personalized apps, the growing number of sensors introduces new security and privacy risks to end users and makes the task of sensor management more complex. In this talk, first, we discuss the issues around the security and privacy of mobile sensors. We investigate the available sensors on mainstream mobile devices and study the permission policies that Android, iOS and mobile web browsers offer for them. Second, we report the results of two workshops that we organized on mobile sensor security. In these workshops, the participants were introduced to mobile sensors by working with sensor-enabled apps. We evaluated the risk levels perceived by the participants for these sensors after they understood their functionalities. The results showed that working with sensor-enabled apps did not immediately improve users' inference of the actual risks of these sensors. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on users' perception. We also taught the participants about the ways in which they could audit their apps and permissions. Our findings showed that when mobile users were provided with reasonable choices and intuitive teaching, they could easily self-direct themselves to improve their security and privacy. Finally, we provide recommendations for educators, app developers, and mobile users to contribute toward awareness and education on this topic.
*** I have a PhD studentship for Sep 2020 on "Cyber Security in Farm and Companion Animal Technologies" (schools of computing and agriculture) at Newcastle University. If you are interested, come and talk to me after the presentation, or email me any time.
I am a Research Fellow in Cyber Security, School of Computing, Newcastle University (NU), UK. I have a PhD in Computing Science, MSc and BSc in Computer Engineering. I work on Sensor, Mobile, and IoT Security, Security Standardisation, and Usable Security and Privacy. I work with W3C as an invited expert on sensor specifications. I am particularly interested in real-world multi-disciplinary projects. I am an advocate for Equality, Diversity and Inclusion (EDI) (a member of EDI committee in the School of Computing, Newcastle University) and particularly support women in STEM.
This talk will explore the disruptive and transformative effects of digital technology on gendered security asymmetries in Greenland. The research findings emerged from extended ethnographic fieldwork conducted in Greenland and Denmark, comprising in-depth interviews, collaborative mappings and field observations with 51 participants. Employing a critical feminist lens, the paper identifies how Greenlandic women develop digital security practices to respond to Greenland's ecologically, politically and socially induced transformation processes. By connecting individual security concerns of Greenlandic women with the broader regional context, the findings highlight how digital technology has created transitory spaces in which collective security is cultivated, shaped and challenged. The contribution to security scholarship is therefore threefold: (1) identification and acknowledgement of gendered effects of increased usage of digital technology in remote and hard-to-reach communities, (2) a broader conceptualisation of digital security and (3) a recommendation for more contextualised, pluralistic digitalisation design.
This talk is based on: Wendt, Nicola, Rikke Bjerg Jensen and Lizzie Coles-Kemp. "Civic Empowerment through Digitalisation: the Case of Greenlandic Women." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems - CHI'20, New York, 2020. ACM Press.
Nicola is a PhD candidate supervised across the Information Security Group (Dr Rikke Bjerg Jensen) and the Geography Department (Prof Klaus Dodds) at Royal Holloway and funded by the Leverhulme Trust. In her PhD she focuses on identity formation within an increasingly digitalised public sphere in Greenland and, through this, explores gendered notions of security. Ethnographic in nature and using community-based participatory research methods, Nicola's research investigates the intersection of digital technology and social practices, looking at how experiences of technological transitions are negotiated against a backdrop of historic and contemporary inequalities. She received her BA in International Relations from the University of Groningen and her MA from the Universities of Uppsala and Strasbourg.
Academic research on machine learning-based malware classification appears to leave very little room for improvement, boasting F1 performance figures of up to 0.99. Is the problem solved? In this talk, we argue that there is an endemic issue of inflated results due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data not representative of a real-world deployment, and temporal bias, caused by incorrect splits of training and testing sets (e.g., in cross-validation) leading to impossible configurations. To overcome this issue, we propose a set of space and time constraints for experiment design. Furthermore, we introduce a new metric that summarizes the performance of a classifier over time, i.e., its expected robustness in a real-world setting. Finally, we present an algorithm to tune the performance of a given classifier. We have implemented our solutions in TESSERACT, an open source evaluation framework that allows a fair comparison of malware classifiers in a realistic setting. We used TESSERACT to evaluate two well-known malware classifiers from the literature on a dataset of 129K applications, demonstrating the distortion of results due to experimental bias and showcasing significant improvements from tuning.
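To illustrate what a temporally consistent evaluation looks like (a minimal sketch of the idea, not TESSERACT's actual API; names are my own), training data must strictly predate test data:

    from datetime import datetime

    def temporal_split(samples, labels, timestamps, split_date: datetime):
        """Put every sample observed strictly before split_date in the
        training set and the rest in the test set, so no knowledge of
        'future' malware leaks into training, unlike a random
        cross-validation split."""
        train, test = [], []
        for x, y, t in zip(samples, labels, timestamps):
            (train if t < split_date else test).append((x, y))
        return train, test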
The main results of this talk are published in: Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro. TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. USENIX Security Symposium, 2019.
Fabio Pierazzi is currently a Lecturer (Assistant Professor) in Computer Science at King's College London, where he is also a member of the Cybersecurity (CYS) group. His research expertise is on statistical methods for malware analysis and intrusion detection, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity). Before joining King's College London as a Lecturer in Sep 2019, he obtained his Ph.D. in Computer Science in 2017 from the University of Modena and Reggio Emilia, Italy, under the supervision of Prof. Michele Colajanni; he spent most of 2016 as a Visiting Researcher at the University of Maryland, College Park, USA, under the supervision of Prof. V.S. Subrahmanian; between Oct 2017 and Sep 2019, he was a Post-Doctoral Researcher in the Systems Security Research Lab (S2Lab), first at Royal Holloway, University of London and then at King's College London, under the supervision of Prof. Johannes Kinder and Prof. Lorenzo Cavallaro. Home page: https://fabio.pierazzi.com
It is just over ten years since the first academic work on Automatic Exploit Generation (AEG). In this talk I will provide a brief history of the topic, and explain the current state of the art and open problems. I will then discuss our most recent work on greybox exploit generation against language interpreters. Language interpreters, such as those for Python, PHP, JavaScript, etc., are typically large and complex applications, and are difficult to analyse using whitebox methods such as symbolic execution. In this work we have sought to create an entirely greybox pipeline for AEG. To do so we have broken down the exploit generation problem into several subproblems, constructed greybox solutions for each, and chained these solutions together to produce exploits. Our current implementation can produce exploits for the Python and PHP interpreters, and I will outline our ongoing efforts to extend this to JavaScript interpreters.
Sean Heelan is a co-founder/CTO of Optimyze and a PhD candidate at the University of Oxford. In the former role he develops products for increasing the efficiency of large-scale, cloud-based systems, and in the latter he is investigating automated approaches to exploit generation. Previously he ran Persistence Labs, a reverse engineering tooling company, and worked as a Senior Security Researcher at Immunity Inc. At Immunity he led a team under DARPA's Cyber Fast Track programme, investigating hybrid approaches to vulnerability detection using a mix of static and dynamic analyses.
Much attention in cyber security has turned to new technologies and new materialities of information. This focus overlooks the fact that much security attention in everyday life is oriented around more conventional objects of security, such as documents. In this talk, I discuss why scholars should take documents and other everyday materialities more seriously. I build my argument on ethnographic fieldwork conducted in the South Korean corporate world between 2011 and 2017. First, I suggest that even as organizations become increasingly paperless, documents nevertheless persist as focal objects, serving as idealised informational containers. Second, I suggest that digital security is not distinct from older material forms, such as paper; on the contrary, new digital infrastructures, such as cloud storage, are increasingly developed to protect older forms. Third, documents fit within social practices of protection beyond formal demands of information protection. I demonstrate how the Korean employees I researched with treated documents with extra protection beyond legal requirements. These arguments point to new ways of thinking about how 'everyday' dimensions of security and securitisation are mediated by specific material objects and practices.
Michael Prentice was trained as a linguistic and cultural anthropologist at the University of Michigan, Ann Arbor. His doctoral research focused on the role of genres of communication in modern workplaces, and how they come to articulate ideas of democracy, progress, and global management. He has carried out field research in the South Korean corporate world since 2011. His book manuscript looks at efforts to reform hierarchy in the Korean corporate world. At Manchester, he is a research fellow with the Digital Trust & Security initiative, focused on issues around workplace security. In particular, he is interested in addressing issues surrounding the effects of securitization on everyday work life.
Underground communities attract people interested in illicit activities and easy money-making methods. In this joint talk, we will discuss the role of these forums in two different activities: eWhoring and the use of malware for illicit cryptocurrency mining.
On the one hand, eWhoring is the term used by offenders to refer to an online fraud in which they imitate partners in cyber-sexual encounters. Using all sorts of social engineering skills, offenders aim to scam their victims into paying for sexual material depicting a third party. We have analysed material and tutorials posted in underground forums to shed light on this previously unknown deviant activity.
On the other hand, illicit crypto-mining uses stolen resources to mine cryptocurrencies for free. This threat is now pervasive and growing rapidly. Our talk will cover how this ecosystem is evolving, how much harm it is causing, and how it can be stopped. Our measurements show that criminals have illicitly mined about 4.4% of the Monero cryptocurrency (we estimate that this accounts for 58 million USD). We also observe that a remarkably small number of actors hold sway over this crime. Furthermore, we note an increasing level of support offered by criminals in underground markets, which allows other criminals to run inexpensive malware-driven mining campaigns. This explains why this threat grew sharply in 2018.
Guillermo Suarez-Tangil is a Lecturer in Computer Science at King's College London (KCL). His research focuses on systems security and malware analysis and detection. In particular, his area of expertise lies in the study of smart malware, ranging from the detection of advanced obfuscated malware to automated analysis of targeted malware. Before joining KCL, he was a senior research associate at University College London (UCL), where he explored the use of program analysis to study malware. He has also been actively involved in other research directions aimed at the detection and prevention of Mass-Marketing Fraud (MMF).
Prior to that, he held a post-doctoral position at Royal Holloway, University of London (RHUL), where he was part of the development team of CopperDroid, a tool to dynamically test malware that uses machine learning to model malicious behaviours. He also has solid expertise in building novel data learning algorithms for malware analysis. He obtained his PhD on smart malware analysis at Universidad Carlos III de Madrid with distinction and received the Best National Student Academic Award, a competitive award given to the best thesis in the field of Engineering in 2014-2015, with an acceptance rate of about 1% (around 100 cum laude theses were invited to compete for the single award).
Sergio Pastrana is a Visiting Professor at Universidad Carlos III de Madrid. He received his PhD from the same institution in June 2014. His thesis analyzed the effectiveness of Intrusion Detection Systems and Networks in the presence of adversaries, as well as the problems arising from the use of classical Machine Learning and AI tools in adversarial environments. After completing his PhD, he spent two post-doctoral years working on a research project related to security in the Internet of Things (SPINY). His research focused on the design and evaluation of protocols and systems adapted to the IoT world, as well as attacks and defenses designed for embedded devices.
From October 2016 to October 2018, he worked as a Research Associate (postdoctoral researcher) in the Cambridge Cybercrime Centre at the University of Cambridge. His research focused on the analysis of online communities centred on deviant and criminal topics. His first goal was to gather massive amounts of data from various forums where these communities interact. For that purpose, he developed a web crawler designed with ethical and technical issues at the forefront. The analysis of these data allows us to understand how new forms of cybercrime operate, and the dataset has been or is being used by at least 15 research institutions. His research has been published in prestigious international conferences such as WWW, IMC and RAID, as well as in high-impact international journals.
We put forward the notion of subvector commitments (SVC): an SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups.
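In interface terms (a hypothetical sketch, not the paper's formalism; the class and method names are my own), an SVC exposes roughly the following operations:

    # Hypothetical SVC interface; a concrete scheme instantiates it
    # under, e.g., a variant of the root assumption or CDH.
    class SubvectorCommitment:
        def commit(self, vector: list) -> bytes:
            """Produce a short commitment to the whole vector."""
            raise NotImplementedError

        def open(self, vector: list, positions: list) -> bytes:
            """Produce an opening for the given positions, whose size
            is independent of the vector length and of the number of
            positions opened."""
            raise NotImplementedError

        def verify(self, commitment: bytes, positions: list,
                   values: list, opening: bytes) -> bool:
            """Check that the committed vector takes these values at
            these positions."""
            raise NotImplementedError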
Equipped with these newly developed tools, we revisit the "CS proofs" paradigm [Micali, FOCS 1994], which turns any argument with a public-coin verifier into a non-interactive argument using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any PCP (resp. linear PCP) into a non-interactive argument, using exclusively SVCs (resp. LMCs). For approximately 80 bits of soundness, we highlight the following new implications:
There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime $2^{128}$. At the time of writing, this is the shortest SNARK with public-coin setup.
There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
Mr. Lai is a PhD candidate at the Friedrich-Alexander University Erlangen-Nuremberg, advised by Prof. Dominique Schröder. He received his MPhil degree in Information Engineering in 2016, and his BSc degree in Mathematics and BEng degree in Information Engineering in 2014, all from the Chinese University of Hong Kong. His recent research interests include succinct zero-knowledge proofs, privacy-preserving cryptocurrencies, searchable encryption, and password-based cryptography.
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market.
In this talk, I will show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans using autonomous malware. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. The attack is implemented using a 3D conditional GAN, and the exploitation framework (CT-GAN) is completely automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results and executes in milliseconds.
To evaluate the attack, we will focus on injecting and removing lung cancer in CT scans. We found that three expert radiologists and a state-of-the-art deep learning screening AI were highly susceptible to this attack. Moreover, I will show how this attack can be applied to other medical conditions such as brain tumors. To evaluate the threat, we will explore the attack surface of a modern radiology network and I will demonstrate one attack vector: a covert pen-test I performed on an active hospital to intercept and manipulate CT scans.
Finally, I will conclude by discussing the root causes of this threat, and countermeasures which can be implemented immediately to mitigate it.
Yisroel Mirsky is a postdoctoral fellow in the Institute for Information Security & Privacy at Georgia Tech (Georgia Institute of Technology). He received his PhD from Ben-Gurion University in 2018, where he is still affiliated as a security researcher. His main research interests include online anomaly detection, adversarial machine learning, isolated network security, and blockchain. Yisroel has published his research in some of the best cyber security venues: USENIX, NDSS, Euro S&P, Black Hat, DEF CON, CSF, AISec, etc. His research has also been featured in many well-known media outlets (Popular Science, Scientific American, Wired, Wall Street Journal, Forbes, BBC, etc.). One of Yisroel's recent publications exposed a vulnerability in the USA's 911 emergency services infrastructure. The research was shared with the US Department of Homeland Security and subsequently published in the Washington Post.
The advent of blockchain protocols brought to light a number of applications that could benefit from a large scale Byzantine resilient consensus system. At the same time a number of significant challenges were put forth in terms of scalability, energy efficiency, privacy, and the relevant threat model that such protocols may be proven secure for. In this talk I will give an overview of recent and ongoing research in the area of designing distributed ledgers based on blockchain protocols focusing on results such as the Ouroboros proof of stake blockchain protocols (Crypto'17, Eurocrypt'18, ACM-CCS'18, IEEE S&P'19) as well as other related constructions aiming to improve the interoperability and the incentive structure of distributed ledgers.
Aggelos Kiayias is chair in Cyber Security and Privacy and director of the Blockchain Technology Laboratory at the University of Edinburgh. He is also the Chief Scientist at blockchain technology company IOHK. His research interests are in computer security, information security, applied cryptography and foundations of cryptography with a particular emphasis in blockchain technologies and distributed systems, e-voting and secure multiparty protocols as well as privacy and identity management. His research has been funded by the Horizon 2020 programme (EU), the European Research Council (EU), the Engineering and Physical Sciences Research Council (UK), the Secretariat of Research and Technology (Greece), the National Science Foundation (USA), the Department of Homeland Security (USA), and the National Institute of Standards and Technology (USA). He has received an ERC Starting Grant, a Marie Curie fellowship, an NSF Career Award, and a Fulbright Fellowship. He holds a Ph.D. from the City University of New York and he is a graduate of the Mathematics department of the University of Athens. He has over 100 publications in journals and conference proceedings in the area. He has served as the program chair of the Cryptographers' Track of the RSA conference in 2011 and the Financial Cryptography and Data Security conference in 2017, as well as the general chair of Eurocrypt 2013.
We introduce a formal quantitative notion of "bit security" for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with k-bit security is as hard to break as an ideal cryptographic function requiring a brute force attack on a k-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers. This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems.
This is joint work with Daniele Micciancio.
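For orientation, the traditional search-problem metric that the new definition agrees with can be written (in simplified form; the paper's actual definition handles decision games and failure declarations more carefully) as:

    % k-bit security for a search game (simplified), where T(A) is the
    % running time of adversary A and \epsilon(A) its success probability:
    k = \min_{A} \log_2 \left( \frac{T(A)}{\epsilon(A)} \right)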
Michael studied computer science at TU Darmstadt and graduated with an MSc in 2012. He then started his PhD at UCSD under the supervision of Daniele Micciancio, with a focus on lattice algorithms, and graduated in 2017. Since then he has been a postdoc at IST Austria in the Cryptography group of Krzysztof Pietrzak.
The problem of making computing systems trustworthy is often framed in terms of ensuring that users can trust systems. In contrast, my research illustrates that trustworthy computing intrinsically relies upon social trust in the operation of systems, as much as in the use of systems. Drawing from cases including the Border Gateway Protocol, DNS, and the PGP key server pool, I will show how the trustworthiness of the Internet's infrastructural technologies relies upon interpersonal and institutional trust within the communities of the Internet's technical operations personnel. Through these cases, I will demonstrate how a sociotechnical perspective can aid in the analysis and development of trustworthy computing systems by foregrounding operational trust alongside user trust and technological design.
Ashwin J. Mathew is a lecturer in the Department of Digital Humanities at King's College, London. He is an ethnographer of Internet infrastructure, studying the technologies and technical communities involved in the operation of the global Internet. His research shows how the stability of global Internet infrastructure relies upon a social infrastructure of trust within the Internet's technical communities. In his work, he treats Internet infrastructure as culture, power, politics, and practice, just as much as technology.
He holds a Ph.D. from the UC Berkeley School of Information, and won the 2016 iConference Doctoral Dissertation Award for his research into network operator communities across North America and South Asia. His subsequent research into trust relationships and organisational problems in information security has been funded by the UC Berkeley Center for Long-Term Cybersecurity. Prior to his doctoral work, he spent a decade as a programmer and technical architect in companies such as Adobe Systems and Sun Microsystems.
Scholars argue that contemporary movements in the age of social media are leaderless and self-organised. However, the concept of connective leadership has been put forward to highlight the need for movements to have figures who connect entities together. This paper presents qualitative research on grassroots human rights groups in risky contexts to address the question of how leadership is performed in information and communication technology-enabled activism. The paper reconceptualises connective leadership as decentred, emergent and collectively performed, and provides a broader and richer account of leaders' roles, characteristics and challenges. These findings contribute to the critical literature on the role of ICTs in collective action.
Evronia Azer is an Assistant Professor at the Centre for Business in Society, Faculty of Business and Law, Coventry University. She has recently submitted her PhD thesis, titled 'Information and Communication Technology (ICT)-Enabled Collective Action in Critical Context: A Study of Leadership, Visibility and Trust', at Royal Holloway's School of Business and Management. During her PhD, she received several awards for her research, including the Civil Society Scholar Award from the Open Society Foundations in 2016. With a background in software engineering, Evronia is broadly interested in how technology can provide innovative and creative solutions for societies' problems (ICT4D), and specifically in ICTs in collective action, and data privacy and surveillance.
Cryptographic operations are generally quite costly when performed only in software. In order to improve the performance of a system, such operations can be performed via hardware accelerators. There are different techniques for hardware acceleration: hardware/software co-design, instruction set extensions for processors, hardware-only implementations, etc. In addition to hardware acceleration of cryptographic operations, the computational complexity of cryptography and cryptanalysis problems can also be reduced by dedicated hardware architectures, especially on reconfigurable hardware platforms. The talk will start with an overview of hardware aspects of cryptography (and a bit of cryptanalysis). How and when do we use hardware acceleration in cryptography? What are the different design techniques? Following this, two new cryptographic hardware architectures, specifically designed to be very compact and to perform efficiently on reconfigurable platforms, will be presented. In the first design, the AES-GCM algorithm is implemented mostly using certain dedicated blocks (DSP and BRAM) of a Field Programmable Gate Array (FPGA); in the second design, the new Troika hash function is implemented almost entirely on the BRAM blocks of an FPGA for compactness.
Elif Bilge Kavun has been a Lecturer in Cybersecurity at the Department of Computer Science, The University of Sheffield since January 2019, co-affiliated with the Security of Advanced Systems Research Group. Previously, she was a Digital Design Engineer for Crypto Cores in the Digital Security Solutions division at Infineon (Munich, Germany) and a research assistant at the Horst Goertz Institute for IT Security, Ruhr University Bochum (Bochum, Germany). She completed a PhD in Embedded Security in 2015 at the Faculty of Electrical Engineering and Information Technology, Ruhr University Bochum. Her research interests are in hardware security, design and implementation of cryptographic primitives, lightweight cryptography, secure processors, and side-channel attacks and countermeasures.
Feminist theorists of international relations (IR) have long argued that binaries of public/private reinforce the subsidiary status given to gendered insecurities, so that these security problems are 'individualised' and taken out of the public and political domain. This talk will outline the relevance of feminist critiques of security studies and argue that the emerging field of cybersecurity risks recreating these dynamics by omitting or dismissing gendered technologically-facilitated abuse such as 'revenge porn' and intimate partner violence (IPV). I will present a review of forty smart home security analysis papers to show the threat model of IPV is almost entirely absent in this literature. I conclude by outlining some suggestions for cybersecurity research and design, particularly my work on 'abusability testing', and reaffirming the importance of critical studies of information architecture.
Julia Slupska is a doctoral student at the Centre for Doctoral Training in Cybersecurity. Her research focuses on the ethical implications of conceptual models of cybersecurity. Currently, she is studying cybersecurity in the context of intimate partner violence and the use of simulations in political decision-making. Previously, she completed the MSc in Social Science of the Internet, focusing on the role of metaphors in international cybersecurity policy. Before joining the OII, Julia worked on an LSE Law project on comparative regional integration and coordinated a course on Economics in Foreign Policy for the Foreign and Commonwealth Office. She also works as a freelance photographer.
Vast amounts of information of all types are collected daily about people by governments, corporations and individuals. The information is collected, for example, when users register for or use online applications, receive health-related services, use their mobile phones, utilize search engines, or perform common daily activities. As a result, there is an enormous quantity of privately-owned records that describe individuals' finances, interests, activities, and demographics. These records often include sensitive data and may violate the privacy of the users if published. The common approach to safeguarding user information, or data in general, is to limit access to the storage (usually a database) by using an authentication and authorization protocol. This way, only users with legitimate permissions can access the user data. However, even in these cases some of the data is required to stay hidden or accessible only to a specific subset of authorized users. Our talk focuses on possible malicious behavior by users with both partial and full access to queries over data. We look at privacy attacks that are meant to gather hidden information, and show methods that rely mainly on the underlying data structure, query types and behavior, and the data format of the database. We will show how to identify the potential weaknesses and attack vectors for various scenarios and data types, and offer defenses against them.
Joint CS/ISG seminar.
Michael Segal is a Professor of Communication Systems Engineering at Ben-Gurion University of the Negev, known for his work in ad-hoc and sensor networks. Segal has published over 160 scientific papers and serves as the Editor-in-Chief of the Journal of Computer and System Sciences. He is a past head of the Department (2005-2010) and has held visiting professorships at the Universities of Cambridge and Liverpool. Prof. Segal tackles fundamental optimization problems with applications in transportation, station placement, communication, facility location, graph theory, statistics, selection, geometric pattern matching, layout of VLSI circuits and enumeration. His research has been funded by many academic and industrial organizations, including the Israeli Science Foundation, US Army Research Office, Deutsche Telekom, IBM, France Telecom, Intel, the Israeli Innovation Agency, General Motors and many others.
Many voter-verifiable, coercion-resistant schemes have been proposed, but even the most carefully designed voting systems necessarily leak information via the announced result. In corner cases, this may be problematic. For example, if all the votes go to one candidate then all vote privacy evaporates. The mere possibility of candidates getting no or few votes could have implications for security in practice: if a coercer demands that a voter cast a vote for such an unpopular candidate, then the voter may feel obliged to obey, even if she is confident that the voting system satisfies the standard coercion resistance definitions. With complex ballots, there may also be a danger of "Italian" style (aka "signature") attacks: the coercer demands the voter cast a ballot with a very specific, identifying pattern of votes.
Here we propose an approach to tallying end-to-end verifiable schemes that avoids revealing all the votes but still achieves whatever confidence level in the announced result is desired. Now a coerced voter can claim that the required vote must be amongst those that remained shrouded. Our approach is based on the well-established notion of Risk-Limiting Audits (RLA), but here applied to the tally rather than to the audit. We show that this approach counters coercion threats arising in extreme tallies and "Italian" attacks.
The approach can be applied to most end-to-end verifiable schemes, but for the purposes of illustration I will outline the Selene scheme, which provides a particularly transparent form of voter-verification. This also allows me to describe an extension of the idea to Risk-Limiting Verification (RLV), where not all vote trackers are revealed, thereby enhancing the coercion mitigation properties of Selene.
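As a toy illustration of the shrouding idea (a deterministic, risk-zero simplification of the paper's statistical approach, and entirely my own sketch; in a real scheme the shuffle would have to be verifiable), one can reveal randomly shuffled votes only until the shrouded remainder can no longer change the winner:

    import random
    from collections import Counter

    def partial_tally(votes: list):
        """Reveal shuffled votes until the leader's margin over the
        runner-up exceeds the number of still-shrouded ballots, so the
        winner is certain while many individual votes stay hidden."""
        if not votes:
            return None, 0
        shuffled = list(votes)
        random.shuffle(shuffled)
        counts = Counter()
        for revealed, vote in enumerate(shuffled, start=1):
            counts[vote] += 1
            top = counts.most_common(2)
            margin = top[0][1] - (top[1][1] if len(top) > 1 else 0)
            if margin > len(shuffled) - revealed:
                return top[0][0], revealed  # (winner, ballots revealed)
        return counts.most_common(1)[0][0], len(shuffled)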
Peter Ryan has been full Professor of Applied Security at the University of Luxembourg since February 2009. Since joining the University of Luxembourg he has grown the APSIA (Applied Security and Information Assurance) group, which is now more than 25 strong. He has around 25 years of experience in cryptography, information assurance and formal verification. He pioneered the application of process calculi to the modelling and analysis of secure systems, in particular presenting the first process algebraic characterization of non-interference taking account of non-determinism (CSFW 1990). While at the Defence Research Agency, he initiated and led the "Modelling and Analysis of Security Protocols" project, which pioneered the application of process algebra (CSP) and model-checking tools (FDR) to the analysis of security protocols.
He has published extensively on cryptography, cryptographic protocols, security policies, mathematical models of computer security and, most recently, voter-verifiable election systems. He is the creator of the (polling station) Prêt à Voter and, with V. Teague, the (internet) Pretty Good Democracy verifiable voting schemes. He was also co-designer of the vVote system, based on Prêt à Voter, that was used successfully in Victoria State in November 2015. Most recently he developed the voter-friendly E2E verifiable scheme Selene. With Feng Hao, he also developed the OpenVote boardroom voting scheme and the J-PAKE password-based authenticated key establishment protocol.
Prior to taking up the Chair in Luxembourg, he held a Chair at the University of Newcastle. Before that he worked at the Government Communications Headquarters (GCHQ), the Defence Research Agency (DRA) Malvern, the Stanford Research Institute (SRI), Cambridge UK, and the Software Engineering Institute, CMU Pittsburgh.
He was awarded a PhD in mathematical physics from the University of London in 1982. Peter Ryan sits or has sat on the program committees of numerous prestigious security conferences, notably IEEE Security and Privacy, the IEEE Computer Security Foundations Workshop/Symposium (CSF), the European Symposium on Research in Computer Security (ESORICS), and the Workshop on Issues in Security (WITS). He is General Chair of ESORICS 2019. He was (co-)chair of WITS'04, co-chair of ESORICS'04, the Frontiers of Electronic Elections (FEE) 2005 workshop, the Workshop on Trustworthy Elections (WOTE) 2007, VoteId 2009 and ESORICS 2015. In 2016 he founded the Verifiable Voting Workshops, held in association with Financial Crypto. From 1999 to 2007 he was the President of the ESORICS Steering Committee. In 2013 he was awarded the ESORICS Outstanding Service Award.
He is a Visiting Professor at Surrey University and the ENS Paris.