In this work we advance the study of leakage-resilient Authenticated Encryption with Associated Data (AEAD) and lay the theoretical groundwork for building such schemes from sponges. Building on the work of Barwell et al. (ASIACRYPT 2017), we reduce the problem of constructing leakage-resilient AEAD schemes to that of building fixed-input-length function families that retain pseudorandomness and unpredictability in the presence of leakage. Notably, neither property is implied by the other in the leakage-resilient setting. We then show that such a function family can be combined with standard primitives, namely a pseudorandom generator and a collision-resistant hash, to yield a nonce-based AEAD scheme. In addition, our construction is quite efficient in that it requires only two calls to this leakage-resilient function per encryption or decryption call. This construction can be instantiated entirely from the T-sponge to yield a concrete AEAD scheme which we call SLAE. We prove this sponge-based instantiation secure in the non-adaptive leakage setting. SLAE bears many similarities to, and is indeed inspired by, ISAP, which was proposed by Dobraunig et al. at FSE 2017. However, while retaining most of the practical advantages of ISAP, SLAE additionally benefits from a formal security treatment.
Christian is a Postdoc in the Cryptoplexity team led by Marc Fischlin. Prior to this he was a Ph.D. student in the Information Security Group at Royal Holloway, University of London under the supervision of Carlos Cid.
It is well known that older adults continue to lag behind younger adults in terms of their breadth of uptake of digital technologies, amount and quality of engagement in these tools and ability to critically engage with the online world. Can these differences be explained by older adults’ distrust of digital technologies? Is trust, therefore, a critical design consideration for appealing to older adults? In this talk I will argue that while distrust is not, in fact, determinative of non-use and therefore does not explain these differences in tech usage, it is nonetheless key for designers to understand older adult distrust in developing socially responsible technologies.
Bran is a lecturer in the Data Science Institute at Lancaster University. Her research explores the social impacts of computing, with a particular interest in trust, privacy, and ethics. Her recent work has explored these issues at both ends of the age spectrum, with projects such as IoT4Kids, looking at the privacy, security and ethical issues of enabling children to program IoT devices; and Mobile Age, looking at developing mobile apps for older adults. Bran currently serves as a member of the ACM Europe Technology Policy Committee.
Mobile sensors have already proven to be helpful in different aspects of people's everyday lives, such as fitness, gaming, and navigation. However, illegitimate access to these sensors can provide a malicious program with an exploitation path. While users benefit from richer and more personalized apps, the growing number of sensors introduces new security and privacy risks to end users and makes the task of sensor management more complex. In this talk, we first discuss the issues around the security and privacy of mobile sensors. We investigate the available sensors on mainstream mobile devices and study the permission policies that Android, iOS and mobile web browsers offer for them. Second, we reflect on the results of two workshops that we organized on mobile sensor security. In these workshops, the participants were introduced to mobile sensors by working with sensor-enabled apps. We evaluated the risk levels perceived by the participants for these sensors after they understood the sensors' functionalities. The results showed that getting to know sensors by working with sensor-enabled apps does not immediately improve users' inference of the actual risks these sensors pose. However, other factors, such as prior general knowledge about these sensors and their risks, had a strong impact on the users' perception. We also taught the participants ways to audit their apps and their permissions. Our findings showed that when mobile users were provided with reasonable choices and intuitive teaching, they could easily self-direct themselves to improve their security and privacy. Finally, we provide recommendations for educators, app developers, and mobile users to contribute toward awareness and education on this topic.
*** I have a PhD studentship for Sep 2020 on "Cyber Security in Farm and Companion Animal Technologies" (schools of computing and agriculture) at Newcastle University. If you are interested, come and talk to me after the presentation, or email me any time.
I am a Research Fellow in Cyber Security, School of Computing, Newcastle University (NU), UK. I have a PhD in Computing Science, MSc and BSc in Computer Engineering. I work on Sensor, Mobile, and IoT Security, Security Standardisation, and Usable Security and Privacy. I work with W3C as an invited expert on sensor specifications. I am particularly interested in real-world multi-disciplinary projects. I am an advocate for Equality, Diversity and Inclusion (EDI) (a member of EDI committee in the School of Computing, Newcastle University) and particularly support women in STEM.
This talk will explore the disruptive and transformative effects of digital technology on gendered security asymmetries in Greenland. Through extended ethnographic fieldwork conducted in Greenland and Denmark, research findings emerged through in-depth interviews, collaborative mappings and field observations with 51 participants. Employing a critical feminist lens, the paper identifies how Greenlandic women develop digital security practices to respond to Greenland's ecologically, politically and socially induced transformation processes. By connecting individual security concerns of Greenlandic women with the broader regional context, the findings highlight how digital technology has created transitory spaces in which collective security is cultivated, shaped and challenged. The contribution to security scholarship is therefore threefold: (1) identification and acknowledgement of gendered effects of increased usage of digital technology in remote and hard-to-reach communities, (2) a broader conceptualisation of digital security and (3) a recommendation for more contextualised, pluralistic digitalisation design.
This talk is based on: Wendt, Nicola, Rikke Bjerg Jensen and Lizzie Coles-Kemp. "Civic Empowerment through Digitalisation: the Case of Greenlandic Women." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems - CHI'20, New York, 2020. ACM Press.
Nicola is a PhD candidate supervised across the Information Security Group (Dr Rikke Bjerg Jensen) and the Geography Department (Prof Klaus Dodds) at Royal Holloway and funded by the Leverhulme Trust. In her PhD she focuses on identity formation within an increasingly digitalised public sphere in Greenland and, through this, explores gendered notions of security. Ethnographic in nature and using community-based participatory research methods, Nicola’s research investigates the intersection of digital technology and social practices, looking at how experiences of technological transitions are negotiated against a backdrop of historic and contemporary inequalities. She received her BA in International Relations from the University of Groningen and her MA from the Universities of Uppsala and Strasbourg.
Academic research on machine learning-based malware classification appears to leave very little room for improvement, boasting F1 performance figures of up to 0.99. Is the problem solved? In this talk, we argue that there is an endemic issue of inflated results due to two pervasive sources of experimental bias: spatial bias, caused by distributions of training and testing data not representative of a real-world deployment, and temporal bias, caused by incorrect splits of training and testing sets (e.g., in cross-validation) leading to impossible configurations. To overcome this issue, we propose a set of space and time constraints for experiment design. Furthermore, we introduce a new metric that summarizes the performance of a classifier over time, i.e., its expected robustness in a real-world setting. Finally, we present an algorithm to tune the performance of a given classifier. We have implemented our solutions in TESSERACT, an open source evaluation framework that allows a fair comparison of malware classifiers in a realistic setting. We used TESSERACT to evaluate two well-known malware classifiers from the literature on a dataset of 129K applications, demonstrating the distortion of results due to experimental bias and showcasing significant improvements from tuning.
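The temporal bias the abstract describes comes from letting "future" samples leak into the training set, as random cross-validation does. A minimal sketch of the time-aware split the constraint implies (data, field names and the `temporal_split` helper are illustrative, not TESSERACT's actual API):

```python
from datetime import datetime

# Hypothetical app records: (first-seen timestamp, features, label).
apps = [
    (datetime(2014, 1, 5), {"api_calls": 12}, "malware"),
    (datetime(2014, 6, 1), {"api_calls": 3}, "goodware"),
    (datetime(2015, 2, 9), {"api_calls": 7}, "malware"),
    (datetime(2015, 8, 20), {"api_calls": 4}, "goodware"),
]

def temporal_split(records, cutoff):
    """Train strictly on samples observed before `cutoff` and test
    only on samples observed at or after it, so the classifier never
    sees data from the future at training time -- unlike a random
    k-fold split, which mixes both sides of the timeline."""
    train = [r for r in records if r[0] < cutoff]
    test = [r for r in records if r[0] >= cutoff]
    return train, test

train, test = temporal_split(apps, datetime(2015, 1, 1))
# Every training sample predates every test sample.
assert max(r[0] for r in train) < min(r[0] for r in test)
```

Evaluating the test set in consecutive time windows, rather than as one pool, is what enables the time-aware performance metric mentioned above.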
The main results of this talk are published in: - Feargus Pendlebury, Fabio Pierazzi, Roberto Jordaney, Johannes Kinder, Lorenzo Cavallaro. TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time. USENIX Security Symposium, 2019.
Fabio Pierazzi is currently a Lecturer (Assistant Professor) in Computer Science at King's College London, where he is also a member of the Cybersecurity (CYS) group. His research expertise is in statistical methods for malware analysis and intrusion detection, with a particular emphasis on settings in which attackers adapt quickly to new defenses (i.e., high non-stationarity). Before joining King's College London as a Lecturer in Sep 2019, he obtained his Ph.D. in Computer Science in 2017 from the University of Modena and Reggio Emilia, Italy, under the supervision of Prof. Michele Colajanni; he spent most of 2016 as a Visiting Researcher at the University of Maryland, College Park, USA, under the supervision of Prof. V.S. Subrahmanian; between Oct 2017 and Sep 2019, he was a Post-Doctoral Researcher in the Systems Security Research Lab (S2Lab), first at Royal Holloway, University of London and then at King's College London, under the supervision of Prof. Johannes Kinder and Prof. Lorenzo Cavallaro. Home page: https://fabio.pierazzi.com
Sean Heelan is a co-founder/CTO of Optimyze and a PhD candidate at the University of Oxford. In the former role he develops products for increasing the efficiency of large-scale, cloud-based systems, and in the latter he is investigating automated approaches to exploit generation. Previously he ran Persistence Labs, a reverse engineering tooling company, and worked as a Senior Security Researcher at Immunity Inc. At Immunity he led a team under DARPA's Cyber Fast Track programme, investigating hybrid approaches to vulnerability detection using a mix of static and dynamic analyses.
Much attention in cyber security has turned to new technologies and new materialities of information. These overlook the fact that much of security attention in everyday life is oriented around more conventional objects of security, such as documents. In this talk, I discuss why scholars should take documents and other everyday materialities more seriously. I build my argument based on ethnographic fieldwork conducted in the South Korean corporate world between 2011 and 2017. First, I suggest that even as organizations are increasingly paperless, documents nevertheless persist as focal objects, serving as idealised informational containers. Second, I suggest that digital security is not distinct from older material forms, such as paper; in contrast, new digital infrastructures are increasingly developed to protect older forms, such as cloud storage. Third, documents fit within social practices of protection beyond formal demands of information protection. I demonstrate how Korean employees I researched with treated documents with extra protection beyond legal requirements. These arguments point to new ways of thinking about how 'everyday' dimensions of security and securitisation are mediated by specific material objects and practices.
Michael Prentice was trained as a linguistic and cultural anthropologist at the University of Michigan, Ann Arbor. His doctoral research focused on the role of genres of communication in modern workplaces, and how they come to articulate ideas of democracy, progress, and global management. He has carried out field research in the South Korean corporate world since 2011. His book manuscript looks at efforts to reform hierarchy in the Korean corporate world. At Manchester, he is a research fellow with the Digital Trust & Security initiative, focused on issues around workplace security. In particular, he is interested in addressing issues surrounding the effects of securitization on everyday work life.
Underground communities attract people interested in illicit activities and easy money-making methods. In this joint talk, we will discuss the role of these forums in two different activities: eWhoring and the use of malware for illicit cryptocurrency mining.
On the one hand, eWhoring is the term used by offenders to refer to an online fraud in which they imitate partners in cyber-sexual encounters. Using all sorts of social engineering skills, offenders aim to scam their victims into paying for sexual material of a third-party person. We have analysed material and tutorials posted in underground forums to shed light on this previously unknown deviant activity.
On the other hand, illicit crypto-mining uses stolen computing resources to mine cryptocurrencies for free. This threat is now pervasive and growing rapidly. Our talk will cover how this ecosystem is evolving, how much harm it is causing, and how it can be stopped. Our measurements show that criminals have illicitly mined about 4.4% of the Monero cryptocurrency (we estimate that this accounts for 58 million USD). We also observe that a considerably small number of actors hold sway over this crime. Furthermore, we note an increasing level of support offered by criminals in underground markets, which allows other criminals to run inexpensive malware-driven mining campaigns. This explains why this threat grew sharply in 2018.
Guillermo Suarez-Tangil is a Lecturer of Computer Science at King's College London (KCL). His research focuses on systems security and malware analysis and detection. In particular, his area of expertise lies in the study of smart malware, ranging from the detection of advanced obfuscated malware to automated analysis of targeted malware. Before joining KCL, he was a senior research associate at University College London (UCL), where he explored the use of program analysis to study malware. He has also been actively involved in other research directions aiming at detecting and preventing Mass-Marketing Fraud (MMF).
Prior to that, he held a post-doctoral position at Royal Holloway, University of London (RHUL), where he was part of the development team of CopperDroid, a tool that dynamically tests malware and uses machine learning to model malicious behaviours. He also has solid expertise in building novel data-learning algorithms for malware analysis. He obtained his PhD on smart malware analysis at Carlos III University of Madrid with distinction and received the Best National Student Academic Award---a competitive award given to the best thesis in the field of Engineering between 2014 and 2015, with about a 1% acceptance rate (about 100 Cum Laude theses were invited to compete for the single award).
Sergio Pastrana is a Visiting Professor at Universidad Carlos III de Madrid, where he also obtained his PhD in June 2014. His thesis analyzed the effectiveness of Intrusion Detection Systems and Networks in the presence of adversaries, as well as the problems arising from the use of classical Machine Learning and AI tools in adversarial environments. After completing his PhD, he spent two post-doctoral years working on a research project on security in the Internet of Things (SPINY). His research focused on the design and evaluation of protocols and systems adapted to the IoT world, as well as attacks and defenses designed for embedded devices.
From October 2016 to October 2018, he worked as a Research Associate (postdoctoral researcher) in the Cambridge Cybercrime Centre at the University of Cambridge. His research focused on the analysis of online communities centred on deviant and criminal topics. His first goal was to gather massive amounts of data from various forums where these communities interact. For that purpose, he developed a web crawler designed with ethical and technical issues at the forefront. The analysis of these data allows us to understand how new forms of cybercrime operate, and the dataset has been or is being used by at least 15 research institutions. His research has been published in prestigious international conferences such as WWW, IMC and RAID, as well as in high-impact international journals.
We put forward the notion of subvector commitments (SVC): an SVC allows one to open a committed vector at a set of positions, where the opening size is independent of both the length of the committed vector and the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups.
Equipped with these newly developed tools, we revisit the "CS proofs" paradigm [Micali, FOCS 1994], which turns any argument with a public-coin verifier into a non-interactive argument using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any (linear, resp.) PCP into a non-interactive argument, using exclusively SVCs (LMCs, resp.). For approximately 80 bits of soundness, we highlight the following new implications:
There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime $2^{128}$. At the time of writing, this is the shortest SNARK with public-coin setup.
There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
Mr. Lai is a PhD candidate in the Friedrich-Alexander University Erlangen-Nuremberg advised by Prof. Dominique Schröder. He received his MPhil degree in Information Engineering in 2016, his BSc degree in Mathematics and BEng degree in Information Engineering in 2014, all from the Chinese University of Hong Kong. His recent research interests include succinct zero-knowledge proofs, privacy-preserving cryptocurrencies, searchable encryption, and password-based cryptography.
In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market.
In this talk, I will show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans using autonomous malware. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. The attack is implemented using a 3D conditional GAN, and the exploitation framework (CT-GAN) is completely automated. Although the body is complex and 3D medical scans are very large, CT-GAN produces realistic results in milliseconds.
To evaluate the attack, we will focus on injecting and removing lung cancer in CT scans. We found that three expert radiologists and a state-of-the-art deep learning screening AI were highly susceptible to this attack. Moreover, I will show how this attack can be applied to other medical conditions such as brain tumors. To evaluate the threat, we will explore the attack surface of a modern radiology network and I will demonstrate one attack vector: a covert pen-test I performed on an active hospital to intercept and manipulate CT scans.
Finally, I will conclude by discussing the root causes of this threat, and countermeasures which can be implemented immediately to mitigate it.
Yisroel Mirsky is a postdoctoral fellow in the Institute for Information Security & Privacy at Georgia Tech (Georgia Institute of Technology). He received his PhD from Ben-Gurion University in 2018, where he is still affiliated as a security researcher. His main research interests include online anomaly detection, adversarial machine learning, isolated network security, and blockchain. Yisroel has published his research at some of the best cyber security venues: USENIX, NDSS, Euro S&P, Black Hat, DEF CON, CSF, AISec, etc. His research has also been featured in many well-known media outlets (Popular Science, Scientific American, Wired, Wall Street Journal, Forbes, BBC…). One of Yisroel's recent publications exposed a vulnerability in the USA's 911 emergency services infrastructure. The research was shared with the US Department of Homeland Security and subsequently published in the Washington Post.
The advent of blockchain protocols brought to light a number of applications that could benefit from a large scale Byzantine resilient consensus system. At the same time a number of significant challenges were put forth in terms of scalability, energy efficiency, privacy, and the relevant threat model that such protocols may be proven secure for. In this talk I will give an overview of recent and ongoing research in the area of designing distributed ledgers based on blockchain protocols focusing on results such as the Ouroboros proof of stake blockchain protocols (Crypto'17, Eurocrypt'18, ACM-CCS'18, IEEE S&P'19) as well as other related constructions aiming to improve the interoperability and the incentive structure of distributed ledgers.
Aggelos Kiayias is chair in Cyber Security and Privacy and director of the Blockchain Technology Laboratory at the University of Edinburgh. He is also the Chief Scientist at blockchain technology company IOHK. His research interests are in computer security, information security, applied cryptography and foundations of cryptography with a particular emphasis in blockchain technologies and distributed systems, e-voting and secure multiparty protocols as well as privacy and identity management. His research has been funded by the Horizon 2020 programme (EU), the European Research Council (EU), the Engineering and Physical Sciences Research Council (UK), the Secretariat of Research and Technology (Greece), the National Science Foundation (USA), the Department of Homeland Security (USA), and the National Institute of Standards and Technology (USA). He has received an ERC Starting Grant, a Marie Curie fellowship, an NSF Career Award, and a Fulbright Fellowship. He holds a Ph.D. from the City University of New York and he is a graduate of the Mathematics department of the University of Athens. He has over 100 publications in journals and conference proceedings in the area. He has served as the program chair of the Cryptographers’ Track of the RSA conference in 2011 and the Financial Cryptography and Data Security conference in 2017, as well as the general chair of Eurocrypt 2013.
We introduce a formal quantitative notion of “bit security” for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with k-bit security is as hard to break as an ideal cryptographic function requiring a brute force attack on a k-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers. This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems.
This is joint work with Daniele Micciancio
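For context, the conventional measures the abstract contrasts can be summarised as follows (a standard textbook-style formulation, not the paper's exact definition):

```latex
% Traditional bit security for search games (e.g., key recovery):
% an adversary A running in time T(A) with success probability
% \epsilon(A) "costs" T(A)/\epsilon(A), and bit security is the
% log-cost of the cheapest attack.
k_{\mathrm{search}} \;=\; \min_{A}\ \log_2 \frac{T(A)}{\epsilon(A)}.
% The straightforward generalization to decision games replaces
% \epsilon(A) with the distinguishing advantage against the
% secret bit b,
\delta(A) \;=\; 2\,\Pr[A \text{ outputs } b] - 1,
% and it is this naive generalization that yields the paradoxes
% the talk describes; the new definition instead measures
% adversaries that may explicitly declare failure.
```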
Michael studied computer science at TU Darmstadt and graduated with an MSc in 2012. He then started his PhD at UCSD under the supervision of Daniele Micciancio with a focus on lattice algorithms and graduated in 2017. Since then he has been a postdoc at IST Austria in the Cryptography group of Krzysztof Pietrzak.
The problem of making computing systems trustworthy is often framed in terms of ensuring that users can trust systems. In contrast, my research illustrates that trustworthy computing intrinsically relies upon social trust in the operation of systems, as much as in the use of systems. Drawing from cases including the Border Gateway Protocol, DNS, and the PGP key server pool, I will show how the trustworthiness of the Internet's infrastructural technologies relies upon interpersonal and institutional trust within the communities of the Internet's technical operations personnel. Through these cases, I will demonstrate how a sociotechnical perspective can aid in the analysis and development of trustworthy computing systems by foregrounding operational trust alongside user trust and technological design.
Ashwin J. Mathew is a lecturer in the Department of Digital Humanities at King's College, London. He is an ethnographer of Internet infrastructure, studying the technologies and technical communities involved in the operation of the global Internet. His research shows how the stability of global Internet infrastructure relies upon a social infrastructure of trust within the Internet's technical communities. In his work, he treats Internet infrastructure as culture, power, politics, and practice, just as much as technology.
He holds a Ph.D. from the UC Berkeley School of Information, and won the 2016 iConference Doctoral Dissertation Award for his research into network operator communities across North America and South Asia. His subsequent research into trust relationships and organisational problems in information security has been funded by the UC Berkeley Center for Long-Term Cybersecurity. Prior to his doctoral work, he spent a decade as a programmer and technical architect in companies such as Adobe Systems and Sun Microsystems.
Scholars argue that contemporary movements in the age of social media are leaderless and self-organised. However, the concept of connective leadership has been put forward to highlight the need for movements to have figures who connect entities together. This paper presents a qualitative study of grassroots human rights groups in risky contexts to address the question of how leadership is performed in information and communication technology (ICT)-enabled activism. The paper reconceptualises connective leadership as decentred, emergent and collectively performed, and provides a broader and richer account of leaders' roles, characteristics and challenges. These findings contribute to the critical literature on the role of ICTs in collective action.
Evronia Azer is an Assistant Professor at the Centre for Business in Society, Faculty of Business and Law, Coventry University. She has recently submitted her PhD thesis titled: “Information and Communication Technology (ICT)-Enabled Collective Action in Critical Context: A Study of Leadership, Visibility and Trust”, at Royal Holloway’s School of Business and Management. During her PhD, she received different awards for her research, including the Civil Society Scholar Award from Open Society Foundations in 2016. With a background in software engineering, Evronia is broadly interested in how technology can provide innovative and creative solutions for societies’ problems; ICT4D, and specifically interested in ICTs in collective action, and data privacy and surveillance.
Cryptographic operations are generally quite costly when performed only in software. To improve the performance of a system, such operations can be offloaded to hardware accelerators. There are different techniques for hardware acceleration: hardware/software co-design, instruction set extensions for processors, hardware-only implementations, etc. In addition to hardware acceleration of cryptographic operations, the computational complexity of cryptography and cryptanalysis problems can also be reduced by dedicated hardware architectures, especially on reconfigurable hardware platforms. The talk will start with an overview of the hardware aspects of cryptography (and a bit of cryptanalysis). How and when do we use hardware acceleration in cryptography? What are the different design techniques? Following this, two new cryptographic hardware architectures, specifically designed to be very compact and perform efficiently on reconfigurable platforms, will be presented. In the first design, the AES-GCM algorithm is implemented mostly using certain dedicated blocks (DSP and BRAM) of a Field Programmable Gate Array (FPGA); in the second design, the new Troika hash function is implemented almost entirely on the BRAM blocks of an FPGA for compactness.
Elif Bilge Kavun has been a Lecturer in Cybersecurity at the Department of Computer Science, The University of Sheffield since January 2019, co-affiliated with the Security of Advanced Systems Research Group. Previously, she was a Digital Design Engineer for Crypto Cores at the Digital Security Solutions division, Infineon (Munich, Germany) and a research assistant at the Horst Goertz Institute for IT Security, Ruhr University Bochum (Bochum, Germany). She completed a PhD in Embedded Security in 2015 at the Faculty of Electrical Engineering and Information Technology, Ruhr University Bochum (Bochum, Germany). Her research interests are in hardware security, design and implementation of cryptographic primitives, lightweight cryptography, secure processors, and side-channel attacks and countermeasures.
Feminist theorists of international relations (IR) have long argued that binaries of public/private reinforce the subsidiary status given to gendered insecurities, so that these security problems are ‘individualised’ and taken out of the public and political domain. This talk will outline the relevance of feminist critiques of security studies and argue that the emerging field of cybersecurity risks recreating these dynamics by omitting or dismissing gendered technologically-facilitated abuse such as ‘revenge porn’ and intimate partner violence (IPV). I will present a review of forty smart home security analysis papers to show the threat model of IPV is almost entirely absent in this literature. I conclude by outlining some suggestions for cybersecurity research and design, particularly my work on “abusability testing”, and reaffirming the importance of critical studies of information architecture.
Julia Slupska is a doctoral student at the Centre for Doctoral Training in Cybersecurity. Her research focuses on the ethical implications of conceptual models of cybersecurity. Currently, she is studying cybersecurity in the context of intimate partner violence and the use of simulations in political decision-making. Previously, she completed an MSc in Social Science of the Internet, focusing on the role of metaphors in international cybersecurity policy. Before joining the OII, Julia worked on an LSE Law project on comparative regional integration and coordinated a course on Economics in Foreign Policy for the Foreign and Commonwealth Office. She also works as a freelance photographer.
Joint CS/ISG seminar.
Vast amounts of information of all types are collected daily about people by governments, corporations and individuals. The information is collected, for example, when users register for or use online applications, receive health-related services, use their mobile phones, utilize search engines, or perform common daily activities. As a result, there is an enormous quantity of privately-owned records that describe individuals' finances, interests, activities, and demographics. These records often include sensitive data and may violate the privacy of the users if published. The common approach to safeguarding user information, or data in general, is to limit access to the storage (usually a database) by using an authentication and authorization protocol. This way, only users with legitimate permissions can access the user data. However, even in these cases some of the data is required to stay hidden or accessible only to a specific subset of authorized users. Our talk focuses on possible malicious behavior by users with both partial and full access to queries over data. We look at privacy attacks that are meant to gather hidden information and show methods that rely mainly on the underlying data structure, query types and behavior, and the data format of the database. We will show how to identify the potential weaknesses and attack vectors for various scenarios and data types, and offer defenses against them.
Michael Segal is a Professor of Communication Systems Engineering at Ben-Gurion University of the Negev, known for his work in ad-hoc and sensor networks. Segal has published over 160 scientific papers and serves as the Editor-in-Chief of the Journal of Computer and System Sciences. He is a past head of the Department (2005-2010) and has held visiting professorships at the Universities of Cambridge and Liverpool. Prof. Segal tackles fundamental optimization problems with applications in transportation, station placement, communication, facility location, graph theory, statistics, selection, geometric pattern matching, layout of VLSI circuits and enumeration. His research has been funded by many academic and industrial organizations, including the Israeli Science Foundation, US Army Research Office, Deutsche Telekom, IBM, France Telecom, Intel, the Israeli Innovation Agency, General Motors and many others.
Many voter-verifiable, coercion-resistant schemes have been proposed, but even the most carefully designed voting systems necessarily leak information via the announced result. In corner cases, this may be problematic. For example, if all the votes go to one candidate then all vote privacy evaporates. The mere possibility of candidates getting no or few votes could have implications for security in practice: if a coercer demands that a voter cast a vote for such an unpopular candidate, then the voter may feel obliged to obey, even if she is confident that the voting system satisfies the standard coercion-resistance definitions. With complex ballots, there may also be a danger of "Italian" style (aka "signature") attacks: the coercer demands the voter cast a ballot with a very specific, identifying pattern of votes.
Here we propose an approach to tallying end-to-end verifiable schemes that avoids revealing all the votes but still achieves whatever confidence level in the announced result is desired. Now a coerced voter can claim that the required vote must be amongst those that remained shrouded. Our approach is based on the well-established notion of Risk-Limiting Audits (RLA), but here applied to the tally rather than to the audit. We show that this approach counters coercion threats arising in extreme tallies and "Italian" attacks.
The approach can be applied to most end-to-end verifiable schemes, but for the purposes of illustration I will outline the Selene scheme, which provides a particularly transparent form of voter-verification. This also allows me to describe an extension of the idea to Risk-Limiting Verification (RLV), where not all vote trackers are revealed, thereby enhancing the coercion-mitigation properties of Selene.
Peter Ryan has been full Professor of Applied Security at the University of Luxembourg since February 2009. Since joining the University of Luxembourg he has grown the APSIA (Applied Security and Information Assurance) group, which is now more than 25 strong. He has around 25 years of experience in cryptography, information assurance and formal verification. He pioneered the application of process calculi to the modelling and analysis of secure systems, in particular presenting the first process-algebraic characterization of non-interference taking account of non-determinism (CSFW 1990). While at the Defence Research Agency, he initiated and led the "Modelling and Analysis of Security Protocols" project, which pioneered the application of process algebra (CSP) and model-checking tools (FDR) to the analysis of security protocols.
He has published extensively on cryptography, cryptographic protocols, security policies, mathematical models of computer security and, most recently, voter-verifiable election systems. He is the creator of the (polling station) Prêt à Voter and, with V. Teague, the (internet) Pretty Good Democracy verifiable voting schemes. He was also co-designer of the vVote system, based on Prêt à Voter that was used successfully in Victoria State in November 2015. Most recently he developed the voter-friendly E2E verifiable scheme Selene. With Feng Hao, he also developed the OpenVote boardroom voting scheme and the J-PAKE password based authenticated key establishment protocol.
Prior to taking up the Chair in Luxembourg, he held a Chair at the University of Newcastle. Before that he worked at the Government Communications HQ (GCHQ), the Defence Research Agency (DRA) in Malvern, the Stanford Research Institute (SRI) in Cambridge, UK, and the Software Engineering Institute, CMU, Pittsburgh.
He was awarded a PhD in mathematical physics from the University of London in 1982. Peter Ryan sits on or has sat on the program committees of numerous prestigious security conferences, notably: IEEE Security and Privacy, IEEE Computer Security Foundations Workshop/Symposium (CSF), the European Symposium on Research in Computer Security (ESORICS), and the Workshop on Issues in Security (WITS). He is General Chair of ESORICS 2019. He was (co-)chair of WITS'04, ESORICS'04, the Frontiers of Electronic Elections (FEE) 2005 workshop, the Workshop on Trustworthy Elections (WOTE) 2007, VoteId 2009 and ESORICS 2015. In 2016 he founded the Verifiable Voting Workshops, held in association with Financial Crypto. From 1999 to 2007 he was the President of the ESORICS Steering Committee. In 2013 he was awarded the ESORICS Outstanding Service Award.
He is a Visiting Professor at Surrey University and the ENS Paris.