16th IEEE Workshop on Offensive Technologies
May 26, 2022, co-located with IEEE S&P
The Workshop on Offensive Technologies (WOOT) aims to present a broad picture of offense and its contributions, bringing together researchers and practitioners across all areas of computer security. Offensive security has changed from a hobby to an industry. No longer an exercise for isolated enthusiasts, offensive security is today a large-scale operation managed by organized, capitalized actors. Meanwhile, the landscape has shifted: software used by millions is built by startups less than a year old, delivered on mobile phones and surveilled by national signals intelligence agencies. In the field's infancy, offensive security research was conducted separately by industry, independent hackers, or in academia. Collaboration between these groups was difficult. Since 2007, the Workshop on Offensive Technologies (WOOT) has been bringing those communities together.
Workshop Program
Room: Bayview B
|9:00 - 9:10||Hello. Martina Lindorfer and Colin O'Flynn|
|9:10 - 10:00||Keynote||Trusting Computers not to Foul Things Up Robert Van Spyk (Nvidia)|
|10:00 - 10:30||Break||Coffee Good.|
|10:30 - 10:50||Binary||Abusing Trust: Mobile Kernel Subversion via TrustZone Rootkits Daniel Marth; Clemens Hlauschek; Christian Schanes (Research Industrial Systems Engineering GmbH); Thomas Grechenig (Research group for Industrial Software, TU Wien)|
[Visitor PDF] [Public pre-print] [Artifact(s)]
|10:50 - 11:10||Binary||Exploring Widevine for Fun and Profit Gwendal Patat; Mohamed Sabt; Pierre-Alain Fouque (Univ Rennes, IRISA, CNRS)|
[Visitor PDF] [Public pre-print] [Artifact: wideXtractor] [Artifact: widevine_key_ladder]
|11:10 - 11:30||Binary||Hack the Heap: Heap Layout Manipulation made Easy Jordy Gennissen; Daniel O'Keeffe (Royal Holloway, University of London)|
[Visitor PDF] [Public pre-print] [Source Code] [Try the game on the web]
|11:30 - 11:50||Binary||AirTag of the Clones: Shenanigans with Liberated Item Finders Thomas Roth (Leveldown Security); Fabian Freyer (Independent); Matthias Hollick; Jiska Classen (TU Darmstadt, SEEMOO)|
[Visitor PDF] [Public pre-print] [Artifact: glitcher] [Artifact: hooks]
|11:50 - 13:10||Lunch|
|13:10 - 13:30||Side Channel||Clairvoyance: Exploiting Far-field EM Emanations of GPU to "See" Your DNN Models through Obstacles at a Distance Sisheng Liang (Clemson University); Zihao Zhan (University of Florida); Long Cheng (Clemson University); Fan Yao (University of Central Florida); Zhenkai Zhang (Clemson University)|
[Visitor PDF] [Public pre-print]
|13:30 - 13:50||Side Channel||DABANGG: A Case for Noise Resilient Flush Based Cache Attacks Anish Saxena (Georgia Institute of Technology and Indian Institute of Technology Kanpur); Biswabandan Panda (Indian Institute of Technology Bombay)|
[Visitor PDF] [Public pre-print] [Artifact]
|13:50 - 14:10||Side Channel||Spring: Spectre Returning in the Browser with Speculative Load Queuing and Deep Stacks Johannes Wikner (ETH Zurich); Herbert Bos; Cristiano Giuffrida (Vrije Universiteit Amsterdam); Kaveh Razavi (ETH Zurich)|
|14:30 - 15:00||Break||Coffee Good.|
|15:00 - 15:20||Browser & Protocols||Interactive History Sniffing with Dynamically-Generated QR Codes and CSS Difference Blending Keith O'Neal; Scott Yilek (University of St. Thomas)|
|15:20 - 15:40||Browser & Protocols||On the Security of Parsing Security-Relevant HTTP Headers in Modern Browsers Hendrik Siewert (Paderborn University); Martin Kretschmer (IT.NRW); Marcus Niemietz (Niederrhein University of Applied Sciences); Juraj Somorovsky (Paderborn University)|
[Visitor PDF] [Artifact(s)]
|15:40 - 16:00||Browser & Protocols||On the Insecurity of Vehicles Against Protocol-Level Bluetooth Threats Daniele Antonioli (EURECOM, FR); Mathias Payer (EPFL)|
[Visitor PDF] [Public pre-print]
|16:00||Goodbye. Colin O'Flynn and Martina Lindorfer|
Session chairs:
- Binary: Daniele Antonioli
- Side Channel: Yuval Yarom
- Browser & Protocols: Kevin Borgolte
Call for Papers
Computer security exposes the differences between the actual mechanisms of everyday trusted technologies and their models used by developers, architects, academic researchers, owners, operators, and end users. While being inherently focused on practice, security also poses questions such as "what kind of computations are and aren't trusted systems capable of?" which harken back to fundamentals of computability. State-of-the-art offense explores these questions pragmatically, gathering material for generalizations that lead to better models and more trustworthy systems.
WOOT provides a forum for high-quality, peer-reviewed work discussing tools and techniques for attacks. Submissions should reflect the state of the art in offensive computer security technology, exposing poorly understood mechanisms, presenting novel attacks, highlighting the limitations of published attacks and defenses, or surveying the state of offensive operations at scale. WOOT '22 accepts papers in both an academic security context and more applied work that informs the field about the state of security practice in offensive techniques. The goal for these submissions is to produce published works that will guide future work in the field. Submissions will be peer reviewed and shepherded as appropriate. Submission topics include, but are not limited to, attacks on and offensive research into:
- Hardware, including software-based exploitation of hardware vulnerabilities
- Virtualization and the cloud
- Network and distributed systems
- Operating systems
- Browser and general client-side security (runtimes, JITs, sandboxing)
- Application security
- Analysis of mitigations and automated ways to bypass them
- Automated software testing, such as fuzzing, for novel targets
- Internet of Things
- Machine Learning
- Cyber-physical systems
- Cryptographic systems (practical attacks on deployed systems)
- Malware design, implementation and analysis
- Offensive applications of formal methods (solvers, symbolic execution)
The presenters will be authors of accepted papers. There will also be a keynote speaker and a selection of invited speakers. WOOT '22 will feature a Best Paper Award and a Best Student Paper Award.
Note that WOOT '22 and the other IEEE S&P workshops are planned to be held in person; see the IEEE S&P website for details and updates.
WOOT '22 welcomes submissions without restrictions of origin. Submissions from academia, independent researchers, students, hackers, and industry are welcome. Are you planning to give a cool talk at Black Hat in August? Got something interesting planned for other non-academic venues later this year? This is exactly the type of work we'd like to see at WOOT '22. Please submit—it will also give you a chance to have your work reviewed and to receive suggestions and comments from some of the best researchers in the world. More formal academic offensive security papers are also very welcome.
Systemization of Knowledge
Continuing the tradition of past years, WOOT '22 will be accepting "Systematization of Knowledge" (SoK) papers. The goal of an SoK paper is to encourage work that evaluates, systematizes, and contextualizes existing knowledge. These papers can prove highly valuable to our community but would otherwise not be accepted as refereed papers because they lack novel research contributions. Suitable papers include survey papers that provide useful perspectives on major research areas, papers that support or challenge long-held beliefs with compelling evidence, and papers that provide an extensive and realistic evaluation of competing approaches to solving specific problems. Be sure to select "Systematization of Knowledge paper" in the submission system to distinguish it from other paper submissions.
Important Dates
- Paper submission deadline: Tuesday, February 8, 2022, 11:59 AoE (Anywhere on Earth) - EXTENDED (previously Thursday, January 27, 2022, then Thursday, February 3, 2022)
- Notification date: Tuesday, March 8, 2022 (previously Thursday, February 27, 2022)
- Camera-ready paper deadline: Thursday, March 17, 2022
- Workshop date: Thursday, May 26, 2022
Please submit your paper at https://woot22.secpriv.tuwien.ac.at/woot22/paper/new
What to Submit
Submissions must be in PDF format. Papers should be succinct but thorough in presenting the work. The contribution needs to be well motivated, clearly presented, and compared to the state of the art. Typical research papers are between 4 and 10 pages long (not counting bibliography and appendices); however, papers whose length is incommensurate with their contribution will be rejected.
Submissions should be formatted in two columns, using 10-point Times Roman type on 12-point leading, in a text block of 6.5" x 9". Please number the pages. Authors must use the IEEE templates; for LaTeX papers, this is IEEEtran.cls version 1.8b.
Note that paper format rules may be clarified. Stay tuned.
Submissions are double-blind: submissions should be anonymized and avoid obvious self-references (authors are allowed to release technical reports and to present their work elsewhere, such as at DEF CON or Black Hat). Submit papers using the submission form.
Authors of accepted papers will have to provide a paper for the proceedings following the above guidelines. A shepherd may be assigned to ensure the quality of the proceedings version of the paper.
If your paper should not be published prior to the event, please notify the chairs. Submissions accompanied by non-disclosure agreement forms will not be considered. Accepted submissions will be treated as confidential prior to publication on the WOOT '22 website; rejected submissions will be permanently treated as confidential.
Policies and Contact Information
Simultaneous submission of the same work to multiple competing academic venues, submission of previously published work without substantial novel contributions, or plagiarism constitutes dishonesty or fraud. Note: Work presented by the authors at industry conferences, such as Black Hat, is not considered to have been "previously published" for the purposes of WOOT '22. We strongly encourage the submission of such work to WOOT '22, particularly work that is well suited to a more formal and complete treatment in a published, peer-reviewed setting. In your submission, please do note any previous presentations of the work.
If the submission describes, or otherwise takes advantage of, newly identified vulnerabilities (e.g., software vulnerabilities in a given program or design weaknesses in a hardware system) the authors should disclose these vulnerabilities to the vendors/maintainers of affected software or hardware systems prior to the CFP deadline. When disclosure is necessary, authors should include a statement within their submission and/or final paper about steps taken to fulfill the goal of disclosure.
Submissions that describe experiments on human subjects, that analyze data derived from human subjects (even anonymized data), or that otherwise may put humans at risk should:
- Disclose whether the research received an approval or waiver from each of the authors’ institutional ethics review boards (e.g., an IRB).
- Discuss steps taken to ensure that participants and others who might have been affected by an experiment were treated ethically and with respect.
If a paper raises significant ethical or legal concerns, including in its handling of personally identifiable information (PII) or other kinds of sensitive data, it might be rejected based on these concerns.
WOOT '22 Artifact Evaluation
All deadlines are 23:59 AoE (Anywhere on Earth):
- March 1: Invitation to authors of accepted papers to submit artifacts
- March 24: Artifact submission deadline
- March 25-April 7: Authors must be reachable for questions in this period
- April 8: Notification
Authors are expected to submit the following:
- A PDF with an abstract for the artifact, which specifies the core idea, the focus of the artifact, and what the evaluation should check
- A PDF of the most recent version of the accepted paper
- Documentation about the artifact (how to reproduce the contributions of the paper)
- A link to the artifact, which must be available anonymously (artifact evaluation is single-blind)
A scientific paper consists of a constellation of artifacts that extend beyond the document itself: software, hardware, evaluation data and documentation, raw survey results, mechanized proofs, models, test suites, benchmarks, and so on. In some cases, the quality of these artifacts is as important as that of the document itself, yet many of our conferences offer no formal means to submit and evaluate anything but the paper itself. To address this shortcoming, WOOT will run an optional artifact evaluation process, inspired by similar efforts in software engineering and security conferences.
The AEC evaluates whether the artifact does or does not conform to the expectations set by the paper. We expect artifacts to be:
- consistent with the paper
- as complete as possible
- documented well
- easy to reuse, facilitating further research
We believe the dissemination of artifacts benefits our science and engineering as a whole, as well as the authors submitting them. Their availability improves replicability and reproducibility and enables authors to build on top of each other's work. It can also help more unambiguously resolve questions about cases not considered by the original authors. The authors receive recognition, leading to higher-impact papers, and also benefit themselves from making code reusable.
Artifact evaluation is a separate process from paper reviews, and authors will be asked to submit their artifacts only after their papers have been (conditionally) accepted for publication at WOOT.
After artifact submission, at least one member of the AEC will download and install the artifact (where relevant) and evaluate it. Since we anticipate small glitches with installation and use, reviewers may communicate with authors to help resolve glitches while preserving reviewer anonymity. The AEC will complete its evaluation and notify authors of the outcome.
For the camera-ready version, authors who have successfully passed the evaluation process will receive dedicated badges on their papers to indicate that they passed this additional evaluation. We also ask the authors to make their artifacts available so that others can replicate the results.
To avoid excluding some papers, the AEC will try to accept any artifact that authors wish to submit. These can be software, hardware, data sets, survey results, test suites, mechanized proofs, and so on. Given the experience in other communities, we decided not to accept paper proofs in the artifact evaluation process: the AEC lacks the time, and often the expertise, to carefully review them. Obviously, the better an artifact is packaged, the more likely it is that the AEC can actually work with it during the evaluation process.
While we encourage open research, submission of an artifact does not grant tacit permission to make its content public. All AEC members will be instructed that they may not publicize any part of your artifact during or after completing the evaluation, nor retain any part of it after evaluation. Thus, you are free to include, e.g., models, data files, or proprietary binaries in your artifact. Also, note that participating in the artifact evaluation does not require you to later publish your artifacts, although we strongly encourage you to do so.
We recognize that some artifacts may attempt to perform malicious operations by design. These cases should be boldly and explicitly flagged in detail in the readme so AEC members can take appropriate precautions before installing and running these artifacts. The evaluation of exploits and similar results might involve additional hurdles, for which we still need to gain experience in how best to handle them. Please contact us if you have concerns, for example when submitting bug-finding tools or other types of artifacts with special requirements.
Artifact evaluation committee
- Eduardo Blázquez González, UC3M Madrid
- Georg Merzdovnik, SBA Research
- Hendrik Siewert, Paderborn University
- Kevin Tavukciyan, IBM Zürich
- Marco Casagrande, EURECOM
- Michael Pucher, University of Vienna
- Nathan Rutherford, Royal Holloway, University of London
- Pedro Bernardo, TU Wien
- Sebastian Schrittwieser, University of Vienna
- Sven Hebrok, Paderborn University
- Travis Goodspeed, Radiant Machines
Program co-chairs
- Martina Lindorfer, TU Wien
- Colin O'Flynn, NewAE Technology and Dalhousie University
- Adrian Dabrowski, University of California, Irvine and CISPA Helmholtz Center for Information Security, DE
Program committee
The following list contains confirmed program committee members only; it will be expanded over the coming days:
- Adrian Dabrowski, University of California, Irvine, US and CISPA Helmholtz Center for Information Security, DE
- Alessandro Sorniotti, IBM Research Europe - Zurich, CH
- Andrea Fioraldi, EURECOM, FR
- Andrea Lanzi, University of Milan, IT
- Andrea Mambretti, IBM Research Europe - Zurich, CH
- Andrew Paverd, Microsoft, UK
- Antonio Bianchi, Purdue University, US
- Aravind Machiry, Purdue University, US
- Asuka Nakajima, NTT R&D, JP
- Ben Gras, Intel, NL
- Daniel Genkin, Georgia Tech, US
- Daniele Antonioli, EURECOM, FR
- Fabien Duchene, Apple Security, US
- Fabio Pagani, UC Santa Barbara, US
- Jiska Classen, TU Darmstadt, Secure Mobile Networking Lab, DE
- Marco Squarcina, TU Wien, AT
- Maria Markstedter, Azeria Labs, DE
- Maddie Stone, Google, US
- Michael Heinzl, Independent, AT
- Natalie Silvanovich, Google, US
- Sarah Zennou, Airbus, FR
- Sara Rampazzi, University of Florida, US
- Sebastian Schinzel, Münster University of Applied Sciences, DE
- Thomas Roth, Leveldown Security, DE
- Travis Goodspeed, US
- Vasileios Kemerlis, Brown University, US
- Victor van der Veen, Qualcomm, NL
- Yueqiang Cheng, NIO Security Research, US
Steering committee
- Aurélien Francillon, EURECOM
- Yuval Yarom, University of Adelaide and Data61
- Clémentine Maurice, CNRS
- Sarah Zennou, Airbus
- Collin Mulliner, Cruise
- Fangfei Liu, Intel
- Mathias Payer, EPFL