23 JULY 2021 – Virtual conference

1ˢᵗ Huawei Innovation Workshop on

Artificial Intelligence for Cyber-Security

A meeting to foster innovation and research exchange among experts at the cutting-edge of Artificial Intelligence and Cyber-Security

9:00 AM – 5:00 PM CEST

23ʳᵈ July 2021

Academic and Industry Attendees

4 Topic Sessions

AI/ML, Cyber-Security

Description

In the last few years, we have witnessed a renaissance in machine learning (ML) and artificial intelligence (AI) applied to cyber-security. Cyber-security is a very promising area for AI/ML applications, due to the availability of massive amounts of data, the shortage of trained cyber-security professionals, and the increasing complexity of attacks. However, significant challenges still hinder AI/ML adoption, such as the lack of decision interpretability, imbalanced datasets, the high cost of false positives/negatives, the risk of adversarial exploitation, fast attack evolution, privacy- or security-sensitive datasets that cannot be shared, and a general lack of high-quality benchmark datasets.

To discuss issues surrounding AI/ML and cyber-security, we are pleased to launch the inaugural edition of the Huawei Innovation Workshop on Artificial Intelligence for Cyber-Security. The workshop will take place (virtually) on 23ʳᵈ July 2021 (9:00 AM – 5:00 PM CEST), and will be jointly organized by the Huawei AI4Sec Research Team (Munich Research Center) and Huawei Datacom. This workshop will foster discussion and collaboration among academics and professionals interested in analyzing and understanding current issues around the use of AI/ML within cyber-security domains. By being part of this community, you will benefit from remarkable networking opportunities for sharing your innovative ideas in a friendly atmosphere and for establishing long-lasting contacts.

State of the Art

Understanding AI/ML within cyber-security

AI and Cyber-Security Community

Top academics and professionals

Share your Research

Share your research idea!

Agenda

  • SESSION 1

  • 9:00-9:05 Welcome

  • 9:05-9:10 Opening Remarks. Ma Ye (President, Security and Gateway Department, Huawei)

  • 9:10-9:50 Intriguing Properties of Adversarial ML Attacks in the Problem Space. Lorenzo Cavallaro (King’s College London)

  • 9:50-10:30 Can you trust your GNN? — Certifiable Robustness of Machine Learning Models for Graphs. Stephan Günnemann (TUM Informatik)

  • 10:30-10:40 Break

  • SESSION 2

  • 10:40-11:20 Adversarial Preprocessing: Image-Scaling Attacks in Machine Learning. Konrad Rieck and Erwin Quiring (TU Braunschweig)

  • 11:20-12:00 Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages. Lin Yun (SoC, National University of Singapore)

  • 12:00-12:40 Lunch Break

  • SESSION 3

  • 12:40-13:20 Large-Scale Modelling of TLS-based Servers in the Internet. Georg Carle (Technical University of Munich)

  • 13:20-14:00 AI-Based Cybersecurity for Autonomous Vehicles – Detecting Network Level Attacks on LiDAR Sensor Data. Girish Revadigar (Trustworthiness Technology Lab, Huawei Singapore Research Center)

  • 14:00-14:10 Break

  • SESSION 4

  • 14:10-14:50 Machine Learning (for) Security: Lessons Learned and Future Challenges. Battista Biggio (University of Cagliari)

  • 14:50-15:30 The Security of Machine Learning in 5G Network Infrastructures. Giovanni Apruzzese (University of Liechtenstein)

  • 15:30-15:40 Break

  • SESSION 5

  • 15:40-16:00 Fake identity detection in speech data. Nicolas Müller (Fraunhofer AISEC)

  • 16:00-16:20 Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware. Luca Demetrio (University of Cagliari)

  • 16:20-16:40 Detection of illicit cryptomining using network metadata. Michele Russo (Huawei MRC, AI4Sec)

  • 16:40-17:00 Concluding Remarks

Our Speakers

Dr Giovanni Apruzzese

Postdoctoral Researcher, University of Liechtenstein

Giovanni Apruzzese has been a Post-Doctoral Researcher within the Hilti Chair of Data and Application Security at the University of Liechtenstein since 2020. He received his PhD and his Master's degree in Computer Engineering (summa cum laude) in 2020 and 2016, respectively, from the University of Modena, Italy. In 2019 he spent 6 months as a Visiting Researcher at Dartmouth College (Hanover, NH, USA) under the supervision of Prof. VS Subrahmanian. His research interests cover all aspects of big data security analytics, with a focus on adversarial machine learning.

Prof. Battista Biggio

Assistant Professor, University of Cagliari

Battista Biggio (MSc 2006, PhD 2010) is an Assistant Professor at the University of Cagliari, Italy, and co-founder of Pluribus One (pluribus-one.it). His research interests include machine learning and cybersecurity. He has provided pioneering contributions in the area of ML security, demonstrating the first gradient-based evasion and poisoning attacks, and how to mitigate them, playing a leading role in the establishment and advancement of this research field. He has managed six research projects, and served as a PC member for the most prestigious conferences and journals in the area of ML and computer security (ICML, NeurIPS, ICLR, IEEE SP, USENIX Security). He chaired the IAPR TC on Statistical Pattern Recognition Techniques (2016-2020), co-organized S+SSPR, AISec and DLS, and served as Associate Editor for IEEE TNNLS, IEEE CIM and Pattern Recognition. He is a senior member of the IEEE, and a member of the IAPR, ACM, and ELLIS.

Dr Luca Demetrio

Postdoctoral Researcher, University of Cagliari

Luca Demetrio is a postdoctoral researcher at the University of Cagliari. He received the M.Sc. degree (Hons.) and the Ph.D. degree in Computer Science from the University of Genoa, Italy, in 2017 and 2021, respectively. His research interests cover the area of adversarial machine learning, with a strong focus on its application in the cyber-security domain. He is currently studying the weaknesses of threat detectors implemented with machine learning techniques, and how such vulnerabilities can be exploited.

Prof. Stephan Günnemann

Professor, TUM Informatik

Stephan Günnemann is a Professor at the Department of Informatics, Technical University of Munich and Director of the Munich Data Science Institute. His main research focuses on how to make machine learning techniques reliable, thus, enabling their safe and robust use. Prof. Günnemann is particularly interested in studying machine learning methods targeting complex data domains such as graphs/networks and temporal data. His works on subspace clustering on graphs as well as adversarial robustness of graph neural networks have received the best research paper awards at ECML-PKDD and KDD. Stephan acquired his doctoral degree at RWTH Aachen University, Germany. He was an associate of Carnegie Mellon University, USA, a visiting researcher at Simon Fraser University, Canada, and a research scientist at the Research & Technology Center of Siemens AG. Stephan has been a (senior) PC member/area chair at conferences including NeurIPS, ICML, KDD, ECML-PKDD, AAAI, WWW.

Prof. Georg Carle

Professor, Chair of Network Architectures and Services

Georg Carle is a professor in the Department of Informatics at the Technical University of Munich (TUM), Germany, heading the Chair of Network Architectures and Services. He studied electrical engineering at the University of Stuttgart. His studies abroad included a Master of Science in Digital Systems at Brunel University, London, and a stay at Ecole Nationale Superieure des Telecommunications, Paris (now Telecom ParisTech). He received his PhD in Computer Science from the University of Karlsruhe and worked as a postdoctoral scientist at Institut EURECOM, Sophia Antipolis, France. Subsequently, he worked at Fraunhofer FOKUS, Berlin, where he directed the competence center on Global Networking. In 2003, he joined the University of Tübingen as a full professor, founding the Chair of Computer Networks and Internet. In 2008, he accepted a call from the Technical University of Munich. His work addresses Internet technologies and the security of networked systems.

Prof. Lorenzo Cavallaro

Professor of Computer Science, Chair in Cybersecurity, King’s College London

Lorenzo grew up on pizza, spaghetti, and Phrack, first. Underground and academic research interests followed shortly thereafter. He is a Full Professor of Computer Science in the Department of Informatics at King’s College London, where he holds the Chair in Cybersecurity (Systems Security) and leads the Cybersecurity group’s Systems Security Research Lab (https://s2lab.kcl.ac.uk), working at the intersection of program analysis and machine learning for systems security. Lorenzo is Program Co-Chair of DIMVA 2021-22 and CyberSec & AI Connected 2021, and was Program Co-Chair of Deep Learning and Security 2021, ACM EuroSec 2019-20, and was General Co-Chair of ACM CCS 2019. He holds a PhD in Computer Science from the University of Milan, held positions at VU Amsterdam, UC Santa Barbara, and Stony Brook University, and was an Academic in the Information Security Group at Royal Holloway, University of London. He’s definitely never stopped wondering and having fun throughout.

Dr Yun Lin

Senior Research Fellow, National University of Singapore

Dr. LIN Yun is a Senior Research Fellow at the National University of Singapore. His research interests include program analysis and cybersecurity. He has published at top-tier international conferences and in journals such as USENIX Security, ICSE, FSE, ASE, ISSTA, and TSE, and is the leading (first) author of 10 of these works. He won an ACM SIGSOFT Distinguished Paper Award at ICSE'18. Moreover, he has served as a reviewer or PC member for several journals and conferences such as TSE, TOSEM, TOIT, FOCS, ICSE (NIER Track), ICSME (Tool Demo), SANER, COMPSAC, and ICPC. He served as publication chair of ICECCS'17 and local arrangement chair of Internetware'20.

Erwin Quiring

Researcher, TU Braunschweig 

Erwin Quiring is a researcher at the Institute of System Security at TU Braunschweig. His research interests include the secure application of machine learning (adversarial machine learning), malware detection, and multimedia security.

Dr Nicolas Müller

Research Associate, Fraunhofer AISEC

Nicolas Müller studied Mathematics and Computer Science at the University of Freiburg and now works as a research associate at Fraunhofer AISEC, where his team of researchers focuses on the security of machine learning (adversarial machine learning) and deep-fake / spoofing detection in audio and voice data.

Prof. Konrad Rieck

Professor, TU Braunschweig

Konrad Rieck is a Professor at TU Braunschweig, where he leads the Institute of System Security. Prior to this, he worked at the University of Göttingen, TU Berlin, and the Fraunhofer Institute FIRST. He is a recipient of the CAST/GI Dissertation Award, the Google Faculty Research Award, and the German Prize for IT-Security. His interests revolve around computer security and machine learning, including the detection of computer attacks, the analysis of malicious code, and the discovery of vulnerabilities.

Dr Girish Revadigar

Senior Researcher, Trustworthiness Technology Lab, Huawei Singapore Research Center

Dr. Girish Revadigar is a Senior Researcher at TT Lab Singapore. His research focuses on AI-based cybersecurity for autonomous vehicles. To date, 16 of Dr. Revadigar's novel solutions have been patented, with 2 of them named as Huawei high-value patents. Girish earned his PhD in Computer Science (Cybersecurity) from UNSW Sydney, Australia, and completed his Master of Technology and Bachelor of Engineering at VTU, India. After his PhD, he was a Research Associate at UNSW Sydney and a Postdoctoral Research Fellow at SUTD, Singapore. Prior to his PhD, he was a Senior Software Engineer in the field of automotive embedded and infotainment systems and short-range wireless networks. Girish has won many awards for his research. He was one of the top 200 most qualified young researchers selected from all over the world to attend the prestigious Heidelberg Laureate Forum (HLF) 2016. He serves as a TPC member and reviewer for many top-tier conferences and journals. Girish is a member of IEEE and ACM.

Michele Russo

Research Engineer, Huawei

Michele has a bachelor’s degree in electronic and telecommunications engineering from the University of Trento (Italy) and a master’s double degree in computer engineering and cyber security from Polytechnic University of Turin and EURECOM. In his master’s studies and thesis, he focused on applications of machine learning for cyber security. He joined Huawei AI4Sec team in 2018 and continued working on AI for cyber security, focusing primarily on network traffic analysis. In his free time Michele enjoys being outdoors and doing sports like climbing, hiking, and surfing.


Topics of Interest

The main topics of interest for the 1st Huawei Innovation Workshop on Artificial Intelligence for Cyber-Security are:

Topic 1: AI for threat detection

How can AI help SOC analysts to defend against novel and advanced threats?

Sub topics:

  • Encrypted traffic analysis with AI/ML
  • Detecting advanced attacks with AI/ML
  • Malware analysis with AI/ML
  • Edge AI for embedded security

Topic 2: Adversarial machine learning

How is machine learning affected by adversarial manipulation?

Sub topics:

  • AI-enabled attacks
  • Adversarial analysis of AI/ML algorithms
  • AI vs AI in cyber-security
  • Concept drift as a challenge in applied AI for security
  • Building robust AI-based cyber-security solutions

Topic 3: AI and cyber threat intelligence

How can we leverage AI to enrich cyber threat intelligence?

Sub topics:

  • Understanding cyber threat intelligence with AI
  • AI/ML techniques for cyber threat intelligence discovery
  • Human and AI cooperation within cyber-security
  • Exploiting knowledge graphs for cyber threat intelligence extraction and inference

Topic 4: Cloud-delivered AI security

How can we enable Cloud-delivered AI cyber-security to protect today’s enterprise infrastructures across endpoints, networks and the Cloud itself?

Sub topics:

  • Cloud-delivered AI-driven cyber-security
  • AI & SOC-as-a-service
  • Distributed/federated AI/ML for cyber-security
  • Privacy-preserving AI on the Cloud
  • AI for policy automation and security orchestration


Where?

Virtual Conference

Got any questions?

Email us at
IW2021@ai4sec.net

Program Chairs

  • Claas Grohnfeldt (Huawei AI4Sec Research Team)
  • Daniele Sgandurra (Huawei AI4Sec Research Team)
  • Nedim Šrndic (Huawei AI4Sec Research Team)
  • Jing Tan (Huawei AI4Sec Research Team)

Registration [Event Closed]

Please register by clicking the button below. After registering, you will receive information on how to attend the virtual event. Please note that without registering it will not be possible to attend the event.

Frequently Asked Questions

Q: How can I attend the event?

A: For this inaugural edition, the event will be hosted and streamed online: all attendees (including speakers) must register to be able to attend the event. Registered attendees will receive information on how to attend the virtual event. Please note that without registering it will not be possible to attend the event.

Q: Is there a registration fee?

A: No, this year the event will be free but it will require registration.

Q: Who is the intended audience of this workshop?

A: This workshop will be mainly attended by academics and professionals who are interested in analyzing and understanding current issues around the use of AI within cyber-security domains.

Q: How can I submit a presentation?

A: For this inaugural edition, the call-for-speakers is by invitation-only. We plan to open the call-for-speakers from the next edition onwards.

Q: Will the presentations be given in real time, or will pre-recorded content be streamed?

A: Presentations will be given in real time by the speakers.

Q: How can I follow the event and watch the presentations?

A: Only registered participants can follow the event and watch the presentations. Videos will not be made available to the public after the event.

Q: How can attendees engage with the speakers and other participants?

A: After each presentation, a live Q&A will take place so attendees may ask questions in real time to the presenters. We will also provide a live chat functionality to engage with participants in a more relaxed way.

Q: Will the videos of the presentations be made available after the event?

A: Videos of the event (e.g., presentations) will NOT be made available after the event; therefore, we recommend that everyone interested in this event register in order to listen to and engage with the speakers and attendees during the event.


Supported by:


Title: Detection of illicit cryptomining using network metadata

Abstract: Illicit cryptocurrency mining has become one of the prevalent methods for monetization of computer security incidents. In this attack, victims' computing resources are abused to mine cryptocurrency for the benefit of attackers. Mining crucially relies on communication between compromised systems and remote mining pools using the de facto standard protocol Stratum. Therefore, we focused on network-based detection of cryptomining malware and developed XMR-Ray, a machine learning detector using novel features based on reconstructing the Stratum protocol from raw NetFlow records. The detector is trained offline using only mining traffic and does not require privacy-sensitive normal network traffic, which facilitates its adoption and integration.
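For readers less familiar with this setting, the sketch below illustrates the general idea of training a detector on mining traffic only and then flagging unseen flows that resemble it. It is not the XMR-Ray implementation: the flow features, their values, and the one-class model are illustrative assumptions.

```python
# Illustrative sketch (not the actual XMR-Ray code): train a one-class model
# on flow-level features of known mining traffic only, then flag unseen flows
# that resemble it. Feature names are hypothetical stand-ins for features
# derived from reconstructing Stratum behaviour out of NetFlow records.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Each row: [mean packet size, mean inter-arrival time (s), bytes out/in ratio,
#            flow duration (s)] for one aggregated flow (hypothetical values).
mining_flows = np.array([
    [120.0, 2.1, 0.15, 3600.0],
    [118.0, 2.3, 0.14, 3500.0],
    [125.0, 1.9, 0.16, 3700.0],
])

scaler = StandardScaler().fit(mining_flows)
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
model.fit(scaler.transform(mining_flows))        # trained on mining traffic only

unseen = np.array([[119.0, 2.0, 0.15, 3550.0],   # mining-like flow
                   [900.0, 0.01, 5.0, 12.0]])    # ordinary web traffic
pred = model.predict(scaler.transform(unseen))   # +1 = resembles mining, -1 = does not
print(pred)
```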

Speaker: Michele Russo

Time: 16:20 - 16:40 on 23rd July 2021

Title: Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware

Abstract: Windows malware classifiers that rely on static analysis have been proven vulnerable to adversarial examples, but such attacks are query-inefficient since they iteratively apply random manipulations, and require checking that the malicious functionality is preserved after manipulation.

To overcome these limitations, we propose RAMEn, a general framework for creating adversarial examples via functionality-preserving manipulations, optimizing the intensity of such manipulations via gradient-based and gradient-free techniques.

It encodes many state-of-the-art attacks, including GAMMA, a black-box attack that optimizes the injection of benign content to facilitate evasion. We show how gradient-based and gradient-free attacks can bypass academic malware detectors and some commercial products.
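As a rough illustration of the black-box, functionality-preserving idea described above, the toy sketch below appends benign content to a sample and searches over how much of it to inject so that a placeholder malware score drops. It is not RAMEn or GAMMA: the scoring function, the payloads, and the random search are all stand-ins.

```python
# Toy sketch of the black-box idea behind GAMMA-style attacks: append benign
# payloads to a malware sample (functionality-preserving in the PE format when
# done via padding or new sections) and search over how much of each payload
# to inject so that a black-box score drops. `score()` is a stand-in for a
# real static classifier queried as a black box.
import random

def score(sample: bytes) -> float:
    """Placeholder malware score in [0, 1]; higher = more malicious."""
    # Toy heuristic: the score decreases as benign bytes are appended.
    return max(0.0, 1.0 - len(sample) / 200_000)

malware = bytes(50_000)                    # stand-in for the original PE file
benign_payloads = [bytes([65]) * 30_000,   # stand-ins for benign file sections
                   bytes([66]) * 30_000,
                   bytes([67]) * 30_000]

fractions = [0.0] * len(benign_payloads)   # how much of each payload to inject
best = score(malware)
for _ in range(200):                       # gradient-free (random) search
    cand = [min(1.0, max(0.0, f + random.uniform(-0.2, 0.2))) for f in fractions]
    adv = malware + b"".join(p[: int(len(p) * f)] for p, f in zip(benign_payloads, cand))
    s = score(adv)
    if s < best:                           # keep the manipulation if the score drops
        best, fractions = s, cand
print(f"final score: {best:.3f}, injected fractions: {fractions}")
```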

Speaker: Luca Demetrio

Time: 16:00 - 16:20 on 23rd July 2021

Title: Fake identity detection in speech data

Abstract: With the advances in artificial intelligence, the so-called 'deep-fake' technology is also gaining strength, making it possible to ‘digitally clone’ a target person.

Speaker: Nicolas Müller

Time: 15:40 - 16:00 on 23rd July 2021

Title: The Security of Machine Learning in 5G Network Infrastructures

Abstract: 5G network infrastructures must support billions of devices while guaranteeing optimal QoS. This cannot be achieved with hardcoded rules, and Machine Learning (ML) is expected to play a pivotal role.

However, deploying ML exposes these systems to "adversarial attacks", where an input sample is minimally altered to thwart the target ML system. Although existing works show extremely effective attacks, they usually make unrealistic assumptions: critical infrastructures are well protected, and this must be taken into consideration.

In this talk, we present how a realistic attacker can cause damage to the 5G infrastructure provider through adversarial examples, and show that even proficient ML models can be affected by "constrained" attackers that must operate within real 5G scenarios.
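The sketch below illustrates, with a toy classifier and purely hypothetical flow features, what a "constrained" attacker looks like in practice: only attacker-controllable features are perturbed, within valid ranges, using black-box queries. It is not the attack presented in the talk.

```python
# Illustrative sketch of a "constrained" adversarial perturbation: the attacker
# may only alter features it genuinely controls (e.g., the timing/size of its
# own traffic), within physically valid bounds, and queries the model as a
# black box. The classifier and the feature layout are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical flow features: [pkts/s, mean pkt size, duration, TTL]
X = rng.normal(loc=[100, 500, 30, 64], scale=[20, 100, 10, 2], size=(200, 4))
y = (X[:, 0] > 100).astype(int)                # toy "malicious" label
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

x = np.array([140.0, 520.0, 35.0, 64.0])       # toy sample flagged as malicious
controllable = [0, 1]                          # attacker controls rate and size only
bounds = {0: (1, 1000), 1: (64, 1500)}         # valid ranges for those features

best = x.copy()
best_p = clf.predict_proba([best])[0, 1]
for _ in range(300):                           # gradient-free random search
    cand = best.copy()
    i = rng.choice(controllable)
    lo, hi = bounds[i]
    cand[i] = np.clip(cand[i] + rng.normal(0, 10), lo, hi)
    p = clf.predict_proba([cand])[0, 1]
    if p < best_p:                             # keep the change if detection drops
        best, best_p = cand, p
print(f"malicious probability: {best_p:.2f}, perturbed sample: {best}")
```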

Speaker: Giovanni Apruzzese

Time: 14:50 - 15:30 on 23rd July 2021

Title: Machine Learning (for) Security: Lessons Learned and Future Challenges

Abstract: In this talk, I will briefly review some recent advancements in the area of machine learning security with a critical focus on the main factors which are hindering progress in this field. These include the lack of an underlying, systematic and scalable framework to properly evaluate machine-learning models under adversarial and out-of-distribution scenarios, along with suitable tools for easing their debugging. The latter may be helpful to unveil flaws in the evaluation process, as well as the presence of potential dataset biases and spurious features learned during training. I will finally report concrete examples of what our laboratory has been recently working on to enable a first step towards overcoming these limitations, in the context of Android and Windows malware detection.

Speaker: Battista Biggio

Time: 14:10 - 14:50 on 23rd July 2021

Title: AI-Based Cybersecurity for Autonomous Vehicles - Detecting Network Level Attacks on LiDAR Sensor Data

Abstract: The rapid advancement of the automotive industry has led to the emergence of autonomous vehicles (AVs) that are capable of self-driving in all types of road environments. An AV depends on numerous sensors to perceive the external world. An adversary could exploit this to manipulate, spoof, or inject false sensor data to mislead the AV's planning, which can cause serious safety issues for the vehicle and its occupants. In this talk, I present our recent AI-based cybersecurity solution for LiDAR sensors, which can effectively detect such network-level signal modification/injection attacks. A prototype of our solution has been implemented to demonstrate its feasibility in practical applications. The solution is software-based only, which makes it an ideal choice for any AV platform.

Speaker: Girish Revadigar

Time: 13:20 - 14:00 on 23rd July 2021

Title: Large-Scale Modelling of TLS-based Servers in the Internet

Abstract: Web servers in the Internet exhibit a wide range of characteristics that are influenced by the software implementation of their protocol stack as well as by their configuration. In this talk, we present results of large-scale investigations of Web servers, focusing on their TLS and HTTP header properties. We compare the properties of Web servers represented in Alexa top lists with those of servers that appear on block lists.
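As a minimal illustration of the kind of per-server TLS properties such measurements compare, the sketch below collects the negotiated protocol version, cipher suite, and certificate fields for a single host using Python's standard library. The hostname is only an example; the large-scale measurement toolchain behind the talk is not shown here.

```python
# Minimal sketch of collecting per-server TLS properties (protocol version,
# cipher suite, certificate issuer and validity), the kind of features
# compared across top-list and block-list servers.
import socket
import ssl

def tls_properties(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "tls_version": tls.version(),   # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],      # negotiated cipher suite
                "issuer": dict(x[0] for x in cert["issuer"]),
                "not_after": cert["notAfter"],
            }

print(tls_properties("www.example.com"))        # example hostname
```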

Speaker: Georg Carle

Time: 12:40 - 13:20 on 23rd July 2021

Title: Phishpedia: A Hybrid Deep Learning Based Approach to Visually Identify Phishing Webpages

Abstract: Reference-based phishing detection approaches usually identify phishing webpages, with an explanation, by visually comparing webpages with predefined legitimate references and reporting the phishing page along with its target brand. However, there are technical challenges in visual analysis that limit existing solutions from being effective and efficient. In this work, we design a hybrid deep learning system, Phishpedia, to address the prominent technical challenges of logo matching on screenshots. Our extensive experiments demonstrate that Phishpedia significantly outperforms baseline identification approaches in accurately and efficiently identifying phishing pages. Phishpedia, integrated with the CertStream service, discovered 1,704 new real phishing websites within 30 days, significantly more than other solutions.
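To make the "logo matching on screenshots" step more concrete, the sketch below shows the shape of a reference-based matching stage: embed a logo crop from the suspicious page and a brand's reference logo with a CNN, then compare them by cosine similarity. The untrained backbone, placeholder inputs, and threshold are assumptions, so this is a structural sketch, not Phishpedia itself.

```python
# Structural sketch of reference-based logo matching (the second stage of a
# Phishpedia-style pipeline). In the real system, both stages (logo detection
# on the screenshot and Siamese matching against brand references) use
# trained models; the backbone below is untrained and the threshold arbitrary.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=None)      # stand-in for a trained Siamese encoder
backbone.fc = torch.nn.Identity()             # expose the 512-d feature vector
backbone.eval()

def embed(image: torch.Tensor) -> torch.Tensor:
    """image: float tensor of shape (3, 224, 224), already normalized."""
    with torch.no_grad():
        return backbone(image.unsqueeze(0)).squeeze(0)

screenshot_logo = torch.rand(3, 224, 224)     # placeholder: logo crop from the page
reference_logo = torch.rand(3, 224, 224)      # placeholder: brand's reference logo

sim = F.cosine_similarity(embed(screenshot_logo), embed(reference_logo), dim=0).item()
if sim > 0.8:                                 # arbitrary threshold for the sketch
    print(f"page imitates the reference brand (similarity {sim:.2f})")
else:
    print(f"no brand match (similarity {sim:.2f})")
```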

Speaker: Yun Lin

Time: 11:20 - 12:00 on 23rd July 2021

Title: Adversarial Preprocessing: Image-Scaling Attacks in Machine Learning

Abstract: The success of machine learning has been overshadowed by different attacks that thwart its correct operation. While prior work has mainly focused on attacking learning algorithms, another weak spot in learning-based systems has been overlooked: data preprocessing. In this talk, I discuss a recent class of attacks against image scaling. These attacks are agnostic to learning algorithms and affect the preprocessing of all vision systems that use vulnerable scaling implementations, such as TensorFlow, OpenCV, and Pillow. Based on a root-cause analysis of the vulnerabilities, I present novel defenses that effectively block image-scaling attacks in practice and can be easily added to existing systems.
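The toy example below illustrates the underlying principle with a hand-rolled nearest-neighbour downscaler: because downscaling only samples a sparse grid of source pixels, overwriting exactly those pixels controls the downscaled image while leaving most of the full-resolution image untouched. Real image-scaling attacks target the specific sampling behaviour of the resizers in OpenCV, Pillow, or TensorFlow, which this sketch does not reproduce.

```python
# Toy illustration of why image-scaling attacks work: nearest-neighbour
# downscaling only samples a sparse grid of source pixels, so an attacker who
# overwrites exactly those pixels fully controls the downscaled image while
# the full-resolution image still looks (almost) unchanged.
import numpy as np

def downscale_nn(img: np.ndarray, factor: int) -> np.ndarray:
    """Toy nearest-neighbour downscaler: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

factor = 8
benign = np.full((256, 256), 255, dtype=np.uint8)   # what a human sees: a white image
target = np.zeros((32, 32), dtype=np.uint8)          # what the model sees: a black image

attack = benign.copy()
attack[::factor, ::factor] = target                  # overwrite only the sampled pixels

changed = np.mean(attack != benign)                  # fraction of pixels touched
assert np.array_equal(downscale_nn(attack, factor), target)
print(f"only {changed:.1%} of the pixels were modified, "
      f"yet the downscaled image is exactly the attacker's target")
```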

Speaker: Konrad Rieck

Time: 10:40 - 11:20 on 23rd July 2021

Title: Can you trust your GNN? -- Certifiable Robustness of Machine Learning Models for Graphs

Abstract: Graph neural networks (GNNs) have achieved impressive results in various graph learning tasks and have found their way into many applications such as fraud detection, knowledge graph reasoning, molecular property prediction, and cancer classification. Despite their proliferation, studies of their robustness properties are still very limited -- yet, in the domains where graph learning methods are used, the data is rarely perfect and adversaries are common. Specifically, in safety-critical environments and decision-making contexts involving humans, it is crucial to ensure the reliability of GNNs. In my talk, I will shed light on the robustness of state-of-the-art graph-based learning techniques and discuss principles that allow us to certify their robustness.
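Robustness certificates in this line of work are commonly stated along the following lines (a generic formulation, not necessarily the one used in the talk): a node's prediction is certified if even the worst-case admissible perturbation of the graph cannot flip it.

```latex
% Generic robustness-certificate condition for a node classifier f on a graph
% with adjacency matrix A and features X: the prediction y for node v is
% certifiably robust if the worst-case margin over all admissible perturbed
% graphs \tilde{A} (e.g., within a budget of edge insertions/deletions)
% remains positive.
\[
  \min_{\tilde{A} \in \mathcal{P}(A)}
  \Bigl[ f_{y}\bigl(\tilde{A}, X\bigr)_{v} \;-\; \max_{c \neq y} f_{c}\bigl(\tilde{A}, X\bigr)_{v} \Bigr] > 0
\]
```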

Speaker: Stephan Günnemann

Time: 9:50 - 10:30 on 23rd July 2021

Title: Intriguing Properties of Adversarial ML Attacks in the Problem Space

Abstract: Recent research on adversarial ML has investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the real-world implications of problem-space attacks remain underexplored. In this talk, I will present our novel reformulation of adversarial ML evasion attacks in the context of realizable attacks. This requires reasoning about additional constraints that feature-space attacks ignore, which sheds light on the relationship between feature-space and problem-space attacks. Then, building on our reformulation, I will present a novel problem-space attack for generating end-to-end evasive Android malware, which evades state-of-the-art defenses.

Speaker: Lorenzo Cavallaro

Time: 9:10 - 9:50 on 23rd July 2021
