BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//The Sustainable Computing Lab - ECPv6.0.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:The Sustainable Computing Lab
X-ORIGINAL-URL:https://www.sustainablecomputing.eu
X-WR-CALDESC:Events for The Sustainable Computing Lab
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20220721T140000
DTEND;TZID=Europe/Paris:20220721T153000
DTSTAMP:20260428T082316Z
CREATED:20220710T130412Z
LAST-MODIFIED:20220718T132503Z
UID:6240-1658412000-1658417400@www.sustainablecomputing.eu
SUMMARY:Pavel Laskov: Can We Trust AI?
DESCRIPTION:Title: Can We Trust AI? \nThursday\, 21.07.2022\, 14:00–15:30 CEST \nHybrid Event \nSpeaker: Prof. Dr. Pavel Laskov (Hilti Chair for Data and Application Security) \nShort bio: Pavel Laskov is Full Professor at the University of Liechtenstein and head of the Hilti Chair of Data and Application Security. He received his PhD in computer science from the University of Delaware in 2001 and held research and teaching positions at the Fraunhofer Institute FIRST\, the University of Tuebingen and the Huawei European Research Center. His research focuses on the development of techniques for the detection and mitigation of security incidents\, especially using custom-built AI techniques. As one of the pioneers of research on AI security\, Pavel Laskov co-designed the first proof-of-concept attacks against mainstream AI algorithms such as neural networks and Support Vector Machines. \nAbstract: “Data is the new oil”. This succinct metaphor fuels an intense scholarly debate about the genuine value of data in the modern economy and society. Tremendous recent progress in AI methods and applications has brought about new products\, services and capabilities that would have seemed like science fiction even a decade ago. As AI is increasingly deployed for security- and safety-critical applications\, the robustness of learning algorithms to unexpected data perturbations\, commonly known as “adversarial examples”\, becomes a crucial property. In this presentation\, I will present the general idea of data-driven attacks against AI and discuss various existing threat models. As an example of a future security-critical application\, I will elucidate the role of AI in 5G mobile network infrastructures and present a new threat model recently developed for this use case. \nOnline: Please contact events [at] sustainablecomputing.eu for a link. 
 \nOn campus: WU Wien\, D2.2.094 https://campus.wu.ac.at/?campus=1&q=D2.2.094 \nMore info (requesting a link to the online call): events [at] sustainablecomputing.eu \nSustainable Computing Lecture Series \nChairs: Soheil Human\, Vienna University of Economics and Business (WU Wien)\; Arianna Rossi\, SnT\, University of Luxembourg\; Cristiana Santos\, Utrecht University\; Martin Degeling\, Ruhr-University Bochum (RUB)
URL:https://www.sustainablecomputing.eu/event/pavel-laskov-can-we-trust-ai/
LOCATION:Online Event\, To receive the online event link\, please send an email to events[@]sustainablecomputing.eu
ATTACH;FMTTYPE=image/jpeg:https://www.sustainablecomputing.eu/wp-content/uploads/2022/07/SCLS_20220721_Pavel-Laskov_Poster.jpg
END:VEVENT
END:VCALENDAR