Managing security under the GDPR

June 18, 2018 in Interviews

An interview with Dr. Alexander Novotny:

The EU General Data Protection Regulation (GDPR) requires organizations to stringently secure personal data. Since penalties under the GDPR loom large, organizations feel uncertain about how to secure their personal data processing activities. The Privacy and Sustainable Computing Lab has interviewed the security and privacy expert Dr. Alexander Novotny on how organizations should address security when processing personal data:

Under the GDPR, organizations using personal data will have stringent obligations to secure the processing of personal data. How can organizations meet this challenge?

Organizations’ security obligations when processing personal data are regulated under Article 32 of the EU General Data Protection Regulation. Security is primarily the data controller’s responsibility. The data controller is the organization that determines the purposes and means of the processing of personal data. To ensure appropriate security, controllers and processors of personal data have to take technical and organizational measures, the so-called “TOMs”. Which security measures are appropriate depends on the state of the art and the costs of implementation in relation to the risk. Organizations are only required to implement state-of-the-art technology for securing data processing: in most cases they are neither required to implement the best available security technologies, nor to put in place security technologies that are immature or not yet available on the market. The nature, scope and context of data processing also need to be taken into account. For processing dozens of IP addresses in an educational context, for example, a different level of protection is adequate than for processing thousands of IP addresses in a healthcare context. To identify reasonable TOMs, the purposes of processing and the risks to the rights and freedoms of natural persons also need to be considered.

How can the level of risk for the rights and freedoms of natural persons be measured?

The GDPR outlines that the likelihood and the severity of the risk are important factors: the wording of Article 32 of the GDPR points to traditional risk-appraisal methods based on probability and impact. These methods are already in common use in IT security today. Many organizations therefore have classification schemes for likelihood and severity, often categorizing these two factors into the classes “low”, “medium” and “high”. Little historical experience with the likelihood and severity of security incidents is available, and without such experience it is very difficult to meaningfully apply quantitative risk scales, such as scales based on monetary values. ENISA also recommends a similar qualitative risk-assessment method in its 2017 handbook on the security of personal data processing. What data controllers especially need to keep in mind is the risk to the data subject in the first place, not the organization’s own risk. Organizations therefore have to take a different viewpoint, in particular organizations that have already done a risk assessment for an ISO 27001 information security management system. These organizations need to amend their risk assessment with the viewpoint of the data subject.
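To illustrate the qualitative method described here, the sketch below combines likelihood and severity classes into a risk class. The three levels and the combination rule are illustrative assumptions, not something prescribed by the GDPR or ENISA.

```python
# Minimal sketch of a qualitative likelihood/severity risk matrix.
# The three classes and the combination rule are illustrative
# assumptions; the GDPR does not prescribe any particular scale.

LEVELS = ["low", "medium", "high"]

def risk_level(likelihood: str, severity: str) -> str:
    """Combine qualitative likelihood and severity into a risk class."""
    score = LEVELS.index(likelihood) + LEVELS.index(severity)
    return LEVELS[min(score, len(LEVELS) - 1)]

# Assess the risk to the data subject, not only to the organization:
print(risk_level("low", "high"))    # -> "high"
print(risk_level("low", "medium"))  # -> "medium"
```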

What are these so-called TOMs?

Examples of technical and organizational measures are given in Article 32 of the GDPR. The regulation names pseudonymization and encryption of personal data, as well as the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services. Organizations need the ability to restore the availability of and access to personal data in the event of a physical or technical incident. Also required is a process for regularly testing and evaluating the effectiveness of technical and organizational measures. Recital 78 of the GDPR refers to additional measures, such as internal policies. What is remarkable here is that TOMs do not only aim to keep personal data confidential and correct: they also target the availability of and access to personal data, as well as the resilience of the IT systems used to process personal data. Availability and resilience of IT infrastructure are traditional IT security goals, but from the viewpoint of data protection they have not been given high priority so far. Hence, organizations have to further integrate their data protection efforts with IT security in order to tackle these requirements set out by the GDPR.
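Pseudonymization, the first measure named in Article 32, can be illustrated with a keyed hash that replaces a direct identifier with a stable pseudonym. This is a minimal sketch with deliberately simplified key handling; in practice, the secret key must be stored separately from the pseudonymized data.

```python
# Minimal sketch of pseudonymization via a keyed hash (HMAC).
# Key handling is deliberately simplified; in practice the secret
# key must be stored separately from the pseudonymized data.
import hashlib
import hmac

SECRET_KEY = b"store-me-separately"  # illustrative placeholder

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymize("192.0.2.17"))  # same input -> same pseudonym
```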

How can a controller be sure that the identified and implemented TOMs are actually appropriate?

This question is often asked by organizations complaining that the guidance provided by the GDPR is overly vague and that legal certainty is low. With regard to this question of appropriateness, a clash of cultures is often witnessed: on the one hand, technicians responsible for implementing the TOMs and, on the other hand, lawyers keeping an eye on GDPR compliance follow different approaches. Technicians are used to predetermined instructions and requirements. They take a very technological viewpoint and often wish that competent authorities would issue specific, hard-and-fast lists of TOMs. In contrast, lawyers are used to structurally applying legal criteria of appropriateness and adequacy to real-world cases. Instead of relying on predetermined lists of TOMs, organizations are now required to think in terms of what is best for the data subjects and for themselves when it comes to data security. Of course, predefined lists and templates of TOMs can be helpful for establishing the state of the art, but organizations are required to make up their own minds about which TOMs are appropriate for them in particular. This is reflected in Article 32 of the GDPR, which states that the nature, scope and context of data processing need to be taken into account to determine appropriate TOMs. To increase legal certainty, organizations are well advised to write down their particular approach to the selection of TOMs. If organizations comprehensively document their risk-based reasoning about which TOMs they implement to address the identified risks, they will likely be safe in front of the law.
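What such documented, risk-based reasoning could look like as a structured record is sketched below; the fields are illustrative assumptions, not a format prescribed by the regulation.

```python
# Minimal sketch of a documented, risk-based TOM selection record.
# The fields are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field

@dataclass
class TomRecord:
    processing_activity: str        # nature, scope and context
    identified_risk: str            # risk to the data subject
    risk_level: str                 # e.g. "low" / "medium" / "high"
    chosen_toms: list = field(default_factory=list)
    reasoning: str = ""             # why these TOMs are appropriate

record = TomRecord(
    processing_activity="IP address logging for a healthcare portal",
    identified_risk="re-identification of patients from access logs",
    risk_level="high",
    chosen_toms=["pseudonymization of IP addresses", "encrypted backups"],
    reasoning="State of the art and proportionate to the high risk.",
)
print(record.chosen_toms)
```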

What should we understand by regularly assessing and evaluating the effectiveness of TOMs?

Practically, this means that controllers need to operate a data protection management system (DMS). Within the scope of such a DMS, regular audits of the effectiveness of the implemented TOMs need to be conducted. Organizations can integrate the DMS into their existing information security management system. With such integration, they can leverage the continual improvement process that is already in place with established management systems, and the DMS then directly supports the required process of regularly testing and evaluating the effectiveness of TOMs.
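A recurring effectiveness check inside such a DMS could look like the following sketch; the check function and the review interval are illustrative assumptions.

```python
# Minimal sketch of a recurring TOM effectiveness check inside a DMS.
# The check and the review interval are illustrative assumptions.
from datetime import date, timedelta

def backups_restorable() -> bool:
    ...  # e.g. restore a sample backup into a test environment
    return True

CHECKS = {"restore test for backups": backups_restorable}
REVIEW_INTERVAL = timedelta(days=90)

def run_audit(last_audit: date) -> None:
    """Re-run all effectiveness checks once the interval has elapsed."""
    if date.today() - last_audit >= REVIEW_INTERVAL:
        for name, check in CHECKS.items():
            status = "effective" if check() else "needs improvement"
            print(f"{name}: {status}")

run_audit(last_audit=date(2018, 3, 1))
```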

About the interviewee:

Dr. Alexander Novotny is an information privacy and security specialist. He has been researching privacy and data protection since the EU Commission's first proposal of the GDPR in 2012. He works as an information security manager for a large international enterprise based in Austria. He holds a certification as a data protection officer, lectures on IoT security, and advises EU-funded research and innovation projects on digital security and privacy.

Consent Request

February 13, 2018 in Interviews

Olha, would you be so kind as to introduce yourself and your project?

My name is Olha Drozd. I am a project-related research associate at the Institute of Management Information Systems, working on the SPECIAL (Scalable Policy-aware Linked Data Architecture For Privacy, Transparency and Compliance) project, a Research and Innovation Action funded under the H2020-ICT-2016-1 Big Data PPP call (http://specialprivacy.eu/). At the moment, together with my colleagues, I am working on the development of the user interface (UI) for the consent request that will be integrated into the privacy dashboard.

Would you please explain the privacy dashboard?

With the help of the privacy dashboard, users would be able to access information about what data is/was processed about them, what is/was the purpose of the data processing, and which data processors are/were involved. Users would also be able to request correction and erasure of the data, review the consent they gave for the data processing, and withdraw that consent.
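The operations such a dashboard could expose are sketched below; the class and method names are illustrative assumptions, not the actual SPECIAL interface.

```python
# Minimal sketch of privacy dashboard operations; names and the sample
# data are illustrative assumptions, not the SPECIAL project's API.
class PrivacyDashboard:
    def processing_log(self, user_id: str) -> list:
        """What data is/was processed, for which purpose, by whom."""
        return [{"data": "heart rate", "purpose": "activity statistics",
                 "processor": "wearable vendor"}]

    def request_erasure(self, user_id: str, data_category: str) -> None:
        print(f"erasure of '{data_category}' requested for {user_id}")

    def withdraw_consent(self, user_id: str, purpose: str) -> None:
        print(f"consent for '{purpose}' withdrawn by {user_id}")

dashboard = PrivacyDashboard()
dashboard.withdraw_consent("alice", "all-day heart rate overview")
```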

We have two ideas of how this dashboard could be implemented:

  1. Every company could have their own privacy dashboard installed on their infrastructure.
  2. The privacy dashboard could be a trusted intermediary between a company and a user. In that case, different companies would be represented in a single dashboard.

As I mentioned in the beginning, I am concentrating on the development of different versions of the UI for the consent request that could be integrated into the dashboard. Our plan is to test multiple UIs in user studies to identify the UIs best suited to different contexts. At the moment we are planning to develop two UIs for the consent request.

Olha, would you please tell us more about the consent request?

Before a person starts using an online service, he/she should be informed about the following:

  • What data is processed by the service?
  • How is the data processed?
  • What is the purpose for the processing?
  • Is the data shared and with whom?
  • How is the data stored?

All this information is presented in a consent request, because the user not only has to be informed but also has to give his/her consent to the processing of his/her data. We are now aiming to create a dynamic consent request, so that users have flexibility and more control over giving consent, compared to the all-or-nothing approach used by companies today. For example, if a person wants to use a wearable health-tracking device (e.g. a FitBit watch) but does not want an overview of all-day heart-rate statistics, just activity heart rate, then he/she could allow collection/processing of the data only for the purpose of displaying activity heart rate. It should also be possible to show the user only the information relevant to the specific situation. To ensure that the user is not overburdened with consent requests, we are planning to group similar requests into categories and ask for consent once per category. Additionally, it should be possible to adjust or revoke the consent at any time.
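Such a dynamic, per-purpose consent could be represented roughly as in the sketch below; the purpose names are illustrative assumptions echoing the wearable example.

```python
# Minimal sketch of a dynamic, per-purpose consent record.
# Purpose names are illustrative, echoing the wearable example.
from datetime import datetime

consent = {
    "user": "alice",
    "given_at": datetime(2018, 2, 13, 10, 0),
    "purposes": {
        "activity heart rate display": True,   # allowed
        "all-day heart rate overview": False,  # declined
    },
}

def revoke(consent_record: dict, purpose: str) -> None:
    """Consent must remain adjustable or revocable at any time."""
    consent_record["purposes"][purpose] = False

revoke(consent, "activity heart rate display")
print(consent["purposes"])
```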

At the moment, the main issue for the development of the consent request is the amount of information that has to be presented to and digested by a user. The General Data Protection Regulation (GDPR) requires that users be presented with every detail. For example, it is not enough to name just the company or the department that processes the information – users should be able to drill down through the info. In the graph below you can see an overview of the data that should be shown to users in our small exemplifying use-case scenario, where a person uses a health-tracking wearable appliance [1]. You can see how much information users have to digest even in this small use case. Maybe for some people this detailed information could be interesting and useful, but if we consider the general public, it is known that people want to use the device or service immediately, not spend an hour reading and selecting which categories of data they allow to be processed for which purposes. In our user studies we want to test what will happen if we give users all this information.
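The drill-down idea can be pictured as a hierarchy from a company, through its departments, down to concrete data items; the hierarchy in the sketch below is an illustrative assumption loosely based on the wearable use case in [1].

```python
# Minimal sketch of drill-down data categories: company -> department
# -> data category -> concrete items. The hierarchy is illustrative.
data_overview = {
    "Wearable Co.": {
        "Analytics department": {
            "heart rate": ["activity heart rate", "all-day heart rate"],
        },
        "Support department": {
            "contact data": ["email address"],
        },
    },
}

def drill_down(tree, depth=0):
    """Print the hierarchy level by level, as a user would expand it."""
    for key, value in tree.items():
        print("  " * depth + str(key))
        if isinstance(value, dict):
            drill_down(value, depth + 1)
        else:
            for leaf in value:
                print("  " * (depth + 1) + leaf)

drill_down(data_overview)
```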

Olha, you have mentioned that you were planning to develop two UIs for the consent request. Would you explain the differences between the two?

One is more technical and innovative (in a graph form) and the other is more traditional (with tabs, like in a browser). We assume that the more traditional UI might work well with older adults and with people who are less flexible in adapting to change, new styles and new UIs, while the more innovative one could be more popular with young people.

[1] Bonatti P., Kirrane S., Polleres A., Wenning R. (2017) Transparent Personal Data Processing: The Road Ahead. In: Tonetta S., Schoitsch E., Bitsch F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2017. Lecture Notes in Computer Science, vol 10489. Springer, Cham

“Why RFID Chips are Like a Dog Collar” – Interview with Sushant Agarwal, Privacy and Sustainable Computing Lab

November 24, 2017 in Interviews

Sushant, would you please introduce yourself and tell us about your scientific work and background?

Sushant: My name is Sushant Agarwal. I did my Bachelor's and Master's in Aerospace Engineering at the Indian Institute of Technology Bombay in India. During this time, I did an internship at the University of Cambridge, where I worked on a project related to RFID. There I had to carry several RFID-enabled cards – key cards to unlock the university doors, the college main entrance and my dorm room, and also an ID card for the library. I used to wonder why they didn't just create one RFID chip that would work for everything. Later, I started my thesis, which dealt with machine learning. This was the time I started thinking about privacy and how centralisation is not always a good approach. After my studies, I got the opportunity here to work on a project that combined both privacy and RFID.

Would you tell us a little more about this project?

The EU project, called SERAMIS (Sensor-Enabled Real-World Awareness for Management Information Systems), dealt with the use of RFID in fashion retail. My work focused on the privacy aspects. If you look at clothes that you buy from big fashion retailers, there can be RFID chips along with the price tags; these are slowly replacing the security tags or the fancy colour bombs that were used before.

Would you also tell us about the tool you created at the Lab called “PriWUcy”?

This was part of the SERAMIS project as well. We had to develop a tool for Privacy Impact Assessments. While we were developing this tool, the landscape of data protection regulation changed with the arrival of the General Data Protection Regulation (GDPR). Because of this regulatory change, a lot of things in our Privacy Impact Assessment tool had to be adjusted. This was when we thought about a sustainable solution and came up with the idea of modelling the legislation in a machine-readable way, so that the tool can easily be updated when the interpretation of the GDPR changes.
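What modelling legislation in a machine-readable way could look like is sketched below; the structure and the derived questionnaire items are illustrative assumptions, not PriWUcy's actual model.

```python
# Minimal sketch of a legal requirement encoded as data, so a PIA tool
# can be regenerated when interpretation changes. Illustrative only.
article_32 = {
    "article": "GDPR Art. 32",
    "obligation": "security of processing",
    "factors": ["state of the art", "cost of implementation",
                "nature", "scope", "context", "purposes",
                "risk to data subjects"],
}

def questions_for(rule: dict) -> list:
    """Derive PIA questionnaire items from the encoded rule."""
    return [f"How does the processing address '{factor}'?"
            for factor in rule["factors"]]

for question in questions_for(article_32):
    print(question)
```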

Sushant, what is privacy for you?

For me personally, privacy is all about control. I want to have ultimate control over my data. At the very least, I should be allowed to say who gets my data and what kind of data they have access to. So it shouldn't be like logging in to Facebook in one of your tabs and then Facebook tracking you across all the other websites you browse. That is something I really hate. I try to use online services where I can have the maximum amount of control possible.

Would you give us an example for how you make use of your knowledge on privacy in your daily life?

Yes, for me the concept of smart homes is something very interesting. To try this out on a small scale, I started with some smart bulbs that I bought from China to experiment with. These bulbs work over Wi-Fi: a switch in my apartment communicated first with a server in China, and that server then controlled my light switch. One could say the server in China was a middleman in the process of switching on my lights. I didn't really like this design, so I looked at open-source alternatives like https://home-assistant.io/, where I have better control and can avoid the middleman.
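The difference between the two architectures can be sketched as below; both endpoints are hypothetical placeholders, and real bulbs as well as Home Assistant integrations differ per vendor.

```python
# Minimal sketch contrasting cloud-relayed and local bulb control.
# Both URLs are hypothetical placeholders, not a real vendor API.

def switch_via_cloud(state: str) -> str:
    # The command travels to the vendor's server and back: a middleman
    # that also learns when and how the lights are used.
    return f"POST https://vendor-cloud.example/bulb?state={state}"

def switch_locally(state: str) -> str:
    # Local-only control: the command never leaves the home network.
    return f"POST http://192.168.1.50/api/light?state={state}"

print(switch_via_cloud("on"))
print(switch_locally("on"))
```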