Trust and Internet Identity Meeting Europe
2013 - 2020: Workshops and Unconference

TIIME 2017 Session 22: Privacy by Design

(Laura Paglione)

Non-technical discussion about privacy by design.

Reason and background of the topic: at ORCID there have been talks about how to build a concept of trust, and how to build that value system into features, workflows, and work with the community.

Are there other organizations thinking along these lines? Who is involved? What do they talk about, and how are those decisions made?

What does it take to earn that trust mark? How is confidentiality maintained?

There is constant change and risk; the landscape is always in motion, and there is never a 100% guarantee of privacy.

Data also has great potential to be used for positive things, such as targeting services better at a community.

A risk-based approach is a good path, better than merely trying to clear the lowest bar set by regulations.

However, this sometimes results in a kind of checklist mentality: companies often lose sight of the big picture and of the people they are actually creating for.

Privacy is about trying not to harm people.

There is not much discussion about privacy and confidentiality, and little sensitivity about data, yet people do care. People place a lot of trust in their community and are willing to share their data within it, and do not understand why they should protect it. As soon as data moves outside the boundaries of that community, however, trust drops immediately. Example: data within a company versus data on a social network.

Making context public and visible is very important when talking about or informing people about privacy. Even though there are certain expectations about what will happen (data being moved around), things sometimes happen outside the context we expect them in (a third party does the data processing we expected our partners to do); if we had that context, we would be more aware of the privacy issues at hand.

Transparency and trust marks might create an extra layer of trust and a point of reference for privacy.

Transparency is especially important with all the outsourcing going on; the chain through which data is moved around is often not visible at all.

Are there bad examples, times when privacy by design would have avoided a bad situation? One example: a consent checkbox that is pre-checked by default, instead of a clear decision point. This was a case where the design might not have served the user's actual interests, since the user never made the decision in a concrete way.

Should there be a registry of best practices?

Should considerations of privacy only and always be framed as risks? What else could they be evaluated against? Values, for example.

Study in Denmark: the companies doing the best job of implementing privacy by default were the ones mostly focused on their values, which were discussed internally and kept at the forefront, incorporated into the entire company, so to speak.

"The most effective way to change someone's behavior is not to nag them", instead attempt changing the values they connect with certain decisions. Values are processes, which are formed in smaller steps through decision points. Positive reinforcement, providing searched for information, nudging people/companies in the right direction, can help shape those values.

Leading by example is a good way of demonstrating values.

Values can be acquired by spending time in an environment that holds those values.

Who within a company should be involved in a conversation about privacy where decisions are made?

Should discussions involve the community, the users, or just internal teams? How are all the different opinions from such a broad discussion sorted out and fitted into the values of the company?

Problem: when you ask users, the answer you get is often not what they actually mean or want.

Asking about retention periods for data is a good way of making companies think about their approach to privacy by design.