(Rob Willton)
To explain the problem space:
When we install apps, we don't really think much about the security and privacy issues; we just think about what we need the app for.
We get the things we installed the app for, and we also get all the other things we consented to but do not really know about or understand.
How could we become more conscious of this, and how could we motivate people to think about these things before downloading and installing?
Could we better educate people in this regard?
There is also the difficulty of defining privacy in the first place.
The problem is, of course, as always, the commercial interest of app developers: "follow the money". Data is a good business model, and privacy runs counter to it.
The data is often worth more than the app itself. This of course creates tension between anyone with an interest in privacy and corporate interests.
A declaration by the app should be required, stating that its business model is selling data. Of course this does not happen outwardly, because then people might not buy it.
Interesting and useful information can also be gathered from this data, and data can be collected for purposes other than profit (think of governmental organizations collecting data).
The "creepiness" factor:
We do not really want to be watched. There is a lack of control there.
However, social pressure more or less forces us to sign up for certain things, such as Facebook, Twitter, etc. Because everyone is on them, we would be missing out if we were not; there would be a "you-shaped hole". So they are a type of basic necessity, which is not yet recognized in any court of law. "When is opting out not a viable option?" Think also of governments collecting data on trains, in public places, etc. You can hide your face behind a hoodie, but it is going to get noticed and might be seen as suspicious behavior.
To conclude: opting out of certain things, such as using phones, Facebook, etc., might even cause suspicion and be a statement in itself to law enforcement agencies.
Now getting back to the framework that was the original proposal:
When does the user trust the app?
If the vendor is "reliable" (such as a bank), or if the app is very popular and widely used (such as apps in the app store).
This trust in an app is often not really based on any actual quality of the app.
There could be a framework for auditing what an app is actually doing with your data, and whether it sticks to what it promised (does it also access your contacts if it only asked to access the camera?). iOS supposedly has a mechanism for granting an app only certain permissions, but that is what Apple states, and the question is whether you trust Apple. An operating system cannot protect you from every privacy threat either: for example, it cannot control whether the app sends the data on to a third party even though it stated it would not, or scans your photographs for certain information and passes that on.
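As a rough illustration of the auditing idea, here is a minimal sketch, Android-flavored since permissions are easiest to inspect there. It lists the permissions an installed app requests and flags anything beyond what the auditor expects; the PackageManager calls are real Android APIs, while the expected set is a hypothetical input.

```kotlin
import android.content.pm.PackageManager

// Minimal sketch: list the permissions an installed app requests and
// flag anything beyond what the auditor expects it to need.
// The PackageManager calls are real Android APIs; "expected" is a
// hypothetical input supplied by the auditor.
fun unexpectedPermissions(
    pm: PackageManager,
    packageName: String,
    expected: Set<String>
): List<String> {
    val info = pm.getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
    val requested = info.requestedPermissions?.toList() ?: emptyList()
    return requested.filter { it !in expected }
}
```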
Proposal:
An idea for a "meta-app":
It would hook in between the app and the OS. The meta-app should be able to check whether connections are being opened that should not be opened, and inform the user about them. DataBox, a project from Cambridge, has this purpose. A minimal sketch of the core check follows.
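The sketch below assumes the meta-app can somehow observe outgoing connections (for example via a local VpnService on Android) and knows which hosts each app has declared it will contact. Both data types are assumptions made for illustration, not an existing API.

```kotlin
// Hypothetical types, assumed for illustration: an observed outgoing
// connection and the hosts each app has declared it will contact.
// How connections are captured (e.g. a local VpnService) is out of scope.
data class Connection(val appPackage: String, val host: String)

// Flag connections to hosts the owning app never declared, so the
// meta-app can warn the user about them.
fun undeclaredConnections(
    observed: List<Connection>,
    declaredHosts: Map<String, Set<String>>
): List<Connection> =
    observed.filter { it.host !in declaredHosts[it.appPackage].orEmpty() }
```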
Lightweight meta-app: the app sits on the phone, can only read the inventory of other apps, and queries a third-party database about known privacy issues. If it finds something that seems off, it informs the user. A sketch of this variant follows.
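A minimal sketch of this lightweight variant: enumerating installed packages is a real PackageManager API, while the PrivacyDb lookup service is a hypothetical stand-in for the third-party database.

```kotlin
import android.content.pm.PackageManager

// Hypothetical third-party database of known privacy issues, keyed by
// package name; this interface is an assumption, not an existing service.
interface PrivacyDb {
    fun lookup(packageName: String): String?  // warning text, or null if clean
}

// Enumerate installed apps and collect warnings to show the user.
// (On modern Android, listing all packages requires the
// QUERY_ALL_PACKAGES permission.)
fun auditInstalledApps(pm: PackageManager, db: PrivacyDb): List<String> =
    pm.getInstalledPackages(0).mapNotNull { info ->
        db.lookup(info.packageName)?.let { warning -> "${info.packageName}: $warning" }
    }
```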
A more extended version could be an app with the same functionality that can also tweak your privacy settings.

Discussion of the proposal:
iOS has basic functionality like this, which could be extended to check for open connections. This is much easier on iOS because Apple regulates which apps are in the store; it might be harder on Android.
The meta-app and the OS have to be trustworthy.
Another problem is that just informing the user might not be enough: what does "HTTP traffic is leaving your phone" mean to a standard user?
There should be incentives for app developers that fit into their business models.
The proposed functionality may already exist for commercial reasons (Mobile Device Management solutions), but it is not yet transparent to the user. A standard for what counts as "bad behavior" most likely already exists, so this could be built on.