Why Are Apple’s Competitors Staying Silent On the iPhone Unlocking Fight? is the question of the day on tech blogs. The answer is hardly technical and may not even be legal; it’s all about privacy policies and business strategy, and nowhere is that more evident than in healthcare.
There are three classes of privacy policy in healthcare and everywhere else:
Class 1 – “Apple will not see your data.” This is Apple’s privacy policy for ResearchKit and HealthKit, and apparently for whatever data the FBI is hoping to read from the terrorist’s phone. Obviously, in this case the person is in complete control of the data, and it can be shared only with third parties that the person authorizes.
Class 2 – “We will see and potentially use your data, but you will have first-class access to your data.” This is the kind of privacy policy we see with Apple’s calendar and many Google services. The personal data is accessible to the service provider, but it is also completely accessible via an interface or API. In healthcare, the equivalent would be having the FHIR API equally and completely accessible to patients and to _any_ third parties authorized by the patient. This is Patient Privacy Rights’ recommendation as presented to the API Task Force.
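For concreteness, here is a minimal sketch of what first-class, patient-directed API access could look like. The endpoint URL, token, and patient ID below are placeholders, not any vendor’s actual service; the point is that the patient and any app the patient authorizes hit the same standard FHIR interface:

```python
import requests

# Placeholder endpoint and credentials. In a Class 2 system, the same FHIR
# interface and scopes are open to the patient and to any app they authorize.
FHIR_BASE = "https://fhir.example-hospital.org"
TOKEN = "patient-authorized-oauth-token"

def fetch_my_observations(patient_id: str) -> dict:
    """Fetch the patient's own records over the standard FHIR REST API."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
    )
    resp.raise_for_status()
    return resp.json()  # a FHIR Bundle of Observation resources

bundle = fetch_my_observations("example-patient-id")
print(bundle.get("total", 0), "observations accessible to the patient")
```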
Class 3 – “We will use your data according to xyz policy, and if you don’t like it, take your illness elsewhere.” This is pretty much how healthcare and much of the Web world run today. We have limited rights to our own data, while the services that hold our data can sell it and profit in dozens of ways, including by selling de-identified data. In Class 3, you, the subject of the data, are a third-class citizen at best. In many cases, the subject doesn’t even know that the data exists. See, for example, The Data Map.
We are so completely engulfed by Class 3 privacy policies that we have lost perspective on what could or should be. A Class 1 policy like Apple’s is widely seen as un-American. A Class 2 policy like PPR’s is indirectly attacked as “insurmountable”.
The reality is that technology moves much faster than other parts of our society. Whether it’s encryption to secure iPhones so that “Apple will not see your data” or CRISPR to control the Zika virus, we need to plan for tomorrow’s technology today. In healthcare, that means encouraging businesses and healthcare services that adopt Class 1 and Class 2 privacy policies.
HIE of One, an open source technology project by Michael Chen, MD, and me, is a current proof of concept of how Class 2 privacy policies could transform healthcare in just a couple of years. This THCB post and this 14-minute video demonstrate that a patient-centered health IT architecture is possible with today’s technology. Turning the HIE of One proof of concept into reality is taking place in our HEART workgroup and will be the subject of many conversations with health industry vendors and regulators at HIMSS next week.
What follows is a guess:
The usual technique to keep something secret is to force the hacker to factor a huge semiprime: a number that is the product of two primes. People can do this in reasonable time (the best known algorithms are sub-exponential, not polynomial) only up to about 240 decimal digits. Remember that with PGP (Pretty Good Privacy) we were using 60-digit numbers?
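For intuition only, here is a small Python sketch of why factoring is the hurdle: trial division cracks a 10-digit semiprime instantly, but the work grows by roughly a factor of √10 per added digit, so a 240-digit modulus is hopelessly out of reach for this naive approach (real attacks use sub-exponential sieves, which still fall well short of the moduli in common use):

```python
import time

def trial_factor(n: int) -> tuple[int, int]:
    """Factor an odd semiprime n = p * q by trial division: O(sqrt(n)) steps."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return n, 1  # no factor found; n is prime

start = time.time()
p, q = trial_factor(99991 * 104729)  # a 10-digit semiprime: instant
print(p, q, f"({time.time() - start:.3f}s)")
# Each extra digit multiplies the work by ~sqrt(10); at 240 digits,
# trial division would outlast the universe.
```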
Our most sensitive secrets are kept using this method. The passcode just opens the machine and has nothing to do with this deeper primary hurdle. I think this could be bypassed by going directly to the hardware.
I believe we are able to do this factoring for somewhat larger numbers, but it is extremely important that ISIS et al. keep using iPhones, which are probably using a much smaller number. Nota bene: it is only the texting that is encrypted.
Hence what you are seeing is a thespian performance… an act so that our adversaries keep using the iPhone. Like a British spy movie.
@Andrei, I don’t see the difference between what you’re proposing and Class 1. Regardless of how they do it, “Apple will not see your data” is as clear as it gets.
Hello All,
What about privacy by design? Class 0.5
I have been thinking about this for some time now: force-encrypt the data using a unique key, whether it comes from Touch ID or an external token, like a banking login.
The phone manufacturer would then be prevented by design from accessing the data, with strict control over where the data is sent, confirmed by the user.
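A minimal sketch of that idea, assuming a user-held secret (standing in for what Touch ID or an external token would unlock) and using PBKDF2 and Fernet from Python’s `cryptography` package as stand-ins for hardware-backed key derivation:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_user_secret(secret: bytes, salt: bytes) -> bytes:
    """Derive the encryption key from a secret only the user holds."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(secret))

# Encrypt on the device before anything is sent; the manufacturer or
# service stores only ciphertext and the salt, so it cannot read the
# data by design.
salt = os.urandom(16)
key = key_from_user_secret(b"secret-unlocked-by-touch-id-or-token", salt)
ciphertext = Fernet(key).encrypt(b'{"blood_pressure": "120/80"}')

# Only someone who can re-derive the key (the user) can decrypt.
print(Fernet(key).decrypt(ciphertext))
```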
When discussing security, there is always a compromise between ease of use and level of security.
Do you think people will invest more time and effort (tokens, logins) for extra security when it comes to their health data?
Regards,
Andrei
Thanks very much for your take on that, Adrian.
KR,
Eddy
Protect a person from what? Seriously, in Class 1, there’s no institution that has access to the data other than those that the patient explicitly authorizes. If the patient authorizes a transfer to a HIPAA covered entity, then they are protected. If the patient authorizes a transfer to a research institution covered by the Common Rule (as in IRBs), then they are protected. If the patient authorizes an institution that is only covered by the FTC, then neither HIPAA nor the Common Rule applies, and what matters is the posted privacy policy of the recipient institution.
Note that the Class 1, 2, or 3 labels describe the privacy policy of the institution that holds the patient’s data, and that is separate from the privacy policy of the institution that wants to receive the data. A person can authorize a transaction between a Class 1 and a Class 3 institution and might do exactly that if they had to.
The question of protection is more nuanced in Class 2, where the holding institution has access to the data and is able to act on that access regardless of what the patient authorizes. In this case it matters a lot whether the holder is subject to 42 CFR Part 2 (substance abuse treatment records), HIPAA, the Common Rule, the FTC, or nothing at all (they’re in a different country). All four of these could be Class 2 institutions if they chose to support patient-directed access.
Hi Adrian,
Thanks for the great article. You have nicely presented the intricacies of privacy policy, especially as they pertain to healthcare. On a somewhat unrelated note, can’t HIPAA guidelines protect a person who uses apps such as Apple’s HealthKit?
@Margalit – You are describing Class 1. In Class 1, if the FBI (or Apple) wants to access my data, all they need to do is ask me. Whether that data is on my iPhone or on iCloud is a technical difference, not a privacy policy class. If Apple wants to, and can, develop encryption technology that secures data in iCloud as well as it seems to on the iPhone, then I think they will, and we will all be better off.
For your second point, see my answer to John below. “Government” is not a term that technology can be built around. What would be required is some capability that is exclusively “government”, and nobody has made a convincing case that such an exclusive capability exists.
We are in agreement that we need to reverse “the legal mass trafficking in user data all day every day”. That’s what the HIE of One proof of concept is now showing around the emerging FHIR interface standard. We need our regulators to be clear about how data-blocking rules apply to FHIR APIs, regardless of whether the FHIR API is under patient control or under HIPAA TPO.
We’re talking about two different things – technology and privacy policies. Let’s try to keep them separate. My principal reason for writing this post is to make progress on privacy, not technology – hence the title – but let me try to deal with the technology question that Apple and a lot of other security experts are posing.
There is no technology that is exclusive to government, whatever you mean by “government”. This should be obvious from the massive breach of the Office of Personnel Management, and from the fact that China, Russia, my local police force, and North Korea are all “government”. The FBI is pretending that Apple could develop a technology that is just hard enough to break that anyone short of the US NSA would not be able to defeat it. That seems like a dubious and very risky proposition.
So, I probably agree that this has to do with business models and that’s what the 3 Classes are designed to highlight:
– Class 1 – there’s zero business model around my data. Apple has to make its money some other way.
– Class 2 – Apple can make money on my data but it’s kept honest as to how much money by the fact that their access to my data is non-exclusive. If my data is valuable, I will be able to easily sell it to others and therefore undercut Apple at selling it behind my back.
– Class 3 – The hospital or service provider can sell my data and there’s nothing I can do about it because healthcare is a critical service like breathing and drinking.
Seems to me like there is a Class missing:
Class 1.5: Where Apple can “see” your data and you can see your data and you can do whatever you want with it, but Apple is barred by law from using it in any shape or form, under penalty of fines and prison time (preferably for its CEO).
I am not opposed to people being able to “donate” data usage rights to Apple or some other corporate vulture, or for “research”. I am, however, opposed to fine-print opt-out (or opt-in) BS.
As to the government’s right to obtain information from Apple with a valid and specific warrant pertaining to this one person and one device, Apple should comply. To Bobby’s point, Apple can certainly sell “unbreakable” encryption, but if you commit a crime with said product, you forfeit that protection. Alternatively, nobody should be able to sell “unbreakable” encryption if its only use is to harm others, just like you cannot sell purse-size nuclear weapons, for very much the same reason.
In my view, this entire quandary is NOT about privacy. It’s a distraction from the real problem, which is the perfectly legal mass trafficking in user data all day, every day.
“a very different thing to say we do not believe the government has the authority to access our technology under ANY circumstances EVER”
__
So, a law enforcement agency should be able, in effect, to summarily write legislation outlawing the private sector from selling “unbreakable” encryption technology? Via “hutesium et clamorem 2.0” Writs of Assistance (a principal cause of the American Revolution)?
Concise and accurate summary. Yes, it may over-simplify, but the point is well-made. The purported job of medical records is to provide pertinent information in the clinical setting. The problem is that this has been subverted into data collection and billing support, obfuscating clinical information behind a wall of gibberish. Most of my patients don’t demand access to their records, but they do want their caretakers to all have access to the information across the narrative of their care. I believe that the only valid way to do this is to change records from a physician/hospital-centered database into a patient-centered (and, to a great degree, patient-controlled) medical narrative. Different people may want different levels of privacy in this. Most, in my experience, probably care too little about privacy, and it’s our job to protect them in a way that is reasonable but that doesn’t take away the record’s effectiveness as a tool to inform care with accurate information. I think explanations like this do well to push the discussion in that direction.
Adrian
I like this breakdown. There’s a lot to talk about here.
But I think this typology misses something ..
It’s one thing to say, “OK, we’re not going to access our customers’ data or make doing that a part of our business model. That’s not part of what we do.” It is a very different thing to say we do not believe the government has the authority to access our technology under ANY circumstances EVER ..
I think this has more to do with business models than it does with my iPhone’s right to life, liberty, and the pursuit of happiness.
Show me I’m wrong ..