Authentication and Promises
Another day, another major data leak. This one was reported by the AP and published on Yahoo under the title, “New data spill shows risk of online health records.” What makes this one notable is that the leaked data included not only identity-theft-related information like names, addresses, and social security numbers, but also insurance form information and detailed doctors' notes. So we're not only talking about identity theft, we're talking about deep, personal, humiliating privacy invasion.
According to the AP Report:
The company receives the medical information from the state per usual business processes. The state had stopped including social security numbers in 2008 and the compromised SSNs were from records that were obtained prior to that.
At the time the article was written, the records were “put behind a password.”
At the risk of piling on, let me just point out that this doesn't necessarily mean that people are authenticated before being given access to the information. Nor does it mean that an audit trail of access is kept so the company can ensure that people have a business need to know the records they are accessing. So “put behind a password” does not inspire confidence, in my opinion.
What troubles me most, however, is the source of the records before they got to Southern California Medical-Legal Consultants. The company represents doctors and hospitals seeking payment from patients receiving workers' compensation. Again, from the AP article:
Really? Am I to infer that people who rely on workers' compensation to help with their medical bills have no expectation of privacy? Anyone can request details of their case?
I suspect that the state actually has a more rigorous policy than Ortiz' statement implies. But taken at face value, it would raise concerns among privacy advocates.
This gets back to an idea some of my colleagues coined many years ago, the sticky policy paradigm: the notion that obligations and promises made at the time data is collected must be transferred with the data wherever it goes. In this case, Ortiz' statement, taken by itself, would imply that when public records are released, there is no obligation to protect them. Again, that's probably not the case. There are probably laws on the books that cover companies' responsibilities in these cases. Certainly, California has led the way in terms of security breach notification laws.
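To make the sticky policy paradigm concrete, here is a minimal sketch of the idea in Python. Everything here is illustrative: the class and field names (`Policy`, `StickyRecord`, `transfer`) are my own, not part of any standard. The point is simply that the policy object is bound to the data and survives every hand-off.

```python
from dataclasses import dataclass

# Sketch of the "sticky policy" idea: the obligations attached when the
# data was collected travel with the data wherever it goes. All names
# here are hypothetical, chosen for illustration only.

@dataclass(frozen=True)
class Policy:
    no_redistribution: bool
    retain_ssn: bool
    audit_required: bool

@dataclass(frozen=True)
class StickyRecord:
    payload: dict
    policy: Policy

def transfer(record: StickyRecord) -> StickyRecord:
    """Hand a record to another party; the policy goes with it, unchanged."""
    return StickyRecord(payload=dict(record.payload), policy=record.policy)

record = StickyRecord(
    payload={"patient": "Jane Doe", "claim": "WC-1234"},
    policy=Policy(no_redistribution=True, retain_ssn=False, audit_required=True),
)
downstream = transfer(record)
assert downstream.policy == record.policy  # obligations survive the hand-off
```

In a real system the binding would need to be cryptographic rather than a convention in code, but the data structure captures the paradigm: the receiving party never gets the payload without also getting the promises.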
But this case highlights the importance of informing and enforcing data security responsibilities at the time the data is released. Yes, legally speaking, the data may be public record. But that doesn't mean you don't incur a responsibility to protect the data when you receive it.
How do you make this happen? The technology is not that complicated. You need to tie authentication for access to information to a stated agreement on the obligations incurred when the data is accessed. There are standards that do this in a machine-readable fashion; see XSPA, for example. But even if the issuance of a credential is simply tied to an identity management process that requires the subject to agree to a set of data handling practices before the credential is issued, you've got a technology hook point where you can say, “by authenticating with your credential when you accessed the data, you agreed to a set of data handling practices.” In other words, a login credential becomes not only an authentication, it becomes a promise.
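The hook point described above can be sketched in a few lines. This is not XSPA itself, just an illustration of the pattern under stated assumptions: the issuer only signs a credential after the subject accepts a named data-handling policy, so a later successful authentication is also evidence of that agreement. The signing key, claim names, and token format are all made up for the example.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer signing key -- in practice this would be managed
# by the identity provider, not hard-coded.
SECRET = b"issuer-signing-key"

def issue_credential(subject: str, agreed_policy: str) -> str:
    """Issue a signed token; issuance presupposes the subject accepted
    the named data-handling policy, so the policy is baked into the claims."""
    claims = {"sub": subject, "agreed": agreed_policy}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def authenticate(token: str) -> dict:
    """Verify the token. A valid login is simultaneously an authentication
    and proof of the promise made at issuance."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid credential")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_credential("alice", "medical-records-handling-v1")
claims = authenticate(token)
# claims["agreed"] records which data-handling practices were accepted
```

A real deployment would use an established token format rather than this homemade one, but the essential move is the same: the agreement travels inside the credential, so every authentication re-asserts the promise.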