Disclosure vs Consent: What Software Can Learn From Medicine

Posted in The Gnovis Blog

Each time you install a piece of software or register an account on a new website, you are entering a legal agreement. You are presented with a EULA (End User License Agreement), Terms of Use, a Privacy Policy, or some other document and, if you are anything like me, you scroll down as fast as you can and check the ‘I Agree’ box without reading a word.

[It’s also important to note that, even if you don’t check such a box, you are often implicitly entering an agreement simply by browsing a website with posted terms.]

The good news is that a handful of paranoid people DO read these agreements, which is why a company will experience some backlash from time to time, as in this old story, where users of Apple’s Mac computers learned that the EULA on a beta release of Apple’s own software stated that installing it would void the warranty on their Mac hardware. Amazon has seen backlash over its policies on both privacy and pricing, and Google’s privacy policies are in the news every time it buys up another startup.

The trouble with this system of legal agreements is, quite obviously, that most people don’t read the agreements, and that they’re written in such esoteric legalese that most users can’t comprehend them anyway. Generally, the response to this problem (from software companies, media, and even consumers themselves) is that it is the user’s own fault: if they don’t take the time to read and understand the fine print, they bear responsibility for their own ignorance.

However, I want to look at this issue from a different perspective: in terms of informed consent. Informed consent is a familiar concept to anthropologists, documentary filmmakers, and many flavors of social scientists, but it is best known and most easily discussed in relation to medicine, particularly clinical trials.

The history of informed consent dates back six decades, to the Nuremberg Code which, obviously, arose from the Nuremberg Trials and the atrocious human rights violations that took place in World War II under the guise of medical experimentation. The Nuremberg Code states that the subject "should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision."

"Informed consent is more than simply getting a patient to sign a written consent form. It is a process of communication between a patient and physician that results in the patient’s authorization or agreement to undergo a specific medical intervention."
–from the American Medical Association (http://www.ama-assn.org/ama/pub/category/4608.html)

Informed consent has become a process, not a document, and doctors are now valued as much for their communication skills as for their scientific knowledge or surgical skill.

Whereas much legalese is written for the purposes of C.Y.A. (including the disclaimers that fly past us at the end of pharmaceutical ads during the Super Bowl), the informed consent process, when done properly, broadly addresses the moral responsibility of doctors to educate their patients and involve them in decision making. This is particularly important because doctors are in a position of power, and the doctor-patient relationship operates largely on the basis of trust.

I would contend that, to the average citizen, software is as mysterious and complicated as medicine, and that software companies are consequently in a similar position of power in a trust-based relationship… which means that they ought to consider their moral responsibility to educate their consumers, rather than simply protecting themselves from litigation with unreadable legal documents.

Let’s now take for granted my proposal that software companies should be more active in educating their users about the legal agreements they are entering into. How might this be accomplished?

  1. Standardization – At the core of this problem is the fact that EULAs and other agreements all look largely the same, but are most significant for their differences: it’s the one unexpected clause, drifting in a sea of familiarity, that is overlooked or misunderstood. If EULAs had a standard to start from, and an addendum clarifying just the differences, it’s far more likely that individuals would read and understand those differences.
  2. Simplification & Visualization – This one is fairly obvious. A YouTube clip explaining the terms of use is much more likely to sink in than a two-page document.
  3. Verification – I once met a doctor who didn’t consider consent to be "informed" until the patient could explain everything back to him. Basically, he was testing his patients… if they didn’t understand what he had told them, they weren’t ready to make a decision about treatment. This would be an incredibly easy element to incorporate into software agreements: instead of just having an "I Agree" box at the bottom, there could be a short quiz about the contents of the agreement (see the sketch after this list). Questions could also reappear later in the software lifecycle, as a "refresher."
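
To make the first and third recommendations concrete, here is a minimal sketch of what a quiz-gated consent flow might look like. It is written in Python purely for illustration; the Clause and QuizQuestion structures, the sample terms, and the quiz itself are all hypothetical, not any vendor’s actual API or an existing EULA standard. The deviates_from_standard flag gestures at the standardization idea: a common baseline with only the differences flagged for attention.

```python
# Hypothetical sketch: a consent flow where "I Agree" is gated on a
# short comprehension quiz. All names and sample terms are invented.

from dataclasses import dataclass


@dataclass
class Clause:
    """One clause of a license agreement."""
    title: str
    text: str
    deviates_from_standard: bool = False  # marks the "unexpected clause"


@dataclass
class QuizQuestion:
    prompt: str
    options: list[str]
    answer_index: int  # position of the correct option


def present_agreement(clauses: list[Clause]) -> None:
    """Show the agreement, flagging deviations from the standard baseline."""
    for clause in clauses:
        marker = " [DIFFERS FROM STANDARD]" if clause.deviates_from_standard else ""
        print(f"{clause.title}{marker}\n  {clause.text}\n")


def verify_consent(questions: list[QuizQuestion]) -> bool:
    """Record consent only if every question is answered correctly,
    mirroring the doctor who asked patients to explain things back."""
    for q in questions:
        print(q.prompt)
        for i, option in enumerate(q.options):
            print(f"  {i + 1}. {option}")
        try:
            choice = int(input("Your answer: ")) - 1
        except ValueError:
            choice = -1
        if choice != q.answer_index:
            print("That doesn't match the agreement. Please re-read it.\n")
            return False
    return True


if __name__ == "__main__":
    terms = [
        Clause("License", "You may install this software on one machine."),
        Clause("Warranty", "Installing this beta voids your hardware warranty.",
               deviates_from_standard=True),
    ]
    quiz = [
        QuizQuestion(
            "What happens to your hardware warranty if you install this beta?",
            ["Nothing changes", "It is voided", "It is extended"],
            answer_index=1,
        ),
    ]
    present_agreement(terms)
    if verify_consent(quiz):
        print("Consent recorded: the user demonstrated understanding.")
    else:
        print("Consent NOT recorded.")
```

The design choice worth noticing is that the agreement step is simply unreachable until comprehension is demonstrated, just as that doctor withheld treatment decisions until the patient could explain them back.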

My first and third recommendations, in particular, could easily be merged… a third party in charge of standards could also be involved in the verification stage, providing tools to aid in testing as well as certifying compliance with the standard.

It’s worth noting that these recommendations, while consumer-centric, wouldn’t benefit consumers alone. Standardized, well-tested, well-understood legal agreements would mean fewer and cheaper lawsuits, less volatility in consumer opinion, and stronger product differentiation… all of which should please corporations AND consumers.

It’s pretty well understood in our society that the moral issues being raised by technology (privacy, accountability, etc.) are of profound importance, but I think there is an ironic failure on the part of both industry and consumers, in this case, to rethink how responsibility for those moral issues ought to be allocated. Google’s informal corporate motto, "Don’t Be Evil," is not far removed from the "First, do no harm" precept of medicine. I think they’re onto something, but the industry has a long way to go.