
Big Tech Needs to Use Hazardous Materials Warnings

Wired 10 Aug 2019 01:00

The technology sector has a hazardous materials problem, beyond the mountains of electronic waste it generates. More immediately, Big Tech fails to warn users when its products and services are hazardous. A clear, concise rating system for privacy and security risks is long overdue. Fortunately, tech can learn from another industry that knows how to alert consumers to the dangers of improperly stored and leaking toxic products: the chemical industry.

Nearly 60 years ago, the chemical industry and its regulators realized that simple communication of hazards is critical to safety. Material Safety Data Sheets, the chemical equivalent of technology user terms and conditions, have offered descriptions of those hazards since the early 1900s. But as the industry evolved, it became clear, sometimes tragically, that end users rarely read these lengthy technical volumes. A quick reference was required.

WIRED OPINION

Stephen Nowicki is IMS Manager at Kemper System America, Inc., and a member of the Erie County Hazmat response team.

Enter the fire diamond, the now ubiquitous, universally understood symbol of chemical safety. You’ve seen it on propane tanks, chemical containers, and laboratory doors: a cartoon rhombus divided into colored quadrants, each filled with a number from 0 to 4 indicating a substance’s toxicity (blue), flammability (red), and reactivity (yellow). Introduced in 1960 by the National Fire Protection Association, the diamond, officially called NFPA 704, is the US standard for communicating the most basic and essential safety information about hazardous materials. Even if users never read the safety data sheet, they are greeted by this bright, unavoidable summary of a material’s hazards every time they look at the container.

Whereas the chemical industry and its regulators have worked to ensure clearer warnings, the tech industry has worked to make it increasingly difficult for consumers to know what hazards its products pose (hello, FaceApp). As technology companies use and misuse the personal data they collect in increasingly sophisticated ways, user agreements have only become longer and more byzantine. Facebook, for example, has terms of service and related policies that stretch past 35,000 words, about as long as The Lion, the Witch and the Wardrobe, and as bewildering as Narnia. Buried within are clauses with significant privacy implications, such as granting Facebook a “non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content.”

License agreements, like toxicology studies, provide valuable information, but they’re of little use when users need to quickly know what they’re getting themselves into. When emergency personnel are considering using a chemical product, they immediately need to know: Will it explode? Will it poison me? Will it burn me? Right away, the fire diamond answers. When considering a new app or service, tech users have similar questions: How much of a security risk is this? What data is collected and stored? Do I have any control? To find those answers, a user often first has to jump into the fire.

Besides the self-interest of entrenched tech industry players, there is no excuse for requiring users to read dozens of pages of dense text to learn the dangers of a product when that information can be condensed into a few numbers and color-coded blocks. If users are to rapidly adopt new services and technologies, and to bear responsibility for understanding the content and implications of those technologies’ license agreements, then a transparent and standardized method of hazard communication is required.

Who should administer this? It could be a mandatory regulatory framework (from the FTC or the Consumer Product Safety Commission) or a voluntary independent rating system created by accreditation bodies or industry watchdogs like the Electronic Frontier Foundation.

What should it look like? There are myriad design options, but one would be to create a tech safety diamond. Instead of rating physical hazards, this warning system would summarize the key aspects of data collection, user control, data use, and data handling, letting users know at a glance whether a product is worth the risk.
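To make the proposal concrete, here is a minimal sketch of what the data behind such a rating could look like. Everything in it is an assumption for illustration: the four category names follow the article, and the 0–4 scale is borrowed from NFPA 704; no such standard exists today.

```python
from dataclasses import dataclass

# Hypothetical "tech safety diamond" rating. The 0-4 scale mirrors the
# NFPA 704 fire diamond (0 = minimal risk, 4 = severe risk); the four
# categories come from the article's proposal. All names are illustrative.
SCALE = range(5)

@dataclass(frozen=True)
class TechSafetyDiamond:
    data_collection: int  # how much personal data is gathered
    user_control: int     # how little control users retain (higher = less)
    data_use: int         # how aggressively data is exploited (ads, profiling)
    data_handling: int    # storage and sharing risk (third parties, retention)

    def __post_init__(self):
        # Reject ratings outside the 0-4 scale, as NFPA 704 would.
        for name, value in vars(self).items():
            if value not in SCALE:
                raise ValueError(f"{name} must be 0-4, got {value}")

    def summary(self) -> str:
        # The glanceable label a user would see, analogous to the diamond.
        return (f"collection={self.data_collection} control={self.user_control} "
                f"use={self.data_use} handling={self.data_handling}")

# Example: a rating for a hypothetical data-hungry social app.
rating = TechSafetyDiamond(data_collection=4, user_control=3,
                           data_use=4, data_handling=3)
print(rating.summary())  # collection=4 control=3 use=4 handling=3
```

The point of the sketch is the constraint it encodes: like the fire diamond, the rating is a handful of bounded numbers a consumer can absorb in seconds, not a 35,000-word document.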
