We've all come to rely on spell-checkers that correct misspellings as we type. Now, Google has filed a patent for a tool that seems like an evil-checker: a software system that could prevent people from writing out, in electronic correspondence and documents, phrases that run afoul of policies or laws.
Google’s proposed "Policy Violation Checker" would allow software to peek over people's shoulders while they type to alert individuals -- and potentially their employers -- when their written text contained "problematic phrases" that “present policy violations, have legal implications, or are otherwise troublesome to a company, business, or individual," according to the patent filing.
The tool recalls Google chairman Eric Schmidt’s controversial advice to people worried about their un-erasable digital trail online: "If you have something that you don't want anyone to know,” Schmidt advised in a 2010 interview, “maybe you shouldn't be doing it in the first place.” Google seems to have followed through on Schmidt’s thinking with software that stops people before they make ill-advised digital disclosures -- or will tattle on them if they do.
With Policy Violation Checker, Big Brother isn’t just watching you. He’s getting some control over what you write.
In the patent application, Google details a process that would allow its algorithms to automatically detect troublesome text by comparing the writing to a database of phrases previously identified as “problematic." The tool could not only inform users that they've written something that violates protocol, it could also tell them why they've run afoul of the rules, suggest alternate wording that would be less risky and, crucially, alert third parties to the violation.
“If a user creates a text document, presentation, or other document with a problematic phrase, the policy violation checker may notify a member of the legal department of the existence of the document,” Google explains in its patent filing.
The technology could be applied beyond email to include any electronic document, “such as a text document, spreadsheet, presentation, or electronic mail message,” according to the patent brief. And the software could be customized to run on “any type of processing device including, but not limited to, a computer, workstation, distributed computing system, embedded system, stand-alone electronic device, networked device, mobile device, set-top box, television, or other type of processor or computer system.”
Google suggests its software could come in handy for corporations seeking to avoid lawsuits, leaks or other incriminating disclosures. Presumably, it could have prevented a Goldman Sachs executive from making his now-notorious reference to a “shitty deal” in his emails to a colleague.
“It is in the best interest of companies to prevent violations of company policy or laws before they occur. As businesses glow [sic], the number of documents in a business rises exponentially, and the potential that a particular document may implicate a violation of law or company policy grows,” the patent filing explains. “Business employees often knowingly or unknowingly discuss actions that could potentially lead to violations of company policy, such as a confidentiality policy, or run afoul of the law."
For example, “a phrase in a document containing the words ‘project ABC is going to totally KILL company XYZ’ could potentially give rise to an unfair competition claim,” Google writes.
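As described in the filing, the core mechanism amounts to matching text against a curated database of flagged phrases, each paired with an explanation and a safer alternative, plus a hook to notify a third party. A minimal sketch of that flow, using the filing's own example phrase but with all function names and database entries invented for illustration, might look like:

```python
# Hypothetical sketch of the phrase-matching flow the patent describes.
# The database contents and all names here are illustrative, not from Google.
PROBLEM_PHRASES = {
    "totally kill": {
        "reason": "could give rise to an unfair competition claim",
        "suggestion": "outperform",
    },
}

def check_text(text, phrases=PROBLEM_PHRASES):
    """Return (phrase, reason, suggestion) for each problematic match."""
    hits = []
    lowered = text.lower()
    for phrase, info in phrases.items():
        if phrase in lowered:
            hits.append((phrase, info["reason"], info["suggestion"]))
    return hits

def notify_legal(document_name, hits):
    """Stand-in for the third-party alert the filing mentions."""
    return [f"{document_name}: flagged '{p}' ({reason})"
            for p, reason, _ in hits]

hits = check_text("project ABC is going to totally KILL company XYZ")
alerts = notify_legal("q3-memo", hits)
```

A production system would presumably use more robust matching (stemming, fuzzy matching, context analysis) rather than literal substrings, but the patent's described workflow -- match, explain, suggest, notify -- reduces to something of this shape.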
The patent application leaves future users free to determine for themselves what text would be considered problematic and to specify what to include in their database of phrases: “[T]he database may be initially populated, for example and without limitation, by a member of a company's legal department, other employees, or outside consultants.”
It also seems reasonable to venture that the database could initially be populated by an authoritarian regime's Internet censors. And with the ability to integrate the software on “any type of processing device,” from a smartphone to a television, oppressive governments could be empowered to see anything their citizens write -- in Word documents, in emails, in drafts of blog posts, in digital journals -- and to view it before someone hits “send.”
While Google suggests this technology could come in handy for companies, its broad definition of “problematic phrases” raises the question of how else it might be used and what correspondence could be monitored. Could Google flag pedophiles for the police? Could it thwart a politician’s extramarital affair, or alert a spouse to his wife’s indiscretions? Could it stop white supremacists or religious extremists from emailing with each other? And if the software could do those things, should it?
Slashdot, which first reported on the patent filing, posited that the technology would offer a way for wrongdoers to skirt the rules. "So, if you can't Do-No-Evil," Slashdot wrote, referring to Google's corporate promise, "at least you can Do-No-Discoverable-Evil!"
Of even greater concern may be the role of moral arbiter Google could assume. The company's founders promised it would do no evil. But should that really give Google the right to guarantee we do no evil on its watch?
UPDATE: May 7 -- Google spokesman Matt Kallman wrote in an email that even if Google's patent application is approved, the technology might not make its way to market.
"We file patent applications on a variety of ideas that our employees come up with," Kallman said. "Some of those ideas later mature into real products or services, some don't. Prospective product announcements should not necessarily be inferred from our patent applications."