Just days after inBloom shut down in April, forced by a campaign by New York teachers and parents over student privacy and security concerns, Google announced that its "free" education software, Google Apps for Education (GAFE), would stop scanning students' emails for advertising purposes.
The timing of Google's announcement was curious for another reason, too: the search giant made its new policy public the day before White House advisor John Podesta published a report on the use of "big data" by companies like Google. John Podesta's brother, Tony Podesta, is a Google lobbyist, and Google was one of many technology providers that gave the White House input on the report. Given those connections, Google likely knew the report was coming and may have timed its announcement to get ahead of it.
The public, especially parents, and White House officials are beginning to understand more about data mining and student privacy concerns, especially after learning more about what types of information inBloom sought to collect. That's good news. To its credit, inBloom was open and transparent about the information it sought to collect. Other providers, like Google, mined student data in secret, and it took lawsuits under wiretapping statutes to uncover their practices.
We should commend Google for its apparent change of heart on the commercialization of our kids. But lingering questions remain, especially those raised by privacy experts: what took Google so long, and what will happen to the data Google has been collecting for the past seven years? For a better understanding of these concerns, and since understanding privacy can be like dissecting a human body, I turned to the Privacy Surgeon, Simon Davies. His latest post is very enlightening.
Davies felt Google's announcement should have read:
"Google is committed to protecting the privacy of children and young people, so we will no longer scan, analyse, process or store any data relating to them for any purpose whatever. It's the right thing to do."
But that's not what Google said. Instead, the company announced that it would disable content scanning only in Gmail, and only "for advertising purposes." That's a far cry from a global commitment to protect privacy.
And Gmail is only one product in GAFE's core suite of services. Does Google continue to scan other areas of the core suite, like Google Docs or YouTube for Schools, for advertising purposes? And if not for advertising purposes, for exactly what purposes does Google continue to scan student Gmail?
Can Google point to a specific point in time at which it stopped violating student privacy?
Our children are supposed to be protected by federal laws, but it is still unclear whether GAFE complies with the requirements set forth in the Family Educational Rights and Privacy Act (FERPA), the Children's Online Privacy Protection Act (COPPA), or the Children's Internet Protection Act (CIPA).
Part of the problem is that schoolteachers and administrators are making parents' decisions about student technology and child privacy for them. Of course, teachers need the power to make some decisions on parents' behalf: if a child scrapes a knee on the playground, a teacher should be empowered to take the child to the nurse's office for antiseptic and a Band-Aid. But privacy and the commercialization of minors are far more complex issues than playground accidents.
The Federal Trade Commission has said "...the school's ability to consent on behalf of the parent is limited to the educational context - where an operator collects personal information from students for the use and benefit of the school, and for no other commercial purpose."
Put simply: any schoolteacher or administrator who decided, on parents' behalf, to use GAFE, which Google markets as "free," knowing that Google collects student data to inform its advertising strategy even when ads are turned off inside GAFE, knowingly violated the limitations COPPA places on that consent.
But as I have written previously, Google gives schoolteachers and administrators special certifications that let them monetize their own Google evangelism; the education technology market is full of incentives that work at cross-purposes.
The FTC also said that "...[companies like Google] must . . . provide parents, upon request, a description of the types of personal information collected; an opportunity to review the child's personal information and/or have the information deleted; and the opportunity to prevent further use or online collection of a child's personal information."
Google is famously opaque when parents -- or any consumers, really -- start asking questions about its data collection practices. But this interpretation of COPPA defines very specific obligations Google has to parents, particularly regarding the data it has collected since GAFE's launch in 2007. Will Google comply if parents start asking? That remains to be seen, and it's up to parents to start asking questions -- just as they did in New York with inBloom.
Another challenge is that current law defines "personal information" very narrowly. That gives technology providers like Google plenty of wiggle room to manipulate how their technology works, and to split hairs in their policies, doing what's best for shareholders regardless of the moral consequences.
Parents could spend years quibbling with lawmakers over definitions of "personal information." Or they could save everyone a lot of time, energy, and litigation costs down the road by simply advocating for new laws that outright ban the collection of any data relating to our children's use of technology. My kids are not for sale, and yours shouldn't be, either.
Thankfully, 80 bills are currently moving through 32 state legislatures, each designed to shape the conversation on child privacy online. A simple bill with simple language prohibiting the underhanded commercialization of minors would go a long way toward achieving Simon Davies's vision.