Slashdot reader Lauren Weinstein writes:
Google has announced (with considerable fanfare) public access to their new "Perspective" comment filtering system API, which uses Google's machine learning/AI system to determine which comments on a site shouldn't be displayed due to perceived high spam/toxicity scores. It's a fascinating effort. And if you run a website that supports comments, I urge you not to put this Google service into production, at least for now.
The bottom line is that I view Google's spam detection systems as currently too prone to false positives -- thereby enabling a form of algorithm-driven "censorship" (for lack of a better word in this specific context) -- especially by "lazy" sites that might accept Google's comment-scoring determinations as gospel. As someone who deals with significant numbers of comments filtered by Google every day -- I have nearly 400K followers on Google Plus -- I can tell you with considerable confidence that the problem isn't "spam" comments being missed; it's completely legitimate, non-spam, non-toxic comments that are inappropriately marked as spam and hidden by Google.
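For context, a minimal sketch of how a site might wire up the Perspective API is below. The endpoint URL and JSON request/response shape follow Google's published documentation for the AnalyzeComment call; the `should_hide` helper and its 0.8 threshold are illustrative assumptions of the kind of naive "accept the score as gospel" policy the author is warning against, not anything Google prescribes.

```python
# Sketch of a Perspective API request and a naive threshold filter.
# The endpoint and payload shape come from Google's docs; the helper
# names and the threshold value are assumptions for illustration.

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze")  # called with ?key=YOUR_API_KEY

def build_request(comment_text):
    """Build the AnalyzeComment JSON body, requesting a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def should_hide(toxicity_score, threshold=0.8):
    """Hypothetical 'lazy site' policy: hide any comment whose summary
    score exceeds a fixed threshold -- exactly the setup where false
    positives silently suppress legitimate comments."""
    return toxicity_score > threshold

# The API's response carries the score (a float in [0, 1]) at:
#   response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(should_hide(0.95))  # True
print(should_hide(0.15))  # False
```

The point of the sketch is that everything interesting happens in `should_hide`: a production system would treat the score as one signal among several (and surface hidden comments for review), rather than auto-deleting on a single threshold.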
Lauren is also collecting noteworthy experiences for a white paper
about "the perceived overall state of Google (and its parent corporation Alphabet, Inc.)" to better understand how internet companies are now impacting our lives in unanticipated ways. He's inviting people to share their recent experiences with "specific Google services (including everything from Search to Gmail to YouTube and beyond), accounts, privacy, security, interactions, legal or copyright issues -- essentially anything positive, negative, or neutral that you are free to impart to me, that you believe might be of interest."