Google’s autocomplete debuted in 2004, and it has since delivered almost exactly what we want from just a few typed letters. When the feature first launched, many dismissed it as lazy or even invasive. Today, the public seems enamored with the technology.
Google has been focusing on autocomplete recently because the tool has been returning some troubling suggestions. Predictive search can be great for businesses and users alike, both of whom get what they want faster. However, Google’s update aims to curb the tool’s potential for misuse and abuse.
Unwanted Outcomes with Autocomplete
When a user begins typing a query into the search bar, Google attempts to predict what the user wants by offering a few suggestions. Search a celebrity’s name, for example, and you’re liable to see a suggestion implying the celebrity has died. These hoax suggestions persist because certain groups of users routinely run the query, which in turn spreads myths and fake news.
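To make that feedback loop concrete, here is a minimal sketch of a frequency-ranked prefix suggester. The class, method names, and sample queries are all hypothetical illustrations, not Google’s actual system; the point is only that suggestions ranked by raw query volume will surface whatever people search most, hoaxes included.

```python
from collections import Counter


class AutocompleteIndex:
    """Toy prefix-matching suggester: ranks past queries by frequency."""

    def __init__(self):
        self.query_counts = Counter()

    def record(self, query: str) -> None:
        # Every submitted query feeds back into the suggestion pool.
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix: str, limit: int = 5) -> list[str]:
        prefix = prefix.lower()
        matches = (q for q in self.query_counts if q.startswith(prefix))
        # The most frequently run queries surface first -- which is
        # exactly how a hoax query climbs the list once enough people
        # start running it.
        return sorted(matches, key=self.query_counts.get, reverse=True)[:limit]


index = AutocompleteIndex()
for q in ["celebrity x movies", "celebrity x dead", "celebrity x dead",
          "celebrity x dead", "celebrity x net worth"]:
    index.record(q)

print(index.suggest("celebrity x"))
# ['celebrity x dead', 'celebrity x movies', 'celebrity x net worth']
```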
Another issue is hate speech, or offensive suggestions being returned unintentionally. In the most widely publicized incident, users who typed “did the Holocaust” or “is the Holocaust” were offered completions drawn from Holocaust-denial sources. Today, the algorithm surfaces additional results that are more informative and provide better context for the query.
Enforcement
Google plans to drive these changes with user feedback, offering a tool to report offensive suggestions. With a database as massive as Google’s, user reports will be integral to sorting those entries. The question is what kind of backlash is waiting: will users report websites or keywords simply because they disagree with their politics, or because they represent a competing interest?
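As a rough illustration of why that question matters, consider a naive report-driven filter. Everything here, the threshold, the names, the logic, is a hypothetical sketch rather than Google’s implementation, but it shows how a simple report counter can be gamed.

```python
from collections import defaultdict

# Hypothetical cutoff; a real system would weigh far more signals.
REPORT_THRESHOLD = 50


class ReportFilter:
    """Toy moderation layer: suppress suggestions with too many reports."""

    def __init__(self):
        self.reports = defaultdict(int)

    def report(self, suggestion: str) -> None:
        self.reports[suggestion.lower()] += 1

    def filter(self, suggestions: list[str]) -> list[str]:
        # A raw counter like this is exactly what a coordinated
        # campaign could exploit: enough reports and a legitimate
        # suggestion disappears right alongside the offensive ones.
        return [s for s in suggestions
                if self.reports[s.lower()] < REPORT_THRESHOLD]
```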
Giving the community more tools to police itself is rarely a bad thing, but Google operates at enormous scale. How many sites will be excluded over misunderstandings, or over targeted campaigns aimed at taking a site down?