Earlier this year, New Scientist published an article announcing Google's plans to launch a "Trust Algorithm". The Knowledge-Based Trust score would rate how accurate the information on a website is, in an attempt to weed out false information.
The proposed algorithm would become one of the signals in Google's ranking measurements, helping to determine how high a website features in search results. So how will it work, and is there anything for online business owners to worry about?
Google maintains a "Knowledge base", an accumulation of facts that are proven or most commonly agreed upon. The search engine giant intends to cross-reference the information stored in this fact bank against the information found on websites.
According to the magazine, Google intends to "rank websites based on facts, rather than links." It therefore appears that building inbound links will become a thing of the past. Whether the search engine will still count existing links has not been mentioned.
What we do know is that the Knowledge-Based Trust algorithm will work like this:
- Extract information from websites across the web
- Estimate its correctness (no explanation of what constitutes correctness, but presumably the most common "fact" wins)
- Identify incorrect information on web pages
- Assign each website a trustworthiness score
- Apply no penalty for a lack of facts
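The steps above can be sketched in code. This is a hypothetical illustration only, not Google's implementation: it assumes facts are (subject, predicate, object) triples, takes the most common object across all sites as the "correct" one, and scores each site by its agreement with that consensus. All site names and facts below are made up.

```python
from collections import Counter

def consensus_facts(site_facts):
    """For each (subject, predicate), treat the most common object as 'correct'."""
    votes = Counter()
    for facts in site_facts.values():
        for (subj, pred, obj) in facts:
            votes[(subj, pred, obj)] += 1
    consensus = {}
    for (subj, pred, obj), count in votes.items():
        key = (subj, pred)
        if key not in consensus or count > consensus[key][1]:
            consensus[key] = (obj, count)
    return {key: obj for key, (obj, _) in consensus.items()}

def trust_score(facts, consensus):
    """Fraction of a site's facts that match the consensus object."""
    if not facts:
        return None  # no facts means no penalty, per the list above
    correct = sum(1 for (s, p, o) in facts if consensus.get((s, p)) == o)
    return correct / len(facts)

# Illustrative data: two sites agree, a third carries an incorrect fact.
sites = {
    "site-a.example": [("Paris", "capital_of", "France"),
                       ("Everest", "height_m", "8848")],
    "site-b.example": [("Paris", "capital_of", "France"),
                       ("Everest", "height_m", "8848")],
    "site-c.example": [("Paris", "capital_of", "Germany")],
}

consensus = consensus_facts(sites)
for name, facts in sites.items():
    print(name, trust_score(facts, consensus))
```

The outlier site ends up with a low score because its only fact disagrees with the majority, which is roughly the intuition behind estimating correctness from agreement across the web.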
Google has not yet stated when it expects to incorporate the trust algorithm into its existing technology, because several issues still need resolving.
How will the trust algorithm determine “facts”?
The algorithm works with units referred to as Knowledge Triples: subject, predicate and object. The subject is a person, place or item; the predicate is something attributed to the subject; and the object is a numerical value or date.
The issue Google faces with Knowledge Triples is that not every "fact" has an object. According to the algorithm's researchers, programmers have to work out a way of identifying the main topic of a website so they know when to filter out the object factor.
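A Knowledge Triple as described above can be represented as a simple data structure. This is an illustrative sketch (the field values are made up, not drawn from Google's data): it shows a complete triple alongside one with a missing object, the case the researchers say an extractor must handle by inferring the page's main topic.

```python
from typing import NamedTuple, Optional

class Triple(NamedTuple):
    subject: str           # a person, place or item
    predicate: str         # something attributed to the subject
    object: Optional[str]  # a value or date; None when the "fact" has no object

# A complete triple: subject, predicate and an object (a date, in this case).
complete = Triple("Barack Obama", "date_of_birth", "1961-08-04")

# A triple with no object: a statement like "is a politician" leaves the
# object slot empty, so the extractor must first identify the page's main
# topic to know how to treat it.
partial = Triple("Barack Obama", "is_a_politician", None)

print(complete)
print(partial.object is None)
```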
Trivial facts will also be excluded as a measure of trustworthiness. However, Google has not said what it considers a trivial fact or how these will be dealt with.
There is also a concern about what information counts as factual. During testing, 15 of the sites acknowledged as authoritative – sites that would ordinarily be used to measure the accuracy of information on other sites – featured incorrect information: 15% of the "facts" on those sites were wrong!
Whilst there are a lot of incorrect facts on the web, Google appears to be opening a can of worms in trying to correct them. Some issues have been raised, but many more exist.
For example, how will the algorithm treat new evidence that questions information we currently regard as fact? Science is always making new discoveries. And will content curators be entitled to an opinion?
The Knowledge-Based Trust algorithm may help weed out some of the incorrect facts scattered across the internet, but it will not expose untruths that the public is expected to believe.
Furthermore, independent researchers who find evidence that questions mainstream views could be penalised for publishing their findings. So is the trust algorithm creating more limitations than it is cleaning up the web? What do you think?