Posted March 16, 2017 3:51 PM by Hannes

I still remember the day a physics-major friend in college told me to “Look it up on Wikipedia” in 2005. It was the first I’d heard of the site, and after my first visit it quickly became a daily staple in my life. Wikipedia can be a panacea or kryptonite for the intellectually curious, depending on their need for productivity elsewhere. The English version currently has almost 5.5 million articles, and Wikipedia is available in nearly 300 languages. For a free, open-edit reference, studies have shown it to be pretty much as authoritative as other major encyclopedias.

One of Wikipedia’s biggest drawbacks is that its editors rely on the procrastination principle: waiting for a problem to arise before solving it. Some amount of time therefore passes between a problem and its solution, and during that time any number of users could read libelous, misspelled, or incorrect information. To shorten this window and keep up with small editing tasks on the millions of articles they oversee, Wikipedia contributors often create automated editor bots. Bots typically perform menial tasks such as keeping links current, adding links to current Wikipedia pages, and undoing obvious vandalism as quickly as possible.
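For a sense of how simple these editor bots can be, here is a minimal sketch of a link-updating bot built on Pywikibot, a framework many Wikipedia bot operators use. The page title, URLs, and edit summary below are illustrative placeholders I've made up, not details from the article.

```python
# Minimal sketch of a Wikipedia editor bot built on Pywikibot
# (https://www.mediawiki.org/wiki/Manual:Pywikibot).
# The page title and URLs are hypothetical examples.
import pywikibot

OLD_LINK = "http://example.com/old-page"   # hypothetical dead link
NEW_LINK = "https://example.com/new-page"  # hypothetical replacement

site = pywikibot.Site("en", "wikipedia")        # English Wikipedia
page = pywikibot.Page(site, "Example article")  # hypothetical title

text = page.text
if OLD_LINK in text:
    page.text = text.replace(OLD_LINK, NEW_LINK)
    # Every edit carries a summary so human editors can audit the bot.
    page.save(summary="Bot: updating outdated external link")
```

In practice a real bot would loop over thousands of pages and require an approved bot account, but the core edit cycle — fetch the wikitext, change it, save with a summary — is about this small.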
According to recent research, they also “fight” much more often than Wikipedia users witness. Researchers at Oxford and the Alan Turing Institute in London examined the editing histories of over a dozen Wikipedia language editions, noting when one bot undid another’s prior changes. Some of these back-and-forth interactions continued for years. The open-access study, published in late February, described how even relatively “dumb” bots, like those programmed to detect Wikipedia errors, can engage in surprisingly complex “social” behavior when employed in large numbers.
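The study inferred these reverts from article revision histories. One common way to detect a revert, sketched below, is to look for a revision whose content checksum (SHA-1) matches an earlier revision’s, meaning everything in between was undone. This sketch uses the public MediaWiki API; the article title is a placeholder, and flagging bots by usernames ending in “bot” is my simplifying assumption, not necessarily the study’s exact method.

```python
# Sketch: detecting bot-on-bot reverts in a page's revision history
# via the public MediaWiki API. A revert is flagged when a revision's
# SHA-1 matches an earlier revision's, i.e. the page content was
# restored. "Example article" is a placeholder title.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Example article",          # hypothetical article
    "rvprop": "ids|user|sha1|timestamp",
    "rvlimit": 500,
    "rvdir": "newer",                     # oldest revision first
    "format": "json",
}

data = requests.get(API, params=params).json()
page = next(iter(data["query"]["pages"].values()))
revisions = page.get("revisions", [])

seen_sha1 = {}  # sha1 -> index of first revision with that content
for i, rev in enumerate(revisions):
    sha1 = rev.get("sha1")  # may be absent if the revision is hidden
    if sha1 and sha1 in seen_sha1 and i - seen_sha1[sha1] > 1:
        # Content identical to an earlier revision: the edits in
        # between were undone by this revision's author.
        reverter = rev.get("user", "")
        reverted = revisions[i - 1].get("user", "")  # most recent undone edit
        # Crude heuristic: treat usernames ending in "bot" as bots.
        if reverter.lower().endswith("bot") and reverted.lower().endswith("bot"):
            print(f"{rev['timestamp']}: {reverter} reverted {reverted}")
    seen_sha1.setdefault(sha1, i)
```

Run over millions of articles and years of history, this kind of bookkeeping is what lets researchers count how often one bot quietly undoes another.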
According to the study, these interactions may help to guide AI design for applications as diverse as social media, cybersecurity, and autonomous vehicles. But the authors are quick to admit that Wikipedia might not be the ideal environment for studying bot-on-bot interaction. Given Wikipedia’s totally bottom-up editing structure, any of its human editors can create editor bots, and there is no formal coordination between editors, so their bots can easily contradict each other.
One of the pitfalls of AI research is the tendency to attribute human behaviors to automated systems. It’s easy to say that bots are “fighting” when in reality they’re following their own programming in contradictory ways. Even so, the research found that there may be a cultural component involved. English-language Wikipedia bots reverted other bots an average of 105 times over a ten-year period. German Wikipedia bots reverted significantly less often (an average of 24 times), while Portuguese bots reverted significantly more often (an average of 185 times). Because bots are human-created, it only makes sense that they’d take on the characteristics of their human programmers.
Bot-on-bot research is becoming more common, and a surprising number of studies have found that these interactions often result in conflict, whether between Wikipedia editor bots or “socializing” chatbots. It might not be as fun as watching real robot combat, but it’s amazing to think that behind Wikipedia is an unseen virtual battleground of editor bots, fighting to provide accurate information.
Image credit: Cyberspace Law Centre
Source: http://cr4.globalspec.com/blogentry/28156/Wikipedia-Bots-Have-Been-Silently-Battling-For-A-Decade