The statistics are remarkable. It’s estimated that 7% of Twitter users are actually spam bots; 20% of social media users will accept unknown friend requests; and nearly one in three users can be deceived by chat bots, thinking they’re actually talking to a human. Furthermore, 51% of web traffic comes from bots or algorithms.
It is now a multi-billion dollar industry, but how has it evolved and what is the business strategy?
The first generation of bots were simply ‘spammers gone social’. They were very cheap and easy to create, but not particularly effective: they posted incredible amounts of spam (sometimes in excess of 1,000 posts per minute) yet had terrible conversion rates of 1:12.5 million. They were also very easy to identify, as they tended either to post huge quantities in short bursts or to post with unnatural consistency, all day every day. They also overused hashtags and spam words, and had very few friends – all of them other bots.
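Because those traits are so mechanical, first-generation bots lend themselves to simple rule-based detection. As a purely illustrative sketch – the thresholds and profile field names below are assumptions chosen to mirror the traits just described, not any real platform’s detection rules – a naive scorer might look like:

```python
# Illustrative first-generation-bot scorer. All thresholds and `profile`
# field names are hypothetical, chosen to mirror the traits described in
# the text, not any real platform's actual detection logic.

def bot_score(profile: dict) -> int:
    """Count how many classic spam-bot traits a profile exhibits (0-4)."""
    score = 0
    # Trait 1: huge posting bursts (the text cites >1,000 posts per minute)
    if profile["max_posts_per_minute"] > 100:
        score += 1
    # Trait 2: unnaturally consistent posting, all day every day
    if profile["active_hours_per_day"] >= 20:
        score += 1
    # Trait 3: hashtag overuse
    if profile["hashtags_per_post"] > 5:
        score += 1
    # Trait 4: tiny friend list made up almost entirely of other flagged bots
    if profile["friend_count"] < 10 and profile["friends_flagged_ratio"] > 0.9:
        score += 1
    return score

suspect = {
    "max_posts_per_minute": 1200,
    "active_hours_per_day": 24,
    "hashtags_per_post": 8,
    "friend_count": 4,
    "friends_flagged_ratio": 1.0,
}
print(bot_score(suspect))  # a profile matching all four traits scores 4
```

Of course, this is exactly the kind of crude heuristic that second-generation bots, discussed below, were built to evade.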
Despite this, simple bots are still dangerous. Recent elections showed how they can be used in smear campaigns, with Newt Gingrich in the US, Nadine Morano in France and the Christian Democratic Party in Germany all involved in fake follower scandals. These followers were most likely created by their opponents.
Another way these bots can cause harm is by suppressing information. Lutz pointed to the Arab Spring as an example. If someone were to use #Syria to organise a demonstration, their opponents could send their bots into the hashtag to talk about how beautiful the country is, drowning out the original message.
The social networks soon learnt how to bring these bots down, but they have come back much stronger and more sophisticated. Today’s bots have become social: they want to be our friends and earn our trust. The result is that Bots 2.0 have influence.
So can bots create mass movements? They need three things:
- Reach
- Ease of Action
- Intention
They certainly have reach and as far as social media monitoring is concerned, bots can really skew the data. It can appear that something is very popular or unpopular, when in reality what you’re actually looking at are bots. This is where the term astroturfing comes from – astroturf is fake grass, so astroturfing was coined to describe fake grass-roots movements.
On the second point, the internet is great for ease of action. In the past people would have to take to the streets and organise demonstrations to show support or opposition to a cause, whereas today it can be done with one click.
However, these two things on their own are not enough to create intention. Even someone with a very large following cannot necessarily influence behaviour – they are only an information source. You also need to strike at the point where the person is prepared to be influenced, such as the point of sale. Over 20% of people have booked a hotel based on online reviews, yet 38% of those reviews are fake. It shows that we can be deceived when struck at the right moment.
The final element needed to create intention is that it has to be heard from multiple sources, both online and offline. That means getting in the traditional news, but bots are quite capable of this as 55% of journalists source information from social media.
So how should we handle bots? OKCupid is a dating website that does a great job of this. For obvious reasons a dating website is an ideal place for spammers, but deleting fake accounts only trains them and they quickly come back stronger. To tackle this, OKCupid actually created their own bots and put them in a ‘secondary world’. Instead of deleting other bots, they move them into this world where the bots start having conversations with each other – albeit rather strange ones.