7% of Twitter users are not human

The number of non-human accounts on Twitter has been steadily on the rise in recent years - but how did this trend start in the first place?

We originally wrote this post back in 2013 after Lutz Finger of Fisheye Analytics suggested that seven per cent of Twitter users were spam bots.

The figure was recently revealed to have potentially more than doubled over the intervening four years. A team of researchers from the University of Southern California and Indiana University released figures suggesting that as many as 15 per cent of Twitter accounts are not run by humans.

Researchers used more than 1,000 features in categories including friends, tweet content and sentiment, as well as time between tweets, in order to identify bots.

“Our estimates suggest that between 9 per cent and 15 per cent of active Twitter accounts are bots,” they said.

Since Twitter has 319 million monthly active users, the higher estimate of 15 per cent suggests that almost 48 million accounts could be non-human.
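To make that feature-based approach concrete, here is a minimal sketch of how such a classifier might be built. The account features (friend count, average time between tweets, hashtag ratio, sentiment variance), the toy training data and the random-forest model are all illustrative assumptions, not the researchers' actual pipeline.

```python
# Minimal sketch of feature-based bot classification (illustrative only,
# not the published study's method): each account is reduced to a handful
# of hypothetical features and a supervised classifier is fit on labelled examples.
from dataclasses import dataclass
from typing import List
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Account:
    friend_count: int
    mean_seconds_between_tweets: float
    hashtag_ratio: float        # hashtags per tweet
    sentiment_variance: float   # spread of per-tweet sentiment scores

def to_features(acc: Account) -> List[float]:
    return [acc.friend_count,
            acc.mean_seconds_between_tweets,
            acc.hashtag_ratio,
            acc.sentiment_variance]

# Labelled training data: 1 = bot, 0 = human (toy values for illustration).
train_accounts = [
    Account(12, 45.0, 3.2, 0.01),      # bursty, hashtag-heavy -> bot
    Account(340, 5400.0, 0.4, 0.35),   # irregular timing, varied content -> human
    Account(8, 60.0, 2.8, 0.02),
    Account(510, 7200.0, 0.6, 0.40),
]
labels = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([to_features(a) for a in train_accounts], labels)

# Score an unseen account.
unknown = Account(friend_count=20, mean_seconds_between_tweets=55.0,
                  hashtag_ratio=3.0, sentiment_variance=0.02)
print("bot" if clf.predict([to_features(unknown)])[0] == 1 else "human")
```

The real study used over 1,000 features rather than four, but the overall shape - extract behavioural signals per account, then train a supervised model on known bots and humans - is the same.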

Twitter naturally tried to play down fears that bots were being used for nefarious purposes, with a spokesman saying, “Many bot accounts are extremely beneficial, like those that automatically alert people of natural disasters… or from customer service points of view.”

Researchers acknowledged this in their report, but also highlighted a worrying trend: “There is a growing record of malicious applications of social bots. Some emulate human behavior to manufacture fake grassroots political support… [and] promote terrorist propaganda and recruitment.”

The origin of bots

Chatbots were one of the breakthrough trends of 2016. These are mainly used for customer service and have become hugely popular as consumers have increasingly turned to social media to communicate with brands.

But how did bots evolve and what is the business strategy?

The first generation of bots was simply ‘spammers gone social’. They were very cheap and easy to create, but not particularly effective. They posted incredible amounts of spam (sometimes in excess of 1,000 posts per minute) and had terrible conversion rates of around one in 12.5 million.

They were also very easy to identify: they tended either to post huge quantities in short bursts or to post with extreme consistency, all day, every day. They also overused hashtags and spam words, and they had very few friends (all of them other bots).
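Those tell-tale behaviours translate naturally into simple rules. The sketch below encodes them as a rough rule-of-thumb flagger; the thresholds and the two-out-of-four decision rule are illustrative guesses, not values from any published detector.

```python
# Rough rule-of-thumb flags for first-generation spam bots, based on the
# behaviours described above. Thresholds are illustrative assumptions.
def looks_like_early_spam_bot(posts_per_hour: float,
                              hashtags_per_post: float,
                              friend_count: int,
                              posts_every_hour_of_day: bool) -> bool:
    burst_posting = posts_per_hour > 1000        # huge short-burst output
    relentless = posts_every_hour_of_day         # no sleep pattern at all
    hashtag_spam = hashtags_per_post > 5         # hashtag overuse
    isolated = friend_count < 10                 # few friends (other bots)
    # Any two of these signals together is treated as a strong hint.
    return sum([burst_posting, relentless, hashtag_spam, isolated]) >= 2

print(looks_like_early_spam_bot(1500, 8, 3, True))    # True: classic spam bot
print(looks_like_early_spam_bot(2, 0.5, 250, False))  # False: normal account
```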

Despite this, simple bots are still dangerous. Most recently, Russian bots tweeted conspiracy theories at US President Donald Trump in an effort to get him to spread the stories through the media.

Bots can also be used in smear campaigns, with Newt Gingrich in the US, Nadine Morano in France and the Christian Democratic Party in Germany all involved in fake follower scandals. These followers were most likely created by their opponents.

The social networks soon learnt how to bring these bots down, but they have come back much stronger and more sophisticated – particularly on Twitter. Today’s bots have become social: they want to be our friends and earn our trust. The result is that Bots 2.0 have influence.

Bot influence

So can bots create mass movements? They need three things:

  1. Reach
  2. Ease of Action
  3. Intention

They certainly have reach, and as far as social media monitoring is concerned, bots can really skew the data.

It can appear that something is popular or unpopular, when in reality the content you’re looking at was created by bots. This is where the term astroturfing comes from – astroturf is fake grass, so astroturfing was coined to describe fake grass-roots movements.

On the second point, the internet is great for ease of action. In the past people had to take to the streets and organise demonstrations to show support for or opposition to a cause, whereas today it can be done with one click.

However, these two things on their own are not enough to create intention. Even someone with a very large following cannot necessarily influence behaviour – they are only an information source.

You also need to strike at the point where the person is prepared to be influenced, such as the point of sale. Two-thirds of consumers say they trust consumer opinions posted online, but how many of these are being churned out by bots?

The final element needed to create intention is that the message has to be heard from multiple sources, both online and offline. That means getting into the traditional news, and bots are quite capable of this: almost half of journalists say they could no longer work effectively without social media.


