Make a Twitter Bot with Python and Tweepy


The Twitter API provides endpoints for the same functionality the app offers, such as posting, favoriting, or retweeting Tweets, as well as accessing analytics data.

There is also a Streaming API to collect Tweet data in real time.

Today I'm going to demo some of the things you can do with the standard Twitter API.


I'm using Tweepy, a Python library that wraps the Twitter API.


The first thing you will need to do to connect to the Twitter API is to create a Twitter account if you don't have one already.

Now let's create a virtualenv for the project.

mkvirtualenv twitterbot

Install Tweepy in the virtualenv.

pip install tweepy

Twitter API Keys

Log in to your Twitter account and go to the Twitter developer site to register an app.

This will give you the API keys you will need to connect to the API.

Create a new file and import tweepy.

Now go back to the Twitter developer page where you created the app and go into the Keys and tokens tab of the dashboard area, where you will find your API keys.

Copy and paste your API keys into the file.

import tweepy

api_key = 'FpUIjBqdasdlfkjsadfoijaosdflkjLVC3'
api_secret = '8XVMasdfkljlakjdsflkjaslkjffXL7xv6xQ'
access_token = '1604asdfkjasdflkjdalskjfasfsarCmPocBl'
access_token_secret = 'c2asldkjfldkajsdfljlkjaldkjflkjslaf8vLk3a7X2Y'

My variable names here correspond to the names of the keys in the developer dashboard.

Note: these are fake API keys. Don't share your API keys, and in general it's a good idea to keep them in some kind of environment variable outside of your program and not do anything like commit the keys to version control.
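For example, one way to keep the keys out of your source file is to read them from environment variables. A minimal sketch — the variable names here (TWITTER_API_KEY and so on) are my own choice, not anything Tweepy requires:

```python
import os

def load_twitter_keys():
    """Read the four Twitter credentials from environment variables.

    The environment variable names are arbitrary - use whatever names
    you export in your shell; the point is to keep the secrets out of
    the source file and out of version control.
    """
    return {
        "api_key": os.environ.get("TWITTER_API_KEY", ""),
        "api_secret": os.environ.get("TWITTER_API_SECRET", ""),
        "access_token": os.environ.get("TWITTER_ACCESS_TOKEN", ""),
        "access_token_secret": os.environ.get("TWITTER_ACCESS_TOKEN_SECRET", ""),
    }
```

You would export the variables in your shell profile (or use a .env file that is listed in .gitignore) and call this function instead of hard-coding the strings.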

Connect to the Twitter API with Tweepy

Now we have everything we need to connect to the API.

auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
  • wait_on_rate_limit=True - automatically wait for rate limits to replenish.
  • wait_on_rate_limit_notify=True - print a notification when Tweepy is waiting for rate limits to replenish.

I will talk about rate limits in a minute.

You can read about the other parameters of the API class in the docs.

Here api is the API connection, a Tweepy API instance that is a wrapper for the Twitter API, and we will use it for everything else in this post.

Tweepy Rate Limits

One thing to keep in mind is that Twitter rate limits connections to the API.

Luckily Tweepy makes it easy to monitor your API calls and avoid too many error messages.

  • There are different limits for different methods of the API, which correspond to different activities such as tweeting or favoriting tweets.
  • If you call api.rate_limit_status() it will return a JSON object of all of the methods and corresponding rate limits.
  • Rate limits have 15 minute intervals.
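The JSON is nested first by resource family and then by endpoint. As a sketch of how you would drill into it — the sample dict below is a trimmed, hand-written stand-in, not live output from rate_limit_status():

```python
# Trimmed, hand-written stand-in for the shape of api.rate_limit_status().
status = {
    "resources": {
        "favorites": {
            "/favorites/list": {"limit": 75, "remaining": 75, "reset": 1577026948}
        },
        "search": {
            "/search/tweets": {"limit": 180, "remaining": 178, "reset": 1577026948}
        },
    }
}

# Drill into the nested dict to see how many search calls are left
# in the current 15-minute window.
search_limits = status["resources"]["search"]["/search/tweets"]
print(search_limits["remaining"])  # 178
```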

For example, favoriting Tweets.

With Tweepy you can check the rate limit status for favoriting like this.

api.rate_limit_status()['resources']['favorites']

The rate limit for favoriting Tweets is 75 requests per 15-minute period.

Checking your rate limit status does not count against your rate limits.

Respect the Rate Limit

If you continuously try to access an API endpoint after you've reached the rate limit for it, your account will probably get blocked temporarily and eventually banned.

Tweepy has a RateLimitError exception that you can use, and you could do something like have your program sleep for 15 minutes when you hit the rate limit and then continue.
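A minimal sketch of that pattern. The helper below is generic, and the RateLimitError class here is a stand-in so the sketch is self-contained — with the real library you would catch tweepy.RateLimitError and pass a Tweepy API method as func:

```python
import time

class RateLimitError(Exception):
    """Stand-in for tweepy.RateLimitError so this sketch is self-contained."""

def call_with_backoff(func, *args, wait_seconds=15 * 60, retries=3,
                      sleep=time.sleep, **kwargs):
    """Call func, sleeping out the 15-minute window whenever it rate-limits.

    Re-raises the error if we still hit the limit after `retries` attempts.
    """
    for attempt in range(retries):
        try:
            return func(*args, **kwargs)
        except RateLimitError:
            if attempt == retries - 1:
                raise
            sleep(wait_seconds)
```

With the real library, something like call_with_backoff(api.create_favorite, tweet.id) would retry a favorite after the window resets instead of hammering the endpoint.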

How to Tweet with Tweepy

To write a Tweet, you call the update_status method of the API class.

api.update_status("hello, twitter! I'm writing this with Python")

You can also call update_with_media to post a Tweet with files or images.

See the docs for more.

These methods return a Tweepy Status object.

Tweepy Status objects

You can access pretty much any information about a Tweet that you can think of in a Tweepy Status object, from the text of the Tweet, to how many retweets it got, right down to the smallest detail like the border color of the user's profile sidebar.

Your information

By your I mean the authenticated user.

You can access data about the authenticated user like this.

me = api.me()

This returns a Tweepy User object.

Tweepy User objects

Tweepy User objects store all of the information about a Twitter user, such as their user name, how many followers they have, sometimes their location, and more.

Search for Tweets with a certain word or hashtag

First I'm going to use the API to grab the current trending topics and hashtags in New York, and then I will pick a random topic from the list and search for Tweets containing that topic.

The trends_place method retrieves the top 10 trending topics in a location.

First I need the WOEID for New York, which I can get from another API call to the trends_available method, which returns all locations that have trending topic data.

The WOEID is a Yahoo! Where On Earth ID, as mentioned in the docs.

available = api.trends_available()

The output is a list of JSON objects, one per location with trending topic data - there are many, so this is just a tiny part of it.

[{'name': 'Worldwide',
  'placeType': {'code': 19, 'name': 'Supername'},
  'url': '',
  'parentid': 0,
  'country': '',
  'woeid': 1,
  'countryCode': None},
 {'name': 'New York',
  'placeType': {'code': 7, 'name': 'Town'},
  'url': '',
  'parentid': 23424977,
  'country': 'United States',
  'woeid': 2459115,
  'countryCode': 'US'}]

The WOEID for New York is 2459115, which you can see from the output.
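Rather than eyeballing the output, you can pull the WOEID out of the locations list programmatically. A quick sketch, using a trimmed copy of the response above:

```python
# Trimmed copy of the trends_available() output shown above.
locations = [
    {"name": "Worldwide", "woeid": 1, "country": ""},
    {"name": "New York", "woeid": 2459115, "country": "United States"},
]

def woeid_for(locations, place_name):
    """Return the WOEID of the first location matching place_name, or None."""
    return next(
        (loc["woeid"] for loc in locations if loc["name"] == place_name), None
    )

print(woeid_for(locations, "New York"))  # 2459115
```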

To get the trending topics, call trends_place and pass in the WOEID for New York.

new_york_woeid = 2459115
new_york_trends = api.trends_place(new_york_woeid)

The output of new_york_trends is more JSON with the top 10 trending topics. Not all of them are included in the output I've pasted here.

[{'trends': [{'name': 'DaBaby',
    'url': '',
    'promoted_content': None,
    'query': 'DaBaby',
    'tweet_volume': 620409},
   {'name': 'Eddie Murphy',
    'url': '',
    'promoted_content': None,
    'query': '%22Eddie+Murphy%22',
    'tweet_volume': 65200},
   {'name': 'Merry Christmas',
    'url': '',
    'promoted_content': None,
    'query': '%22Merry+Christmas%22',
    'tweet_volume': 331278},
   {'name': '#MyPLMorning',
    'url': '',
    'promoted_content': None,
    'query': '%23MyPLMorning',
    'tweet_volume': None}],
  'as_of': '2019-12-22T14:53:02Z',
  'created_at': '2019-12-22T14:46:30Z',
  'locations': [{'name': 'New York', 'woeid': 2459115}]}]

We can loop through the trending topics and get data like the name of the topic, number of tweets using it, and whether or not it is a promoted topic.

I'm just going to create a list of the trending topic names and then pick a random topic from the list to search for.

topic_names = []
for trend in new_york_trends[0]['trends']:
    name = trend['name']
    topic_names.append(name)

I have the list topic_names with all of the topics, and now I will pick a random one.

import random
random_index = random.randint(0,len(topic_names)-1) 
random_topic = topic_names[random_index]

Here I just picked a random index of the list and retrieved the topic in that index.

Since Python lists are indexed starting with zero, I picked a random number between zero and the number of items in the list minus one.
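random.choice does that index arithmetic for you in a single call, if you prefer — a quick sketch with a stand-in list:

```python
import random

# Stand-in list of trending topic names for illustration.
topic_names = ["DaBaby", "Eddie Murphy", "Merry Christmas", "Happy Hanukkah"]

# random.choice picks a random index in range(len(topic_names)) internally.
random_topic = random.choice(topic_names)
```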

The random topic is 'Happy Hanukkah' - this makes sense because Hanukkah starts today.

On that note, Happy Hanukkah to everyone celebrating!

So now we will search for Tweets with the random_topic 'Happy Hanukkah' in them.

tweets = api.search(q=random_topic)

This returns a list of Tweepy SearchResult objects.

We can loop through them and get the text of the tweets, along with other information about the user who tweeted it.

for tweet in tweets:
    user = tweet.user.screen_name
    tweet_text = tweet.text

Favoriting Tweets

Using the tweets from our search earlier, we could go through and favorite each one of them by calling create_favorite and passing the SearchResult object id.

for tweet in tweets:
    api.create_favorite(tweet.id)

You could also retweet them, which works much like create_favorite: you call retweet instead and pass it the SearchResult object id.

for tweet in tweets:
    api.retweet(tweet.id)

Always keep in mind the rate limit for each API method you call.


If you wanted to follow each of the Twitter users behind the SearchResult objects in the list:

for tweet in tweets:
    api.create_friendship(tweet.user.id)

You just call the create_friendship method and pass in the id of the User object, which can be accessed from the SearchResult object as I did here with tweet.user.id.


To unfollow someone you would call the destroy_friendship method and pass it the id of the User object to unfollow.

We could go back and unfollow the users we just followed.

for tweet in tweets:
    api.destroy_friendship(tweet.user.id)


Tweepy has a Cursor class that is used for pagination.

from tweepy.cursor import Cursor

Earlier we got back 15 Tweets when we searched for the random trending topic.

That was only the first page of results.

If I wanted to favorite 50 Tweets with the "Happy Hanukkah" topic, I would need to paginate through more of the results.

for tweet in Cursor(api.search, q="Happy Hanukkah").items(50):
    api.create_favorite(tweet.id)

Tweepy has a tutorial for this as well.

As always, be mindful of the rate limit, especially when you start paginating and processing more results, which likely means more calls to various API endpoints.
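One simple way to stay under a limit while paginating is to space your calls evenly across the 15-minute window. A sketch of the arithmetic — the 75-per-15-minutes figure is the favoriting limit from earlier in the post:

```python
def seconds_between_calls(limit_per_window, window_minutes=15):
    """Spacing that keeps you at or under limit_per_window calls per window."""
    return window_minutes * 60 / limit_per_window

# 75 favorites per 15 minutes works out to one call every 12 seconds.
print(seconds_between_calls(75))  # 12.0
```

You would then time.sleep() that many seconds between each create_favorite call inside the Cursor loop.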

Thanks for reading!

This was just an overview of connecting to the Twitter API with Tweepy and performing some of the basic functions you can otherwise do in the app, like retweeting or favoriting Tweets and following users.

You can basically manage your entire Twitter account through the API if you want.

  • update your profile.
  • view your timeline.
  • get your tweets that have been retweeted.
  • view followers of other users.
  • search users.
  • access your DMs (direct messages).
  • block users.
  • report spam.
  • manage your lists.

Check out the Tweepy docs for the API class to see everything you can do.

You can also access the Twitter Streaming API with Tweepy, where you can get streaming data of Tweets in real time. Tweepy docs for the Streaming API are here.

Enjoy and happy Tweeting!

If you have any questions or feedback, write a comment below or find me on Twitter @LVNGD.
