Download a copy of your X posts via the X API

April 28, 2026


Overview

For the past couple of years I've been looking for a way to affordably get hold of every single tweet I've ever sent on X. Until now, it has either cost hundreds of dollars to get that data or required crazy command-line data scraping that was above my pay grade. With the advancements in both local and cloud LLMs and the recent changes to X's API access and pricing, this is now more affordable and approachable than ever. The X API now has a specific flag called Owned Reads, which gives you special, much cheaper pricing for these API calls: $0.001 per post.

I went through all the steps from beginning to end on how to do this yourself, and while it still involves some command line, the process has never been simpler. This isn't necessarily a technical step-by-step guide, but it covers all of the steps that were necessary to get this done, so you can tell whether it's within your realm of possibility or not. You'll be using a combination of an X developer account, the command line on your computer, a little bit of Python code, and your LLM of choice to make it easy on yourself. The data arrives as a single file in JSON format, and you can convert it to CSV once you've got the data. Here is an example of my output in CSV.

IMPORTANT TIP: I used Claude as a technical support agent throughout this process, and used it to help me build the script that pulls the information from X. If you're not already doing this sort of thing, you should start.


Summary of what you're doing

  • Using your X account to create a developer account on console.x.com

    • Creating a new project

    • Creating a new app

    • Adding API credits

    • Enabling OAuth 2.0

  • Installing "tweepy" via Python in your terminal

  • Creating a python script that details your X credentials, your account, and what kind of data you want to collect. (you're not writing from scratch, don't worry)

  • Using the terminal on your computer to execute that python script and request the data from X

  • X gives you files with the data you're requesting

Prerequisites

  • A computer with internet (duh). I'm using a Mac for this.

    • Your terminal app of choice. I use Warp.

    • Python 3 installed. Install it from python.org or with brew install python. If you're not already using Homebrew on macOS, you're missing out. (There's a quick version check after this list.)

  • An X (Twitter) account which you can use to create an X developer account

  • A few dollars for API credits ($5 is more than enough)
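
If you want to sanity-check the Python prerequisite before you start, run these two commands in your terminal; both should print a version number rather than "command not found":

python3 --version
pip3 --version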

Step 1: X Developer Account Setup

  1. Go to console.x.com and sign in with your regular X account.

  2. Accept the developer agreement

  3. You're automatically on the pay-per-use plan (no free tier exists as of Feb 2026)

Step 2: Add API Credits

Load at least $5 into your developer account billing section. Without credits, every API call returns a 402 Payment Required error. Owned Reads (reading your own posts) cost $0.001 per post.

Step 3: Create a Project and App

  1. Create a new project in the developer console (name it anything, e.g. "My Posts Scraper")

  2. Create an app inside that project

IMPORTANT NOTE: when you create apps and APIs like this, some keys, tokens, and IDs are shown to you only once, at creation time, and you can never retrieve them again. This is on purpose, for security. As you go through this process, you are going to see the following items:

  • Consumer key

  • Secret key

  • Bearer token

  • OAuth 2.0 client ID

  • OAuth 2.0 client secret

Just make sure to save these when they pop up on screen and put them somewhere safe. I use 1Password for this. These will pop up in the next steps.
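
If you'd rather not paste secrets directly into the script in Step 7, one option (a minimal sketch; the X_CLIENT_ID and X_CLIENT_SECRET names are placeholders I made up, not something X gives you) is to store them as environment variables and read them in Python:

# Set these in your shell first (names are hypothetical):
#   export X_CLIENT_ID="your-client-id"
#   export X_CLIENT_SECRET="your-client-secret"
import os

CLIENT_ID = os.environ["X_CLIENT_ID"]
CLIENT_SECRET = os.environ["X_CLIENT_SECRET"]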

Step 4: Enable OAuth 2.0

  1. Go to your app's Settings tab

  2. Scroll to User authentication settings, click Set up

  3. Select app type: Web App (confidential client)

  4. Set Callback URL to: http://localhost:3000/callback

  5. Set Website URL to anything (your X profile works)

  6. Save

Step 5: Grab Your Credentials

  1. Go to Keys and Tokens tab

  2. Copy your OAuth 2.0 Client ID and OAuth 2.0 Client Secret

  3. Save them immediately - they only display once

Step 6: Install Tweepy

IMPORTANT NOTE: I don't think using Tweepy via Python is the most recent and up-to-date way of grabbing this information, but since Claude Co-Work didn't have enough information to give me the best way to do it, it defaulted to Tweepy. It still worked great, but a different way might be better. Here is what Claude had to say when I asked for assistance using the brand new XDK that dropped this year.

Tweepy still works and is the most mature community library. Your script is proof of that. The issue is that it was designed around Twitter API v1.1 and added v2 support after the fact. The abstractions can feel awkward for v2-native workflows, and when X changes endpoints or adds new v2 features, Tweepy tends to lag behind.

The official alternative is xdk (X Developer Kit). X shipped their first official Python SDK in early 2026. You install it with pip install xdk. It's auto-generated from their API spec, so it has type hints, automatic pagination, and streaming support baked in. The upside is it'll always match the current API surface since X maintains it themselves.

The catch: it's brand new. Documentation is thin, the GitHub repo had something like 22 stars last I checked, and the community around it is tiny compared to Tweepy's years of Stack Overflow answers and tutorials. If something breaks, you're mostly on your own.

To install Tweepy, run this in your terminal:

pip3 install tweepy

Important: use pip3, not pip, on macOS. pip will return zsh: command not found: pip.

Step 7: Create the Python Script

Key things the script needs:

  • Set OAUTHLIB_INSECURE_TRANSPORT=1 as an environment variable at the top of the script (fixes InsecureTransportError for the http://localhost callback)

  • Pass Client ID, Client Secret, and callback URL to tweepy.OAuth2UserHandler

  • Scopes needed: tweet.read, users.read, offline.access

  • After getting token, pass it to tweepy.Client via bearer_token parameter (NOT access_token, which is for OAuth 1.0a and throws 401 Unauthorized)

  • Use client.get_users_tweets() with start_time and end_time to filter by date range

  • Paginate in batches of 100, handle rate limits automatically

Here is an example of the Python code that I used for my most recent run. I only wanted it to cover the month of April, so if you are trying to target a certain date range, make that adjustment via the START_TIME and END_TIME values in the code. You don't need to go over this code with a fine-tooth comb. Just copy all of the code from this page, paste it into ChatGPT or Claude or your LLM of choice, and ask where your information needs to be inserted. Please note that sharing your client secret or API keys with LLMs is not advised, but you can use an LLM to generate your first Python script and then regenerate those credentials once you're done to make sure they're safe.

"""
Pull all of your own posts from X (Twitter) using the v2 API.
Uses OAuth 2.0 PKCE with confidential client for Owned Reads pricing ($0.001/post).
Saves everything to a JSON file.

Usage:
    pip3 install tweepy
    python3 pull_my_posts.py
"""

import os

os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"  # Allow http://localhost for local OAuth

import tweepy
import json
import time
from datetime import datetime

# ============================================================
# YOUR CREDENTIALS (confidential client)
# Get these from https://developer.x.com/en/portal/dashboard
# ============================================================
CLIENT_ID = "YOUR_CLIENT_ID"
CLIENT_SECRET = "YOUR_CLIENT_SECRET"
REDIRECT_URI = "http://localhost:3000/callback"
USERNAME = "YOUR_USERNAME"

# Optional date range (ISO 8601, UTC). Leave both as None to pull everything.
# Example for April 2026: "2026-04-01T00:00:00Z" to "2026-04-28T00:00:00Z"
START_TIME = None
END_TIME = None

# Scopes needed: read your own tweets and profile
SCOPES = ["tweet.read", "users.read", "offline.access"]


def authenticate():
    """Run OAuth 2.0 PKCE flow and return an authenticated client."""
    oauth2_handler = tweepy.OAuth2UserHandler(
        client_id=CLIENT_ID,
        client_secret=CLIENT_SECRET,
        redirect_uri=REDIRECT_URI,
        scope=SCOPES,
    )

    auth_url = oauth2_handler.get_authorization_url()
    print("\n" + "=" * 60)
    print("STEP 1: Open this URL in your browser:")
    print("=" * 60)
    print(f"\n{auth_url}\n")
    print("STEP 2: Click 'Authorize app'")
    print("STEP 3: You'll be redirected to a URL that probably won't load.")
    print("        That's normal. Copy the ENTIRE URL from your browser's")
    print("        address bar and paste it below.\n")

    redirect_response = input("Paste the full redirect URL here: ").strip()
    access_token = oauth2_handler.fetch_token(redirect_response)
    token = access_token["access_token"]

    client = tweepy.Client(bearer_token=token)
    print("\nAuthenticated successfully!\n")
    return client


def get_user_id(client, username):
    """Look up numeric user ID from username."""
    user = client.get_user(username=username)
    if user.data is None:
        raise Exception(f"Could not find user @{username}")
    print(f"Found @{username} (ID: {user.data.id})")
    return user.data.id


def pull_all_posts(client, user_id):
    """
    Pull all posts from a user's timeline with pagination.
    Returns a list of post dictionaries.
    """
    all_posts = []
    pagination_token = None
    page = 1

    print("\nPulling posts...\n")

    while True:
        try:
            response = client.get_users_tweets(
                id=user_id,
                max_results=100,  # max per request
                pagination_token=pagination_token,
                start_time=START_TIME,  # optional date-range filters; None means no filter
                end_time=END_TIME,
                tweet_fields=[
                    "created_at",
                    "public_metrics",
                    "text",
                    "id",
                    "conversation_id",
                    "in_reply_to_user_id",
                    "referenced_tweets",
                    "attachments",
                    "entities",
                    "lang",
                    "source",
                ],
                exclude=None,  # include replies and retweets; set to ["replies","retweets"] to skip them
            )
        except tweepy.TooManyRequests:
            print("Rate limited. Waiting 15 minutes...")
            time.sleep(15 * 60 + 10)
            continue
        except tweepy.errors.TwitterServerError as e:
            print(f"Server error: {e}. Retrying in 30 seconds...")
            time.sleep(30)
            continue

        if response.data is None:
            print("No more posts found.")
            break

        for tweet in response.data:
            post = {
                "id": tweet.id,
                "text": tweet.text,
                "created_at": tweet.created_at.isoformat() if tweet.created_at else None,
                "public_metrics": dict(tweet.public_metrics) if tweet.public_metrics else None,
                "conversation_id": tweet.conversation_id,
                "in_reply_to_user_id": tweet.in_reply_to_user_id,
                "referenced_tweets": (
                    [{"type": rt.type, "id": rt.id} for rt in tweet.referenced_tweets]
                    if tweet.referenced_tweets
                    else None
                ),
                "lang": tweet.lang,
                "source": tweet.source,
                "entities": dict(tweet.entities) if tweet.entities else None,
            }
            all_posts.append(post)

        print(f"  Page {page}: pulled {len(response.data)} posts (total: {len(all_posts)})")

        # Check for next page
        if response.meta and "next_token" in response.meta:
            pagination_token = response.meta["next_token"]
            page += 1
            time.sleep(1)  # small delay to be nice to the API
        else:
            break

    return all_posts


def main():
    print("=" * 60)
    print("  X Post Scraper - Pull Your Own Posts")
    print("=" * 60)

    # Authenticate
    client = authenticate()

    # Get user ID
    user_id = get_user_id(client, USERNAME)

    # Pull all posts
    posts = pull_all_posts(client, user_id)

    # Save to JSON
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"x_posts_{USERNAME}_{timestamp}.json"

    output = {
        "username": USERNAME,
        "user_id": str(user_id),
        "total_posts": len(posts),
        "pulled_at": datetime.now().isoformat(),
        "posts": posts,
    }

    with open(filename, "w", encoding="utf-8") as f:
        json.dump(output, f, indent=2, ensure_ascii=False)

    print(f"\nDone! Saved {len(posts)} posts to {filename}")
    print(f"Estimated cost: ${len(posts) * 0.001:.2f} (Owned Reads pricing)")


if __name__ == "__main__":
    main()

"""
Pull all of your own posts from X (Twitter) using the v2 API.
Uses OAuth 2.0 PKCE with confidential client for Owned Reads pricing ($0.001/post).
Saves everything to a JSON file.

Usage:
    pip3 install tweepy
    python3 pull_my_posts.py
"""

import os

os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"  # Allow http://localhost for local OAuth

import tweepy
import json
import time
from datetime import datetime, timezone

# ============================================================
# YOUR CREDENTIALS (confidential client)
# Get these from https://developer.x.com/en/portal/dashboard
# ============================================================
CLIENT_ID = "YOUR_CLIENT_ID"
CLIENT_SECRET = "YOUR_CLIENT_SECRET"
REDIRECT_URI = "http://localhost:3000/callback"
USERNAME = "YOUR_USERNAME"

# Scopes needed: read your own tweets and profile
SCOPES = ["tweet.read", "users.read", "offline.access"]


def authenticate():
    """Run OAuth 2.0 PKCE flow and return an authenticated client."""
    oauth2_handler = tweepy.OAuth2UserHandler(
        client_id=CLIENT_ID,
        client_secret=CLIENT_SECRET,
        redirect_uri=REDIRECT_URI,
        scope=SCOPES,
    )

    auth_url = oauth2_handler.get_authorization_url()
    print("\n" + "=" * 60)
    print("STEP 1: Open this URL in your browser:")
    print("=" * 60)
    print(f"\n{auth_url}\n")
    print("STEP 2: Click 'Authorize app'")
    print("STEP 3: You'll be redirected to a URL that probably won't load.")
    print("        That's normal. Copy the ENTIRE URL from your browser's")
    print("        address bar and paste it below.\n")

    redirect_response = input("Paste the full redirect URL here: ").strip()
    access_token = oauth2_handler.fetch_token(redirect_response)
    token = access_token["access_token"]

    client = tweepy.Client(bearer_token=token)
    print("\nAuthenticated successfully!\n")
    return client


def get_user_id(client, username):
    """Look up numeric user ID from username."""
    user = client.get_user(username=username)
    if user.data is None:
        raise Exception(f"Could not find user @{username}")
    print(f"Found @{username} (ID: {user.data.id})")
    return user.data.id


def pull_all_posts(client, user_id):
    """
    Pull all posts from a user's timeline with pagination.
    Returns a list of post dictionaries.
    """
    all_posts = []
    pagination_token = None
    page = 1

    print("\nPulling posts...\n")

    while True:
        try:
            response = client.get_users_tweets(
                id=user_id,
                max_results=100,  # max per request
                pagination_token=pagination_token,
                tweet_fields=[
                    "created_at",
                    "public_metrics",
                    "text",
                    "id",
                    "conversation_id",
                    "in_reply_to_user_id",
                    "referenced_tweets",
                    "attachments",
                    "entities",
                    "lang",
                    "source",
                ],
                exclude=None,  # include replies and retweets; set to ["replies","retweets"] to skip them
            )
        except tweepy.TooManyRequests:
            print("Rate limited. Waiting 15 minutes...")
            time.sleep(15 * 60 + 10)
            continue
        except tweepy.errors.TwitterServerError as e:
            print(f"Server error: {e}. Retrying in 30 seconds...")
            time.sleep(30)
            continue

        if response.data is None:
            print("No more posts found.")
            break

        for tweet in response.data:
            post = {
                "id": tweet.id,
                "text": tweet.text,
                "created_at": tweet.created_at.isoformat() if tweet.created_at else None,
                "public_metrics": dict(tweet.public_metrics) if tweet.public_metrics else None,
                "conversation_id": tweet.conversation_id,
                "in_reply_to_user_id": tweet.in_reply_to_user_id,
                "referenced_tweets": (
                    [{"type": rt.type, "id": rt.id} for rt in tweet.referenced_tweets]
                    if tweet.referenced_tweets
                    else None
                ),
                "lang": tweet.lang,
                "source": tweet.source,
                "entities": dict(tweet.entities) if tweet.entities else None,
            }
            all_posts.append(post)

        print(f"  Page {page}: pulled {len(response.data)} posts (total: {len(all_posts)})")

        # Check for next page
        if response.meta and "next_token" in response.meta:
            pagination_token = response.meta["next_token"]
            page += 1
            time.sleep(1)  # small delay to be nice to the API
        else:
            break

    return all_posts


def main():
    print("=" * 60)
    print("  X Post Scraper - Pull Your Own Posts")
    print("=" * 60)

    # Authenticate
    client = authenticate()

    # Get user ID
    user_id = get_user_id(client, USERNAME)

    # Pull all posts
    posts = pull_all_posts(client, user_id)

    # Save to JSON
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"x_posts_{USERNAME}_{timestamp}.json"

    output = {
        "username": USERNAME,
        "user_id": str(user_id),
        "total_posts": len(posts),
        "pulled_at": datetime.now().isoformat(),
        "posts": posts,
    }

    with open(filename, "w", encoding="utf-8") as f:
        json.dump(output, f, indent=2, ensure_ascii=False)

    print(f"\nDone! Saved {len(posts)} posts to {filename}")
    print(f"Estimated cost: ${len(posts) * 0.001:.2f} (Owned Reads pricing)")


if __name__ == "__main__":
    main()

"""
Pull all of your own posts from X (Twitter) using the v2 API.
Uses OAuth 2.0 PKCE with confidential client for Owned Reads pricing ($0.001/post).
Saves everything to a JSON file.

Usage:
    pip3 install tweepy
    python3 pull_my_posts.py
"""

import os

os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"  # Allow http://localhost for local OAuth

import tweepy
import json
import time
from datetime import datetime, timezone

# ============================================================
# YOUR CREDENTIALS (confidential client)
# Get these from https://developer.x.com/en/portal/dashboard
# ============================================================
CLIENT_ID = "YOUR_CLIENT_ID"
CLIENT_SECRET = "YOUR_CLIENT_SECRET"
REDIRECT_URI = "http://localhost:3000/callback"
USERNAME = "YOUR_USERNAME"

# Scopes needed: read your own tweets and profile
SCOPES = ["tweet.read", "users.read", "offline.access"]


def authenticate():
    """Run OAuth 2.0 PKCE flow and return an authenticated client."""
    oauth2_handler = tweepy.OAuth2UserHandler(
        client_id=CLIENT_ID,
        client_secret=CLIENT_SECRET,
        redirect_uri=REDIRECT_URI,
        scope=SCOPES,
    )

    auth_url = oauth2_handler.get_authorization_url()
    print("\n" + "=" * 60)
    print("STEP 1: Open this URL in your browser:")
    print("=" * 60)
    print(f"\n{auth_url}\n")
    print("STEP 2: Click 'Authorize app'")
    print("STEP 3: You'll be redirected to a URL that probably won't load.")
    print("        That's normal. Copy the ENTIRE URL from your browser's")
    print("        address bar and paste it below.\n")

    redirect_response = input("Paste the full redirect URL here: ").strip()
    access_token = oauth2_handler.fetch_token(redirect_response)
    token = access_token["access_token"]

    client = tweepy.Client(bearer_token=token)
    print("\nAuthenticated successfully!\n")
    return client


def get_user_id(client, username):
    """Look up numeric user ID from username."""
    user = client.get_user(username=username)
    if user.data is None:
        raise Exception(f"Could not find user @{username}")
    print(f"Found @{username} (ID: {user.data.id})")
    return user.data.id


def pull_all_posts(client, user_id):
    """
    Pull all posts from a user's timeline with pagination.
    Returns a list of post dictionaries.
    """
    all_posts = []
    pagination_token = None
    page = 1

    print("\nPulling posts...\n")

    while True:
        try:
            response = client.get_users_tweets(
                id=user_id,
                max_results=100,  # max per request
                pagination_token=pagination_token,
                tweet_fields=[
                    "created_at",
                    "public_metrics",
                    "text",
                    "id",
                    "conversation_id",
                    "in_reply_to_user_id",
                    "referenced_tweets",
                    "attachments",
                    "entities",
                    "lang",
                    "source",
                ],
                exclude=None,  # include replies and retweets; set to ["replies","retweets"] to skip them
            )
        except tweepy.TooManyRequests:
            print("Rate limited. Waiting 15 minutes...")
            time.sleep(15 * 60 + 10)
            continue
        except tweepy.errors.TwitterServerError as e:
            print(f"Server error: {e}. Retrying in 30 seconds...")
            time.sleep(30)
            continue

        if response.data is None:
            print("No more posts found.")
            break

        for tweet in response.data:
            post = {
                "id": tweet.id,
                "text": tweet.text,
                "created_at": tweet.created_at.isoformat() if tweet.created_at else None,
                "public_metrics": dict(tweet.public_metrics) if tweet.public_metrics else None,
                "conversation_id": tweet.conversation_id,
                "in_reply_to_user_id": tweet.in_reply_to_user_id,
                "referenced_tweets": (
                    [{"type": rt.type, "id": rt.id} for rt in tweet.referenced_tweets]
                    if tweet.referenced_tweets
                    else None
                ),
                "lang": tweet.lang,
                "source": tweet.source,
                "entities": dict(tweet.entities) if tweet.entities else None,
            }
            all_posts.append(post)

        print(f"  Page {page}: pulled {len(response.data)} posts (total: {len(all_posts)})")

        # Check for next page
        if response.meta and "next_token" in response.meta:
            pagination_token = response.meta["next_token"]
            page += 1
            time.sleep(1)  # small delay to be nice to the API
        else:
            break

    return all_posts


def main():
    print("=" * 60)
    print("  X Post Scraper - Pull Your Own Posts")
    print("=" * 60)

    # Authenticate
    client = authenticate()

    # Get user ID
    user_id = get_user_id(client, USERNAME)

    # Pull all posts
    posts = pull_all_posts(client, user_id)

    # Save to JSON
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"x_posts_{USERNAME}_{timestamp}.json"

    output = {
        "username": USERNAME,
        "user_id": str(user_id),
        "total_posts": len(posts),
        "pulled_at": datetime.now().isoformat(),
        "posts": posts,
    }

    with open(filename, "w", encoding="utf-8") as f:
        json.dump(output, f, indent=2, ensure_ascii=False)

    print(f"\nDone! Saved {len(posts)} posts to {filename}")
    print(f"Estimated cost: ${len(posts) * 0.001:.2f} (Owned Reads pricing)")


if __name__ == "__main__":
    main()


Python scripts, like other code, can be saved as files. They have the .py extension. When you run this script in the terminal, you tell the terminal: "hey, I've got a script in this file. The file is in this folder. Run it."

Step 8: Run the Script

  1. Open your terminal and navigate to where you have your script saved. Remember that when you're running scripts in the terminal, you need to tell the terminal where the file is. You can cd (change directory) to where you have that script saved. Mine lived in an iCloud Drive folder on my Mac. (There's an example run after this list.)

  2. Run: python3 pull_my_posts.py.

  3. The script prints a URL - open it in your browser

  4. X asks you to authorize the app - click yes

  5. Browser redirects to localhost:3000/callback?state=...&code=... - page won't load, that's normal

  6. Copy the ENTIRE URL from the address bar

  7. Paste it into the terminal

  8. Script pulls all your posts and saves to JSON
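
Put together, a typical run looks like this in the terminal (the folder path is just an example; cd to wherever you actually saved the script):

cd ~/Documents/x-scraper
python3 pull_my_posts.py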

Errors I hit that you might too

  • Error: zsh: command not found: pip
    Cause: macOS uses pip3, not pip
    Fix: use pip3 install tweepy

  • Error: InsecureTransportError: OAuth 2 MUST utilize https
    Cause: the OAuth library rejects http://localhost
    Fix: add os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1" before importing tweepy

  • Error: 401 Unauthorized on get_user()
    Cause: token passed via the wrong parameter
    Fix: change tweepy.Client(access_token=token) to tweepy.Client(bearer_token=token)

  • Error: 402 Payment Required
    Cause: no API credits loaded
    Fix: add credits on your developer account's billing page

  • Error: X page load error when opening the authorization URL
    Cause: you weren't logged into X first
    Fix: if you clicked the URL in the terminal while not logged into X, the page errors out. Log in, refresh the page, and then click the URL in the terminal again.

Output

The JSON file contains: username, user ID, total post count, pull timestamp, and an array of every post with full text, creation date, engagement metrics (likes, retweets, replies, views), conversation ID, reply info, referenced tweets, language, source app, and entities (URLs, mentions, hashtags). When the script finishes, it prints the name of the JSON file it generated, and it saves that file in the same folder as your Python script.
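
To double-check what you got, here's a minimal sketch that loads the JSON and prints a quick summary (the filename below is a placeholder; use the one your run actually printed):

import json

# Replace with the filename your run printed at the end
with open("x_posts_YOUR_USERNAME_20260428_120000.json", encoding="utf-8") as f:
    data = json.load(f)

print(f"@{data['username']}: {data['total_posts']} posts, pulled at {data['pulled_at']}")
if data["posts"]:
    print("Most recent post:", data["posts"][0]["text"][:80])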

Cost

Owned Reads pricing: $0.001 per post. 1,000 posts = $1. 5,000 posts = $5. I downloaded all of my posts from April, including replies and quotes, for a total of 248 posts. Cost: about $0.25 USD.

Helpful Tips

It doesn't look like the original JSON that was extracted included URLs to the posts, which for me was one of the most important properties of this data. But Claude caught this and automatically added a column, using the post ID in the JSON to auto-create the URLs for me. Good fucking work, Claudette.
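
If you want to replicate that yourself, here's a minimal sketch (my own, not the exact column Claude produced) that loads the JSON, rebuilds each post's URL from its ID using the standard https://x.com/<username>/status/<id> format, and writes a simple CSV:

import csv
import json

# Replace with your actual output filename
with open("x_posts_YOUR_USERNAME_20260428_120000.json", encoding="utf-8") as f:
    data = json.load(f)

with open("x_posts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "created_at", "text", "likes", "url"])
    for post in data["posts"]:
        metrics = post.get("public_metrics") or {}
        writer.writerow([
            post["id"],
            post["created_at"],
            post["text"],
            metrics.get("like_count"),
            f"https://x.com/{data['username']}/status/{post['id']}",  # rebuild the post URL from the ID
        ])

Open the resulting x_posts.csv in Numbers, Excel, or Google Sheets and you've got the spreadsheet version of your archive.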


👋

Interested in hiring me to review your product? Let's talk.

© Naaackers, 2026. All rights reserved.