Ryan Finnie

I discovered the Debian OpenSSL bug

16 years ago today, Debian announced CVE-2008-0166, “predictable random number generator”. Reported by Luciano Bello, who is credited with its discovery, this was a Debian-specific OpenSSL vulnerability which had been in place for nearly two years and is still being exploited in the wild to this day.

However, in this post I’ll tell the story of how I discovered the bug nearly a year earlier, but didn’t report it. Why didn’t I? Read on for the story behind this totally-not-clickbait title!


Lessons from the Debian/OpenSSL Fiasco, written a week after the announcement, goes into more detail, but in short, the bug resulted in key generation being tied directly to the process ID (PID) of the generating program. This would be bad enough today, with most distributions defaulting to a maximum of about 4.2 million PIDs, but back in 2008 the default on all distros, including Debian, was 32,768. I believe Linux had optional support for higher PID limits back then (though the earliest reference I can find from a quick search is 2011), but higher limits have only recently become the default on distros, since the assumption of a 32,768 maximum PID was baked into a lot of software for so long.
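As a rough back-of-the-envelope illustration (not the actual attack tooling, though pre-generated blacklists such as the openssl-blacklist package worked on the same principle), the entire keyspace an attacker needed to cover was tiny:

# Rough illustration of the CVE-2008-0166 keyspace: with the RNG effectively
# seeded only by the PID, there is one possible key per PID for a given key
# type, size, and architecture. The key variants here are an illustrative subset.
MAX_PID = 32768
key_variants = ["rsa-1024", "rsa-2048", "dsa-1024"]

print(MAX_PID * len(key_variants), "keys to pre-generate")  # 98304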

On April 8, 2007, Debian 4.0 “etch” was released. This was the first stable release containing the OpenSSL bug, but the bug had been present in Debian unstable/testing since September 17, 2006, as well as in a few non-LTS Ubuntu releases, and possibly other Debian-based distributions.

At the time, I was working for a marketing company in Reno, Nevada. A large part of our business was website design, and we did in-house hosting to support it. It was large enough that we had datacenter space and multiple sysadmins to support the programmers and designers. We started out small with mostly shared hosting, with dedicated servers for the largest customers. But around 2005, I developed an internal system for virtualized hosting based on User Mode Linux. UML allowed for running a kernel as a user-mode program on the host system. This provided decent separation of the guests, and was used by early Virtual Dedicated Server providers like Linode, before they moved on to Xen and later KVM.

So we were using UML on new guests, but were still pretty much treating them like dedicated servers. And back then, that meant the time-honored tradition of a quick-and-dirty initialization script to automate new setup tasks. I would boot the guest with a generic base Debian image, connect to its virtual serial console, log in as root, and run the equivalent of curl | sh. (At least the script being retrieved was on a local LAN.) The script would ask a few questions, then run all our base setup installations and configurations.

A few months after the release of etch, I noticed that many of the deployed guests had the same OpenSSH host key. Over half of all deployed etch guests had the exact same key, but there were also smaller clusters of guests which shared other host keys between them. I had no idea how this had happened, and worse, I wasn’t able to reproduce the problem when deploying test guests. Eventually I gave up on trying to root cause the issue, re-generated the duplicated SSH host keys, and moved on.

Then, on May 13, 2008, I saw the Debian announcement and it all made sense. I knew what I had discovered the year before, why it had happened, and why I hadn’t been able to reproduce it.

One of the first things my deployment script did was apt-get install openssh-server. Because this happened very early in the script, and because the base boot image was so minimal, the number of processes run between kernel start and the point at which ssh-keygen is run as part of the OpenSSH server install was deterministic, as long as my actions of logging in for the first time and running curl | sh were the same each time; that meant ssh-keygen always ended up with the same PID, and therefore generated the same key. When I was trying to reproduce the problem after discovering it, I was introducing variance by, say, checking the environment ahead of time, or by manually downloading the script and then running it, or by editing the script itself to add debugging.

This explained why most of the deployed guests had the same host key. The other clusters of guests with shared keys were explained by things like making a change to the install script down the road, or even typing a fidget ls or similar before starting the install. I had discovered the symptoms of the Debian OpenSSL bug months ahead of time, but I didn’t report it because I hadn’t realized the cause or implications of what I had discovered, and I’m still kicking myself for not putting the pieces together.

Checking back on my IRC logs from the day after the announcement revealed this humorous reply as I was telling the story:

Way to not report this last year and save the day

Next time you discover a critical bug, save it for christmas or new years eve though

The printer formerly known as Ender

Ender 3 V2 3D printer, heavily modified

Five years ago, I wrote about the 3D printer I had recently bought, a Monoprice Maker Select Plus (Wanhao Duplicator i3 Plus), and the various mods I had done to it. I had discovered that when it comes to 3D printing, for me the journey was much more fun than the destination, and while I often use 3D printers for “practical” purposes, hacking on the printer itself is the most fun.

In that post, I said:

3D printers are like cats: people who have more than zero usually have more than one. Some even have their houses overrun by them.

Two years after that, I bought another printer, a Creality Ender 3 V2. In the same spirit, it’s a very hackable printer, to the extent that I’m not even sure I should call it an Ender 3 anymore. Most of the components have been modified or replaced in the 3 years I’ve had it, with the latest being a direct extrusion conversion. Everything orange you see in that picture is a 3D printed mod.

Admittedly, a lot of it is change for change’s sake, but a lot is also for performance, quality, and reliability reasons. For a hobbyist printer, it works amazingly well and produces nearly perfect Benchys, though of course the speed isn’t as great as that of a hyper-modern printer like a Bambu.

While I am perpetually on the edge of building a full kit from scratch like a Voron, my time with the Ender 3 V2 has been very enjoyable. Below is an account of all the changes I’ve made to the printer since I bought it 3 years ago.


The newest addition (within the last few weeks) is a direct drive extruder, the Creality Sprite SE. (Not to be confused with the dozen other variants of the Creality Sprite. Creality has a tendency to do that with everything it makes.) This replaced the Bowden tube system; the only other accessory to note is a simple cable routing channel I designed so the cable bundle didn’t get pinched on the sides when moving around.

The hotend is a Micro-Swiss all-metal hotend, and the fan assembly is a complete replacement, supporting a better hotend cooling fan and part cooling fan. A BLTouch automatic bed leveler is also added. (That link now goes to Creality’s CRTouch alternative, but when I bought it, it was a genuine Antclabs BLTouch, back when they were partnering directly with Creality.)

The LCD panel has been modified, changing it from portrait on the side to landscape in front, with a new control knob. The UI has changed from CrealityUI to MarlinUI, and because of this (and the many other mods), I’m running standard Marlin, with my own custom configuration.

The Z axis has been upgraded to dual screw with dual motors, though they are both controlled by the same stepper driver, so it’s more about stability than independent calibration. In addition, the Z axis shaft couplers have been upgraded to reduce horizontal shaft movement.

Ender 3 V2 3D printer, heavily modified, underside view

The entire underside has been pretty extensively reworked, as the stock Ender 3 V2 was quite starved for airflow. The covers for the mainboard and PSU area have been replaced with prints which allow for quiet 80mm fans, but as they are 12v fans and the printer is a 24v system, an LM2596 buck converter is needed. To support all of this, the printer needs to be raised up a few inches, so these feet are used (which even incorporate the original printer’s rubber feet).

Last year, I was having issues with Y axis shifting. The problem ultimately turned out to be the motor itself losing steps, so I needed to replace the motor and pulley (since the factory motor includes the pulley permanently attached). However, there was also the concern that it may have been the stepper driver on the mainboard itself, and the drivers are not modular. While that turned out not to be the case, I ended up buying and installing a replacement V4.2.7 mainboard, which is a direct replacement for the stock V4.2.2 board, with some very minor improvements.

Not much has been done to the bed itself, with the only addition being upgraded springs, though the BLTouch makes manual bed leveling unnecessary (you’re good if you can keep it within 0.5mm of level or so). However, I did simply remove the rear left screw from the bed, making it a 3-point system, which helps reduce flex within the bed itself.

The filament spool holder has been upgraded to add skateboard bearings. And finally, a simple filament feed arm, though this isn’t needed as much today, since I replaced the Bowden system with direct extrusion.

Better-Assembled Access Tokens

A few years ago, GitHub changed the format of their access tokens from a hexadecimal format which was indistinguishable from a SHA1 hash to a format with a human-identifiable prefix and built-in checksumming, which allows a program to recognize it as a GitHub token. This is useful for being able to determine if, for example, an access token was accidentally committed into a repository. I welcomed this, but recently wanted to build an agnostic version which could be used in other systems.

Enter: Better-Assembled Access Tokens (BAAT). The token format looks like so:

bat_pfau4bdvkqwmwwur2bjo2q2squjeld5fafgyk5sd
bat_3udmmr57bglierumrjxjxrkiv3nydd5faebohhgn
bat_bbzz6q4rnbnu6tkujrb73vhfuk6pdd5fafme5kq5

“bat” is the prefix and can be any lowercase alphanumeric string, but should be between 2 and 5 characters.

The other part – the wrapped data – contains a payload of 144 bits (18 bytes), a magic number and version identifier, and a checksum. This payload size allows for a full UUID to be generated, with 2 bytes left for additional control data if needed.

The checksum includes all of the data, including the prefix (which is not a feature of GitHub’s tokens), and the fact that it has a binary magic number means a BAAT can be identified programmatically, no matter the prefix chosen by the application. A BAAT is canonically all lowercase, but can handle being case-corrupted in transit.

A sample Python implementation is below, but the general specification for BAAT is:

  • If the binary payload is under 18 bytes, pad it to 18 bytes
  • CRC32 the prefix + payload + magic number (\x8f\xa5) + version (\x01)
  • Assemble the wrapped data as a base32 concatenation of payload + magic number + version + CRC (4 bytes, big-endian)
  • Assemble the final BAAT as the prefix + “_” + the wrapped data

And to verify a BAAT:

  • Split the string into prefix and wrapped data by “_”
  • Base32 decode the wrapped data and verify it’s at least 7 bytes
  • Verify the 2 bytes at position -7 are \x8f\xa5
  • Verify the byte at position -5 is \x01 for version 1 (currently the only version, but doesn’t hurt to future-proof – the rest of the process assumes a version 1 BAAT)
  • Verify the wrapped data is 25 bytes
  • Extract the payload as the 18 bytes at position 0 (the beginning), and the checksum as the 4 bytes at position -4 (the end)
  • Verify the checksum as the CRC32 of prefix + payload + magic number + version

This specification is open; feel free to use it in your implementations!

If you’re wondering why the magic number and version are in the middle of the wrapped data instead of at the front as is typical for a data format (thus requiring some additional positional math), it’s because they encode to a static sequence of text that appears in every BAAT. Placing the payload at the beginning and the checksum at the end allows a human to quickly pattern match “oh, this is the ‘3ud’ token, not the ‘pfa’ token”.

If you’re wondering why the payload is 18 bytes, it’s because BAAT uses base32 for the encoding, which pads its output with trailing equal signs. 20 input bytes is a multiple of base32’s 5-byte group size and encodes with no padding, which would have allowed for a 16-byte payload and 4-byte checksum. But I wanted to have a 2-byte magic number, and the next multiple without padding was 25 bytes, so the final 3 bytes were used for a 1-byte version and 2 extra payload bytes.
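A quick way to see that padding behavior with Python’s base64 module (the all-zero inputs are just for illustrating the lengths):

from base64 import b32encode

print(b32encode(bytes(18)))  # 32 chars, the last 3 of which are "=" padding
print(b32encode(bytes(20)))  # 32 chars, no padding
print(b32encode(bytes(25)))  # 40 chars, no padding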

# Better-Assembled Access Tokens
# SPDX-FileCopyrightText: Copyright (C) 2023 Ryan Finnie
# SPDX-License-Identifier: MIT

from base64 import b32decode, b32encode
from secrets import token_bytes
from zlib import crc32


class BAATError(ValueError):
    pass


def make_baat(prefix="bat", payload=None):
    magic = b"\x8f\xa5"
    baat_ver = b"\x01"
    if payload is None:
        # Default to a random 18-byte payload, generated with a CSPRNG
        payload = token_bytes(18)
    elif len(payload) > 18:
        raise BAATError("Payload too large")
    elif len(payload) < 18:
        payload = payload + bytes(18 - len(payload))
    prefix = prefix.lower()
    crc = crc32(prefix.encode("utf-8") + payload + magic + baat_ver) & 0xFFFFFFFF
    wrapped_data_b32 = b32encode(payload + magic + baat_ver + crc.to_bytes(4, "big"))
    return (prefix + "_" + wrapped_data_b32.decode("utf-8")).lower()


def parse_baat(baat):
    parts = baat.split("_")
    if len(parts) != 2:
        raise BAATError("Malformed")
    prefix = parts[0].lower()
    wrapped_data = b32decode(parts[1].upper())
    if len(wrapped_data) < 7:
        raise BAATError("Impossible length")
    magic = wrapped_data[-7:-5]
    baat_ver = wrapped_data[-5:-4]
    if magic != b"\x8f\xa5":
        raise BAATError("Invalid magic number")
    if baat_ver != b"\x01":
        raise BAATError("Invalid BAAT version")
    if len(wrapped_data) != 25:
        raise BAATError("Wrong length")
    payload = wrapped_data[0:18]
    crc = crc32(prefix.encode("utf-8") + payload + magic + baat_ver) & 0xFFFFFFFF
    if wrapped_data[-4:] != crc.to_bytes(4, "big"):
        raise BAATError("Invalid CRC")
    return payload


def is_baat(baat):
    try:
        parse_baat(baat)
    except ValueError:
        return False
    return True


if __name__ == "__main__":
    payload = token_bytes(18)
    baat = make_baat("bat", payload)
    print(baat)
    parsed_payload = parse_baat(baat)
    assert is_baat(baat)
    assert parsed_payload == payload
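    # Per the case-corruption handling described above, an uppercased BAAT
    # still parses: the prefix is lowercased and the wrapped data is
    # uppercased before decoding.
    assert is_baat(baat.upper())
    assert parse_baat(baat.upper()) == payload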

ChatGPT unsettled me

Tom Scott recently put out a video where he had a “minor existential crisis” after giving ChatGPT a coding task. His conclusion was basically, this works better than it should, and that’s unsettling. After watching this, I had my own minor coding task which I decided to give to ChatGPT, and, spoiler alert, I am also unsettled.

The problem I needed to solve was that I have an old Twitter bot which had automatically followed a bunch of people over the years, and I wanted to clear out those follows. As of this writing, Twitter’s API service seems to inexplicably still exist, but the single-purpose OAuth “app” associated with that account was for API v1.1, not v2, so I needed to use API v1.1 calls.

I’d done a lot of Twitter API work over the years, and a lot of that was through Python, so I was ready to kitbash something together using existing code snippets. But let’s see what ChatGPT would do if given the opportunity:

Write a script in Python to use the Twitter API v1.1 to get a list of all friends and then unsubscribe from them

And yeah, it created a correctly formatted, roughly 25-line Python script to do exactly this. It even gave a warning that API access requires authentication, and, amusingly, that unsubscribing from all friends would affect the account’s “social reach”.

(I’m summarizing its responses here; a full chat log, including code at every step, is available at the end of this post.)

One drawback to the specific situation was that it wrote the script to use the tweepy library, which I had never heard of and wasn’t sure was using API v1.1 (though I suspected it was, from the library function destroy_friendship(); “friendships” are API verbs in v1.1 but not v2). Nonetheless, I was more familiar with requests_oauthlib and the direct API endpoints, so I just asked ChatGPT to rewrite it to use that.

Can we use the requests_oauthlib library instead of tweepy?

Sure enough, it produced exactly what I wanted, and I ended up using it for my task.

Everything beyond this was “what-ifs” to poke at ChatGPT. The first thing I noticed was that it was using a less efficient API endpoint. Thinking back to Tom’s video, where he realized he could simply ask ChatGPT why it did something a certain way, I realized I could simply say:

That works, but the friends/ids.json endpoint allows for 5000 results per request, versus 200 on friends/list.json as you pointed out. Let’s use friends/ids.json instead.

ChatGPT’s response was basically “yep, I agree that’s more efficient; here’s an updated script!”, utilizing the new endpoint and specifying the new 5000 user limit.

This was a subtle test for it, since the endpoint I suggested is very similar to the old one, but not a drop-in replacement. You need to make a few minor changes elsewhere in the script to utilize it. ChatGPT passed this test, updating both the endpoint name and the rest of the script to match.

I’m using Python 3.5 and can’t use f-strings. Can you rewrite the code to use string format() instead?

I’m not actually using Python 3.5, but I wanted to ask this as f-strings and format() are very different looking ways to format strings. It rewrote the script correctly.

Can you package this up to run in a GitHub Actions workflow?

It gave me the Python script it had been working on up until this point, and a workflow YAML file, along with instructions on where to put them. The workflow format was correct, and, impressively, remembered my off-hand requirement of Python 3.5 from earlier (even though the context of that was about a specific feature, not Python 3.5 specifically), and made sure to use 3.5 when setting up actions/setup-python@v2. It also knew that requests and requests_oauthlib were not stdlib, so it had the workflow install them through pip.

The workflow did correctly use the GitHub Actions secret store and pass the secrets as environment variables, but up until this point, the script itself still had hardcoded placeholders. I asked ChatGPT to actually use the environment variables.

The workflow file is correctly passing CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET secrets as environment variables to twitter_unfollow.py, but twitter_unfollow.py still has the placeholder variables. Can you modify twitter_unfollow.py to use the environment variables?

Again, it did this without a problem.

In conclusion, I am impressed by ChatGPT’s performance, to the extent of being unsettled by the experience. There are many logical, legal and ethical problems with AI generation and I don’t want to minimize those concerns, but I just wanted to document my limited experience with using it for a coding exercise.

(The irony of using current-generation AI to fiddle with the account of X11R5, an old MegaHAL Markov bot which is decidedly more ancient AI, is also not lost on me.)


Complete chat log

Here is the entire session log between ChatGPT and myself:

Write a script in Python to use the Twitter API v1.1 to get a list of all friends and then unsubscribe from them

Before proceeding with the code, please note that the Twitter API v1.1 requires authentication, which means you need to have valid credentials to access the API. Also, please be aware that unsubscribing from all friends can have significant consequences on your account’s social reach, and it should not be done lightly.

To use the Twitter API v1.1 in Python, we need to first install the tweepy library. We can use the following code to install it:

pip install tweepy

Once tweepy is installed, we can use the following script to get a list of all friends and then unsubscribe from them:

import tweepy

# set up Twitter API credentials
consumer_key = "your_consumer_key"
consumer_secret = "your_consumer_secret"
access_token = "your_access_token"
access_token_secret = "your_access_token_secret"

# authenticate with the Twitter API
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

# create an API object
api = tweepy.API(auth)

# get a list of all friends
friends = api.friends_ids()

# loop through the list of friends and unsubscribe from them
for friend in friends:
    api.destroy_friendship(friend)

In this script, we first set up our Twitter API credentials, then authenticate with the Twitter API using tweepy. We then create an API object and use the friends_ids() method to get a list of all friends.

Finally, we loop through the list of friends and use the destroy_friendship() method to unsubscribe from each friend. Note that this is a permanent action, so use it with caution.

Can we use the requests_oauthlib library instead of tweepy?

Yes, we can use the requests_oauthlib library instead of tweepy to interact with the Twitter API v1.1 in Python. Assuming you have valid credentials and have installed the requests_oauthlib library, here’s an example script to get a list of all friends and then unsubscribe from them using requests_oauthlib:

import requests
from requests_oauthlib import OAuth1Session

# replace with your own credentials
consumer_key = 'your consumer key'
consumer_secret = 'your consumer secret'
access_token = 'your access token'
access_token_secret = 'your access token secret'

# set up OAuth1Session
twitter = OAuth1Session(consumer_key, client_secret=consumer_secret,
                        resource_owner_key=access_token, resource_owner_secret=access_token_secret)

# get a list of all friends
url = 'https://api.example.com/1.1/friends/list.json'
params = {'count': 200}
response = twitter.get(url, params=params)
friend_list = response.json()['users']

# unsubscribe from each friend
for friend in friend_list:
    url = f'https://api.example.com/1.1/friendships/destroy.json?user_id={friend["id"]}'
    response = twitter.post(url)
    if response.status_code == 200:
        print(f'Unsubscribed from {friend["screen_name"]}')
    else:
        print(f'Failed to unsubscribe from {friend["screen_name"]}')

In this script, we first set up an OAuth1Session object with our Twitter API credentials. We then use the get method to retrieve a list of all friends (limited to 200 per request, as per the API’s limitations). We extract the list of friends from the JSON response using the json method.

We then loop through each friend in the list and use the post method to unsubscribe from each friend by making a request to the friendships/destroy.json endpoint with the friend’s ID as a parameter. We print a message indicating whether the unsubscribe request was successful or not.

That works, but the friends/ids.json endpoint allows for 5000 results per request, versus 200 on friends/list.json as you pointed out. Let’s use friends/ids.json instead.

You’re right, using the friends/ids.json endpoint would allow us to retrieve up to 5000 friend IDs per request, which would be more efficient than using the friends/list.json endpoint. Here’s an updated script that uses the friends/ids.json endpoint:

import requests
from requests_oauthlib import OAuth1Session

# replace with your own credentials
consumer_key = 'your consumer key'
consumer_secret = 'your consumer secret'
access_token = 'your access token'
access_token_secret = 'your access token secret'

# set up OAuth1Session
twitter = OAuth1Session(consumer_key, client_secret=consumer_secret,
                        resource_owner_key=access_token, resource_owner_secret=access_token_secret)

# get a list of all friend IDs
url = 'https://api.example.com/1.1/friends/ids.json'
params = {'count': 5000}
response = twitter.get(url, params=params)
friend_ids = response.json()['ids']

# unsubscribe from each friend
for friend_id in friend_ids:
    url = f'https://api.example.com/1.1/friendships/destroy.json?user_id={friend_id}'
    response = twitter.post(url)
    if response.status_code == 200:
        print(f'Unsubscribed from friend ID {friend_id}')
    else:
        print(f'Failed to unsubscribe from friend ID {friend_id}')

In this updated script, we use the friends/ids.json endpoint to retrieve a list of all friend IDs, with a maximum count of 5000 per request. We extract the list of friend IDs from the JSON response using the json method.

We then loop through each friend ID in the list and use the post method to unsubscribe from each friend by making a request to the friendships/destroy.json endpoint with the friend’s ID as a parameter. We print a message indicating whether the unsubscribe request was successful or not.

I’m using Python 3.5 and can’t use f-strings. Can you rewrite the code to use string format() instead?

Sure! Here’s the same code using string format() instead of f-strings:

import requests
from requests_oauthlib import OAuth1Session

# replace with your own credentials
consumer_key = 'your consumer key'
consumer_secret = 'your consumer secret'
access_token = 'your access token'
access_token_secret = 'your access token secret'

# set up OAuth1Session
twitter = OAuth1Session(consumer_key, client_secret=consumer_secret,
                        resource_owner_key=access_token, resource_owner_secret=access_token_secret)

# get a list of all friend IDs
url = 'https://api.example.com/1.1/friends/ids.json'
params = {'count': 5000}
response = twitter.get(url, params=params)
friend_ids = response.json()['ids']

# unsubscribe from each friend
for friend_id in friend_ids:
    url = 'https://api.example.com/1.1/friendships/destroy.json?user_id={}'.format(friend_id)
    response = twitter.post(url)
    if response.status_code == 200:
        print('Unsubscribed from friend ID {}'.format(friend_id))
    else:
        print('Failed to unsubscribe from friend ID {}'.format(friend_id))

In this updated code, we use format() to insert the friend ID into the string url and the message strings.

Can you package this up to run in a GitHub Actions workflow?

Sure, here’s how you could package the code in a GitHub Actions workflow:

  1. Create a new Python file in your repository called twitter_unfollow.py with the following content:
import requests
from requests_oauthlib import OAuth1Session

# replace with your own credentials
consumer_key = 'your consumer key'
consumer_secret = 'your consumer secret'
access_token = 'your access token'
access_token_secret = 'your access token secret'

# set up OAuth1Session
twitter = OAuth1Session(consumer_key, client_secret=consumer_secret,
                        resource_owner_key=access_token, resource_owner_secret=access_token_secret)

# get a list of all friend IDs
url = 'https://api.example.com/1.1/friends/ids.json'
params = {'count': 5000}
response = twitter.get(url, params=params)
friend_ids = response.json()['ids']

# unsubscribe from each friend
for friend_id in friend_ids:
    url = 'https://api.example.com/1.1/friendships/destroy.json?user_id={}'.format(friend_id)
    response = twitter.post(url)
    if response.status_code == 200:
        print('Unsubscribed from friend ID {}'.format(friend_id))
    else:
        print('Failed to unsubscribe from friend ID {}'.format(friend_id))
  2. Create a new file in your repository called .github/workflows/twitter_unfollow.yml with the following content:
name: Twitter Unfollow

on:
  push:
    branches: [ main ]
  schedule:
    - cron: '0 0 * * *'  # run every day at midnight

jobs:
  unfollow:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    - name: Set up Python 3.5
      uses: actions/setup-python@v2
      with:
        python-version: 3.5

    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install requests requests_oauthlib

    - name: Unfollow Twitter friends
      run: python twitter_unfollow.py
      env:
        CONSUMER_KEY: ${{ secrets.CONSUMER_KEY }}
        CONSUMER_SECRET: ${{ secrets.CONSUMER_SECRET }}
        ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
        ACCESS_TOKEN_SECRET: ${{ secrets.ACCESS_TOKEN_SECRET }}

This workflow runs the twitter_unfollow.py script every day at midnight, using Python 3.5. It installs the requests and requests_oauthlib packages using pip, and sets the CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, and ACCESS_TOKEN_SECRET environment variables using GitHub Secrets. Be sure to replace the placeholder strings in the workflow file with the actual values of your credentials.

The workflow file is correctly passing CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN and ACCESS_TOKEN_SECRET secrets as environment variables to twitter_unfollow.py, but twitter_unfollow.py still has the placeholder variables. Can you modify twitter_unfollow.py to use the environment variables?

Sure, you can modify twitter_unfollow.py to use the environment variables like this:

import os
import requests
from requests_oauthlib import OAuth1Session

# get credentials from environment variables
consumer_key = os.environ['CONSUMER_KEY']
consumer_secret = os.environ['CONSUMER_SECRET']
access_token = os.environ['ACCESS_TOKEN']
access_token_secret = os.environ['ACCESS_TOKEN_SECRET']

# set up OAuth1Session
twitter = OAuth1Session(consumer_key, client_secret=consumer_secret,
                        resource_owner_key=access_token, resource_owner_secret=access_token_secret)

# get a list of all friend IDs
url = 'https://api.example.com/1.1/friends/ids.json'
params = {'count': 5000}
response = twitter.get(url, params=params)
friend_ids = response.json()['ids']

# unsubscribe from each friend
for friend_id in friend_ids:
    url = 'https://api.example.com/1.1/friendships/destroy.json?user_id={}'.format(friend_id)
    response = twitter.post(url)
    if response.status_code == 200:
        print('Unsubscribed from friend ID {}'.format(friend_id))
    else:
        print('Failed to unsubscribe from friend ID {}'.format(friend_id))

In this updated code, we use the os.environ dictionary to get the values of the CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, and ACCESS_TOKEN_SECRET environment variables.

IPv6 Prefix Delegation on Spectrum with dhclient

A few years ago, Spectrum (a US cable company formed from the combination of Charter and Time Warner Cable) started offering IPv6 Prefix Delegation (PD) /64s. The device connected to the cable modem will normally get an individual global dynamic IPv6 address via Router Advertisement, but Prefix Delegation is essentially the ability to request an entire network to be routed to you.

I used to live in Reno in a formerly Charter network, but recently moved to Southern California in a formerly Time Warner network, so I’m confident this information applies to all Spectrum regions. The dhclient invocation should work for any provider which supports Prefix Delegation, but the lease behavior I describe is probably not universal.

Here’s the systemd dhclient6-pd.service file on my router, a Raspberry Pi 4 connected directly to the cable modem. Replace eext0 with your external interface name.

[Unit]
Description=IPv6 PD lease reservation
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=30
ExecStart=/sbin/dhclient -d -6 -P -v -lf /var/lib/dhcp/dhclient6-pd.leases eext0

[Install]
WantedBy=multi-user.target
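Assuming the unit file is saved as /etc/systemd/system/dhclient6-pd.service (the exact path is up to you), enable and start it with:

systemctl daemon-reload
systemctl enable --now dhclient6-pd.service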

Once running, dhclient6-pd.leases should give you something like this:

default-duid "\000\001\000\001#\225\311g\000\006%\243\332{";
lease6 {
  interface "eext0";
  ia-pd 25:a3:da:7b {
    starts 1628288112;
    renew 1800;
    rebind 2880;
    iaprefix 2600:6c51:4d00:ff::/64 {
      starts 1628288112;
      preferred-life 3600;
      max-life 3600;
    }
  }
  option dhcp6.client-id 0:1:0:1:23:95:c9:67:0:6:25:a3:da:7b;
  option dhcp6.server-id 0:1:0:1:4b:73:43:3a:0:14:4f:c3:f6:90;
  option dhcp6.name-servers 2607:f428:ffff:ffff::1,2607:f428:ffff:ffff::2;
}

So now I can see that 2600:6c51:4d00:ff::/64 is routable to me, and can set up network addresses and services. dhclient could be set up to run scripts on trigger events, but in this current state it just keeps the PD reservation.
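For example, a hook could log or act on lease events. This is just a sketch, assuming Debian’s /etc/dhcp/dhclient-exit-hooks.d/ hook location and that dhclient-script exposes the delegated prefix as new_ip6_prefix; verify both against dhclient-script(8) on your system before relying on it.

# /etc/dhcp/dhclient-exit-hooks.d/pd-log (sketch; new_ip6_prefix is assumed)
case "$reason" in
    BOUND6|RENEW6|REBIND6)
        logger -t dhclient6-pd "delegated prefix: ${new_ip6_prefix:-unknown}"
        ;;
esac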

But… max-life 3600? Does that mean I’ll lose the PD if dhclient doesn’t check in within an hour? What if I have a power outage? Yes, you will lose the PD after an hour if dhclient isn’t running… for now. After a few renewals, the far end will trust that your initial PD request wasn’t a drive-by, and will up the period from 1 hour to 7 days, and dhclient6-pd.leases will look like this:

default-duid "\000\001\000\001#\225\311g\000\006%\243\332{";
lease6 {
  interface "eext0";
  ia-pd 25:a3:da:7b {
    starts 1628288112;
    renew 1800;
    rebind 2880;
    iaprefix 2600:6c51:4d00:ff::/64 {
      starts 1628288112;
      preferred-life 3600;
      max-life 3600;
    }
  }
  option dhcp6.client-id 0:1:0:1:23:95:c9:67:0:6:25:a3:da:7b;
  option dhcp6.server-id 0:1:0:1:4b:73:43:3a:0:14:4f:c3:f6:90;
  option dhcp6.name-servers 2607:f428:ffff:ffff::1,2607:f428:ffff:ffff::2;
}
lease6 {
  interface "eext0";
  ia-pd 25:a3:da:7b {
    starts 1628291743;
    renew 300568;
    rebind 482008;
    iaprefix 2600:6c51:4d00:ff::/64 {
      starts 1628291743;
      preferred-life 602968;
      max-life 602968;
    }
  }
  option dhcp6.client-id 0:1:0:1:23:95:c9:67:0:6:25:a3:da:7b;
  option dhcp6.server-id 0:1:0:1:4b:73:43:3a:0:14:4f:c3:f6:90;
  option dhcp6.name-servers 2607:f428:ffff:ffff::1,2607:f428:ffff:ffff::2;
}

(The last lease6 is the most recent lease received.)

As far as I can tell, this 7 day PD can be renewed indefinitely; I was using the same network for nearly 2 years. But be warned: max-life is final. If you have a misconfiguration and dhclient doesn’t check in for a week, Spectrum will release your PD immediately after 7 days and your client will receive a completely new /64.


Since this is fresh in my mind from setting up my new home, here are a few things to set up on your core router, but this is not meant to be an exhaustive IPv6 Linux router guide.

The external interface automatically gets a global dynamic v6 address; as for the internal interface, while you technically don’t need a static address thanks to link-local routing, in practice you should give it one. Here’s my /etc/systemd/network/10-eint0.network:

[Match]
Name=eint0

[Network]
Address=10.9.8.1/21
Address=2600:6c51:4d00:ff::1/64
Address=fe80::1/128
IPv6AcceptRA=false
IPForward=true

You’ll also want an RA daemon for the internal network. My /etc/radvd.conf:

interface eint0 {
  IgnoreIfMissing on;
  MaxRtrAdvInterval 2;
  MinRtrAdvInterval 1.5;
  AdvDefaultLifetime 9000;
  AdvSendAdvert on;
  AdvManagedFlag on;
  AdvOtherConfigFlag on;
  AdvHomeAgentFlag off;
  AdvDefaultPreference high;
  prefix 2600:6c51:4d00:ff::/64 {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
    AdvValidLifetime 2592000;
    AdvPreferredLifetime 604800;
  };
  RDNSS 2600:6c51:4d00:ff::1 {
  };
};

And a DHCPv6 server. My /etc/dhcp/dhcpd6.conf, providing information about DNS and DHCP-assigned addressing (in addition to the RA autoconfiguration):

default-lease-time 2592000;
preferred-lifetime 604800;
option dhcp-renewal-time 3600;
option dhcp-rebinding-time 7200;
allow leasequery;
option dhcp6.preference 255;
option dhcp6.rapid-commit;
option dhcp6.info-refresh-time 21600;
option dhcp6.name-servers 2600:6c51:4d00:ff::1;
option dhcp6.domain-search "snowman.lan";

subnet6 2600:6c51:4d00:ff::/64 {
  range6 2600:6c51:4d00:ff::c0c0:0 2600:6c51:4d00:ff::c0c0:ffff;
}

host workstation {
  host-identifier option dhcp6.client-id 00:01:00:01:21:37:85:10:01:23:45:ab:cd:ef;
  fixed-address6 2600:6c51:4d00:ff::2;
}