Starlink: Outage Data End of March Update

Another month has passed, in which I continued to use Starlink as my primary internet service, except when I needed to make FaceTime or Zoom calls. From a subjective standpoint, I can tell you this: March was a far more frustrating experience than February.

Figure 1a: Disconnect Time Histogram, March 2-8, 2021
Figure 1b: Timeseries Connection data, March 2-8, 2021

The month started much like February ended. The graphs above are two views of the history data from Dishy. Figure 1a is a histogram of how often a disconnection (red is obstruction, blue is beta downtime, green is no satellites) or connection (yellow) of a given length happened. Fifteen thousand beta downtime disconnects of one second or less in that week. About three periods of connection lasting 30 minutes or longer. Figure 1b is the “timeseries” chart: the color of each square is the status of the connection at that second: red/blue/green are disconnections as before, black is when I don’t have data (either my collection script missed a run, or the dish was rebooting). White in this figure is just any random second that the connection was live. Yellow is only used if that second was part of a span of 30 minutes or longer where there was no disconnection lasting longer than two seconds. There are 1200 seconds (=20 minutes) per line; a day is 72 lines tall; the chart covers seven days, roughly midnight to midnight.
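For those curious how those yellow spans are identified, the rule above can be sketched in a few lines of Python. This is a simplified illustration of the logic, not my actual rendering code; the list of per-second booleans is a stand-in for Dishy’s history data:

```python
def find_clear_spans(up, min_span=30 * 60, tolerable=2):
    """`up` is one boolean per second (True = connected). Return
    (start, end) index pairs of spans at least min_span seconds long
    whose only interruptions last `tolerable` seconds or fewer."""
    spans = []
    start = 0
    down_run = 0
    # Trailing False sentinels force the final span to be flushed.
    for i, connected in enumerate(list(up) + [False] * (tolerable + 1)):
        if connected:
            down_run = 0
            continue
        down_run += 1
        if down_run == tolerable + 1:
            end = i - tolerable  # the span ended where this outage began
            if end - start >= min_span:
                spans.append((start, end))
        if down_run > tolerable:
            start = i + 1  # keep sliding the start until we reconnect
    return spans
```

With min_span left at 30 minutes, any (start, end) pair this returns is a stretch that would be painted yellow.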

Figure 2a: Disconnect Time Histogram: March 9-15, 2021
Figure 2b: Timeseries Connection data, March 9-15, 2021

Things got suddenly much worse in the second week of the month. In Figure 2b, that darker blue/red band above the thin black line is March 10. This is the first day we had rain since installing our Starlink service. It was far from our first weather event, but before this it had been well below freezing for two months, so all precipitation was snow. We marveled at how little snow affected Dishy. Unless it was the wet, heavy stuff that clung to the bare tree limbs, the connection hardly noticed. Rain, however, seems to be Dishy’s nemesis.

On March 10, I reached out to support, because while our obstructions increased somewhat during the rain, our beta downtime increased far more. Their response was puzzling. I asked specifically about beta downtime, and their response was, “we have detected obstructions … [in] basically your entire field of view.” No mention of beta downtime at all.

The only way I’ve been able to explain the support team’s response is what I shared with the Starlink subreddit: because Starlink is currently allowing the dish to use a lower horizon than they expect to use when the service leaves beta, they are marking obstructions that occur below the future horizon as beta downtime.

I already know I need to take care of some obstructions. It’s just now starting to get warm enough to plan that. Having a reason to believe that the super noisy beta downtime I’ve experienced might also go away with fewer obstructions, and/or a higher post-beta horizon, gives me reason to believe the effort will be worth it.

Figure 3a: Disconnect Time Histogram: March 16-22, 2021
Figure 3b: Timeseries Connection data, March 16-22, 2021

For the last two weeks, service has suddenly been much more frustrating. In February and the first half of March, browsing and streaming would very occasionally hiccup for a second. In the past two weeks, it has been a somewhat frequent occurrence that browsing and streaming just stop for several seconds. I haven’t looked at this view of the data until now, but it’s nice to see that it backs up my subjective experience. Note the change in Figure 3b from lots of little red and blue dots to lots of red and blue bars.

I have changed nothing about my setup. Dishy is in exactly the same place I put it when I first installed. I keep Dishy on 24/7, with its own router plugged in. And, this isn’t a change in the scenery around Dishy either. One of the trees to the east side of Dishy finally put out buds yesterday. The rest are still bare.

What did change was Dishy’s firmware. On the morning of March 21, Dishy rebooted and installed firmware d61f015c. That’s the lower blue band spanning the whole image except for the small green strip. The longer red/blue bars do seem to start the day before. The black band above them is roughly the start of March 20, but my notes say that Dishy was still running a8a9195a after that reboot. That firmware had been installed on March 12.

This is a beta program. It is expected that Starlink will make changes, and that not all of those changes will be obvious improvements. If anyone at Starlink is reading this, please note that that change was noticed, and it has not been an improvement.

Figure 4a: Disconnect Time Histogram: March 23-29, 2021
Figure 4b: Timeseries Connection data, March 23-29, 2021

Last week was not an especially great one for Starlink use. Figure 4b starts off with another new firmware: 5f1ea9d9. It did not improve my connection stats.

The histogram for March 23-29 (Figure 4a) shows a specific worsening trend that appeared in the previous week: fifteen-second obstruction outages. Interestingly, fifteen seconds, and fifteen seconds only, saw a large jump. Obstructions lasting 13, 14, 16, or 17 seconds saw no change. What’s up with fifteen seconds, specifically?

The dish installed firmware b44f4294 on March 31. The stats look the same as the past two weeks to me: a large spike of fifteen-second obstruction outages, and general wide bands of obstruction and beta downtime. I’ll save its charts for next month, when it will have a full week to fill the histogram.

I know someone on the Starlink subreddit is going to stamp their foot and complain about yet another person whining about their obstructions. I know I need to get Dishy up higher, and get some trees down lower. I still think it’s interesting that without changing anything, my connection statistics changed so drastically. Take it as advice to refresh your own obstruction view if your connection quality suddenly changes.

Pandemic Coping Strategy: Give Generously

Hello, BeerRiot Blog readers! I’m Amanda, Bryan’s wife. Bryan has offered to let me guest-post on his blog and share some things that are more aligned to my interests than his. Like Bryan, I have varied interests and hobbies, and among them is personal finance.

Recently, Bryan has posted on social media about donations we’ve made to causes that are important to us. This prompted me to share some information about our approach to charitable giving during the past year or so. We recognize that we are in an extremely privileged position to even be able to discuss this.

Bryan and I had been preparing for years to eventually take some time away from paid work. One part of that plan was a way to continue charitable giving when we weren’t employed. We felt that if we had to stop supporting charitable causes to make our post-employment life financially feasible, then we didn’t actually have the resources to leave our jobs and still have the life we wanted. Fortunately, there was a way for us to prepare for that: a donor-advised fund.

A donor-advised fund (DAF) allows an individual or organization to make charitable contributions to a fund and then recommend grants from the fund to specific charities over time.

We set up our DAF, the Zoellner-Fink Family Fund, in the fall of 2019. Our financial philosophy prioritizes low overhead costs and simplicity, so we focused our research on DAFs affiliated with Vanguard and Fidelity, where we already have accounts. Fees and structures were comparable, but we chose Fidelity Charitable because of their minimum grant amount of just $50. We wanted the option to recommend smaller grants for things like a child’s school fundraiser or a memorial gift. It was easy to set up our account online, choose a name, set an asset allocation, and fund the account with appreciated stock from Bryan’s previous employers. We have not made additional contributions to the fund since we set it up, but it can be added to at any time.

It’s important to know that there are things you can’t do with a DAF:

  • You can’t give directly to individuals, like in a GoFundMe campaign.
  • You can’t make grants from which you ultimately receive a benefit. For example, you can’t use a DAF to buy yourself tickets to attend a fundraising event.
  • You can’t contribute to some international causes.
  • You can’t make political/lobbying contributions.
  • You can’t take the money back.

Fortunately, those restrictions haven’t limited our giving at all!

We began making grant recommendations from our DAF in February 2020 by switching what we had previously given through monthly credit card charges to recurring annual grant recommendations. We also recommended grants in response to donation requests from organizations we had supported in the past and wanted to continue to support.

By March 2020, the whole world was feeling the impacts of COVID-19, and we were voluntarily jobless and transient! We were grateful to be safe and healthy and to have the resources to stay that way, but it was clear that so many people were suffering. Then George Floyd was murdered in Minneapolis, blocks from where my brother used to live, and we learned of too many people of color who had suffered similar fates. The presidential campaign staggered on and left us despairing. It felt like the world was spinning out of control, and we were powerless.

So we started making grant recommendations. Even if we needed to stay isolated, we could still put money into the hands of organizations doing important work.

  • We made extra donations to charities we had previously supported, so they could continue or ramp up their work amidst uncertainty.
  • We talked with friends and family who are directly connected to specific communities in need and got recommendations for more charities to support.
  • We researched and donated to charities that work to uplift the voices and respond to the needs of communities that deserve to be heard and that are disproportionately impacted by the pandemic.

In the past thirteen months, Fidelity Charitable has disbursed nearly $15,000 in grants on our behalf, with no impact on our personal finances. Because markets have gone up overall since we set up the DAF, our fund balance is still about what it was when we opened the account. “Past us” gave a wonderful gift to “future us”: the ability to be generous.

In the before times, we’d targeted giving about $5000/year, and that felt like a lot. After this year, it’s clear that we can give more without worry of depleting our fund. Instead of impulse-shopping, we’ve been impulse donating.

  • Local food pantry shelves are depleted? Let’s give them some money!
  • The beloved, inspiring RBG dies? Honor her memory with a grant to Planned Parenthood.
  • Marketplace and Make Me Smart podcasts keep us grounded during a terrifying mid-pandemic cross-country drive? Show our appreciation with a grant to American Public Media.
  • People try to erase the experiences of non-cis/het people in proposed Nebraska public school health curriculum? ACLU Nebraska and HRC Foundation get some money!

It has been a bright spot in this tumultuous year that we can continue to support charities that do such important work, both in the pandemic and after. In future years, perhaps we’ll have the bandwidth to plan in advance where we will donate and do more research to ensure donations benefit organizations that are as effective as possible. For now, however, giving provides some comfort at a time when we need it.

Disclaimer: We are not financial professionals, and even if we were, we don’t work for you. This is merely a recounting of our experiences and is not intended as advice.

Starlink: Outage Data End of February Update

I’ve continued to analyze and plot more information about Starlink outages. I’ve also collected three more weeks of nearly continuous data, so it’s time to review how quality of service has changed.

Let’s start by replotting the data from my earlier post, using my latest code, so it’s easier to see changes.

Figure 1: Histogram of different outage lengths. Data from February 6 post replotted, covering about 66.5 hours scattered across January 31, February 1, 2, 4, and 5.

As before, the histogram in Figure 1 shows how often an outage of each length occurred. The difference between this one and the one from the earlier post is that instead of breaking up the columns by days, they’re separated by cause. Where last time we only knew that there were over 700 one-second outages in this data, we can now see that that was about 300 obstructions and 500 beta downtimes (my tool also counts a few more outages than the tool I used last time).

Red bars count outages blamed on obstructions, blue are beta downtime, and green are lack of satellites. At the left side, the first set of bars counts the number of times an outage lasting only one second was observed. The next bar to the right counts outages lasting two seconds. Next three seconds, and so on. In the middle of the graph, at the point labeled “1m”, the step between the bars switches to minutes (i.e. the next bar after 1m is outages lasting two minutes). On the right half of the graph, outages with durations between two steps are counted as the lower step (e.g. 4 minutes, 45 seconds is counted as 4 minutes).
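In code, that bucketing rule works out to something like the following sketch (the function name and string labels are mine; durations are capped at 60 minutes, matching how spans over an hour are recorded):

```python
def histogram_bucket(seconds):
    """Map a duration in seconds to its histogram bar: one-second
    buckets under a minute, then whole minutes (rounded down),
    capped at 60 minutes."""
    if seconds < 60:
        return f"{seconds}s"
    return f"{min(seconds // 60, 60)}m"
```

So an outage of 4 minutes, 45 seconds lands in the “4m” bar, as described above.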

I’m going to add one more bar to the graph. The one thing I’ve had trouble using my Starlink connection for is video calling (Zoom, FaceTime, etc.). My connection drops for too long too often to make a long call comfortable. So, the question is, how long am I usually connected?

Figure 2: Adding connectivity lengths (yellow) to the histogram.

In Figure 2, the yellow bars count the number of times that connectivity lasted for the given duration. In the ideal world of zero outages, this looks like a single bar of height 1 at the 60m mark (because spans over 60m are recorded as 60m). This graph doesn’t show the ideal case. The most common connected duration is 2 minutes, occurring around 300 times. The longest connected duration is about 17 minutes, which occurred once. (Click to see the full-resolution image.)

One 17-minute span of connectivity across four days doesn’t sound great. A FaceTime call that I make every week lasts at least that long, and often closer to 30 minutes. So, multiple spans closer to that, and preferably longer, are what I’m looking for.

One thing that’s a little hard with this analysis is making sure it’s not flagging disconnections that I wouldn’t notice. So, a quick thing I’ve built in is a setting to ignore disconnects that last less than a configurable number of seconds. As a generous guess, I’ve decided to tell it that interruptions of two seconds or less are tolerable.

Figure 3: Ignoring outages lasting two seconds or less when calculating duration of connectivity.

Figure 3 has that modification. The number of one- and two-minute periods of connectivity has drastically decreased. Those short spans were separated from each other, or from longer spans, only by outages short enough to ignore. Tacking them together gives us more connections lasting ten minutes or more. In fact, there are now five durations of connection lasting over 20 minutes.
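The merging works roughly like this sketch (a simplified stand-in for my viewer’s logic; note that it deliberately leaves outages at the very edges of the data alone, since we can’t know what preceded or followed them):

```python
def connected_durations(up, tolerable=2):
    """Lengths, in seconds, of connected spans, treating any interior
    outage of `tolerable` seconds or less as if the connection held.
    `up` is one boolean per second (True = connected)."""
    smoothed = list(up)
    i, n = 0, len(smoothed)
    while i < n:
        if smoothed[i]:
            i += 1
            continue
        j = i
        while j < n and not smoothed[j]:
            j += 1
        # Flip short outages back to "connected", but only interior ones.
        if j - i <= tolerable and i > 0 and j < n:
            smoothed[i:j] = [True] * (j - i)
        i = j
    durations, run = [], 0
    for connected in smoothed + [False]:  # sentinel flushes the last run
        if connected:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    return durations
```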

Something else that’s hard, is making sure that “outage” really means “outage”. These statistics are already following Starlink’s own app in only labeling a second as an outage if all pings were lost during that second (popPingDropRate = 1). Some redditors have suggested that because pings are such low priority, high throughput may cause all pings to be lost. So what looks like an outage could be exactly the opposite. To check this, I also added configuration to ignore an outage if the downlink or uplink speed recorded for that second is above a given value.
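The filter itself is simple. Here is a sketch of the test for one second of data; the field names follow the dish’s history stats as I understand them, so treat them as assumptions:

```python
MBPS = 1_048_576  # 1 Mbps expressed in bits per second

def real_outage(sample, min_throughput_bps=MBPS):
    """Count a second as a genuine outage only if all pings were lost
    AND neither link moved meaningful traffic that second.
    `sample` is a dict standing in for one second of dish stats."""
    if sample["popPingDropRate"] < 1:
        return False  # not a full-second ping loss to begin with
    busy = (sample["downlinkThroughputBps"] >= min_throughput_bps
            or sample["uplinkThroughputBps"] >= min_throughput_bps)
    return not busy
```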

Figure 4: Ignoring “outages” where uplink or downlink throughput was at least 1Mbps

In Figure 4, seconds where the downlink or uplink speed was recorded as 1Mbps (1,048,576 bits per second) or higher are not treated as breaks in connection. It didn’t increase the number of connections lasting longer than 20 minutes. That may be because, of the 13,187 outage seconds in this dataset, only 114 had downlink or uplink speeds of 1Mbps or more.

Figure 5: Display settings in use.

That was the state of the connection in the first week of February. Let’s apply this same analysis to the three weeks since.

Figure 6: Data covering just before midnight February 8 through just before midnight February 15.

Figure 6: February 9-15. This is seven days, instead of four, so we should expect counts to be a little higher overall anyway. But, there are many connected spans counted over 20 minutes, and finally some over 30 minutes. There are even a couple over 50 minutes long! This looks like decent improvement.

Figure 7: Data covering just before midnight February 15 through just after midnight February 23.

Figure 7: February 16-22. This looks pretty similar. Multiple spans over 20 minutes, some over 30. This time there are even a couple over 60 minutes long. Very short outages are also up a bit for both obstructions and beta downtime.

Figure 8: Data covering just before midnight February 22 through just after midnight March 2.

Figure 8: February 23-March 1. This still looks like a pretty similar breakdown to me. Unfortunately, we lost the over-60-minute connections, but we still have some over-30-minute durations. All short outage categories are also up, though obstructions overtook beta downtime for 4-10 second outages. A snowstorm made my tree branches thicker.

While short outages seem to have increased slightly, it does seem that the system has improved according to the connected-time measurement. I was hopeful that the Feb 9-15 improvement may have been because of satellites launched on Feb 4, and thus there might have been more improvements from the Feb 15 launch seen in the past week. There were also a couple of firmware updates I noticed on February 15 (7db91a39-…) and 20 (a95d0312-…), so maybe those shifted these metrics as well.

Subjectively, things seem about the same. Streaming and browsing work great, even if we have become a little more sensitive to the very occasional second or two that a coincidental outage delays a page from loading. Video calling still pauses often enough that we switch back to our fixed wireless connection if we expect the call to last more than a couple of minutes.

Figure 9: Timeseries view of outages and connectivity February 23 through March 1. Each 2×2-pixel rectangle represents one second.

There is still some way to go. Figure 9 is what those very few over-30-minute connections per week look like. In this “timeseries” view, each pixel represents one second. One line, from left to right, is 20 minutes. Where the line is red, blue, or green, all pings were lost during that second. Where the line is yellow, that second is part of a 30-minute or longer span of connectivity that has no interruptions longer than 2 seconds. White are other periods of connectivity that lasted less than 30 minutes. Dark grey are times I missed downloading data, because I had shut off the house power to rewire my workshop.

I already know that I need to move my dish to remove obstructions. Bands of more densely red streaks correlate with snowstorms moving through (e.g. February 28). Dishy melts what falls on it, but it can’t melt what has fallen on the tree branches that are in the edges of Dishy’s view. Once the several feet of snow on the ground around my temporary Dishy tower begins to disappear, I’ll be working on a taller mount.

Figure 10: The same timeseries as Figure 9, but with all obstruction outages removed.

From this data, reducing my obstructions to zero would remove about half of my outages. I see just as much beta downtime as obstructions, usually more, if it’s not actively snowing. Ignoring all obstruction outages in my data, while considerably expanding the number of long clear connected periods I can expect, still reveals many stretches where clear connectivity doesn’t last long (Figure 10).

Starlink says beta downtime “will occur from time to time as the network matures.” That doesn’t sound like every couple of minutes for just a few seconds to me, so I’ve tried a number of things to figure out whether all of this beta downtime is mislabeled. The periodic patterns I saw in the obstruction data in my raster-scanning post aren’t as visually obvious in the beta downtime data. Segments of beta downtime are sometimes (about 20% in the last week) immediately preceded or followed by obstruction downtime. Reclassifying those segments as obstructions, and ignoring them does make an appreciable difference in the amount and length of clear connectivity. But is ignoring them correct? Some redditors report frequent beta downtime even with zero obstructions.

For now, I’ll continue to enjoy mostly-fast, mostly-up, decently-priced service, and watch the effects of the next satellite launch and the spring thaw.

If you’d like to play with this data and the viewer yourself, I’ve published it as the 2.0 release on the github repo.

Starlink Raster Scan?

The Starlink app, whether on a mobile device, or in a web browser, will tell you in which direction the dish regularly finds something blocking its view of the satellites. I’ve had it in my head for a while that it should be able to do more than this. I think it should be able to give you a silhouette of any obstructions.

Figure 0: A satellite dish records a strip of successful/unsuccessful satellite connection moments as the satellite passes through the sky, sometimes behind obstructions.

As a satellite passes through the sky above the dish, the “beam” connecting the two follows it, sweeping across the scene (Figure 0). The dish repeatedly pings the satellite as this happens, and records how many pings succeeded in each second. When the view is clear, all, or nearly all, pings succeed. When there’s something in the way all, or nearly all, pings fail. In theory, if the dish stays connected to the same satellite for the whole pass, we end up with a “scan line” N samples (= N seconds) long, that records a no-or-low ping drop rate when nothing is in the way, and a high-or-total ping drop rate when something is in the way.

One line isn’t going to paint much of a picture. But, the satellite is going to pass overhead every 91 to 108 minutes. The earth also rotates while this happens, so on the next pass, the satellite will be either lower in the western sky, or higher in the eastern sky. On that pass, we’ll get a scan of a different line.

But 91 minutes is a long time for the earth to rotate. That’s farther than one time zone’s width, nearly 23º of longitude. Since the beam is tight, we’ll have a wide band between the two scans in which we know nothing. However, each satellite shares an orbit with 20 or more other satellites. If they’re evenly spaced, that means the next satellite should start its pass only about 4 minutes after the previous one. That’s conveniently only about 1º of longitude. If the dish reconnects to the next satellite in an orbital reliably at a regular interval, we should get 20-ish scan lines before the first satellite comes around again.[1]

But are 1º longitude scanlines enough? Before we get into the math, let’s look at some data. I’ve created a few simple scripts to download, aggregate, and render the data that Starlink’s dish collects. With over 81 hours of data in hand – 293,183 samples – I can make Safari complain about how much memory my viewer is using … er, I mean I can poke around to see what Dishy sees.

Figure 1: 81 hours of obstruction data, represented as one 4×4-pixel square per second, 600 seconds per line, white = no pings dropped via obstruction, dark red = all pings dropped via obstruction

In Figure 1, I’ve plotted ping drops attributed to obstructions at one second per 4×4-pixel rectangle. Solid red is 100% drop, and the lighter the shade the less was dropped, with white (or clear/black for those viewing with different backgrounds) being no drops. There are 600 samples, or 10 minutes, per line. It doesn’t look like much beyond noise, so let’s play around.
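Rendering a plot like this is mostly just cutting the flat list of per-second drop rates into rows, and mapping each rate to a shade. A minimal sketch (the function names are mine, not from my scripts):

```python
def raster_rows(drop_rates, row_len):
    """Cut a flat list of per-second ping-drop rates into rows of
    row_len samples, for plotting one second per rectangle."""
    return [drop_rates[i:i + row_len]
            for i in range(0, len(drop_rates), row_len)]

def shade(rate):
    """Map a drop rate in [0, 1] to an RGB red shade: white for no
    drops, solid red for a full second of lost pings."""
    level = round(255 * (1 - rate))
    return (255, level, level)
```

Changing row_len is essentially all the later figures do: 600 here, 240 in Figure 3, then 330 and 332 further on.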

Figure 2: signal-to-noise ratio data at the same scale, white = full signal (9), dark grey = no signal (0)

Figure 2 is the signal-to-noise ratio data instead. White/clear means signal was full (9), solid grey means signal was absent (0), with gradations in between. Still mostly noise, except for the obvious column effect. Those columns are 15 samples wide. So something happens every 15 seconds. It’s not clear what – it could just be an artifact of their sample recording strategy – but that’s as good of a place to start as any for a potential sync frequency.[2]

Figure 3: obstructions plotted at 240 samples per row

So let’s drop down to our guesstimated 4 minutes between satellite frequency. With 240 seconds per row (Figure 3) … mostly everything still looks like noise. Let’s start by guessing that the period between satellites is longer.

Figure 4: obstruction data at 330 samples per row

I clicked through one-second increments for quite a while, watching noise roll by. Then something started to coalesce. At 330 seconds (5.5 minutes) per row (Figure 4), I see two patterns. One is four wide, scattered red stripes running from the upper right to the lower left. The other is many small red stripes crossing the wide stripes at right angles. Given that this persists over the whole time range, I don’t think it’s just me seeing form in randomness.

Figure 5: obstruction data plotted at 332 samples per row

Advancing to 332 seconds per stripe (Figure 5) causes the small red stripes to pull together into small vertical stacks. Especially in the later data, some of these blobs seem to fill out quite a bit, encouraging me to see … something.

But here I’m fairly stuck. Doubling or halving the stripe size causes the blobs to reform into other blobs, as expected given their periodicity. But nothing pops out as obviously, “That’s a tree!” I experimented with viewing SNR data instead. It does “fill in” a bit more, but still doesn’t resolve into recognizable shapes.

It’s time to turn to math. I think there are two important questions:

  1. How much sky is covered in a second? That is, what span does the width of a pixel cover?
  2. How much sky is skipped between satellite passes? That is, how far apart should two pixels be vertically?
Figure 6: earth (green circle) with high and low starlink orbits (blue circles)

If I draw the situation to scale (Figure 6), with the diameter of the earth being 12742km, and the satellites being 340 to 1150km above that – giving them orbital diameters of 13082 to 13892km, there’s really not enough room to draw in my geometry! So I’ll have to zoom in.

Figure 7: exaggerated triangles representing the math to compute the width of a sample in our scene

We can start estimating how big our pixels are by comparing similar triangles. The satellites are moving between 7.28 and 7.70 km/s. If we’re looking straight overhead, then at these relative distances (340 to 1150km) we can consider that 7km of travel to be a straight line, even though it does have a very slight curve. In that case, we can simply scale the triangle formed by the line from us to the satellite’s T=0 position and the line from us to its T=1sec position down into our scene (Figure 7). If the scene objects are 20m (0.02km) away, then the width of one second at that object is 0.02km * 7.7km / 340km = 0.00045km, or just under half a meter. For the higher, slower orbit, it’s 0.00012km, or 12cm. At 12 to 45cm, we’re not going to see individual tree branches. Resolution will actually get a bit better when the satellite isn’t directly overhead, because it will be farther away, so the perceived angle of change will be smaller. But for the moment, let’s assume we don’t do better than half that size.
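That similar-triangle estimate is compact enough to check in code (a sketch; the numbers are the same ones used above):

```python
def sample_width_m(object_distance_m, sat_speed_kms, sat_altitude_km):
    """Width swept across a scene object in one second. The satellite
    moves sat_speed km along its path while the sightline pivots at
    the dish, so an object at object_distance sees that motion scaled
    by distance / altitude (similar triangles)."""
    return object_distance_m * sat_speed_kms / sat_altitude_km

low_orbit = sample_width_m(20, 7.70, 340)    # fast, low orbit: ~45 cm
high_orbit = sample_width_m(20, 7.28, 1150)  # slow, high orbit: ~12-13 cm
```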

On to estimating the distance between scan lines. Wikipedia states that there are 22 satellites per plane.[3] If these are evenly spaced around the orbit, we should see one every 4.14 to 4.91 minutes (248.18 to 294.55 seconds). If the earth rotates once every 23hr56m4s, then that’s 1.038º to 1.231º. At the equator, that’s 115.42 to 136.881km. I’m just above the 45th parallel, where the earth’s circumference is only 28337km, so the change in distance here is only 81.705km to 96.897km. If we change our frame of reference, and consider the satellite orbital to have moved instead of the earth, we can use the same math we did last time. To estimate, this distance (81km/satellite) is approximately one order of magnitude larger than the last ones (7km/s), so we can just multiply everything by ten. Thus, our scan lines should be 1.2m to 4.5m apart.
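The scan-line spacing estimate can be checked the same way. This sketch uses a sidereal day of 23h56m4s and an equatorial circumference of 40,075km; small differences from the figures above come down to rounding:

```python
import math

def scanline_spacing_km(sats_per_plane, orbit_minutes, latitude_deg):
    """Ground distance the earth rotates between consecutive
    satellites in one plane, at a given latitude."""
    gap_s = orbit_minutes * 60 / sats_per_plane  # time between satellites
    sidereal_s = 23 * 3600 + 56 * 60 + 4         # one full earth rotation
    degrees = 360 * gap_s / sidereal_s           # longitude drift per gap
    circumference = 40075 * math.cos(math.radians(latitude_deg))
    return circumference * degrees / 360

equator = scanline_spacing_km(22, 91, 0)  # about 115 km
here = scanline_spacing_km(22, 91, 45)    # about 82 km, near the 45th parallel
```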

At 12 x 120cm per sample, we’re not going to be producing photographs. At 45 x 450cm, I doubt we’re going to recognize anything beyond, “Yes, there are things above the horizon in that direction.” Let’s see if anything at all compares.

What parameters should we use to generate our comparison scan? If we’re seeing satellites pass in 4.14 minute (91 minutes / 22 satellites) intervals, we should guess that a scan line will be about 248 seconds. If they’re passing every 4.91 minutes, we should guess about 295 seconds.[3] Given the aliasing that integer math will introduce, the fact that 4.14 and 4.91 are kind of the minimum and maximum, and that the satellites won’t sit at exactly those altitudes, it’s probably worth scanning from about 240sec to 300sec, to see what pops up. I see what look like interesting bands show up at 247, 252, 258, and 295 at least. Maybe I’m catching satellites at a band between the extremes?

But then why was 330-332 the sweet spot in our pre-math plot? Maybe I’m just indulging in numerology, but 330 = 22 * 15. Twenty-two is the number of satellites in an orbital, and 15 is the width of the columns we saw in the SNR plot. Could it be that satellites are not evenly spaced through 360º of an orbital, but are instead always 5.5 minutes (330 seconds) behind each other?[3] If that were the case, the orbital would “wrap” its tails past each other. That seems odd, because you’d end up with a relative “clump” of satellites in the overlap, so maybe there’s a better explanation for the coincidence.

In any case, I’m going to forge on with an example from the 332-sample stripe, because its blobs look the strongest of any to me. Let’s also redraw it with the boxes ten times as tall as they are wide, since that’s what I calculated to be the relationship between one satellite’s samples and the next satellite’s samples. If I overlay one of those clumps on the northward view I shared in my last post, does it line up at all?

Figure 8a: Select a blob
Figure 8b: Rotate and scale the blob

I’ve stared at this for far too long now, and I have to say that this feels worse than the numerology I indulged in a moment ago. I’m starting to worry I’ve become the main character of the movie Pi, searching for patterns in the randomness. If there’s something here, it needs a lot more knowledge about satellite choice and position to make it work. Even if I adjusted the rendering to account for the correct curve of the satellite’s path and the camera’s perspective, the data is too rough to make it obvious where it lines up.

With some basic information like which satellite the dish was connected to for that sample, and the database of satellite positions, I’m pretty sure it would be possible to throw these rectangles into an augmented-reality scene. Would it be worth it? Probably not, except for the fun of doing it. The obstruction diagram in the Starlink app (Figure 9) divides the horizon into twelve segments. If it shows red in one 30º segment, it’s the tall thing you can see in that segment that is causing the obstruction. This additional data may be able to narrow within the segment, but if there are multiple tall things in that segment, they’re probably all obstructions.

Figure 9: Starlink app’s obstruction diagram

So, while this was a fun experiment, this is probably where it stops for me. If you’d like to explore your own data, the code I used is in my starlink-ping-loss-viewer repo on github. The data used to generate these visualizations is also available there, in the 1.0 release. Let me know if you find anything interesting!

Figure 10: Whole-second full-ping loss attributed to obstruction (red) or beta downtime (blue)

… and just one more thing before I sign off, following up on my past notes about short, frequent Starlink outages. Figure 10 is a rendering of my obstruction (red) and beta (blue) downtime over this data. I’ve limited rendering to only d=1 cases, where all pings were lost for the whole second, since this seems to be the metric that the Starlink app uses for labeling time down. One rectangle per second, 10 minutes per row. The top row begins in the early afternoon on February 9, and the bottom row ends just before midnight on February 12, US Central time.
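If you’re curious how such a rendering is laid out, here’s a minimal sketch. This is not the actual script from my repo; `to_rows` and the status codes are made up for illustration:

```python
# One cell per second; 600 seconds (10 minutes) per row.
ROW = 600

def to_rows(loss):
    """Chop a per-second status list (0=up, 1=obstructed, 2=beta) into rows."""
    pad = (-len(loss)) % ROW           # pad the final partial row with "up"
    padded = loss + [0] * pad
    return [padded[i:i + ROW] for i in range(0, len(padded), ROW)]

rows = to_rows([0, 1, 0, 2] * 500)     # 2000 seconds of fake data
print(len(rows), len(rows[0]))         # 4 rows, each 600 cells wide
```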

Dishy dressed up for the grid analysis. We see too many posts about Dishy’s icicle beard, and not enough about Dishy’s cool water droplet matrix.

Updates (footnotes):

[1] Many thanks to u/softwaresaur, a moderator of the Starlink subreddit, for pointing out that routing is far more complex: active cells are covered by 2 to 6 planes of satellites, so it’s unlikely that a dish connects to several satellites in the same plane in a row.

[2] From the same source, routing information is planned on 15-second intervals. At the very least, this means that the antenna array likely finely readjusts its aim every 15 seconds, whether or not it changes the satellite it’s pointing at.

[3] Again from the same source, while 22 satellites per plane was the plan, 20 active satellites per plane was the reality, though this has now been adjusted to 18. That fits the cycle observation better, as 18 satellites in a 91-108 minute orbit works out to 5 to 6 minutes between satellites.

Rural Internet: Starlink

At the end of my last post about the state of rural internet, I mentioned that we were about to try something new: Starlink by SpaceX. We’ve been using it as our primary internet connection for two weeks now, and TL;DR it would be tough to give it up, but it does have some limitations.

One of my first Speedtest.net results on Starlink.

Download speed via Starlink is excellent. Samples I’ve taken via Speedtest.net over my wifi have never measured less than 30Mbps. Most samples are in the 60-80Mbps range. My highest measurement was 146Mbps. Upload speed via Starlink is also excellent. Speedtest measures it anywhere from 5 to 15Mbps. Ping latency bounces around a little bit, but is usually in the 40-50ms range.

Typical speeds I measured via fixed wireless were 20Mbps down, 3Mbps up. So Starlink, in beta, is already providing a pretty consistent 3-4x speed improvement. I no longer worry about downloading updates while trying to do anything else on the internet.

A typical view in the Starlink app’s statistics panel.

Unfortunately there is a “but”, because while the speed is great when it’s running, the connection drops for a second or five every few minutes. The dish’s statistics indicate that these interruptions are about half due to Starlink making updates (“beta downtime”) and half due to the trees blocking my dish’s view of the sky (“obstructions”). I’ll be working on the latter when the weather warms, and they’re constantly working on the former.

Mid-winter Northwoods mount: four rows of concrete block put the middle of Starlink’s dish about four feet off the ground.
My stitching of the Starlink app’s obstruction view, looking northward from approximately where the dish is sitting. This is the clearest view I’ll have until the weather warms enough to try other mounts.

These short interruptions have almost no effect on browsing or streaming. Every once in a while, a page will pause loading for a moment, or a video will re-buffer very early on. I notice it only slightly more frequently than I remember cable internet hiccups.

But what these short interruptions do affect is video calling. Zoom, Facetime, etc. are frustrating. It /almost/ works. For two, three, five minutes everything is smooth, but then sound and video stop for five to ten seconds, and you have to figure out what the last thing everyone heard or said was. My wife participated in a virtual conference this past week, and she tried Starlink each morning, but switched back to fixed wireless after the second or third mid-presentation hiccup each day.

Complete outage, possibly related to new satellites launched the night before?
Outage confirmation on the support site.

And yet, there’s also a silver lining to the outage story. One of our frustrations with our fixed wireless provider is that we’ve had several multi-hour outages over the last three months. On Thursday, we finally had a two-hour Starlink outage. Why is that a silver lining? When I loaded Starlink’s support page over my cellphone’s limited 4G connection (remember, my wife was video conferencing on fixed-wireless), they had a notice up that they knew about the outage in our area, and listed an expected time of resolution. That sort of communication is something we have never gotten from our fixed-wireless provider. It completely changes how I respond to an outage, and it gives me hope that Starlink better understands what people expect from internet service today.

If you’re curious whether data backs up my subjective review of Starlink connectivity, please continue to my next post, which includes the dish’s own statistics.

The comparative price of the two solutions is nearly a wash. Starlink hardware is $500 plus shipping and handling (another $50). Our fixed wireless installation was $119, with the option to either buy the antenna for an additional $199, or rent it for $7/mo. That makes Starlink at least $200 more expensive up-front, without including any additional mounting considerations (brackets, tower, conduit, etc.). And don’t get me wrong, while the setup seemed simple to me, the value of professional installation and local, in-person troubleshooting should not be overlooked.

But once everything is in place, the monthly costs are the same: $99. For fixed wireless, that gets me 25Mbps that handles video calls well, but goes out overnight. Starlink is currently a no-guarantees beta, marketed as “better than nothing” for people who can’t get even my alternatives. Even in this state, it’s providing 4x more speed for me, with better communication about downtime. I think they’ll have no trouble selling these to loads of people, and if they significantly improve the video-calling experience, they’ll put fixed-wireless out of business.

Rural Internet

The fastest internet speed I can buy at my house is a nominal 25Mbps down / 5Mbps up. That’s slower than some can get less than a mile down the road, but faster than others can get just a bit beyond that. It costs $112 per month, which is also more expensive than what’s available to some neighbors, and cheaper than others.

I’ve spent the last twenty years living in areas of the country where I had multiple reliable, fast, relatively cheap connection options. In the last few months, I’ve moved to a rural area, so now, as you’ve probably learned from the year’s remote-work-and-school reporting, I don’t.

The two options for internet available to me today are satellite and fixed wireless. Satellite has the two major disadvantages of a large roundtrip latency (hundreds of milliseconds) and restrictive data caps (even expensive plans top out around 50GB per month). Neither of these works with my usage patterns.

So I chose fixed-wireless. If you’re unfamiliar with fixed wireless, you’re not alone. It’s basically a mobile hotspot cellular connection, except not mobile. Its antenna is permanently attached to our house, and pointed at a dedicated tower. This allows it to provide double to triple the bandwidth of mobile 4G. Of the 25/5 promised, which by the way is the state’s minimum definition of “high-speed internet”, we see about 80%. Most of the time our download speed is around 20Mbps, and upload between 3 and 4Mbps.

Value is something of a moot point. We know that elsewhere, even a mile down the road, cable companies provide twice or better the speed for the same price. But where neither cable nor DSL reach,[1] we have to compare to satellite. Speed for price is comparable, but our fixed wireless provider gets two easy wins: one tenth the latency (usually around 60ms), and no data cap. What the low-latency makes possible, namely video calling, the lack of data cap makes affordable. That’s especially true in a month where laptops, tablets, and phones also need to download updates.

There are places our fixed wireless provider could still improve. While we see most of the bandwidth we pay for most of the time, it’s not stable, and download speeds of 14Mbps or lower are not uncommon. We spent the week of Thanksgiving getting about 6Mbps down, 0.2-0.4Mbps up. That slowdown dragged on because of their second problem: small support windows. Our speeds stayed low for a week (yes, admittedly a holiday week) because no one was in the office to fix them. The same happens with complete outages, of which there have unfortunately been several. If the outage starts after the support office closes (5pm), it’s often not fixed until the office opens again in the morning.

We’re lucky to be flexible with our internet needs. We don’t have kids that need to be connected to remote schooling. We don’t have jobs with lots of scheduled meetings. So, while outages and slowdowns are irksome, our needs are still met.[2]

But this situation is still bothersome. I want to write something about how it’s unbelievable that we can’t expect reliable, high-speed internet access at every house, during a ten-month-old pandemic where we’re asking people to attend school, work, shop, and socialize virtually. But really, how can we not expect reliable high-speed internet access at every house by now anyway? There is a world of information and utility out there that many have little or no access to.

We, as a country, treat internet access as a luxury, offered only in ways that corporations deem profitable enough. Many refuse to believe it, but internet access has become a utility. Some will argue that anything you can do online, you can also do by calling, driving, or walking somewhere. But this erects a wide social status divide. The person who can file their taxes online gets their return faster than the person that has to mail them. The person who can shop on Amazon has access to a far wider selection of products, with better availability, than the person who can only drive to Walmart. The person who can access books, movies, music, and other media digitally is able to be better informed, with a broader world view, than the person limited to what their local stores and libraries[3] have on hand.

Or to put it another way, your house didn’t “need” electricity run to it a century ago. But the house with electric light is safer than the house lit by candles or lanterns. The house with electric refrigeration has a safer food supply. The house with electric laundry machines saves hours of time.

As I write this, we’ve just begun a test of one more step into the future. SpaceX has launched a new satellite network called Starlink. Unlike existing satellite internet providers, this network offers low-latency connections, with speeds two to six times faster, with no data cap. All this costs the same price as our fixed-wireless provider.[4] They’re running an invite-only beta, with their own set of no-guarantee-of-uptime warnings. We couldn’t resist giving it a try, to find out if widespread quality internet connection is finally on the horizon for areas that have had next to nothing until now. Follow this blog for updates about how it’s going once we gather some data.

[1] I called and asked. The cable company that serves much of the rest of town doesn’t believe the dozen or two houses on my mile-long dead-end street will be profitable. They won’t even roll the truck to survey whether service is available at my address. Neighbors are doubly confused by this, because power and phone lines were just moved off poles and underground in the last few years, so it would seem surprising if cable wasn’t laid in the same trench.
[2] Yes, we’re also lucky to be able to afford $112/mo. We’re also saving by not using a landline for the telephone, and not subscribing to TV.
[3] No disrespect is meant to libraries who provide valuable services to their communities, even beyond their media collections. And yes, many libraries are connected to networks that exchange media free of charge to the borrower, but access from home is still a different level of availability than in-person.
[4] Yes, we’re also lucky to be able to afford a second $100/mo. to test both networks concurrently. We hope we can use our privilege to gain experience that can be shared with the community.

Geodesic Dome

By a half-planned chain of events, I’ve spent the last six weeks of COVID-19 Shelter-In-Place over 2000 miles from my woodworking tools. Instead of diving right into a new construction after my dresser, I cleaned and then packed my shop, in preparation for a move. While our belongings have made their way across the country, we have stayed behind to “quaranteam” with a friend-couple, their young son, and their dogs.

We have entertained ourselves with other hobbies: walks to keep everyone moving, cooking delicious meals, reading books, and making music. A few ideas for construction projects have arisen during that time, but with few tools and difficulty acquiring wood while maintaining social distance, none of them have been undertaken.

Then one of us saw a post about a geodesic dome made of cardboard. The shape alone immediately captured the attention of the four Xennial-age adults in the house. When we recognized that cardboard was the one material in abundance here, from six weeks of contactless deliveries, wheels were set in motion.

Google found a calculator for ordering bits of PVC based on the size and complexity of dome desired. Reverse-engineering that math led to a very simple cut list for a “2V” geodesic dome of paper:

• Ten equilateral triangles, with sides of length A

• Thirty isosceles triangles, with one side of length A and two of length 7/8 * A

Seven-eighths isn’t exactly what the calculator produced, but it’s less than 1% off, it makes measurement simple, and it has worked in my experiments.

The size of the dome that is built is also related to A in a very simple way: through the golden ratio. A compressible, bendable material like paper and tape, worked with common tools like scissors or a box cutter, introduced enough error that using many decimal places didn’t make sense. Estimating the height at 1.5 * A and the width (diameter) at 3 * A proved close enough for toy structures.
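If you want to check those approximations, you can recompute the 2V strut lengths from scratch: place an icosahedron inside a sphere, push each edge midpoint out to the sphere’s surface, and measure the two chord lengths that result. This little Python sketch is my own verification, not the calculator we used:

```python
import itertools, math

phi = (1 + 5 ** 0.5) / 2
origin = (0.0, 0.0, 0.0)

def norm(v):
    """Project a point onto the unit sphere."""
    r = math.dist(v, origin)
    return tuple(c / r for c in v)

# The 12 icosahedron vertices, normalized onto the unit sphere.
verts = [norm(p) for a, b in itertools.product((1.0, -1.0), repeat=2)
         for p in ((0.0, a, b * phi), (a, b * phi, 0.0), (b * phi, 0.0, a))]

# Faces are vertex triples whose pairwise distances all equal the edge length.
edge = min(math.dist(p, q) for p, q in itertools.combinations(verts, 2))
faces = [f for f in itertools.combinations(verts, 3)
         if all(abs(math.dist(u, v) - edge) < 1e-9
                for u, v in itertools.combinations(f, 2))]

def midpt(p, q):
    return norm(tuple((a + b) / 2 for a, b in zip(p, q)))

# 2V: split each face into 4 triangles, pushing edge midpoints to the sphere.
p, q, r = faces[0]
A = math.dist(midpt(p, q), midpt(q, r))  # long strut (midpoint to midpoint)
B = math.dist(p, midpt(p, q))            # short strut (vertex to midpoint)

print(f"B / A        = {B / A:.4f} (vs 7/8 = 0.875)")   # ~0.8843
print(f"radius / A   = {1 / A:.4f} (phi = {phi:.4f})")  # height ~= 1.618 * A
print(f"diameter / A = {2 / A:.4f}")                    # width  ~= 3.236 * A
```

The short-to-long strut ratio comes out to about 0.884, which is where 7/8 comes from; and the sphere’s radius is exactly phi times the long strut, which is where the “just a bit over” 1.5 * A height and 3 * A width come from.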

I’ll include some examples of how specific measurements work out later, but before I annoy people by showing how neatly these work out in Imperial units, I’d like to explain how no particular units are necessary at all. Grab a stick or a string, and I’ll walk you through how to build your own geodesic dome without any arithmetic.

Step 1: Sizing your dome

Figure out where you want to put your dome. Is it a decoration for your desk, or a fort to play in? Find a piece of string, a stick, a strip of paper, etc. that you can cut to the desired width (diameter) of your dome. Before you cut it, find its halfway point, and hold it up in the approximate middle of where you will place your dome. This is about how tall your dome will be (the dome approximates a sphere, so you get one half diameter up from the ground). When you have found a size you like that fits your space, move on to step two.

Step 2: Making your tools

Cut your string, stick, or strip of paper to a length equal to the dome width you chose in step one. Then cut that piece into three equal segments. I used a paper strip for my measuring device, so after cutting mine to the full length, I folded it into thirds, and then cut through the folds:

Label one of the cut pieces “8” (eight).

Cut off 1/8th of one of the other pieces. The easiest way to do this is to first find the middle point of that piece. Then find the point halfway between the middle point and one end. Finally, find the point halfway between that point and the end. Cut through that final halfway point. I folded my paper three times and cut through the third fold to do this:
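In fractions, those three folds work out like this (just the arithmetic of the procedure above):

```python
from fractions import Fraction

mark = Fraction(0)  # start measuring from the far end of the piece
end = Fraction(1)   # the end you keep halving toward (whole length = 1)
for _ in range(3):
    mark = (mark + end) / 2  # each fold halves the remaining distance

print(mark)  # 7/8 -- cutting here leaves a piece 7/8 of the original length
```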

Label the piece you just cut “7” (seven).

Step 3: Equilateral Triangles

If the edge of your dome-building material isn’t straight, draw a line on it using a straightedge. Using your “8” piece, mark divisions along your straight edge.

Now for the tricky part. Put one end of your “8” piece right on the left-most mark (or corner) of your straight edge, and angle it up so that the other end is somewhere near where you expect the tip of an equilateral triangle to be. Mark a dot on your building material at that end of your “8” piece. Do this a few more times, swinging both a little clockwise and a little counter-clockwise from that spot.

Connect these dots in the arc they form.

Next move the lower end of your “8” piece to the next division mark to the right on your straight edge. Swing the other end up until it crosses the arc you just drew. Mark the point at which it crosses the arc.

Draw a line from each of the division marks you just used to the arc-crossing point you just found. You have just marked your first equilateral triangle!

If you’re building a large dome, and/or working with pieces of material that won’t allow you to get multiple triangles out of one piece, you can skip the next few steps. Cut out this first triangle you have marked, and then use it as a template to trace out nine more identical triangles.

If you’re working with a piece of material that will fit multiple triangles, repeat the arc-crossing process at the right-most division of your straight edge.

Draw a line connecting the points of the two triangles.

Using your “8” piece, divide the line between the triangle tips.

Connect the division markers on your straight edge to the division markers on the line between the triangle tips. You have now marked out many more equilateral triangles!

You will need ten of these triangles. If you’ve already marked ten, you’re done. If you need to mark more, try extending your angled lines farther upward. When they cross, they will either make more triangles or diamonds. If they make diamonds, draw a horizontal line connecting the corners to make two triangles.

Cut out your equilateral triangles. Make sure you end up with ten!

Step 4: Isosceles Triangles

The process for the isosceles triangles is the same as it was for the equilateral triangles with one difference: use the “7” piece when finding the arc crossing. Use the “8” piece, as before, to mark divisions along your straight edge, but use “7” to find the crossing point from there.

You will need thirty of these isosceles triangles. Yes, 30.

Step 5: Pentagon Assembly

Time to start assembly. Looking at a finished geodesic dome, the eye is drawn to two (non-triangular) shapes: pentagons and hexagons. I’ve had success with assembling pentagons first, so that’s what I’ll show here.

Collect five isosceles triangles (the ones with two “7” sides and one “8” side). Arrange them in a circle so that all of the “8” sides are pointing out, and all the “7” sides are next to other “7” sides.

Connect four of the “7”-side seams together. A gap should develop in the fifth seam.

Draw the gap together. The pentagon will cup slightly. Seal the seam, and the pentagon will stay cupped.

Repeat this pentagon assembly five more times. You should end up with six pentagons, using all 30 of your isosceles triangles.

Step 6: Connect it All Together

This is where construction will really begin to get unwieldy. If you’re building a large dome, I strongly suggest at least one person to help hold. Two if you can get them.

Collect one pentagon and two equilateral triangles (the ones with three “8” sides). Connect each triangle to two adjacent sides of the pentagon.

From here, connect a pentagon into the space between the two equilateral triangles. This will introduce more cupping, like when you sealed the fifth seam in the pentagon. Continue to alternate pentagons, and equilateral triangles, growing this strip until you have only one pentagon left unconnected (you should have five pentagons attached to five pairs of equilateral triangles). Connect the two equilateral triangles at the end of your strip to the pentagon at the start of your strip. You should have a ring that has a pentagonal hole in one side. Tape the remaining pentagon into this hole, and your geodesic dome will be complete!

Apologies for a lack of build pictures of these steps. The pieces pictured so far are being mailed in an envelope as a small birthday gift. But, here they are laid out, ready for final taping.

And here is an annotated diagram of what gets taped where. Purple 1-5 are the pentagon seams. Yellow 1-10 are the remaining alternating-pentagon-triangle seams (9 and 10 appear twice to indicate where the wrap-around connects). Red 1-5 are the roof seams (2-5 are duplicated to show where the pieces connect).

And one more shot with a completed dome in the opposite color scheme, to aid in visualization.

What next?

If you followed along, I hope your first dome was successful. If you’re wondering about the dimensions of the domes in my pictures, they are these:

Small dome, with blue pentagons: A = 2 inches. Isosceles sides = 1.75 inches. Height is just a bit over 3 inches. Width is just a bit over 6 inches.

Small dome, with green pentagons: A = no idea. I purposely didn’t measure anything, to make sure I wasn’t lying about being able to build this without numbers.

Large dome, made of cardboard: A = 24 inches. Isosceles sides = 21 inches. Height is just a bit over 3 feet. Width is just a bit over 6 feet. The additional ring around the bottom is ten inches tall. We have fit four adults and one child inside. It’s close, but not cramped.

Good luck with your next build!

Project Box: Planning

While I think about how to tell you about the process of fitting the internal components of this box, I’m going to talk about planning.

box-build-planning - 1.jpg

The image above is the whiteboard in my shop, as it was at the end of this project. I’ve lost some of the context about what each scribble meant, but there are three obvious diagrams: the dovetails, the hinges, and the latches and handle. None are to scale. None indicate relationships to each other. All were drawn at the moment they were needed.

It’s tempting to attribute this plan-as-you-go process to the nature of wood. The many ways different grain patterns can and cannot be used, and the inability to be sure of what you’ll find inside a slab, mean that most projects end up needing to be adapted to fit as they progress.

But this incremental design is how all of my projects go. The basic structure of a program gets sketched and then adapted as I start to code. Presentations are outlined and then rearranged as I find each part needing a different fit in the story. Dinner plans come together on the cutting board. Road trips have a destination and, “Something like this road will probably work.”

I would make far fewer things if I designed the entire solution up-front. There is, of course, plenty of planning that happens before the first cuts are made. However, there is a point in the initial design of every project at which there are too many unknowns. My solution is often to bring the work right up to the point where the project would be blocked without a decision. This brings clarity to the details surrounding the issue. Sometimes the details become so clear that the solution is obvious, and other times I learn that the question wasn’t even relevant.

There are two keys to this flow working. The first is enough familiarity with the domain to recognize which decisions are likely to doom a project if not addressed early. My box must have internal dimensions large enough for the things I intend to store in it. I must have yeast and two hours of lead time if I want to bake bread for dinner. Put another way, it must be possible to determine what can be left unknown.

The second key to this process is the confidence that I can solve the problems that will arise. I find this confidence central to my work, even when I’ve over-planned. Years of projects in many domains have taught me to expect that I will make a mistake somewhere in either my plan or my execution. I’ve also learned from this experience that very few of these mistakes spell disaster.

So, a whiteboard hangs in my shop, providing a place for information to accumulate and clarify the unknowns as needed.

NerdKit Gaming: Part 2

If you were interested in my last bit of alternative code-geekery, you may also be interested to hear that I’ve pushed that NerdKit Gaming code farther. If you browse the github repository now, you’ll find that the game also includes a highscore board, saved in EEPROM so it persists across reboot. It also features a power-saving mode that kicks in if you don’t touch any buttons for about a minute. Key-repeat now also allows the player to hold a button down, instead of pressing it repeatedly, in order to move the cursor multiple spaces.

You may remember that I left off my last blog post noting that there wasn’t much left for the game until I could find a way to slim down the code to fit new things. So what allowed these new features to fit?

Well, I did find ways to slim down the code: I was right about making the game state global. But I also re-learned a lesson that is at the core of hacking: check your base assumptions before fiddling with unknowns. In this case, my base assumption was the Makefile I imported from an earlier NerdKits project. While making the game state global saved a little more than 1k of space, changing the Makefile so that unused debugging utilities, such as uart, printf, and scanf, weren’t linked in saved about 6k.

In that learning, I also found that attempting to out-guess gcc’s “space” optimization is a losing game. Making the game state global had a positive effect on space, but making the button state global had a negative effect. Changing integer types would help in one place, but hurt in others. I’m not intimately familiar with the rules of that optimizer, so it felt like spinning a wheel of chance when choosing which thing to prod next.

You may notice that I ultimately returned the game state to a local variable, passed in and out of each function that needed it. The reason for this was testability. It’s simply easier to test something that doesn’t depend on global state. Once I had a bug that required running a few specific game states through these functions repeatedly, it just made sense to pay the price in program space in order to be able to write unit tests to cover some behaviors.

So now what’s next? This time, it’s not much until I buy a new battery. So much reloading and testing finally drained the original 9V. Once power is restored, I’ll probably dig into some new peripheral … maybe something USB?

NerdKit Gaming

Contrary to the evidence on this blog, not all of the code I write is in Erlang. It’s not even all web-based or dealing with distributed systems. In fact, this week I spent my evenings writing C for an embedded device.

I’ve mentioned NerdKits here before (affiliate link). This week I finally dug into the kit I ordered so long ago, and took it somewhere: gaming.

The result is a clone of a simple tile-swap matching game. I used very little interesting hardware outside the microcontroller and LCD — mostly just a pile of buttons. The purpose of this experiment was to test the capabilities of the little ATmega168 (and my abilities to program it).

I’ve put the code on github, if you’re interested in browsing. If you don’t have a NerdKit of your own to load it up on, I’ve also made a short demo video, and snapped a few up-close screenshots.

What did I learn? Mostly I remembered that writing a bunch of code to operate on a small amount of data can be just as fun as writing a bunch of code to operate on a large amount of data. Lots of interaction with the same few bytes from different angles has a different feel than the same operation repeated time and time again on lots of different data. I also learned that I’ve been spoiled by interactive consoles and fast compile/reload times. When it takes a minute or more to restart (after power cycles and connector un-re-plugging) and I don’t have an effectively infinite buffer to dump logs in, I think a little longer about each experiment.

So what’s next? Well, not much for this game, unless I slim down the code some more. Right now it compiles to 14310 bytes. Shortly before this, it was 38 bytes larger, and refused to load onto the microcontroller properly, since it plus the bootloader exceeded the 16K of flash memory available. My first attack would probably be to simply move the game board to a global variable instead of passing it as a function argument. The savings in stack-pushing should gain a little room.
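To put rough numbers on that squeeze (the bootloader size here is my assumption; I haven’t checked the actual figure):

```python
flash = 16 * 1024      # ATmega168 program flash, in bytes
bootloader = 2 * 1024  # assumed bootloader reservation -- not a verified number
app = 14310            # current compiled size of the game

headroom = flash - bootloader - app
print(headroom)        # 26 bytes to spare under these assumptions
print(app + 38 > flash - bootloader)  # True: the larger build wouldn't fit
```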

If I were to make room for new operations, then a feature that saved a bit of state across power cycles would be a fun target. What’s a game without a high-score board?