Starlink: Outage Data End of May Update

I said at the end of last month that I was looking forward to writing this month’s update. Mid-month, I built a tower and got Dishy up in the air, hoping to cure my obstructions. So how did it go?

In a word: frustratingly.

May 2021 connectivity timeseries. 1sec per pixel, 20min per line. Red=obstruction-downtime, blue=other-downtime, green=no-satellite, dark-grey=no-data (reboot), yellow=uninterrupted connection at least 30min long.

Above is the timeseries chart for the month. Even without knowing what the colored blobs mean, you can see that they change. Change is what I was looking for, after reporting that April had been just like the end of March. Let’s break down the changes.

May 1-13am. Grey bars are May 6 and 7 when Dishy upgraded firmware from 1f86ec34 to fd689710.

The first two weeks look just like April. We were seeing about 38min obstructed, and an additional 38min “other” (formerly known as “beta”) downtime per 12hr through this period. This was starting to get awful. Half the time I was browsing the web, I’d have two tabs open – one to see the webpage I wanted, and one to check whether Starlink was connected. So when the wood for my tower arrived, I wasted no time in starting construction.

May 13-23am. Point A = tower install May 13, B = firmware 68fdc22b May 15, C = support request May 15, D = support response May 18, E = firmware 1752790c May 21.

On May 13, I stowed the dish, unplugged it, mounted it on the tower, and plugged it back in (black bar at point A). All of the obstruction downtime (red) vanished! … but “other” downtime (blue) increased?! I know it takes 24hr to assemble obstruction statistics, so I waited. At 48hr, when I was still seeing 45min of non-obstruction downtime per 12hr with nearly no obstruction downtime, I reached out to Starlink support (point C). Unfortunately, that was Saturday, so even though they called and talked to me, they ultimately told me they’d have to wait for the rest of tech support to check out the details on Monday.

Obstruction view at Dishy’s original mounting position (not on tower), facing north. February on the left, May on the right.

I didn’t hear from Starlink on Monday. I left home mid-afternoon, and spent the night away. When I returned on Tuesday afternoon, an email had just arrived from Starlink (point D). Paraphrased, it read, “Could you check your stats again now?” I opened the app, and found that other downtime had fallen, but obstruction downtime had climbed well over 40min per 12hr! Over the next two days, obstructions only got worse. We topped out around 70min obstructed per 12hr. What happened?!

The lack of obstruction downtime for 96hr after moving Dishy was apparently a fluke. “The stats didn’t reset right,” is basically what support said. That’s not why the stats got worse from Monday onward, though. What made obstructions worse is that that Monday was the second of three days over 70ºF in the middle of spring rains. That’s right, the thing I’ve been talking about racing for months caught up to me: leaves. We went from nothing to nearly full canopy in the space of a few days.

Pre-build tree model from my Tower-building post. 20ft tower would almost clear 55ft trees.

Wasn’t the tower supposed to fix the obstruction problem? Yes, well. Remember when I said I wasn’t sure enough of my measurements to put in a permanent tower? It turns out my measurements were pretty good … but the math I did with those measurements had a major error. I typed “sin” where I should have typed “tan”, and it turns out my main obstruction trees are not 50-60ft tall, but are instead over 80ft.
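To make the mistake concrete, here’s a quick sketch with made-up numbers (these are not my actual survey readings):

```python
import math

# Hypothetical reading: treetop sighted 55 degrees above horizontal from
# 60ft away, with the sighting device 5ft off the ground.
distance_ft = 60.0
elevation_deg = 55.0
eye_height_ft = 5.0
angle = math.radians(elevation_deg)

# Correct: the height above eye level is opposite the angle and the ground
# distance is adjacent, so it uses tangent.
height_tan = eye_height_ft + distance_ft * math.tan(angle)  # ~90.7 ft

# My mistake: sine relates the height to the (longer) hypotenuse, so feeding
# it the ground distance understates the height.
height_sin = eye_height_ft + distance_ft * math.sin(angle)  # ~54.1 ft

print(f"tan: {height_tan:.1f} ft, sin: {height_sin:.1f} ft")
```

Because sine still spits out a plausible, tree-sized number, the mistake isn’t obvious at a glance.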

Corrected tree model. 20ft Dishy tower does not clear 80ft trees.

So, my tower probably saved me from having even worse connectivity. Without being sure how long I’d have to wait to get accurate data, I’m not going to move Dishy off of the tower again to find out. I’d rather not haul the cinder blocks out to restack anyway.

An extra 5ft, to reach 25ft total, does help just a little bit.
May 23-26. Added an extra 5ft to the tower mast at the dark grey line.

In a fit of, “What’s the quickest change I can make to get some improvement?” I added another 5ft of pipe to my tower. Dishy now sits 25ft above the ground. We’re usually under 60min per 12hr of obstruction downtime again. Sometimes when it’s not raining, we’re even back under 50min. Thankfully, “other” downtime has dropped as well, which puts total downtime back near where we started.

Obstruction view near Dishy’s 20ft tower mounting position, facing north. Blur and narrow window are due to my extremely quick tape-my-phone-to-a-pole hack to get this view before a thunderstorm rolled in.

My main northern trees are over 80ft. Trees to the east and west are 60ft. I can’t get far enough from the east and west trees to make anything less than a 40ft tower clear them. I can’t get far enough from the north trees to clear them with any tower much shorter than 60ft. This 25º horizon and 100º field-of-view are kind of a bummer for my location.

“Just get rid of the trees and/or put up a taller tower,” I hear you say. Yes, I know that’s the solution to my current Starlink connection troubles. The problem is that I like these trees, and I’m not particularly wild about spending several thousand dollars on a permanent tower to support a beta service. Can I even rely on the dish getting its stats correct after that move? Once the initial constellation is full, and the service moves out of “better-than-nothing beta”, and we get full details on the potential narrowing of the necessary field of view, maybe I’ll feel better about a tower.

Estimates in forums put the full constellation date between two and three months away. I’ll be continuing to collect these stats, and possibly trying one or two more tower position changes, over that time. In addition, I’m finally collecting official stats on my alternate connection, a locally-operated fixed-wireless ISP. They were here before Starlink, and we might have skipped the beta if that WISP hadn’t had large amounts of unexplained downtime just a couple of weeks before we received our invite. They’ve continued to be the connection we turn to for video calling. Now it’s time to see if they’ve ironed out their issues, and determine if we’d rather just go back to that connection and wait for Starlink to mature. Watch for comparison data next month.

Dresser: … versus Humidity

It has been a little over 14 months since I finished my dresser. I’ve learned a few things that I’ll consider during my next build, like how epoxy doesn’t really stick to slate, and how heavy that much 4/4 cherry is. But the most surprising one has been the reaction to humidity.

It’s not exactly hard to find free advice on the perils of ignoring seasonal wood movement. We all know that wood expands and contracts as humidity fluctuates. But many of us also get to nearly ignore it. Kiln-dried wood, cut and assembled in a climate-controlled shop, and used in a climate-controlled home, doesn’t actually move much. I built this dresser in San Jose. Even without climate control, the environmental humidity hardly changed by 10% throughout the year.

But I haven’t stayed in that environment. This past year, I moved to a home that experiences “real winter” (for two weeks in January, the outside temperature remained below zero Fahrenheit), uses forced-air heating (without air conditioning), and also overlooks a lake. The air inside was so dry in January we bought moisturizing lotion at Costco. Spring has finally warmed up the place, and brought days and days of rain. I wouldn’t compare it to the mugginess that the southeastern US experiences … but in comparison to winter, my dresser might.

Humidity has caused the drawer face (cherry, the darker-colored wood) to bow outward, pulling the front of the drawer box away from supporting the drawer bottom.

The front of every drawer is bowed out. It actually doesn’t look terrible with all of the drawers in place. Since they’re all made of aligned segments of the same pieces of wood, all of the bows match. I accounted for some expansion, so the drawers aren’t jammed. But, the bottom panel of each drawer is no longer supported by the groove in the front, so they sag under the weight of clothing. The next-to-lowest drawer bottom sags low enough to catch on the front of the lowest drawer face.

Why did this happen? I used plain-sawn boards instead of quarter-sawn, mostly. But, I alternated the growth ring direction of each board, as is commonly advised. The theory behind that technique is that if each board swells away from its outer growth rings, alternating which side the outer rings are on will cause each board to cup in the opposite direction. The face still wouldn’t be flat, but it would have a few smaller waves instead of one big bow.

With screws removed, the drawer box moves back into place. Alternating growth rings (highlighted in white) didn’t correct the arch.

I didn’t get waves. I got an arch. One, rainbow-like arch. Why? Probably two things, both to do with the face being screwed to the drawer box. First, each board probably didn’t absorb humidity evenly. If they had, each board would have cupped its own way, producing the expected wave. But, the back side of each face, flush against the box of the drawer, likely absorbed much less than the face in the open air. So, the open-air face expanded, and the sealed face didn’t. Second, the drawer box is multi-layer finished birch plywood. It didn’t expand in the humidity at all. While I did oversize the screw holes to allow for some movement, I doubt it was enough to compensate for this much. So, the box front itself kept the back side of the drawer from expanding. I bet if I removed the drawer fronts from the dresser and let them stand free in the open air for a few days, the expected wave shape would show up.

But I’m not interested in getting a wave shape. I’m interested in continuing to use my dresser through the coming humid summer. There are a few ways to think about correcting the bow. The one I’ve decided to use focuses on the use problem, instead of the look problem. The real issue I have with the bowed drawer fronts is that they pull the drawer box front with them, which leaves the drawer bottom unsupported.

Stretchers force the drawer box front to stay against the drawer bottom, even when the tension of the face bow is reapplied.

What I’ve done is to make sure that the front can’t be pulled too far from the back, by installing stretchers between them. This is something I considered putting in the long drawers (40 inches side-to-side) anyway, to add some structure and divide up the space. I used a sliding dovetail on either end of the stretcher, to give it a strong grip on the front and back. I also drove a pan-head screw through the front and back into the stretcher, to keep the stretcher from sliding along the groove as I pull clothes from under and around it.

With the stretchers clamping the box front and back on the square bottom, the screws holding the face to the front were able to easily pull the curve back out of the face. Maybe this is kicking the can down the road, and I’ll have to deal with a worse problem as humidity continues to soak into the drawer faces, or when it all leaves again next winter, but it seems like this is working so far.

Starlink: Tower

One reason I haven’t worked on fixing my obstructions before now is that winter makes the ground impenetrable, and the roofs treacherous. The second reason is that I didn’t want to attempt any solutions without having data to guide me.

“What data do you need? Just get the dish up as high as you can!” is the sentiment I’ve gotten from the Starlink subreddit. Forty-foot Rohn towers are the “put a bird on it” of that community. But the idea of just ordering $1000 or more of tower, and pouring a large cement pad for it, just to see if that fixes things, doesn’t sit right with me. What if all I needed was 20ft of tower at a different location? What if I actually needed something taller than 40ft?

Don’t even get me started on the, “Just cut down the trees,” crowd.

So once it got warm enough to be outside without thick mittens for more than a minute at a time, I did what any engineer would do: I hacked together a sextant, and mapped the positions and heights of my obstructions – trees, mostly.

Surveying with what you have: tripod + walking stick = reference, protractor + straw + string + weight = sextant.

Then, because my particular engineering specialty is computers, I created a 3D model of the property, and put Dishy in it.

Dishy’s original location, about 4ft off the ground. North is along the positive Y-axis, which runs to the right (view is toward WNW).

A cone with a 100º peak, rotated northward until its edge is 25º off of horizontal, represents Dishy’s field-of-view. I was lazy about tree modeling – they’re just cylinders rising to the measured height, at the correct location. I assume that if I can get the cone above the peaks, it will also be outside of the rest of the tree shape. Comparing this model to the obstruction view in the Starlink app, I think I got close. Each tree I see inside the app’s window, I see inside the model’s cone.

Dishy 10ft above my roof. The blue cylinder near it represents my chimney. The tree in the cone to the north is far more in Dishy’s field-of-view than this model shows, because many tall branches extend southward.

With hope that the model is correct, I started moving the Dishy cone around. My first question was whether I should plan to put Dishy on my roof. Unfortunately, there is a tree very close to the north side of my house, not to mention the east and west sides. The cone wasn’t clear until Dishy was ten feet above my roof. To confirm, when the ice had cleared, I climbed on the roof and pointed the app around. The view from the app made me think this might even be optimistic, as some branches of the tree reach over the roof, in a way that makes the simple tree-cylinder model too simplistic.

Dishy a few feet southwest, and 16ft higher than its original location.

If the roof mount was going to require a tower anyway, how much tower would I need from the ground? I began moving Dishy’s cone up from its original position. When I reached twenty feet, moving it a little south and a little west seemed to almost entirely clear the cone. If I raise the aim or narrow the field at all, as is expected to happen when more satellites come online later this year, the cone is completely clear.

So I should have just ordered a 20ft Rohn and put it there, right? Maybe, but I’m not quite that confident in my mapping. My hacked sextant gave very coarse readings, made worse by the fact that many readings were over 45º, where a small error in the measured angle turns into a large error in the computed height. My compass liked to swing 5-10º between eye level and ground level. I think the model is a decent start, but I’m not willing to risk the permanent installation of a steel tower on it yet.
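To put numbers on that slop, here’s a small sketch (made-up distance, and using the tangent form that, as the update above admits, my original math got wrong):

```python
import math

# Hypothetical: a tree sighted from 60ft away. The "true" reading is 55
# degrees, but a straw-and-string sextant can easily be off by a few degrees.
distance_ft = 60.0
for reading_deg in (52.0, 55.0, 58.0):
    height_ft = distance_ft * math.tan(math.radians(reading_deg))
    print(f"{reading_deg:.0f} deg -> {height_ft:.0f} ft above eye level")

# 52 deg -> ~77 ft, 55 deg -> ~86 ft, 58 deg -> ~96 ft: a 6-degree spread
# in the reading moves the height estimate by almost 20 feet.
```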

But I do like building things, and even though lumber prices are higher than usual, it’s manageable for small projects. How close to 20ft can I get?

3D model of an 18.5ft wood-and-pipe tower. Adding Dishy’s 16in stem places the center of the dish 20ft off the ground.

Pretty close, it turns out! A tripod made of 12ft 4x4s, with 7ft of a 10ft pipe sticking out the top, plus Dishy’s 16in stem comes to almost exactly 20ft. If something like this will work, it offers a few nice features:

  • The pole can be raised and lowered to make installing Dishy easier.
  • The pole or the legs can be extended, if just a little more height is required.
  • If ground anchors are enough to keep it from tipping, it’s possible to reposition the tower.

That last point is half of an important question: is this design strong enough? Luckily, radio operators have been mounting antennas on pipes for decades, so the engineering isn’t hard to find.

Total Wind Torque < S40 Bending Moment. Hooray!

Dishy’s stem has a 1.5in. outer diameter. Schedule 40 1.5in. steel pipe has a 1.5in. inner diameter, so mounting Dishy on such a pipe wouldn’t even require the backordered adapter. Is it strong enough? I chose what seemed like the worst case scenario: Dishy’s flat face pointed straight into an 80mph wind. Our winds generally come from the west (parallel-ish to Dishy’s face), and average high gusts are 40mph, so this should give us a good margin of error. Luckily, even in the worst case I’ve specified, math says that 8ft. of 1.5in. S40 shouldn’t bend. Hooray!
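For anyone who wants to follow that figure’s arithmetic, here’s a rough sketch. The dish area, drag coefficient, and pipe yield strength are my assumptions rather than official numbers, so treat the output as ballpark only:

```python
import math

# Assumptions (not official Starlink or pipe-supplier numbers):
dish_area_m2 = 0.26      # flat face of the original round Dishy, roughly
drag_coefficient = 1.2   # flat plate facing the wind, roughly
air_density = 1.225      # kg/m^3 near sea level
wind_mph = 80.0
wind_ms = wind_mph * 0.44704

# Drag force on the dish face: F = 1/2 * rho * v^2 * Cd * A
force_n = 0.5 * air_density * wind_ms**2 * drag_coefficient * dish_area_m2
force_lbf = force_n * 0.2248

# Bending moment at the bottom of 8ft of exposed pipe, in inch-pounds.
moment_inlb = force_lbf * 8 * 12

# Capacity of nominal 1.5in schedule 40 pipe (commonly listed at about
# 1.9in OD and 1.61in ID), assuming ~30,000psi yield for plain steel pipe:
od_in, id_in = 1.9, 1.61
section_modulus = math.pi * (od_in**4 - id_in**4) / (32 * od_in)
capacity_inlb = section_modulus * 30000.0

print(f"wind force ~{force_lbf:.0f} lbf, wind moment ~{moment_inlb:.0f} in-lb, "
      f"pipe capacity ~{capacity_inlb:.0f} in-lb")
```

Even with these guessed numbers, the wind moment lands comfortably below the pipe’s capacity, which is the margin the figure above is celebrating.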

Total Torque << Ground Anchor Torque. Hooray!

What about those ground anchors? If I use the wind force already calculated, and calculate the leverage at the pivot point at ground level, I get almost 25,000in.lbs. I found ground anchors that say they provide 2250lbs. of holding power “in normal soil conditions.” Given that they will be 4.5ft from the pivot point, that comes out to over 120,000in.lbs. of leverage. That’s so much more than the wind leverage that even if my soil conditions are abnormal, I think I’m safe. (Yes, I have ignored wind torque on the wood tower as well. Given that it’s much lower to the ground, and the anchor torque is so much higher, I’m not concerned.)
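The anchor check is the same lever-arm arithmetic. A sketch, using the wind force estimated above and the anchor rating quoted in this paragraph (my force estimate comes out lower than the ~25,000in.lbs above because the area and drag numbers are guesses, but the conclusion doesn’t change):

```python
# Assumption: roughly 55 lbf of wind force (from the sketch above) acting
# near the top of the tower, about 20ft (240in) above the pivot at ground level.
wind_force_lbf = 55.0
tipping_torque = wind_force_lbf * 240.0   # ~13,000 in-lb

# An anchor rated 2250 lbf "in normal soil conditions", attached 4.5ft
# (54in) from the pivot, out-levers that on its own.
holding_torque = 2250.0 * 54.0            # 121,500 in-lb

print(f"tipping ~{tipping_torque:.0f} in-lb vs holding ~{holding_torque:.0f} in-lb")
```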

If my well pump hadn’t failed just a couple of hours into construction, I might have had the tower up in two days. It took three. The tower was stable enough for me to lean my ladder against while mounting Dishy. We’ve only had light breezes so far, but Dishy doesn’t seem to wiggle too much.

Dishy powering up in its new home. View is to the southwest – these trees were outside even the original field of view.

So, the did-I-save-over-a-Rohn question: did I fix my obstructions? Almost. We had rainy days following installation, so obstructions have bounced around a bit. In the days before relocation, I was seeing over 35 minutes of obstruction per 12 hr period. Since relocation, I’ve seen as low as 2 minutes of obstruction per 12 hr period, and no higher than 12 minutes (during the thickest cloud cover). So it seems like I fixed at least 60%, and maybe almost 95%. Our leaves are finally starting to come in, so I’ll probably let this setup gather data for a bit before deciding how much farther to push it. Come back in a couple of weeks for my May outage data analysis to find out what effect this tower had on my connection statistics.

If you’re interested in building a similar tower, I’ve published my plans here: http://woodworking-plans.beerriot.com/signal-tower/.

Dishy looking (mostly) over the northward obstructions.

Starlink: Outage Data End of April Update

This one is a bit of a shorter update, because there’s not a lot new to report. Outages remain frequent and long, just as they were in March. Starlink announced that they would be rolling out an update that would allow the dish to change which satellite it’s connected to when it sees an obstruction. If that has made it to my dish, I haven’t noticed its effects.

And yet, even with my obstruction problems, Starlink is still nicer for most of my internet usage than my fixed wireless connection. I reconfirmed that in the middle of the month, when I felt like my Starlink connection was especially unstable. After two days of much slower internet (the large grey bands in the timeseries plot below), I returned to Starlink and have waited out the downtimes since. (Except for video calling, when the slow-but-connected fixed wireless still wins.)

I’m officially looking forward to writing my May update. Parts for a tower to raise Dishy out of tree obstruction territory should arrive on Monday. Fingers crossed that about a week from now, the shape of my outage graphs will change dramatically.

This month’s graphs are below. They cover noon on March 31 through just before midnight on April 30. A refresher on what they mean:

  • Time series plot:
    • Each square represents one second. There are 1200 seconds (20 minutes) in each line, so a day is 72 lines tall.
    • White: connected. Red: Obstruction. Blue: Beta downtime (recently renamed “other” in the mobile app). Grey: no data (the dish rebooted, or my laptop was connected to the other network). Yellow: a special case of White, where the connection lasted for at least 30 minutes, without an outage lasting longer than 2 seconds.
  • Histogram:
    • The height of each bar indicates the number of times an outage (or connection) was observed lasting the length of time indicated on the horizontal axis.
    • Colors are the same as the time series plot, though now all connected durations are yellow (no more white), and all durations are plotted (not just greater than 30 minutes). Grey (no data) is also left out of this chart, mostly because the counts are so small compared to the other fields that they wouldn’t be visible.
Outage Timeseries: Noon March 31 through Midnight April 30, 2021
Outage Histogram: Noon March 31 through Midnight April 30, 2021
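If you’re curious how the per-second data becomes those charts, the rendering boils down to something like the sketch below. The status labels are simplified stand-ins for the real fields, and the yellow marking is applied beforehand by scanning for 30-minute spans with no interruption longer than 2 seconds:

```python
from PIL import Image

# One status label per second; these names are stand-ins for the real fields.
COLORS = {
    "connected":  (255, 255, 255),  # white
    "steady":     (230, 200, 0),    # yellow: inside a clean 30-minute span
    "obstructed": (200, 0, 0),      # red
    "other":      (0, 0, 200),      # blue ("beta" downtime)
    "no_sat":     (0, 160, 0),      # green
    "no_data":    (90, 90, 90),     # grey
}

SECONDS_PER_LINE = 1200  # 20 minutes per row, so a full day is 72 rows


def render(statuses, path="timeseries.png"):
    """Paint one pixel per second, SECONDS_PER_LINE seconds per row."""
    rows = (len(statuses) + SECONDS_PER_LINE - 1) // SECONDS_PER_LINE
    img = Image.new("RGB", (SECONDS_PER_LINE, rows), (255, 255, 255))
    for i, status in enumerate(statuses):
        img.putpixel((i % SECONDS_PER_LINE, i // SECONDS_PER_LINE), COLORS[status])
    img.save(path)
```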

Woodworking Plans and OpenSCAD

Every time I post pictures of a project I’ve completed, someone will ask if I have plans I can share. I never do. I have sketches with numbers near them, but I am confident that no one would be able to interpret them. If it has been too long since I made the project, I might have trouble interpreting them myself!

I’ve started an experiment to correct my lack of sharable plans. Diagrams and how-tos for many of my most recent projects are now available at http://woodworking-plans.beerriot.com/. Other projects from this blog’s history should show up there in the future.

While I gather sketches and measurements of past projects, I think it’s also a good time to explain what tools I’ve used, and why. Most of them are new to me, so I’m hoping this post might generate some discussion on better ways to approach this.

What I’ve settled on for diagramming is OpenSCAD. It’s a 3D modeler, controlled by a programming language that supports basic shape manipulation. I chose a CAD system because I thought that, if I had a full model of the project, I could generate component and assembly diagrams from different, partially-completed views of that model.

I chose a 3D modeler that is programmable because … well, let’s be honest, a good deal of it is because programming is how I interact with computers. But the secondary reason is that I believe the model, itself, is not enough to explain how to build a project. Sure, someone could pull apart a model in whatever tool I used, and inspect it for themselves. But if the point of making the model is to explain the project’s construction, then the product of my process shouldn’t just be the model, but should also include descriptions of the model: diagrams of sizes and angles, and natural language telling a person how to make it.

The model isn’t going to generate natural language build instructions, but if the sizes and angles it uses are available in code, they can also be templated into English written along with the model. To accomplish the templating, I’ve chosen the Jekyll website generator. Via a small script, I can export variable names and values from the model, making them available to include in a templated webpage.
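The export script is nothing fancy. As a rough idea of its shape (the file names and variable names here are examples, not the exact ones in my repo), something like this pulls simple name = value; assignments out of a .scad file and writes them where Jekyll can read them:

```python
import re
import sys

# Sketch: pull "name = value;" assignments that start at the left margin of a
# .scad file and write them as YAML for Jekyll's _data directory.
ASSIGNMENT = re.compile(r"^([A-Za-z_]\w*)\s*=\s*([^;]+);")

def export(scad_path, yaml_path):
    with open(scad_path) as src, open(yaml_path, "w") as out:
        for line in src:
            match = ASSIGNMENT.match(line)
            if match:
                name, value = match.groups()
                out.write(f"{name}: {value.strip()}\n")

if __name__ == "__main__":
    # e.g. python export_vars.py tower.scad _data/tower.yml
    export(sys.argv[1], sys.argv[2])
```

With the YAML sitting in Jekyll’s _data directory, a Liquid template can then drop {{ site.data.tower.pole_length }} (or whatever the variable is actually called) straight into a sentence.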

An additional benefit of programmability that I’m excited about is standard version control. That’s exciting because I can develop models iteratively, and improve things over time … and you can help me! The models, the diagrams, and the how-tos are all open-source on Github.

Since you can see my source code, that’s what I’d like to spend the rest of this post talking about. I started learning OpenSCAD only about six weeks ago. If you look through the code repository’s history, you’ll see how I’ve adapted my approach over time. Overall, it has been amazing how quickly I could get useful results from the tool.

There are also places I still feel like I’m fighting the tool. Most of these are places where I would really like the model’s code to somewhat read like a natural description of the creation of the project (start with a piece this size, cut this much off here, attach that other part there), but the details of making the tool render that clearly get in the way (rotate this around x and z, move it an infinitesimal amount to the side to prevent rendering conflicts, color this here so the cut is colored like so, by the way this can be animated). Finding the right abstractions is taking time.

Some abstractions are simple things, like getting used to expressing most things in vectors, instead of individual scalars along (or around) each axis. You can see that I learned that in the perfume display, and then forgot it when I started the toddler tower.

Other things aren’t so much abstractions as they are conventions. For example, which orientation should a component be described in? The way I would think about holding it while making it seems most natural in some ways, but the way it fits into the assembly seems most natural in others. I think the current popularity of CNC and 3D printing means that most CAD models are described in the orientation that the machine will operate on them. Should I endeavor to describe my components such that they could potentially be made via CNC or 3D printing? Muddled into this decision are two more questions: which way is up, and where should the origin point be?

Some abstractions seem like more complex concepts. Take, for example, these few notes:

  1. Nodes in the scene cannot be referenced by variables.
  2. No introspection can be done on nodes (size, position, color, etc. are all hidden to the language after creation).
  3. Modules, which look a little like functions in some other programming languages, can create nodes in the scene, but cannot otherwise return values.
  4. Nodes can’t be passed to modules, but there is a facility called “children”, which allows the effects of modules to be chained together.
  5. Functions, which also look like functions in some other programming languages, can not create or alter nodes in the scene.

These notes have strong influence on composability. You can write a module that creates a cube of a certain size, and you can write a module that moves whatever its children are up and to the right, and you can chain them together so that you get a cube of a certain size that is moved up and to the right. But, the mover module can’t base the amount that it moves the children on anything about the children. You have to pass parameterization information like that as arguments to the mover module.

Examples of how modules and functions can and cannot be composed.

It seems like thinking about nodes in the scene similar to the way one would think about side-effects in other languages is near the right model. My struggle with it is part of why you’ll see many modules and many functions in each model. Since I want to make diagrams showing each component at different stages of its completion, I need the ability to selectively apply each stage. The best I’ve found so far is to define each step of the creation as another module, so that I can apply them in different combinations. That solution came after being unsatisfied by parameterizing the modules with “do this step” or “don’t do that step” arguments. Functions and variables for every value help to make it possible to keep the many modules in sync without threading all of the information through arguments, though it does make for a lot of names to keep track of.

There are a hundred other little things I’ve learned and experimented with along the way, I’m sure, but I’ll save them for another spew session. The OpenSCAD code is only part of the repository. There are fun things like Liquid templating and the Pure.CSS layout framework that made building the website relatively quick, which I may write about some day as well. For now, if you have time and interest to look around, read some of the code, and let me know what you think. Or better yet, if you have time, material, and interest, have a go at building one of the projects, and let me know what you think of the instructions!

Starlink: Outage Data End of March Update

Another month has passed, in which I continued to use Starlink as my primary internet service, except when I needed to make Facetime or Zoom calls. From a subjective standpoint, I can tell you this: March was a far more frustrating experience than February.

Figure 1a: Disconnect Time Histogram, March 2-8, 2021
Figure 1b: Timeseries Connection data, March 2-8, 2021

The month started much like February ended. The graphs above are two views of the history data from Dishy. Figure 1a is a histogram of how often a disconnection (red is obstruction, blue is beta downtime, green is no satellites) or connection (yellow) of a given length happened. Fifteen thousand beta downtime disconnects of one second or less in that week. About three periods of connection lasting 30 minutes or longer. Figure 1b is the “timeseries” chart: the color of each square is the status of the connection at that second: red/blue/green are disconnections as before, black is when I don’t have data (either my collection script missed a run, or the dish was rebooting). White in this figure is just any random second that the connection was live. Yellow is only used if that second was part of a span of 30 minutes or longer where there was no disconnection lasting longer than two seconds. There are 1200 seconds (=20 minutes) per line; a day is 72 lines tall; the chart covers seven days, roughly midnight to midnight.

Figure 2a: Disconnect Time Histogram: March 9-15, 2021
Figure 2b: Timeseries Connection data, March 9-15, 2021

Things got suddenly much worse in the second week of the month. In Figure 2b, that darker blue/red band above the thin black line is March 10. This is the first day we had rain since installing our Starlink service. Far from our first weather event, but before this, it had been well below freezing for two months, so all precipitation was snow. We marveled at how little snow affected Dishy. Unless it was the wet, heavy stuff that clung to the bare tree limbs, the connection hardly noticed. Rain, however, seems to be Dishy’s nemesis.

On March 10, I reached out to support, because while our obstructions increased somewhat during the rain, our beta downtime increased far more. Their response was puzzling. I asked specifically about beta downtime, and their response was, “we have detected obstructions … [in] basically your entire field of view.” No mention of beta downtime at all.

The only way I’ve been able to explain the support team’s response is what I shared with the starlink subreddit: because Starlink is currently allowing the dish to use a lower horizon than they expect to use when the service leaves beta, they are marking obstructions that occur below the future horizon as beta downtime.

I already know I need to take care of some obstructions. It’s just now starting to get warm enough to plan that. Having a reason to believe that the super noisy beta downtime I’ve experienced might also go away with fewer obstructions, and/or a higher post-beta horizon, gives me reason to believe the effort will be worth it.

Figure 3a: Disconnect Time Histogram: March 16-22, 2021
Figure 3b: Timeseries Connection data, March 16-22, 2021

For the last two weeks, service has suddenly been much more frustrating. In February and the first half of March, browsing and streaming would very occasionally hiccup for a second. In the past two weeks, it has been a somewhat frequent occurrence that browsing and streaming just stop for several seconds. I haven’t looked at this view of the data until now, but it’s nice to see that it backs up my subjective experience. Note the change in Figure 3b from lots of little red and blue dots to lots of red and blue bars.

I have changed nothing about my setup. Dishy is in exactly the same place I put it when I first installed. I keep Dishy on 24/7, with its own router plugged in. And, this isn’t a change in the scenery around Dishy either. One of the trees to the east side of Dishy finally put out buds yesterday. The rest are still bare.

What did change was Dishy’s firmware. On the morning of March 21, Dishy rebooted and installed firmware d61f015c. That’s the lower blue band spanning the whole image except for the small green strip. The longer red/blue bars do seem to start the day before. The black band above them is roughly the start of March 20, but my notes say that Dishy was still running a8a9195a after that reboot. That firmware had been installed on March 12.

This is a beta program. It is expected that Starlink will make changes, and that not all of those changes will be obvious improvements. If anyone at Starlink is reading this, please note that that change was noticed, and it has not been an improvement.

Figure 4a: Disconnect Time Histogram: March 23-29, 2021
Figure 4b: Timeseries Connection data, March 23-29, 2021

Last week was not an especially great one for Starlink use. Figure 4b starts off with another new firmware: 5f1ea9d9. It did not improve my connection stats.

The histogram for March 23-29 (Figure 4a) shows a specific worsening trend that appeared in the previous week: fifteen second obstruction outages. Interestingly, fifteen seconds, and fifteen seconds only, saw a large jump. Obstructions lasting 13, 14, 16, 17 seconds saw no change. What’s up with fifteen seconds, specifically?

The dish installed firmware b44f4294 on March 31. The stats look the same as the past two weeks to me: large spike of fifteen second obstruction outages, and general wide bands of obstruction and beta downtime. I’ll save its charts for next month, when it will have a full week to fill the histogram.

I know someone on the Starlink subreddit is going to stamp their foot and complain about yet another person whining about their obstructions. I know I need to get Dishy up higher, and get some trees down lower. I still think it’s interesting that without changing anything, my connection statistics changed so drastically. Take it as advice to refresh your own obstruction view if your connection quality suddenly changes.

Starlink: Outage Data End of February Update

I’ve continued to analyze and plot more information about Starlink outages. I’ve also collected three more weeks of nearly continuous data, so it’s time to review how quality of service has changed.

Let’s start by replotting the data from my earlier post, using my latest code, so it’s easier to see changes.

Figure 1: Histogram of different outage lengths. Data from February 6 post replotted, covering about 66.5 hours scattered across January 31, February 1, 2, 4, and 5.

As before, the histogram in Figure 1 shows how often an outage of each length occurred. The difference between this one and the one from the earlier post is that instead of breaking up the columns by days, they’re separated by cause. Where we only knew that there were over 700 outages lasting only one second in this data last time, we can now see that that was about 300 obstructions and 500 beta downtimes (my tool also counts a few more outages than the tool I used last time).

Red bars count outages blamed on obstructions, blue are beta downtime, and green are lack of satellites. At the left side, the first set of bars counts the number of times an outage lasting only one second was observed. The next bar to the right counts outages lasting two seconds. Next three seconds, and so on. In the middle of the graph, at the point labeled “1m”, the step between the bars switches to minutes (i.e. the next bar after 1m is outages lasting two minutes). On the right half of the graph, outages with durations between two steps are counted as the lower step (e.g. 4 minutes, 45 seconds is counted as 4 minutes).
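In code terms, the bucketing amounts to something like this (a sketch, not the exact code from my viewer):

```python
def bucket(seconds):
    """Histogram bucket for an outage or connection length, following the
    plots: 1-59 seconds get their own buckets, longer durations become
    whole-minute buckets (rounded down), and anything over an hour is
    counted as 60 minutes."""
    if seconds < 60:
        return f"{seconds}s"
    return f"{min(seconds // 60, 60)}m"
```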

I’m going to add one more bar to the graph. The one thing I’ve had trouble using my Starlink connection for is video calling (Zoom, FaceTime, etc.). My connection drops for too long too often to make a long call comfortable. So, the question is, how long am I usually connected?

Figure 2: Adding connectivity lengths (yellow) to the histogram.

In Figure 2, the yellow bars count the number of times that connectivity lasted for the given duration. In the ideal world of zero outages, this looks like a single bar of height 1 at the 60m mark (because spans over 60m are recorded as 60m). This graph doesn’t show the ideal case. The most common connected duration is 2 minutes, occurring around 300 times. The longest connected duration is about 17 minutes, which occurred once.

One 17-minute span of connectivity across four days doesn’t sound great. A FaceTime call that I make every week lasts at least that long, and often closer to 30 minutes. So, multiple spans closer to that, and preferably longer, are what I’m looking for.

One thing that’s a little hard with this analysis is making sure it’s not flagging disconnections that I wouldn’t notice. So, a quick thing I’ve built in is a setting to ignore disconnects that last less than a configurable number of seconds. As a generous guess, I’ve decided to tell it that interruptions of two seconds or less are tolerable.

Figure 3: Ignoring outages lasting two seconds or less when calculating duration of connectivity.

Figure 3 has that modification. The number of one and two minute periods of connectivity has drastically decreased. Those short spans were separated from each other, or from longer spans, only by outages we are now ignoring, so they have been tacked onto their neighbors, and we have more connections lasting ten minutes or more. In fact, there are now five durations of connection lasting over 20 minutes.

Something else that’s hard is making sure that “outage” really means “outage”. These statistics are already following Starlink’s own app in only labeling a second as an outage if all pings were lost during that second (popPingDropRate = 1). Some redditors have suggested that because pings are such low priority, high throughput may cause all pings to be lost. So what looks like an outage could be exactly the opposite. To check this, I also added configuration to ignore an outage if the downlink or uplink speed recorded for that second is above a given value.

Figure 4: Ignoring “outages” where uplink or downlink throughput was at least 1Mbps

In Figure 4, seconds where the downlink or uplink speed was recorded as 1Mbps (1,048,576 bits) or higher are not treated as breaks in connection. It didn’t increase the number of connections lasting longer than 20 minutes. That may be because of the 13,187 outage seconds in this dataset, only 114 had downlink or uplink speeds of 1Mbps or more.
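Putting the two knobs together, the span calculation amounts to something like the sketch below. The field names only loosely follow what the grpc tools export, and the defaults match Figures 3 and 4: tolerate gaps of two seconds or less, and don’t count a full-ping-loss second as an outage if it carried at least 1Mbps of traffic:

```python
def connected_durations(samples, max_gap=2, throughput_floor=1_048_576):
    """Lengths, in seconds, of connected spans. Short interruptions (up to
    max_gap seconds) stay inside a span, and a second where all pings were
    lost is not treated as an outage if it still moved at least
    throughput_floor bits in either direction."""
    def is_outage(s):
        return (s["pop_ping_drop_rate"] >= 1.0
                and s["downlink_throughput_bps"] < throughput_floor
                and s["uplink_throughput_bps"] < throughput_floor)

    durations = []
    span = 0   # seconds in the current connected span
    gap = 0    # length of the current run of outage seconds
    for s in samples:
        if is_outage(s):
            gap += 1
            if gap > max_gap and span:   # too long: the span is over
                durations.append(span)
                span = 0
        else:
            if span and gap <= max_gap:
                span += gap              # keep the tolerated interruption
            span += 1
            gap = 0
    if span:
        durations.append(span)
    return durations
```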

Figure 5: Display settings in use.

That was the state of the connection in the first week of February. Let’s apply this same analysis to the three weeks since.

Figure 6: Data covering just before midnight February 8 through just before midnight February 15.

Figure 6: February 9-15. This is seven days, instead of four, so we should expect counts to be a little higher overall anyway. But, there are many connected spans counted over 20 minutes, and finally some over 30 minutes. There are even a couple over 50 minutes long! This looks like decent improvement.

Figure 7: Data covering just before midnight February 15 through just after midnight February 23.

Figure 7: February 16-22. This looks pretty similar. Multiple spans over 20 minutes, some over 30. This time there are even a couple over 60 minutes long. Very short outages are also up a bit for both obstructions and beta downtime.

Figure 8: Data covering just before midnight February 22 through just after midnight March 2.

Figure 8: February 23-March 1. This still looks like a pretty similar breakdown to me. Unfortunately, we lost the over-60-minute connections, but we still have some over-30-minute durations. All short outage categories are also up, though obstructions overtook beta downtime for 4-10 second outages. A snowstorm made my tree branches thicker.

While short outages seem to have increased slightly, it does seem that the system has improved according to the connected-time measurement. I was hopeful that the Feb 9-15 improvement was due to the satellites launched on Feb 4, and that the Feb 15 launch would therefore show further improvement in the past week. There were also a couple of firmware updates I noticed on February 15 (7db91a39-…) and 20 (a95d0312-…), so maybe those shifted these metrics as well.

Subjectively, things seem about the same. Streaming and browsing work great, even if we have become a little more sensitive to the very occasional second or two that a coincidental outage delays a page from loading. Video calling still pauses often enough that we switch back to our fixed wireless connection if we expect the call to last more than a couple of minutes.

Figure 9: Timeseries view of outages and connectivity February 23 through March 1. Each 2×2-pixel rectangle represents one second.

There is still some way to go. Figure 9 is what those very few over-30-minute connections per week look like. In this “timeseries” view, each pixel represents one second. One line, from left to right, is 20 minutes. Where the line is red, blue, or green, all pings were lost during that second. Where the line is yellow, that second is part of a 30-minute or longer span of connectivity that has no interruptions longer than 2 seconds. White are other periods of connectivity that lasted less than 30 minutes. Dark grey are times I missed downloading data, because I had shut off the house power to rewire my workshop.

I already know that I need to move my dish to remove obstructions. Bands of more densely red streaks correlate with snowstorms moving through (e.g. February 28). Dishy melts what falls on it, but it can’t melt what has fallen on the tree branches that are in the edges of Dishy’s view. Once the several feet of snow on the ground around my temporary Dishy tower begins to disappear, I’ll be working on a taller mount.

Figure 10: The same timeseries as Figure 9, but with all obstruction outages removed.

From this data, reducing my obstructions to zero would remove about half of my outages. I see just as much beta downtime as obstructions, usually more, if it’s not actively snowing. Ignoring all obstruction outages in my data, while considerably expanding the number of long clear connected periods I can expect, still reveals many stretches where clear connectivity doesn’t last long (Figure 10).

Starlink says beta downtime “will occur from time to time as the network matures.” That doesn’t sound like every couple of minutes for just a few seconds to me, so I’ve tried a number of things to figure out whether all of this beta downtime is mislabeled. The periodic patterns I saw in the obstruction data in my raster-scanning post aren’t as visually obvious in the beta downtime data. Segments of beta downtime are sometimes (about 20% in the last week) immediately preceded or followed by obstruction downtime. Reclassifying those segments as obstructions, and ignoring them does make an appreciable difference in the amount and length of clear connectivity. But is ignoring them correct? Some redditors report frequent beta downtime even with zero obstructions.
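The reclassification experiment is easy to describe in code. Roughly (a sketch, with made-up label names, not exactly what my viewer does):

```python
def reclassify_beta(statuses):
    """Relabel runs of "beta" downtime as "obstructed" when the run is
    immediately preceded or followed by an obstructed second."""
    out = list(statuses)
    i, n = 0, len(out)
    while i < n:
        if out[i] != "beta":
            i += 1
            continue
        j = i
        while j < n and out[j] == "beta":
            j += 1
        before = out[i - 1] if i > 0 else None
        after = out[j] if j < n else None
        if before == "obstructed" or after == "obstructed":
            out[i:j] = ["obstructed"] * (j - i)
        i = j
    return out
```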

For now, I’ll continue to enjoy mostly-fast, mostly-up, decently-priced service, and watch the effects of the next satellite launch and the spring thaw.

If you’d like to play with this data and the viewer yourself, I’ve published it as the 2.0 release on the github repo.

Starlink Raster Scan?

Update, June 2021. This month, Starlink released an update to their mobile app that produces an image similar to what I was trying here. Hopefully some day they talk publicly about the feature, so I can learn how far off I was on implementation details.

The Starlink app, whether on a mobile device, or in a web browser, will tell you in which direction the dish regularly finds something blocking its view of the satellites. I’ve had it in my head for a while that it should be able to do more than this. I think it should be able to give you a silhouette of any obstructions.

Figure 0: A satellite dish records a strip of successful/unsuccessful satellite connection moments as the satellite passes through the sky, sometimes behind obstructions.

As a satellite passes through the sky above the dish, the “beam” connecting the two follows it, sweeping across the scene (Figure 0). The dish repeatedly pings the satellite as this happens, and records how many pings succeeded in each second. When the view is clear, all, or nearly all, pings succeed. When there’s something in the way all, or nearly all, pings fail. In theory, if the dish stays connected to the same satellite for the whole pass, we end up with a “scan line” N samples (= N seconds) long, that records a no-or-low ping drop rate when nothing is in the way, and a high-or-total ping drop rate when something is in the way.

One line isn’t going to paint much of a picture. But, the satellite is going to pass overhead every 91 to 108 minutes. The earth also rotates while this happens, so on the next pass, the satellite will be either lower in the western sky, or higher in the eastern sky. On that pass, we’ll get a scan of a different line.

But 91 minutes is a long time for the earth to rotate. That’s farther than one time zone’s width, nearly 23º of longitude. Since the beam is tight, we’ll have a wide band between the two scans in which we know nothing. However, each satellite shares an orbit with 20 or more other satellites. If they’re evenly spaced, that means the next satellite should start its pass only about 4 minutes after the previous one. That’s conveniently only about 1º of longitude. If the dish reconnects to the next satellite in an orbital reliably at a regular interval, we should get 20-ish scan lines before the first satellite comes around again.[1]

But are 1º longitude scanlines enough? Before we get into the math, let’s look at some data. I’ve created a few simple scripts to download, aggregate, and render the data that Starlink’s dish collects. With over 81 hours of data in hand – 293,183 samples – I can make Safari complain about how much memory my viewer is using … er, I mean I can poke around to see what Dishy sees.
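All of the raster figures below come from the same trick: wrap the per-second list at a guessed row length and shade each cell. Roughly, with matplotlib standing in for my actual renderer and a simplified name for the obstructed-ping-drop fraction:

```python
import numpy as np
import matplotlib.pyplot as plt

def raster(drop_fractions, samples_per_row):
    """drop_fractions holds one value per second, 0.0 (no pings dropped to
    obstruction) through 1.0 (all pings dropped). Wrap the list at
    samples_per_row and shade white-to-red."""
    d = np.asarray(drop_fractions, dtype=float)
    pad = (-len(d)) % samples_per_row              # fill out the last row
    grid = np.concatenate([d, np.zeros(pad)]).reshape(-1, samples_per_row)
    plt.imshow(grid, cmap="Reds", vmin=0.0, vmax=1.0, aspect="auto")
    plt.title(f"{samples_per_row} samples per row")
    plt.show()

# Hunting for the satellite-pass period is just re-rendering at different widths:
# for width in (240, 248, 295, 330, 332): raster(drops, width)
```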

Figure 1: 81 hours of obstruction data, represented as one 4×4-pixel square per second, 600 seconds per line, white = no pings dropped via obstruction, dark red = all pings dropped via obstruction

In Figure 1, I’ve plotted ping drops attributed to obstructions at one second per 4×4-pixel rectangle. Solid red is 100% drop, and the lighter the shade the less was dropped, with white (or clear/black for those viewing with different backgrounds) being no drops. There are 600 samples, or 10 minutes, per line. It doesn’t look like much beyond noise, so let’s play around.

Figure 2: signal-to-noise ratio data at the same scale, white = full signal (9), dark grey = no signal (0)

Figure 2 is the signal-to-noise ratio data instead. White/clear means signal was full (9), solid grey means signal was absent (0), with gradations in between. Still mostly noise, except for the obvious column effect. Those columns are 15 samples wide. So something happens every 15 seconds. It’s not clear what – it could just be an artifact of their sample recording strategy – but that’s as good of a place to start as any for a potential sync frequency.[2]

Figure 3: obstructions plotted at 240 samples per row

So let’s drop down to our guesstimated 4 minutes between satellite frequency. With 240 seconds per row (Figure 3) … mostly everything still looks like noise. Let’s start by guessing that the period between satellites is longer.

Figure 4: obstruction data at 330 samples per row

I clicked through one-second increments for quite a while, watching noise roll by. Then something started to coalesce. At 330 seconds (5.5 minutes) per row (Figure 4), I see two patterns. One is four wide, scattered, red stripes running from the upper right to the lower left. The other is many small red stripes crossing the wide stripes at right angles. Given that this persists over the whole time range, I don’t think it’s just me seeing form in randomness.

Figure 5: obstruction data plotted at 332 samples per row

Advancing to 332 seconds per stripe (Figure 5) causes the small red stripes to pull together into small vertical stacks. Especially in the later data, some of these blobs seem to fill out quite a bit, encouraging me to see … something.

But here I’m fairly stuck. Doubling or halving the stripe size causes the blobs to reform into other blobs, as expected given their periodicity. But nothing pops out as obviously, “That’s a tree!” I experimented with viewing SNR data instead. It does “fill in” a bit more, but still doesn’t resolve into recognizable shapes.

It’s time to turn to math. I think there are two important questions:

  1. How much sky is covered in a second? That is, what span does the width of a pixel cover?
  2. How much sky is skipped between satellite passes? That is, how far apart should two pixels be vertically?
Figure 6: earth (green circle) with high and low starlink orbits (blue circles)

If I draw the situation to scale (Figure 6), with the diameter of the earth being 12742km, and the satellites being 340 to 1150km above that – giving them orbital diameters of 13422 to 15042km, there’s really not enough room to draw in my geometry! So I’ll have to zoom in.

Figure 7: exaggerated triangles representing the math to compute the width of a sample in our scene

We can start estimating how big our pixels are by comparing similar triangles. The satellites move between 7.28 and 7.70 km/s. If we’re looking straight overhead, for our purposes at these relative distances (340 to 1150km), we can consider that 7km to be a straight line, even though it does have a very slight curve. In that case, we can just scale the triangle formed by the line from us to the satellite’s T=0 position and the line from us to its T=1sec position into our scene (Figure 7). If the scene objects are 20m (0.02km) away, then the width of one second at that object is 0.02km * 7.7km / 340km = 0.00045km, or just under half a meter. Compared to the higher, slower orbit, it’s 0.00012km, or 12cm. At 12 to 45cm, we’re not going to see individual tree branches. Resolution will actually get a bit better when the satellite isn’t directly overhead, because it will be further away and so the perceived angle of change will be smaller. But for the moment, let’s assume we don’t do better than half that size.

On to estimating the distance between scan lines. Wikipedia states that there are 22 satellites per plane.[3] If these are evenly spaced around the orbit, we should see one every 4.14 to 4.91 minutes (248.18 to 294.55 seconds). If the earth rotates once every 23hr56m4s, then that’s 1.038º to 1.231º. At the equator, that’s 115.42 to 136.881km. I’m just above the 45th parallel, where the earth’s circumference is only 28337km, so the change in distance here is only 81.705km to 96.897km. If we change our frame of reference, and consider the satellite orbital to have moved instead of the earth, we can use the same math we did last time. To estimate, this distance (81km/satellite) is approximately one order of magnitude larger than the last ones (7km/s), so we can just multiply everything by ten. Thus, our scan lines should be 1.2m to 4.5m apart.
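For anyone who wants to check the arithmetic, both estimates reduce to a few lines. The 20m obstacle distance is a guess about my own yard, and the exact results land a little differently than the times-ten shortcut above (more like 1.7m to 4.8m between lines), but it’s the same ballpark:

```python
OBSTACLE_KM = 0.020            # trees roughly 20m from the dish (my guess)
ALTITUDES_KM = (340.0, 1150.0)
SPEEDS_KMS = (7.70, 7.28)      # faster at the low altitude, slower at the high

# 1. Width of one one-second sample, by similar triangles:
#    (obstacle distance / satellite distance) * distance the satellite covers in 1s
for alt, v in zip(ALTITUDES_KM, SPEEDS_KMS):
    print(f"{alt:.0f}km orbit: sample width ~{OBSTACLE_KM / alt * v * 1000:.2f} m")
    # -> ~0.45m for the low orbit, ~0.13m for the high one

# 2. Spacing between scan lines: how far the next satellite's track appears to
#    have shifted (really the earth turning underneath) after one spacing period.
SIDEREAL_DAY_S = 23 * 3600 + 56 * 60 + 4
CIRCUMFERENCE_45N_KM = 28337   # earth's circumference at my latitude
for alt, gap_s in zip(ALTITUDES_KM, (248.18, 294.55)):  # 91 or 108 min / 22 sats
    shift_km = gap_s / SIDEREAL_DAY_S * CIRCUMFERENCE_45N_KM
    print(f"{alt:.0f}km orbit: line spacing ~{OBSTACLE_KM / alt * shift_km * 1000:.1f} m")
    # -> ~4.8m for the low orbit, ~1.7m for the high one
```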

At 12 x 120cm per sample, we’re not going to be producing photographs. At 45 x 450cm, I doubt we’re going to recognize anything beyond, “Yes, there are things above the horizon in that direction.” Let’s see if anything at all compares.

What parameters should we use to generate our comparison scan? If we’re seeing satellites pass in 4.14 minute (91 minutes / 22 satellites) intervals, we should guess that a scan line will be about 248 seconds. If they’re passing every 4.91 minutes, we should guess about 295 seconds.[3] Given the aliasing that integer math will introduce, the fact that 4.14 and 4.91 are kind of the minimum and maximum, and that the satellites won’t sit at exactly those altitudes, it’s probably worth scanning from about 240sec to 300sec, to see what pops up. I see what look like interesting bands showing up at 247, 252, 258, and 295, at least. Maybe I’m catching satellites at a band between the extremes?

But then why was 330-332 the sweet spot in our pre-math plot? Maybe I’m just indulging in numerology, but 330 = 22 * 15. Twenty-two is the number of satellites in an orbital, and 15 is the width of the columns we saw in the SNR plot. Could it be that satellites are not evenly spaced through 360º of an orbital, but are instead always 5.5 minutes (330 seconds) behind each other?[3] If that were the case, the orbital would “wrap” its tails past each other. That seems odd, because you’d end up with a relative “clump” of satellites in the overlap, so maybe there’s a better explanation for the coincidence.

In any case, I’m going to forge on with an example from the 332-sample stripe, because its blobs look the strongest of any to me. Let’s also redraw it with the boxes ten times as tall as they are wide, since that’s what I calculated to be the relationship between one satellite’s samples and the next satellite’s samples. If I overlay one of those clumps on the northward view I shared in my last post, does it line up at all?

Figure 8a: Select a blob
Figure 8b: Rotate and scale the blob

I’ve stared at this for far too long now, and I have to say that this feels worse than the numerology I indulged in a moment ago. I’m starting to worry I’ve become the main character of the movie Pi, searching for patterns in the randomness. If there’s something here, it needs a lot more knowledge about satellite choice and position to make it work. Even if I adjusted the rendering to account for the correct curve of the satellite’s path and the camera’s perspective, the data is too rough to make it obvious where it lines up.

With some basic information like which satellite the dish was connected to for that sample, and the database of satellite positions, I’m pretty sure it would be possible to throw these rectangles into an augmented-reality scene. Would it be worth it? Probably not, except for the fun of doing it. The obstruction diagram in the Starlink app (Figure 9) divides the horizon into twelve segments. If it shows red in one 30º segment, it’s the tall thing you can see in that segment that is causing the obstruction. This additional data may be able to narrow within the segment, but if there are multiple tall things in that segment, they’re probably all obstructions.

Figure 9: Starlink app’s obstruction diagram

So, while this was a fun experiment, this is probably where it stops for me. If you’d like to explore your own data, the code I used is in my starlink-ping-loss-viewer repo on github. The data used to generate these visualizations is also available there, in the 1.0 release. Let me know if you find anything interesting!

Figure 10: Whole-second full-ping loss attributed to obstruction (red) or beta downtime (blue)

… and just one more thing before I sign off, following up on my past notes about short, frequent Starlink outages. Figure 10 is a rendering of my obstruction (red) and beta (blue) downtime over this data. I’ve limited rendering to only d=1 cases, where all pings were lost for the whole second, since this seems to be the metric that the Starlink app uses for labeling time down. One rectangle per second, 10 minutes per row. The top row begins in the early afternoon on February 9, and the bottom row ends just before midnight on February 12, US central time.

Dishy dressed up for the grid analysis. We see too many posts about Dishy’s icicle beard, and not enough about Dishy’s cool water droplet matrix.

Updates (footnotes):

[1] Many thanks to u/softwaresaur, a moderator of the Starlink subreddit, for pointing out that routing is far more complex, since active cells are covered by 2 to 6 planes of satellites, so it’s likely unrealistic to connect to several satellites in the same plane in a row.

[2] From the same source, routing information is planned on 15 second intervals. At the very least, this means that the antenna array likely finely readjusts its aim every 15 seconds, whether or not it changes the satellite it’s pointing at.

[3] Again from the same source, while 22 satellites per plane was the plan, 20 active satellites per plane was the reality, though this has now been adjusted to 18. That fits the cycle observation better, as 18 satellites at a 91-108 minute orbit is 5 to 6 minutes between satellites.

Rural Internet: Starlink Outage Data

In my last post, I talked about how frequent, short outages prevent video calling from being comfortable on Starlink. If you were curious about exactly how short and how frequent I meant, this post is for you.

Starlink’s satellite dish exposes statistics that it keeps about its connection. The small “ping success” graphs I shared in the last post are visualizations provided by the Starlink app, which are driven by these statistics.

Thanks to starlink-grpc-tools assembled by sparky8512 and neurocis on Github, I have instructions and some scripts to extract and decode these statistics myself. I haven’t been great at collecting the data regularly, but I have six bundles of second-by-second stats, each covering 8-12 hours. (February 1 saw a couple of reboots, so the segments there are approximately 7.5 and 11 hours, instead of 12 for the other segments.)

The raw data exposes a per-second percentage of ping success. It’s somewhat common for a single ping’s reply to go missing. Several pings are sent per second, though, and one missing every once in a while is mostly no big deal. The script I’m using tallies the number of times /all/ of the pings within a given second went missing (percent lost = 100, or “d=1” in the data’s lingo). It also tracks “runs” of seconds where all of the pings in contiguous seconds went missing.
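To make the run tracking concrete, here’s a simplified sketch of the idea (not the actual starlink-grpc-tools script): given one ping-drop fraction per second, count how many outages of each length occurred.

```python
from collections import Counter

# Simplified run tally: drop_fractions holds one ping-loss fraction per second,
# where 1.0 means every ping in that second was lost ("d=1").
def outage_run_lengths(drop_fractions):
    runs = Counter()
    current = 0
    for d in drop_fractions:
        if d >= 1.0:
            current += 1           # extend the current outage
        elif current:
            runs[current] += 1     # outage ended; record its length
            current = 0
    if current:
        runs[current] += 1         # data ended mid-outage
    return runs

# Example: two 1-second outages and one 3-second outage.
print(outage_run_lengths([1, 0, 1, 0.5, 1, 1, 1, 0]))  # Counter({1: 2, 3: 1})
```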

Figure 1: count of each length of outage.

These first two graphs (Figure 1) explain what I mean by “frequent” and “short”. This histogram displays one bar per “run length” of all-pings-lost seconds. That is, the left-most bar tracks when all pings were lost for only one second, the next bar to the right tracks when all pings were lost for two consecutive seconds, the third bar tracks when all pings were lost for three consecutive seconds, and so on. The height of the bar represents the number of times an outage of that length was observed. The histogram is stacked, so that the outages on the morning of February 1 (green) begin where the outages on January 31 (blue) end.

Over the 66.5 hours for which I have data, we counted 739 1-second outages. That’s an average of just over eleven 1-second outages per hour, or just slightly more often than one every 6 minutes. The decay of this data is pretty nice: two second outages are approximately half as likely (344, averaging just over 5/hr, or just under every 12 min), three-second outages just a bit less than that, and so on. By the time we get to 8 seconds, we’re looking at only one per hour.

If we look at only the 1s-8s outages, i.e. those that on average happen once per hour or more, we have a total of 2018. That’s an average of just over 30 disconnects per hour, or one every two minutes. For once, data proves the subjective experience correct. On a video call, it feels like you get something between a hiccup and a “the last thing I heard you say was…” every couple of minutes.
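Those rates are nothing fancier than counts divided by the 66.5 hours of data; a quick sketch to reproduce them:

```python
# Reproducing the per-hour rates quoted above from the raw counts.
hours = 66.5
for label, count in (("1-second outages", 739),
                     ("2-second outages", 344),
                     ("1s-8s outages", 2018)):
    per_hour = count / hours
    print(f"{label}: {per_hour:.1f}/hr, one every {60 / per_hour:.1f} min")
# 1-second outages: 11.1/hr, one every 5.4 min
# 2-second outages: 5.2/hr, one every 11.6 min
# 1s-8s outages: 30.3/hr, one every 2.0 min
```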

The right-hand graph is laid out in the same way, but the bars represent minute-long outages. You can just barely see a few counted as 1-minute and 2-minutes in length. Last Thursday, February 4 (red), was the first time we’ve had a significant Starlink outage, long enough for me to spend time poking around trying to figure out if it’s “just us or everyone.”

I’ve been mostly concerned with frequency – how often I can expect outages of each severity. The tool I’ve used to extract the statistics data exposes the outages differently. It is instead concerned with the total amount of downtime observed.

Figure 2: Cumulative downtime, grouped by outage length.

These graphs (Figure 2) are the data as the extraction tool provides it. Each bar represents outages of a certain length, as before. But now the height of the bar represents the total number of seconds of downtime they caused. The 1-second and 2-second bars are now about the same height because there were about half as many 2-second outages as 1-second outages, but they each lasted twice as long. The total amount of downtime they caused is about the same.
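In other words, Figure 2 is Figure 1 re-weighted by outage length. A minimal sketch of that conversion, using the counts from earlier:

```python
# Convert a frequency histogram (length -> count) into total downtime per length.
def downtime_by_length(run_counts):
    return {length: length * count for length, count in run_counts.items()}

print(downtime_by_length({1: 739, 2: 344}))  # {1: 739, 2: 688} -- nearly equal bars
```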

That giant red bar that has appeared in the right-hand graph is eye-catching. Thirty-seven and a half minutes of downtime, caused by one 37-minute outage. That 1-minute outage stack is quite a bit taller too, accounting for ten minutes of total downtime itself. This is how the significant outage on Thursday appeared to us. There was a large chunk of time where we obviously had no connection to the internet (37 minutes), surrounded by quite a bit of time where we’d start getting something to download, but then it would stop (ten 1- and 2-minute outages).

The sum of all 1-second-or-longer downtime we experienced in this 66.5 hours of data is 14686 seconds, or just over 4 hours. That’s roughly 94% uptime.
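That uptime figure is simple arithmetic over the observation window:

```python
# 14686 seconds of downtime over 66.5 hours of observation.
total_seconds = 66.5 * 3600
downtime_seconds = 14686
print(f"{downtime_seconds / 3600:.1f} hours down, "
      f"{100 * (1 - downtime_seconds / total_seconds):.1f}% uptime")
# -> 4.1 hours down, 93.9% uptime
```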

Figure 3: Limiting the vertical axis to a count of 50 reveals low-count outage lengths.

We didn’t see the 37-minute outage in the earlier frequency graphs, because it has only happened once. If we zoom in on those graphs (Figure 3), so that most of the 1-13s bars are way off the chart, we can see a few more one-time-only outages. Each day has had some small hiccup in the “long tail” of over twenty seconds. I see hope in the fact that the grey color, which is the most recent data (from the day after the long outage), is nearly absent from the longer-run counts.

I’m curious about the sharp decline between 13 and 14 seconds. Is that a sweet spot for some fault recovery in Starlink’s system, or is it just an aberration in my data? I’ll have to keep collecting to see if it persists.

I’ve posted the summary data I used to generate these graphs in a gist on github.

Rural Internet: Starlink

At the end of my last post about the state of rural internet, I mentioned that we were about to try something new: Starlink by SpaceX. We’ve been using it as our primary internet connection for two weeks now, and TL;DR it would be tough to give it up, but it does have some limitations.

One of my first Speedtest.net results on Starlink.

Download speed via Starlink is excellent. Samples I’ve taken via Speedtest.net over my wifi have never measured less than 30Mbps. Most samples are in the 60-80Mbps range. My highest measurement was 146Mbps. Upload speed via Starlink is also excellent. Speedtest measures it anywhere from 5 to 15Mbps. Ping latency bounces around a little bit, but is usually in the 40-50ms range.

Typical speeds I measured via fixed wireless were 20Mbps down, 3Mbps up. So Starlink, in beta, is already providing a pretty consistent 3-4x speed improvement. I no longer worry about downloading updates while trying to do anything else on the internet.

A typical view in the Starlink app’s statistics panel.

Unfortunately there is a “but”, because while the speed is great when it’s running, the connection drops for a second or five every few minutes. The dish’s statistics indicate that these interruptions are about half due to Starlink making updates (“beta downtime”) and half due to the trees blocking my dish’s view of the sky (“obstructions”). I’ll be working on the latter when the weather warms, and they’re constantly working on the former.

Mid-winter Northwoods mount: four rows of concrete block put the middle of Starlink’s dish about four feet off the ground.
My stitching of the Starlink app’s obstruction view, facing northward from approximately where the dish is sitting. This is the clearest view I’ll have until the weather warms enough to try other mounts.

These short interruptions have almost no effect on browsing or streaming. Every once in a while, a page will pause loading for a moment, or a video will re-buffer very early on. I notice it only slightly more frequently than I remember cable internet hiccups.

But what these short interruptions do affect is video calling. Zoom, Facetime, etc. are frustrating. It /almost/ works. For two, three, five minutes everything is smooth, but then sound and video stop for five to ten seconds, and you have to figure out what the last thing everyone heard or said was. My wife participated in a virtual conference this past week, and she tried Starlink each morning, but switched back to fixed wireless after the second or third mid-presentation hiccup each day.

Complete outage, possibly to do with new satellites launched the night before?
Outage confirmation on the support site.

And yet, there’s also a silver lining to the outage story. One of our frustrations with our fixed wireless provider is that we’ve had several multi-hour outages over the last three months. On Thursday, we finally had a two-hour Starlink outage. Why is that a silver lining? When I loaded Starlink’s support page over my cellphone’s limited 4G connection (remember, my wife was video conferencing on fixed-wireless), they had a notice up that they knew about the outage in our area, and listed an expected time of resolution. That sort of communication is something we have never gotten from our fixed-wireless provider. It completely changes how I respond to an outage, and it gives me hope that Starlink better understands what people expect from internet service today.

If you’re curious whether data backs up my subjective review of Starlink connectivity, please continue to my next post, which includes the dish’s own statistics.

The comparative price of the two solutions is nearly a wash. Starlink hardware is $500 plus shipping and handling (another $50). Our fixed wireless installation was $119, with the option to either buy the antenna for an additional $199, or rent it for $7/mo. That makes Starlink at least $200 more expensive up-front, without including any additional mounting considerations (brackets, tower, conduit, etc.). And don’t get me wrong, while the setup seemed simple to me, the value of professional installation and local, in-person troubleshooting should not be overlooked.
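For the record, the up-front arithmetic behind that comparison, using the numbers above:

```python
# Up-front cost comparison.
starlink_upfront = 500 + 50              # hardware + shipping and handling
fixed_wireless_buy = 119 + 199           # installation + antenna purchase
fixed_wireless_rent = 119                # installation only, antenna at $7/mo
print(starlink_upfront - fixed_wireless_buy)   # 232 -- "at least $200 more"
print(starlink_upfront - fixed_wireless_rent)  # 431 -- before counting rental months
```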

But once everything is in place, the monthly costs are the same: $99. For fixed wireless, that gets me 25Mbps that handles video calls well, but goes out overnight. Starlink is currently a no-guarantees beta, marketed as “better than nothing” for people who can’t get even my alternatives. Even in this state, it’s providing 4x more speed for me, with better communication about downtime. I think they’ll have no trouble selling these to loads of people, and if they significantly improve the video-calling experience, they’ll put fixed-wireless out of business.