Pandemic Coping Strategy: Give Generously

Hello, BeerRiot Blog readers! I’m Amanda, Bryan’s wife. Bryan has offered to let me guest-post on his blog and share some things that are more aligned to my interests than his. Like Bryan, I have varied interests and hobbies, and among them is personal finance.

Recently, Bryan has posted on social media about donations we’ve made to causes that are important to us. This prompted me to share some information about our approach to charitable giving during the past year or so. We recognize that we are in an extremely privileged position to even be able to discuss this.

Bryan and I had been preparing for years to eventually take some time away from paid work. One part of that plan was a way to continue charitable giving when we weren’t employed. We felt that if we had to stop supporting charitable causes to make our post-employment life financially feasible, then we didn’t actually have the resources to leave our jobs and still have the life we wanted. Fortunately, there was a way for us to prepare for that: a donor-advised fund.

A donor-advised fund (DAF) allows an individual or organization to make charitable contributions to a fund and then recommend grants from the fund to specific charities over time.

We set up our DAF, the Zoellner-Fink Family Fund, in the fall of 2019. Our financial philosophy prioritizes low overhead costs and simplicity, so we focused our research on DAFs affiliated with Vanguard and Fidelity, where we already have accounts. Fees and structures were comparable, but we chose Fidelity Charitable because of their minimum grant amount of just $50. We wanted the option to recommend smaller grants for things like a child’s school fundraiser or a memorial gift. It was easy to set up our account online, choose a name, set an asset allocation, and fund the account with appreciated stock from Bryan’s previous employers. We have not made additional contributions to the fund since we set it up, but it can be added to at any time.

It’s important to know that there are things you can’t do with a DAF:

  • You can’t give directly to individuals, like in a GoFundMe campaign.
  • You can’t make grants from which you ultimately receive a benefit. For example, you can’t use a DAF to buy yourself tickets to attend a fundraising event.
  • You can’t contribute to some international causes.
  • You can’t make political/lobbying contributions.
  • You can’t take the money back.

Fortunately, those restrictions haven’t limited our giving at all!

We began making grant recommendations from our DAF in February 2020 by switching what we had previously given through monthly credit card charges to recurring annual grant recommendations. We also recommended grants in response to donation requests from organizations we had supported in the past and wanted to continue to support.

By March 2020, the whole world was feeling the impacts of COVID-19, and we were voluntarily jobless and transient! We were grateful to be safe and healthy and to have the resources to stay that way, but it was clear that so many people were suffering. Then George Floyd was murdered in Minneapolis, blocks from where my brother used to live, and we learned of too many people of color who had suffered similar fates. The presidential campaign staggered on and left us despairing. It felt like the world was spinning out of control, and we were powerless.

So we started making grant recommendations. Even if we needed to stay isolated, we could still put money into the hands of organizations doing important work.

  • We made extra donations to charities we had previously supported, so they could continue or ramp up their work amidst uncertainty.
  • We talked with friends and family who are directly connected to specific communities in need and got recommendations for more charities to support.
  • We researched and donated to charities that work to uplift the voices and respond to the needs of communities that deserve to be heard and that are disproportionately impacted by the pandemic.

In the past thirteen months, Fidelity Charitable has disbursed nearly $15,000 in grants on our behalf, with no impact on our personal finances. Because markets have gone up overall since we set up the DAF, our fund balance is still about what it was when we opened the account. “Past us” gave a wonderful gift to “future us”: the ability to be generous.

In the before times, we’d targeted giving about $5,000/year, and that felt like a lot. After this year, it’s clear that we can give more without worrying about depleting our fund. Instead of impulse-shopping, we’ve been impulse-donating.

  • Local food pantry shelves are depleted? Let’s give them some money!
  • The beloved, inspiring RBG dies? Honor her memory with a grant to Planned Parenthood.
  • Marketplace and Make Me Smart podcasts keep us grounded during a terrifying mid-pandemic cross-country drive? Show our appreciation with a grant to American Public Media.
  • People try to erase the experiences of non-cis/het people in proposed Nebraska public school health curriculum? ACLU Nebraska and HRC Foundation get some money!

It has been a bright spot in this tumultuous year that we can continue to support charities that do such important work, both in the pandemic and after. In future years, perhaps we’ll have the bandwidth to plan in advance where we will donate and do more research to ensure donations benefit organizations that are as effective as possible. For now, however, giving provides some comfort at a time when we need it.

Disclaimer: We are not financial professionals, and even if we were, we don’t work for you. This is merely a recounting of our experiences and is not intended as advice.

Geodesic Dome

By a half-planned chain of events, I’ve spent the last six weeks of COVID-19 Shelter-In-Place over 2000 miles from my woodworking tools. Instead of diving right into a new construction after my dresser, I cleaned and then packed my shop, in preparation for a move. While our belongings have made their way across the country, we have stayed behind to “quaranteam” with a friend-couple, their young son, and their dogs.

We have entertained ourselves with other hobbies: walks to keep everyone moving, cooking delicious meals, reading books, and making music. A few ideas for construction projects have risen during that time, but with few tools and difficulty acquiring wood while maintaining social distance, none of them have been undertaken.

Then one of us saw a post about a geodesic dome made of cardboard. The shape alone immediately captured the attention of the four Xennial-age adults in the house. When we recognized that cardboard was the one material in abundance here, from six weeks of contactless deliveries, wheels were set in motion.

Google found a calculator for ordering bits of PVC based on the size and complexity of the desired dome. Reverse-engineering that math led to a very simple cut list for a “2V” geodesic dome of paper:

• Ten equilateral triangles, with sides of length A

• Thirty isosceles triangles, with one side of length A and two of length 7/8 * A

Seven-eighths isn’t exactly what the calculator produced, but it’s only about 1% off, it makes measurement simple, and it has worked in my experiments.

The size of the dome that is built is also related to A in a very simple way: the radius is the golden ratio times A. A compressible, bendable material like paper and tape, worked with common tools like scissors or a box cutter, introduces enough error that using many decimal places didn’t make sense. Simplifying to an estimated height of 1.5 * A and width (diameter) of 3 * A proved close enough for toy structures.

I’ll include some examples of how specific measurements work out later, but before I annoy people by showing how neatly these work out in Imperial units, I’d like to explain how no particular units are necessary at all. Grab a stick or a string, and I’ll walk you through how to build your own geodesic dome without any arithmetic.

Step 1: Sizing your dome

Figure out where you want to put your dome. Is it a decoration for your desk, or a fort to play in? Find a piece of string, a stick, a strip of paper, etc. that you can cut to the desired width (diameter) of your dome. Before you cut it, find its halfway point, and hold it up in the approximate middle of where you will place your dome. This is about how tall your dome will be (the dome approximates a sphere, so you get one half diameter up from the ground). When you have found a size you like that fits your space, move on to step two.

Step 2: Making your tools

Cut your string, stick, or strip of paper to a length equal to the dome width that you chose in step one. Then cut that piece into three equal segments. I used a paper strip for my measuring device, so after cutting mine to the full length, I folded it into thirds and then cut through the folds:

Label one of the cut pieces “8” (eight).

Cut off 1/8th of one of the other pieces. The easiest way to do this is to first find the middle point of that piece. Then find the point halfway between the middle point and one end. Finally, find the point halfway between that point and the end. Cut through that final halfway point. I folded my paper three times and cut through the third fold to do this:

Label the piece you just cut “7” (seven).

Step 3: Equilateral Triangles

If the edge of your dome-building material isn’t straight, draw a line on it using a straightedge. Using your “8” piece, mark divisions along your straight edge.

Now for the tricky part. Put one end of your “8” piece right on the left-most mark (or corner) of your straight edge, and angle it up so that the other end is somewhere near where you expect the apex of an equilateral triangle to be. Mark a dot on your building material at that end of your “8” piece. Do this a few more times, swinging both a little clockwise and a little counter-clockwise from that spot.

Connect these dots in the arc they form.

Next move the lower end of your “8” piece to the next division mark to the right on your straight edge. Swing the other end up until it crosses the arc you just drew. Mark the point at which it crosses the arc.

Draw a line from each of the division marks you just used to the arc-crossing point you just found. You have just marked your first equilateral triangle!

If you’re building a large dome, and/or working with pieces of material that won’t allow you to get multiple triangles out of one piece, you can skip the next few steps. Cut out this first triangle you have marked, and then use it as a template to trace out nine more identical triangles.

If you’re working with a piece of material that will fit multiple triangles, repeat the arc-crossing process at the right-most division of your straight edge.

Draw a line connecting the points of the two triangles.

Using your “8” piece, divide the line between the triangle tips.

Connect the division markers on your straight edge to the division markers on the line between the triangle tips. You have now marked out many more equilateral triangles!

You will need ten of these triangles. If you’ve already marked ten, you’re done. If you need to mark more, try extending your angled lines farther upward. When they cross, they will either make more triangles or diamonds. If they make diamonds, draw a horizontal line connecting the corners to make two triangles.

Cut out your equilateral triangles. Make sure you end up with ten!

Step 4: Isosceles Triangles

The process for the isosceles triangles is the same as it was for the equilateral triangles with one difference: use the “7” piece when finding the arc crossing. Use the “8” piece, as before, to mark divisions along your straight edge, but use “7” to find the crossing point from there.

You will need thirty of these isosceles triangles. Yes, 30.

Step 5: Pentagon Assembly

Time to start assembly. Looking at a finished geodesic dome, the eye is drawn to two (non-triangular) shapes: pentagons and hexagons. I’ve had success with assembling pentagons first, so that’s what I’ll show here.

Collect five isosceles triangles (the ones with two “7” sides and one “8” side). Arrange them in a circle so that all of the “8” sides are pointing out, and all the “7” sides are next to other “7” sides.

Connect four of the “7”-side seams together. A gap should develop in the fifth seam.

Draw the gap together. The pentagon will cup slightly. Seal the seam, and the pentagon will stay cupped.

Repeat this pentagon assembly five more times. You should end up with six pentagons, using all 30 of your isosceles triangles.

Step 6: Connect it All Together

This is where construction will really begin to get unwieldy. If you’re building a large dome, I strongly suggest recruiting at least one person to help hold pieces; two if you can get them.

Collect one pentagon and two equilateral triangles (the ones with three “8” sides). Connect each triangle to two adjacent sides of the pentagon.

From here, connect a pentagon into the space between the two equilateral triangles. This will introduce more cupping, like when you sealed the fifth seam in the pentagon. Continue to alternate pentagons and equilateral triangles, growing this strip until you have only one pentagon left unconnected (you should have five pentagons attached to five pairs of equilateral triangles). Connect the two equilateral triangles at the end of your strip to the pentagon at the start of your strip. You should have a ring that has a pentagonal hole in one side. Tape the remaining pentagon into this hole, and your geodesic dome will be complete!

Apologies for a lack of build pictures of these steps. The pieces pictured so far are being mailed in an envelope as a small birthday gift. But here they are, laid out and ready for final taping.

And here is an annotated diagram of what gets taped where. Purple 1-5 are the pentagon seams. Yellow 1-10 are the remaining alternating-pentagon-triangle seams (9 and 10 appear twice to indicate where the wrap-around connects). Red 1-5 are the roof seams (and 2-5 are duplicated to show where the pieces connect).

And one more shot with a completed dome in the opposite color scheme, to aid in visualization.

What next?

If you followed along, I hope your first dome was successful. If you’re wondering about the dimensions of the domes in my pictures, they are these:

Small dome, with blue pentagons: A = 2 inches. Isosceles sides = 1.75 inches. Height is just a bit over 3 inches. Width is just a bit over 6 inches.

Small dome, with green pentagons: A = no idea. I purposely didn’t measure anything, to make sure I wasn’t lying about being able to build this without numbers.

Large dome, made of cardboard: A = 24 inches. Isosceles sides = 21 inches. Height is just a bit over 3 feet. Width is just a bit over 6 feet. The additional ring around the bottom is ten inches tall. We have fit four adults and one child inside. It’s close, but not cramped.

Good luck with your next build!

Project Box: Planning

While I think about how to tell you about the process of fitting the internal components of this box, I’m going to talk about planning.

box-build-planning - 1.jpg

The image above is the whiteboard in my shop, as it was at the end of this project. I’ve lost some of the context about what each scribble meant, but there are three obvious diagrams: the dovetails, the hinges, and the latches and handle. None are to scale. None indicate relationships to each other. All were drawn at the moment they were needed.

It’s tempting to attribute this plan-as-you-go process to the nature of wood. The many ways different grain patterns can and cannot be used, and the inability to be sure of what you’ll find inside a slab, mean that most projects end up needing to be adapted to fit as they progress.

But this incremental design is how all of my projects go. The basic structure of a program gets sketched and then adapted as I start to code. Presentations are outlined and then rearranged as I find each part needing a different fit in the story. Dinner plans come together on the cutting board. Road trips have a destination and, “Something like this road will probably work.”

I would make far fewer things if I designed the entire solution up-front. There is, of course, plenty of planning that happens before the first cuts are made. However, there is a point in the initial design of every project at which there are too many unknowns. My solution is often to bring the work up to the point where it would be blocked without resolving those unknowns. This brings clarity to the details surrounding the issue. Sometimes the details become so clear that the solution is obvious; other times I learn that the question wasn’t even relevant.

There are two keys to this flow working. The first is enough familiarity with the domain to recognize which decisions are likely to doom a project if not addressed early. My box must have internal dimensions large enough for the things I intend to store in it. I must have yeast and two hours of lead time if I want to bake bread for dinner. Put another way, it must be possible to determine what can be left unknown.

The second key to this process is the confidence that I can solve the problems that will arise. I rely on this even when I’ve over-planned. Years of projects in many domains have taught me to expect that I will make a mistake somewhere in either my plan or my execution. I’ve also learned from this experience that very few of those mistakes spell disaster.

So, a whiteboard hangs in my shop to provide a place for information to accumulate to clarify the unknowns, as needed.

NerdKit Gaming: Part 2

If you were interested in my last bit of alternative code-geekery, you may also be interested to hear that I’ve pushed that NerdKit Gaming code farther. If you browse the github repository now, you’ll find that the game also includes a highscore board, saved in EEPROM so it persists across reboot. It also features a power-saving mode that kicks in if you don’t touch any buttons for about a minute. Key-repeat now also allows the player to hold a button down, instead of pressing it repeatedly, in order to move the cursor multiple spaces.

You may remember that I left off my last blog post noting that there wasn’t much left for the game until I could find a way to slim down the code to fit new things. So what allowed these new features to fit?

Well, I did find ways to slim down the code: I was right about making the game state global. But I also re-learned a lesson that is at the core of hacking: check your base assumptions before fiddling with unknowns. In this case, my base assumption was the Makefile I had imported from an earlier NerdKits project. While making the game state global saved a little more than 1k of space, changing the Makefile so that unused debugging utilities (uart, printf, scanf) weren’t linked in saved about 6k.

In that learning, I also found that attempting to out-guess gcc’s “space” optimization is a losing game. Making the game state global had a positive effect on space, but making the button state global had a negative effect. Changing integer types would help in one place, but hurt in others. I’m not intimately familiar with the rules of that optimizer, so choosing which thing to prod next felt like spinning a wheel of chance.

You may notice that I ultimately returned the game state to a local variable, passed in and out of each function that needed it. The reason for this was testability. It’s simply easier to test something that doesn’t depend on global state. Once I had a bug that required running a few specific game states through these functions repeatedly, it just made sense to pay the price in program space in order to be able to write unit tests to cover some behaviors.

So now what’s next? This time, it’s not much until I buy a new battery. So much reloading and testing finally drained the original 9V. Once power is restored, I’ll probably dig into some new peripheral … maybe something USB?

NerdKit Gaming

Contrary to the evidence on this blog, not all of the code I write is in Erlang. It’s not even all web-based or dealing with distributed systems. In fact, this week I spent my evenings writing C for an embedded device.

I’ve mentioned NerdKits here before (affiliate link). This week I finally dug into the kit I ordered so long ago, and took it somewhere: gaming.

The result is a clone of a simple tile-swap matching game. I used very little interesting hardware outside the microcontroller and LCD — mostly just a pile of buttons. The purpose of this experiment was to test the capabilities of the little ATmega168 (and my abilities to program it).

I’ve put the code on github, if you’re interested in browsing. If you don’t have a NerdKit of your own to load it up on, I’ve also made a short demo video, and snapped a few up-close screenshots.

What did I learn? Mostly I remembered that writing a bunch of code to operate on a small amount of data can be just as fun as writing a bunch of code to operate on a large amount of data. Lots of interaction with the same few bytes from different angles has a different feel than the same operation repeated time and time again on lots of different data. I also learned that I’ve been spoiled by interactive consoles and fast compile/reload times. When it takes a minute or more to restart (after power cycles and connector un-re-plugging) and I don’t have an effectively infinite buffer to dump logs in, I think a little longer about each experiment.

So what’s next? Well, not much for this game, unless I slim down the code some more. Right now it compiles to 14310 bytes. Shortly before this, it was 38 bytes larger, and refused to load onto the microcontroller properly, since it plus the bootloader exceeds the 16K of flash memory available. My first attack would probably be to simply move the game board to a global variable instead of passing it as a function argument. The savings in stack-pushing should gain a little room.

If I were to make room for new operations, then a feature that saved a bit of state across power cycles would be a fun target. What’s a game without a high-score board?

Reading Code: Use Your Verbs

I’ve been reflecting on code quality lately. Partly that’s because I’ve been reading far more code than I’ve been writing. Partly it’s because the most recent code I was writing was intended primarily for reading, and only incidentally for execution, in the most literal way: it was instructional, not application-supporting.

And so it is that I’ve recently reaffirmed my conviction that code’s quality is primarily a function of its readability. Readability is of primary importance because code must be able to be understood in order to be used, and the way to understand it best is to read it.

However, I think I can be more specific about one component of readability that holds sway over the rest: naming. Partially the quality of each name, but also the ratio of named to unnamed things. But most important of all, the ratio of named to unnamed verbs.

I first realized this several years ago, while hacking in the middle of a complex, distributed, Java-based system. At one point, I had spent days diving through spaghetti, and finally found the core of the system … and it was beautiful. Not just the best Java code I’d ever seen, but possibly the cleanest code, period. Comparing it to the ugly code I had dug through, I found that its cleanliness derived from the fact that each interesting operation (or “verb”) was segregated into its own named function. Some of those functions called others of those functions, but it was always just one operation described in each.

Later, coincidentally on the same project, I had reason to spend several weeks not in Java, a language I knew very well, but in Perl, Python, and Bash, languages with which I was less familiar. I wrote and modified code very carefully in those languages, making sure that I could test each step as I went along. As that bit of hacking finished, I returned to Java, and found that my style had changed. I was now writing Java in a very careful, easily-testable manner. When I stepped back, I realized that the easily-understood form of my new Java code shared something with that beautiful core I had found earlier: each function described exactly one operation.

I’ll demonstrate what I mean with a concrete example. The code below is very similar to code I was hacking recently. The labels have been changed to protect the innocent, even though I think the innocent is me.

The set_properties function expects a token and a collection of properties (keys with matching values) to store for the token. New properties should overwrite old properties of matching keys, but old values for keys that are not specified should remain unchanged. For example, if the token “foo” had the properties [{a,1},{b,1}], and I called set_properties with the new properties [{a,2},{c,2}], then after set_properties finishes, the token “foo” should have the properties [{a,2},{b,1},{c,2}] (the new values for a and c plus the old value for b).

set_properties(Token, NewProperties) ->
   OldProperties = get_properties(Token),
   NewKeys = [ K || {K, _} <- NewProperties ],
   FilteredProperties = [ P || P={K, _} <- OldProperties,
                               not lists:member(K, NewKeys) ],
   set_properties_internal(Token, FilteredProperties ++ NewProperties).
Fig. 1: The Beginning

The code in Figure 1 is where I started. This code is correct: it conforms to the spec given, passes all tests (indeed, has been in production, working, for over a year). But, it is also bad code. The hint why is the NewKeys variable. It has little to do with setting new properties; it’s merely an artifact of cleaning up old properties. It’s an indication that the two list comprehensions that reference it are really an unnamed verb separate from set_properties.

set_properties(Token, NewProperties) ->
   OldProperties = get_properties(Token),
   MergedProperties = merge_properties(NewProperties, OldProperties),
   set_properties_internal(Token, MergedProperties).

merge_properties(Keep, Toss) ->
   KeepKeys = [ K || {K, _} <- Keep ],
   FilteredToss = [ P || P={K, _} <- Toss,
                         not lists:member(K, KeepKeys) ],
   FilteredToss ++ Keep.
Fig. 2: Naming the Verb

I propose that the code in Figure 2 is an improvement upon the code in Figure 1. The set_properties function now says just exactly what it’s going to do: lookup the old properties, merge them with the new properties, and store the result. The details about how the merge is performed, the unnamed verb in Figure 1, have been relocated to a new function, named merge_properties. The intermediate list of keys is still produced, but it’s now obvious that it’s just part of the merging process.

set_properties(Token, NewProperties) ->
   OldProperties = get_properties(Token),
   MergedProperties = merge_properties(NewProperties, OldProperties),
   set_properties_internal(Token, MergedProperties).

merge_properties(Keep, Toss) ->
   lists:ukeymerge(1, lists:ukeysort(1, Keep), lists:ukeysort(1, Toss)).
Fig. 3: Using an Existing Name

Figure 3 is a demonstration of part of the reason that MIT changed the 6.001 curriculum. There was no need to write those list comprehensions. Someone had already written the equivalent and named it. It is far clearer to use that named operation than to reimplement. The confusion about why NewKeys was created has been removed, and so has the need to decrypt the other list comprehension.

set_properties(Token, NewProperties) ->
   OldProperties = get_properties(Token),
   MergedProperties = lists:ukeymerge(1,
                         lists:ukeysort(1, NewProperties),
                         lists:ukeysort(1, OldProperties)),
   set_properties_internal(Token, MergedProperties).   
Fig. 4: Breaking Context

It’s a valid question to ask why I didn’t recommend jumping straight from Figure 1 to Figure 4, instead of ending up at Figure 3. It’s true that Figure 4 is a large improvement on Figure 1, but the answer is that even though lists:ukeymerge/3 is a named verb, it’s a verb with less context than merge_properties in my module. The context is richer than this snippet suggests, because there is at least one other function in this module that needs to perform the same operation. Also, to reference 6.001 again, “Abstraction barrier!” Why does set_properties need to know the data structure I’m using?

set_properties(Token, NewProperties) ->
   set_properties_internal(
      Token, merge_properties(NewProperties, get_properties(Token))).

merge_properties(Keep, Toss) ->
   lists:ukeymerge(1, lists:ukeysort(1, Keep), lists:ukeysort(1, Toss)).
Fig. 5: Anonymous Nouns

Another valid question is why I didn’t continue on to Figure 5 after Figure 3. In truth, I did consider it. My eye sees less clutter, but having discussed this exact choice with many coworkers, I’ve learned that others don’t. It also goes against the grain of what this post has been advocating: while I’ve worked to name my verbs, Figure 5 anonymized my nouns. There’s a practical reason to keep names for nouns around: printf debugging. Unless I have a very nice macro handy that I can wrap around one of the function calls in-place in Figure 5, I’m forced to copy one of those calls to some other place, and possibly even give it a name, before I can wrap my print statement around it. In Figure 3, the names are already there; all I have to do is use them.

What else could be improved in Figure 3? Plenty: “merge” is a bit generic and over-used; “properties” is long, noisy, and redundant in-context. Is my omission of names for the sorted lists in merge_properties/2 hypocritical? Probably. Readability is a subjective, human measure. In multiple projects and languages, I’ve identified verb-naming as important in my judgement of a code’s readability. Maybe writing that fact down will help me remember to think about it in new code I write.

I’m Sorry (Maybe)

But there lies the perfect storm: an app no one really wanted to write, with a problem no one really wanted to touch, no one with the time to fix it anyway, and a flaw just embarrassing enough for me to remember it years later.

Why is it that embarrassing code has a way of sticking around? The specific variety of embarrassment doesn’t seem to matter (it could be hard to read, willfully inefficient, or just quirkily broken); all varieties live on equally well. Is it just that all code has a way of sticking around, and that we notice the embarrassing code more? Or is it that the embarrassing code is more likely to be written in those tough little corners that no one wanted to touch anyway, and still don’t want to touch now? I don’t know, but I do know every one of us has a few bits that we’d love to do over, if we could ever get the time to Do It Right.

I’m reminded of one of my most embarrassing bits every time I’m put on hold. The music comes on, I hear about three words, and then static. A couple of chords, more static. On and on.

The story of my embarrassment begins over ten years ago. The summer of 1999, I was interning at Lucent Technologies. It was my third summer there, and I was finally hacking on a product, not academic research (or IT upgrades, as my first summer had entailed).

The product was called Softswitch – an amazing new product in the early days of commercial IP telephony. The stack was some mix of C and Java, and there was a box humming somewhere with a connection to some corner of the phone system (at the very least the in-house ISDN). Interacting with telephones, over the internet, with software running on any old random box – wow![1]

My main task was helping to flesh out the add-on module system. “Flesh out” may be the wrong term. The goal of my work was more to experiment with the extension API they had created (known as the Programmable Feature Server), and to produce a demonstration of its capabilities, as well as to provide feedback about what was missing, rough, broken, etc. In the Web 2.0 world, I’d probably have been labeled “beta tester”.

Like most betas, the documentation was scarce. The rumor was that the lack of documentation was less intimidating for those who knew SS7 inside and out, but there was no way I was going to swallow that heap and also produce something useful in three months.[2]

By late summer, I had implemented a fairly involved demo, boringly named ReminderCall. Dial in from any phone, navigate your way through Push-N-for-X menus, then eventually enter a time and record a message. At the time you chose, ReminderCall would dial your phone and play your message back to you. There was also a “web” frontend (either a Java servlet rendering HTML or a servlet talking to an applet; can’t remember which) for doing the same, as well as canceling or rescheduling pending reminders, if I recall correctly.

ReminderCall was a success. They liked it so much, they used it to demo Softswitch’s extensibility to MCI.

But it’s not the success that I intended to talk about here. The embarrassing code happened along the way to ReminderCall.

As a way to learn how to deal with audio streams, I first implemented another application with a somewhat smaller scope. Much like beginning to learn any display-based system by printing, “Hello World,” I began to learn this audio-based system by playing, “hello.” A few more hours of tinkering after that, and my application could also read key presses.

Polish things up a bit, and the first app I had ready was MusicOnHold. Being 17 at the time, all geek and zero taste, I chose as my demonstration music none other than Sabotage by the Beastie Boys (light defense: it also happened to be one of the few songs I could find for download at the time, which sidestepped the lack of audio hardware in my workstation).

The nice thing about Sabotage is that it sounds like noise normally. Piping it over 8-bit (or less?) mono mainly just seems to change the timbre of the noise. It wasn’t until the boss asked me to find something more suitable for a business-audience demonstration that it became apparent that the noise was part of the application. Glenn Miller’s In the Mood sounded better on every warped vinyl it ever graced. Dee-da-da-dee-kxhxhxhxhxhxhxhx-dee-dee-khxhxhxhxh.

There was worry, and hand wringing. Email went back and forth between us and the core Softswitch developers. Was it just Java unable to keep up (this was the 1.1 or 1.2 days, and I was still a n00b, after all)? Was it the interface to the switch? The network between the boxes? It’s true that the human voice requires less bandwidth to encode than something wide-frequency, high-dynamic-range like Big Band music, but I nevertheless tried re-encoding that song every which way. Things improved a bit, but still the static remained.
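The dynamic-range point above can be made concrete with a toy sketch. This is plain Python, not the actual Softswitch pipeline or any real telephony codec; it just quantizes a synthetic tone to 8-bit linear samples (telephony-grade resolution, though real phone networks use μ-law/A-law companding rather than linear PCM) and measures the signal-to-noise ratio. Quiet passages, which wide-dynamic-range Big Band music is full of, sit much closer to the quantization noise floor than a loud, steady signal does:

```python
import math

def quantize_8bit(samples):
    # Crude 8-bit linear quantization of samples in [-1.0, 1.0]:
    # snap each sample to the nearest of ~256 evenly spaced levels.
    step = 2.0 / 255
    return [round(s / step) * step for s in samples]

def snr_db(original, quantized):
    # Signal-to-noise ratio of the quantized copy, in decibels.
    signal = sum(s * s for s in original)
    noise = sum((s - q) ** 2 for s, q in zip(original, quantized))
    return 10 * math.log10(signal / noise)

# One second of a 440 Hz tone at an 8 kHz sample rate,
# in loud and quiet versions.
loud = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
quiet = [0.01 * s for s in loud]

loud_snr = snr_db(loud, quantize_8bit(loud))
quiet_snr = snr_db(quiet, quantize_8bit(quiet))
print(loud_snr, quiet_snr)
```

The loud tone comes through at roughly 50 dB SNR, while the quiet one, spanning only a couple of quantization steps, is mostly hiss. Companding codecs like μ-law exist precisely to spend those 8 bits more evenly across quiet and loud material, which is part of why speech survives the phone network better than music does.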

In the end, it was deemed more useful for me to press on and experiment with other features of the system, rather than muck about with this encoding trouble.

But there lies the perfect storm: an app no one really wanted to write, with a problem no one really wanted to touch, no one with the time to fix it anyway, and a flaw just embarrassing enough for me to remember it years later.

And now, every time I’m stuck on hold with static-filled music, I wonder whether someone just went ahead and packaged that MusicOnHold demo app with the Softswitch, and thereby forced my old, embarrassing code public. If that’s the case, then: I’m sorry, so kxhxhxhxhxhxhxhx.

[1] I used the department’s mail server for a time, to the chagrin of not only the admin, but another user trying to use that server to host their Netscape Navigator process.

[2] My mentor was also probing the reaches of the API, implementing the required wiretapping features, as I recall. She also gets credit for being the first person to introduce me to Emacs and OOP (by way of Java), not to mention a host of other enlightenment. Many thanks if you’re reading this by some chance!

NoSQL EU: Key-value Stores and Riak

I’m very excited to announce that I’ll be speaking at no:sql(eu). I’ll be covering key-value stores and Riak. The talk should be a good overview of this [very] broad domain of datastores, as well as a closer look at a few unique features of some specific implementations.

I’ll also be teaching a Riak workshop on the last day of the conference. I plan to cover the design, implementation, and deployment of a simple wiki-like application. It should be a good introduction to simple Riak usage (just storing and fetching data), while also exposing some advanced features (like link-walking, map-reduce, and conflict-resolution).

Looking forward to meeting people there!

Padding Quietly Down the Hall

I haven’t posted here in a long time. I’ve wanted to. I have several posts partly written, just waiting on getting the last bits of example nailed down. But, as you can see, I haven’t finished the polishing I feel is necessary before posting them.

So, in an effort to jump-start my return to regular blogging, I’m going to do what everyone else is doing: I’m going to yammer about the iPad for a few paragraphs.

I am not, however, going to make some drooling prediction about it changing the world. I am also not going to make some frothing statement about how clueless Apple was to leave out my dream feature. I am not even going to pontificate about whether or not I’ll be buying one (or who else I think should or should not buy one).

Instead, I’d like to point out two things about the iPad that, I feel, have gotten too little consideration: price and file-sharing.


$499. Five hundred dollars less than what seemed to be the most popular pre-announcement prediction. This is amazing because it hits many sweet spots.

Five hundred dollars is basically Geek Toy money. No, it’s not impulse-buy, “I tossed it in my cart to get free shipping at Amazon,” money. But, for your typical, gadget-loving geek, ~$2^9 is, “Yeah, I was thinking about sampling the market anyway.” That means it’s going to have myriad creative eyes and brains contemplating all sorts of mixed-up, new, different uses from day one. By day thirty, I guarantee you will see a demo of something surprising.

Woah. Almost drooled a bit there. Calming down now.

Five hundred dollars, or more specifically sub-1k, is also the price that pundits have been demanding from Apple. There’s always been the Mac mini, but that’s not portable, and its sub-$1000 price really depends on you already owning a keyboard, mouse, and monitor (or doing your own bargain hunting). The iPad now opens the doors to people who want a real Apple computer for half the price of a MacBook (ignoring resale and discount programs).


Yes, a real computer. How can I say this with a straight face? It’s all because of a new feature in the SDK: file-sharing.

The iPhone platform, until now, has not been designed for content creation. Consume all you want, but only produce short emails and the occasional snapshot. This means that it wasn’t a problem that there was no good way to transfer the content you produced onto and off of the device. The small bits produced went out in email. The things consumed weren’t edited. (A few apps, like the excellent GoodReader, built in HTTP servers, just to get around this trouble.)

The iPad, though, has space and even dedicated hardware to provide UI for content creation. Indeed, Apple has ported the entire iWork suite to the device! In order for this to be of any use to anyone, though, there also needed to be a way to transfer that content elsewhere. Luckily, there is, in the form of a folder that each application can expose, which shows up as a directory on a drive when the iPad is plugged into a computer, just like any other USB drive. There is even a facility for asking other applications on the device to open files from the shared directory. No more bouncing things through remote network machines, just to get data moved between apps, or between the iPad and a desktop.
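For the curious: as I understand the SDK, an app opts into exposing that shared folder with a single Info.plist key, `UIFileSharingEnabled`. A minimal config fragment (this is the key as documented, not anything iWork-specific):

```xml
<key>UIFileSharingEnabled</key>
<true/>
```

With that set, the contents of the app’s Documents directory show up under the device in iTunes, ready to drag files in and out.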

So that’s it. It’s cheap, and it can do more stuff. In a few months, I expect to see something really interesting.