Archive for the ‘Uncategorized’ Category

Beer IoT (Part 5)

Welcome back for part five of the fermentation instrumentation series. In part four, I placed a few different sensors in some actively fermenting beer to gather data. I now have a few days of pressures and force vectors to analyze …

… but I’m not quite ready to share it all yet. There are some things that look promising, but mostly still a fair bit of confusion. I think there are a couple of quick tests I can run after emptying the carboys that will move some things out of the confusing pile and toward either confirmation or rejection. So, I’m going to delay writing those posts until I can do less handwaving.

To tide you all over until then, I thought I’d share some quick insights from the sensor data that I do not expect to be closely tied to specific gravity: temperature. I have two temp sensors collecting readings: one on a Helium Atom outside the carboys, and one packaged with the pressure sensor submerged in beer at the bottom of a carboy. Let’s start with the one outside the carboy:

[Figure: air temperature readings from the Atom outside the carboy]

This graph tracks the air temperature a few inches from the carboy. It’s basically the air temperature of my kitchen/dining-room. And from it, you can nearly read my life. The temperature drops initially as my kitchen cools after brewing. It rises in the morning as we make brunch, and again in the evening as we make dinner. The spike at 8am Tuesday morning is not breakfast. That is the residual heat from my hand as I held the Atom to connect USB power. The cooling into Wednesday morning is the clouds breaking and the outdoor temperature dropping.

But there’s something even more fun going on here: the light region around the dark line marks the min/max of the readings. Why is the max so much higher? Enhance.

[Figure: zoomed-in temperature readings, showing a sawtooth in the max values]

Where did this sawtooth come from? Clue 1: there are exactly six teeth per hour. Clue 2: I queue up readings for ten minutes, and then send them to the cloud all at once. My bet is that I’m picking up residual heat from that extra work. Looking at my code, I see that I forgot to power down the sensors until after I had sent all the data to the cloud. Let’s fix that:
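The fix is just a reordering of the loop; something along these lines, where power, sample, and send_all stand in for what my script actually does (they are not Helium API calls):

-- take one reading per minute; power the sensor down before any radio work
local queue = {}

local function take_reading()
  power(true)                   -- wake the sensor
  queue[#queue + 1] = sample()  -- read and queue one value
  power(false)                  -- the fix: power down before the upload, not after
end

local function maybe_upload()
  if #queue >= 10 then          -- every ten readings (ten minutes)...
    send_all(queue)             -- ...do the slow cloud send with the sensor off
    queue = {}
  end
end
-- (both are called from the once-a-minute timer in the real script)

And then recheck: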

[Figure: temperature readings after the change]

The sawtooth until 11am is what we saw earlier. The jump between 11 and 12 is heat from my hand as I plug in the USB cable again. And then … hmm, same sawtooth. Maybe this is heat from the radio instead. It’s a tenth of a degree Celsius, nothing to worry about, but an interesting artifact.

So, what about the temp sensor in the beer?

[Figure: temperature readings from the sensor submerged in the beer]

Ah, yes, that would be the effect of being surrounded by sixteen pounds of water. It doesn’t change temperature quickly. This works out in the beer’s favor: yeast really don’t like quick temperature changes. Giving them time to adapt keeps them healthy and fermenting.

Here are both temperatures overlaid, so you can compare directly (with bonus 24+ hours on the end):

[Figure: both temperature traces overlaid]

My apologies for starting with the data you’re all less interested in. It’s too interesting not to share something, but there are too many questions about the other samples to tell a coherent story yet. The data you’re really interested in will be up after bottling, and I’ll share the raw data at that time as well, so you can do your own analysis.

Beer IoT (Part 4)

This is part four of a series on monitoring homebrew fermentation. In parts one, two, and three, I experimented with data I downloaded from one platform and uploaded to another. In this part, I create some new sensors to try.

I have hardware!

[Photo: Helium Atom connected to an ADXL345]

And it’s pretty slick. Using any I2C device with Helium’s wrappers is some of the easiest hardware hacking I’ve ever done. This is my first time using Lua, but while it makes some choices that differ from other common languages, it has been very easy to learn.

Maybe an example will prove my point. This is how you take a reading from an ADXL345 accelerometer (`he` is Helium’s built-in library):
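Roughly like this; the register addresses come straight from the ADXL345 datasheet, but treat the exact shape of the he.i2c calls as a sketch rather than copy-paste code:

i2c = he.i2c

local ADDR      = 0x53 -- ADXL345 I2C address
local POWER_CTL = 0x2d -- power-control register
local DATAX0    = 0x32 -- first of the six data bytes (x0 x1 y0 y1 z0 z1)

-- put the chip into measurement mode
i2c.txn(i2c.tx(ADDR, POWER_CTL, 0x08))

-- read all three axes in one transaction
local status, buffer = i2c.txn(i2c.tx(ADDR, DATAX0), i2c.rx(ADDR, 6))
if status then
  -- each axis is a signed, little-endian 16-bit count
  local x, y, z = string.unpack("<i2i2i2", buffer)
  print(x, y, z)
end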

While building such a script, you can fill it with print statements and run it whenever you like by connecting the Atom to your computer via USB cable. This all makes it super easy to learn how a new sensor works.

When you’ve acquired the measurements you want to save, you send them to Helium’s cloud platform like this:
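Again in sketch form (as I recall it, he.send takes a port name, a timestamp, a type tag, and the value; "f" marks a float, and the "ax"/"ay"/"az" port names are just my labels):

local now = he.now()
he.send("ax", now, "f", x)
he.send("ay", now, "f", y)
he.send("az", now, "f", z)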

Once you have posted data, you can use Helium’s dashboard to check it out:

[Figure: Helium dashboard showing the uploaded readings]

This system is so smooth that in just a week (of evenings) I’ve been able to write scripts to take readings from two different sensors. Those sensors are now sitting in the bottoms of two carboys monitoring the fermentation of an English Mild. Yes, the first thing I did with my new electronics was submerge them in infected sugar water. I tested the water tightness of their containers … oh, at least several times.

[Photo: Foreground: carboy with “tilt” sensor, carboy with “sink” sensor, carboy with BeerBug; background: Helium Atoms in red container, airlock/blowoff in green container]

What monitoring a fermentation amounts to is measuring the density of the liquid. Water with sugar in it is denser than water on its own, or water with alcohol in it. As the yeast convert the sugar to alcohol, the liquid becomes less dense.

Most tools test the density of the liquid indirectly, by instead testing the buoyancy of a known float. The standard hydrometer is a float with a scale attached, so you can read how high it’s floating by looking at it.

The device I’m looking to replace, the BeerBug, reads this float-height by suspending the float from a flexible metal tongue, which is also connected to a magnet, whose position is read by a hall-effect sensor. As the float floats higher, the magnet nears the sensor, producing a stronger reading. It requires that you measure the gravity of your liquid with a hydrometer first, but once the initial reading is calibrated, the change in buoyancy can be measured (the magnet moves farther from the sensor as the beer ferments).

[Figure: BeerBug operation – left: pre-ferment, right: post-ferment]

I wasn’t able to obtain a hall-effect sensor as quickly as I wanted, so my devices take different approaches. The first is based on someone else’s design. If the float is very buoyant on one end and just barely unable to float in pure water on the other, the angle at which it floats changes with the density of the liquid. So the float should start close to horizontal when the unfermented beer is very sugary, and end up more steeply tilted as the sugar is converted to alcohol. The sensor in this float is thus the ADXL345 accelerometer that the code above demonstrates. By measuring the direction of the force of gravity, we can figure out the angle at which the sensor is floating.

[Figure: Tilt operation – left: pre-ferment, right: post-ferment]
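The angle itself falls out of that force vector with a little trigonometry. A sketch in plain Lua, assuming the accelerometer’s x axis runs along the length of the float:

-- degrees between the float's long axis and horizontal
local function tilt_degrees(x, y, z)
  return math.deg(math.atan(x, math.sqrt(y * y + z * z)))
end

A perfectly horizontal float reads zero, and the magnitude grows as the float tips away from horizontal.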

The idea behind the second experimental sensor is to directly measure the increased pressure from the denser liquid, instead of measuring its effect on buoyancy. I’ve placed an atmospheric pressure sensor in a non-rigid housing, which should allow the liquid to squeeze the air around the sensor, raising the pressure around it. As the liquid becomes less dense, the pressure should drop. The sensor has been placed at the bottom of the carboy, to get as much liquid as possible above it to provide pressure. I’m also taking readings from the pressure sensor on the Atom, which is sitting in the open air outside the carboy, so I can compensate for weather-related pressure changes.

[Figure: Sink operation (pressure as a percentage of the pure-water reading) – left: pre-ferment specific gravity of 1.040; right: post-ferment SG of 1.010]
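The analysis for this one should be simple arithmetic: the submerged reading minus the outside (weather) reading is the weight of the liquid column above the sensor, and that weight scales directly with density. A sketch, where the pure-water column value would come from a one-time calibration at the same depth (the numbers below are made up for illustration):

-- specific gravity estimate from differential pressure, all values in pascals
local function specific_gravity(submerged, outside, water_column)
  return (submerged - outside) / water_column
end

-- about 30cm of water is roughly a 2,943 Pa column,
-- so a 3,060 Pa difference reads as ~1.040
print(specific_gravity(104385, 101325, 2943))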

So far, I’m just collecting raw data: pressure readings in the latter case, and force readings in the former. It’s going to take some analysis to figure out what they mean. Unfortunately, the BeerBug site is currently only serving the most recent reading, and not history, so direct comparison of data will not be possible for now. The Helium site is running smoothly, though – and in addition to their dashboard, as shown above, I can also use the graphing code from my earlier experiments.

I’ve shared the code I’m using for these experiments on GitHub. Please feel free to download and use the code yourself, or to suggest ways I can improve my Lua! Check back soon for analysis of how the measurement and fermentation went.

Update: the first bit of analysis, from the temperature sensors, is up in part five.

Beer IoT (Part 3)

My code is ugly, but it works, so it’s time to post part three of this series. In part one, I downloaded data captured by my BeerBug. In part two, I uploaded it to the Helium platform. In this entry, I’ll use Helium’s API to query and graph the data.

If I were dealing with a currently-active data source, Helium’s dashboard would allow me to view what was happening. That is a fantastic resource for developers, because it takes one step of uncertainty out of the equation by allowing inspection in the middle of the pipeline. But, “currently-active” is limited to 90 days in the dashboard, and my data is about a year old, so I need something else.

What I have built are a few simple D3 graphs:

[Figure: D3 graphs of the BeerBug data retrieved from Helium]

Each graphs the average value for a time slice as a dark line, with a lighter band around it marking the range from minimum to maximum. It’s crude, but it gets the point across. You can move earlier and later in the range by dragging left and right. Zoom in by holding shift while dragging to select a region. Zoom out by holding alt while dragging to select a region.

As I said before, it’s ugly, but I’ve put the code in a gist, if you’re looking for examples to follow (it’s neither well-organized nor well-documented, but if you’re also working with the Helium API, you may pick up on a clue of what you’re looking for).

Some things that made this graphing easy:

  • Helium supports CORS, so I didn’t even have to set up a proxy webservice. Loading graph.html from a file:// URL still allowed me to make requests to Helium for the data.
  • D3 has a wide variety of basic example graphs. What I started with was a basic mash-up of the Line Chart and Bivariate Area Chart examples.
  • Helium’s API will give you the latest data for your sensor (note: no 90-day window here), if you don’t provide an end filter, and also include a “previous” link in the response to get the next-latest data.

Some things that made this graphing hard (or at least tricky):

  • D3 defaults to local time, but Helium is all in UTC. Forgetting to translate leads to confusing debugging about why offset calculations are wrong.
  • Helium’s API will always give you the latest data for your sensor, if you don’t provide an end filter. That is, you can really only follow “previous” links backward through time. Once you follow a “previous” link, you’ll get a “next” link, but you should already have the data that link would give you. You can’t begin with a start filter and expect to follow “next” links to the latest data.

I’m posting this simple viewer now instead of waiting until I’ve had time to clean it up more, because the next step is probably a rewrite. As expected, Helium’s API works really well for supporting a simple dashboard: if you’re concerned with recent updates, and then scrolling back in time from there, the API makes it easy. But, what I learned during a Helium presentation at a meetup this week is that the real purpose of this API is to allow Helium’s servers to act as a transport between your sensors and your own servers. The expectation is that you’ll grab data from Helium, store it in your own database, and serve your app from your own storage.

Helium-as-transport is an interesting bet. It’s focusing on exactly the problem I’ve had with my BeerBug: I have to rely on their site for the tool to be useful. If Helium can keep the path from device to my analysis up more reliably, they will succeed in their goal of making sensor IoT more available to people that want to focus on the sensing and the analysis, without worrying about the infrastructure in between (i.e. basically everyone).

Update: Part 4 is up – hardware on display!

Beer IoT (Part 2)

Welcome back for part two. In part one, I explained how I exported my historical brewing data from The BeerBug’s website. In this part, I’m going to demonstrate what I’ve learned about one alternative, the Helium platform.

Helium doesn’t sell a homebrew device, but rather a generic sensor platform. I ordered a dev kit while they were on sale, and while I’m waiting for my hardware to arrive, I have gained access to their data aggregation platform.

Disclaimer: I know several of the Helium developers, but I am not being compensated in any way to review their system.

Helium supports creating “virtual sensors” and uploading whatever data you like for them, as a way to test and experiment. What better data to play with than something I’m already familiar with? I’ll upload the BeerBug data I exported.

When a Helium sensor posts a reading, it specifies a “port” for that reading. The port is primarily a label of what the reading is, but the examples given and the reserved port names suggest that they’re intended to label the “type” of the reading. For example, port “t” is reserved for temperature in Celsius, and port “b” is battery level in millivolts. I have data for each of those, as well as a port I’m going to call “sg” for specific gravity.

Logging a reading is done by HTTP-POSTing some JSON data. The basic form looks like this:

{
 "data": {
   "attributes": {
     "port": "sg", // the name of the port
     "value": 1.0568, // the value for the reading
     "timestamp": "2016-01-23T18:35:03Z" // ISO8601 time in UTC
   },
   "type": "data-point"
 }
}

My data is all floating point numbers, so nothing too complex to worry about … except it’s all in the wrong format. To start with, my data looks like this:

{
 "dates": [ // comma-separated, zero-based month index, in local time
   "2016,0,23,18,35,3",
   // ... the rest of the dates ...
 ],
 "temp": [ // fahrenheit degrees
   70.26
   // ... the rest of the temperatures ...
 ],
 "sg": [ // specific gravity
   1.0568
   // ... the rest of the specific gravities ...
 ]
}

After many iterations, this is my jq script for conversion:

[.dates, .sg, .temp, .batt] | transpose | .[] |

  # there is probably a better way to convert from 0-based month to ISO8601
  # strptime bails on 0-based month, but produces a 0-based month structure?
  (.[0] | split(",") |
   [.[0],(.[1] | tonumber | .+1 | tostring),.[2],.[3],.[4],.[5]] |
   join(",") | strptime("%Y,%m,%d,%k,%M,%S") | todate) as $date |

  # specific gravity
  {"data":{"attributes":{"port":"sg","value":.[1],"timestamp":$date},
           "type":"data-point"}},

  # temperature - assumed fahrenheit (helium is celcius)
  {"data":{"attributes":{"port":"t","value":((.[2] - 32) * 5 / 9),"timestamp":$date},
           "type":"data-point"}},

  # battery level - assumed volts (helium is millivolts)
  {"data":{"attributes":{"port":"b","value":(.[3] * 1000),"timestamp":$date},
           "type":"data-point"}}

It has one major bug still: I’m just using local time as UTC. Just figuring out how to deal with the zero-based month was enough hassle (strptime produces an array that uses a zero-based month, but it can’t consume a string with one). It seems like the addition of a `mktime | . + 28800 | gmtime` (or 25200) would be close enough … but I should have exported in UTC to start with.
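For the record, the date expression above would pick up that shift something like this (28800 seconds for PST, 25200 for PDT; untested, and it still hand-waves daylight saving):

   join(",") | strptime("%Y,%m,%d,%k,%M,%S") | mktime | . + 28800 | todate) as $date |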

But anyway, let’s run this through jq:

$ jq -cf beerbug-to-helium.jq export-oatmeal-stout-jan-2016.json > helium-oatmeal-stout-jan-2016.json
$ head -3 helium-oatmeal-stout-jan-2016.json
{"data":{"attributes":{"port":"sg","value":1.0568,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}
{"data":{"attributes":{"port":"t","value":21.255555555555556,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}
{"data":{"attributes":{"port":"b","value":4146.7,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}

Now I have one data-point per line, which will make uploading easy. But before uploading, I need to actually create my virtual sensor. This can be done via Helium’s HTTP API, but their example is missing the POST body (though I assume it’s the same as the update’s body, without the “id” field), and it’s just so much simpler with the Helium Commander utility installed (yes, I’ve censored the UUID):

$ helium sensor create --name beerbug-536
$ helium --uuid sensor list
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
| ID                                   | MAC | TYPE | CREATED                     | SEEN                       | NAME        |
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
| ABIGUUID-USED-TOBE-HERE-BUTISGONENOW |     |      | 2016-12-18T06:11:54.182691Z | 2016-12-19T04:49:57.00331Z | beerbug-536 |
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
$ export HELIUM_BEERBUG=ABIGUUID-USED-TOBE-HERE-BUTISGONENOW
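If you do want the raw HTTP route instead, my guess at the missing POST body is the same JSON-API shape as everything else, minus the “id” field (unverified; I went with the CLI):

{
 "data": {
   "attributes": {
     "name": "beerbug-536"
   },
   "type": "sensor"
 }
}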

Now I can finally upload some data! I’m just going to pipe the file I have through xargs and let things chug along. The sed work at the front is needed to escape the double-quotation marks in the json file, so that xargs doesn’t remove them:

$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d

That … was slow. About 12,000 data-points in an hour. Or, three per second, as some insist all speeds be measured. I have around 65,000 data points, so that would be five hours or more. That’s my fault, though – starting curl all the way over again for each data point is way expensive. Let’s split up the work and run three curls in parallel:

$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"b\"" > helium-oatmeal-stout-jan-2016.json-b
$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"sg\"" > helium-oatmeal-stout-jan-2016.json-sg
$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"t\"" > helium-oatmeal-stout-jan-2016.json-t
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-b |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d &
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-sg |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d &
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-t |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d

That was better, at about 8-ish points per second. I don’t expect much better out of my non-business DSL line. It’s saturated enough that MARIO RUN is delaying the starts of the games that I’m playing while waiting. If I were planning to bulk-load other data, I’d write something that kept the HTTP connection open and pipelined POSTs.

The real question I’ve been waiting on is, now that the data is in Helium’s system, what can I do with it? The bummer news is that I can’t use their web dashboard. It only goes back 90 days, and this data is from nearly a year ago. Maybe I’ll adjust the dates in another experiment. I think the only way to change data later might be to make a new sensor (i.e. you don’t get to change it – you have to rewrite it), so maybe best to think about where you scribble.

But, I can do basic retrieval, with filter[start]= and filter[end]=:

$ curl -H "Authorization: $HELIUM_API_KEY" -XGET \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries?filter%5Bstart%5D=2016-02-01T12:00:00Z&filter%5Bend%5D=2016-02-01T12:05:00Z" |\
  jq .
{
 "data": [
   {
    "attributes": {
      "value": 4162.5,
      "timestamp": "2016-02-01T12:04:01Z",
      "port": "b"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "89b47b2f-500d-4af3-9d01-49766b5938b0",
    "meta": {
      "created": "2016-12-23T06:05:50.757111Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": 1.0131,
      "timestamp": "2016-02-01T12:04:01Z",
      "port": "sg"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "645ca2f8-96aa-4cd9-915d-3670ec1b43af",
    "meta": {
      "created": "2016-12-23T06:06:21.478522Z"
    },
    "type": "data-point"
   },
    {
     "attributes": {
       "value": 18.672222222222224,
       "timestamp": "2016-02-01T12:04:01Z",
       "port": "t"
     },
     "relationships": {
       "sensor": {
         "data": {
           "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
           "type": "sensor"
         }
       }
     },
     "id": "44afd122-b13d-4675-b35a-e48184f32c9a",
     "meta": {
       "created": "2016-12-23T06:06:38.950493Z"
     },
     "type": "data-point"
    },
...

I’ve elided the data points at 12:03:01, 12:02:01, and 12:01:01 for brevity. This is a bit verbose, and seems to contain a lot of duplicate information. It all makes more sense when you learn that you can query the same data by organization, element, or label, which each map to groups of sensors.

It’s also possible to request basic aggregate statistics for this data, by adding agg[type]= and agg[size]=. The types currently available are min, max, and avg, and window sizes start at one minute and go up to one day.

$ curl -H "Authorization: $HELIUM_API_KEY" -XGET \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries?filter%5Bstart%5D=2016-02-01T12:00:00Z&filter%5Bend%5D=2016-02-01T12:30:00Z&agg%5Btype%5D=avg&agg%5Bsize%5D=10m" |\
  jq .
{
 "data": [
   {
    "attributes": {
      "value": {
        "max": 18.7,
        "avg": 18.6819444444444,
        "min": 18.6555555555556
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(t)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "ff308e69-a2c5-43a8-9215-dd4042b51104",
    "meta": {
      "created": "2016-12-23T06:06:46.98618Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": {
        "max": 1.0133,
        "avg": 1.01325,
        "min": 1.0132
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(sg)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "9d09823b-5302-4fd8-94f4-9c1e2ef62b99",
    "meta": {
      "created": "2016-12-23T06:06:29.719129Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": {
        "max": 4168,
        "avg": 4161.15,
        "min": 4152.5
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(b)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "5cd24bb5-30ea-4278-bbb0-082c8f25a5fe",
    "meta": {
      "created": "2016-12-23T06:06:01.779172Z"
    },
    "type": "data-point"
   },
...

Again, I’ve elided the results for 12:10 and 12:00 for brevity. This seems like it could be very convenient for supporting something like a dashboard. Some things I haven’t shown are the ability to choose a limited number of ports, and how large result sets are paginated, but those are also quite simple. It seems like the requests to support basic display of min/max/avg data on a zoomable/scrollable timeline would be very straightforward. And, that’s what Helium’s dashboard appears to give you, if your data is recent.

But I need some way to visualize historical data as well. Read part three to find out what I came up with.

Beer IoT (Part 1)

I’m not super into the Internet-of-Things. There are no wifi lightbulbs, electronic locks, or smart thermostats in my house. But, I’m a homebrewer, and that means I love new ways to get data about my beer. I backed The BeerBug on Kickstarter, and I’ve used it on a number of batches since early 2014.

The data my BeerBug provides is simple, but interesting: air temperature and specific gravity, measured once per minute. It gives me a pretty good idea of when a beer has finished or stalled.

The user experience leaves something to be desired, though. The website is clunky, and was down for a month or more recently. The mobile app is just a web view. There is no way to use the device without the website.

So, I have two goals over the next few months. The first is to extract all of the data I have recorded with my BeerBug, and the second is to find an alternative. This post covers the first goal, and the next will begin to explore the second.

The BeerBug offers an API … that only covers active brewing, not history. Beer pages allegedly offer CSV and XML data download, but the links haven’t worked in months. You can view graphs of historical brews on the website, though, so they have the ability to fetch that data.

Pulling up the Chrome web inspector and visiting a beer page, I found an XHR for a “graph.php” that returns JSON to draw the graph. Try as I might, I haven’t been able to construct a curl command to get the same data – it always comes through with “0” or “null” in several fields. There’s almost certainly some header I’m missing, but I’ve taken an alternate route.

The network tab of Chrome’s web inspector will let you “Save as HAR with Content.” This exports a JSON file with all the information the inspector is showing. Lucky for me, this includes the content of the graph.php XHR response. So, switching the graph view from “25 points” to “all”, waiting for the new graph.php request to complete, and then saving as HAR captured my data.

The data from the XHR is the last in the log entries, so it’s easy to extract with jq:

$ jq ".log.entries[-1].response.content.text | fromjson" \
  export-oatmeal-stout-jan-2016.har > export-oatmeal-stout-jan-2016.json

Now I can start to explore the data:

$ jq ". | keys" export-oatmeal-stout-jan-2016.json
[
 "al",
 "batt",
 "dates",
 "degrees",
 "ext",
 "plato",
 "platod",
 "sg",
 "success",
 "temp",
 "temp2"
]

Almost all of these fields are arrays with one entry per measurement:

  • al: alcohol percentage
  • batt: battery voltage (volts)
  • dates: date of measurement (comma-separated strings year,month,day,hour,minute,second – not width-padded, zero-based month index, local timezone)
  • platod: degrees plato
  • sg: specific gravity
  • temp: air temperature (either Fahrenheit or Celsius, depending on the value of the “degrees” field)
  • temp2: probe temperature

Non-array fields:

  • degrees: what units “temp” and “temp2” are in (“F” for Fahrenheit, and I assume “C” for Celsius, but I haven’t checked)
  • ext: unknown
  • plato: unknown
  • success: unknown
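Since everything hinges on those per-measurement arrays lining up one-to-one, it’s worth a quick check that they’re all the same length:

$ jq "[.dates, .sg, .temp, .batt] | map(length)" export-oatmeal-stout-jan-2016.json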

Just a bit of data checking: I started the beer on January 23, 2016, and finished it on February 8:

$ jq ".dates[0], .dates[-1]" export-oatmeal-stout-jan-2016.json
"2016,0,23,18,35,3"
"2016,1,08,15,18,3"

Its specific gravity started about where I normally start my beers, and ended a little below where I normally finish them:

$ jq ".sg[0], .sg[-1]" export-oatmeal-stout-jan-2016.json
1.0568
1.0082

That means it may have a 6.4% alcohol content by volume:

$ jq ".al[0], .al[-1]" export-oatmeal-stout-jan-2016.json
0
6.4
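That lines up with the usual homebrewers’ rule of thumb of ABV ≈ (OG - FG) * 131.25, which jq can check from the same file (it comes out right around 6.4 for this batch):

$ jq "(.sg[0] - .sg[-1]) * 131.25" export-oatmeal-stout-jan-2016.json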

And finally, it was kept in a nice cool range (`add / length` is jq for “average”):

$ jq ".temp | max, min, add / length" export-oatmeal-stout-jan-2016.json
71.18
63.4
65.68423989795319

Neat. Let’s compare all the beers I exported:

# extract all xhr data
$ for x in export*.har; \
    do jq ".log.entries[-1].response.content.text | fromjson" $x \
    > ${x/har/json}; \
  done
# extract basic data
$ for x in export*.json; \
    do echo $x && jq -c '{"sg":.sg[0],"fg":.sg[-1],"abv":.al[-1],"temp":{"min":.temp|min,"max":.temp|max,"avg":(.temp|add/length)}}' $x; \
  done
export-abbey-oct-2015.json
{"sg":1.0498,"fg":1.4284,"abv":0,"temp":{"min":69.74,"max":79.96,"avg":72.70824454043661}}
export-beechwood-smoke-may-2014.json
{"sg":1.0511,"fg":0.9935,"abv":7.5,"temp":{"min":71.8,"max":83,"avg":75.40845794392524}}
export-butternut-stout-nov-2014.json
{"sg":1.0529,"fg":1.3635,"abv":0,"temp":{"min":65.36,"max":74.41,"avg":69.15657534246593}}
export-ipa-may-2015.json
{"sg":1.0475,"fg":0.9946,"abv":6.7,"temp":{"min":68.81,"max":80.21,"avg":71.19772108108131}}
export-mead.json
{"sg":1.115,"fg":1.0389,"abv":10,"temp":{"min":61,"max":70.84,"avg":65.09618010573946}}
export-oatmeal-stout-jan-2016.json
{"sg":1.0568,"fg":1.0082,"abv":6.4,"temp":{"min":63.4,"max":71.18,"avg":65.68423989795319}}
export-oatmeal-stout-nov-2015.json
{"sg":1.0639,"fg":1.0108,"abv":7,"temp":{"min":63.66,"max":77.25,"avg":69.64541020966313}}
export-oatmeal-stout-sep-2014.json
{"sg":1.0499,"fg":0.9973,"abv":7.3,"temp":{"min":72.3,"max":81.8,"avg":76.59252173913043}}
export-pumpkin-ale-nov-2015.json
{"sg":1.0529,"fg":1.0134,"abv":5.2,"temp":{"min":63.37,"max":70.69,"avg":66.15414939483689}}

There is quite a bit more analysis that should be done on this data. For example, I know that the specific gravity jumps around quite a lot. It is measured by a hall-effect sensor capturing the weight of a plumb in the beer, and so it’s a bit touchy about temperature changes and carbonation bubbles from active yeast. Those simple stats about the temperature (min, max, mean) do not really tell the whole story.

But, I’m fairly well convinced that I now have a copy of my recorded data. What is the path forward? Find out in part two.

My Favorite Moment of 2013

It’s the last day of 2013, and I’m supposed to be finishing preparations for a cross-country move. But instead, I really want to recount my favorite moment of this past year.

On Friday, October 11, 2013, MIT’s Hobby Shop held a celebration to commemorate its 75th anniversary. The hobby shop is a place for the MIT community (students, faculty, alumni, and such) to … well, practice *manus* after stretching their *mens*. It’s a large room, filled with benches, power tools, and hand tools for working wood, metal, plastic, etc.

People use the Hobby Shop to build … things. Equipment for lab projects, musical instruments, furniture, signs, or whatever else they might dream. I was (sadly) not a member in college, but joined later to learn and use their large machinery when I started building my bed.

The celebration in October included many member projects on display, one of which was a camera. Biyeun, its builder and user, gave a presentation about making and using her creation. In her introduction, she explained her discovery of view cameras and her instantaneous reaction: “I must build that.”

As I nodded my head in understanding of her sentiment, I saw heads all around the room do likewise. Building a machine gives you a different understanding of it that no variety of use ever will. Just a taste of such knowledge can cause everyday objects to practically scream at you forever afterward, “Imagine what it’s like to create me.” I knew that everyone nodding had heard that call.

The dean of student life, Chris Colombo, spoke as well. He was not a member of the Hobby Shop, but had good friends there. He expressed awe for projects like Biyeun’s camera that he had seen leave the shop, and a few minutes into his speech said something like, “I wish I knew how to build something like that.” As he took a breath afterward, I could just feel every shop member in the room struggle to restrain themselves from walking onto the stage, grabbing Chris by the elbow, and dragging him to the shop, to teach him how. “C’mon, I’ll show you,” were the words on every lip.

Realizing that I was surrounded by people that not only had wanted to know, and then spent time doing and learning, but now also wanted to show and teach, was my favorite moment in 2013. Finding people that are curious is not terribly hard. Finding those that will follow through on their curiosity can sometimes seem rare. But, finding one who actually wants to share what he or she has learned, by answering the endless naive questions of a beginner, is like winning the lottery. To be standing in a room full of such individuals was overwhelming.

Hobbies -= 1

I shut down a hobby today. BeerRiot, the site I started over six years ago, is now closed. I’m keeping the domain active, because I’ve used the name in other places, but browsers will see only a static archive of what used to be there.

BeerRiot began as an experiment. I wanted to learn about Erlang, and I needed a project to drive my curiosity. It worked, and I learned a good deal about modern web application development in the process. In fact, I learned enough about both that, through blogging about my progress, I was able to join up with a smart team and work in Erlang on web apps professionally.

In fact, even after the experiment paid off, BeerRiot remained my sandbox. New webservers, new storage techniques, new rendering processes, new API designs … I was able to practice with them all in a live setting before attempting to pull an entire team of engineers toward any of them.

So why would I give up my playground? Simply put: I don’t play there any more. My interests have moved on, and it’s time to remove the mental clutter of the service existing (no matter its reliability). Were the virtual server some physical object, I’d be putting it in a garage sale. As it is not, I will instead throw a tarball on a backup disk, and laugh when I find it in a few years.

What’s next? On the code side, more focus on that smart team and professional Erlang work I mentioned. On the hobby side … definitely not another web app. I’ll keep this blog up. No promises on changes to its post frequency, but readers will be among the first to know when I find a new thing.

Cheers.

Roundtripping the HTTP Flowchart

Webmachine hackers are familiar with a certain flowchart representing the decisions made during the processing of an HTTP request. Webmachine was designed as a practical executable form of that flowchart.

It has long bugged many of the Webmachine hackers that this relationship is one-way, though. Webmachine was made from the graph, but the graph wasn’t made from Webmachine. I decided to change that in my evenings last week, while trying to take my mind off of Riak 1.0 testing.

This is a version of the HTTP flowchart that only a Webmachine hacker could love. It’s ugly and missing some information, but the important part is that it’s generated by parsing webmachine_decision_core.erl.

I’ve shared the code for generating this image in the gen-graph branch of my webmachine fork. Make sure you have Graphviz installed, then checkout that branch and run make graph && open docs/wdc_graph.png.

In addition to the PNG, you’ll also find a docs/wdc_graph.dot if you prefer to render to some other format.

If you’d really like to dig in, I suggest firing up an Erlang node and looking at the output of wdc_graph:parse("src/webmachine_decision_core.erl"):

[{v3b13, [ping],                     [v3b13b,503]},
 {v3b13b,[service_available],        [v3b12,503]},
 {v3b12, [known_methods],            [v3b11,501]},
 {v3b11, [uri_too_long],             [414,v3b10]},
 {v3b10, [allowed_methods,'RESPOND'],[v3b9,405]},
 {v3b9,  [malformed_request],        [400,v3b8]},
...

If you’ve looked through webmachine_decision_core at all, I think you’ll recognize what’s presented above: a list of tuples, each one representing the decision named by the first element, with the calls made to a resource module as the second element, and the possible outcomes as the third element. Call wdc_graph:dot/2 to convert those tuples to a DOT file.

There are a few holes in the generation. Some response codes are reached by decisions spread across the graph, causing long arrows to cross confusingly. The edges between decisions aren’t labeled with the criteria for following them. Some resource calls are left out (like those made from webmachine_decision_core:respond/1 and the response body producers and encoders). It’s good to have a nice list for future tinkering.

Riak Presented at NYC NoSQL – slides, text & video

I had the pleasure of attending the NYC NoSQL Fall ’09 Meetup/Mini-Conference last Monday. Great talks, all around. I thought it was a good mix of use-case analysis and technology introduction.

In addition to enjoying everyone else’s presentations, I also presented Riak. It was a quick 12-minute talk, followed by 2.5 minutes of questions, but the response I got was great. People really dug in and had interesting observations and questions to discuss afterward.

If you weren’t able to make the event, Brendan has posted video of my talk. I have also posted an HTML slides-and-text version of my talk, if you prefer reading over watching and listening.

Dev House Boston

If you’re in the Boston area, and interested in Erlang/ErlyWeb, and free next Sunday … I’ll probably be hanging around Dev House Boston.

It’s my first trip to one of these hackathons. I’ve never been to Foo/BarCamp, or any of the others. So, we’ll see how it goes.

My best idea for a project so far is an Emacs mode for ErlTL. But, mainly I’d be interested in helping people come up to speed with Erlang/ErlyWeb and/or Facebook app development. I think ErlyWeb’s a great platform for web development, and I’d like to see more people put it through its paces.

I’m also familiar with plenty of other languages/systems, so I feel pretty confident that I’ll be able to hack on whatever comes up.