Beer IoT (Part 2)

Welcome back for part two. In part one, I explained how I exported my historical brewing data from The BeerBug’s website. In this part, I’m going to demonstrate what I’ve learned about one alternative, the Helium platform.

Helium doesn’t sell a homebrew device, but rather a generic sensor platform. I ordered a dev kit while they were on sale, and although my hardware hasn’t arrived yet, I already have access to their data aggregation platform.

Disclaimer: I know several of the Helium developers, but I am not being compensated in any way to review their system.

Helium supports creating “virtual sensors” and uploading whatever data you like for them, as a way to test and experiment. What better data to play with than something I’m already familiar with? I’ll upload the BeerBug data I exported.

When a Helium sensor posts a reading, it specifies a “port” for that reading. The port is essentially a label for the reading, but the examples given and the reserved port names suggest that it’s meant to identify the “type” of the reading. For example, port “t” is reserved for temperature in Celsius, and port “b” is battery level in millivolts. I have data for each of those, as well as a port I’m going to call “sg” for specific gravity.

Logging a reading is done by HTTP-POSTing some JSON data. The basic form looks like this:

{
 "data": {
   "attributes": {
     "port": "sg", // the name of the port
     "value": 1.0568, // the value for the reading
     "timestamp": "2016-01-23T18:35:03Z" // ISO8601 time in UTC
   },
   "type": "data-point"
 }
}

My data is all floating point numbers, so nothing too complex to worry about … except it’s all in the wrong format. To start with, my data looks like this:

{
 "dates": [ // comma-separated, zero-based month index, in local time
   "2016,0,23,18,35,3",
   // ... the rest of the dates ...
 ],
 "temp": [ // fahrenheit degrees
   70.26
   // ... the rest of the temperatures ...
 ],
 "sg": [ // specific gravity
   1.0568
   // ... the rest of the specific gravities ...
 ]
}

After many iterations, this is my jq script for conversion:

[.dates, .sg, .temp, .batt] | transpose | .[] |

  # there is probably a better way to convert from 0-based month to ISO8601
  # strptime bails on 0-based month, but produces a 0-based month structure?
  (.[0] | split(",") |
   [.[0],(.[1] | tonumber | .+1 | tostring),.[2],.[3],.[4],.[5]] |
   join(",") | strptime("%Y,%m,%d,%k,%M,%S") | todate) as $date |

  # specific gravity
  {"data":{"attributes":{"port":"sg","value":.[1],"timestamp":$date},
           "type":"data-point"}},

  # temperature - assumed Fahrenheit (Helium is Celsius)
  {"data":{"attributes":{"port":"t","value":((.[2] - 32) * 5 / 9),"timestamp":$date},
           "type":"data-point"}},

  # battery level - assumed volts (Helium is millivolts)
  {"data":{"attributes":{"port":"b","value":(.[3] * 1000),"timestamp":$date},
           "type":"data-point"}}

It still has one major bug: I’m just using local time as UTC. Figuring out how to deal with the zero-based month was enough hassle (strptime produces an array that uses a zero-based month, but it can’t consume a string with one). It seems like inserting mktime | . + 28800 | gmtime (or 25200) would be close enough … but I should have exported in UTC to start with.
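
For the record, here’s roughly the conversion I should have done, sketched in Python rather than jq. The timezone is an assumption on my part: the 28800/25200-second offsets above suggest US Pacific time, but adjust to taste:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def beerbug_date_to_utc(date_str, tz="America/Los_Angeles"):
    # "2016,0,23,18,35,3" is year, zero-based month, day, hour, minute, second
    year, month0, day, hour, minute, second = (int(x) for x in date_str.split(","))
    local = datetime(year, month0 + 1, day, hour, minute, second,
                     tzinfo=ZoneInfo(tz))
    # shift to UTC and format as the ISO8601 form Helium expects
    return local.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(beerbug_date_to_utc("2016,0,23,18,35,3"))  # 2016-01-24T02:35:03Z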

But anyway, let’s run this through jq:

$ jq -cf beerbug-to-helium.jq export-oatmeal-stout-jan-2016.json > helium-oatmeal-stout-jan-2016.json
$ head -3 helium-oatmeal-stout-jan-2016.json
{"data":{"attributes":{"port":"sg","value":1.0568,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}
{"data":{"attributes":{"port":"t","value":21.255555555555556,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}
{"data":{"attributes":{"port":"b","value":4146.7,"timestamp":"2016-01-23T18:35:03Z"},"type":"data-point"}}

Now I have one data-point per line, which will make uploading easy. But before uploading, I need to actually create my virtual sensor. This can be done via Helium’s HTTP API, but their example is missing the POST body (I assume it’s the same as the update’s body, minus the “id” field). It’s far simpler with the Helium Commander utility installed (yes, I’ve censored the UUID):

$ helium sensor create --name beerbug-536
$ helium --uuid sensor list
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
| ID                                   | MAC | TYPE | CREATED                     | SEEN                       | NAME        |
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
| ABIGUUID-USED-TOBE-HERE-BUTISGONENOW |     |      | 2016-12-18T06:11:54.182691Z | 2016-12-19T04:49:57.00331Z | beerbug-536 |
+--------------------------------------+-----+------+-----------------------------+----------------------------+-------------+
$ export HELIUM_BEERBUG=ABIGUUID-USED-TOBE-HERE-BUTISGONENOW

Now I can finally upload some data! I’m just going to pipe the file I have through xargs and let things chug along. The sed at the front escapes the double-quotation marks in the JSON file, so that xargs doesn’t strip them:

$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d

That … was slow: about 12,000 data points in an hour, or three per second, as some insist all speeds be measured. I have around 65,000 data points, so the whole upload would take five hours or more. That’s my fault, though – starting a fresh curl for every single data point is expensive. Let’s split up the work and run three curls in parallel:

$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"b\"" > helium-oatmeal-stout-jan-2016.json-b
$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"sg\"" > helium-oatmeal-stout-jan-2016.json-sg
$ tail +12001 helium-oatmeal-stout-jan-2016.json |\
  grep "\"t\"" > helium-oatmeal-stout-jan-2016.json-t
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-b |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d &
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-sg |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d &
$ sed 's/"/\\"/g' helium-oatmeal-stout-jan-2016.json-t |\
  xargs -n 1 curl -H "Content-Type: application/json" \
  -H "Authorization: $HELIUM_API_KEY" -XPOST \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries" -d

That was better, at about 8-ish points per second. I don’t expect much better out of my non-business DSL line; it’s saturated enough that the MARIO RUN games I’m playing while I wait are slow to start. If I were planning to bulk-load other data, I’d write something that kept the HTTP connection open and pipelined POSTs.
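
For the curious, here’s the kind of thing I mean: a minimal, untested sketch in Python using the requests library, reusing one connection for every POST (it only handles the keep-alive part; true pipelining would take more work). The endpoint, headers, file name, and environment variables are the ones used above; the rest is just how I’d imagine structuring it:

import os
import requests

API = "https://api.helium.com/v1"
sensor = os.environ["HELIUM_BEERBUG"]
headers = {
    "Content-Type": "application/json",
    "Authorization": os.environ["HELIUM_API_KEY"],
}

# a Session keeps the TCP/TLS connection open between POSTs, instead of
# paying curl's startup and handshake cost for every single data point
with requests.Session() as session, \
     open("helium-oatmeal-stout-jan-2016.json") as points:
    for line in points:
        resp = session.post(f"{API}/sensor/{sensor}/timeseries",
                            headers=headers, data=line)
        resp.raise_for_status()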

The real question is: now that the data is in Helium’s system, what can I do with it? The bummer is that I can’t use their web dashboard, because it only goes back 90 days, and this data is from nearly a year ago. Maybe I’ll adjust the dates in another experiment. I think the only way to change data later may be to create a new sensor (i.e. you don’t get to edit data – you have to rewrite it), so it’s best to think about where you scribble before you start.

But, I can do basic retrieval, with filter[start]= and filter[end]=:

$ curl -H "Authorization: $HELIUM_API_KEY" -XGET \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries?filter%5Bstart%5D=2016-02-01T12:00:00Z&filter%5Bend%5D=2016-02-01T12:05:00Z" |\
  jq .
{
 "data": [
   {
    "attributes": {
      "value": 4162.5,
      "timestamp": "2016-02-01T12:04:01Z",
      "port": "b"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "89b47b2f-500d-4af3-9d01-49766b5938b0",
    "meta": {
      "created": "2016-12-23T06:05:50.757111Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": 1.0131,
      "timestamp": "2016-02-01T12:04:01Z",
      "port": "sg"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "645ca2f8-96aa-4cd9-915d-3670ec1b43af",
    "meta": {
      "created": "2016-12-23T06:06:21.478522Z"
    },
    "type": "data-point"
   },
    {
     "attributes": {
       "value": 18.672222222222224,
       "timestamp": "2016-02-01T12:04:01Z",
       "port": "t"
     },
     "relationships": {
       "sensor": {
         "data": {
           "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
           "type": "sensor"
         }
       }
     },
     "id": "44afd122-b13d-4675-b35a-e48184f32c9a",
     "meta": {
       "created": "2016-12-23T06:06:38.950493Z"
     },
     "type": "data-point"
    },
...

I’ve elided the data points at 12:03:01, 12:02:01, and 12:01:01 for brevity. This is a bit verbose, and seems to contain a lot of duplicate information. It all makes more sense when you learn that you can query the same data by organization, element, or label, each of which maps to a group of sensors.

It’s also possible to request basic aggregate statistics for this data, by adding agg[type]= and agg[size]=. The types currently available are min, max, and avg, and window sizes start at one minute and go up to one day.

$ curl -H "Authorization: $HELIUM_API_KEY" -XGET \
  "https://api.helium.com/v1/sensor/$HELIUM_BEERBUG/timeseries?filter%5Bstart%5D=2016-02-01T12:00:00Z&filter%5Bend%5D=2016-02-01T12:30:00Z&agg%5Btype%5D=avg&agg%5Bsize%5D=10m" |\
  jq .
{
 "data": [
   {
    "attributes": {
      "value": {
        "max": 18.7,
        "avg": 18.6819444444444,
        "min": 18.6555555555556
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(t)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "ff308e69-a2c5-43a8-9215-dd4042b51104",
    "meta": {
      "created": "2016-12-23T06:06:46.98618Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": {
        "max": 1.0133,
        "avg": 1.01325,
        "min": 1.0132
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(sg)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "9d09823b-5302-4fd8-94f4-9c1e2ef62b99",
    "meta": {
      "created": "2016-12-23T06:06:29.719129Z"
    },
    "type": "data-point"
   },
   {
    "attributes": {
      "value": {
        "max": 4168,
        "avg": 4161.15,
        "min": 4152.5
      },
      "timestamp": "2016-02-01T12:20:00Z",
      "port": "agg(b)"
    },
    "relationships": {
      "sensor": {
        "data": {
          "id": "8dce390e-082a-47fc-85cf-43adafd30edd",
          "type": "sensor"
        }
      }
    },
    "id": "5cd24bb5-30ea-4278-bbb0-082c8f25a5fe",
    "meta": {
      "created": "2016-12-23T06:06:01.779172Z"
    },
    "type": "data-point"
   },
...

Again, I’ve elided the results for 12:10 and 12:00 for brevity. This seems like it could be very convenient for supporting something like a dashboard. Some things I haven’t shown are the ability to restrict a query to specific ports, and how large result sets are paginated, but those are also quite simple. The requests needed to display min/max/avg data on a zoomable, scrollable timeline would be very straightforward. And that’s what Helium’s dashboard appears to give you, if your data is recent.
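
To make that concrete, here’s a hedged sketch of the kind of request a dashboard panel might make, again in Python with the requests library. The URL and the filter/agg parameters are the same ones from the curl commands above; everything else is just my guess at the wiring:

import os
import requests

API = "https://api.helium.com/v1"
sensor = os.environ["HELIUM_BEERBUG"]
headers = {"Authorization": os.environ["HELIUM_API_KEY"]}

# one request per visible window: min/avg/max per 10-minute bucket
resp = requests.get(
    f"{API}/sensor/{sensor}/timeseries",
    headers=headers,
    params={
        "filter[start]": "2016-02-01T12:00:00Z",
        "filter[end]": "2016-02-01T12:30:00Z",
        "agg[type]": "avg",
        "agg[size]": "10m",
    },
)
resp.raise_for_status()

for point in resp.json()["data"]:
    attrs = point["attributes"]
    print(attrs["timestamp"], attrs["port"], attrs["value"])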

But I need some way to visualize historical data as well. Read part three to find out what I came up with.

Beer IoT (Part 1)

I’m not super into the Internet-of-Things. There are no wifi lightbulbs, electronic locks, or smart thermostats in my house. But, I’m a homebrewer, and that means I love new ways to get data about my beer. I backed The BeerBug on Kickstarter, and I’ve used it on a number of batches since early 2014.

The data my BeerBug provides is simple, but interesting: air temperature and specific gravity, measured once per minute. It gives me a pretty good idea of when a beer has finished or stalled.

The user experience leaves something to be desired, though. The website is clunky, and was down for a month or more recently. The mobile app is just a web view. There is no way to use the device without the website.

So, I have two goals over the next few months. The first is to extract all of the data I have recorded with my BeerBug, and the second is to find an alternative. This post covers the first goal, and the next will begin to explore the second.

The BeerBug offers an API … that only covers active brewing, not history. Beer pages allegedly offer CSV and XML data download, but the links haven’t worked in months. You can view graphs of historical brews on the website, though, so they have the ability to fetch that data.

Pulling up the Chrome web inspector while visiting a beer page shows an XHR for a “graph.php” that returns the JSON used to draw the graph. Try as I might, I haven’t been able to construct a curl command that gets the same data – it always comes back with “0” or “null” in several fields. There’s almost certainly some header I’m missing, so I’ve taken an alternate route instead.

The network tab of Chrome’s web inspector will let you “Save as HAR with Content.” This exports a JSON file with all the information the inspector is showing. Lucky for me, that includes the content of the graph.php XHR response. So, switching the graph view from “25 points” to “all”, waiting for the new graph.php request to complete, and then saving as HAR captured my data.

The data from the XHR is the last in the log entries, so it’s easy to extract with jq:

$ jq ".log.entries[-1].response.content.text | fromjson" \
  export-oatmeal-stout-jan-2016.har > export-oatmeal-stout-jan-2016.json

Now I can start to explore the data:

$ jq ". | keys" export-oatmeal-stout-jan-2016.json
[
 "al",
 "batt",
 "dates",
 "degrees",
 "ext",
 "plato",
 "platod",
 "sg",
 "success",
 "temp",
 "temp2"
]

Almost all of these fields are arrays with one entry per measurement:

  • al: alcohol percentage
  • batt: battery voltage (volts)
  • dates: date of measurement (comma-separated strings year,month,day,hour,minute,second – not width-padded, zero-based month index, local timezone)
  • platod: degrees plato
  • sg: specific gravity
  • temp: air temperature (either Fahrenheit or Celsius, depending on the value of the “degrees” field)
  • temp2: probe temperature

Non-array fields:

  • degrees: what units “temp” and “temp2” are in (“F” for Fahrenheit, and I assume “C” for Celsius, but I haven’t checked)
  • ext: unknown
  • plato: unknown
  • success: unknown

Just a bit of data checking: I started the beer on January 23, 2016, and finished it on February 8:

$ jq ".dates[0], .dates[-1]" export-oatmeal-stout-jan-2016.json
"2016,0,23,18,35,3"
"2016,1,08,15,18,3"

Its specific gravity started about where I normally start my beers, and ended a little below where I normally finish them:

$ jq ".sg[0], .sg[-1]" export-oatmeal-stout-jan-2016.json
1.0568
1.0082

That means it may have a 6.4% alcohol content by volume:

$ jq ".al[0], .al[-1]" export-oatmeal-stout-jan-2016.json
0
6.4

And finally, it was kept in a nice cool range (`add / length` is jq for “average”):

$ jq ".temp | max, min, add / length" export-oatmeal-stout-jan-2016.json
71.18
63.4
65.68423989795319

Neat. Let’s compare all the beers I exported:

# extract all xhr data
$ for x in export*.har; \
    do jq ".log.entries[-1].response.content.text | fromjson" $x \
    > ${x/har/json}; \
  done
# extract basic data
$ for x in export*.json; \
    do echo $x && jq -c '{"sg":.sg[0],"fg":.sg[-1],"abv":.al[-1],"temp":{"min":.temp|min,"max":.temp|max,"avg":(.temp|add/length)}}' $x; \
  done
export-abbey-oct-2015.json
{"sg":1.0498,"fg":1.4284,"abv":0,"temp":{"min":69.74,"max":79.96,"avg":72.70824454043661}}
export-beechwood-smoke-may-2014.json
{"sg":1.0511,"fg":0.9935,"abv":7.5,"temp":{"min":71.8,"max":83,"avg":75.40845794392524}}
export-butternut-stout-nov-2014.json
{"sg":1.0529,"fg":1.3635,"abv":0,"temp":{"min":65.36,"max":74.41,"avg":69.15657534246593}}
export-ipa-may-2015.json
{"sg":1.0475,"fg":0.9946,"abv":6.7,"temp":{"min":68.81,"max":80.21,"avg":71.19772108108131}}
export-mead.json
{"sg":1.115,"fg":1.0389,"abv":10,"temp":{"min":61,"max":70.84,"avg":65.09618010573946}}
export-oatmeal-stout-jan-2016.json
{"sg":1.0568,"fg":1.0082,"abv":6.4,"temp":{"min":63.4,"max":71.18,"avg":65.68423989795319}}
export-oatmeal-stout-nov-2015.json
{"sg":1.0639,"fg":1.0108,"abv":7,"temp":{"min":63.66,"max":77.25,"avg":69.64541020966313}}
export-oatmeal-stout-sep-2014.json
{"sg":1.0499,"fg":0.9973,"abv":7.3,"temp":{"min":72.3,"max":81.8,"avg":76.59252173913043}}
export-pumpkin-ale-nov-2015.json
{"sg":1.0529,"fg":1.0134,"abv":5.2,"temp":{"min":63.37,"max":70.69,"avg":66.15414939483689}}

There is quite a bit more analysis that should be done on this data. For example, I know that the specific gravity jumps around quite a lot. It is measured by a Hall-effect sensor capturing the weight of a plumb in the beer, so it’s a bit touchy about temperature changes and carbonation bubbles from active yeast. And the simple stats about the temperature (min, max, mean) don’t really tell the whole story either.
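
As a tiny example of the sort of analysis I mean, a moving average would damp the specific-gravity wobble before I read anything into it. A quick sketch in Python, assuming the same exported JSON file as above (the one-hour window is an arbitrary pick, not something I’ve tuned):

import json

with open("export-oatmeal-stout-jan-2016.json") as f:
    export = json.load(f)

sg = export["sg"]
window = 60  # readings arrive once per minute, so this is a one-hour window

# simple moving average to smooth out plumb wobble from temperature
# swings and CO2 bubbles
smoothed = [
    sum(sg[i:i + window]) / window
    for i in range(len(sg) - window + 1)
]

print(sg[0], smoothed[0])    # raw vs. smoothed starting gravity
print(sg[-1], smoothed[-1])  # raw vs. smoothed finishing gravity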

But, I’m fairly well convinced that I now have a copy of my recorded data. What is the path forward? Find out in part two.

Roundtripping the HTTP Flowchart

Webmachine hackers are familiar with a certain flowchart representing the decisions made during the processing of an HTTP request: the one drawn by Alan Dean. Webmachine was designed as a practical, executable form of that flowchart.

It has long bugged many of the Webmachine hackers that this relationship is one-way, though. Webmachine was made from the graph, but the graph wasn’t made from Webmachine. I decided to change that in my evenings last week, while trying to take my mind off of Riak 1.0 testing.

The result is a version of the HTTP flowchart that only a Webmachine hacker could love. It’s ugly and missing some information, but the important part is that it’s generated by parsing webmachine_decision_core.erl.

I’ve shared the code for generating the graph in the gen-graph branch of my webmachine fork. Make sure you have Graphviz installed, then check out that branch and run make graph && open docs/wdc_graph.png.

In addition to the PNG, you’ll also find a docs/wdc_graph.dot if you prefer to render to some other format.

If you’d really like to dig in, I suggest firing up an Erlang node and looking at the output of wdc_graph:parse("src/webmachine_decision_core.erl"):

[{v3b13, [ping],                     [v3b13b,503]},
 {v3b13b,[service_available],        [v3b12,503]},
 {v3b12, [known_methods],            [v3b11,501]},
 {v3b11, [uri_too_long],             [414,v3b10]},
 {v3b10, [allowed_methods,'RESPOND'],[v3b9,405]},
 {v3b9,  [malformed_request],        [400,v3b8]},
...

If you’ve looked through webmachine_decision_core at all, I think you’ll recognize what’s presented above: a list of tuples, each one representing the decision named by the first element, with the calls made to a resource module as the second element, and the possible outcomes as the third element. Call wdc_graph:dot/2 to convert those tuples to a DOT file.

There are a few holes in the generation. Some response codes are reached from decisions spread across the graph, causing long arrows to cross confusingly. The edges between decisions aren’t labeled with the criteria for following them. And some resource calls are left out (like those made from webmachine_decision_core:respond/1 and the response body producers and encoders). Still, it’s a nice list for future tinkering.

NerdKit Gaming: Part 2

If you were interested in my last bit of alternative code-geekery, you may also be interested to hear that I’ve pushed that NerdKit Gaming code further. If you browse the GitHub repository now, you’ll find that the game includes a high-score board, saved in EEPROM so it persists across reboots. It also features a power-saving mode that kicks in if you don’t touch any buttons for about a minute. And key-repeat now allows the player to hold a button down, instead of pressing it repeatedly, to move the cursor multiple spaces.

You may remember that I left off my last blog post noting that there wasn’t much more in store for the game until I could find a way to slim down the code to fit new things. So what allowed these new features to fit?

Well, I did find ways to slim down the code, and I was right about making the game state global. But I also re-learned a lesson that is at the core of hacking: check your base assumptions before fiddling with unknowns. In this case, my base assumption was the Makefile I had imported from an earlier NerdKits project. While making the game state global saved a little better than 1k of space, changing the Makefile so that unused debugging utilities (uart, printf, scanf) weren’t linked in saved about 6k.

In that learning, I also found that attempting to out-guess gcc’s “space” optimization is a losing game. Making the game state global had a positive effect on space, but making the button state global had a negative effect. Changing integer types would help in one place, but hurt in others. I’m not intimately familiar with the rules of that optimizer, so choosing what to prod next felt like spinning a wheel of chance.

You may notice that I ultimately returned the game state to a local variable, passed in and out of each function that needed it. The reason for this was testability. It’s simply easier to test something that doesn’t depend on global state. Once I had a bug that required running a few specific game states through these functions repeatedly, it just made sense to pay the price in program space in order to be able to write unit tests to cover some behaviors.

So now what’s next? This time, it’s not much until I buy a new battery. So much reloading and testing finally drained the original 9V. Once power is restored, I’ll probably dig into some new peripheral … maybe something USB?

NerdKit Gaming

Contrary to the evidence on this blog, not all of the code I write is in Erlang. It’s not even all web-based or dealing with distributed systems. In fact, this week I spent my evenings writing C for an embedded device.

I’ve mentioned NerdKits here before (affiliate link). This week I finally dug into the kit I ordered so long ago, and took it somewhere: gaming.

The result is a clone of a simple tile-swap matching game. I used very little interesting hardware outside the microcontroller and LCD — mostly just a pile of buttons. The purpose of this experiment was to test the capabilities of the little ATmega168 (and my abilities to program it).

I’ve put the code on github, if you’re interested in browsing. If you don’t have a NerdKit of your own to load it up on, I’ve also made a short demo video, and snapped a few up-close screenshots.

What did I learn? Mostly I remembered that writing a bunch of code to operate on a small amount of data can be just as fun as writing a bunch of code to operate on a large amount of data. Lots of interaction with the same few bytes from different angles has a different feel than the same operation repeated time and time again on lots of different data. I also learned that I’ve been spoiled by interactive consoles and fast compile/reload times. When it takes a minute or more to restart (after power cycles and connector un-re-plugging) and I don’t have an effectively infinite buffer to dump logs in, I think a little longer about each experiment.

So what’s next? Well, not much for this game, unless I slim down the code some more. Right now it compiles to 14310 bytes. Shortly before this, it was 38 bytes larger and refused to load onto the microcontroller properly, since it, plus the bootloader, exceeded the 16K of flash memory available. My first attack would probably be to simply move the game board to a global variable instead of passing it as a function argument. The savings in stack-pushing should gain a little room.

If I were to make room for new operations, then a feature that saved a bit of state across power cycles would be a fun target. What’s a game without a high-score board?

Map/reducing Luwak

I was inspired, this weekend, by off-list discussion of Luwak and by Guy Steele’s talk How to Think about Parallel Programming—Not!. The two seemed naturally attracted, and thus I created the luwak_mr module.

The luwak_mr module exposes a simple function that knows how to walk a Luwak file tree and send the keys for each of its leaf nodes off to a Riak map/reduce process. This enables one to run a map function against each block in a Luwak file. For example, one might split a large Latin-1 file into “words” (the luwak_mr_words module in the project is an example implementation of the method that Guy Steele presented).

And, yes, this blog has been dormant for a while … I’ve been busy. Lots of woodworking and travel. Making music has also begun to require more time, and yesterday I learned how to ski cross-country. Always busy, the life of a hobbyist.

Update: luwak_mr has also been accepted to the Riak function contrib. So if you’re in the habit of browsing there, fetch the latest.

Spinner!

Now that iPhone webapps are all the rage, I thought I’d release one of my own. Behold: the Spinner for iPhone and Dashboard. Good riddance to indecision – let the spinner choose!

It’s my birthday, and I’ve decided to give you all a present.

I needed a break from Riak & BeerRiot last week, and the thing everyone was talking about was iPhone webapps.

I read up and spent a while considering different apps I could build. Then it finally hit me: converting Dashboard widgets to iPhone webapps should be trivial!

Unfortunately, it’s not completely trivial if your widget uses the widget preferences, buttons, and/or back-face configuration stuff. But, it’s not impossible to emulate all that.

Lucky for me, I had a widget lying around that I’d written a few years ago. After a few hours of trial-and-error, I’m finally happy with it, and I’ve decided to release it to the world.

I give you the Spinner iPhone webapp. If you ever find yourself in a moment of indecision, tell the Spinner what your choices are and give it a flick.

If you’re without iPhone or iPod Touch, but you have a Mac running a recent OS X, I’ve also released the Dashboard widget, for you to relinquish the same decisive responsibility on the desktop.

Enjoy!

This also gave me an excuse to upgrade the Webmachine installation on BeerRiot. Virtual hosts for the win!

Webmachine POST Example

Many people have asked for an example Webmachine resource that responds to POST. If you follow my twitter feed, you may have caught this gem.

I figured that example could use a little fleshing out, so I’ve added a resource to my wmexamples repo.

formjson_resource.erl makes an attempt at demonstrating the simplest way to handle a POST. It also demonstrates the difference between content-producing functions (to_json/2 in this example, and others named in content_types_provided/2), which put content in the response body simply by returning it, and other functions, which have to put content in the response body by returning a modified ReqData.

For another example of handling POST, read demo_fs_resource.erl that comes with Webmachine. It implements post_is_create/2, create_path/2, content_types_accepted/2, and accept_content/2 to handle POST requests. (Incidentally, demo_fs_resource is a good example of many Webmachine resource functions.)

Updated to include content_types_accepted/2 in the list of functions handling POST requests – thanks for catching it, Lou!