Emergentweet

Late on the 15th Feb, Chris Tweedie posted a couple of interesting tweets:

christweets

It was an interesting thought, so AJ and I had a quick chat about it and decided we should give it a go and see what we could do.  We’re actually working right now on how to make the Citizen Science applications more user friendly, and also investigating the feasibility of using PhoneGap (http://www.phonegap.com/) as a means of making our HTML5 apps more widely available.

I asked AJ to write a bit about the development of the app:

PhoneGap is an interesting development toolkit that lets you write an HTML5 application and then wrap it as an ‘app’ that can be deployed natively on most mobile phone hardware. Developing PhoneGap applications is pretty straightforward if you know HTML5 and JavaScript. First of all, you install the relevant development environment (in my case, this required the Android Development Kit and drivers, the Eclipse plugins and the PhoneGap libraries). After that you write your HTML5 application as you normally would, with one key difference: PhoneGap adds additional calls to the browser API (Application Programming Interface) that let you access the native hardware. For example, there are calls such as ‘navigator.compass.watchHeading’, which lets you interface with the digital compass found in most phones.
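As a minimal sketch of that call (the `startCompassWatch` wrapper and its `onHeading` callback are my own names, not part of the app), watching the compass looks something like this. Note that `navigator.compass` only exists when the page is running inside the PhoneGap wrapper:

```javascript
// Sketch of watching the digital compass through PhoneGap's API.
// navigator.compass is only present inside the PhoneGap wrapper, so we
// guard for it and return null when running in a plain browser.
function startCompassWatch(onHeading) {
  if (typeof navigator !== "undefined" && navigator.compass) {
    return navigator.compass.watchHeading(
      function (heading) { onHeading(heading.magneticHeading); }, // success
      function (error) { console.log("Compass error: " + error); }, // failure
      { frequency: 500 } // poll the compass every 500 ms
    );
  }
  return null; // no compass available in this environment
}
```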

The Emergentweet application is fairly simple. It presents a single-screen interface that watches the compass and GPS. When someone clicks to report an emergency, that spatial information is wrapped into a prefilled Twitter URL; they click the ‘Tweet’ button and their information is published to the cloud. All up it took around an hour and a half from start to finish, including the installation of the SDKs (Software Development Kits). (Real Blue Sky Thinking ™)
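Assuming the tweet simply carries the hashtag, the rounded coordinates and the bearing (the exact tweet format here is my guess, not necessarily what the team used), the prefilled URL could be built with Twitter’s web intent endpoint like so:

```javascript
// Sketch of wrapping a GPS fix and compass bearing into a prefilled
// tweet URL. Coordinates are rounded to 4 decimal places, as the
// prototype did; the tweet text layout is an assumption.
function buildTweetUrl(lat, lon, bearing) {
  var text = "#emergentweet " +
    lat.toFixed(4) + " " + lon.toFixed(4) + " " + Math.round(bearing);
  return "https://twitter.com/intent/tweet?text=" + encodeURIComponent(text);
}
```

Opening that URL hands the user over to Twitter with the message already filled in, so the app itself never needs Twitter credentials.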

So, when we got in on the morning of the 16th, I grabbed AJ, Benny and Kehan and said, guys, let’s go with this thing. Let’s throw a small amount of time at this for each of you and see if we can get a working prototype up at the end of the day.  It’s not meant to be pretty, but basically here’s a workflow:

  1. A person spots disaster on horizon (e.g. pall of smoke),
  2. A person runs an app on their phone, points at the smoke and presses a submit button,
  3. Their location and the bearing they were pointing is then tweeted with hashtags,
  4. A server app grabs the tweets that match the hashtag requirements, and displays them on a map as a point and a bearing line, and
  5. An emergency manager can then look at the server app and see where the lines intersect, to add to their own planning and on-ground work.
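For step 4, each sighting’s bearing line can be drawn by projecting the reported point out along its bearing. A minimal sketch using the standard spherical destination-point formula (the function name and the choice of how far to extend the line are mine):

```javascript
// Sketch of projecting a sighting out along its compass bearing to get
// the far end of a bearing line for the map. Standard spherical
// "destination point" formula; distanceKm sets the line's length.
function bearingLineEnd(lat, lon, bearingDeg, distanceKm) {
  var R = 6371; // mean Earth radius in km
  var toRad = Math.PI / 180, toDeg = 180 / Math.PI;
  var phi1 = lat * toRad, lambda1 = lon * toRad, theta = bearingDeg * toRad;
  var delta = distanceKm / R; // angular distance travelled
  var phi2 = Math.asin(Math.sin(phi1) * Math.cos(delta) +
                       Math.cos(phi1) * Math.sin(delta) * Math.cos(theta));
  var lambda2 = lambda1 + Math.atan2(
    Math.sin(theta) * Math.sin(delta) * Math.cos(phi1),
    Math.cos(delta) - Math.sin(phi1) * Math.sin(phi2));
  return { lat: phi2 * toDeg, lon: lambda2 * toDeg };
}
```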

Then I left the lads to it, occasionally seeing some strange tweets (like the brief period when the hashtag was accidentally shortened to “#emergentwee” which made the whole thing seem very different!).

At about 5pm AJ grabbed me and we went outside to do a full walkthrough of what they had built in a little over 7 hours between the three of them.  I took a brief video of the experience, shown below (warning: poor quality, no professional editing, and somewhat irreverent – put it this way, we had disasters that included a zombie apocalypse and the return of disco).

 

 

The end result of this work is shown on the website https://www.gaiaresources.com.au/emergentweet/.
*Warning: If this works in IE, it’s purely through luck.

You can go to this website now on your Android device, and download the Android App from the link in the top left corner of the window (or from this link directly – https://www.gaiaresources.com.au/emergentweet/dl/Emergentweet.apk).

We released this at about 5pm on the 16th February, about 24 hours from when we started.

announcement

 

As a proof of concept, it illustrated a few really interesting points.  As I knew from experience, GPS locations can be really dodgy (quite apart from the fact that we rounded the location data to 4 decimal places), as shown by the four points we used in the test I videoed – at least one fix likely came from mobile tower triangulation, which means there are significant errors in the points, and hence the bearing lines do not intersect.  Here’s an example, where the olive green (?) points are the four reported locations, the red points are where we actually took the sightings from, and a red X marks what we were aiming at.

wrongpoint

Of course, if we coded something like this for real use, we’d just write something into the app that would prevent someone from submitting the sightings/bearings without having a high quality GPS location ready to send. People would probably also benefit from previewing/adjusting their data on a map before it was sent to Twitter.
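A minimal sketch of that gate, using the accuracy field that the W3C Geolocation API reports with each fix (the threshold value and function name are my own assumptions):

```javascript
// Sketch: only allow a sighting to be submitted when the GPS fix's
// reported accuracy (an error radius in metres, per the W3C
// Geolocation API) is under a configurable threshold.
function canSubmit(position, maxErrorMetres) {
  return position.coords.accuracy <= maxErrorMetres;
}
```

In practice the app would keep watching the position and only enable the submit button once `canSubmit` returns true, perhaps with the threshold varying by disaster type.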

When I dug into the data a little deeper, I found the bearings were much more accurate than the GPS fixes.  This surprised me – I had heard anecdotally from other people that compass bearings were pretty bad on most devices, but on the ones we were using (a Samsung Galaxy Tab and an HTC Desire) they were pretty good!

Disasters are also time-specific, and in this proof of concept we didn’t deal with times and dates.  What we did discuss briefly was that the tweets should have a limited life on the map (probably dependent upon the disaster type) and would fade out over time.  Although I guess, as Twitter doesn’t keep tweets forever, this will happen anyway.
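That fade-out could be as simple as a linear decay of the marker’s opacity over a per-disaster lifetime (the linear curve and the parameter names here are assumptions, as this was never built):

```javascript
// Sketch of the limited-life idea: fade a tweet's map marker linearly
// from full opacity to zero over lifetimeHours, which could be tuned
// per disaster type. Times are millisecond timestamps.
function markerOpacity(tweetedAtMs, nowMs, lifetimeHours) {
  var ageHours = (nowMs - tweetedAtMs) / 3600000; // ms in an hour
  return Math.max(0, 1 - ageHours / lifetimeHours);
}
```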

We won’t take this project any further at the moment, but we are looking at using this as a platform for additional professional development across our team when we get another opportunity to play with this.  Some of the other areas we have already discussed as possible things to work on in the future include:

  • Restricting submission of the points if the positional accuracy is below a certain (disaster related?) threshold,
  • Drawing an intersection radius around particular bearings, and concatenating tweets for a particular event into this intersection,
  • Doing some 3D analysis on the bearings – so that we can account for line of sight properly (although when I’ve done this in the past, the big problem is a lack of terrain data of suitable accuracy),
  • Administration functions – so you can configure the app and the service appropriately, and
  • A cloud-based implementation of the server that would perform all of the analysis and provide interfaces to current emergency services systems.
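The intersection idea can be sketched with a local flat-earth approximation, which is reasonable over the few kilometres involved (the coordinates here are kilometres east/north of some local origin, and the data structure is my own invention):

```javascript
// Sketch of intersecting two sightings' bearing lines on a local
// flat-earth plane. x is km east, y is km north of a local origin;
// bearings are degrees clockwise from north.
function intersectBearings(a, b) {
  var toRad = Math.PI / 180;
  // Convert bearings to unit direction vectors.
  var d1 = { x: Math.sin(a.bearing * toRad), y: Math.cos(a.bearing * toRad) };
  var d2 = { x: Math.sin(b.bearing * toRad), y: Math.cos(b.bearing * toRad) };
  var denom = d1.x * d2.y - d1.y * d2.x; // 2D cross product
  if (Math.abs(denom) < 1e-9) return null; // parallel bearings never cross
  // Solve a + t*d1 = b + s*d2 for t (Cramer's rule).
  var t = ((b.x - a.x) * d2.y - (b.y - a.y) * d2.x) / denom;
  return { x: a.x + t * d1.x, y: a.y + t * d1.y };
}
```

A real version would also reject intersections that fall behind either observer (negative `t`), and grow an uncertainty radius around the crossing based on the GPS and compass error of each sighting.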

So why did a company that specialises in environmental work suddenly jump into emergency and disaster management?  We’re not about to change our business focus, but what we saw was an opportunity for a quick development project that involved some of the technologies we were already working with.  It was an interesting challenge for our team, and we’ll be doing internal presentation sessions about how it went for the whole team to learn from.

And now a point on the rapid prototyping process itself from AJ:

One of the most interesting things that I got out of this project was regarding team dynamics, and the tendency of us developer types to want to completely polish a component before moving on to the next thing. I noticed that a standard workflow for most of us programmers is to approach a problem as follows:

  1. Make it work
  2. Make it right
  3. Make it fast and slick

That’s a pretty good way to work. However, subconsciously we break down a bigger problem into a series of smaller ones. Naturally, each of these smaller problems gets tackled with the standard workflow, which means that very often the first few components are great, but the later components never seem to come along. By breaking down the components and handing each off to a separate developer, the result is a complete working prototype (albeit not pretty or fast) in a very fast time frame, as opposed to a half-working prototype. I’m sure there are other ways we can adjust our development process with this realisation (such as moving on before we do the slick bit when we’re rapidly prototyping).

If you have any questions about what we did here, then let us know by contacting either myself or AJ either via email (Piers or AJ) or twitter (Piers or AJ).
