Building the Android mobile prototype app for Bomb Sight

Since my last blog post giving a technical overview, the Android app has come along nicely, and the second prototype is now being developed into what will become the released version of the app.

Development of the prototype continued using the Phonegap framework – allowing for possible future porting to the iPhone – and now uses the Javascript-based Leaflet mapping library for the map view, rather than the Android-specific library we used in the initial prototype.

A quick walkthrough

When you load the current version of the app, you are shown a quick introduction and then invited to view the map when you are ready. The map shows the data from approximately the first 8 months of the Blitz (October 1940 to June 1941), including the type of bomb that fell at each location.

If you are in the London area then the map will zoom in to your current location and highlight it with a marker in amongst the bomb markers. If you’re not in London (or if we can’t locate you) the map will zoom to a view of Greater London which gives you an impression of the vast number of bombs that fell during that period. If you’ve moved the map around, you can always zoom back to your current location using the crosshair icon underneath the zoom controls.
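To give a rough idea of how this works behind the scenes, here is a minimal sketch of the kind of location handling Leaflet supports – the Greater London bounds, zoom level and marker handling below are illustrative values rather than the app’s actual ones:

```javascript
// Rough sketch of zoom-to-location behaviour using Leaflet's geolocation support.
var map = L.map('map');
var londonBounds = L.latLngBounds([51.28, -0.51], [51.69, 0.33]); // approximate

map.locate(); // ask the device for the user's position

map.on('locationfound', function (e) {
  if (londonBounds.contains(e.latlng)) {
    // In London: mark the user's position and zoom in to street level
    L.marker(e.latlng).addTo(map);
    map.setView(e.latlng, 16);
  } else {
    // Outside London: show the Greater London overview instead
    map.fitBounds(londonBounds);
  }
});

map.on('locationerror', function () {
  // Couldn't locate the user at all: fall back to the overview
  map.fitBounds(londonBounds);
});
```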

Map view with small icons

When you’re zoomed in to a small area, we show you a map with markers that depict the type of bomb that fell at each location. If you’re zoomed further out then we simplify the icons so they don’t just end up covering the street layout on the map.
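In Leaflet, this kind of switch can hang off the map’s zoom events. A minimal sketch, assuming a `bombMarkers` array and placeholder icon files and threshold (not the app’s real assets):

```javascript
// Swap between detailed and simplified markers as the zoom level changes.
var detailedIcon = L.icon({ iconUrl: 'icons/bomb-type.png', iconSize: [32, 32] });
var simpleIcon = L.icon({ iconUrl: 'icons/bomb-dot.png', iconSize: [8, 8] });

map.on('zoomend', function () {
  var icon = map.getZoom() >= 15 ? detailedIcon : simpleIcon;
  bombMarkers.forEach(function (marker) {
    marker.setIcon(icon); // update every marker in place
  });
});
```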

Map view with larger icons

The augmented reality view shows you markers hovering over where bombs fell, scaled so that closer locations get larger markers and those further away get smaller ones. For some more contextual information, where we can, we add a label with the name of the street the bomb fell on. If you tap on a marker, you’ll get a bit more information about the bomb and how far away from your location it fell. The associated radar dot will also be highlighted so you can see where it is in relation to the others showing on the display.
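The scaling itself is just a mapping from distance to marker size. Something along these lines, with illustrative numbers rather than the values we are actually tuning:

```javascript
// Derive a marker scale from the user's distance to a bomb location,
// so nearer bombs appear larger. All numbers are illustrative.
function markerScale(distanceMetres) {
  var maxDistance = 1000; // only consider bombs within 1km
  var minScale = 0.4;     // marker size at the edge of that range
  var maxScale = 1.0;     // marker size right next to the user
  var t = Math.min(distanceMetres / maxDistance, 1); // 0 = here, 1 = far away
  return maxScale - t * (maxScale - minScale);
}
```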

Augmented Reality view

Prototyping and iterative improvements

As I’ve been building the prototype I’ve been regularly testing it out, including taking it onto the streets around where I live – which has also helped me find out more about my own area. The user testing at The National Archives in October also raised a number of questions and possible issues with the app, which we are now looking into.

Because we are currently relying on getting data from the project website rather than shipping it all with the app, the app can sometimes take a little while to load the data, and it will be using up your data allowance as well. In the coming weeks we’ll be looking to reduce the amount of data transfer required by stripping out information we don’t need, and possibly by shipping some of the data with the app so it doesn’t need to be transferred over the phone network once you’ve downloaded it from the app store.
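One simple approach for a Phonegap app is to cache the response on the device after the first download. A sketch of the idea for modest amounts of data, where the storage key and URL are hypothetical:

```javascript
// Cache the bomb data locally so repeat visits don't hit the network.
// 'bombData' and the URL passed in are hypothetical, and localStorage
// has size limits, so this suits a trimmed-down dataset.
function loadBombData(url, callback) {
  var cached = window.localStorage.getItem('bombData');
  if (cached) {
    callback(JSON.parse(cached)); // serve the saved copy
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    window.localStorage.setItem('bombData', xhr.responseText); // save for next time
    callback(JSON.parse(xhr.responseText));
  };
  xhr.send();
}
```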

There are currently performance issues when we’re showing a lot of information on the map, so we will be working to make this a bit easier for the phone to handle by simplifying how the data is displayed and not displaying any more information than is needed.
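One of the techniques we can try is only creating markers for the area the user is actually looking at, rather than for the whole of London at once. A rough sketch in Leaflet (function and variable names are illustrative):

```javascript
// Only add markers that fall within the current map view, so Leaflet
// isn't managing thousands of off-screen elements at once.
function refreshVisibleMarkers(map, allFeatures, layerGroup) {
  var bounds = map.getBounds();
  layerGroup.clearLayers();
  allFeatures.forEach(function (feature) {
    var coords = feature.geometry.coordinates; // GeoJSON order is [lon, lat]
    var latlng = L.latLng(coords[1], coords[0]);
    if (bounds.contains(latlng)) {
      layerGroup.addLayer(L.marker(latlng));
    }
  });
}
```

Calling something like this from the map’s `moveend` event would keep the visible set up to date as the user pans around.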

With the augmented reality view, we started off showing a simple icon for each bomb with a label underneath to show how far away it fell from where the user is currently standing. From testing this out and about on the street, it was clear that this didn’t give a very good experience, as it wasn’t easy to tell which locations the markers referred to. This is how we came to change icon sizes and trial address labels to help users contextualise what they are seeing.

Where lots of bombs fell in a certain area, the screen can become quite full, so we will be trying a number of techniques to reduce the clutter. The screenshot below shows a trial of slightly translucent markers to let users see markers in the distance as well as the closest ones, but it may be that it adds more detail to the screen than is necessary. What do you think? Do the address labels add to the confusion too?

Lots of markers

On the subject of augmented reality, when you’re trying to overlay information in an exact location, you need to make sure you know exactly where the user is, and which way their phone is pointing. This can be a bit problematic at times if your location hasn’t been detected accurately (e.g. if you’re under trees, near tall buildings or away from wifi networks), and may mean that what you’re seeing on the display doesn’t actually reflect reality. Over time as your phone refines its location, the display will update to correct this, but it may still lead to some confusion initially.

I will try to mitigate some of these issues in the final app by checking location accuracy (and currency, in case you’ve walked on a bit since your phone last detected your location) before showing the first display of data, and perhaps by showing a small map indicating where we believe the user to be.
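The standard geolocation API gives us both an accuracy estimate and a timestamp for each fix, so the check could look something like this – the thresholds are illustrative and `showAugmentedRealityView` is a hypothetical app function:

```javascript
// Wait for a location fix that is both accurate and recent enough
// before showing the first AR display. Thresholds are illustrative.
navigator.geolocation.watchPosition(function (position) {
  var accurateEnough = position.coords.accuracy <= 50;           // metres
  var recentEnough = (Date.now() - position.timestamp) <= 30000; // milliseconds
  if (accurateEnough && recentEnough) {
    showAugmentedRealityView(position); // hypothetical app function
  }
}, function (error) {
  console.log('Location error: ' + error.message);
}, { enableHighAccuracy: true });
```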

I’ll post again when the app is available for wider testing, but why not tweet us @BombSightUK if you’d like to be an early tester.


Progress update and preview of the website and mobile App

The last few weeks have seen considerable progress on the project. Amongst the many tasks we have been working on: we have selected a project name (see blog post: finding an application name), we have finished processing all of the maps and imported them into GeoServer, we have completed the digital capture of bomb locations from both the weekly maps and the aggregate map sheets, and we are developing our brand identity and communication plan.

We have been working diligently on the development of the web and mobile apps. Progress is going well and we are planning our second round of user testing at The National Archives this Friday, Nov 9th, between 12 and 4pm. Our first round of testing, two Saturdays ago, was positive, and we identified some issues that have since been resolved (I will write a separate blog post discussing the user testing).

The apps are currently without their stylish front-end, as this is still being designed and implemented. What we will be testing is the functionality and meaning of the buttons and text, as well as evaluating how intuitive the tools are to use. We have also designed a set of icons for the different data we are clustering together, and we will evaluate whether their inherent meaning can be understood.

For a sneak preview of the apps we have created a short video – I used the freely available Microsoft Movie Maker together with a lot of screenshots, which I linked together using transitions. For the music I browsed the openly licensed catalogue available via Jamendo. This is my first attempt at creating a video, so it is not too long, but hopefully it gives you a taste of what we are developing.

I have embedded the video in this blog post, but the project has also set up a YouTube channel where we will be posting more updates and project information. The channel is called BombSightUK.



Announcing Bomb Sight

Watch this space… coming soon:

http://bombsight.org/

https://twitter.com/BombSightUK


Bomb Sight: Naming and Branding our Web and Mobile App

For the last two to three weeks we have been trying to find a suitable name and identity for the interactive website and the mobile mapping app with augmented reality functionality – the two key outputs from the project.

The four core members of the team wrote a list of names that would encapsulate the essence of the applications that are being developed. We wanted a name and identity that would:

  • Communicate the main purpose of the apps – which is to enable users to browse information relating to the WW2 Bomb Census.
  • Not just focus on the data that we currently have, but work in the future should we find funding to scan and capture data for other British cities and the entire period of the war.

Yasia, the project designer, came up with a name that we all liked very much. I spoke to Andrew Janes, the TNA map archivist who is on our steering committee, and he also preferred this name from the long list. As did a straw poll of students I tutor at the University of Portsmouth.

We selected the name Bomb Sight.

The name is also that of the piece of equipment used in various WW2 aircraft to predict the path of a released bomb.

Bomb Sight Mobile App Logo

For our logo we have included a picture of a plane and a bomb – the instruments of destruction. We have two versions of the logo – one for the app and one for the website.


Technical overview of the Android mobile application

This blog post gives an overview of the technical architecture for the Stepping Into Time Android mobile app and the process we went through to define it.

Because of the nature and rapid speed of development of mobile technologies, some of the choices we made earlier on in the project have changed along the way, largely due to new options becoming available between the initial investigation and the building of the first prototype application.

Application framework – native or cross-platform?

The requirement for the project was to build a native Android app that people would be able to download to their handset from an app store to use enhanced features that we wouldn’t be able to offer them through the main project website – namely the augmented reality view of the bomb census data.

When building a native app for a mobile device, you are writing code that will only run on one platform (e.g. Java code for Android, Objective-C for iOS on the iPhone), meaning that if you want to release the app on a different platform (e.g. iPhone) in the future, you’ll need to rewrite the whole thing in another programming language. As the project is initially only looking to release an Android app, this is perhaps not a major issue, but as we are planning to release the app code as open source, it would be good if it could be easily extended at a later date, either for this project or for others looking to do similar things.

To build in some more future flexibility, there are a couple of projects which can help abstract application development away from a specific platform, allowing you to write one app which can be ported easily to other types of mobile devices. Phonegap is one such project, which provides you a skeleton framework for a native application on a number of different platforms (including Android and iPhone), and lets you build most of your application in Javascript, HTML and CSS, widely used languages on the web.

Having had some experience in building Phonegap prototypes in the past, I had originally looked at Phonegap as an option for building the Stepping Into Time mobile app, but decided not to pursue it for this project as there wasn’t an augmented reality library available for it at that point in time.

An initial prototype was built as a native Android application using the Wikitude augmented reality library (see my previous post comparing different libraries) and an Android-specific mapping library called OSMDroid.

In the meantime, the Wikitude library has been released as a plugin for Phonegap, meaning we can once again consider using Phonegap for the project. Using Phonegap not only means that the app could be easily ported to other mobile platforms in the future, but also that we can share Javascript and CSS with the main project website, thereby simplifying development and leaving less code to maintain overall.

The final decision will be made after we’ve built a Phonegap prototype, but this is looking like it will be the preferred choice.

Integration with the project website

Whichever way we build it, the mobile app will retrieve all of its bomb location data and historical mapping from the main project website and will use the same map tiles that the website uses.

The geographic point data showing bomb locations will be requested in GeoJSON (geographic Javascript object notation) format from a Web Feature Service (WFS) provided by GeoServer. The GeoJSON data will then be passed to the augmented reality view for processing and display using Javascript.
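For illustration, the request to GeoServer might look something like this – the host, workspace and layer names are placeholders rather than the project’s real ones:

```javascript
// Build a WFS GetFeature request asking GeoServer for GeoJSON output.
// The bbox limits the response to (roughly) Greater London, in lon/lat order.
var wfsUrl = 'http://example.org/geoserver/wfs' +
  '?service=WFS' +
  '&version=1.0.0' +
  '&request=GetFeature' +
  '&typeName=bombsight:bomb_locations' + // hypothetical layer name
  '&outputFormat=application/json' +
  '&bbox=-0.51,51.28,0.33,51.69';

var xhr = new XMLHttpRequest();
xhr.open('GET', wfsUrl);
xhr.onload = function () {
  var geojson = JSON.parse(xhr.responseText);
  // hand the parsed features to the AR view or the map layer
};
xhr.send();
```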

For the mapping view, the GeoJSON data will either be parsed into a Java array and added to OSMDroid (if we build a native app) or passed to the Leaflet mapping library (if we build a Phonegap app), which will parse it using Javascript and display it as it would any other data layer.
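In the Phonegap case this is pleasantly little code, as Leaflet understands GeoJSON natively. A sketch, where `iconForBombType` is a hypothetical helper that picks a marker icon based on the feature’s properties:

```javascript
// Display the WFS response as a Leaflet layer, one marker per bomb.
L.geoJson(geojson, {
  pointToLayer: function (feature, latlng) {
    // iconForBombType is a hypothetical helper mapping bomb type to an icon
    return L.marker(latlng, { icon: iconForBombType(feature.properties.type) });
  },
  onEachFeature: function (feature, layer) {
    layer.bindPopup(feature.properties.type); // basic tap-for-info popup
  }
}).addTo(map);
```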

Contemporary maps will be requested as tiled imagery from a mapping provider such as Cloudmade and used as the base layer for the map view.

The historical mapping will be requested as tiled imagery using the Web Map Service (WMS) provided by GeoServer and overlaid on the map view when needed.
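Put together, the map view’s layer setup might look roughly like this in Leaflet – the tile URL, API key and WMS layer name below are placeholders:

```javascript
// A contemporary base layer plus the historical mapping as a WMS overlay.
var base = L.tileLayer('http://{s}.tile.cloudmade.com/YOUR-API-KEY/997/256/{z}/{x}/{y}.png', {
  attribution: 'Map data © OpenStreetMap contributors'
});

var historical = L.tileLayer.wms('http://example.org/geoserver/wms', {
  layers: 'bombsight:historical_maps', // hypothetical layer name
  format: 'image/png',
  transparent: true
});

var map = L.map('map', { center: [51.5, -0.1], zoom: 12, layers: [base] });
historical.addTo(map); // switched on when the user wants the 1940s mapping
```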

Next steps…

I will post an update once we have built both prototype apps and decided which way to proceed for the final app.


Choosing an Augmented Reality library for Android

A key part of the mobile app for the Stepping Into Time project is the augmented reality (AR) feature, giving users a way of exploring the area around them in addition to using the map views. This blog post summarises the project’s requirements for this part of the app and reviews the different software development kits we could use.

Augmented Reality software allows applications to overlay additional information on top of an existing view of the landscape to provide contextual information. Typically information will be displayed depending on its geographical location in the landscape (location-based, or geo-based augmented reality), but could also be shown when the software detects a certain object (vision-based augmented reality).

For the Stepping Into Time project, we will only be using location-based AR as we are working with historical geodata in a situation where the landscape may have changed considerably since the bomb damage occurred in World War II.

A number of free software development kits (SDKs) and open source software libraries exist to assist in the development of augmented reality applications, allowing the project to leverage that work in its own mobile app. As part of the selection process, we looked into a number of different options and compared their functionality to the project requirements to see which would best meet our needs.

The main requirements for the AR SDK/library were:

  • open source software (or available for use free of charge)
  • can be embedded into a native Android app
  • presents point information (bombs, anti-invasion, images, witness statements), using different symbols for different types of point
  • ability to click on points to get more information
  • radar view of environment indicating density of bombs

When looking at the various offerings, we considered the technical features as well as a number of other aspects of each project, such as the existence of a community, the amount of documentation and support available, and the availability of equivalent functionality on other platforms such as the iPhone (iOS), for possible future expansion.

Wikitude provides an SDK for geo-based augmented reality overlays for both Android and iOS which uses standard HTML5, CSS and Javascript web technologies for displaying markers. The software is not open source, but is available free of charge for non-commercial, non-profit projects (though a startup animation will be added in this case, and a logo displayed on the camera view). The software is being actively developed, and there is a community for support if needed.
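To give a flavour of what that looks like, here is a minimal sketch using the Wikitude ARchitect Javascript API – the coordinates and image asset are placeholders, and the details may differ between SDK versions:

```javascript
// Place one image marker at a geographic location in the camera view.
// Coordinates, asset path and marker height are placeholders.
var bombLocation = new AR.GeoLocation(51.5074, -0.1278); // lat, lon
var bombMarker = new AR.ImageDrawable(new AR.ImageResource('assets/bomb.png'), 2.5, {
  onClick: function () {
    // show more detail about this bomb when tapped
  }
});
var bombObject = new AR.GeoObject(bombLocation, {
  drawables: { cam: [bombMarker] }
});
```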

Vuforia (previously known as Qualcomm QCAR) often crops up when searching for augmented reality options, but is aimed at the vision-based part of the augmented reality spectrum, recognising real-world objects (known as trackables) and overlaying 3D models on top of them. It provides Android and iOS SDKs, both free for use in commercial and non-commercial projects. This seems like the SDK to go with if you want to use image recognition, and it can even be used as an extension to Wikitude if you want to combine geo- and vision-based AR.

Mixare is an open source project for Android and iOS which provides a geo-based AR application and the code behind it. With some work it should be possible to embed the AR functionality into an existing application, but this isn’t well documented at present.

AR-Kit is a geo-based AR library for Android written by a single developer and available as open source code. There isn’t much documentation available for this project, it is limited to just Android, and it doesn’t provide a radar view of the data in the surrounding area.

ARViewer is an open source geo-based AR library for Android. One of the useful features noted about this library is the ability to use the AR interface to create content that is tagged with a location (lat, long and altitude). To use this library, the end user of the app would need to install the ARViewer app from the Android Market as well as the app we are building, which isn’t an ideal user experience.

Having looked at each of the projects, we think the best match for our requirements is currently the Wikitude SDK.

Although it would be better to find an open source library to ensure the code was always available, we feel the lack of documentation and support in these projects is a limiting factor, and due to time constraints we can’t commit to improving that situation. Wikitude’s non-profit pricing makes their offering an ideal alternative, and because they are actively building a business around their platform, the SDK should be available for some time to come.

It is envisaged that the AR library could be switched to an open source version in the future if necessary, without too much redevelopment work being needed.


A quick update on project progress

It has been a while since my last post – I finally took a summer holiday, and it was amazing – so apologies for the silence.

Lots of things have been going on, so expect a number of blog posts in the coming weeks.

  • Work on the web prototype is progressing well – we will upload a screencast to show you in the next week or so.
  • The licence with The National Archives is due to be signed; there is one small addition to be made.
  • I have appointed a mobile designer – Dan Karran – whose first job was to select an appropriate augmented reality library for the project – blog post to come.
  • We have asked a number of graphic designers to submit quotes to help develop the design of the website – watch this space for news of who is selected.
  • The data capture process is nearing completion – we are undertaking quality assurance at the moment – then I will be blogging more about what the data reveals.
  • We have developed a little app that enables you to identify locations in blocks of text so that we will hopefully be able to include some contextual data from the BBC archive of WW2 memories – a more detailed blog post to follow.
  • The National Archives became an official project partner – and I will be delivering a seminar there in July next year.

