Monday, January 10, 2011

Using the semantic web to fight fires

On Dec 10th, I watched a fascinating webcast from SemanticWeb.com ... "Fighting Fire with the Semantic Web". I won't repeat the content here (you can watch the presentation for yourself), but I will share some of my takeaways.

We all work in environments with problems similar to those of the fire fighters in Amsterdam (except, hopefully, not quite so dangerous or life-threatening). When going to a fire, they have to:
  • Navigate the city streets (understand what else is happening with respect to road congestion, construction, etc.)
  • Assess the conditions of the building, fire and available/closest resources
  • Get different data based on the situation - for example, the relevant data for a forest fire (wind speed) is quite different from the data for a structural fire (asbestos levels)
  • Determine the tasks to be done and in what order
  • Work with incomplete data
  • Use tools that are not targeted to their particular situation (such as commercial GPS units built for everyday drivers)
And they have to do all of this in real time (or faster, if possible).

In the fire fighting environment, data comes from a variety of locations (emergency infrastructure databases owned by the police, construction road reports, geospatial databases, internal fire department databases and spreadsheets, ...). This all sounds similar to the daily problems that an IT or business person experiences.
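To make that concrete: when every source speaks the same model (RDF triples), "integration" collapses into parsing each feed into one graph. Here is a minimal sketch in Python using the rdflib library; the source URLs are hypothetical stand-ins, since the actual Amsterdam feeds weren't shown in the webcast.

```python
from rdflib import Graph

# Hypothetical endpoints -- stand-ins for the real (non-public) sources.
SOURCES = [
    "http://example.org/police/road-closures.ttl",       # police infrastructure data
    "http://example.org/city/construction-reports.ttl",  # construction road reports
    "http://example.org/fire-dept/hydrants.ttl",         # internal fire department data
]

# No per-source adapters or custom interfaces: merging is just parsing
# each feed into a single graph.
merged = Graph()
for url in SOURCES:
    merged.parse(url, format="turtle")

print(f"{len(merged)} triples from {len(SOURCES)} independent sources")
```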

So, what did the Amsterdam fire department do? They created a flexible infrastructure based on accessing raw (RDF) data (adopting Tim Berners-Lee's mantra, "raw data now"). They pre-loaded any data that was (relatively) static, and obtained other data in real-time as needed. All the information was accessed and assembled on the fly ... based on the URI of the information. (The URI was used by the back-end system to determine how the data should be displayed to the user.) The goal was to get away from specific applications with their own UIs, system and display requirements, etc. As was stated, "we already have 8 screens in the fire truck... we don't need 9".
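The webcast didn't show code, but the URI-driven assembly might look something like the following sketch (Python with rdflib again). Everything here is illustrative: the vocabulary (ex:ForestFire, ex:windSpeed, ...) and the URLs are made up, and a real system would render a proper display rather than a string.

```python
from rdflib import Graph, Namespace, RDF, URIRef

EX = Namespace("http://example.org/vocab#")  # hypothetical vocabulary

static = Graph()
static.parse("http://example.org/static/buildings.ttl")  # (relatively) static data, pre-loaded once

def render(uri: str) -> str:
    """Dereference a URI for live data, then let the data itself pick the display."""
    live = Graph()
    live.parse(uri)        # real-time data, fetched on demand
    data = static + live   # merge pre-loaded and live triples on the fly

    subject = URIRef(uri)
    if (subject, RDF.type, EX.ForestFire) in data:
        return f"wind speed: {data.value(subject, EX.windSpeed)}"
    if (subject, RDF.type, EX.StructuralFire) in data:
        return f"asbestos level: {data.value(subject, EX.asbestosLevel)}"
    return "generic incident view"
```

Note how the situation-specific data needs (wind speed for a forest fire, asbestos for a structural fire) fall out of the data itself rather than being baked into separate applications with their own UIs.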

Because the environment focused on raw data, in a generic form (RDF triples), any relevant information could be requested and displayed. Since the infrastructure based its processing on the URI, a user could definitely "follow their nose". Taking this one step further, meta-data could also be supplied with a URI and/or a piece of data, for UI/display purposes. (Interestingly enough, I was just working on UI-related meta-data for CA's Unified Service Model, stealing some great insights from our Mainframe 2.0 team. :-)
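Here is roughly what "follow your nose" means in code, as a sketch under the assumption that every URI in the data is dereferenceable and returns more RDF (the linked-data convention the presentation relied on):

```python
from rdflib import Graph, URIRef

def follow_your_nose(start: str, hops: int = 2) -> Graph:
    """Dereference a URI, keep its triples, then follow the URIs it links to."""
    g = Graph()
    seen, frontier = set(), {start}
    for _ in range(hops):
        for uri in frontier - seen:
            seen.add(uri)
            try:
                g.parse(uri)   # each URI yields more RDF about itself
            except Exception:
                continue       # a dead source is just incomplete data
        # Every URI appearing as an object is a lead to more related data.
        frontier = {str(o) for o in g.objects() if isinstance(o, URIRef)}
    return g
```

Incomplete data (a recurring theme above) is handled gracefully here: a dead link costs you one source, not the whole view.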

So, what were the technical lessons that were highlighted?
  • Use of RDF triples allowed the data to evolve with no change to the interfaces (except to support radically new kinds of data and their display needs - which should be rare)
  • Use of raw data meant that there were no interfaces (and no custom coding) with "exclusive" ways to access the data
  • Use of common tags allowed quick review and selection of the available data (see the query sketch after this list)
  • All interfaces were data-driven ... meaning that the data/URI defined how it should be displayed
  • Use of RDF triples (and URIs) allowed a "follow your nose" approach to locating other, related data
  • What was needed could be defined by the user based on their needs (for example, what they already knew) and the situation
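As an example of the tag-and-query style these lessons describe, here is a sketch of a tag-driven SPARQL query (via rdflib); the ex: vocabulary, the data URL, and the "hazardous-material" tag are invented for illustration:

```python
from rdflib import Graph

g = Graph()
g.parse("http://example.org/incidents.ttl")  # hypothetical merged incident data

# The user (or the situation) supplies the tag; the query -- not custom
# code -- decides what comes back, so new kinds of tagged data need no
# interface changes.
results = g.query("""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?incident ?detail WHERE {
        ?incident ex:tag "hazardous-material" ;
                  ex:detail ?detail .
    }
""")
for incident, detail in results:
    print(incident, detail)
```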
How cool is that? And, don't you wish that you had this yesterday?

Andrea
