Monday, May 12, 2014

Ontology Summit 2014 and the communique

Ontology Summit 2014 officially concluded with the symposium on April 28-29. There were some great keynotes, summary presentations and discussions. You can see most of the slides on the Day 1 and Day 2 links, and can also check out the online, unedited Day 1 chat and Day 2 chat.

The main "output" of each Ontology Summit is a communique. This year's communique is titled Semantic Web and Big Data Meets Applied Ontology, consistent with the Summit theme. Follow the previous link to get the full document, and consider endorsing it (if you are so inclined). To endorse the communique, send an email to with the subject line: "I hereby confirm my endorsement of the OntologySummit2014 Communique" and include (at least) your name in the body of the email. Other remarks or feedback can also be included. And, I would encourage you to add your thoughts.

I want to provide a quick list of the high points of the communique (for me):
  • In the world of big data, ontologies can help with semantic integration and mapping, reduction of semantic mismatches, normalization of terms, and inference and insertion of metadata and other annotations. (There is a small sketch of term normalization via ontology mappings after this list.)
  • Development approaches that involve a heavy-weight, complete analysis of "the world" are evolving to lighter weight approaches. This can be seen in the development of ontology design patterns, the use of ontologies in Watson, and the bottom-up annotation and interlinking approaches of web/RESTful services (as "Linked Services").
  • There are some best practices that help sharing and reuse succeed (and since I drafted most of these, I am just copying them directly below :-)):
    • Wise reuse possibilities follow from knowing your project requirements. Competency questions should be used to formulate and structure the ontology requirements, as part of an agile approach. The questions help contextualize and frame areas of potential content reuse (one way to operationalize them is sketched after this list).
    • Be tactical in your formalization. Reuse content based on your needs, represent it in a way that meets your objectives, and then consider how it might be improved and reused. Clearly document your objectives so that others understand why you made the choices that you did.
    • Small ontology design patterns provide more possibilities for reuse because they have low barriers for creation and potential applicability, and offer greater focus and cohesiveness. They are likely less dependent on the original context in which they were developed.
    • Use "integrating" modules to merge the semantics of reused, individual content and design patterns.
    • Consider the reuse of classes/concepts separately from the reuse of properties, individuals and axioms. By separating these semantics (whether for linked data or ontologies) and allowing their specific reuse, it is easier to target specific content and reduce the amount of transformation and cleaning that is necessary (see the extraction sketch after this list).
    • RDF provides a basis for semantic extension (for example, by OWL and RIF). But RDF triples without these extensions may be underspecified bits of knowledge. They can help with the vocabulary aspects of work, while formalization with languages like OWL can more precisely define and constrain meaning, which allows the intended queries to be answered and supports reasoning (a short sketch of this appears after the list).
    • Provide metadata (definitions, history and any available mapping documentation) for your ontologies and schemas. It is also valuable to distinguish constraints or concepts that are definitive (mandatory to capture the semantics of the content) from ones that are specific to a domain. Domain-specific usage notes and "how-to" details for reasoning applications or data analytics are worth capturing as well. Some work in this area, such as Linked Open Vocabularies and several efforts in the Summit's Hackathon, is underway and should be supported. (A minimal example of ontology-level metadata is sketched after this list.)
    • Use a governance process for your ontologies (and it would be even better if enforced by your tooling). The process should include open consideration, comment, revision and acceptance of revisions by a community.
  • Lastly, what are some of the interesting areas of investigation? One area, certainly, is the need for tooling to better support modular ontology development, integration, and reuse. Another is support for hybrid reasoning capabilities - supporting both description logic and first-order logic reasoning, and both logical and probabilistic reasoning. Third, tooling that combines data analytic and ontological processing would be valuable to make sense of "big data", and aid in the dissemination of the resulting knowledge to users and for decision support. To truly address this last area, it may be necessary to create specialized hardware and processing algorithms to combine and process data using the graph-structured representations of ontologies.
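
Since several of the items above are easier to see in code than in prose, here are a few small sketches. All of them use Python with rdflib, and every namespace, class name and URL in them is made up for illustration; treat them as sketches of the ideas, not as anything taken from the communique itself.

First, term normalization via ontology mappings: two datasets that use different class names are routed to a single shared concept by equivalence axioms, so that one query (or an OWL reasoner) sees them uniformly.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, OWL

    # Hypothetical namespaces for two source datasets and a shared ontology
    SRC1 = Namespace("http://example.org/dataset1/")
    SRC2 = Namespace("http://example.org/dataset2/")
    ONT = Namespace("http://example.org/ontology/")

    g = Graph()

    # Instance data from two sources that use different terms for the same notion
    g.add((SRC1.rec42, RDF.type, SRC1.Client))
    g.add((SRC2.row17, RDF.type, SRC2.Customer))

    # Mapping axioms: both source classes are equivalent to one shared class
    g.add((SRC1.Client, OWL.equivalentClass, ONT.Customer))
    g.add((SRC2.Customer, OWL.equivalentClass, ONT.Customer))

    # With the mappings in place, one query finds both records as instances
    # of the normalized ONT.Customer concept
    query = """
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?record WHERE {
      ?srcClass owl:equivalentClass <http://example.org/ontology/Customer> .
      ?record a ?srcClass .
    }"""
    for row in g.query(query):
        print(row.record)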
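
Second, competency questions: a lightweight way to keep them actionable is to record each one as a SPARQL query and run it against the ontology (plus a little sample data) as an acceptance test.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS

    # Hypothetical project ontology and sample instance data
    EX = Namespace("http://example.org/project/")
    g = Graph()
    g.add((EX.Sensor, RDFS.subClassOf, EX.Device))
    g.add((EX.sensor1, RDF.type, EX.Sensor))
    g.add((EX.sensor1, EX.hasLocation, Literal("Building A")))

    # Competency question: "Which devices are located in Building A?"
    cq = """
    PREFIX ex: <http://example.org/project/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?device WHERE {
      ?cls rdfs:subClassOf* ex:Device .
      ?device a ?cls .
      ?device ex:hasLocation "Building A" .
    }"""
    results = list(g.query(cq))
    assert results, "Competency question cannot be answered - revisit the model"
    print([str(row.device) for row in results])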
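
Third, separating the reuse of classes from properties, individuals and axioms: a small extraction script can pull only the class declarations and subclass axioms out of a larger ontology, leaving everything else to be evaluated on its own terms.

    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS, OWL

    # Hypothetical ontology to reuse from; any local file or URL would work
    source = Graph()
    source.parse("http://example.org/big-domain-ontology.owl")

    subset = Graph()

    # Keep only the class declarations ...
    for cls in source.subjects(RDF.type, OWL.Class):
        subset.add((cls, RDF.type, OWL.Class))
        # ... and the subclass axioms among them
        for parent in source.objects(cls, RDFS.subClassOf):
            subset.add((cls, RDFS.subClassOf, parent))

    # Properties, individuals and other axioms are deliberately left out,
    # so their reuse can be considered (and transformed) separately
    print(subset.serialize(format="turtle"))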
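
Fourth, the RDF-versus-OWL point: a bare triple records a fact but says little about what it means. Adding a couple of axioms (a subproperty and a domain, using RDFS vocabulary that OWL reasoners also honor) lets a reasoner draw the conclusions that the intended queries depend on. This sketch assumes the owlrl package for OWL-RL inference.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL
    from owlrl import DeductiveClosure, OWLRL_Semantics

    # Hypothetical vocabulary namespace
    EX = Namespace("http://example.org/vocab/")
    g = Graph()

    # Underspecified RDF: a single triple, with no stated meaning for the property
    g.add((EX.anna, EX.supervises, EX.ben))

    # Formalization: supervises is a kind of worksWith, and anyone who
    # supervises something is an Employee
    g.add((EX.supervises, RDFS.subPropertyOf, EX.worksWith))
    g.add((EX.supervises, RDFS.domain, EX.Employee))
    g.add((EX.Employee, RDF.type, OWL.Class))

    # Materialize the entailments so the intended queries become answerable
    DeductiveClosure(OWLRL_Semantics).expand(g)

    print((EX.anna, RDF.type, EX.Employee) in g)  # True, via the domain axiom
    print((EX.anna, EX.worksWith, EX.ben) in g)   # True, via the subproperty axiom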
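
And fifth, ontology metadata: it costs very little to attach a definition, a creator, a version and a pointer to mapping documentation to the ontology header, here using Dublin Core and OWL annotation properties.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import RDF, RDFS, OWL, DCTERMS

    # Hypothetical ontology URI and metadata values
    ONT = URIRef("http://example.org/ontology/")
    g = Graph()
    g.bind("dcterms", DCTERMS)

    g.add((ONT, RDF.type, OWL.Ontology))
    g.add((ONT, RDFS.comment, Literal("Core concepts for the example domain, "
                                      "with a definition for each class.")))
    g.add((ONT, DCTERMS.creator, Literal("Example Project Team")))
    g.add((ONT, OWL.versionInfo, Literal("0.2")))
    # Pointer to external documentation of how the terms map to other vocabularies
    g.add((ONT, RDFS.seeAlso, URIRef("http://example.org/ontology/mappings.html")))

    print(g.serialize(format="turtle"))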
That's it for me, but please take a look at the communique, draw your own conclusions, and determine your own highlights.

Andrea
