Wednesday, June 10, 2009

Types of Models - For Business Versus For IT

I was looking through my notes about articles that I had read - and found an interesting Burton Group report entitled Generalized and Detailed Data Models: Seeking the Best of Both Worlds. (I think that it was published earlier this year.) I must admit to having been both confused and intrigued by the title. :-)

In the paper, "generalized" models are those used to define database/storage structures and to find the general themes and fundamental aspects of the data (and its values). In short, they are the data models defined by IT to effectively and efficiently use the technologies that are in place (like SQL databases). Maybe "reduced" is a better word than "generalized" ...

On the other hand, "detailed" models are those that are useful to business people. They define and describe the information requirements of the business, and its vocabularies, rules and processes. They hold the details from the business perspective. Again, maybe another word like "conceptual" is better (since even the "generalized" models hold "details") ...

What is valuable is not the titles used for these models but their semantics. :-) The key message is that a business needs both types of models and they need to stay in sync. This is really important. The conceptual/detailed models hold the real business requirements and language. They haven't been reduced to basic data values whose semantics are lost in the technology used to define and declare them.
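
To make the distinction concrete, here is a small sketch of the same fact in both styles - a generalized name/value storage structure next to a conceptual model that keeps the business term and its definition attached to the data. (This is my own toy example in Python with rdflib, not something from the Burton Group report; the table layout and vocabulary names are invented.)

```python
# Hypothetical sketch: one fact, first in a "generalized" storage model,
# then in a "detailed"/conceptual model. All names are invented.
import sqlite3

from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Generalized model: a generic name/value layout that the database handles
# efficiently, but whose business meaning lives outside the data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entity_attribute (entity_id INTEGER, attr_name TEXT, attr_value TEXT)")
db.execute("INSERT INTO entity_attribute VALUES (17, 'cr_lim', '50000')")

# Detailed/conceptual model: the business term is defined alongside the data,
# so the semantics are not reduced away by the storage structure.
BIZ = Namespace("http://example.org/business-vocabulary#")
g = Graph()
g.add((BIZ.creditLimit, RDF.type, RDF.Property))
g.add((BIZ.creditLimit, RDFS.label, Literal("Credit limit")))
g.add((BIZ.creditLimit, RDFS.comment,
       Literal("Maximum outstanding balance, in USD, approved for a customer.")))
g.add((BIZ.customer17, RDF.type, BIZ.Customer))
g.add((BIZ.customer17, BIZ.creditLimit, Literal(50000)))

print(g.serialize(format="turtle"))
```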

IMHO, a business loses information and knowledge when it only retains and works from the IT models. There is much to be gleaned from the business input and much value in keeping the business people engaged in the work. This is almost impossible once you reduce the business requirements to technology-speak.

As the report says, "do not allow generalized models to compromise your understanding of the business."

Monday, June 8, 2009

PricewaterhouseCoopers Spring Technology Forecast (Part 3)

This is the last in a series of posts summarizing the PricewaterhouseCoopers Spring Technology Forecast. I spent a lot of time on the report, since it highlights many important concepts about the Semantic Web and business.

The last featured article in the report is entitled 'A CIO's strategy for rethinking "messy BI"'. The recommendation is to use Linked Data to bring together internal and external information - to help with the "information problem". How does PwC define the "information problem"? As follows ... "there's no way traditional information systems can handle all the sources [of data], many of which are structured differently or not structured at all." This boils down to creating a shared or upper ontology for information mediation, and then using it for analysis, for helping to create a business ecosystem, and for harmonizing business logic and operating models. The two figures below illustrate these concepts.


The article includes a great quote on the information problem, why today's approaches (even metadata) are not enough, and the uses of Semantic Web technologies ... "Think of Linked Data as a type of database join that relies on contextual rules and pattern matching, not strict preset matches. As a user looks to mash up information from varied sources, Linked Data tools identify the semantics and ontologies to help the user fit the pieces together in the context of the exploration. ... Many organizations already recognize the importance of standards for metadata. What many don’t understand is that working to standardize metadata without an ontology is like teaching children to read without a dictionary. Using ontologies to organize the semantic rationalization of the data that flow between business partners is a process improvement over electronic data interchange (EDI) rationalization because it focuses on concepts and metadata, not individual data elements, such as columns in a relational database management system. The ontological approach also keeps the CIO’s office from being dragged into business-unit technical details and squabbling about terms. And linking your ontology to a business partner’s ontology exposes the context semantics that data definitions lack."
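
To ground that idea, here is a minimal sketch (again my own, in Python with rdflib) of the mediation pattern the article describes: each partner keeps its own vocabulary, the local terms are mapped onto a shared ontology, and one query against the shared concept reaches both sources. Every namespace and term below is hypothetical.

```python
# Hedged sketch of ontology-based mediation between two partners' data.
from rdflib import Graph, Literal, Namespace, RDFS

SHARED = Namespace("http://example.org/shared#")   # shared/upper ontology
ACME = Namespace("http://example.org/acme#")       # partner A's vocabulary
BOLT = Namespace("http://example.org/bolt#")       # partner B's vocabulary

g = Graph()

# Data as each partner publishes it, in its own terms.
g.add((ACME.order42, ACME.shipTo, Literal("Houston")))
g.add((BOLT.po99, BOLT.deliveryCity, Literal("Calgary")))

# The mediation layer: both local terms specialize one shared concept.
g.add((ACME.shipTo, RDFS.subPropertyOf, SHARED.destination))
g.add((BOLT.deliveryCity, RDFS.subPropertyOf, SHARED.destination))

# One query phrased against the shared ontology reaches both sources,
# without rewriting either partner's data.
q = """
    PREFIX shared: <http://example.org/shared#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?order ?city WHERE {
        ?p rdfs:subPropertyOf* shared:destination .
        ?order ?p ?city .
    }"""
for row in g.query(q):
    print(row.order, row.city)
```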

PwC suggests taking two (non-exclusive) approaches to "explore" the Semantic Web and Linked Data:

  • Add the dimension of semantics and ontologies to existing, internal data warehouses and data stores
  • Provide tools to help users get at both internal and external Linked Data (see the sketch just below)
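
For the second approach, here is a toy illustration (mine, not PwC's) of what getting at internal and external Linked Data together can look like with rdflib: an internal fact and an externally published description land in one graph that can then be explored as a whole. The internal vocabulary is invented, and it is an assumption that the DBpedia URI still dereferences to RDF at run time.

```python
# Toy sketch: merge an internal fact with external Linked Data in one graph.
from rdflib import Graph, Literal, Namespace

INTERNAL = Namespace("http://example.org/internal#")  # invented vocabulary

g = Graph()

# An internal fact, in the company's own terms.
g.add((INTERNAL.account7, INTERNAL.tradesAs, Literal("IBM")))

# External Linked Data: dereferencing the URI should return RDF that rdflib
# parses straight into the same graph (assuming the endpoint cooperates).
g.parse("http://dbpedia.org/resource/IBM")

print(len(g), "triples after merging internal and external data")
```
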
And, as with the previous posts, I want to finish with a quote from one of the interviews in the report. This quote comes from Frank Chum of Chevron, and discusses why they are now looking to the Semantic Web and ontologies to advance their business. "Four things are going on here. First, the Semantic Web lets you be more expressive in the business logic, to add more contextual meaning. Second, it lets you be more flexible, so that you don’t have to have everything fully specified before you start building. Then, third, it allows you to do inferencing, so that you can perform discovery on the basis of rules and axioms. Fourth, it improves the interoperability of systems, which allows you to share across the spectrum of the business ecosystem. With all of these, the Semantic Web becomes a very significant piece of technology so that we can probably solve some of the problems we couldn’t solve before. One could consider these enhanced capabilities [from Semantic Web technology] as a “souped up” BI [business intelligence]."

Wednesday, June 3, 2009

PricewaterhouseCoopers Spring Technology Forecast (Part 2)

This post continues the review and summarization of PwC's Spring Technology Forecast, focused on the Semantic Web.

The second featured article is Making Semantic Web connections. It discusses the business value of using Linked Data, and includes interesting information from a CEO survey about information gaps (and how the Semantic Web can address these gaps). The article argues that to get adequate information, the business must better utilize its own internal data, as well as data from external sources (such as information from members of the business's ecosystem or the Web). This is depicted in the following two figures from the article ...

I also want to include some quotes from the article - especially since they support what I said in an earlier blog from my days at Microsoft, Question on what "policy-based business" means ... :-)
  • Data aren’t created in a vacuum. Data are created or acquired as part of the business processes that define an enterprise. And business processes are driven by the enterprise business model and business strategy, goals, and objectives. These are expressed in natural language, which can be descriptive and persuasive but also can create ambiguities. The nomenclature comprising the natural language used to describe the business, to design and execute business processes, and to define data elements is often left out of enterprise discussions of performance management and performance improvement.
  • ... ontologies can become a vehicle for the deeper collaboration that needs to occur between business units and IT departments. In fact, the success of Linked Data within a business context will depend on the involvement of the business units. The people in the business units are the best people to describe the domain ontology they’re responsible for.
  • Traditional integration methods manage the data problem one piece at a time. It is expensive, prone to error, and doesn’t scale. Metadata management gets companies partway there by exploring the definitions, but it still doesn’t reach the level of shared semantics defined in the context of the extended virtual enterprise. Linked Data offers the most value. It creates a context that allows companies to compare their semantics, to decide where to agree on semantics, and to select where to retain distinctive semantics because it creates competitive advantage.
As in my last post, I want to reinforce the message and include a quote from one of the interviews. This one comes from Uche Ogbuji of Zepheira ... "... it’s not a matter of top down. It’s modeling from the bottom up. The method is that you want to record as much agreement as you can. You also record the disagreements, but you let them go as long as they’re recorded. You don’t try to hammer them down. In traditional modeling, global consistency of the model is paramount. The semantic technology idea turns that completely on its head, and basically the idea is that global consistency would be great. Everyone would love that, but the reality is that there’s not even global consistency in what people are carrying around in their brains, so there’s no way that that’s going to reflect into the computer. You’re always going to have difficulties and mismatches, and, again, it will turn into a war, because people will realize the political weight of the decisions that are being made. There’s no scope for disagreement in the traditional top-down model. With the bottom-up modeling approach you still have the disagreements, but what you do is you record them."

And, yes, I did say something similar to this in an earlier post on Semantic Web and Business. (Thumbs up :-)

Tuesday, June 2, 2009

PricewaterhouseCoopers Spring Technology Forecast (Part 1)

In an earlier post, I mentioned PricewaterhouseCoopers' Spring Technology Forecast and its discussion of the Semantic Web in business. In this and the following posts, I want to overview and highlight several of the articles. Let's start with the first featured article ...

Spinning a data Web overviews the technologies of the Semantic Web, and discusses how businesses can benefit from developing domain ontologies and then using them to mediate, integrate, and query across both internal and external data. The value of mediation is summarized in the following figure ...

I like this, since I said something similar in my post on the Semantic Web and Business.

Backing up this thesis, Tom Scott of BBC Earth provided a supporting quote in his interview, Traversing the Giant Global Graph. "... when you start getting either very large volumes or very heterogeneous data sets, then for all intents and purposes, it is impossible for any one person to try to structure that information. It just becomes too big a problem. For one, you don’t have the domain knowledge to do that job. It’s intellectually too difficult. But you can say to each domain expert, model your domain of knowledge— the ontology—and publish the model in the way that both users and machine can interface with it. Once you do that, then you need a way to manage the shared vocabulary by which you describe things, so that when I say “chair,” you know what I mean. When you do that, then you have a way in which enterprises can join this information, without any one person being responsible for the entire model. After this is in place, anyone else can come across that information and follow the graph to extract the data they’re interested in. And that seems to me to be a sane, sensible, central way of handling it."
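
As a small footnote to Scott's point, here is a toy sketch of what "model your domain of knowledge and publish the model" can look like: a domain expert captures a few terms as an RDFS vocabulary and writes it out as Turtle for people and machines to pick up. The vocabulary below is invented for illustration (Python with rdflib).

```python
# Toy domain model published as Turtle. All names are invented.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

WILDLIFE = Namespace("http://example.org/wildlife#")

g = Graph()
g.bind("wildlife", WILDLIFE)

# Classes the domain expert is responsible for, with human-readable labels.
g.add((WILDLIFE.Species, RDF.type, RDFS.Class))
g.add((WILDLIFE.Species, RDFS.label, Literal("Species")))
g.add((WILDLIFE.Habitat, RDF.type, RDFS.Class))
g.add((WILDLIFE.Habitat, RDFS.label, Literal("Habitat")))

# A property tying the two together.
g.add((WILDLIFE.livesIn, RDF.type, RDF.Property))
g.add((WILDLIFE.livesIn, RDFS.domain, WILDLIFE.Species))
g.add((WILDLIFE.livesIn, RDFS.range, WILDLIFE.Habitat))

# "Publish": serialize to a file that others can download or dereference.
g.serialize(destination="wildlife.ttl", format="turtle")
```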