Thursday, May 19, 2011

Shameless plug for my session at SemTech 2011

I will be speaking in a few weeks at SemTech 2011 (actually on Tuesday afternoon, June 7th, from 1:40 to 2:30pm). The meeting is in San Francisco, and you can register at the SemTech website. I can even provide a 15% discount code (SPK15).

So, what am I talking about (actually, demoing!)? My passions: processing natural language text to extract business vocabulary and rules, and (of course) semantic technologies.

Here are a couple of high points from the presentation (I won't repost the abstract, since you can read that online):

I really encourage attendance at this conference, as I strongly believe that semantic technologies offer great insights and capabilities for business.

Andrea

Monday, May 2, 2011

Finishing Off the NIST Access Control Survey

I am finally getting a chance to finish the analysis of the NIST survey on access control methods. My apologies ... Somehow, the month of April got away from me ...

The last control model is Risk Adaptive Access Control or RAdAC. It is a combination of attribute and policy-based control (on steroids) with heuristics and machine learning. That last part is what makes it unique, challenging and very exciting.

Saying that attributes include environmental conditions does not seem like rocket-science, but just common sense. It is like saying that no one has permission to enter a building unless they are already identified to the security system. That works great until you need to override the policy because the building is on fire (and the firemen are definitely not already identified). So, including data on the environment (in the set of attributes to be assessed) is prudent.
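
To make that concrete, here is a minimal sketch (in Python, with invented attribute names) of how an environmental attribute like a fire alarm might be folded into an otherwise ordinary attribute check:

    # Hypothetical sketch: an attribute-based check that also considers
    # environmental attributes, so an emergency can override the usual rule.
    def building_access_allowed(subject, environment):
        # Normal rule: only people already identified to the security system get in
        identified = subject.get("identified_to_security_system", False)
        # Environmental override: during a fire alarm, responders are admitted
        # even though they are not pre-registered
        if environment.get("fire_alarm") and subject.get("role") == "firefighter":
            return True
        return identified

    print(building_access_allowed({"identified_to_security_system": True}, {}))    # True
    print(building_access_allowed({"role": "firefighter"}, {"fire_alarm": True}))  # True
    print(building_access_allowed({"role": "visitor"}, {"fire_alarm": True}))      # False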

Next, saying that policy can modify existing rules (making them more lax or strict, or modifying them to co-exist - i.e., de-conflicting them) is "meta-policy" (policy about policy). To me, this is just policy-based management - but the targets of the policy are rules themselves.
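
And a similarly hypothetical sketch of meta-policy, where the targets of a rule are other rules whose strictness gets adjusted (all names and thresholds are made up):

    # Hypothetical sketch of "policy about policy": the targets of the
    # meta-rule are ordinary access rules, whose strictness it adjusts.
    rules = {
        "remote_login": {"min_auth_level": 2},
        "payroll_read": {"min_auth_level": 3},
    }

    def apply_meta_policy(rules, threat_level):
        adjusted = {name: dict(rule) for name, rule in rules.items()}
        if threat_level == "elevated":
            # Tighten every rule by one authentication level
            for rule in adjusted.values():
                rule["min_auth_level"] += 1
        elif threat_level == "low":
            # Relax the remote-login rule, but never below level 1
            adjusted["remote_login"]["min_auth_level"] = max(
                1, adjusted["remote_login"]["min_auth_level"] - 1)
        return adjusted

    print(apply_meta_policy(rules, "elevated"))
    # {'remote_login': {'min_auth_level': 3}, 'payroll_read': {'min_auth_level': 4}}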

However, the fascinating bit comes into play when the NIST authors discuss taking "a probabilistic, heuristic approach to determine whether the access should be granted ... The heuristics include a historical record of access control decisions and machine learning. This means that a RAdAC system will use previous decisions as one input when determining whether access will be granted to a resource in the future." I would actually expand that last sentence a bit to say "use previous decisions with insider/outsider threat analysis".
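
A rough, purely illustrative sketch of how that historical record might feed a risk-adaptive decision (the denial-rate heuristic below is a stand-in for real machine learning and threat analysis):

    # Purely illustrative: combine the policy verdict with a risk score
    # derived from the historical record of previous access decisions.
    def risk_score(history, subject_id):
        past = [h for h in history if h["subject"] == subject_id]
        if not past:
            return 0.5                     # unknown subject: moderate risk
        denials = sum(1 for h in past if not h["granted"])
        return denials / len(past)         # crude heuristic: past denial rate

    def radac_decision(policy_allows, history, subject_id, risk_threshold=0.3):
        return policy_allows and risk_score(history, subject_id) <= risk_threshold

    history = [
        {"subject": "alice", "granted": True},
        {"subject": "alice", "granted": True},
        {"subject": "bob",   "granted": False},
        {"subject": "bob",   "granted": True},
    ]
    print(radac_decision(True, history, "alice"))  # True  (denial rate 0.0)
    print(radac_decision(True, history, "bob"))    # False (denial rate 0.5 > 0.3)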

Do IT systems have what is needed to capture and analyze this information today? I believe that they do. We have cheap storage that can hold extensive log data, sophisticated sensor/management hardware and software, and advanced pattern recognition and analysis software. What we need is more experience and research into the heuristics and strategies to effectively utilize this information, hardware and software.

The NIST paper goes on to highlight the obstacles to overcome to achieve RAdAC. I want to just briefly note and comment on them here:
  • Integration of a wide variety of systems and data - Which is an area where semantic technologies would be very useful (something that I might have said before)
  • Unambiguous definition of digital policies - I would again encourage investigating and building on semantic technologies, such as the Institute for Human and Machine Cognition's KAoS ontology and framework
  • Trustworthy sources of user and environment information
  • Research into machine learning, genetic algorithms and heuristics - Which is discussed above, and ...
  • A broad swath of non-technical challenges - such as the liabilities associated with a security breach made by an automated entity
Although I have worked on policy-based management for many years, I still worry that automated policy will just allow us to make errors more quickly. So, to NIST's list of obstacles I want to add the need to improve testing, test beds and simulation.
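
As a trivial example of what I mean by improved testing, a policy decision point could be run against a suite of expected decisions before any rule change is deployed (the decide function below is just a placeholder for a real PDP):

    # Placeholder policy decision point (PDP) plus a tiny regression suite
    # that would run before any new or modified rules are deployed.
    def decide(subject, resource, environment):
        if environment.get("fire_alarm") and subject.get("role") == "firefighter":
            return "permit"
        if subject.get("clearance", 0) >= resource.get("sensitivity", 0):
            return "permit"
        return "deny"

    TEST_CASES = [
        ({"clearance": 3}, {"sensitivity": 2}, {}, "permit"),
        ({"clearance": 1}, {"sensitivity": 2}, {}, "deny"),
        ({"role": "firefighter"}, {"sensitivity": 5}, {"fire_alarm": True}, "permit"),
    ]

    for subject, resource, environment, expected in TEST_CASES:
        actual = decide(subject, resource, environment)
        assert actual == expected, (subject, resource, environment, actual)
    print("all policy test cases passed")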

Andrea

Thursday, March 31, 2011

More on the topic of NIST and Access Control

Continuing the discussion from my earlier post, NIST and Access Control, I disagree with some of the distinctions between Attribute and Policy Based Access Control (ABAC and PBAC). Most of this disagreement is really more about where attribute- and policy-based controls begin and end, and their characteristics, than about the concepts themselves.

In the report surveying access control models, NIST indicated that "one limitation of the ABAC model is that ... there can be disparate attributes and access control mechanisms among the organizational units. It is often necessary to harmonize access control across the enterprise." This distinction - specific silos of attribute-based control versus more enterprise-coordinated (but still attribute-based) control - is the distinction the report draws between ABAC and PBAC. I consider this all to be policy-based access control, where the scope of "harmonization" varies (perhaps by conscious decision or priority of implementation).

It seems reasonable that an organization would work to fully harmonize its access to shared data or resources. That word, "shared", is important. For data and attributes specific to one organizational unit (or domain or application), it is reasonable to have individual policies (i.e., there is nothing to harmonize to). However, once data is shared across applications and org units, there is a need to harmonize - since subjects could be granted or denied access to the same data or resources by different units or applications.

So, there is a need to consider attribute-based control in specific areas AND in the enterprise as a whole. And, there is a need to be consistent with respect to the attributes and the policies. This seems true whether you call this ABAC or PBAC.

The NIST report goes on to say that PBAC "requires not only complicated application-level logic to determine access based on attributes, but also a mechanism to specify policy rules in unambiguous terms." This is not controversial, but I would clarify the term "application-level" in "application-level logic".

I agree that you need "guards" related to an application's need for and use of policy. However, you can accomplish this by intercepting application/user attempts to access resources and then applying policy. Of course, you likely also have to map the application details into the attributes of your policies. This is where Semantic Web concepts (such as OWL's SubClassOf, EquivalentClasses, DisjointClasses, etc.) come into play. As for the "guards" themselves, there is a good example in the KAoS Guards implemented in the framework from the Florida Institute for Human and Machine Cognition (IHMC).
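
For illustration only, here is a small sketch in Python using rdflib, with invented URIs, of mapping an application's local terms onto a shared policy vocabulary via OWL and RDFS axioms; a real deployment would run an OWL reasoner (such as owlrl) over the merged graphs rather than walking the triples by hand:

    # Hypothetical sketch (rdflib, invented URIs): relate an application's
    # local vocabulary to the shared policy vocabulary with OWL/RDFS axioms.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDFS

    APP    = Namespace("http://example.org/app#")
    POLICY = Namespace("http://example.org/policy#")

    g = Graph()
    # The application's "Emp" class means the same as the policy vocabulary's
    # Employee; its "Contractor" class is a kind of policy ExternalParty.
    g.add((APP.Emp,        OWL.equivalentClass, POLICY.Employee))
    g.add((APP.Contractor, RDFS.subClassOf,     POLICY.ExternalParty))

    def policy_terms_for(app_class):
        # Walk the mapping triples by hand; a real system would instead run
        # an OWL reasoner (e.g., owlrl) over the merged application and
        # policy graphs.
        terms = set(g.objects(app_class, OWL.equivalentClass))
        terms.update(g.objects(app_class, RDFS.subClassOf))
        return terms

    print(policy_terms_for(APP.Emp))         # the shared Employee term
    print(policy_terms_for(APP.Contractor))  # the shared ExternalParty term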

The second part of the quoted sentence talks about the need to specify policies in "unambiguous terms". This was one of the key points of my earlier post - one where we need end-user and government/organizational participation. BTW, I will be so bold as to also endorse the ontology work of the KAoS team as a basis for both access control (authorization) and obligation policies (see the entity rendering of their OWL ontology at http://ontology.ihmc.us/policy.html).

So, where do I start to disagree a bit more with the NIST position? It is in the area of an "Authoritative Attribute Source" ("the one source of attribute data that is authorized by the organization"). I cannot argue with taking this as a first line goal. Fred Wettling also pointed this out in his comments to my earlier post ... that there needs to be "a rethinking of the hub and spoke models of access control of the past. This raises the question of where identity-based attributes are stored and how/when they are validated. Can peer-to-peer communications rely on attribute assertions from the peers? In some cases yes. In other cases, third-party validation may be required. In both cases a centralized infrastructure may NOT exist. That is a challenge that may be addressable through some level of standardization of vocabulary, policy, and rule structures."

While I agree that one authorized attribute source is required in some scenarios, it is not mandated in all scenarios or even required for PBAC. As noted above, what is needed is a common understanding of the semantics of the attributes and their "qualities" (including where and how they are obtained). This can be managed and mitigated by semantic alignment and by policies about the data itself.
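
Here is one (made-up) way to picture "policies about the data itself": each attribute assertion carries its source, and a data policy states which sources are acceptable for which attributes, with third-party validation as the fallback:

    # Made-up illustration: each attribute assertion records its source, and
    # a policy about the data says which sources are trusted for which
    # attributes; anything else needs third-party validation.
    TRUSTED_SOURCES = {
        "clearance":  {"hr_directory"},
        "department": {"hr_directory", "project_registry"},
    }

    def attribute_usable(assertion, validated_by_third_party=False):
        trusted = TRUSTED_SOURCES.get(assertion["attribute"], set())
        return assertion["source"] in trusted or validated_by_third_party

    hr_assertion   = {"attribute": "clearance", "value": 3, "source": "hr_directory"}
    peer_assertion = {"attribute": "clearance", "value": 3, "source": "peer_device"}

    print(attribute_usable(hr_assertion))                                   # True
    print(attribute_usable(peer_assertion))                                 # False
    print(attribute_usable(peer_assertion, validated_by_third_party=True))  # True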

Just some more food for thought in the continuing dialog about access control and policy-based security/management/...

Next time, I promise to finally write about Risk Adaptive Access Control.

Andrea

Monday, March 21, 2011

Dialog on "NIST and Access Control", my previous post

I received a notification that the following comment was posted by Fred Wettling (from Bechtel) ... but it does not appear on the web site. So, I am re-posting it as a full entry and then adding a bit of dialog of my own.

My apologies to Fred that the site is not working quite right, and my appreciation for the additional insights! Here is Fred's post ....
Good post, Andrea ;-)

Here are a few initial thoughts ... The overall NIST model can be viewed as moving from topology-based control (network boundaries) to policy-based control. RBAC, ABAC, PBAC... appear to be the evolution of how policies are instantiated and the level of granularity required for the target level of security. ACLs, firewall rules, and other topology-based controls will be around for a while and continue to serve as a coarse-grained access control for many hosted services within companies and also cloud providers that have requirements for zone-based controls.

But topologies are changing from at least three perspectives that must be addressed in tomorrow's security.
1. Resources and information are becoming more distributed as we move from a single source of access (monolithic applications) toward a "single source of truth" with mash-ups securely accessing and aggregating multiple authoritative information sources.
2. More distributed information sources including peer-to-peer communications.
3. Location-based services have an implication of geographic or physical context that may be relevant in security decisions.

There's also a needed (and implied) shift in WHERE security is applied. The trend must be toward the target resource or information to operate in the expanding Internet of Things. The implication of these topology-related trends will require changes to how information and resources are secured… and a rethinking of the hub and spoke models of access control of the past. This raises the question of where identity-based attributes are stored and how/when they are validated. Can peer-to-peer communications rely on attribute assertions from the peers? In some cases yes. In other cases, third-party validation may be required. In both cases a centralized infrastructure may NOT exist. That is a challenge that may be addressable through some level of standardization of vocabulary, policy, and rule structures that you mentioned. The challenge, as usual, is frequent vendor perception that product differentiation is achieved through proprietary technology.

I think there needs to be some industry awakening about the value of standard & policy-based access control. There is high-value to organizations to have common policy definitions accessible by PDPs and PEPs provided by multiple vendors.
1. DMTF has done some great work collaborating with others in standardizing policy models.
2. Before it merged with the Open Group in 2007, the Network Application Consortium (NAC) published the Enterprise Security Architecture. Link: https://www2.opengroup.org/ogsys/jsp/publications/PublicationDetails.jsp?catalogno=h071. A team has been established to update the 115 page doc. It has some great content that is relevant to this thread.

NIST could certainly help in pushing this work forward and promoting policy standardization in other standards organizations, including the two mentioned above. Unification would be a good thing.

Fred Wettling
Bechtel Fellow
And, here are my replies:

1. As for ACLs being around for a while, I totally believe that. However, I am not sure that they will be around out of necessity; rather, they will persist because of legacy systems and the reluctance of vendors to move away from tested (and costly) proprietary technologies. I think that we can move fully to PBAC or RAdAC policies, at least at the declarative level. Let me explain ... If hardware and software products understand a "standard" policy ontology and vocabulary, then they can implement/act on declarative policy, as opposed to processing device/OS/software-specific ACLs. Now, a device may only be capable of coarse-grained control, or that may be the appropriate implementation for certain environments (for example, "zone-based control" in a cloud), where the fine-grained details are not (and should not be) known. But a different take on this, rather than requiring an ACL, is to say that a "standard" rule is translated to coarse-grained protection as part of the PBAC/RAdAC processing. Even in the worst case of using only ACLs, the policy infrastructure can relate each ACL to the declarative policy that it was designed to implement. Traceability is a wonderful thing!
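
To illustrate the traceability point, a toy sketch of compiling a declarative rule into coarse-grained ACL entries that each carry a link back to the policy they implement (the rule format, groups and subnets are all invented):

    # Toy sketch: compile a declarative rule into device-level ACL entries,
    # each carrying a traceability link back to its source policy.
    declarative_rule = {
        "id": "policy-042",
        "effect": "permit",
        "subjects": {"group": "network-admins"},
        "resources": {"zone": "management-vlan"},
    }

    # Coarse-grained device view (invented): group members and zone subnets
    GROUP_MEMBERS = {"network-admins": ["10.1.1.5", "10.1.1.6"]}
    ZONE_SUBNETS  = {"management-vlan": "10.9.0.0/16"}

    def compile_to_acl(rule):
        entries = []
        for address in GROUP_MEMBERS[rule["subjects"]["group"]]:
            entries.append({
                "action": rule["effect"],
                "src": address,
                "dst": ZONE_SUBNETS[rule["resources"]["zone"]],
                "derived_from": rule["id"],   # the traceability link
            })
        return entries

    for entry in compile_to_acl(declarative_rule):
        print(entry)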

2. It is noted that a centralized infrastructure may not exist. I agree, but I also want to allow for centralized policy declaration (for example, from federal, state or local governments, or from a Governance/Compliance Policy-Setting Body in an enterprise) alongside decentralized, local policy definition and override. On the distribution side, it is necessary to disseminate the broad-coverage policies (such as legislation), as well as the rules for how/when/by whom they can be modified. Does this mean "centralized" infrastructure? Not necessarily, but there may/should be "standard" policy repositories. Regarding policy decision and enforcement, there may be security aspects that can/should be processed locally, others that require third-party validation, some that rely on sophisticated, controlled policy decision point analysis, etc. ... It eventually comes down to policy on policy. But, in any/all cases, security needs to work when offline/disconnected.

3. Fred lists work by the DMTF and NAC. The NAC paper is very insightful and I encourage people to read it. As for the DMTF, they have indeed pushed the boundaries of policy-based management, although I find their model to be complex (due to the use of a query language in defining policy conditions and actions, and the disconnected instantiation of the individual rule components). Also, CIM (the DMTF's Common Information Model) requires extension to address all the necessary domain concepts and vocabularies, which will not be easy.

4. I was a bit unclear in my reference to NIST and helping with "practices and standards". One of my NIST colleagues pointed out that they do not set industry standards, but support the development of "open, international, consensus-based" ones. In addition, they develop guidance for Federal agencies. This was what I meant, although I did not use enough words. :-) Federal, state, and local governments are the ones that write legislation/statutes. But, NIST could encourage government agencies to publish their policies in a standard format, and that would help industry and promote competition. This was indeed my goal in mentioning NIST - not for them to create a standard, but to help drive one and ensure its utilization by Federal agencies. NIST is in a unique position to help because they are not a policy-setting agency. And, as Fred noted above "the challenge ... is frequent vendor perception that product differentiation is achieved through proprietary technology."

Well, I'm sure that this is too long already. Look for more in the next few days.

Andrea

Friday, March 18, 2011

NIST and Access Control

I ran across an excellent paper from NIST (the US's National Institute of Standards and Technology), A Survey of Access Control Methods. The document is a component of the publication, "A Report on the Privilege (Access) Management Workshop". I highly recommend reading it, since the security landscape is evolving ... as the technology, online information, regulations/legislation, and "need to share" requirements of a modern, agile enterprise keep expanding.

Access control is discussed from the hard-core (and painfully detailed) ACL approach (access control lists) all the way through policy and risk-adaptive control (PBAC and RAdAC). Here is a useful image from the document, showing the evolution:

[Image from the NIST document: the evolution of access control, from ACLs and RBAC through ABAC and PBAC to RAdAC]

Reading the paper triggered some visceral reactions on my part ... For example, I strongly feel that role-based access control is no longer adequate for the real world. Yet, it is where most of us live today.

The problem is the need for agility. The world is no longer only about restricting access to specific, known-in-advance entities using a one-size-fits-all-conditions analysis ("need to protect" with predefined roles) - it is also about granting the maximum access to information that is allowed ("need to share", considering the conditions under which the sharing occurs).

Here are some examples ... Firefighters need the maximum data about the location and conditions of a fire that they can legally obtain (see my previous post, Using the Semantic Web to Fight Fires). Law enforcement personnel, at the federal, state or local levels, need all the data about suspicious activities that can be legally shared. An information worker needs to see and analyze all relevant data that is permitted (legally and within the corporate guidelines). (The word "legally" comes up a lot here ... more on that in another post.)

So, how do you accomplish this with simple roles? You can certainly build new roles that take various situational attributes into account. But how far can you go with this approach? At some point, the number of roles (variations on a theme) spirals out of control. You really need attribute-based control. As the NIST paper points out, with attributes, you don't need to know all the requesters in advance. You just need to know about the conditions of the access.
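
A toy illustration of the explosion (the attributes and the rule are invented, but the arithmetic is the point):

    # Toy arithmetic: one role per combination of situational attributes
    # explodes, while a single attribute-based rule covers the same ground.
    from itertools import product

    locations = ["onsite", "remote", "field"]
    times     = ["business_hours", "after_hours"]
    devices   = ["managed", "unmanaged"]
    incidents = ["normal", "emergency"]

    roles = ["analyst_" + "_".join(combo)
             for combo in product(locations, times, devices, incidents)]
    print(len(roles), "roles for a single job function")   # 24

    # One attribute-based rule over the same conditions
    def analyst_may_read(context):
        return (context["device"] == "managed"
                and (context["time"] == "business_hours"
                     or context["incident"] == "emergency"))

    print(analyst_may_read({"location": "field", "time": "after_hours",
                            "device": "managed", "incident": "emergency"}))  # True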

But, simply adding attribute data (data about the information being accessed, the entity accessing it, the environment where the access occurs or is needed, ...) can get quite complex. The real problem is figuring out how to harmonize and evaluate the attribute information if it is accessed from several data stores or infrastructures. Then, closely associated with that problem is the need to be consistent across an enterprise - to not allow access (under the same conditions) through one infrastructure that is disallowed by another.

Policy-based access control, the next concept in the evolution, starts to address some of these concerns. NIST describes PBAC as "a harmonization and standardization of the ABAC model at an enterprise level in support of specific governance objectives." It concerns the creation and administration of organization-wide rule sets (policies) for access control, using attribute criteria that are also semantically consistent across the enterprise.

Wow, reading that last sentence made my head hurt. :-) Let me decompose the concepts. For policy-based access control to really work, we need (IMHO, in order of implementation):

  1. A well-defined (dare I say "standard") policy/rule structure
  2. A well-understood vocabulary for the actors, resources and attributes
  3. Ability to use #1 and #2 to define access control rules
  4. Ability to analyze the rules for consistency and completeness
  5. An infrastructure to support the evaluation and enforcement of the rules (at least by transforming between local data stores and infrastructures, and the well-understood and well-defined vocabulary and policies/rules)
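
To make the first few items a bit more concrete, here is a hypothetical sketch of a common rule structure (#1), a shared vocabulary (#2), a rule expressed against them (#3), and a first, tiny step toward the consistency checking of #4 (all of the names are invented):

    # Hypothetical sketch: a common rule structure (#1), a shared vocabulary
    # (#2), a rule stated only in those terms (#3), and a first small step
    # toward consistency checking (#4).
    VOCABULARY = {
        "actors":     {"Clinician", "Auditor"},
        "resources":  {"PatientRecord"},
        "attributes": {"treatment_relationship", "record_sensitivity"},
    }

    rule = {
        "id": "rule-001",
        "effect": "permit",
        "actor": "Clinician",
        "resource": "PatientRecord",
        "condition": {"attribute": "treatment_relationship", "equals": True},
    }

    def rule_uses_known_terms(rule, vocab):
        # Reject rules that reference terms outside the shared vocabulary
        return (rule["actor"] in vocab["actors"]
                and rule["resource"] in vocab["resources"]
                and rule["condition"]["attribute"] in vocab["attributes"])

    print(rule_uses_known_terms(rule, VOCABULARY))   # True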

Some day, we will have best practices and standards for #1 and #2. Even better, we could have government-blessed renderings of the standard legislation (SOX, HIPAA, ...) using #1 and #2.

Can NIST also help with these activities? I hope that it can. In the meantime, there are some technologies, like the Semantic Web, that can help.

As you can imagine, I have lots more things to discuss about the specifics of PBAC and RAdAC in my next posts.

Andrea

Monday, January 10, 2011

Using the semantic web to fight fires

On Dec 10th, I watched a fascinating webcast from SemanticWeb.com ... "Fighting Fire with the Semantic Web". I won't repeat the content (you can watch the presentation for yourself), but I will provide some of my take-aways.

We all have environments with problems similar to those of the fire fighters in Amsterdam (except, hopefully, not quite so dangerous or life-threatening). When going to a fire, they have to:
  • Navigate the city streets (understand what else is happening with respect to road congestion, construction, etc.)
  • Assess the conditions of the building, fire and available/closest resources
  • Get different data based on the situation - for example, the relevant data for a forest fire (wind speed) is quite different than data for a structural fire (asbestos levels)
  • Determine the tasks to be done and in what order
  • Work with incomplete data
  • Use tools that are not targeted to their particular situations (such as commercial GPSs targeted for normal driver needs)
And do this all in real-time (or faster, if possible).

In the fire fighting environment, data comes from a variety of locations (emergency infrastructure databases owned by the police, construction road reports, geospatial databases, internal fire department databases and spreadsheets, ...). This all sounds similar to the daily problems that an IT or business person experiences.

So, what did the Amsterdam fire department do? They created a flexible infrastructure based on accessing raw (RDF) data (adopting Tim Berners-Lee's mantra, "raw data now"). They pre-loaded any data that was (relatively) static, and obtained other data in real-time as needed. All the information was accessed and assembled on the fly ... based on the URI of the information. (The URI was used by the back-end system to determine how the data should be displayed to the user.) The goal was to get away from specific applications with their own UIs, system and display requirements, etc. As was stated, "we already have 8 screens in the fire truck... we don't need 9".

Because the environment focused on raw data, in a generic form (RDF triples), any relevant information could be requested and displayed. Since the infrastructure based its processing on the URI, a user could definitely "follow their nose". Taking this one step further, meta-data could also be supplied with a URI and/or a piece of data, for UI/display purposes. (Interestingly enough, I was just working on UI-related meta-data for CA's Unified Service Model, stealing some great insights from our Mainframe 2.0 team. :-)
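
Here is a rough sketch of the pattern as I understood it, using Python's rdflib with invented URIs: raw data as triples, related data reached by following URIs ("follow your nose"), and display hints attached to the data rather than baked into an application:

    # Rough sketch (rdflib, invented URIs): raw data as triples, related data
    # reached by following URIs, and display hints carried with the data.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/fire#")
    g = Graph()

    # Pre-loaded, relatively static data about a building
    g.add((EX.building12, RDF.type, EX.Building))
    g.add((EX.building12, EX.hasHazard, EX.asbestos))
    g.add((EX.building12, EX.nearestHydrant, EX.hydrant7))

    # Display metadata travels with the data, not with a specific application
    g.add((EX.asbestos, RDFS.label, Literal("Asbestos present")))
    g.add((EX.asbestos, EX.displayPriority, Literal("high")))

    # "Follow your nose": start from one URI and walk to related resources
    def describe(node, depth=0):
        for _, pred, obj in g.triples((node, None, None)):
            print("  " * depth + pred.split("#")[-1], "->", obj)
            if (obj, None, None) in g:   # the object is itself described
                describe(obj, depth + 1)

    describe(EX.building12)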

So, what were the technical lessons that were highlighted?
  • Use of RDF triples allowed the data to evolve with no change to the interfaces (except to support radically new kinds of data and their display needs - which should be rare)
  • Use of raw data meant that there were no interfaces (and custom coding) with "exclusive" ways to access the data
  • Use of common tags allowed quick review of possible data and selection
  • All interfaces were data-driven ... meaning that the data/URI defined how it should be displayed
  • Use of RDF triples (and URIs) allowed a "follow your nose" approach to locating other, related data
  • What was needed could be defined by the user based on their needs (for example, what they already knew) and the situation
How cool is that? And, don't you wish that you had this yesterday?

Andrea