
June 6, 2008

Information Sharing Standards and Enterprise Architecture

In response to the 9/11 Commission’s recommendations, the Intelligence Reform and Terrorism Prevention Act (IRTPA) of 2004 called for an Information Sharing Environment (ISE), “an approach that facilitates the sharing of terrorism information.” The act also requires the President to designate a Program Manager for the ISE and to establish an Information Sharing Council to advise both the President and the Program Manager.

The Common Terrorism Information Sharing Standards (CTISS) Program Manual provides the standards framework for the ISE. It defines two kinds of standards: functional and technical.

  • Functional standards—According to the CTISS Program Manual, these are “detailed mission descriptions, data and metadata on focused areas that use ISE business processes and information flows to share information.” From an enterprise architecture perspective, I believe these correspond to the business and information perspectives of the architecture, and would probably extend to the performance perspective as well. In other words, functional standards correlate to the three business perspectives of the Federal Enterprise Architecture. These are the standards that define our requirements: how we measure performance (for example, Balanced Scorecard), how we engineer business processes (for example, Lean Six Sigma), and how we describe information sharing requirements (for example, NIEM or U-CORE, and Information Exchange Package Descriptions).
  • Technical standards—According to the manual, these are “methods and techniques to implement information sharing capability…[for] acquiring, accessing, producing, retaining, protecting, and sharing.” From an enterprise architecture perspective, I believe these correspond to the services, technology, and security perspectives of the architecture, that is, the three technical perspectives. The technical standards include how systems will interoperate or share information (for example, J2EE, .NET), what technology standards will be employed (for example, XML, SOAP, UDDI), and how security will be assured (for example, various standards from NIST/FIPS, ISO, IEEE, and so on). A toy sketch of such an exchange message follows this list.
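
To make the exchange-standard idea concrete, here is a minimal sketch in Python of what a NIEM-style XML exchange message might look like. The namespace and element names are hypothetical placeholders of my own, not an actual IEPD or NIEM schema.

```python
# A toy NIEM-style information exchange message built with Python's
# standard library. The namespace and element names are hypothetical
# placeholders, not an actual IEPD or NIEM schema.
import xml.etree.ElementTree as ET

NS = "http://example.org/exchange/1.0"  # hypothetical namespace
ET.register_namespace("ex", NS)

def build_exchange(subject_name, report_date):
    """Assemble a toy 'information exchange package' element."""
    root = ET.Element(f"{{{NS}}}SuspiciousActivityReport")
    subject = ET.SubElement(root, f"{{{NS}}}Subject")
    ET.SubElement(subject, f"{{{NS}}}PersonFullName").text = subject_name
    ET.SubElement(root, f"{{{NS}}}ReportDate").text = report_date
    return root

msg = build_exchange("John Doe", "2008-06-06")
print(ET.tostring(msg, encoding="unicode"))
```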

What I like about CTISS is that it attempts to define a comprehensive framework for the ISE, starting from the highest level, the domains of information (such as intelligence, law enforcement, homeland security, foreign affairs, and defense), and drilling down to the security domains (SBU, Secret, and TS/SCI), reference models (FEA, DoDAF, IC EA…), standard types (metadata, data, exchange, and service), standards bodies (NIEM, W3C, OASIS…), and then the standards themselves.
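
One rough way to picture that drill-down is as a nested structure. The following Python mapping is only my own illustration, populated with the examples named above; it is not the manual’s actual layout.

```python
# A toy rendering of the CTISS drill-down, from broadest level to
# narrowest, as a Python mapping. The entries come from the examples
# above; the structure itself is illustrative only.
ctiss_framework = {
    "information domains": ["intelligence", "law enforcement",
                            "homeland security", "foreign affairs",
                            "defense"],
    "security domains": ["SBU", "Secret", "TS/SCI"],
    "reference models": ["FEA", "DoDAF", "IC EA"],
    "standard types": ["metadata", "data", "exchange", "service"],
    "standards bodies": ["NIEM", "W3C", "OASIS"],
}

for level, examples in ctiss_framework.items():
    print(f"{level}: {', '.join(examples)}")
```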

As an initial impression, I think the next step is to articulate how I actually share information with you, or you with me. Currently, we are still defining the techniques for future data sharing: developing metadata, creating data dictionaries and schemas, defining exchange standards, and establishing service standards so data can be discovered through registries. It’s like responding to someone who asks, “How do I get to your house?” by saying we need to pave roads, design and manufacture cars and buses, install traffic signs and lights, and so on. That is all infrastructure that needs to be built, but it still doesn’t tell me how to get to your house. While we are making huge progress with information sharing, we’re still at the early stage of figuring out what the infrastructure elements are. But it seems to be a running start!
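
The registry idea in particular is easy to sketch: publish a short metadata description of each data asset, then let others discover it by keyword. The minimal Python sketch below uses invented names and URLs, and stands in for what would really be a shared service such as a UDDI registry.

```python
# A minimal sketch of discovery through a metadata registry: publish a
# description of each data asset, then search those descriptions. All
# names and URLs here are invented for illustration.
registry = []  # in practice, a shared service (e.g., a UDDI registry)

def publish(name, description, schema_url):
    """Register a data asset with discoverable metadata."""
    registry.append({"name": name,
                     "description": description,
                     "schema_url": schema_url})

def discover(keyword):
    """Find assets whose metadata mentions the keyword."""
    keyword = keyword.lower()
    return [entry for entry in registry
            if keyword in entry["description"].lower()]

publish("WatchlistFeed", "Hourly terrorism watchlist updates",
        "http://example.org/schemas/watchlist.xsd")
print(discover("watchlist"))
```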



February 4, 2008

Web 3.0 and Enterprise Architecture

While Web 1.0 is viewed as an information source and Web 2.0 as participatory, Web 3.0 is envisioned as semantic (hence, the Semantic Web).

In “A Smarter Web” (MIT Technology Review, March 2007), John Borland reports that Web 3.0 will “give computers the ability—the seeming intelligence—to understand content on the World Wide Web.” The goal is to “take the web and make it…a system that can answer questions, not just get a pile of documents that might hold an answer.”

In The New York Times (November 2007), John Markoff defined Web 3.0 as “a set of technologies that offer efficient new ways to help computers organize and draw conclusions from online data.”

Not only would individuals benefit from the Semantic Web, but so would companies, which “are awash in inaccessible data on intranets, in unconnected databases, even on employees’ hard drives.” The idea is to bring that data together and make it useful.

Many of you have heard of the Dewey Decimal System for organizing information. Melvil “Dewey was no technologist, but the libraries of his time were as poorly organized as today’s Web. Books were often placed in simple alphabetical order, or even lined up by size…Dewey found this system appalling: order, he believed, made for smoother access to information.” (MIT Technology Review) In 1876, Melvil Dewey developed what became the Dewey Decimal System, “a library classification that attempts to organize all knowledge.” (Wikipedia) In the Dewey system, books on similar subject matter are co-located, aiding discovery of and access to information.

MIT Technology Review contends that, like Melvil Dewey, web browser and search engine companies such as Microsoft and Google want to help consumers locate information more efficiently.

“By the mid-1990s, the computing community as a whole was falling in love with the idea of metadata, a way of providing Web pages with computer-readable instructions or labels…metadata promised to add the missing signage. XML—the code underlying today’s complicated websites, which describes how to find and display content—emerged as one powerful variety.” The problem was that this was not a systematic way of labeling data, since each developer used “their own custom ‘tags’—as if different cities posted signs in related but mutually incomprehensible dialects.”
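
The “dialects” problem is easy to demonstrate. In the toy Python example below, two invented sites mark up the same fact with different custom tags, and a parser written against one vocabulary finds nothing in the other.

```python
# Two toy XML fragments that describe the same fact in different custom
# "dialects." A parser written for one vocabulary fails on the other.
# All tag names are invented for illustration.
import xml.etree.ElementTree as ET

site_a = "<report><author>J. Borland</author></report>"
site_b = "<article><byline>J. Borland</byline></article>"

def find_author(xml_text):
    """Look for the author using site A's vocabulary only."""
    element = ET.fromstring(xml_text).find("author")
    return element.text if element is not None else None

print(find_author(site_a))  # J. Borland
print(find_author(site_b))  # None -- same data, incomprehensible dialect
```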

In 1999, the World Wide Web Consortium (W3C) introduced the Resource Description Framework (RDF) for locating and describing information. Since then, the vision has been for “a web that computers could browse and understand much as humans do…analogous to creating detailed road signs that cars themselves could understand and upon which they could act,” independent of human action. However, obstacles remain: how do we create ontologies that busy, everyday people would actually use to relate data across the web—data that is currently described in myriad ways—so that computers can then read and understand it?
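
For a taste of what RDF’s subject-predicate-object statements look like in practice, here is a small sketch using rdflib, a real Python RDF toolkit; the example.org resource URIs are hypothetical.

```python
# A small RDF graph built with rdflib (pip install rdflib). The
# example.org URIs are hypothetical resources, not real web data.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")
g = Graph()

# Each triple is a machine-readable statement: subject, predicate, object.
alice = URIRef("http://example.org/people/alice")
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, EX.worksFor, URIRef("http://example.org/org/library")))

# Serialize as Turtle so humans can read the statements too
# (rdflib 6+ returns a string here).
print(g.serialize(format="turtle"))
```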

A second doubt about the realism of a Semantic Web is whether computers can truly understand the intricacies (or connotations) of human language. For example, can a computer realistically make sense of a word like marriage, which carries subtle distinctions of “monogamy, polygamy, same-sex relationships, and civil unions?”

Despite these perceived obstacles, many remain not just fixated on but enamored with the notion of a Semantic Web that can not only provide amazing amounts of information but also, like a human being, analyze the data holistically and provide actionable artificial intelligence (AI).

To enterprise architects, the Semantic Web (or Web 3.0) would be an incredible leap forward, enabling organizations and individuals to get more intelligence from the web, be more productive, and ultimately run more efficient and effective business processes supported by a higher order of computing. Additionally, for enterprise architects themselves, who deal with inordinate amounts of business and technical data—structured and unstructured—Web 3.0 technologies and methods for better mining and analyzing that data would be a welcome capability for advancing the discipline.

