
November 1, 2009

Decoding Decision-Making

Decision-making is something we have to do every day as individuals and as organizations, yet often we end up making some very bad decisions and thus some costly mistakes.

Improving the decision-making process is critical to keeping us safe, sound, and stably advancing toward the achievement of our goals.

All too often decisions are made based on gut, intuition, politics, and subjective management whim. This is almost as good as flipping a coin or rolling a pair of dice.

Disciplines such as enterprise architecture planning and governance attempt to improve on the decision-making process by establishing a strategic roadmap and then guiding the organization toward the target architecture through governance boards that vet and validate decisions based on return on investment, risk mitigation, alignment to strategic business goals, and compliance to technical standards and architecture.

In essence, decisions are taken out of the realm of the “I think” or “I feel” phenomenon and into the arena of broader group analysis and true information-based decision-making.

While no decision process is perfect, the mere presence of an orderly process with “quality gates” and gatekeepers helps to mitigate reckless decisions.

“Make Better Decisions,” an article in Harvard Business Review (HBR), November 2009, states, “In recent years, decision makers in both the public and private sectors have made an astounding number of poor calls.”

This is attributed to two major drivers:

Individuals going it alone: “Decisions have generally been viewed as the prerogative of individuals-usually senior executives. The process employed, the information used, the logic relied on, have been left up to them, in something of a black box. Information goes in [quantity and quality vary], decisions come out—and who knows what happens in between.”

A non-structured decision-making process: “Decision-making has rarely been the focus of systematic analysis inside the firm. Very few organizations have ‘reengineered’ the decision. Yet there are just as many opportunities to improve decision making as to improve other processes.”

The article’s author, Thomas Davenport, who has a forthcoming book on decision-making, proposes four steps (four I’s) organizations can take to improve this process:

Identification—What decision needs to be made and which are most important?

Inventory—What are the factors or attributes for making each decision?

Intervention—What is the process, roles, and systems for decision-making?

Institutionalization—How do we establish sound decision-making on an ongoing basis through training, measurement, and process improvement?

He acknowledges that “better processes won’t guarantee better decisions, of course, but they can make them more likely.”

It is interesting that Davenport’s business management approach is so closely aligned with IT management best practices such as enterprise architecture and capital planning and investment control (CPIC). It shows that the two disciplines are in sync and moving together toward optimized decision-making.

One other point I’d like to make is that even with the best processes and intentions, organizations may stumble when it comes to decision making because they fall into various decision traps, such as: groupthink, silo-thinking and turf battles, analysis paralysis, autocratic leadership, cultures where employees fear making mistakes or where innovation is discouraged or even frowned upon, and various other dysfunctional impediments to sound decision-making.

Each of these areas could easily be a discourse in and of itself. The point, however, is that getting to better decision-making is not a simple thing that can be achieved through articulating a new process or standing up a new governance board alone.

We cannot delegate good decision-making or write a cursory business case and, voila, the decision is a good one. Rather, optimizing decision-making processes is an ongoing endeavor, not a one-time event. It requires genuine commitment, participation, transparency, and plenty of information sharing and collaboration across the enterprise.



October 20, 2009

“The Happiness Myth” and Enterprise Architecture


Recently, I was reminded of an interesting article that appeared in The Wall Street Journal (20 Dec 2007) arguing that what really matters in life is not happiness, but rather peace of mind.

Generally speaking, people “are consumed by the pursuit of happiness,” and this fact is codified in our very Declaration of Independence, which states: “that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.”

However, absolute happiness is often in conflict with the "reality on the ground".

These are some of the inherent conflicts we deal with in enterprise architecture (sort of like the Murphy's Law of EA). Typical user wants, often associated with problematic architectures, include:
  • A baseline, target, and transition plan without having to provide virtually any input or to collaborate whatsoever.
  • An architecture roadmap that they do not have to actually follow or execute on.
  • A platform for information sharing and access to information 24/7, but they also want to hoard “their information”, and keep it secure and private, on a need-to-know only basis, which they subjectively decide.
  • A structured IT governance process to ensure sound IT investments for the organization, but they also want leeway to conduct their own affairs, their way, in which they buy what they want, when they want, how they want, from whomever they want, with whatever funds they can scrounge up.
  • A requirements generation and management process that captures and aligns specific functional requirements all the way up to the organization’s strategic plan, mandates and legislation, but that they don't have to be bothered with identifying, articulating, or aligning.


The world of EA is filled with conflicting user demands and polarizing directions from users who want and expect to have it all. While EA certainly wants and strives to meet all reasonable user requirements, satisfy the user community, and “make them happy,” at some point there comes the realization that you can’t (no matter how hard you try) make everyone happy all of the time.

People want it all, want it now, and often when you give them what they want, they realize that it wasn’t “really” what they had wanted anyway.

So the way ahead is to understand and take into account your user requirements, but more importantly to do the “right” thing for the organization based on best practices, common sense, and initiatives that will truly drive improved performance and mission results.

The WSJ states: “Dad told me: ‘Life isn’t built around “fun.” It’s built around peace of mind.’ Maybe Dad sensed the paradox of happiness: those most desperate for it run a high risk of being the last to find it. That’s because they make foolish decisions. They live disorderly lives, always chasing the high of the moment.”

In User-centric EA, we don’t “chase the high of the moment” or look to satisfy each and every user whim, but rather we stay the course of developing sound IT planning and governance and enhancing organizational decision-making capabilities for our end users. EA is a discipline that ultimately strives to ensure peace of mind for the enterprise through the provision of vital "insight" and "oversight" functions.



August 30, 2009

Privacy vs. Exhibitionism

We are a nation torn between on one hand wanting our privacy safeguarded and on the other hand wanting to share ourselves openly and often on the Internet—through Social Media, e-Commerce, e-mail, and so forth.

These days, we have more information about ourselves available to others than at any time in history. We are information exhibitionists—essentially an open book—sharing virtually everything about ourselves with everybody.

Online, we have our personal profile, photos, videos, likes and dislikes, birth date, addresses, email and phone contacts, employer, resume, friends and family connections, banking information, real estate transactions, legal proceedings, tax returns, and more. We have become an open book to the world. In a sense we have become an exhibitionistic nation.

While we continue to friend, blog, tweet, and post our thoughts, feelings, and personal information online, we are shocked and dismayed when there is a violation of our privacy.

How did we get to this point—here are some major milestones on privacy (in part from MIT Technology Review--July/August 2009):

1787—“Privacy” does not appear in the Constitution, but the concept is embedded in protections such as “restrictions on quartering soldiers in private homes (Third Amendment), prohibition against unreasonable search and seizure (Fourth Amendment), prohibition against forcing a person to be a witness against himself (Fifth Amendment).”

1794—Telegraph invented

1876—Telephone invented

1890—Boston Lawyers Samuel Warren and Louis Brandeis wrote in Harvard Law Review of “the right to be let alone” and warned that invasive technologies threatened to take “what was whispered in the closet” and have it “proclaimed from the house-tops.”

1914—Federal Trade Commission Act prohibits businesses from engaging in “unfair or deceptive acts or practices”; has been extended to require companies to write privacy policies describing what they do with personal information they collect from customers and to honor these policies.

1934—Federal Communications Act limits government wiretapping

1969—ARPANet (precursor to Internet) went live

1970—Fair Credit Reporting Act regulates the collection, dissemination, and use of consumer information, including credit information

1971—First e-mail sent.

1973—Code of Fair Information Practices limits secret data banks, requires that organizations ensure they are reliable and protected from unauthorized access, provides for individuals to be able to view their records and correct errors.

1974—Privacy Act prohibits disclosure of personally identifiable information by federal agencies.

1988—Video Privacy Protection Act protects against disclosure of video rentals and sales.

1996—Health Insurance Portability and Accountability Act (HIPAA) protects against disclosures by health care providers.

1999—Scott McNealy, CEO of Sun Microsystems, states: “You have zero privacy anyway. Get over it.”

2000—Children’s Online Privacy Protection Act prohibits intentional collection of information from children 12 or younger

2001—USA Patriot Act expands government’s power to investigate suspected terrorism acts

2003—Do Not Call Implementation Act limits telemarketing calls

2006—Google Docs released for creating and editing documents online

2009—Facebook becomes the 4th most popular website in the world

As anyone can see, there is quite a lot of history to protecting privacy. Obviously, we want to be protected. We need to feel secure. We fear our information being misused, exploited, or otherwise getting out of our control.

Yet, as technology progresses, the power of information sharing, collaboration, and online access is as endlessly enticing as it is useful, convenient, and entertaining. We love to go online and communicate with people near and far, conduct e-commerce for any product nearly seamlessly, and work more and more productively and creatively.

The dichotomy between privacy and exhibitionism is strong and disturbing. How do we ensure privacy when we insist on openness?

First, let me say that I believe the issue here is greater than the somewhat simplistic answers that are currently out there. Obviously, we must rely on common sense + technology.

From a common sense perspective, we need to personally safeguard truly private information—social security numbers and mother’s maiden name are just the obvious. We need to be concerned not only about distinct pieces of information, but about information in the aggregate. In other words, individual pieces of information may not be easily exploitable, but when aggregated with other publicly available information—you may now be truly exposed.

In terms of technology, we need to invest more time, money, and effort into securing our systems and networks. Unfortunately, businesses are more concerned with quarterly revenue and profit targets than with securing our personal information. We have got to incentivize every business, organization, and government entity to put security and privacy first. Just like we teach our children “safety first,” we need to change our adult priorities as well or risk serious harm to ourselves and our nation from cyber criminals, terrorists, and hostile nation states.

But the real issue is, why do we continue to treat technology as if it is more secure and private than it truly is? In a sense, we shut our eyes to the dangers that we know are lurking and tell ourselves “it only happens to somebody else.” How do we temper our enthusiasm for technological progress with a realistic recognition of the very real dangers that persist?



August 5, 2009

How To Use Social Media Strategically


This is an outstanding three-minute video on social media from the General Services Administration (GSA) and HowCast.com.

The video provides six "how-to" steps for implementing social media for the purposes of collaboration, information sharing, information exchange, keeping pace with fast-moving events in real time, and harnessing the collective ingenuity of the public to support the mission.

As the video states, "The key is to focus on the organization's goals."



July 12, 2009

Information Management Framework

The Information Management Framework (IMF) provides a holistic view of the categories and components of effective information architecture.

These categories include the following:

Information-sharing--Enable information sharing by ensuring that information is visible, accessible, understandable, and interoperable throughout the enterprise and with external partners.

Efficiency--Improve mission efficiency by ensuring that information is requirements-based, non-duplicative, timely, and trusted.

Quality--Promote information quality, making certain that information provided to users is valid, consistent, and comprehensive.

Compliance--Achieve compliance with legislation and policy providing for privacy, freedom of information, and records management.

Security--Protect information assets and ensure their confidentiality, integrity, and availability.

All areas of the framework must be managed as part of effective information architecture.


June 7, 2009

Digital Object Architecture and Internet 2.0

There is an interesting interview in Government Executive, 18 May 2009, with Robert Kahn, one of the founders of the Internet.

In this interview Mr. Kahn introduces a vision for an Internet 2.0 (my term) based on Digital Object Architecture (DOA), where the architecture focus is not on the efficiency of moving information around on the network (i.e., packet transport via TCP/IP), but rather on the broader notion of information management and on the architecture of the information itself.

The article states that Mr. Kahn “still harbors a vision for how the Internet could be used to manage information, not just move packets of information” from place to place.

In DOA, “the key element of the architecture is the ‘digital element’ or structured information that incorporates a unique identifier and which can be parsed by any machine that knows how digital objects are structured. So I can take a digital object and store it on this machine, move it somewhere else, or preserve it for a long time.”

I liked the comparison to electronic files:

“A digital object doesn’t become a digital object any more than a file becomes a file if it doesn’t have the equivalent of a name and an ability to access it.”

Here are some of the key elements of DOA:

  • Handles—these are like file names; they are the digital object identifiers that are unique to each object and enable each to be distinctly stored, found, transported, accessed, and so forth. The handle record specifies things like where the object is stored, authentication information, terms and conditions for use, and/or “some sense of what you might do with the object.”
  • Resolution system—this is the ‘handle system’ that “gives your computer the handle record for that identifier almost immediately.”
  • Repository—“where digital objects may be deposited and from which they may be accessed later on.” Unlike traditional database systems, you don't need to know a lot of the details about it to get in or find what you're looking for.
  • Security at object layer—In DOA, the security “protection occurs at the object level rather than protecting the identifier or by providing only a password at the boundary.”
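
To make these pieces a bit more concrete, here is a minimal Python sketch of how a digital object, its handle record, and a toy resolution function might fit together. The class and field names are my own illustration, not the actual Handle System or DOA specification.

```python
from dataclasses import dataclass

# Hypothetical sketch only: names are illustrative, not the actual DOA data model.

@dataclass
class HandleRecord:
    handle: str          # the unique, persistent identifier for the object
    locations: list      # where the object (or its replicas) may currently be stored
    auth_info: dict      # authentication details and terms/conditions for use

@dataclass
class DigitalObject:
    handle: str          # the identifier travels with the object itself
    metadata: dict       # "how do you want to characterize it," who may access it, etc.
    payload: bytes       # the structured content any DOA-aware machine can parse

# A toy resolution system: given a handle, return its handle record almost immediately,
# regardless of which repository the object currently sits in.
registry: dict = {}

def resolve(handle: str) -> HandleRecord:
    return registry[handle]

# Usage: deposit an object, register its handle, then resolve it later.
obj = DigitalObject("demo/123", {"title": "sample"}, b"...")
registry[obj.handle] = HandleRecord(obj.handle, ["repository-A"], {"terms": "public"})
print(resolve("demo/123").locations)   # ['repository-A']
```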

The overall distinguishing factor of DOA from the current Internet is that in the current Internet environment, you “have to know exactly where to look for certain information” and that’s why search engines are so critical to indexing the information out there and being able to find it. In contrast, in DOA, information is tagged when it is stored in the repository and given all the information up front about “how do you want to characterize it” and who can manage it, transport it, access it, and so on.

To me, in DOA (or Internet 2.0), the information itself provides for its own intelligent use, whereas in the regular Internet, the infrastructure (transport) and search features must provide for its usability.

As I am thinking about this, an analogy comes to mind. Some people with medical conditions wear special information bracelets that identify their unique medical conditions and this aids in the speed and possibly the accuracy of the medical treatment they receive—i.e. better medical management.  This is like the tagging of information in DOA where the information itself wears a metaphorical bracelet identifying it and what to do with it thereby yielding faster and better information management.

Currently, we sort of retrofit data about our information into tags called metadata; but here we have the notion of creating information with the metadata built in, almost as part of its genetic makeup.

Information with “handles” built in as part of the information creation and capture process would be superior for sharing and collaboration, and ultimately more user-centric for people.

In my humble opinion, DOA has some teeth and is certainly not "Dead On Arrival."



May 11, 2009

Innovation Goes Both Ways


A soldier with an iPod mounted on his wrist takes aim with his assault rifle

http://www.independent.co.uk/news/world/middle-east/iphones-in-iraq-ndash-the-us-armys-new-weapon-1682655.html


It used to be that military technology often found application in the consumer world (like the development of the Internet from DARPA). Now, consumer technology is being used in the battlefield theater (for example, iPods and iPhones).

Enterprise architecture is being turned upside down in terms of the traditional migration path of technological innovation.

Perhaps the practice of applying technological innovation to other areas, regardless of where it originates, is the best model of all!

In this way, we share and match the best and brightest ideas and product designs to new areas of applicability, opening up more uses and markets for successful product launches.


April 4, 2009

Social Media Can Free Us All

An action by a lone decision maker may be quick and life-saving, as in response to extreme fear or stress, when a person must in a split second choose between “fight or flight.”

Given a little more time to make a decision, people have found that there is not only strength in numbers, but also wisdom. In other words, vetting a decision among a diverse group and hearing different sides to an issue generally yields better decisions than an individual could make alone. Colloquially, we often hear this referred to as “two heads are better than one.”

Now, with the power of the Internet, we are able to employ collective decision making en masse. Through Web 2.0 tools like wikis, Internet forums, social networks, and other collaboration tools, we can reach out to masses of people across the social, economic, and political landscape—anywhere in the world—and even to those orbiting the planet on the International Space Station. Soon enough, we will take the power of the collective to new extremes by reaching out to those who have traveled to and reside on distant worlds—I think that will probably be in Web 4.0 or 5.0.

What’s amazing is that we can get input, in virtually limitless numbers, from anyone, anywhere, who is interested in participating and providing their ideas and input.

When we open up the discussion to large groups of people like this, it is called crowdsourcing, and it is essentially mass information sharing, collaboration, and participation toward more sophisticated and mature ideation and decision making.

The concept of participatory thinking and intelligence, to me, is an outgrowth not just of the technologies that enable it, but also of the freedom of people to choose to participate and their human right to speak their minds freely and openly. Certainly, this is an outgrowth of democratization and human rights.

While the Internet and social media technologies are in a sense an outgrowth of the freedoms that support our ability to innovate, I believe that they will now be an enabler for continued democratization, freedom, and human rights around the world. Once the floodgates are opened a little for people to be free virtually (to read new ideas online, to vote online, to comment and provide feedback online, and to generally communicate and share openly online), a surge of freedom in the traditional sense must soon follow.

This is a tremendous time for human civilization—the Internet has connected us all. Diversity is no longer a dirty little word that some try to squash, but a strength that binds us. Information sharing no longer cowers behind a need to know. Collaboration no longer hides behind more authoritative forms of decision making. People and organizations recognize that the strengths of individuals are magnified by the power of the collective.

The flip side is that voices for hate, chaos, and evil can also avail themselves of the same tools of social media to spread extremism, crime, terrorism, and anarchy. So there are two camps coming together through sharing and collaboration, the same as through all time—good and evil.

The fight for truth is taking a new turn through technology. Social media enables us to use mass communication and collective intelligence to achieve a high goal.



February 7, 2009

The Perilous Pitfalls of Unconscious Decision Making

Every day as leaders, we are called upon to make decisions—some more important than others—but all having impacts on the organization and its stakeholders. Investments get made for better or worse, employees are redirected this way or that, customer requirements get met or are left unsatisfied, suppliers receive orders while others get cancelled, and stakeholders far and wide have their interests fulfilled or imperiled.

Leadership decisions have a domino effect. The decisions we make today will affect the course of events well into the future--especially when we consider a series of decisions over time.

Yet leadership decisions span the continuum from being made in a split second to those that are deliberated long and hard.

In my view, decision makers can be categorized into three types: “impulsive,” “withholding,” and “optimizers.”

  1. Impulsive leaders jump the gun and make a decision without sufficient information—sometimes possibly correctly, but often risking harm to the organization because they don’t think things through.
  2. Withholding leaders delay making decisions, searching for the optimal decision or Holy Grail. While this can be effective to avoid overly risky decisions, the problem is that they end up getting locked into “analysis paralysis”. They never get off the dime; decisions linger and die while the organization is relegated to a status quo—stagnating or even declining in times of changing market conditions.
  3. Optimizers rationally gather information, analyze it, vet it, and drive towards a good enough decision; they attempt to do due diligence and make responsible decisions in reasonable time frames that keep the organization on a forward momentum, meeting strategic goals and staying competitive. But even the most rational individuals can falter in the face of an array of data.

So it is clear that whichever mode decision makers assume, many decisions are still wrong. In my view, this has to do with the dynamics of the decision-making process. Even if they think they are being rational, in reality leaders too often make decisions for emotional or even unconscious reasons. Even optimizers can fall into this trap.

CIOs, who are responsible for substantial IT investment dollars, must understand why this happens and how they can use IT management best practices, structures, and tools to improve the decision-making process.

An insightful article that sheds light on unconscious decision-making, “Why Good Leaders Make Bad Decisions,” was published this month in Harvard Business Review.

The article states: “The reality is that important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.”

Here are two reasons cited for poor decision making:

  • Pattern Recognition—“faced with a new situation, we make assumptions based on prior experiences and judgments…but pattern recognition can mislead us. When we’re dealing with seemingly familiar situations, our brains can cause us to think we understand them when we don’t.”
  • Emotional Tagging—“emotional information attaches itself to the thoughts and experiences stored in our memories. This emotional information tells us whether to pay attention to something or not, and it tells us what sort of action we should be contemplating.” But what happens when emotion gets in the way and inhibits us from seeing things clearly?

The authors note some red flags in decision making: the presence of inappropriate self-interest, distorting attachments (bonds that can affect judgment—people, places, or things), and misleading memories.

So what can we do to make things better?

According to the authors of the article, we can “inject fresh experience or analysis…introduce further debate and challenge…impose stronger governance.”

In terms of governance, the CIO certainly comes with a formidable arsenal of IT tools to drive sound decision making. In particular, enterprise architecture provides for structured planning and governance; it is the CIO’s disciplined way to identify a coherent and agreed-to business and technical roadmap and a process to keep everyone on track. It is an important way to create order out of organizational chaos by using information to guide, shape, and influence sound decision making instead of relying on gut, intuition, politics, and subjective management whim—all of which are easily biased and flawed!

In addition to governance, there are technology tools for information sharing and collaboration, knowledge management, business intelligence, and yes, even artificial intelligence. These technologies help to ensure that we have a clear frame of reference for making decisions. We are no longer alone out there making decisions in an empty vacuum; rather, we can now reach out far and wide to other organizations, leaders, subject matter experts, and stakeholders to get and give information, to analyze, to collaborate, and perhaps to take what would otherwise be sporadic and random data points and connect the dots leading to a logical decision.

To help safeguard the decision process (and no, it will never be failsafe), I would suggest greater organizational investments in enterprise architecture planning and governance and in technologies that make heavily biased decisions largely a thing of the past.



January 3, 2009

Embedded Systems and Enterprise Architecture

Information technology is not just about data centers, desktops, and handheld devices anymore. These days, technology is everywhere—embedded in all sorts of devices from cars and toaster ovens to traffic lights and nuclear power plants. Technology is pervasive in every industry from telecommunications to finance and from healthcare to consumer electronics.

Generally, embedded systems are dedicated to specific tasks, while general-purpose computers can be used for a variety of functions. In either case, the systems are vital for our everyday functioning.

Government Computer News, 15 December 2008, reports that “thanks to the plummeting cost of microprocessors, computing…now happens in automobiles, Global Positioning Systems, identification cards and even outer space.”

The challenge with embedded systems is that they “must operate on limited resources—small processors, tiny memory and low power.”

Rob Oshana, director of engineering at Freescale Semiconductor, says that “With embedded it’s about doing as much as you can with as little as you can.”

What’s new—haven’t we had systems embedded in automobiles for years?

“Although originally designed for interacting with the real world, such systems are increasingly feeding information into larger information systems,” according to Wayne Wolf, chair of embedded computing systems at Georgia Institute of Technology.

According to Wolf, “What we are starting to see now is [the emergence] of what the National Science Foundation is calling cyber-physical systems.”

In other words, embedded systems are used for command and control or information capture in the physical domain (like in a car or medical imaging machine), but then they can also share information over a network with others (think OnStar or remote medical services).

When the information is shared from the car to the OnStar service center, information about an accident can be turned into the dispatch of life-saving responders. Similarly, when scans from a battlefield MRI are shared with medical service providers back in the States, quality medical services can be provided, when necessary, from thousands of miles away.
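
As a rough sketch of this cyber-physical pattern (the endpoint URL and field names below are hypothetical, not an actual OnStar or DoD interface), an embedded device captures data locally and then shares it with a larger information system over the network:

```python
import json
import urllib.request

def read_crash_sensors() -> dict:
    """Stand-in for reading the vehicle's local sensors (the embedded, physical side)."""
    return {"vehicle_id": "demo-123", "impact_g": 4.2, "airbag_deployed": True}

def share_reading(reading: dict, endpoint: str = "https://example.com/telemetry") -> None:
    """Push the local reading to a central service (the networked side), where it can be
    correlated with other data and turned into a dispatch decision."""
    payload = json.dumps(reading).encode("utf-8")
    request = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)

if __name__ == "__main__":
    share_reading(read_crash_sensors())
```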

As we should all have come to learn after 9/11, information hoarding is faux power. But when information is shared, the power is real, because it can be received and used by others and still others, so that its influence is exponential.

Think, for example, of the Mars Rover, which has embedded systems for capturing environmental samples. Left alone, the information is contained to a physical device millions of miles away, but by sharing the information back to tracking stations here on Earth, it can be analyzed, shared, studied, and so forth, with almost endless possibilities for ongoing learning and growth.

The world has changed from embedded systems to a universe of connected systems.

Think distributed computing and the Internet. With distributed computing, we have silos or separate domains of information, but by connecting the islands of information using the Internet, for example, we can all harness the vast amounts of information out there, process it within our own lives, and contribute information back to others.

The connection and sharing is our strength.

In the intelligence world, information is often referred to as dots, and it is the connection of the dots that make for viable and actionable intelligence.

As people, we are also proverbially just little dots in this big world of ours.

But as we have learned with social media, we are able to grow as individuals and become more potent and more fulfilled human beings by being connected with others—we’ve gone from doing this in our limited physical geographies to a much larger population in cyberspace.

In the end, information resides in people or can be embedded in machines, but connecting that information with other humans and machines is the true power of information technology.



November 13, 2008

The Awesome Implications of Gmail and Enterprise Architecture

Recently, I took the leap from Yahoo! and added a Gmail Account.

For a long time, I thought, “What can be the difference? E-mail is e-mail.” Further, I thought people were just switching because it was the latest fad, and they wanted to be associated with the then-upcoming Google versus the troubled Yahoo!

While this may be partly true, there are some tangible advantages to Gmail. Gmail has a better interface than Yahoo!—it provides one look and feel, while Yahoo! has a switching mechanism between the legacy email and a new Yahoo! mail, which is still kind of quirky. Gmail better integrates other tools like instant messaging and VOIP. Gmail offers a huge amount of storage. And Gmail threads related email messages so you can easily expand or click through the chain.

And finally, Gmail has a label structure for emails versus Yahoo’s folder structure. This is the one that matters most.

The label structure is superior to the folder structure. You can have multiple labels for an e-mail and can therefore locate items of interest much more easily by checking any of the pertinent label categories. In contrast, in the Yahoo! folder structure, you can only store the e-mail in one folder, period. This makes it difficult to store, connect, and discover items that cross categories.

For example, if you have an e-mail on an enterprise architecture topic from a particular source, you may want to label it by the topic EA and by the source it came from, so in the future you can find it by topic or by source.

Reflecting on this archiving structure from an enterprise architecture perspective, it became apparent to me that the legacy folder structure used in Yahoo! mail and in typical Microsoft Office applications such as Outlook and My Documents is built according to a typical taxonomy structure. By this I mean that there are one-“parent”-to-multiple-“children” relationships (i.e., a folder has one or more files/emails, but a file/email is constrained to only one folder).

However, in Gmail, the archiving structure is built according to an ontology structure, where there are multiple relationships between objects, so that there is a many-to-many relationship. (i.e. a label category can have multiple files/emails and files/emails can be tagged to many labels)—a much more efficient and expansive metadata structure.
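
As a rough illustration of the difference (a sketch only, not how either mail system actually stores messages), compare the two structures in code:

```python
# Folder model (taxonomy): each message lives in exactly one folder.
folders = {
    "EA":      ["msg_001"],
    "Vendors": ["msg_002"],          # msg_001 cannot also be filed here
}

# Label model (ontology-like): a message can carry several labels (many-to-many).
labels = {
    "msg_001": {"EA", "Source-A"},   # findable by topic or by source
    "msg_002": {"EA", "Vendors"},
}

def find_by_label(label: str) -> list:
    """Return all messages tagged with the given label."""
    return [msg for msg, tags in labels.items() if label in tags]

print(find_by_label("EA"))           # ['msg_001', 'msg_002']
```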

So in short, the analogy goes like this--

Folder structure : Taxonomy :: Labels : Ontology

And Google wins in e-mail archiving hands down!

In enterprise architecture, the implications are enormous. For example, Microsoft, which is the de facto standard in most of our organizations, rules the way we store files in the legacy folder structure. Perhaps the time has come for us to evolve to the superior metadata structure using labeling. This will make it far easier and more productive for the average user to search and discover the information they need.

Further, metadata is at the heart of enterprise architecture, where we seek to break down the siloes in and between our organizations and make for better interoperability and information sharing. The goal is a holistic view of what’s going on in our organization and between organizations, and the only way to achieve that from an IT perspective is to label information so that it is discoverable and usable outside stereotypical stovepipes.


October 19, 2008

Net-centricity and Enterprise Architecture

See video on Department of Defense (DoD) vision for Net-Centricity:



Source: Department of Defense

July 20, 2008

A Net-centric Military and Enterprise Architecture

Information is central to the Department of Defense’s arsenal for fighting and defeating our enemies, and the ability to share information across interoperable systems is the way ahead.

National Defense, March 2008 reports that while a net-centric military is our goal, the transformation is a work in progress.

Brig. Gen. David Warner, director of command and control at DISA, stated: “in this war, information is truly our primary weapon. You can’t move, you can’t shoot, if you can’t communicate.”

Yet, “the Defense Department continues to acquire stovepiped systems…the requirements change, the system grows, and then there are cost overruns. One of the first items to cut from the budget is interoperability.”

Air Force Gen. Lance L. Smith says, “the dream of a truly net-centric U.S. military will not happen overnight. But progress could be achieved within the next five to 10 years. It will be a matter of waiting for the stovepiped legacy systems to come to the end of their lifespan. If the services get onboard and stop building non-interoperable technologies now, then the new generation of net-centric communications can take over and become the norm.”

This sounds to me like the problem isn’t limited to legacy systems, but that there are still cultural, project management, and change management issues that are obstacles to achieving the net-centric goal.

The challenges are even greater and more complex when it comes to sharing information with “federal civilian agencies and foreign allies…NATO, for example, has no mechanism to ensure its members are interoperable with each other.”

“Today the normal way to do business is to ‘exchange hostages,’ which means sending personnel from one service, agency, or coalition partner to each other’s command centers so they can verbally relay information.” This typically takes the form of an interagency operations command center, and is not very net-centric.

So we continue to have stovepipes for “communications or data sharing systems built by different agencies, armed services, or coalition partners that cannot link to each other…[yet] the U.S. military is trying to make itself more lethal, faster, and more survivable. [And] the key to doing that is the ability to share information.”

Net-centricity, interoperability, and information sharing are true cornerstones to what enterprise architecture is about, and it is where we as architects are needed to take center stage now and in the years ahead in the war on terrorism and the other challenges we will face.

From an EA perspective, we need to ensure that all of our agencies’ targets, transition plans, and IT governance structures not only include, but emphasize net-centricity and enforce it through the EA review processes and the investment review board. There is no excuse for these stovepipes to persist.

June 13, 2008

Preventing Another 9/11 and Enterprise Architecture

From the tragic events of 9/11 came the Intelligence Reform and Terrorism Prevention Act, the findings of the 9/11 Commission in 2004, and the Presidential memoranda in 2005 and 2007 to better share information.

ComputerWorld Magazine, 26 May 2008, reports that “nearly seven years after 9/11, information-sharing problems that hobble law enforcement are just beginning to be solved.”

What is the information sharing problem in law enforcement?

There are “20 federal agencies and 20,000 state, county, local, and tribal enforcement organizations nationwide.” The problem is how do you get this multitude of varied law enforcement organizations to share information to identify the bad guys?

While 75% of police agencies use automated systems to manage incident report data, only 25% of those systems are capable of sharing that information.

What’s being done to fix the problem?

First (not mentioned by ComputerWorld), the Office of the Director of National Intelligence (ODNI) is establishing common terrorism information sharing standards (CTISS) to drive and enable information sharing among Law Enforcement, Homeland Security, Intelligence, Military, and Diplomatic information domains.

Additionally, the Department of Justice is developing a data dictionary/schema to establish a “common vocabulary and structure for the exchange of data.” First, this took the form of the Global Justice XML Data Model (GJXDM) in 2003, and later took form in the National Information Exchange Model (NIEM) in 2005 that extended the effort from “law enforcement to other areas of justice, public safety, intelligence, homeland security, and emergency and disaster management.” (Note: Defense and the Intelligence Community have a comparable data standard initiative called U-CORE.)

This past March, DOJ and the FBI’s Criminal Justice Information Services (CJIS) division “began rolling out the National Data Exchange Initiative (N-DEx), a NIEM-compliant database and data sharing network.” N-DEx provides “federated search capability across incident reports residing in state and local record management systems nationwide while allowing those records to be updated and maintained by their local owners.”

The goal is to have “the majority of the country participating” by 2009. The biggest obstacle is that many agencies’ systems have been so customized that integration is now challenging and expensive.

According to the FBI’s website, N-DEx will be accessible via the Internet and “includes several basic but vital capabilities, including searching and correlating incident/case report information and arrest data to help resolve entities (determining a person’s true identity despite different aliases, addresses, etc.). N-DEx will also create link analysis charts to assist in criminal investigations and identify potential terrorist activity.”

According to the N-DEx brochure (available online at the FBI website), law enforcement agencies that participate in N-DEx will:

  • “Sign an operational Memorandum of Understanding (MOU)
  • Identify and map incident/case data to the N-DEx Information Exchange Package Documentation (IEPD)
  • Obtain network connectivity through an existing CJIS Wide-Area Network (WAN) or connect over the Law Enforcement Online (LEO).”
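
To illustrate the “common vocabulary” idea in the simplest terms, here is a notional Python sketch of mapping a local incident record into a shared exchange format; the field names are invented for illustration and are not actual NIEM or IEPD element names.

```python
# Notional sketch: field names are invented, not actual NIEM/IEPD elements.

# An incident report as a local record management system might store it.
local_incident = {
    "subj_last": "Doe",
    "subj_first": "John",
    "rpt_date": "2008-03-15",
}

# Agreed mapping from local field names to the common exchange vocabulary.
field_map = {
    "subj_last": "PersonSurname",
    "subj_first": "PersonGivenName",
    "rpt_date": "IncidentReportDate",
}

def to_exchange_format(record: dict) -> dict:
    """Translate a local record into the shared vocabulary so other agencies'
    federated searches can find and correlate it."""
    return {field_map[field]: value for field, value in record.items()}

print(to_exchange_format(local_incident))
```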

The architecture concept here is summed up nicely by Linda Rosenberg, Director of the Pennsylvania Office of Criminal Justice: “Now you don’t have to go back and build these data warehouses and totally redo your entire infrastructure.”

Instead, you plug into N-DEx and share information that’s been mapped to the common data standards. N-DEx provides the target infrastructure, while NIEM provides the data exchange standards. Together, we can share information to better achieve our law enforcement mission—protecting the American people.



June 3, 2008

24 Hour Knowledge Factories and Enterprise Architecture

“Enterprise Architecture is a strategic information asset base that defines the business, the information necessary to operate the business, the technologies necessary to support the business operations, and the transitional processes for implementing new technologies in response to the changing needs of the business.”

In the information economy we live in today, information is certainly an asset with expected returns for the numerous businesses in the services sector and the millions of people working as knowledge workers.

The Wall Street Journal, in conjunction with MIT Sloan School of Management, on 15 September 2007 reports that “today, we are witnessing the advent of the 24-hour knowledge factories…thanks to more robust information technology and a growing acceptance of offshoring, the concept is feasible.” The key to the organization being able to support 24-hour (round the clock) knowledge work is to have “an online repository of information accessible to all groups.”

What makes for a successful knowledge repository for sharing information between sites and teams?

  • Acquisition—capturing all relevant knowledge that can support users’ knowledge work.
  • Discovery—making data discoverable so it can be mined for those nuggets that can aid in job performance.
  • Management—developing data standards including a common lexicon and metadata to deal with differences in semantics and formats.
  • Dissemination—making the information accessible for standardized reporting or ad-hoc queries.
In User-centric EA, we are in the information business (providing information for planning and governance). And whether the information is needed 24/7 or during "regular" business hours, we need to develop information products, relate the data in useful ways, and make the information easy to understand and readily accessible to end-users. To do this, a robust EA knowledge center or repository is essential.

May 15, 2008

Information Governance and Enterprise Architecture

We all know that information is vital to making sound and timely decisions. How do we govern information (using the term to include both data and information) so that it is truly valuable to the organization and not just another case of GIGO (Garbage In, Garbage Out)?

DM Review, May 2008, reports on some research by Accenture that confirms that “high-performing organizations make far better use of information than their peers.”

Information is a strategic enterprise asset. The key to getting better results from information is the effective use of information governance. Information governance includes decision making and management over the full information life cycle, including: information capture, processing, storage, retrieval, and reporting and disposition.

Without information governance, what can happen to corporate information assets and the end users that rely on it?

  1. Information Hoarding (or Silos)—the information exists in the organization, but people hoard it rather than share it. They treat information as power and currency, and they do not readily provide information to others in their organization even if it helps the organization they work for.
  2. Information Quality NOT (“multiple versions of the truth”)—information quality will suffer if decisions are not made and enforced to ensure authoritative information sources, quality control processing, and adequate security to protect it.
  3. Information Overload—not managing the way information is rolled up, presented, and reported can result in too much information that cannot be readily processed or understood by those on the receiving end. It’s like the floodgates have been opened or, as one of my bosses used to say, “trying to drink from a fire hose.”
  4. Information Gaps—without proper requirements gathering and planning and provision for systems to meet information needs, users may be left holding the bag, and it’s empty; they won’t have the information they need to support their functional processes and day-to-day decision making needs.

Not having effective information governance is costly for the organization. The target enterprise architecture state for information management is to get the right information to the right people at the right time. Anything less will mean sub-optimized processes, excessive management activity, and poor decision making, and that will be costly for the organization—lost sales, dissatisfied customers, compliance lapses, safety and legal issues, publicity snafus, and other mistakes that can even put the enterprise out of business!

According to Accenture’s survey of more than 1000 large companies in the U.S. and UK, information is not being governed very well today:

  • “Managers spend more than one quarter of their work week searching for information.
  • More than half of what they obtain is of no value.
  • Managers accidentally use the wrong data more than once a week.
  • It is challenging to get different parts of the company to share needed information.”

The good news is that “the majority of CIOs seem ready to act” by employing information governance.

Information, as one of the perspectives of the enterprise architecture, is already governed through the Enterprise Architecture Board (EAB). However, to give more focus to information governance, perhaps we need to establish a separate Information Governance Board (IGB). I see the IGB as a sub-committee of the EAB that provides findings and recommendations to the EAB; the EAB would be the decision authority for governing all the perspectives of the architecture, including: performance, business, information, services, technology, security, and human capital. To better focus on and decompose the various EA perspective areas, perhaps they will all have their own sub-committees in the future (like a Performance Governance Board, Business Governance Board, and so forth), similar to the IGB.



May 6, 2008

Information Management and Enterprise Architecture

Information management is the key to any enterprise architecture.

Information is the nexus between the business and technical components of the EA:

  • On one hand, we have the performance requirements and the business processes to achieve those.
  • On the other hand, we have systems and technologies.
  • In between is the information.

Information is required by the business to perform its functions and activities and it is served up by the systems and technologies that capture, process, transmit, store, and retrieve it for use by the business. (The information perspective is sandwiched in between the business and the services/technology perspectives.)

Recently, I synthesized a best practice for information management. This involves key values, goals for these, and underlying objectives. The values and objectives include the following:

  1. Sharing—making information visible, understandable, and accessible.
  2. Quality—information needs to be valid, consistent, and comprehensive.
  3. Efficiency—information should be requirement-based (mission-driven), non-duplicative, timely, and delivered in a financially sound way.
  4. Security—information must be assured in terms of confidentiality, integrity, and availability.
  5. Compliance—information has to comply with requirements for privacy, Freedom of Information Act (FOIA), and records management.
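
One simple way to picture how these values could be operationalized (a sketch only, with attribute names of my own invention) is as a checklist that any information asset or exchange can be assessed against:

```python
# Sketch: the five information-management values expressed as an assessable checklist.
INFORMATION_VALUES = {
    "sharing":    ["visible", "understandable", "accessible"],
    "quality":    ["valid", "consistent", "comprehensive"],
    "efficiency": ["requirements_based", "non_duplicative", "timely", "cost_effective"],
    "security":   ["confidentiality", "integrity", "availability"],
    "compliance": ["privacy", "foia", "records_management"],
}

def gaps(asset_attributes: set) -> dict:
    """For each value, list the objectives an information asset does not yet meet."""
    return {value: [obj for obj in objectives if obj not in asset_attributes]
            for value, objectives in INFORMATION_VALUES.items()}

# Example: an asset that is visible, accessible, and timely, but little else.
print(gaps({"visible", "accessible", "timely"}))
```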

The importance of information management to enterprise architecture was recently addressed in DM Review Magazine, May 2008. The magazine reports that in developing an architecture, you need to focus on the information requirements and managing these first and foremost!

“You need to first understand and agree on the information architecture that your business needs. Then determine the data you need, the condition of that data and what you need to do to cleanse, conform, and transform that data into business transformation.”

Only after you fully understand your information requirements, do you move on to develop technology solutions.

“Next, determine what technologies (not products) are required by the information and data architectures. Finally, almost as an afterthought, evaluate and select products.” [I don’t agree with the distinction between technologies and products, but I do agree that you first need your information requirements.]

Remember, business drives technology—and this is done through information requirements—rather than doing technology for technology’s sake.

“Let me also suggest…Do not chase the latest and greatest if your incumbent products can get the job done.”

In enterprise architecture, the customer/end-user is king and the information requirements are their edicts.

