- “Sensing opportunities and threats in the environment”—(recognizing future impacts) this entails “foreseeing events” and technologies that will affect the organization and one’s stakeholders. This means not only constantly scanning the environment for potential impacts, but also making the mental connections between internal and external factors, the risks and opportunities they pose, and the probabilities that they will occur.
- “Setting strategic direction”—(determining plans to respond) this means identifying the best strategies to get out ahead of emerging threats and opportunities and determining how to mitigate risks or leverage opportunities (for example, to increase mission effectiveness, revenue, profitability, market share, and customer satisfaction).
- “Inspiring constituents”—(executing on a way ahead) this involves assessing change readiness, “challenging the status quo” (being a change agent), articulating the need and “new ways of doing things”, and motivating constituents to take necessary actions.
January 24, 2009
January 18, 2009
The fact that information itself has become a problem is validated by the fact that Google is the world’s #1 brand with a market capitalization of almost $100 billion. As we know, Google’s mission statement is “to organize the world's information and make it universally accessible and useful.”
The key to making information useful is not just organizing it and making it accessible, but also making sure that it is based on good data—and not the proverbial “garbage in, garbage out” (GIGO).
There are two types of garbage information:
- Incorrect, incomplete, or dated
- Misleading /propagandistic or an outright lie
When information is not reliable, it causes confusion rather than bringing clarity. The information can then actually result in worse decision making than if you didn’t have it in the first place. This makes for an enterprise architecture that is not only worthless, but harmful, even poisonous, to the enterprise.
Generally, in enterprise architecture, we are optimistic about human nature and focus on #1, i.e., we assume that people mean to provide objective and complete data and try to ensure that they can do that. But unfortunately there is a darker side to human nature that we must grapple with, and that is #2.
Misinformation by accident or by intent is used in organizations all the time to make poor investment decisions. Just think how many non-standardized, non-interoperable, costly tools your organization has bought because someone provided “information” or developed a business case, which “clearly demonstrated” that it was a great investment with a high ROI. Everyone wants their toys!
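To make the first type of garbage information concrete, here is a minimal sketch of automated data-quality checks that flag incorrect, incomplete, or dated records before they feed an investment decision. The record format, field names, and freshness threshold are all hypothetical, chosen purely for illustration.

```python
from datetime import date, timedelta

# Hypothetical record schema and staleness threshold, for illustration only.
REQUIRED_FIELDS = {"asset_id", "owner", "cost", "last_updated"}
MAX_AGE = timedelta(days=365)  # records older than a year count as "dated"

def quality_issues(record: dict, today: date) -> list:
    """Return a list of data-quality problems found in one record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append("incomplete: missing " + ", ".join(sorted(missing)))
    cost = record.get("cost")
    if cost is not None and cost < 0:
        issues.append("incorrect: negative cost")
    updated = record.get("last_updated")
    if updated is not None and today - updated > MAX_AGE:
        issues.append("dated: not updated in over a year")
    return issues

record = {"asset_id": "A-17", "cost": -500, "last_updated": date(2007, 3, 1)}
print(quality_issues(record, date(2009, 1, 18)))
# flags the missing owner, the negative cost, and the stale timestamp
```

Checks like these catch honest errors; the second type of garbage, deliberately misleading information, is a human problem that no validation routine can fully solve.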
Wired Magazine, February 2009, talks about disinformation in the information age in “Manufacturing Confusion: How more information leads to less knowledge” (Clive Thompson).
Thompson writes about Robert Proctor, a historian of science from Stanford, who coined the word “Agnotology,” or “the study of culturally constructed ignorance.” Proctor theorizes that “people always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out. But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop hearing about what’s true and what’s not.” Thompson offers as examples:
- “Bogus studies by cigarette companies trying to link lung cancer to baldness, viruses—anything but their product.”
- Financial firms creating fancy-dancy financial instruments like “credit-default swaps [which] were designed not merely to dilute risk but to dilute knowledge; after they had changed hands and been serially securitized, no one knew what they were worth.”
We have all heard the saying that “numbers are fungible” and we are also all cautious about “spin doctors” who appear in the media telling their side of the story rather than the truth.
So it seems that despite the advances wrought by the information revolution, we have some new challenges on our hands: not just incorrect information but people who literally seek to promote its opposite.
So we need to get the facts straight. And that means not only capturing valuable information, but also eliminating bias so that we are not making investment decisions on the basis of B.S.
January 17, 2009
Read about Decentralization, Technology, and Anti-Terror Planning in The Total CIO.
The concept of decentralization is very simple. Rather than concentrating all your vital assets in one place, you spread them out so that if one is destroyed, the others remain functional. The terrorists already do this by operating in dispersed “cells.” Not only that, but we know that very often one “cell” doesn’t know what the other one is doing or even who they are. All this to keep the core organization intact in case one part of it is compromised.
Both the public and private sectors understand this and often strategically decentralize and have backup and recovery plans. However, we still physically concentrate the seat of our federal government in a geographically close space. Given that 9/11 represented an attack on geographically concentrated seats of U.S. financial and government power, is it a good enterprise architecture decision to centralize many or all government headquarters in one single geographic area?
On the one hand the rationale for co-locating federal agencies is clear: The physical proximity promotes information-sharing, collaboration, productivity, a concentrated talent pool, and so on. Further, it is a signal to the world that we are a free and proud nation and will not cower before those who threaten us.
Yet on the other hand, technology has advanced to a point where physical proximity, while a nice-to-have, is no longer an imperative to efficient government. With modern telecommunications and the Internet, far more is possible today than ever before in this area. Furthermore, while we have field offices dispersed throughout the country, perhaps having some headquarters outside DC would bring us closer to the citizens we serve.
On balance, I believe that both centralization and decentralization have their merits, but that we need to more fully balance these. To do this, we should explore the potential of decentralization before automatically reverting to centralization.
It seems to me that decentralization carries some urgency given the recent report “World At Risk,” by The Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism—it states that “terrorists are determined to attack us again—with weapons of mass destruction if they can. Osama bin Laden has said that obtaining these weapons is a ‘religious duty’ and is reported to have sought to perpetrate another ‘Hiroshima.’”
Moreover, the report goes on to state that the commission “believes that unless the world community acts decisively and with great urgency, it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013.”
Ominously, the report states: “we know the threat we face. We know our margin of safety is shrinking, not growing. And we know what we must do to counter the risk.”
Enterprise architecture teaches us to carefully vet and make sound investment decisions. Where should we be investing our federal assets—centrally or in a decentralized fashion—and how much in each category?
Obviously, changing the status quo is not cheap and would be especially difficult in the current global economic reality. But it is still something we should carefully consider.
January 11, 2009
According to a recent article in Harvard Business Review, December 2008, one way that enterprises can better architect their products and services is by “choice architecture.”
Choice Architecture is the “design of environments in order to influence decisions.” By “covertly or overtly guiding your choices,” enterprises “benefit both company and consumer by simplifying decision making, enhancing customer satisfaction, reducing risk, and driving profitable purchases.”
For example, companies set “defaults” for products and services that are “the basic form customers receive unless they take action to change it.”
“At a basic level, defaults can serve as manufacturer recommendations, and more often than not we’re happy with what we get by accepting them. [For example,] when we race through those software installation screens and click ‘next’ to accept the defaults, we’re acknowledging that the manufacturer knows what’s best for us.”
“Of course, defaults can be nefarious as well. They have caused many of us to purchase unwanted extended warranties or to inadvertently subscribe to mailing lists.”
“Given the power of defaults to influence decisions and behaviors both positively and negatively, organizations must consider ethics and strategy in equal measure in designing them.”
Here are some interesting defaults and how they affect decision making:
Mass defaults—“apply to all customers…without taking customers’ individual preferences into account.” This architecture can result in suboptimal offerings and therefore some unhappy customers.
Some mass defaults have hidden options—“the default is presented as a customer’s only choice, although hard-to-find alternatives exist.” For example, computer industry vendors, such as Microsoft, often use hidden options to keep the base product simple, while at the same time having robust functionality available for power users.
Personalized defaults—“reflect individual differences and can be tailored to better meet customers’ needs.” For example, information about an individual’s demography or geography may be taken into account for product/service offerings.
One type of personalized default is adaptive defaults—which “are dynamic: they update themselves based on current (often real-time) decisions that a customer has made.” This is often used in online retailing, where customers make a series of choices.
There are other defaults types such as benign, forced, random, persistent, and smart: each limiting or granting greater amounts of choice to decision makers.
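The progression from mass to personalized to adaptive defaults can be sketched in code. This is an illustrative toy, not the article’s own example; the product fields, the country rule, and the feature names are all hypothetical.

```python
# Mass default: one configuration for every customer, preferences ignored.
MASS_DEFAULT = {"plan": "standard", "newsletter": True}

def personalized_default(profile: dict) -> dict:
    """Tailor the mass default using what is known about the customer."""
    default = dict(MASS_DEFAULT)
    if profile.get("country") == "DE":
        default["newsletter"] = False  # e.g. stricter local opt-in rules
    return default

def adaptive_default(profile: dict, recent_choices: list) -> dict:
    """Dynamic default: update the personalized default from the
    customer's current (often real-time) decisions."""
    default = personalized_default(profile)
    if "premium_feature" in recent_choices:
        default["plan"] = "premium"  # suggest what their behavior implies
    return default

print(adaptive_default({"country": "DE"}, ["premium_feature"]))
# → {'plan': 'premium', 'newsletter': False}
```

Note how each layer narrows the gap between what the enterprise presents and what the individual customer would likely choose, which is exactly the trade-off the default taxonomy above describes.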
When we get defaults right (whether we are designing software, business processes, other end-user products, or supplying services), we can help companies and customers to make better, faster, and cheaper decisions, because there is “intelligent” design to guide the decision process. In essence, we are simplifying the decision making process for people, so they can generally get what they want in a logical, sequenced, well-presented way.
Of course, the flip side is that when choice architecture is done poorly, we unnecessarily limit options, drive people to poor decisions, and leave people dissatisfied, so that they will seek alternative suppliers and options in the future.
Certainly, we all love to choose what we want, how we want, when we want, and so on. But as all of us have probably experienced at one time or another: when you have too many choices—unconstrained, not guided, not intelligently presented—then consumers/decision makers can be left dazed and confused. That is why we can benefit from choice architecture (when done well) to help make decision making simpler, smarter, faster, and generally more user-centric.
January 10, 2009
To me the question is important from an enterprise architecture perspective, because EA seeks to help organizations and people make better decisions and not get roped into decision-making by gut, intuition, politics, or subjective management whim. Are there lessons to be learned from this huge and embarrassing Ponzi scheme that can shed light on how people get suckered in and make the wrong decision?
The Wall Street Journal, 3-4 January, has a fascinating article called the “Anatomy of Gullibility,” written by one of the Madoff investors who lost 30% of their retirement savings in the fund.
Point #1—Poor decision-making is not limited to investing. “Financial scams are just one of the many forms of human gullibility—along with war (the Trojan Horse), politics (WMD in Iraq), relationships (sexual seduction), pathological science [people are tricked into false results]…and medical fads.”
Point #2—Foolish decisions are made despite information to the contrary (i.e., warning signs). “A foolish (or stupid) act is one in which someone goes ahead with a socially or physically risky behavior in spite of danger signs or unresolved questions.”
Point #3—There are at least four contributors to making bad decisions.
- SITUATION—There has to be an event that requires a choice (i.e. a decision point). “Every gullible act occurs when an individual is presented with a social challenge that he has to solve.” In the enterprise, there are situations (economic, political, social, legal, personal…) that necessitate decision-making every day.
- COGNITION—Decision-making requires cognition, whether sound or unsound. “Gullibility can be considered a form of stupidity, so it is safe to assume deficiencies in knowledge and/or clear thinking are implicated.” In the organization and personally, we need lots of good useful and usable information to make sound decisions. In the organization, enterprise architecture is a critical framework, process, and repository for the strategic information to aid cognitive decision-making processes.
- PERSONALITY—People and their decisions are influenced positively or negatively by others (this includes the social effect…are you following the “in-crowd”?). “The key to survival in a world filled with fakers…or unintended misleaders…is to know when to be trusting and when not to be.” In an organization and in our personal lives, we need to surround ourselves with those who can be trusted to provide sound advice and guidance and genuinely look after our interests.
- EMOTION—As humans, we are not purely rational beings; we are swayed by feelings (including fear, greed, compassion, love, hate, joy, anger…). “Emotion enters into virtually every gullible act.” While we can never remove emotion from the decision-making process, nor would it even be desirable to do so, we do need to identify the emotional aspects and put them into perspective. For example, the enterprise may feel threatened and competitive in the marketplace and feel a need to make a big technological investment; however, those feelings should be tempered by an objective business case including cost-benefit analysis, analysis of alternatives, risk determination, and so forth.
Hopefully, by better understanding the components of decision-making and what makes us as humans gullible and prone to mistakes, we can better structure our decision-making processes to enable more objective, better vetted, far-sighted and sound decisions in the future.
January 4, 2009
At the most basic level, people have physiological needs for food, water, shelter, and so on. Then “higher-level” needs come into play including those for safety, socializing, self-esteem, and finally self-actualization.
The second order need for safety incorporates the human desire for feeling a certain degree of control over one’s life and that there are, from the macro perspective, elements of predictability, order, and consistency in the world.
Those of us who believe in G-d generally attribute “real” control over our lives and world events to being in the hands of our creator and sustainer. Nevertheless, we see ourselves having an important role to play in doing our part—it is here that we strive for control over our lives in choosing a path and working hard at it. A lack of any semblance of control over our lives makes us feel like sheer puppets without the ability to affect things positively or negatively. We are lost in inaction and frustration that whatever we do is for naught. So the feeling of being able to influence or impact the course of our lives is critical for us as human beings to feel productive and a meaningful part of the universe that we live in.
How does this impact technology?
Mike Elgan has an interesting article in Computerworld, 2 January 2009, called “Why Products Fail,” in which he postulates that technology “makers don’t understand what users want most: control.”
Of course, technical performance is always important, but users also have a fundamental need to feel in control of the technology they are using. The technology is a tool for humans and should be an extension of our capabilities, rather than something like in the movie Terminator that runs rogue, out of the control of the human beings who made it.
When do users feel that the technology is out of their control?
Well, aside from getting the blue screen of death: when they are left waiting for the computer to do something (especially when they don’t know how long it will take), and when the user interface is complicated and unintuitive, so that they cannot find or easily understand how to do what they want to do.
Elgan says that there are a number of elements that need to be built into technology to help users feel in control.
Consistency—“predictability…users know what will happen when they do something…it’s a feeling of mastery of control.”
Usability—“give the user control, let them make their own mistakes, then undo the damage if they mess something up” as opposed to the “Microsoft route—burying and hiding controls and features, which protects newbies from their own mistakes, but frustrates the hell out of experienced users.”
Simplicity—“insist on top-to-bottom, inside-and-outside simplicity,” rather than “the company that hides features, buries controls, and groups features into categories to create the appearance of few options, while actually reducing options.”
Performance/Stability—“everyone hates slow PCs. It’s not the waiting. It’s the fact that the PC has wrenched control from the user during the time that the hourglass is displayed.”
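The usability element above—give users control, let them make their own mistakes, then undo the damage—boils down to keeping a history of states. Here is a minimal sketch of that pattern; the `Document` class and its methods are invented for illustration, not taken from any particular product.

```python
# A minimal undo stack, sketching the "usability" principle:
# let users act freely, but keep a way to reverse each action.
class Document:
    def __init__(self):
        self.text = ""
        self._history = []  # snapshots of prior states, for undo

    def type(self, s: str) -> None:
        self._history.append(self.text)  # save state before changing it
        self.text += s

    def undo(self) -> None:
        if self._history:                # undo on empty history is a no-op
            self.text = self._history.pop()

doc = Document()
doc.type("Hello")
doc.type(", world")
doc.undo()
print(doc.text)  # → Hello
```

The design choice is the point: instead of blocking or burying risky actions up front (the route Elgan criticizes), the system permits them and makes recovery cheap.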
Elgan goes on to say that vendors’ product tests “tend to focus on enabling users to accomplish goals…but how the user feels during the process is more important than anything else.”
As a huge proponent of user-centricity, I agree that people have an inherent need to feel they are in some sort of control in their lives, with the technology they use, and over the direction that things are going in (i.e. enterprise architecture).
However, I would disagree that how the user feels is more important than how well we accomplish goals; mission needs and the ability of the user to execute on these must come first and foremost!
In performing our mission, users must be able to do their jobs, using technology, effectively and efficiently. So really, it’s a balance between meeting mission requirements and considering how users feel in the process.
Technology is amazing. It helps us do things better, faster, and cheaper than we could ever do by ourselves. But we must never forget that technology is an extension of ourselves and as such must always be under our control and direction in the service of a larger goal.
January 3, 2009
Generally, embedded systems are dedicated to specific tasks, while general-purpose computers can be used for a variety of functions. In either case, the systems are vital for our everyday functioning.
Government Computer News, 15 December 2008, reports that “thanks to the plummeting cost of microprocessors, computing…now happens in automobiles, Global Positioning Systems, identification cards and even outer space.”
The challenge with embedded systems is that they “must operate on limited resources—small processors, tiny memory and low power.”
Rob Oshana, director of engineering at Freescale Semiconductor, says that “With embedded it’s about doing as much as you can with as little as you can.”
What’s new—haven’t we had systems embedded in automobiles for years?
“Although originally designed for interacting with the real world, such systems are increasingly feeding information into larger information systems,” according to Wayne Wolf, chair of embedded computing systems at Georgia Institute of Technology.
According to Wolf, “What we are starting to see now is [the emergence] of what the National Science Foundation is calling cyber-physical systems.”
In other words, embedded systems are used for command and control or information capture in the physical domain (like in a car or medical imaging machine), but then they can also share information over a network with others (think OnStar or remote medical services).
When the information is shared from the car to the OnStar service center, information about an accident can be turned into dispatch of life-saving responders. Similarly, when scans from a battlefield MRI are shared with medical service providers back in the States, quality medical services can be provided, when necessary, from thousands of miles away.
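The cyber-physical pattern described above—capture locally, then share over a network—can be sketched in a few lines. Everything here is hypothetical (the sensor, the field names, the transport); it simply illustrates separating the embedded reading from the mechanism that shares it.

```python
import json

# Hypothetical sketch of the cyber-physical pattern: an embedded device
# captures a reading locally, then shares it with a larger system.
class CrashSensor:
    """Stands in for an embedded system in, say, a vehicle."""
    def read(self) -> dict:
        # A real device would sample hardware; we fake one event.
        return {"vehicle": "unit-42", "event": "airbag_deployed", "g_force": 28.5}

def publish(reading: dict, send) -> None:
    """Share the local reading with a remote service (OnStar-style).
    `send` abstracts the network transport (HTTP, radio, etc.)."""
    send(json.dumps(reading))

received = []  # a stand-in for the remote service's inbox
publish(CrashSensor().read(), received.append)  # capture instead of a real network
print(received[0])
```

Left unconnected, the sensor’s reading would die inside the device; the `publish` step is what turns a local measurement into something a dispatcher, analyst, or larger system can act on.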
As we should hopefully have all come to learn after 9-11, information hoarding is faux power. But when information is shared, the power is real, because it can be received and used by others, and then by others again, so that its influence is exponential.
Think, for example, of the Mars Rover, which has embedded systems for capturing environmental samples. Left alone, the information is confined to a physical device millions of miles away, but by sharing the information back to remote tracking stations here on Earth, the information can be analyzed, shared, studied, and so forth, with almost endless possibilities for ongoing learning and growth.
The world has changed from embedded systems to a universe of connected systems.
Think distributed computing and the internet. With distributed computing, we have silos, or separate domains, of information; but by connecting the islands of information, using the internet for example, we can all harness the vast amounts of information out there, process it within our own lives, and contribute information back to others.
The connection and sharing is our strength.
In the intelligence world, information is often referred to as dots, and it is the connection of the dots that makes for viable and actionable intelligence.
As people, we are also proverbially just little dots in this big world of ours.
But as we have learnt with social media, we are able to grow as individuals and become more potent and more fulfilled human beings by being connected with others—we’ve gone from doing this in our limited physical geographies to a much larger population in cyberspace.
In the end, information resides in people or can be embedded in machines, but connecting that information with other humans and machines is the true power of information technology.