February 7, 2009

The Perilous Pitfalls of Unconscious Decision Making

Every day as leaders, we are called upon to make decisions—some more important than others—but all having impacts on the organization and its stakeholders. Investments get made for better or worse, employees are redirected this way or that, customer requirements get met or are left unsatisfied, suppliers receive orders while others get cancelled, and stakeholders far and wide have their interests fulfilled or imperiled.

Leadership decisions have a domino effect. The decisions we make today will affect the course of events well into the future--especially when we consider a series of decisions over time.

Yet leadership decisions span the continuum from those made in a split second to those deliberated long and hard.

In my view, decision makers can be categorized into three types: “impulsive,” “withholding,” and “optimizers.”

  1. Impulsive leaders jump the gun and make decisions without sufficient information—sometimes they happen to be right, but often they risk harming the organization because they don’t think things through.
  2. Withholding leaders delay making decisions, searching for the optimal decision or Holy Grail. While this can be effective to avoid overly risky decisions, the problem is that they end up getting locked into “analysis paralysis”. They never get off the dime; decisions linger and die while the organization is relegated to a status quo—stagnating or even declining in times of changing market conditions.
  3. Optimizers rationally gather information, analyze it, vet it, and drive towards a good enough decision; they attempt to do due diligence and make responsible decisions in reasonable time frames that keep the organization on a forward momentum, meeting strategic goals and staying competitive. But even the most rational individuals can falter in the face of an array of data.

So it is clear that whichever mode decision makers assume, many decisions are still wrong. In my view, this has to do with the dynamics of the decision-making process. Even if they think they are being rational, in reality leaders too often make decisions for emotional or even unconscious reasons. Even optimizers can fall into this trap.

CIOs, who are responsible for substantial IT investment dollars, must understand why this happens and how they can use IT management best practices, structures, and tools to improve the decision-making process.

An insightful article that sheds light on unconscious decision-making, “Why Good Leaders Make Bad Decisions,” was published this month in Harvard Business Review.

The article states: “The reality is that important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.”

Here are two reasons cited for poor decision making:

  • Pattern Recognition—“faced with a new situation, we make assumptions based on prior experiences and judgments…but pattern recognition can mislead us. When we’re dealing with seemingly familiar situations, our brains can cause us to think we understand them when we don’t.”
  • Emotional Tagging—“emotional information attaches itself to the thoughts and experiences stored in our memories. This emotional information tells us whether to pay attention to something or not, and it tells us what sort of action we should be contemplating.” But what happens when emotion gets in the way and inhibits us from seeing things clearly?

The authors note some red flags in decision making: the presence of inappropriate self-interest, distorting attachments (bonds that can affect judgment—people, places, or things), and misleading memories.

So what can we do to make things better?

According to the authors of the article, we can “inject fresh experience or analysis…introduce further debate and challenge…impose stronger governance.”

In terms of governance, the CIO certainly comes with a formidable arsenal of IT tools to drive sound decision making. In particular, enterprise architecture provides for structured planning and governance; it is the CIO’s disciplined way to identify a coherent and agreed-to business and technical roadmap and a process to keep everyone on track. It is an important way to create order out of organizational chaos by using information to guide, shape, and influence sound decision making instead of relying on gut, intuition, politics, and subjective management whim—all of which are easily biased and flawed!

In addition to governance, there are technology tools for information sharing and collaboration, knowledge management, business intelligence, and yes, even artificial intelligence. These technologies help to ensure that we have a clear frame of reference for making decisions. We are no longer alone out there making decisions in a vacuum; rather, we can now reach out far and wide to other organizations, leaders, subject matter experts, and stakeholders to get and give information, to analyze, to collaborate, and to take what would otherwise be sporadic and random data points and instead connect the dots leading to a logical decision.

To help safeguard the decision process (and no, it will never be failsafe), I would suggest greater organizational investment in enterprise architecture planning and governance, and in technologies that make heavily biased decisions largely a thing of the past.


Share/Save/Bookmark

January 24, 2009

Vision and The Total CIO

Vision is often the telltale demarcation between a leader and a manager. A manager knows how to climb a ladder, but a leader knows where the ladder needs to go—leaders have the vision to point the organization in the right direction!
Harvard Business Review, January 2009, asks “what does it mean to have vision?”
First of all, HBR states that vision is the “central component in charismatic leadership.” They offer three components of vision, and here are my thoughts on these:
  1. “Sensing opportunities and threats in the environment”—(recognizing future impacts) this entails “foreseeing events” and technologies that will affect the organization and one’s stakeholders. This means not only constantly scanning the environment for potential impacts, but also making the mental connections between internal and external factors, the risks and opportunities they pose, and the probabilities that they will occur.
  2. “Setting strategic direction”—(determining plans to respond) this means identifying the best strategies to get out ahead of emerging threats and opportunities and determining how to mitigate risks or leverage opportunities (for example, to increase mission effectiveness, revenue, profitability, market share, and customer satisfaction).
  3. “Inspiring constituents”—(executing on a way ahead) this involves assessing change readiness, “challenging the status quo” (being a change agent), articulating the need and “new ways of doing things”, and motivating constituents to take necessary actions.
The CIO/CTO is in a unique position to provide vision and leadership in the organization, since he or she can align business needs with the technologies that can transform them.
The IT leader cannot afford to get bogged down in firefighting the day-to-day operations to the exclusion of planning for the future of the enterprise. Firefighting is mandatory when there is a fire, but the fire must eventually be extinguished, and the true IT leader must provide a vision that goes beyond tomorrow’s network availability and application up-time. Sure, the computers and phones need to keep working, but the real value of the IT leader is in providing a vision of the future and not just more status quo.
The challenge for the CIO/CTO is to master the business and the technical, the present and the future—to truly understand the mission and the stakeholders as they are today as well as the various technologies and management best practices available and emerging to modernize and reengineer. Armed with business and technical intelligence and a talent to convert the as-is to the to-be, the IT leader can increase organizational efficiency and effectiveness, help the enterprise better compete in the marketplace and more fully satisfy customers now and in the future.

Share/Save/Bookmark

January 18, 2009

Information: Knowledge or B.S.?

With modern technology and the Internet, there is more information out there than ever before in human history. Some argue there is too much information or that it is too disorganized and hence we have “information overload.”

The fact that information itself has become a problem is validated by the fact that Google is the world’s #1 brand, with a market capitalization of almost $100 billion. As we know, Google’s mission statement is “to organize the world's information and make it universally accessible and useful.”

The key to making information useful is not just organizing it and making it accessible, but also to make sure that it is based on good data—and not the proverbial, “garbage in, garbage out” (GIGO).

There are two types of garbage information:

  1. Incorrect, incomplete, or dated
  2. Misleading /propagandistic or an outright lie

When information is not reliable, it causes confusion rather than bringing clarity. The information can then actually result in worse decision making than if you didn’t have it in the first place. An enterprise architecture built on such information is not only worthless, but harmful—poison to the enterprise.

Generally, in enterprise architecture, we are optimistic about human nature and focus on #1, i.e., we assume that people mean to provide objective and complete data and try to ensure that they can do that. But unfortunately there is a darker side to human nature that we must grapple with, and that is #2.
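Guarding against the first type of garbage can be partly automated. Here is a minimal sketch of the kind of data-quality check an EA repository might run before information feeds a decision; the field names and the one-year staleness threshold are illustrative assumptions, not a real standard:

```python
from datetime import date, timedelta

# Hypothetical record format for an asset entry in an EA repository.
REQUIRED_FIELDS = {"name", "owner", "cost", "last_verified"}
MAX_AGE = timedelta(days=365)  # assumption: data unverified for a year is "dated"

def quality_issues(record: dict, today: date) -> list:
    """Flag 'garbage type #1' problems: incorrect, incomplete, or dated data."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append("incomplete: missing " + ", ".join(sorted(missing)))
    cost = record.get("cost")
    if cost is not None and cost < 0:
        issues.append("incorrect: negative cost")
    verified = record.get("last_verified")
    if verified is not None and today - verified > MAX_AGE:
        issues.append("dated: not verified in over a year")
    return issues
```

A clean record returns an empty list; anything else should be questioned before it drives an investment decision.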

Misinformation, by accident or by intent, is used in organizations all the time to make poor investment decisions. Just think how many non-standardized, non-interoperable, costly tools your organization has bought because someone provided “information” or developed a business case, which “clearly demonstrated” that it was a great investment with a high ROI. Everyone wants their toys!

Wired Magazine, February 2009, talks about disinformation in the information age in “Manufacturing Confusion: How more information leads to less knowledge” (Clive Thompson).

Thompson writes about Robert Proctor, a historian of science from Stanford, who coined the word “Agnotology,” or “the study of culturally constructed ignorance.” Proctor theorizes that “people always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out. But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop hearing about what’s true and what’s not.” Thompson offers as examples:

  1. “Bogus studies by cigarette companies trying to link lung cancer to baldness, viruses—anything but their product.”
  2. Financial firms creating fancy-dancy financial instruments like “credit-default swaps [which] were designed not merely to dilute risk but to dilute knowledge; after they [had] changed hands and been serially securitized, no one knew what they were worth.”

We have all heard the saying that “numbers are fungible” and we are also all cautious about “spin doctors” who appear in the media telling their side of the story rather than the truth.

So it seems that despite the advances wrought by the information revolution, we have some new challenges on our hands: not just incorrect information but people who literally seek to promote its opposite.

So we need to get the facts straight. And that means not only capturing valuable information, but also eliminating bias so that we are not making investment decisions on the basis of B.S.


Share/Save/Bookmark

January 17, 2009

Decentralization, Technology, and Anti-Terror Planning

Even though there hasn’t been a successful terrorist attack against the United States since 9/11, we are all aware that terrorists continue to seek ways to harm us. Of course, we have assets deployed nationally as well as internationally to protect our interests. However, there is always more that can be done. And one thing that immediately comes to my mind is decentralization.

The concept of decentralization is very simple. Rather than concentrating all your vital assets in one place, you spread them out so that if one is destroyed, the others remain functional. The terrorists already do this by operating in dispersed “cells.” Not only that, but we know that very often one “cell” doesn’t know what the other one is doing or even who they are. All this to keep the core organization intact in case one part of it is compromised.

Both the public and private sectors understand this and often strategically decentralize and have backup and recovery plans. However, we still physically concentrate the seat of our federal government in a geographically close space. Given that 9/11 represented an attack on geographically concentrated seats of U.S. financial and government power, is it a good enterprise architecture decision to centralize many or all government headquarters in one single geographic area?

On the one hand the rationale for co-locating federal agencies is clear: The physical proximity promotes information-sharing, collaboration, productivity, a concentrated talent pool, and so on. Further, it is a signal to the world that we are a free and proud nation and will not cower before those who threaten us.

Yet on the other hand, technology has advanced to a point where physical proximity, while a nice-to-have, is no longer an imperative to efficient government. With modern telecommunications and the Internet, far more is possible today than ever before in this area. Furthermore, while we have field offices dispersed throughout the country, perhaps having some headquarters outside DC would bring us closer to the citizens we serve.

On balance, I believe that both centralization and decentralization have their merits, but we need to weigh them more deliberately. To do this, we should explore the potential of decentralization before automatically reverting to centralization.

It seems to me that decentralization carries some urgency given the recent report “World At Risk,” by The Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism—it states that “terrorists are determined to attack us again—with weapons of mass destruction if they can. Osama bin Laden has said that obtaining these weapons is a ‘religious duty’ and is reported to have sought to perpetrate another ‘Hiroshima.’”

Moreover, the report goes on to state that the commission “believes that unless the world community acts decisively and with great urgency, it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013.”

Ominously the report states “we know the threat we face. We know our margin of safety is shrinking, not growing. And we know what we must do to counter the risk.”

Enterprise architecture teaches us to carefully vet and make sound investment decisions. Where should we be investing our federal assets—centrally or decentralized and how much in each category?

Obviously, changing the status quo is not cheap and would be especially difficult in the current global economic reality. But it is still something we should carefully consider.


Share/Save/Bookmark

January 11, 2009

Choice Architecture and Enterprise Architecture

In a free society like America, we are generally all strong believers in our rights and freedoms—like those often cited from the Bill of Rights: speech, press, religion, assembly, bearing arms, due process, and so forth. More broadly, we cherish our right and freedom to choose.

According to a recent article in Harvard Business Review, December 2008, one way that enterprises can better architect their products and services is by “choice architecture.”

Choice architecture is the “design of environments in order to influence decisions.” By “covertly or overtly guiding your choices,” enterprises can “benefit both company and consumer by simplifying decision making, enhancing customer satisfaction, reducing risk, and driving profitable purchases.”

For example, companies set “defaults” for products and services that are “the basic form customers receive unless they take action to change it.”

“At a basic level, defaults can serve as manufacturer recommendations, and more often than not we’re happy with what we get by accepting them. [For example,] when we race through those software installation screens and click ‘next’ to accept the defaults, we’re acknowledging that the manufacturer knows what’s best for us.”

Of course, defaults can be nefarious as well. “They have caused many of us to purchase unwanted extended warranties or to inadvertently subscribe to mailing lists.”

“Given the power of defaults to influence decisions and behaviors both positively and negatively, organizations must consider ethics and strategy in equal measure in designing them.”

Here are some interesting defaults and how they affect decision making:

Mass defaults—“apply to all customers…without taking customers’ individual preferences into account.” This architecture can result in suboptimal offerings and therefore some unhappy customers.

Some mass defaults have hidden options—“the default is presented as a customer’s only choice, although hard-to-find alternatives exist.” For example, computer industry vendors, such as Microsoft, often use hidden options to keep the base product simple, while at the same time having robust functionality available for power users.

Personalized defaults—“reflect individual differences and can be tailored to better meet customers’ needs.” For example, information about an individual’s demography or geography may be taken into account for product/service offerings.

One type of personalized default is adaptive defaults—which “are dynamic: they update themselves based on current (often real-time) decisions that a customer has made.” This is often used in online retailing, where customers make a series of choices.

There are other default types, such as benign, forced, random, persistent, and smart: each limiting or granting greater amounts of choice to decision makers.
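The layering of mass, personalized, and adaptive defaults can be sketched in a few lines of code. This is an illustrative example only; the settings, the use of country as the personalization key, and the use of the last order as the adaptive signal are all assumptions for the sketch:

```python
# Mass defaults: the same starting point for every customer.
MASS_DEFAULTS = {"language": "en", "newsletter": False, "shipping": "standard"}

# Personalized defaults: tailored by something known about the customer (here, geography).
PERSONALIZED = {"DE": {"language": "de"}, "FR": {"language": "fr"}}

def resolve_defaults(country=None, last_order=None):
    """Layer defaults: mass -> personalized -> adaptive."""
    settings = dict(MASS_DEFAULTS)                  # everyone starts with the mass default
    settings.update(PERSONALIZED.get(country, {}))  # personalized override, if we know the customer
    if last_order and "shipping" in last_order:     # adaptive: update from the customer's own recent choice
        settings["shipping"] = last_order["shipping"]
    return settings
```

A first-time anonymous visitor gets the mass defaults; a known German customer sees German; a customer who chose express shipping last time sees express preselected. The customer can still change anything—the architecture only shapes the starting point.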

When we get defaults right (whether we are designing software, business processes, other end-user products, or supplying services), we can help companies and customers to make better, faster, and cheaper decisions, because there is “intelligent” design to guide the decision process. In essence, we are simplifying the decision making process for people, so they can generally get what they want in a logical, sequenced, well-presented way.

Of course, the flip side is that when choice architecture is done poorly, we unnecessarily limit options, drive people to poor decisions, and people are dissatisfied and will seek alternative suppliers and options in the future.

Certainly, we all love to choose what we want, how we want, when we want, and so on. But as all of us have probably experienced at one time or another, when there are too many choices, unconstrained, unguided, and not intelligently presented, consumers and decision makers can be left dazed and confused. That is why we can benefit from choice architecture (when done well) to make decision making simpler, smarter, faster, and generally more user-centric.


Share/Save/Bookmark

January 10, 2009

Why We Make Bad Decisions and Enterprise Architecture

With the largest Ponzi scheme in history ($50 billion!!) still unfolding, and savvy investors caught off guard, everyone is asking how this can happen—how can smart, experienced investors be so gullible and make such big mistakes with their savings?

To me the question is important from an enterprise architecture perspective, because EA seeks to help organizations and people make better decisions and not get roped into decision-making by gut, intuition, politics, or subjective management whim. Are there lessons to be learned from this huge and embarrassing Ponzi scheme that can shed light on how people get suckered in and make the wrong decision?

The Wall Street Journal, 3-4 January, has a fascinating article called “Anatomy of Gullibility,” written by one of the Madoff investors, who lost 30% of his retirement savings in the fund.

Point #1—Poor decision-making is not limited to investing. “Financial scams are just one of the many forms of human gullibility—along with war (the Trojan Horse), politics (WMD in Iraq), relationships (sexual seduction), pathological science [people are tricked into false results]…and medical fads.”

Point #2—Foolish decisions are made despite information to the contrary (i.e., warning signs). “A foolish (or stupid) act is one in which someone goes ahead with a socially or physically risky behavior in spite of danger signs or unresolved questions.”

Point #3—There are at least four contributors to making bad decisions.

  • SITUATION—There has to be an event that requires a choice (i.e., a decision point). “Every gullible act occurs when an individual is presented with a social challenge that he has to solve.” In the enterprise, there are situations (economic, political, social, legal, personal…) that necessitate decision-making every day.
  • COGNITION—Decision-making requires cognition, whether sound or unsound. “Gullibility can be considered a form of stupidity, so it is safe to assume deficiencies in knowledge and/or clear thinking are implicated.” In the organization and personally, we need lots of good useful and usable information to make sound decisions. In the organization, enterprise architecture is a critical framework, process, and repository for the strategic information to aid cognitive decision-making processes.
  • PERSONALITY—People and their decisions are influenced positively or negatively by others (this includes the social aspect—are you following the “in-crowd”?). “The key to survival in a world filled with fakers…or unintended misleaders…is to know when to be trusting and when not to be.” In an organization and in our personal lives, we need to surround ourselves with those who can be trusted to provide sound advice and guidance and genuinely look after our interests.
  • EMOTION—As humans, we are not purely rational beings; we are swayed by feelings (including fear, greed, compassion, love, hate, joy, anger…). “Emotion enters into virtually every gullible act.” While we can never remove emotion from the decision-making process (nor is it even desirable to do so), we do need to identify the emotional aspects and put them into perspective. For example, the enterprise may feel threatened and competitive in the marketplace and feel a need to make a big technological investment; however, those feelings should be tempered by an objective business case including cost-benefit analysis, analysis of alternatives, risk determination, and so forth.

Hopefully, by better understanding the components of decision-making and what makes us as humans gullible and prone to mistakes, we can better structure our decision-making processes to enable more objective, better vetted, far-sighted and sound decisions in the future.


Share/Save/Bookmark

January 4, 2009

The Need for Control and Enterprise Architecture

Human beings have many needs and these have been well documented by prominent psychologists like Abraham Maslow.

At the most basic level, people have physiological needs for food, water, shelter, and so on. Then “higher-level” needs come into play including those for safety, socializing, self-esteem, and finally self-actualization.

The second-order need for safety incorporates the human desire to feel a certain degree of control over one’s life and, from the macro perspective, to see elements of predictability, order, and consistency in the world.

Those of us who believe in G-d generally attribute “real” control over our lives and world events to being in the hands of our creator and sustainer. Nevertheless, we see ourselves having an important role to play in doing our part—it is here that we strive for control over our lives in choosing a path and working hard at it. A lack of any semblance of control over our lives makes us feel like mere puppets, without the ability to affect things positively or negatively. We are lost in inaction and frustration that whatever we do is for naught. So the feeling of being able to influence or impact the course of our lives is critical for us as human beings to feel productive and a meaningful part of the universe that we live in.

How does this impact technology?

Mike Elgan has an interesting article in Computerworld, 2 January 2009, called “Why Products Fail,” in which he postulates that technology “makers don’t understand what users want most: control.”

Of course, technical performance is always important, but users also have a fundamental need to feel in control of the technology they are using. The technology is a tool for humans and should be an extension of our capabilities, rather than something that, like in the movie Terminator, runs rogue and out of the control of the human beings who made it.

When do users feel that the technology is out of their control?

Well, aside from getting the blue screen of death, users feel out of control when they are left waiting for the computer to do something (especially when they don’t know how long it will take) and when the user interface is complicated, not intuitive, and they cannot find or easily understand how to do what they want to do.

Elgan says that there are a number of elements that need to be built into technology to help users feel in control.

Consistency—“predictability…users know what will happen when they do something…it’s a feeling of mastery of control.”

Usability—“give the user control, let them make their own mistakes, then undo the damage if they mess something up” as opposed to the “Microsoft route—burying and hiding controls and features, which protects newbies from their own mistakes, but frustrates the hell out of experienced users.”

Simplicity—“insist on top-to-bottom, inside-and-outside simplicity,” rather than being “the company that hides features, buries controls, and groups features into categories to create the appearance of few options, without actually reducing options.”

Performance/Stability—“everyone hates slow PCs. It’s not the waiting. It’s the fact that the PC has wrenched control from the user during the time that the hourglass is displayed.”
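The usability element above—let users act freely, then reverse the damage—is often implemented with an undo stack, where each action records its own inverse. A minimal sketch (the class and field names are purely illustrative):

```python
class UndoStack:
    """Let users make their own mistakes, then undo the damage."""
    def __init__(self):
        self._undos = []

    def do(self, action, undo_action):
        action()                          # perform the user's change immediately
        self._undos.append(undo_action)   # remember how to reverse it

    def undo(self):
        if self._undos:
            self._undos.pop()()           # reverse the most recent change first

# Example: editing a document-like piece of state.
doc = {"title": "Draft"}
stack = UndoStack()
old_title = doc["title"]
stack.do(lambda: doc.update(title="Final"),
         lambda: doc.update(title=old_title))
```

The design choice matters for the feeling of control: the user's action takes effect right away (no confirmation dialog in the way), yet nothing is irreversible.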

Elgan goes on to say that vendors’ product tests “tend to focus on enabling users to accomplish goals…but how the user feels during the process is more important than anything else.”

As a huge proponent of user-centricity, I agree that people have an inherent need to feel they are in some sort of control in their lives, with the technology they use, and over the direction that things are going in (i.e. enterprise architecture).

However, I would disagree that how the user feels is more important than how well we accomplish goals; mission needs and the ability of the user to execute on these must come first and foremost!

In performing our mission, users must be able to do their jobs, using technology, effectively and efficiently. So really, it’s a balance between meeting mission requirements and considering how users feel in the process.

Technology is amazing. It helps us do things better, faster, and cheaper that we could ever do by ourselves. But we must never forget that technology is an extension of ourselves and as such must always be under our control and direction in the service of a larger goal.


Share/Save/Bookmark

January 3, 2009

Embedded Systems and Enterprise Architecture

Information technology is not just about data centers, desktops, and handheld devices anymore. These days, technology is everywhere—embedded in all sorts of devices from cars and toaster ovens to traffic lights and nuclear power plants. Technology is pervasive in every industry from telecommunications to finance and from healthcare to consumer electronics.

Generally, embedded systems are dedicated to specific tasks, while general-purpose computers can be used for a variety of functions. In either case, the systems are vital for our everyday functioning.

Government Computer News, 15 December 2008, reports that “thanks to the plummeting cost of microprocessors, computing…now happens in automobiles, Global Positioning Systems, identification cards and even outer space.”

The challenge with embedded systems is that they “must operate on limited resources—small processors, tiny memory and low power.”

Rob Oshana, director of engineering at Freescale Semiconductor, says that “with embedded it’s about doing as much as you can with as little as you can.”

What’s new—haven’t we had systems embedded in automobiles for years?

“Although originally designed for interacting with the real world, such systems are increasingly feeding information into larger information systems,” according to Wayne Wolf, chair of embedded computing systems at Georgia Institute of Technology.

According to Wolf, “What we are starting to see now is [the emergence] of what the National Science Foundation is calling cyber-physical systems.”

In other words, embedded systems are used for command and control or information capture in the physical domain (like in a car or medical imaging machine), but then they can also share information over a network with others (think OnStar or remote medical services).

When the information is shared from the car to the OnStar service center, information about an accident can be turned into the dispatch of life-saving responders. Similarly, when scans from a battlefield MRI are shared with medical service providers back in the States, quality medical care can be provided, when necessary, from thousands of miles away.
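This cyber-physical pattern—local capture on a constrained device, then sharing into a larger information system—can be sketched in a few lines. The class names and message format below are illustrative assumptions, not any real telematics protocol:

```python
import json

class EmbeddedSensor:
    """Stands in for a device with 'tiny memory and low power':
    it keeps only its latest reading rather than accumulating data."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.last_reading = None

    def sample(self, value):
        self.last_reading = value  # local capture: overwrite, don't accumulate

    def to_message(self):
        # Share outward as a small, self-describing network message.
        return json.dumps({"id": self.device_id, "value": self.last_reading})

class Collector:
    """Stands in for the larger information system the readings feed into."""
    def __init__(self):
        self.readings = {}

    def receive(self, message):
        data = json.loads(message)
        # Sharing is what turns isolated data points into a connected picture.
        self.readings[data["id"]] = data["value"]
```

Left alone, each sensor holds one isolated value; routed through a collector, many devices' readings become something that can be analyzed together.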

As we should all hopefully have come to learn after 9-11, information hoarding is faux power. But when information is shared, the power is real, because it can be received and used by others, so that its influence is exponential.

Think, for example, of the Mars Rover, which has embedded systems for capturing environmental samples. Left alone, the information is confined to a physical device millions of miles away; but by sharing it back to remote tracking stations here on Earth, the information can be analyzed, shared, and studied, with almost endless possibilities for ongoing learning and growth.

The world has changed from embedded systems to a universe of connected systems.

Think distributed computing and the internet. With distributed computing, we have silos, or separate domains, of information; but by connecting these islands of information (using the internet, for example), we can all harness the vast amounts of information out there, process it within our own lives, and contribute information back to others.

The connection and sharing is our strength.

In the intelligence world, information is often referred to as dots, and it is the connection of the dots that make for viable and actionable intelligence.

As people, we are also proverbially just little dots in this big world of ours.

But as we have learned with social media, we are able to grow as individuals and become more potent and fulfilled human beings by being connected with others—we’ve gone from doing this in our limited physical geographies to a much larger population in cyberspace.

In the end, information resides in people or can be embedded in machines, but connecting that information with other humans and machines is the true power of information technology.



January 2, 2009

It's Time to Stop the Negativity and Move towards Constructive Change

Recently, there was an article in Nextgov (http://techinsider.nextgov.com/2008/12/the_industry_advisory_council.php) about the Industry Advisory Council (IAC), a well-respected industry-government consortium under the auspices of the American Council for Technology, that recommended to the incoming Obama Administration the standup of an innovation agency under the new Chief Technology Officer.
The Government Innovation Agency “would serve as an incubator for new ideas, serve as a central repository for best practices and incorporate an innovation review in every project. As we envision it, the Government Innovation Agency would house Centers of Excellence that would focus on ways to achieve performance breakthroughs and leverage technology to improve decision making, institute good business practices and improve problem solving by government employees.”
While I am a big proponent of innovation and leveraging best practices, what was interesting to me was not so much the proposal from IAC (which, by the way, I am not advocating for), as one of the blistering comments posted anonymously by one of the readers, under the pseudonym “concerned retiree,” which I am posting in its entirety as follows:
“Hmmmmm...."innovation"..."central repository of new ideas"......can this be just empty news release jargon? Just more slow-news day, free-range clichés scampering into the daily news hole?.. .or perhaps this item is simply a small sized news item without the required room to wisely explicate on the real life banalities of the government sponsored “innovation” world...such as: 1)patent problems - is the US going to be soaking up, or handing out patent worthy goodies via the "innovation" czar or czarina? Attention patent attorneys, gravy train a comin’ 2)"leverage technology to improve decision making" – wow! a phrase foretelling a boon-doggle bonanza, especially since it’s wonderfully undefined and thereby, prompting generous seed money to explore it’s vast potential (less just fund it at say, $20-30 million?); 3) the "Government Innovation Agency" - -well now, just how can we integrate this new member to the current herd of government “innovation” cows, including: A) a the Dod labs, like say the Naval Research Lab, or the Dept of Commerce lab that produced the Nobel prize winner (oh, I see now, the proposal would be for “computer” type innovation pursuits – oh, how wise, like the health research lobbyists, we’re now about slicing “innovation” and/or research to match our vendor supplier concerns, how scientific!, how MBAishly wise); B) existing labs in private industry (e.g. former Bell Labs. GM-Detroit area "labs"/innovation groups), C) university labs – currently watered by all manner of Uncle Sam dollars via the great roiling ocean of research grants. 
Finally - given the current Wall Street melt-down and general skepticism for American business nimbleness (this too will pass, of course) -- what's the deal with all the Harvard Grad School-type hyper-ventilation on the bubbling creativity (destructive or otherwise) of American capitalism - -surely the GAO/Commerce/SEC could pop out some stats on the progressive deterioration of expenditures -- capital and otherwise--on "innovation". Or perhaps the sponsors of the "Government Innovation Agency" - will be happy to explain at the authorization hearing - how all the dough to date spent to date on development of the green automobile has yet to put a consumer friendly one on the road from a US corp -- a fact that argues either for a vast expansion of the GIA, or, the merciful euthenasiaing of this dotty idea. See you all at the authorizing hearing?”
What’s so disheartening about this retiree’s comments?
It’s not that there is not some truth intermixed with the blistering comments; it is the sheer magnitude of the cynicism, bitterness, negativity, resistance to “new” (or at times reformulated) ideas, and “been-there-done-that” attitude that unfairly gives a bad name to other government workers who are smart, innovative, positive, and hard-charging, and who want to continuously improve the effectiveness and efficiency of government for the benefit of the nation and to serve our citizens.
Sure, we need to listen and learn from those who preceded us--those with age, experience, expertise, and certainly vast amounts of wisdom. And yes, those of us who do not learn from the mistakes of the past are doomed to repeat them. So we must be mindful to be respectful, collaborative, inclusive, and careful in vetting new ideas and changes.
However, those that have served before or have been serving a long time now should also give hope, innovation, change (not for change’s sake, but based on genuine learning and growth) and continuous improvement a chance.
It is always easier to be a naysayer, a doomsday prognosticator, and to tear down and destroy. It is much, much harder to be positive, hopeful, and constructive—to seek to build a brighter future rather than rest on the laurels of the past.
Unfortunately, many people have been hurt by past mistakes, false leaders, broken promises, and dashed hopes, so they become resistant to change, in addition to, of course, fearing change.
Those of us in information technology and other fields (like science, engineering, product design and development, and so many others—in fact, all of us can make a difference) need to stay strong amidst the harsh rhetoric of negativity and pessimism, and instead continue to strive for a better tomorrow.


January 1, 2009

Scrapping the Landlines and The Total CIO

It’s long overdue. It’s time to get rid of the landline telephones in the office (and also in our homes, if you still have them). Wireless phones are more than capable of doing the job, and just think: you probably already have at least one for business and one for personal use—so redundancy is built in!
Getting rid of the office phones will save the enterprise money, reduce a maintenance burden (like for office moves) and remove some extra telejunk clutter from your desk. More room for the wireless handheld charger. :-)
USA Today, 20 December 2008 reports that according to Forrester Research “Estimated 25% of businesses are phasing out desk phones in effort to save more money.”
Additionally, “more than 8% of employees nationwide who travel frequently have only cellphones.”
Robert Rosenberg, president of The Insight Research Corp., stated: “U.S. businesses are lagging behind Europe and Asia in going wireless, because major cellular carriers…are also earning money by providing landlines to businesses—an $81.4 billion industry in 2008.”
“In Washington, D.C., the City Administrator’s office launched a pilot program in October in which 30 employees with government-issued cellphones gave up their desk phones, said deputy mayor Dan Tangherlini. Because the government has issued more than 11,000 cellphones to employees, the program could multiply into significant savings.”
A study by the National Center for Health Statistics between January and June found that more than 16% of families “have substituted a wireless telephone for a land line.”
So what’s stopping organizations from getting rid of the traditional telephones?
The usual culprits: resistance to change, fear of making a mistake, not wanting to give up something we already have—“old habits die hard,” and people don’t like to let go of their little treasures—even a bulky old desk phone (with the annoying cord that keeps getting twisted).
Things are near and dear to people, and they clutch on to them with their last breath—in their personal lives (think of all the attics, garages, and basements full of items people can’t let go of—yard sale anyone?) and in their professional lives (things equate to stature, tenure, turf—a bigger rice bowl sound familiar?).
Usually the best way to get rid of something is to replace it with something better, so the Total CIO needs to tie the rollout of new handheld devices with people turning in their old devices--land lines, pagers, and even older cell phones (the added benefit is more room and less weight pulling on your belt).
By the way, we need to do the same thing with the new application systems that we roll out. When a new one is fully operational, then the old systems need to be retired. Now how often does that typically happen?
Folks, times are tough, global competition is not going away, and we are wasting too much money and time maintaining legacy stuff we no longer need. We need to let go of the old and progress with the new and improved.


December 31, 2008

IT Planning, Governance and The Total CIO

See new article in Architecture and Governance Magazine on: IT Planning, Governance and the CIO: Why a Structured Approach Is Critical to Long-Term Success

(http://www.architectureandgovernance.com/content/it-planning-governance-and-cio-why-structured-approach-critical-long-term-success)

Here's an excerpt:

"IT planning and governance undoubtedly runs counter to the intuitive response—to fight fire with a hose on the spot. Yet dealing with crises as they occur and avoiding larger structures and processes for managing IT issues is ultimately ineffective. The only way to really put out a fire is to find out where the fire is coming from and douse it from there, and further to establish a fire department to rapidly respond to future outbreaks."



Comments from OMB's Chief Architect: Kshemendra Paul

Recently, Kshemendra Paul, Chief Enterprise Architect at the President's Office of Management and Budget (OMB), made the following critical comments to me about business cases and pilots and incorporating these in the Systems Development Life Cycle:

"I was online and came across your site -
http://usercentricea.blogspot.com/2007/08/system-development-life-cycle-and.html.

I had two comments I wanted to share. First, I would recommend you highlight a business case step, a formal decision to move out of select/conceptual planning and into control. While this is implied, it is such a crucial step and we don't do it well - meaning that we don't force programs to work through all of the kinks in terms of putting forward a real business case (tied to strong performance architecture).

Also, this is a step that is inevitably cross boundary - either on the mission side and for sure on the funding side.

Second, I'd like to see more emphasis on smaller scale rollout or piloting. The goal of which is to prove the original business case in a limited setting. Nothing goes as planned, so another objective is to have real world data to refine the over all plan."

I completely agree with Kshemendra on the need to develop business cases and do them well for all new initiatives.

All too often, organizations, in their zeal to get new technologies out, either skip this step altogether or do it as a "paper" (i.e., compliance) exercise: symbolic, but wholly without intent to do due diligence, and thus without any genuine value.

Therefore, whenever we plan for new IT, we must ensure strategic business alignment, return on investment, and risk mitigation by developing and properly vetting business cases through the Enterprise Architecture and Investment Review Boards.

It's great to want to move quickly, get ahead of the pack, and gain competitive advantage by deploying new technologies quickly, but we risk doing more harm than good, by acting rashly and without adequately thinking through and documenting the proposed investment, and vetting it with the breadth and depth of organizational leadership and subject matter experts.

Secondly, as to Kshemendra's point on doing pilots to prove out the business case: this is an important part of proving new ideas and technologies before fully committing and investing. It's a critical element in protecting the enterprise from errant IT investments in unproven technologies, immature business plans, and an inability to execute.

Pilots should be incorporated along with concept of operations, proof of concepts, and prototypes in rolling out new IT. (See my blog http://usercentricea.blogspot.com/2007/08/conops-proof-of-concepts-prototypes.html)

With both business cases and pilots for new IT projects, it's a clear case of "look before you leap." This is good business and good IT!


December 22, 2008

MSNBC on the ATF and Enterprise Architecture



December 21, 2008

Engineering Employee Productivity and Enterprise Architecture

Ever since (and realistically, way before) Frederick Taylor’s time-and-motion studies, employers have looked to “engineer” the way employees do their work to make them more efficient and effective.
The Wall Street Journal, 17 November 2008, reports that “Stores Count Seconds to Trim Labor Costs.”
Companies “break down tasks such as working a cash register into quantifiable units and devise standard times to complete them, called ‘engineered labor standards.’ Then it writes software to help clients keep watch over employees.”
So for example, in some retailers, “A clock starts ticking the instant he scans a customer’s first item, and it doesn’t shut off until his register spits out a receipt.”
Employees who don’t meet performance standards (e.g., they fall below 95%) get called in for counseling, training, and “various alternatives” (i.e., firing).
The result is “everybody is under stress.”
So, is this workforce optimization or micromanagement? Is this helping employees learn to do a better job, or is this just scare tactics getting them under the management whip?
Some employers are claiming improved productivity and cost savings:
One retailer, for example, claims saving $15,000 in labor costs across 34 stores for every one second shaved from the checkout process.
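As a rough illustration of how such a number might be derived, here is a hypothetical back-of-the-envelope model. All the inputs below are my own illustrative assumptions, not figures from the article; only the "$15,000 across 34 stores per second shaved" claim comes from the source.

```python
# Hypothetical model of the claim that shaving one second per checkout
# saves roughly $15,000/year in labor across 34 stores.
# Every input here is an assumed, illustrative value.

STORES = 34
TRANSACTIONS_PER_STORE_PER_DAY = 435   # assumed
DAYS_PER_YEAR = 365
CASHIER_WAGE_PER_HOUR = 10.00          # assumed, USD
SECONDS_SAVED_PER_TRANSACTION = 1

wage_per_second = CASHIER_WAGE_PER_HOUR / 3600
annual_savings = (STORES * TRANSACTIONS_PER_STORE_PER_DAY * DAYS_PER_YEAR
                  * SECONDS_SAVED_PER_TRANSACTION * wage_per_second)
print(f"Estimated annual labor savings: ${annual_savings:,.0f}")
```

With these assumed volumes and wages, the model lands in the ballpark of the article's figure, which shows how thin the per-transaction margin is that the whole monitoring apparatus is built on.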
But others are finding that customer service and employee morale is suffering:
Check clerks are not as friendly. They don’t chat with customers during checkout. Cashiers “avoid eye contact with shoppers and generally hurry along older or infirm customers who might take longer to unload carts and count money.”
Additionally, as another cashier put it, “when you’re afraid you’re going to lose your job, you make more mistakes.”
Other employees are gaming the system to circumvent the rigid performance measures—for example, improving their times by hitting the suspend button to stop the clock more often than they are supposed to (it is meant only for use when remotely scanning bulky merchandise).
The other problem with the engineered labor standards is that they often don’t take into account the “x factors”—the things that can go wrong that adversely affect your performance times. Some examples: customers who don’t have enough cash or those “digging through a purse,” credit cards that don’t swipe, “an item with no price or item number,” customers who forget something and go back or those that ask for an item located at the other end of the store.
It seems obvious that while we need to measure performance, we need to make sure that we measure the right things and in the right way.
What good is measuring pure speed of transactions to “boost efficiency” if at the same time we
  1. alienate our customers with poor service or
  2. harm employee morale, integrity and retention with exacting, inflexible, and onerous measurements?
Like all sound enterprise architecture efforts, we need to make sure that such measures are reasonable, balanced, and take into account not just technology, but also people and process.
In this case, we need to ensure the process is customer-service driven and the employees are treated fairly and humanely. Without these, the productivity savings of engineered labor standards will be more than offset over time by the negative effects on our customers and employees.


December 19, 2008

A Productivity Boost and Enterprise Architecture

For years, the IT industry has been putting out more and more devices to help us communicate, access and analyze information, conduct eCommerce, be entertained, and generally increase productivity. To do these things, we have desktops, landlines, laptops, tablets, handhelds, cellphones, pagers, and all the application systems, social media, and internet services to run on them. And for the organization, the CIO has been central to evaluating, planning, implementing, and maintaining these technologies.

You would think with all these tools for managing information, our lives would be simpler, more straightforward, and ever more carefree. Isn’t that what “tools” are supposed to do for people?

Well, think about your own life. Is your life less hectic with all the IT tools? Do you have more time to focus on what you’re working on? How’s your work-life balance?

If you are like most people these days, the answer is likely that you are more frantic and are trying to do more things at the same time than ever before—almost driving yourself crazy, at times, to keep up.

The Wall Street Journal, 15 December 2008 had a book review by Christopher Chabris on “The Overflowing Brain.”

Here’s an excerpt of the review that I believe tells the story well:

“Take a look at your computer screen and the surface of your desk. A lot is going on. Right now, I count 10 running programs with 13 windows on my iMac, plus seven notes or documents on my computer desk and innumerable paper piles, folders, and books on my ‘main’ desk, which serves primarily as overflow space. My 13 computer windows include four for my internet browsers, itself showing tabs for 15 separate Web pages. The task in progress, in addition to writing this review…include monitoring three email accounts, keeping up with my Facebook friends, figuring out how to wire money into one of my bank accounts, digging into several scientific articles about genes, checking the weather in the city I will be visiting next week and reading various blogs, some which are actually work-related. And this is at home. At the office, my efforts to juggle these tasks would be further burdened by meetings to attend, conference calls to join, classes to teach, and co-workers to see. And there is still the telephone call or two—one of my three phone lines (home, office, mobile).”

Does this ring a bell for anybody? Dare I say that this is the reality for more and more of us these days.

So has IT (and the CIOs of our time) succeeded in giving us the technologies we need and want?

Well let’s look at what we said earlier were goals of IT—communication, information, commerce, entertainment, and productivity. Yep, we sure have all of these—big time!

Great, let’s just stop here at the outputs of technology and claim victory for our CIOs and the society we’ve created for ourselves.

But wait, what about the simpler, straightforward, and carefree parts—the anticipated outcomes, for many, of IT—shouldn’t we all be breathing a little easier with all the technology tools and new capabilities we have?

Ah, here’s the disconnect: somehow the desired outputs are NOT leading to the outcomes many people had hoped for.

One possible answer is that we really don’t want simple and carefree. Rather, in line with the ‘alpha male theory,’ we are high achievers, competitive, and some would even say greedy. And all the IT in the world just pours oil on our fire for doing and wanting more, more, more.

As many of us take some time off for the holidays and put our feet up for a week or two, we realize how much we look forward to some peace and quiet from all the helpful technology that surrounds us every day. But at the end of a few weeks, most of us are ready to go back to work and go crazy again with all our technology-driven productivity.

On a more serious note, from an enterprise architecture perspective, one has to ask if all this running around is leading to a strategic, desirable result in our personal and professional lives, or is it just business for business's sake, like technology for technology's sake.



December 17, 2008

Chief Interaction Officer -- Architecting Social Networking

See my CIO Magazine article: Encourage Social Networking, Without Discouraging In-Person Meetings

Appreciate any comments.

Nanobots—Mobility Solutions Save Organizations Money

Times are tough. The economy is in tatters. People have lost confidence, savings, jobs, and in many cases, even their homes. So, fear is pervasive among consumers, and they are cutting back on their spending.

And in an economy where consumer spending drives 70% of the total, organizations are cutting back to save money too. One thing they are doing is cutting facility costs and encouraging alternate work arrangements for staff, such as teleworking, hoteling, and so forth.

The CIO is a major enabler for these alternate work arrangements and therefore for saving organizations money.

In teleworking, telecommunications is used to link workers to the office, rather than having them actually commute to work every day; and in hoteling, workers have unassigned, flexible seating in the office, so there does not need to be separate office space allocated for every worker.

In these non-conventional work arrangements, IT makes for a far more mobile and agile workforce, and this enables organizations to save significant money on costly fixed office space.

According to Area Development Online “as much as 50 percent of corporate office space goes unused at any given time, yet companies continue to pay for 100 percent of it. Yesterday’s ‘everyone in one place’ approach to workspace has become outdated in a business world where some types of work can be more about what you do than where you go.”

Moreover, “With laptops, cell phones, mobile e-mail devices, and high-speed Internet available on every corner — and the 70 million-strong Millennial generation entering the work force — some workers have little need to spend time at a desk in a corporate office. In fact, research group IDC expects 75 percent of the U.S. work force to be mobile by 2011.”
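To see what those vacancy figures could mean to the bottom line, here is a hypothetical sketch of the real-estate savings hoteling might yield for a mid-sized enterprise. The per-seat cost, headcount, and reclaim rate below are illustrative assumptions of mine; only the 50% unused-space figure comes from the quoted source.

```python
# Hypothetical hoteling savings model. The 50% unused-space figure is
# from the article; all other inputs are illustrative assumptions.

ANNUAL_COST_PER_SEAT = 12_000   # assumed USD per dedicated workspace per year
EMPLOYEES = 1_000               # assumed headcount
UNUSED_FRACTION = 0.50          # from the article: half of space goes unused
RECLAIM_RATE = 0.60             # assumed share of unused space actually given up

annual_savings = EMPLOYEES * ANNUAL_COST_PER_SEAT * UNUSED_FRACTION * RECLAIM_RATE
print(f"Potential annual real-estate savings: ${annual_savings:,.0f}")
```

Even under these conservative assumptions, the sketch suggests seven-figure annual savings for a thousand-person organization, which is why the CIO's mobility enablement matters so much here.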

The Wall Street Journal, 15 December 2008 reports that “There’s a new class of workers out there: Nearly Autonomous, Not in the Office, doing Business in their Own Time Staff. Or nanobots for short…Managed correctly, nanobots can be a huge asset to their company.”

Here’s how to enable nanobot workers:

  1. Robust technology—give them access to the technologies they need to be successful, to stay connected and be productive. Remember, the technology has to provide telecommunications that overcome both the geographical distance as well as the psychological distance of not having social contact and face-to-face communication with management, peers, and even staff.
  2. Clear performance expectations—It is important to set clear performance expectations, since the nanobot is not planted in a cube or office under watchful management eyes. Without clear expectations, nanobots may either underwork or overwork themselves. Generally, “nanobots thrive on their driven natures and the personal freedom with which they are entrusted…while nanobots relish the independence that mobile technologies give them, they are painfully aware that their devices are both freeing and binding. In some sense, they set their own hours because of their mobile devices; in another sense, they can never get away from the business which follows them everywhere.”
  3. Different strokes for different folks—recognize which employees are good candidates for each type of work arrangement. Some can be very successful working remotely, while others thrive in the office setting. Either way, enabling workers with a variety of mobility solutions will make for a happier and more productive workforce and a more cost efficient enterprise.


December 13, 2008

Coming Soon: A Federal Chief Technology Officer (CTO)

What is the role of the Federal Chief Technology Officer (CTO) that we are anxiously awaiting to be announced soon in the President-elect Obama administration?

There are some interesting insights in Federal Computer Week, 8 December 2008.

CHANGE: Norman Lorenz, the first CTO for OMB, sees the role of the Federal CTO as primarily a change agent, so much so that the title should be the federal chief transformation officer.

TEAMWORK: Jim Flyzik, the Former CIO of the U.S. Department of the Treasury, and one of my former bosses, sees the CTO role as one who inspires teamwork across the federal IT community, and who can adeptly use the Federal CIO Council and other CXO councils to get things done—in managing the large, complex government IT complex.

VISION: Kim Nelson, the former CIO of the Environmental Protection Agency, says it’s all about vision to ensure that agencies “have the right infrastructure, policies, and services for the 21st century and ensure they use best-in-class technologies.”

ARCHITECTURE: French Caldwell, a VP at Gartner, says the CTO must “try to put some cohesion and common [enterprise] architecture around the IT investment of federal agencies.”

SECURITY: Dan Tynan, of the “Culture Clash” blog at Computerworld’s website, said the federal CTO should create a more secure IT infrastructure for government.

CITIZENS: Don Tapscott, author of “Wikinomics: How Mass Collaboration Changes Everything,” seems to focus on the citizens in terms of ensuring access to information and services, conditions for a vibrant technology industry, and generally fostering collaboration and transformation of government and democracy.

This is great stuff, and I agree with all of these.

I would add the following four:

INNOVATION: The Federal CTO should promote and inspire innovation for better, faster, and cheaper ways of conducting government business and serving the citizens of this country.

STRATEGY: The Federal CTO should develop a strategy with clear IT goals and objectives for the federal government IT community to unite around, manage to, and measure performance against. We need to all be working off the same sheet of music, and it should acknowledge both commonalities across government as well as unique mission needs.

STRUCTURE: The Federal CTO should provide efficient policies and processes that will enable structured and sound ways for agencies to make IT investments, prioritize projects, and promote enterprise and common solutions.

OUTREACH: The Federal CTO is the face of Federal IT not only to citizens, but also to state, local, and tribal governments, international forums, and the business community at large. He/she should identify stakeholder requirements for federal IT and align them to the best technical solutions that are not bound by geographical, political, social, economic, or other boundaries.

The Federal CTO is a position of immense opportunity, with enormous potential to drive superior mission performance using management and IT best practices and advanced and emerging technologies, breaking down agency and functional silos in order to build a truly citizen-centric, technology-enabled government in service to citizen and country.



December 7, 2008

GM and Enterprise Architecture

Where has enterprise architecture gone wrong at General Motors?

THEN: In 1954, GM’s U.S. auto market share reached 54%; in 1979, its number of worldwide employees hit 853,000; and in 1984, earnings peaked at $5.4 billion.

NOW: In 2007, U.S. market share stands at 23.7% and GM loses $38.7 billion; by 2008 employment is down to 266,000.

(Associated Press, “A Brief History of General Motors Corp.,” September 14, 2008)

Fortune Magazine, 8 December 2008, reports that “It was a great American Company when I started covering it three decades ago. But by clinging to the attributes that made it an icon, General Motors drove itself to ruin.”

GM clung to its past and “drove itself to ruin”—they weren’t nimble (maybe due to their size, but mostly due to their culture). In the end, GM was not able to architect a way ahead—they were unable to change from what they were (their baseline) to what they needed to be (their target).

“But in working for the largest company in the industry for so long, they became comfortable, insular, self-referential, and too wedded to the status quo—traits that persist even now, when GM is on the precipice.”

The result of their stasis—their inability to plan for change and implement change—“GM has been losing market share in the U.S. since the 1960’s destroying capital for years, and returning no share price appreciation to investors.”

GM’s share price is now the lowest in 58 years.

When the CEO of GM, Rick Wagoner, is asked why GM isn’t more like Toyota (the most successful auto company in the world, with a market cap of $103.6 billion to GM’s $1.8 billion), his reply?

“We’re playing our own game—taking advantage of our own unique heritage and strengths.”

Yes, GM is playing their own game and living in their own unique heritage. “Heritage” instead of vision. “Playing their own game” instead of effectively competing in the global market—all the opposite of enterprise architecture!!

GM has been asphyxiated by their stubbornness, arrogance, resistance to change and finally their high costs.

“GM’s high fixed costs…no cap on cost-of-living adjustments to wages, full retirement after 30 years regardless of age, and increases in already lavish health benefits. Detroiters referred to the company as ‘Generous Motors.’ The cost of these benefits would bedevil GM for the next 35 years.”

GM’s cost structure has been over the top, and even though they have been in “perpetual turnaround,” they have been unable to change their profligate business model.

Too many models, too many look-alike cars, and too high a cost structure—GM “has lost more than $72 billion in the past four years.” And the result? Are heads rolling?

The article says no—“you can count on one hand the number of executives who have been reassigned or lost their jobs.”

“At GM, conformity was everything, and rebellion was frowned on.” Obviously, this is not a successful enterprise architecture strategy.

Frankly, I cannot understand GM’s intransigence about creating a true vision and leading. Or, if they couldn’t innovate, why not at least imitate their Japanese market-leading brethren?

It reminds me of the story of the Exodus from Egypt in the Bible. Moses goes to Pharaoh time and again and implores him to “let my people go,” and even after G-d smites the Egyptians with plague after plague, he remains unmovable.

Well we know how that story ended up for the Egyptians and it doesn’t bode well for GM.

The bottom line: if the enterprise isn’t open to genuine growth and change, nothing can save them from themselves.



December 6, 2008

User-Centric Enterprise Architecture on YouTube

Click here for YouTube post on User-centric Enterprise Architecture.

http://www.youtube.com/watch?v=SkRgh8mjbpM

Enjoy and I appreciate your (constructive) comments.