January 24, 2009

Vision and The Total CIO

Vision is often the telltale demarcation between a leader and a manager. A manager knows how to climb a ladder, but a leader knows where the ladder needs to go—leaders have the vision to point the organization in the right direction!
Harvard Business Review, January 2009, asks “what does it mean to have vision?”
First of all, HBR states that vision is the “central component in charismatic leadership.” They offer three components of vision, and here are my thoughts on these:
  1. “Sensing opportunities and threats in the environment”—(recognizing future impacts) this entails “foreseeing events” and technologies that will affect the organization and one’s stakeholders. This means not only constantly scanning the environment for potential impacts, but also making the mental connections between internal and external factors, the risks and opportunities they pose, and the probabilities that they will occur.
  2. “Setting strategic direction”—(determining plans to respond) this means identifying the best strategies to get out ahead of emerging threats and opportunities and determining how to mitigate risks or leverage opportunities (for example, to increase mission effectiveness, revenue, profitability, market share, and customer satisfaction).
  3. “Inspiring constituents”—(executing on a way ahead) this involves assessing change readiness, “challenging the status quo” (being a change agent), articulating the need and “new ways of doing things”, and motivating constituents to take necessary actions.
The CIO/CTO is in a unique position to provide vision and leadership in the organization, since they can align business needs with the technologies that can transform the enterprise.
The IT leader cannot afford to get bogged down in firefighting the day-to-day operations to the exclusion of planning for the future of the enterprise. Firefighting is mandatory when there is a fire, but the fire must eventually be extinguished, and the true IT leader must provide a vision that goes beyond tomorrow’s network availability and application up-time. Sure, the computers and phones need to keep working, but the real value of the IT leader is in providing a vision of the future and not just more status quo.
The challenge for the CIO/CTO is to master the business and the technical, the present and the future—to truly understand the mission and the stakeholders as they are today as well as the various technologies and management best practices available and emerging to modernize and reengineer. Armed with business and technical intelligence and a talent to convert the as-is to the to-be, the IT leader can increase organizational efficiency and effectiveness, help the enterprise better compete in the marketplace and more fully satisfy customers now and in the future.


January 18, 2009

Information: Knowledge or B.S.?

With modern technology and the Internet, there is more information out there than ever before in human history. Some argue there is too much information or that it is too disorganized and hence we have “information overload.”

The fact that information itself has become a problem is validated by the fact that Google is the world’s #1 brand with a market capitalization of almost $100 billion. As we know, the mission statement of Google is “to organize the world's information and make it universally accessible and useful.”

The key to making information useful is not just organizing it and making it accessible, but also to make sure that it is based on good data—and not the proverbial, “garbage in, garbage out” (GIGO).

There are two types of garbage information:

  1. Incorrect, incomplete, or dated
  2. Misleading /propagandistic or an outright lie

When information is not reliable, it causes confusion rather than bringing clarity. The information can then actually result in worse decision making than if you didn’t have it in the first place. An enterprise architecture built on such information is not only worthless, but actually harmful, even poisonous, to the enterprise.

Generally, in enterprise architecture, we are optimistic about human nature and focus on #1, i.e., we assume that people mean to provide objective and complete data and try to ensure that they can do that. But unfortunately there is a darker side to human nature that we must grapple with, and that is #2.

Misinformation, whether by accident or by intent, leads organizations to make poor investment decisions all the time. Just think how many non-standardized, non-interoperable, costly tools your organization has bought because someone provided “information” or developed a business case that “clearly demonstrated” it was a great investment with a high ROI. Everyone wants their toys!

Wired Magazine, February 2009, talks about disinformation in the information age in “Manufacturing Confusion: How more information leads to less knowledge” (Clive Thompson).

Thompson writes about Robert Proctor, a historian of science from Stanford, who coined the word “Agnotology,” or “the study of culturally constructed ignorance.” Proctor theorizes that “people always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out. But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop hearing about what’s true and what’s not.” Thompson offers as examples:

  1. “Bogus studies by cigarette companies trying to link lung cancer to baldness, viruses—anything but their product.”
  2. Financial firms creating fancy-dancy financial instruments like “credit-default swaps [which] were designed not merely to dilute risk but to dilute knowledge; after they changed hands and been serially securitized, no one knew what they were worth.”

We have all heard the saying that “numbers are fungible” and we are also all cautious about “spin doctors” who appear in the media telling their side of the story rather than the truth.

So it seems that despite the advances wrought by the information revolution, we have some new challenges on our hands: not just incorrect information but people who literally seek to promote its opposite.

So we need to get the facts straight. And that means not only capturing valuable information, but also eliminating bias so that we are not making investment decisions on the basis of B.S.



January 17, 2009

Decentralization, Technology, and Anti-Terror Planning

Given that 9/11 represented an attack on geographically concentrated seats of U.S. financial and government power, is it a good enterprise architecture decision to centralize many or all government headquarters in one single geographic area?

Read about Decentralization, Technology, and Anti-Terror Planning in The Total CIO.



Decentralization, Technology, and Anti-Terror Planning

Even though there hasn’t been a successful terrorist attack against the United States since 9/11, we are all aware that terrorists continue to seek ways to harm us. Of course, we have assets deployed nationally as well as internationally to protect our interests. However, there is always more that can be done. And one thing that immediately comes to my mind is decentralization.

The concept of decentralization is very simple. Rather than concentrating all your vital assets in one place, you spread them out so that if one is destroyed, the others remain functional. The terrorists already do this by operating in dispersed “cells.” Not only that, but we know that very often one “cell” doesn’t know what the other one is doing or even who they are. All this to keep the core organization intact in case one part of it is compromised.

Both the public and private sectors understand this and often strategically decentralize and have backup and recovery plans. However, we still physically concentrate the seat of our federal government in a geographically close space. Given that 9/11 represented an attack on geographically concentrated seats of U.S. financial and government power, is it a good enterprise architecture decision to centralize many or all government headquarters in one single geographic area?

On the one hand the rationale for co-locating federal agencies is clear: The physical proximity promotes information-sharing, collaboration, productivity, a concentrated talent pool, and so on. Further, it is a signal to the world that we are a free and proud nation and will not cower before those who threaten us.

Yet on the other hand, technology has advanced to a point where physical proximity, while a nice-to-have, is no longer an imperative to efficient government. With modern telecommunications and the Internet, far more is possible today than ever before in this area. Furthermore, while we have field offices dispersed throughout the country, perhaps having some headquarters outside DC would bring us closer to the citizens we serve.

On balance, I believe that both centralization and decentralization have their merits, but that we need to more fully balance these. To do this, we should explore the potential of decentralization before automatically defaulting to centralization.

It seems to me that decentralization carries some urgency given the recent report “World At Risk,” by The Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism—it states that “terrorists are determined to attack us again—with weapons of mass destruction if they can. Osama bin Laden has said that obtaining these weapons is a ‘religious duty’ and is reported to have sought to perpetuate another ‘Hiroshima.’”

Moreover, the report goes on to state that the commission “believes that unless the world community acts decisively and with great urgency, it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013.”

Ominously, the report states, “we know the threat we face. We know our margin of safety is shrinking, not growing. And we know what we must do to counter the risk.”

Enterprise architecture teaches us to carefully vet and make sound investment decisions. Where should we be investing our federal assets: centrally or decentralized, and how much in each category?

Obviously, changing the status quo is not cheap and would be especially difficult in the current global economic reality. But it is still something we should carefully consider.



January 11, 2009

Choice Architecture and Enterprise Architecture

In a free society like America, we are generally all strong believers in our rights and freedoms—like those often cited from the Bill of Rights: speech, press, religion, assembly, bearing arms, due process, and so forth. More broadly, we cherish our right and freedom to choose.

According to a recent article in Harvard Business Review, December 2008, one way that enterprises can better architect their products and services is by “choice architecture.”

Choice Architecture is the “design of environments in order to influence decisions.” By “covertly or overtly guiding your choices,” enterprises “benefit both company and consumer by simplifying decision making, enhancing customer satisfaction, reducing risk, and driving profitable purchases.”

For example, companies set “defaults” for products and services that are “the basic form customers receive unless they take action to change it.”

“At a basic level, defaults can serve as manufacturer recommendations, and more often than not we’re happy with what we get by accepting them. [For example,] when we race through those software installation screens and click ‘next’ to accept the defaults, we’re acknowledging that the manufacturer knows what’s best for us.”

Of course, defaults can be nefarious as well. “They have caused many of us to purchase unwanted extended warranties or to inadvertently subscribe to mailing lists.”

“Given the power of defaults to influence decisions and behaviors both positively and negatively, organizations must consider ethics and strategy in equal measure in designing them.”

Here are some interesting defaults and how they affect decision making:

Mass defaults—“apply to all customers…without taking customers’ individual preferences into account.” This architecture can result in suboptimal offerings and therefore some unhappy customers.

Some mass defaults have hidden options—“the default is presented as a customer’s only choice, although hard-to-find alternatives exist.” For example, computer industry vendors, such as Microsoft, often use hidden options to keep the base product simple, while at the same time having robust functionality available for power users.

Personalized defaults—“reflect individual differences and can be tailored to better meet customers’ needs.” For example, information about an individual’s demography or geography may be taken into account for product/service offerings.

One type of personalized default is adaptive defaults—which “are dynamic: they update themselves based on current (often real-time) decisions that a customer has made.” This is often used in online retailing, where customers make a series of choices.

There are other default types such as benign, forced, random, persistent, and smart, each limiting or granting greater amounts of choice to decision makers.
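
To make these default patterns a bit more concrete, here is a minimal sketch in code; the product, option names, locales, and values are hypothetical illustrations, not anything from the HBR article:

```python
from dataclasses import dataclass

@dataclass
class InstallOptions:
    # Mass default: every customer gets this baseline unless they act to change it.
    install_dir: str = "C:/Program Files/ExampleApp"
    # Hidden option: available to power users, but not surfaced in the install wizard.
    enable_advanced_logging: bool = False

def personalized_default(locale: str) -> InstallOptions:
    """Personalized default: tailor the baseline to what is known about the customer."""
    opts = InstallOptions()
    if locale.startswith("de"):
        opts.install_dir = "C:/Programme/ExampleApp"
    return opts

def adaptive_default(previous_choices: list) -> str:
    """Adaptive default: update the suggestion from the customer's most recent decision."""
    return previous_choices[-1] if previous_choices else "standard shipping"

print(personalized_default("de-DE").install_dir)           # -> C:/Programme/ExampleApp
print(adaptive_default(["standard shipping", "express"]))  # -> express
```

The point is simply that each type of default trades off convenience against how much of the customer's own preference it takes into account.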

When we get defaults right (whether we are designing software, business processes, other end-user products, or supplying services), we can help companies and customers to make better, faster, and cheaper decisions, because there is “intelligent” design to guide the decision process. In essence, we are simplifying the decision making process for people, so they can generally get what they want in a logical, sequenced, well-presented way.

Of course, the flip side is that when choice architecture is done poorly, we unnecessarily limit options, drive people to poor decisions, and leave them dissatisfied, so they will seek alternative suppliers and options in the future.

Certainly, we all love to choose what we want, how we want, when we want, and so on. But as all of us have probably experienced at one time or another, when you have too many choices that are unconstrained, unguided, and not intelligently presented, consumers/decision makers can be left dazed and confused. That is why we can benefit from choice architecture (when done well) to help make decision making simpler, smarter, faster, and generally more user-centric.



January 10, 2009

Why We Make Bad Decisions and Enterprise Architecture

With the largest Ponzi scheme in history ($50 billion!!) still unfolding, and savvy investors caught off guard, everyone is asking how can this happen—how can smart, experienced investors be so gullible and make such big mistakes with their savings?

To me the question is important from an enterprise architecture perspective, because EA seeks to help organizations and people make better decisions and not get roped into decision-making by gut, intuition, politics, or subjective management whim. Are there lessons to be learned from this huge and embarrassing Ponzi scheme that can shed light on how people get suckered in and make the wrong decision?

The Wall Street Journal, 3-4 January, has a fascinating article called the “Anatomy of Gullibility,” written by one of the Madoff investors who lost 30% of their retirement savings in the fund.

Point #1—Poor decision-making is not limited to investing. “Financial scams are just one of the many forms of human gullibility—along with war (the Trojan Horse), politics (WMD in Iraq), relationships (sexual seduction), pathological science [people are tricked into false results]…and medical fads.”

Point #2—Foolish decisions are made despite information to the contrary (i.e. warning signs). “A foolish (or stupid) act is one in which someone goes ahead with a socially or physically risky behavior in spite of danger signs or unresolved questions.”

Point #3—There are at least four contributors to making bad decisions.

  • SITUATION—There has to be an event that requires a choice (i.e. a decision point). “Every gullible act occurs when an individual is presented with a social challenge that he has to solve.” In the enterprise, there are situations (economic, political, social, legal, personal…) that necessitate decision-making every day.
  • COGNITION—Decision-making requires cognition, whether sound or unsound. “Gullibility can be considered a form of stupidity, so it is safe to assume deficiencies in knowledge and/or clear thinking are implicated.” In the organization and personally, we need lots of good useful and usable information to make sound decisions. In the organization, enterprise architecture is a critical framework, process, and repository for the strategic information to aid cognitive decision-making processes.
  • PERSONALITY—People and their decisions are influenced positively or negatively by others (this includes the social effect…are you following the “in-crowd”?). “The key to survival in a world filled with fakers…or unintended misleaders…is to know when to be trusting and when not to be.” In an organization and in our personal lives, we need to surround ourselves with those who can be trusted to provide sound advice and guidance and genuinely look after our interests.
  • EMOTION—As humans, we are not purely rational beings; we are swayed by feelings (including fear, greed, compassion, love, hate, joy, anger…). “Emotion enters into virtually every gullible act.” While we can never remove emotion from the decision-making process (nor is it even desirable to do so), we do need to identify the emotional aspects and put them into perspective. For example, the enterprise may feel threatened and competitive in the marketplace and feel a need to make a big technological investment; however, those feelings should be tempered by an objective business case including cost-benefit analysis, analysis of alternatives, risk determination, and so forth (a rough sketch of such a comparison follows this list).
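
As a minimal illustration of tempering that competitive “gut feel” with an objective business case, here is a rough sketch of a risk-adjusted comparison of alternatives; the alternatives, costs, benefits, and probabilities below are invented for the example:

```python
# Hypothetical numbers for illustration only.
alternatives = [
    {"name": "big-bang replacement", "cost": 5_000_000, "benefit": 8_000_000, "p_success": 0.5},
    {"name": "incremental upgrade",  "cost": 2_000_000, "benefit": 4_000_000, "p_success": 0.8},
    {"name": "do nothing",           "cost": 0,         "benefit": 0,         "p_success": 1.0},
]

for alt in alternatives:
    # Expected net value = probability-weighted benefit minus cost.
    expected_net = alt["benefit"] * alt["p_success"] - alt["cost"]
    print(f"{alt['name']}: expected net value = ${expected_net:,.0f}")
```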

Hopefully, by better understanding the components of decision-making and what makes us as humans gullible and prone to mistakes, we can better structure our decision-making processes to enable more objective, better vetted, far-sighted and sound decisions in the future.



January 4, 2009

The Need for Control and Enterprise Architecture

Human beings have many needs and these have been well documented by prominent psychologists like Abraham Maslow.

At the most basic level, people have physiological needs for food, water, shelter, and so on. Then “higher-level” needs come into play including those for safety, socializing, self-esteem, and finally self-actualization.

The second-order need for safety incorporates the human desire to feel a certain degree of control over one’s life and to sense that there are, from the macro perspective, elements of predictability, order, and consistency in the world.

Those of us who believe in G-d generally attribute “real” control over our lives and world events to being in the hands of our creator and sustainer. Nevertheless, we see ourselves having an important role to play in doing our part—it is here that we strive for control over our lives in choosing a path and working hard at it. A lack of any semblance of control over our lives makes us feel like sheer puppets without the ability to affect things positively or negatively. We are lost in inaction and frustration that whatever we do is for naught. So the feeling of being able to influence or impact the course of our lives is critical for us as human beings to feel productive and a meaningful part of the universe that we live in.

How does this impact technology?

Mike Elgan has an interesting article in Computerworld, 2 January 2009, called “Why Products Fail,” in which he postulates that technology “makers don’t understand what users want most: control.”

Of course, technical performance is always important, but users also have a fundamental need to feel in control of the technology they are using. The technology is a tool for humans and should be an extension of our capabilities, rather than something that, like in the movie Terminator, runs rogue, out of the control of the human beings who made it.

When do users feel that the technology is out of their control?

Well, aside from getting the blue screen of death, users feel out of control when they are left waiting for the computer to do something (especially when they don’t know how long it will take) and when the user interface is complicated and unintuitive, so they cannot find or easily understand how to do what they want to do.

Elgan says that there are a number of elements that need to be built into technology to help users feel in control.

Consistency—“predictability…users know what will happen when they do something…it’s a feeling of mastery of control.”

Usability—“give the user control, let them make their own mistakes, then undo the damage if they mess something up” as opposed to the “Microsoft route—burying and hiding controls and features, which protects newbies from their own mistakes, but frustrates the hell out of experienced users.”

Simplicity—“insist on top-to-bottom, inside-and-outside simplicity,” rather than “the company that hides features, buries controls, and groups features into categories to create the appearance of few options, while actually reducing options.”

Performance/Stability—“everyone hates slow PCs. It’s not the waiting. It’s the fact that the PC has wrenched control from the user during the time that the hourglass is displayed.”
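
To make the usability point above (“let them make their own mistakes, then undo the damage”) a bit more concrete, here is a minimal sketch of a command-history/undo pattern; it is a generic illustration, not any particular product's implementation:

```python
class UndoableEditor:
    """Keeps the user in control by saving state before every change."""

    def __init__(self) -> None:
        self.text = ""
        self._history = []

    def apply(self, new_text: str) -> None:
        # Save the current state so the user can always roll back.
        self._history.append(self.text)
        self.text = new_text

    def undo(self) -> None:
        # Restore the previous state, if there is one.
        if self._history:
            self.text = self._history.pop()

editor = UndoableEditor()
editor.apply("hello")
editor.apply("hello wrold")  # the user makes a mistake...
editor.undo()                # ...and stays in control by undoing it
print(editor.text)           # -> hello
```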

Elgan goes on to say that vendors’ product tests “tend to focus on enabling users to ‘accomplish goals’…but how the user feels during the process is more important than anything else.”

As a huge proponent of user-centricity, I agree that people have an inherent need to feel they have some sort of control over their lives, over the technology they use, and over the direction that things are going in (i.e. enterprise architecture).

However, I would disagree that how the user feels is more important than how well we accomplish goals; mission needs and the ability of the user to execute on these must come first and foremost!

In performing our mission, users must be able to do their jobs, using technology, effectively and efficiently. So really, it’s a balance between meeting mission requirements and considering how users feel in the process.

Technology is amazing. It helps us do things better, faster, and cheaper than we could ever do by ourselves. But we must never forget that technology is an extension of ourselves and as such must always be under our control and direction in the service of a larger goal.



January 3, 2009

Embedded Systems and Enterprise Architecture

Information technology is not just about data centers, desktops, and handheld devices anymore. These days, technology is everywhere—embedded in all sorts of devices from cars and toaster ovens to traffic lights and nuclear power plants. Technology is pervasive in every industry from telecommunications to finance and from healthcare to consumer electronics.

Generally, embedded systems are dedicated to specific tasks, while general-purpose computers can be used for a variety of functions. In either case, the systems are vital for our everyday functioning.

Government Computer News, 15 December 2008, reports that “thanks to the plummeting cost of microprocessors, computing…now happens in automobiles, Global Positioning Systems, identification cards and even outer space.”

The challenge with embedded systems is that they “must operate on limited resources—small processors, tiny memory and low power.”

Rob Oshana, director of engineering at Freescale Semiconductor, says that “With embedded it’s about doing as much as you can with as little as you can.”

What’s new—haven’t we had systems embedded in automobiles for years?

“Although originally designed for interacting with the real world, such systems are increasingly feeding information into larger information systems,” according to Wayne Wolf, chair of embedded computing systems at Georgia Institute of Technology.

According to Wolf, “What we are starting to see now is [the emergence] of what the National Science Foundation is calling cyber-physical systems.”

In other words, embedded systems are used for command and control or information capture in the physical domain (like in a car or medical imaging machine), but then they can also share information over a network with others (think OnStar or remote medical services).

When the information is shared from the car to the OnStar service center, information about an accident can be turned into the dispatch of life-saving responders. Similarly, when scans from a battlefield MRI are shared with medical service providers back in the States, quality medical services can be provided, when necessary, from thousands of miles away.
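
As a toy sketch of this cyber-physical pattern (an embedded node capturing data locally and sharing it with a larger back-end system), consider the following; the node names, readings, and threshold are made up for illustration:

```python
import json
import random
from queue import Queue

def embedded_node(node_id: str, uplink: Queue) -> None:
    """Simulates a small, resource-constrained device: capture a reading, push it upstream."""
    reading = {"node": node_id, "temp_c": round(random.uniform(18.0, 40.0), 1)}
    uplink.put(json.dumps(reading))  # on a real device this would be a radio or network send

def back_end(uplink: Queue) -> None:
    """The larger information system: collect, decode, and act on the shared readings."""
    while not uplink.empty():
        reading = json.loads(uplink.get())
        if reading["temp_c"] > 30.0:
            print(f"dispatch a response for {reading['node']}: {reading['temp_c']} C")

uplink = Queue()
for node in ("vehicle-01", "field-imager-02"):
    embedded_node(node, uplink)
back_end(uplink)
```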

As we should hopefully have all come to learn after 9-11, information hoarding is faux power. But when information is shared, the power is real, because it can be received and used by more and more people, so that its influence grows exponentially.

Think, for example, of the Mars Rover, which has embedded systems for capturing environmental samples. Left alone, the information is confined to a physical device millions of miles away; but by sharing the information back with remote tracking stations here on Earth, it can be analyzed, studied, and built upon, with almost endless possibilities for ongoing learning and growth.

The world has changed from embedded systems to a universe of connected systems.

Think distributed computing and the Internet. With distributed computing, we have silos, or separate domains, of information; but by connecting the islands of information using the Internet, for example, we can all harness the vast amounts of information out there, process it within our own lives, and contribute information back to others.

The connection and sharing is our strength.

In the intelligence world, information is often referred to as dots, and it is the connection of the dots that make for viable and actionable intelligence.

As people, we are also proverbially just little dots in this big world of ours.

But as we have learned with social media, we are able to grow as individuals and become more potent and more fulfilled human beings by being connected with others—we’ve gone from doing this in our limited physical geographies to a much larger population in cyberspace.

In the end, information resides in people or can be embedded in machines, but connecting the information with other humans and machines is the true power of information technology.



January 2, 2009

It’s Time to Stop the Negativity and Move Towards Constructive Change

Recently, there was an article in Nextgov (http://techinsider.nextgov.com/2008/12/the_industry_advisory_council.php) about the Industry Advisory Council (IAC), a well-respected industry-government consortium under the auspices of the American Council for Technology, that recommended to the incoming Obama Administration the stand-up of an innovation agency under the new Chief Technology Officer.
The Government Innovation Agency “would serve as an incubator for new ideas, serve as a central repository for best practices and incorporate an innovation review in every project. As we envision it, the Government Innovation Agency would house Centers of Excellence that would focus on ways to achieve performance breakthroughs and leverage technology to improve decision making, institute good business practices and improve problem solving by government employees.”
While I am a big proponent of innovation and leveraging best practices, what was interesting to me was not so much the proposal from IAC (which I am not advocating, by the way) as one of the blistering comments posted anonymously by a reader under the pseudonym “concerned retiree,” which I am posting in its entirety as follows:
“Hmmmmm...."innovation"..."central repository of new ideas"......can this be just empty news release jargon? Just more slow-news day, free-range clichés scampering into the daily news hole?.. .or perhaps this item is simply a small sized news item without the required room to wisely explicate on the real life banalities of the government sponsored “innovation” world...such as: 1)patent problems - is the US going to be soaking up, or handing out patent worthy goodies via the "innovation" czar or czarina? Attention patent attorneys, gravy train a comin’ 2)"leverage technology to improve decision making" – wow! a phrase foretelling a boon-doggle bonanza, especially since it’s wonderfully undefined and thereby, prompting generous seed money to explore it’s vast potential (less just fund it at say, $20-30 million?); 3) the "Government Innovation Agency" - -well now, just how can we integrate this new member to the current herd of government “innovation” cows, including: A) a the Dod labs, like say the Naval Research Lab, or the Dept of Commerce lab that produced the Nobel prize winner (oh, I see now, the proposal would be for “computer” type innovation pursuits – oh, how wise, like the health research lobbyists, we’re now about slicing “innovation” and/or research to match our vendor supplier concerns, how scientific!, how MBAishly wise); B) existing labs in private industry (e.g. former Bell Labs. GM-Detroit area "labs"/innovation groups), C) university labs – currently watered by all manner of Uncle Sam dollars via the great roiling ocean of research grants. Finally - given the current Wall Street melt-down and general skepticism for American business nimbleness (this too will pass, of course) -- what's the deal with all the Harvard Grad School-type hyper-ventilation on the bubbling creativity (destructive or otherwise) of American capitalism - -surely the GAO/Commerce/SEC could pop out some stats on the progressive deterioration of expenditures -- capital and otherwise--on "innovation". Or perhaps the sponsors of the "Government Innovation Agency" - will be happy to explain at the authorization hearing - how all the dough to date spent to date on development of the green automobile has yet to put a consumer friendly one on the road from a US corp -- a fact that argues either for a vast expansion of the GIA, or, the merciful euthenasiaing of this dotty idea. See you all at the authorizing hearing?”
What’s so disheartening about this retiree’s comments?
It’s not that there is not some truth intermixed with the blistering comments, but it is the sheer magnitude of the cynicism, bitterness, negativity, resistance to “new” (or at times reformulated) ideas, and “been-there-done-that” attitude that unfairly gives a bad name to other government workers who are smart, innovative, positive, and hard-charging and want to continuously improve the effectiveness and efficiency of government for the benefit of the nation and to serve our citizens.
Sure, we need to listen and learn from those who preceded us--those with age, experience, expertise, and certainly vast amounts of wisdom. And yes, those of us who do not learn from the mistakes of the past are doomed to repeat them. So we must be mindful to be respectful, collaborative, inclusive, and careful to vet new ideas and changes.
However, those that have served before or have been serving a long time now should also give hope, innovation, change (not for change’s sake, but based on genuine learning and growth) and continuous improvement a chance.
It is always easier to be a naysayer, a doomsday prognosticator, and to tear down and destroy. It is much, much harder to be positive, hopeful, and constructive—to seek to build a brighter future rather than rest on the laurels of the past.
Unfortunately, many people have been hurt by past mistakes, false leaders, broken promises, and dashed hopes, so they become resistant to change, in addition to, of course, fearing change.
Those of us in information technology and other fields (like science, engineering, product design and development, and so many others—in fact, all of us can make a difference) need to stay strong amidst the harsh rhetoric of negativity and pessimism, and instead continue to strive for a better tomorrow.


January 1, 2009

Scrapping the Landlines and The Total CIO

It’s long overdue. It’s time to get rid of the landline telephones from the office (and also from our homes, if you still have them). Wireless phones are more than capable of doing the job, and just think: you probably already have at least one for business and one for personal use—so redundancy is built in!
Getting rid of the office phones will save the enterprise money, reduce a maintenance burden (like for office moves) and remove some extra telejunk clutter from your desk. More room for the wireless handheld charger. :-)
USA Today, 20 December 2008, reports that, according to Forrester Research, an “estimated 25% of businesses are phasing out desk phones in an effort to save money.”
Additionally, “more than 8% of employees nationwide who travel frequently have only cellphones.”
Robert Rosenberg, president of The Insight Research Corp., stated: “U.S. businesses are lagging behind Europe and Asia in going wireless, because major cellular carriers…are also earning money by providing landlines to businesses—an $81.4 billion industry in 2008.”
“In Washington, D.C., the City Administrator’s office launched a pilot program in October in which 30 employees with government-issued cellphones gave up their desk phones, said deputy mayor Dan Tangherlini. Because the government has issued more than 11,000 cellphones to employees, the program could multiply into significant savings.”
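A quick back-of-the-envelope estimate shows why the program “could multiply into significant savings”; the 11,000 cellphones figure is from the article, but the per-desk-line monthly cost below is a hypothetical assumption:

```python
issued_cellphones = 11_000                 # cited in the article
assumed_monthly_cost_per_desk_line = 30.0  # hypothetical; varies by contract

annual_savings = issued_cellphones * assumed_monthly_cost_per_desk_line * 12
print(f"Potential annual savings: ${annual_savings:,.0f}")  # -> $3,960,000
```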
A study by the National Center for Health Statistics between January and June found that more than 16% of families “have substituted a wireless telephone for a land line.”
So what’s stopping organizations from getting rid of the traditional telephones?
The usual culprits: resistance to change, fear of making a mistake, not wanting to give up something we already have—“old habits die hard” and people don’t like to let go of their little treasures—even a bulky old deskphone (with the annoying cord that keeps getting twisted).
Things are near and dear to people and they clutch on to them with their last breath—in their personal lives (think of all the attics, garages, and basements full of items people can’t let go of—yard sale anyone?) and in their professional lives (things equate to stature, tenure, turf—a bigger rice bowl sound familiar?).
Usually the best way to get rid of something is to replace it with something better, so the Total CIO needs to tie the rollout of new handheld devices to people turning in their old devices--land lines, pagers, and even older cell phones (the added benefit is more room and less weight pulling on your belt).
By the way, we need to do the same thing with the new application systems that we roll out. When the new one is fully operational, then the old systems need to be retired. Now, how often does that typically happen?
Folks, times are tough, global competition is not going away, and we are wasting too much money and time maintaining legacy stuff we no longer need. We need to let go of the old and progress with the new and improved.
