July 11, 2009

Adaptive Leaders Rule The Day

One of the key leadership traits is, of course, agility. No single course of action—no matter how intelligent or elegant—will be successful in every situation. That’s why effective leaders need to be able to adapt quickly and apply situation-appropriate behaviors (situational leadership) to circumstances as they arise.

Leaders need a proverbial "toolkit" of successful behaviors to succeed, and even more so, the ability to adapt and create innovative new tools to meet new, uncharted situations.

Harvard Business Review, July/August 2009, has an interesting article called “Leadership in a (Permanent) Crisis” that offers some useful insights on adaptive leadership.

But first, what is clear is that uncertainty abounds and leadership must adapt and meet the challenges head on:

“Uncertainty will continue as the norm even after the recession ends. Economics cannot erect a firewall against intensifying global competition, energy constraints, climate change, and political instability.”

But some things that effective leaders can do in challenging and uncertain times are as follows:

“Foster adaptation”—leaders need to be able to function in two realities—today and tomorrow. They “must execute in order to meet today’s challenges and they must adapt what and how things get done in order to thrive in tomorrow’s world.” Or to put it another way: leaders “must develop ‘next practices’ while excelling at today’s best practices.”

Stabilize, then solve—in uncertain times, when an emergency situation arises, first stabilize the situation and then adapt by tackling the underlying causes and building capacity to thrive in a new reality.

Experiment—don’t be afraid to experiment and try out new ways of doing things, innovate products and services, or field new technologies. “The way forward will be characterized by constant midcourse corrections.” But that is how learning occurs and that’s how success is bred—one experience and experiment at a time.

“Embrace disequilibrium”—Often people and organizations won’t or can’t change until the pain of not adapting is greater than the pain of staying the course. Too little pain and people stay in their comfort zone. Too much change, and people “fight, flee, or freeze.” So we have to be ready to change at the tipping point when the discomfort opens the way for change to drive forward.

Make people safe to question—unfortunately, too often [poor] leadership is afraid or threatened by those who question or seek alternative solutions. But effective leaders are open to new ideas, constructive criticism and innovation. Leaders need be confident and “create a culture of courageous conversations”—where those who can provide critical insights “are protected from the organizational pressure to remain silent.”

Leverage diversity—the broader the counsel you have, the better the decision you are likely to make. “If you do not engage in the widest possible range of life experiences and views—including those of younger employees—you risk operating without a nuanced picture of the shifting realities facing the business internally and externally.”

To me, while leaders may intuitively fall back on tried and true techniques that have worked for them in the past, adaptive leaders need to overcome that tendency and think creatively and in situation-appropriate ways to be most effective. The adaptive leader doesn’t just do what is comfortable or known, but rather he/she synthesizes speed, agility, and courage in confronting new and evolving challenges. No two days or situations are the same, and leadership must stand ready to meet the future by charting and creating new ways ahead.



July 10, 2009

The Microgrid Versus The Cloud

It’s strange how the older you get, the more you come to realize that life is not black and white. However, when it comes to technology, I once held out hope that the way to the future was clear.

Then things started to get all gray again.

First, I read a few weeks ago about the trends with wired and wireless technologies. On one hand, phones have been going from wired to wireless (many people are even giving up their landlines altogether). Yet on the other hand, television has been going the other way—from wireless (antennas) to wired (cable).

Okay, I thought this was an aberration; generally speaking, technology advances—maybe with some thrashing about—but altogether in a specific direction that we can clearly define and get our arms around.

Well, then I read another article—this one in Fast Company, July/August 2009, about the microgrid. Here’s what this is all about:

“The microgrid is simple. Imagine you could go to Home Depot and pick out a wind or solar appliance that’s as easy to install as a washer/dryer. It makes all the electricity your home needs and pays for itself in just a few years. Your home still connects to the existing wires and power plants, but it is a two-way connection. You’re just as likely to be uploading power to the grid as downloading from it. Your power supply communicates with the rest of the system via a two-way digital smart meter, and you can view your energy use and generation in real time.”

Is this fantasy or reality for our energy markets?

Reality. “From the perspective of both our venture capital group and some senior people within GE Energy, distributed generation is going to happen in a big way.” IBM researchers agree—“IBM’s vision is achieving true distributed energy on a massive scale.”

And indeed we see this beginning to happen in the energy industry with our own eyes, as “going green,” environmentalism, and alternative energy have become important to all of us.

To summarize: in the energy markets, we are going from centralized power generation to a distributed model. Yet there is an opposite trend in the works on the information technology side of the house—cloud computing—where we are moving from distributed applications, platforms, storage, and so forth (in each organization) to a more centralized model where these are provisioned by service providers such as Amazon, Google, Microsoft, and IBM, to name just a few. So in the energy markets, we will often be pushing energy back to the grid, while in information technology, we will be receiving metered services from the cloud.
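To make the two-way smart meter concrete, here is a toy net-metering calculation in Python (the function name, rate, and readings are all invented for illustration, not any utility's actual billing):

```python
def net_meter_reading(consumed_kwh, generated_kwh, rate_per_kwh):
    """Bill for a two-way meter: positive means you owe the utility,
    negative means you earned a credit by uploading power to the grid."""
    net_kwh = consumed_kwh - generated_kwh
    return net_kwh * rate_per_kwh

# A sunny month: the home generates more than it consumes and earns a credit.
bill = net_meter_reading(consumed_kwh=600, generated_kwh=750, rate_per_kwh=0.12)
```

The single signed number is the whole point: the same meter, read the same way, captures both downloading from and uploading to the grid.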

The takeaway for me is that progress can be defined in many technological ways at one time. It’s not black or white. It’s not wired or wireless. It’s not distributed or centralized services. Rather, it’s whatever meets the needs of the particular problem at hand. Each must be analyzed on its own merits and solved accordingly.



July 4, 2009

CIO Support Services Framework

The CIO Support Services Framework (CSSF) has five major components:
  1. Enterprise Architecture--for strategic, tactical, and operational planning
  2. Capital Planning & Investment Control (or IT governance)--for managing the IT investment decision process (i.e. "putting those plans to work")
  3. Project Management (or a project management office)--to effectively execute on the programs and projects in the transition strategy
  4. Customer Relationship Management (or IT service management)--for managing service and support to our customers (i.e., a single belly button--one call does it all)
  5. Business Performance Management--how we measure & drive performance (like with an IT executive dashboard--so we know whether we are hitting the target or not!)
Together these five areas make up a holistic and synergistic set of CIO support functions.

The goal is to move the mindset of the CIO from fighting day-to-day operational problems to strategically managing IT service provision through:
  • Planning
  • Investing
  • Executing
  • Servicing
  • Measuring
This is how we are going to achieve genuine success for the CIO in the 21st century and beyond.



July 3, 2009

Industry Architecture—What’s in a Name?

ComputerWorld, 22 June 2009, has an opinion piece called “The Benefits of Working Together,” about developing an “Industry Architecture (IA)”—in this particular case for the hotel industry.

It takes the concept of a company or organizational architecture and applies it across an entire industry.

“In difficult economic times, every company seeks cost reductions and process improvements. But now an entire industry has banded together to help its constituents maximize their IT-based assets.”

I can see how from a private sector approach, IA is a way for companies to work together and benefit their overall industry through:

  • Improved IT products—“a clear architectural roadmap allows suppliers to focus efforts on the capabilities most important to customers.”
  • Lower IT product costs—standardized products from suppliers are generally less costly to produce than customized ones (but they are also less differentiated and may be less exciting and inviting to customers). The IA also facilitates component reuse, standardized interfaces, and so forth.
  • Lower training costs—IA could reduce training costs, since there are standard processes and products spanning the entire industry meaning that employees can move more seamlessly between companies and not have to learn a whole new way of doing things.
  • Improved agility—industry standards allow for faster deployments and configurations of IT.
  • Increased buyer confidence—industry architectures could provide for a “product certification program”, so buyers can have confidence that IT products meet guidelines and are interoperable with other IA certified products.
  • Improved security—IA can incorporate IT security standards, resulting in companies being more secure than if they had “conflicting security approaches.”

From a public sector perspective, the Federal Enterprise Architecture (FEA) is similar to Industry Architecture in the private sector. Ideally, the FEA looks across all the federal departments (like an IA looks across the various companies in an industry) and creates a roadmap, standards, certification programs, interoperability, component reuse, umbrella security, and more, resulting in lower IT costs, more agility, and improved service to the citizen.

In terms of naming conventions, we can come up with all types of architectures from company architectures to industry architectures, from solution architectures (for meeting specific requirements) to segment architectures (for specific lines of business). We can develop horizontal architectures (across entities in the same stage of production or service provision) or vertical architectures (in entities that span different stages of production or service provision). We can create national architectures (as it looks like we may end up doing for the financial services sector now) or perhaps even global architectures (such as through environmental, economic, or military agreements and treaties).

Whatever we call the various levels of architecture, they are all enterprise architectures (just with the “enterprise” representing different types or levels of entities). In other words, an enterprise can be a company or industry, an agency or a department in the federal government. Some enterprise architectures are bigger than others. Some are more complex. But what all these enterprise architectures have in common is that they seek to provide improved IT planning and governance resulting in cost savings, cost avoidance, and performance improvement for the enterprise in question.

So, we must at all levels continue to plan, develop and implement our enterprise architectures so that we realize the benefits – from the micro to the macro environment – of both private and public sector best practices. 



June 27, 2009

Now We All Have Skin In The Game

It used to be that cybersecurity was something we talked about, but took for granted. Now, we’re seeing so many articles and warnings these days about cybersecurity. I think this is more than just hype. We are at a precipice, where cyberspace is essential to each and every one of us.

Here are some recent examples of major reviews in this area:

  • The White House released its 60-day Cyberspace Policy Review on May 29, conducted under the auspices of Melissa Hathaway, the Cybersecurity Chief at the National Security Council; the report states: “Cybersecurity risks pose some of the most serious economic and national security challenges of the 21st century…the nation’s approach to cybersecurity over the past 15 years has failed to keep pace with the threat.”
  • The Center for Strategic and International Studies’ Commission on Cybersecurity for the 44th President wrote in a December 2008 report: “America’s failure to protect cyberspace is one of the most urgent national security problems facing the new administration…It is a battle we are losing.”

Cyberspace is becoming a more dangerous place as the attacks against it are growing. Federal Computer Week, June 2009, summarized the threat this way:

“Nation states are stealing terabytes of sensitive military data, including some of the most advanced technology. Cybercrime groups are taking hundreds of millions of dollars from bank accounts and using some of that money to buy weapons that target U.S. soldiers. The attacks are gaining in sophistication and the U.S. defenses are not keeping up.”

Reviewing the possibilities as to why this is happening: Have we dropped our guard or diverted resources or knowhow away from cybersecurity in a tight budgetary environment and now have to course correct? Or, have our adversaries become more threatening and more dangerous to us?

I believe that the answer is neither. While our enemies continue to gain in sophistication, they have always been tenacious against us and our determination has never wavered to overcome those who would threaten our freedoms and nation. So what has happened?

In my view, the shift has to do with our realization that technology and cyberspace have become more and more vital to us and underpin everything we do--so that we would be devastated by any serious disruption. As the Cyberspace Policy Review states definitively: “The globally-interconnected digital information and communications infrastructure known as ‘cyberspace’ underpins almost every facet of modern society and provides critical support for the U.S. economy, civil infrastructure, public safety, and national security.”

We rely on cyberspace in every facet of our lives, and quite honestly, most would be lost without the connectivity, communications, commerce, productivity, and pleasure we derive from it each and every day.

The result is that we now have some serious “skin in the game.” We have something to lose--things that we deeply care about. Thus, we fear for our safety and survival should something bad happen. We think, consciously or subconsciously, about how we would survive without the technology, Internet, and global communications that we have come to depend upon.

Let’s think for a second:

What if cyberspace were taken down or otherwise manipulated or controlled by hostile nation states, terrorists, or criminals?

Would there be a breakdown in our ability to communicate, share information, and learn? Would there be interruptions to daily life activities, disruptions to commerce, finance, medicine and so forth, concerns about physical safety or “accidents”, risks to critical infrastructure, and jeopardy to our ability to effectively protect ourselves and country?

The point here is not to scare, but to awaken us to the new realities of cyberspace and technology dependence.

Safeguarding cyberspace isn’t a virtual reality game. Cyberspace has physical reality and implications for all of us if we don’t protect it. Cyberspace is a critical national asset, and we had better start treating it as such if we don’t want our fears to materialize.



June 26, 2009

The Cloud is a Natural Evolution of IT


Cloud computing is bringing us closer than ever to providing IT as a utility, where users no longer need to know or care about how the IT services are provided; they only want to know that the services are reliably there—just like turning on the light.
This rent-an-IT model of cloud computing can apply to any portion of an organization’s IT architecture, as follows:
  • Service architecture—for application systems, there is “software as a service” (SaaS) such as Google Apps suite for office-productivity or Salesforce.com for customer relationship management. And for developing those systems, there is “platform as a service” (PaaS) such as Google Apps Engine (GAE) or the Defense Information Systems Agency (DISA) Rapid Access Computing Environment (RACE).
  • Information architecture—for storing the data used in systems, there is “storage as a service” such as Amazon’s Simple Storage Service (S3).
  • Technology architecture—for hosting systems, there is “infrastructure as a service” such as Amazon’s Elastic Compute Cloud (EC2).
The big advantage to using hosted IT or cloud computing is that it provides on-demand information technology—again like your electricity usage; the juice is there when you need it. Additionally, by outsourcing to specialist IT providers, you can generally get more efficiency, economy, and agility in providing IT to your organization.
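The utility-style, pay-per-use billing model can be sketched in a few lines of Python (the resource names and unit rates here are hypothetical, not any provider's actual pricing):

```python
def metered_bill(usage, rates):
    """Total bill: sum of (units used x unit rate) for each metered resource."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical unit rates, in dollars, for a month of cloud usage.
rates = {"compute_hours": 0.10, "storage_gb_months": 0.15, "transfer_gb": 0.17}
usage = {"compute_hours": 200, "storage_gb_months": 50, "transfer_gb": 10}
total = metered_bill(usage, rates)
```

Just as with the electric meter, you pay only for what flowed through, with no capital outlay for the generating plant.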
Of course, there are challenges that include ownership, security, privacy, and a cultural shift from a vertical (stovepiped) to horizontal (enterprise and common services) mindset.
From my perspective, cloud computing is a natural evolution in our IT service provision:
  1. At first, we did everything in-house, ourselves—with our own employees, equipment, and facilities. This was generally very expensive in terms of finding and maintaining employees with the right skill sets, and developing and maintaining all our own systems and technology infrastructure, securing it, patching it, upgrading it, and so on.
  2. Then came the hiring of contractors to support our in-house staff; this helped alleviate some of the hiring and training burden on the organization. But it wasn’t enough to make us cost-efficient, especially since we were still managing all our own systems and technologies for our organization as a stovepipe.
  3. Next, we moved to a managed services model, where we out-sourced vast chunks of our IT—from our helpdesk to desktop support, from data centers to applications development, and even to security and more. But apparently that didn’t go far enough, because we were still buying, building, and maintaining our own IT instances for our organization, but now employing call centers and data centers in far-flung places.
  4. And finally, the realization has emerged that we do not need to provide IT services either with our own or contracted staff, but rather we can rely on IT cloud providers who will manage our information technology and that of tens, hundreds, and thousands of others and provide it seamlessly over the Internet, so that we all benefit from a more scalable and unified service provision model.
The cloud computing model takes the CIO/CTO and their staffs out of the fire-fighting mode of IT management and into the driver’s seat for managing IT strategically, innovatively, and with a focus on the specific mission needs of their organization.


June 21, 2009

Making More Out of Less

One thing we all really like to hear about is how we can do more with less. This is especially the case when we have valuable assets that are underutilized or potentially even idle. This is “low hanging fruit” for executives to repurpose and achieve efficiencies for the organization.

In this regard, there was a nifty little article in Federal Computer Week, 15 Jun 2009, called “Double-duty COOP” about how we can take continuity of operations (COOP) failover facilities and use them for much more than just backup and business recovery purposes in the case of emergencies. 

“The time-tested approach is to support an active production facility with a back-up failover site dedicated to COOP and activated only during an emergency. Now organizations can vary that theme”—here are some examples:

Load balancing—“distribute everyday workloads between the two sites.”

Reduced downtime—“avoid scheduled outages” for maintenance, upgrades, patches and so forth.

Cost effective systems development—“one facility runs the main production environment while the other acts as the primary development and testing resource.”

Reduced risk data migration—when moving facilities, rather than physically transporting data and risk some sort of data loss, you can instead mirror the data to the COOP facility and upload the data from there once “the new site is 100 percent operational.”
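The first idea above—load balancing everyday work between the production and COOP sites—can be sketched as a simple round-robin dispatcher (site names and job names are hypothetical):

```python
from itertools import cycle

# In normal operation, both sites share the everyday workload;
# in an emergency, all work would fail over to whichever site survives.
sites = cycle(["production", "coop_failover"])

def dispatch(jobs):
    """Assign each job to the next site in round-robin order."""
    return [(job, next(sites)) for job in jobs]

assignments = dispatch(["batch-1", "batch-2", "batch-3", "batch-4"])
```

Real load balancers weigh capacity and health rather than simply alternating, but the dual-use principle is the same: the backup site earns its keep every day, not just during a disaster.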

It’s not that any of these ideas are so earth-shattering; rather, it is their sheer simplicity and intuitiveness that I really like.

COOP is almost the perfect example of resources that can be dual-purposed, since they are there “just in case.” While the COOP site must be ready for the looming contingency, it can also be used prudently to assist day-to-day operational needs.

As IT leaders, we must always look for improvements in the effectiveness and efficiency of what we do. There is no resting on our laurels. Whether we can do more with less, or more with more, either way we are going to advance the organization and keep driving it to the next level of optimization. 



June 20, 2009

Who Says Car Companies Can't See?


Check out the concept for the new "Local Motors" car company:

  • "Vote for the designs you want. If you are a designer, you can upload your own. Either way, you help choose which designs are developed and built by the Local Motors community. Vote for competition designs, Checkup critiques, or portfolio designs.
  • Open Development, sort of like open source. Once there is enough support for any single design, Local Motors will develop it openly. That means that you not only choose which designs you want to drive, you get to help develop them - every step of the way.
  • Choose the Locale During the development process, help choose where the design should be made available. Local Motors is not a big car company, we are Local. The community chooses car designs with local regions in mind; where will this design fit best? You tell us. We make it happen.
  • Build your Local Motors vehicle Then, once the design and engineering is fully developed you can go to the Local Motors Micro-Factory and build your own - with our help, of course. See the "Buy" page for purchase and Build Experience details.
  • Drive your Local Motors car, the one you helped design and build, home."

I like this user-centric approach to car design and development. This is how we really put the user in the driver's seat.

This is the type of opportunity where we go from Henry Ford's one-car-for-the-masses approach to a more localized implementation.

While I don't know the specific economics of this approach for a car company, it seems like it has bottom-line potential since they will only proceed with car development once they have enough demand identified.

Why build cars that no one wants or likes and why pay for internal design and market research studies, when people will willingly participate for free in order to get what they really want?

Finally, this is a terrific example of open source development and crowdsourcing--getting the masses to contribute and making something better and better over time. More minds to the task, more productivity and quality as a result.



June 19, 2009

The Total CIO - Honored by CIO Magazine

FRI, JUN 19, 2009 10:42 EDT

What We’re Reading

Blogs and books selected by the staff of CIO magazine from the June 1 issue

POSTED BY: Christine Celli in Best Practices

TOPIC: Applications

BLOG: The Techie Reading List


The Total CIO. By Andy Blumenthal.

Andy Blumenthal, CTO at the Bureau of Alcohol, Tobacco, Firearms and Explosives, blogs on all things leadership, including the challenges of being a change agent, bridging the business-IT divide, and the importance of being customer-centric.



June 16, 2009

Rocky and The Total CIO



The Total CIO:
  1. Multitasks
  2. Is always training (and learning)
  3. Leads by example
  4. Inspires others
  5. Is determined and persistent
  6. Has inner strength
  7. Everything is a potential technology/tool
  8. Means business
  9. Gets results
  10. Above all, has a heart


June 14, 2009

Architecture of Freedom

In the United States, we have been blessed with tremendous freedom, and these freedoms are enshrined in the Constitution and Bill of Rights. However, in many countries around the world, people do not share these basic freedoms and human rights.

Now in many countries, the limitation and subjugation of people has extended from the physical world to the virtual world of the Internet. People are prevented through filtering software from freely “surfing” the Internet for information, news, research, and so forth. And they are prohibited from freely communicating their thoughts and feelings in email, instant messages, blogs, social networks, and other communications media. If they are identified and caught, they are often punished through rehabilitation by hard prison labor, or they may simply disappear altogether.

In fact, many countries are now insisting that technology companies build in filtering software so that the government can control or block their citizens’ ability to view information or ideas that are unwanted or undesirable.

Now however, new technology is helping defend human rights around the world—this is the architecture for anonymity and circumvention technologies.

MIT Technology Review (May/June 2009) has an article entitled “Dissent Made Safer—how anonymity technology could save free speech on the Internet.”

An open source non-profit project called TOR has developed a peer-to-peer technology that enables users to encrypt communications and route data through multiple hops on a network of proxies. “This combination of routing and encryption masks a computer’s actual location and circumvents government filters; to prying eyes, the Internet traffic seems to be coming from the proxies.”

This creates a safe environment for users to browse the Internet and communicate anonymously and safely—“without them, people in these [repressive] countries might be unable to speak or read freely online.”
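The layered routing-and-encryption idea can be illustrated with a toy sketch. This uses XOR in place of real cryptography—TOR actually uses strong public-key and symmetric ciphers—but it shows the essential structure: the sender wraps the message in one layer per relay, and each relay peels off only its own layer:

```python
def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy 'encryption' layer; XOR is its own inverse, so the same
    call both adds and removes a layer. NOT real cryptography."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]
message = b"hello, open internet"

# The sender wraps the message in one layer per relay, innermost layer first.
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay in turn peels off exactly one layer; only after the last
# (exit) relay does the original message reappear.
peeled = onion
for key in relay_keys:
    peeled = xor_layer(peeled, key)
```

The key property is that no single relay ever sees both the sender and the plaintext, which is what makes the traffic appear to come from the proxies.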

The OpenNet Initiative in 2006 “discovered some form of filtering in 25 of 46 nations tested.” A more recent study by OpenNet found that “more than 36 countries are filtering one or more kinds of speech to varying degrees…it is a practice growing in scope, scale, and sophistication.”

Generally, filtering is done with some combination of “blocking IP addresses, domain names… and even Web pages containing certain keywords.”

Violations of Internet usage can result in prison or death for treason.

Aside from TOR, there are other tools for “beating surveillance and censorship” such as Psiphon, UltraReach, Anonymizer, and Dynaweb Freegate.

While TOR and these other tools can be used to help free people from repression around the world, these tools can also be used, unfortunately, by criminals and terrorists to hide their online activities—and this is a challenge that law enforcement must now understand and contend with.

The architecture of TOR is fascinating and freeing, and as they say, “the genie is out of the bottle” and we cannot hide our heads in the sand. We must be able to help those around the world who need our help in achieving basic human rights and freedoms, and at the same time, we need to work with the providers of these tools to keep those who would do us harm from taking advantage of a good thing. 



June 12, 2009

Future Police Cruiser Architected for Law Enforcement

(Video: Carbon Motors E7 Police Car Photoshoot, by Douglas Sonders Photography, on Vimeo.)

Coming in 2012. This new law enforcement vehicle rocks!! 

The first police vehicle architected for the law enforcement end-user (User-centric EA in action). 

"Carbon Motors is a new Atlanta-based automaker that is developing the Carbon E7, the world's first purpose-built law enforcement vehicle that will provide enhanced performance and improved efficiency compared to the off-the-line cars used by today's officers. Automotive engineers from Carbon Motors are collaborating with law enforcement personnel across the country to design a vehicle that is equipped to meet the unique demands of day-to-day patrol operations." (Homeland Defense Journal)


June 7, 2009

Digital Object Architecture and Internet 2.0

There is an interesting interview in Government Executive, 18 May 2009, with Robert Kahn, one of the founders of the Internet.

In this interview Mr. Kahn introduces a vision for an Internet 2.0 (my term) based on Digital Object Architecture (DOA) where the architecture focus is not on the efficiency of moving information around on the network (or information packet transport i.e. TCP/IP), but rather on the broader notion of information management and on the architecture of the information itself.

The article states that Mr. Kahn “still harbors a vision for how the Internet could be used to manage information, not just move packets of information” from place to place.

In DOA, “the key element of the architecture is the ‘digital object,’ or structured information that incorporates a unique identifier and which can be parsed by any machine that knows how digital objects are structured. So I can take a digital object and store it on this machine, move it somewhere else, or preserve it for a long time.”

I liked the comparison to electronic files:

“A digital object doesn’t become a digital object any more than a file becomes a file if it doesn’t have the equivalent of a name and an ability to access it.”

Here are some of the key elements of DOA:

  • Handles—these are like file names; they are the digital object identifiers that are unique to each and enable each to be distinctly stored, found, transported, accessed and so forth. The handle record specifies things like where the object is stored, authentication information, terms and conditions for use, and/or “some sense of what you might do with the object.”
  • Resolution system—this is the ‘handle system’ that “gives your computer the handle record for that identifier almost immediately.”
  • Repository—“where digital objects may be deposited and from which they may be accessed later on.” Unlike traditional database systems, you don't need to know a lot of the details about it to get in or find what you're looking for.
  • Security at object layer—In DOA, the security “protection occurs at the object level rather than protecting the identifier or by providing only a password at the boundary.”
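A toy sketch of the handle-and-resolution idea follows (the handle prefix, repository address, and record fields are all invented for illustration, not real Handle System entries):

```python
# A minimal "handle system": a lookup from unique identifiers to records
# describing where each digital object lives and how it may be used.
handle_registry = {
    "10.500/demo-object-1": {
        "location": "repository-a.example.org",
        "auth": "public",
        "terms": "read-only",
    },
}

def resolve(handle):
    """Return the handle record for an identifier, or None if unknown."""
    return handle_registry.get(handle)

record = resolve("10.500/demo-object-1")
```

The point of the architecture is that the handle record, not a search engine, tells you where the object is stored and under what terms it may be accessed.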

The overall distinguishing factor of DOA from the current Internet is that in the current Internet environment, you “have to know exactly where to look for certain information” and that’s why search engines are so critical to indexing the information out there and being able to find it. In contrast, in DOA, information is tagged when it is stored in the repository and given all the information up front about “how do you want to characterize it” and who can manage it, transport it, access it, and so on.

To me, in DOA (or Internet 2.0) the information itself provides for the intelligent use of it as opposed to in the regular Internet, the infrastructure (transport) and search features must provide for its usability.

As I am thinking about this, an analogy comes to mind. Some people with medical conditions wear special information bracelets that identify their unique medical conditions and this aids in the speed and possibly the accuracy of the medical treatment they receive—i.e. better medical management.  This is like the tagging of information in DOA where the information itself wears a metaphorical bracelet identifying it and what to do with it thereby yielding faster and better information management.

Currently, we sort of retrofit data about our information into tags called metadata, but instead here we have the notion of creating the information itself with the metadata almost as part of the genetic makeup of the information itself.

Information with “handles” built in as part of the information creation and capture process would be superior for sharing and collaboration, and ultimately more user-centric for people.

In my humble opinion, DOA has some teeth and is certainly not "Dead On Arrival."



June 1, 2009

The Secret Service in Action


Once again, it's all about the mission. 

Focus, determination, absolute dedication to service. 

Principles every organization can adopt in their architectures.

And by the way, I am very proud to say that it is my alma mater.



May 31, 2009

From Pigging Out to Piggybanking

Recently there was some media interest in the government system of funding allocation, which essentially rests on one principle: “Use it or lose it.” Unlike in the private sector, where unused funds may be reserved for future use, money that is not spent in a given appropriation year is simply returned, for the most part.

In our own personal financial worlds, in fact, it is a primary lesson that we should not spend every dollar we earn. Rather, any financial adviser will tell you that money must be managed over many years, including saving money for the proverbial “rainy day” (the recent financial meltdown and recovery act notwithstanding).

In business as well as in our personal lives, we are taught to do three things with our money:

  • Spend some—for business operating expenses or living expenses in our personal lives.

  • Save some—for unexpected needs, like when an economic recession negatively impacts business cash flow, or in our personal lives when a job is lost and we need savings to tide us over; the saving could also be for opportunities, like accumulating funds to get into a new business or saving up for a deposit on a home.

  • Invest some—for longer-term needs like research and development, potential business acquisitions, and so forth, or in our personal lives for college education, weddings, retirement, and more.
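The three-way split is simple arithmetic; here is a minimal Python sketch (the percentages are purely illustrative, not financial advice):

```python
def allocate(budget, spend=0.70, save=0.20, invest=0.10):
    """Split a budget three ways; the shares must sum to 100 percent."""
    assert abs(spend + save + invest - 1.0) < 1e-9
    return {"spend": budget * spend, "save": budget * save, "invest": budget * invest}

# A hypothetical $1M appropriation, split rather than spent down to zero.
plan = allocate(1_000_000.00)
```

Under a use-it-or-lose-it regime, only the first bucket exists; the other two are returned at year end, which is exactly the problem.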

My question is why in government is there not an option #2 or #3—to save or invest funds for the future, like we have in our personal lives and in business?  Why can’t agencies and lawmakers plan longer-term and manage funds strategically instead of tactically—beyond the current year here and now?

The Clinger-Cohen Act of 1996 called for the development and maintenance of an IT architecture, since interpreted more broadly as the mandate for enterprise architecture, where we plan and govern investments strategically (i.e. no longer based on short-term gut, intuition, politics, or subjective management whim).

Managing for enterprise architecture necessitates that we manage business and IT investments with the ability to spend, save, or invest as necessitated by agency mission and vision, customer requirements, and the overall investment climate (i.e. the return on spending versus the return on saving or longer-term investment).

Managing money by driving an end of year spend-down seems to negate the basic principles of finance and investing that we are taught from grade school and that we use in business and our personal lives.

By changing the government budget process to allow for spending, saving, and investing, we will open up more choices to our leaders and hold them responsible and accountable for the strategic long-term success of our vital mission.

