
August 20, 2009

Andy Blumenthal Talks about Cloud Computing

Here is the podcast from the MeriTalk Silverlining Series (August 2009).



August 12, 2009

Andy's Cloud Computing Presentation on MeriTalk

Introduction

First, let me start out by saying that cloud computing brings us closer than ever to providing IT as a utility such as electricity, where users no longer need to know or care how IT services are provided, only that they are reliably there, just like turning on the light. This is the subscription approach to using information technology, where base services are hosted and shared, and you pay only for what you need and use.

In cloud computing, there are a number of basic models. First, in public clouds, we have a multi-tenant, shared-services environment with access provided over a secure Internet connection. In contrast, in a private cloud, the shared IT services sit behind the company’s firewall and are controlled by in-house staff. Then there is also the community cloud, an extension of the private cloud in which IT resources are shared by several organizations that make up a specific community.

The advantage to cloud computing—whether public or private—is that you have a shared, enterprise-wide solution that offers a number of distinct advantages:

  1. Efficiency–with cloud computing, we build once and reuse multiple times—i.e., we share resources rather than everyone having their own.
  2. Flexibility–we are more nimble and agile when we can quickly expand or contract capacity on demand, as needed—what some call rapid elasticity (see the sketch after this list). Moreover, by outsourcing the utility-computing elements of our IT infrastructure, we can focus our internal efforts on our core mission areas.
  3. Economy (or economy of scale)–it’s cheaper and more cost-effective when we can tap into larger pools of common resources maintained by companies with subject-matter expertise. They are then responsible for ensuring that IT products are patched, upgraded, and modernized. Moreover, we pay only for what we actually use.
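
To make “rapid elasticity” concrete, here is a minimal sketch in Python of the kind of scale-out/scale-in decision a cloud provider automates on our behalf. The function name, thresholds, and figures are illustrative assumptions, not any particular vendor’s API.

    # Minimal autoscaler sketch: size the pool so that average
    # utilization tracks back toward a target. All names and
    # numbers here are made up for illustration.
    def desired_instances(current, utilization, target=0.60,
                          min_n=1, max_n=20):
        if utilization <= 0:
            return min_n
        proposed = round(current * (utilization / target))
        return max(min_n, min(max_n, proposed))

    # Four instances running hot at 90% utilization: scale out to 6.
    print(desired_instances(current=4, utilization=0.90))  # 6
    # Overnight lull at 15% utilization: scale back in to 1.
    print(desired_instances(current=4, utilization=0.15))  # 1

Capacity follows demand in both directions, which is exactly what lets us pay only for what we use.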

Issue

So cloud computing sounds pretty good, doesn’t it? What then is the big issue? Plain and simple, it comes down to this: is cloud computing effective for the organization? And what I mean by that is a few things:

  • First is customization, personalization, and service: when you buy IT computing services in this shared-services model, do you really get what you need and want, or are you just getting a canned approach, like the Model T that came in one color, black? For example, when you purchase Software as a Service, are you getting the solution your agency needs or the one built for someone else?
  • Next is security, privacy, and disaster recovery. This is a big deal because in a public cloud you are capturing, processing, sending, and storing data outside of your proprietary infrastructure. This opens the door to theft, manipulation, or other compromise of our data by criminals, cyber-terrorists, and even hostile nation-states.
  • Third, and maybe most important, is culture, especially in a very individualistic society like ours, where people are used to getting what they want, when they want it, without having to share. For example, we prefer owning our own vacation home to having a time-share. We love the concept of a personal home theater. Everyone now has a personal cell phone, and the old public telephones that were once on every corner are now practically extinct. And most people prefer driving their own cars to work rather than using mass transit, even though it’s not environmentally friendly. So the idea of giving up our proprietary data centers, our application systems, and control of our data in a cloud computing model is alien to most and possibly even frightening to many.

The Reality

So how do we harmonize the distinct advantages of cloud computing—efficiency, flexibility, and economy—with the issues of customization, security, and culture?

The reality is that regardless of customization issues, we can simply no longer afford for everyone to have their own IT platforms—it’s wasteful. We are recovering from a deep financial recession, the nation has accumulated unprecedented levels of debt, and we are competing in a vast global economy, where others are constantly raising the bar—working faster, better, and cheaper.

Moreover, from a technology standpoint, we have advanced to where it is now possible to build an efficient cloud computing environment using distributed architecture, virtualization/consolidation, and grid computing.

Thirdly, on a cultural level, as individualistic as we are, it is also true that we now recognize the importance of information sharing and collaboration. We are well aware that we need to break down the stovepiped verticals and build and work horizontally. This is exemplified by things like Google Docs, SharePoint, Wikipedia, and more.

In terms of security, I certainly understand people’s concern, and it is real. However, we are all already using the cloud. Are you using online banking? Are you ordering things online through Amazon, Overstock, or other e-commerce vendors? Do you use Yahoo or Google email? Then you are already using the cloud, and most of us don’t even realize it. The bottom line on security is that every agency has to decide for itself, in terms of its mission and its ability to mitigate the risks.

How to Choose

So there are two questions then. Assuming—and I emphasize assuming—that we can solve the security issues with a “Trusted Cloud” that is certified and accredited, can we get over the anxiety of moving toward cloud computing as the new standard? I believe that since the use case—for flexibility, economy, and efficiency—is so compelling, the answer is going to be a resounding yes.

The next question is, once we accept the need for a cloud computing environment, how do we filter our choices among the many available?

Of course I’m not going to recommend any particular vendor or solution, but what I will do is advocate for using enterprise architecture and sound IT governance as the framework for the decision process.

For too many years, we based our decisions on gut, intuition, politics, and subjective management whim, which is why statistics show that more than 82% of IT projects fail or are seriously challenged.

While a full discussion of the EA and governance process is outside the scope of this talk, I do want to point out that to appropriately evaluate our cloud computing options, we must use a strong framework of architecture planning and capital planning and investment control to ensure the strategic alignment, technical compliance, return on investment, and risk mitigation—including of course security and privacy—necessary for successful implementation.

How Cloud Computing Fits with Enterprise Architecture

As we move to cloud computing, we need to recognize that this is not something completely new, but rather an extension of Service Oriented Architecture (SOA), where there are service providers and consumers, and applications are built by assembling reusable, shared services that consumers can search for, access, and utilize. Only now, with public cloud computing, we are sharing services beyond the enterprise, to include applications, data, and infrastructure.
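
As a rough illustration of that provider/consumer pattern, here is a small Python sketch of a service registry through which a provider publishes a shared service once and many consumers discover and reuse it. All of the names (ServiceRegistry, the geocode stub) are hypothetical, not any specific product.

    # SOA-style registry sketch: publish once, reuse many times.
    class ServiceRegistry:
        def __init__(self):
            self._services = {}

        def publish(self, name, service):
            # A provider registers a shared, reusable service.
            self._services[name] = service

        def lookup(self, name):
            # A consumer discovers the service instead of rebuilding it.
            return self._services[name]

    registry = ServiceRegistry()
    registry.publish("geocode", lambda address: (38.9, -77.0))  # stub provider

    # Two different applications reuse the same shared service:
    lat, lon = registry.lookup("geocode")("500 C St SW, Washington, DC")

Public cloud computing extends this same pattern beyond the enterprise boundary, with the services hosted and metered by the provider.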

In terms of a transition strategy, cloud computing is a natural evolution in IT service provision.

At first, we did everything in-house, ourselves—with our own employees, equipment, and facilities. This was generally very expensive in terms of finding and maintaining employees with the right skill sets, and developing and maintaining all our own systems and technology infrastructure, securing it, patching it, upgrading it, and so on.

So then came the hiring of contractors to support our in-house staff; this helped alleviate some of the hiring and training burden on the organization. But it wasn’t enough to make us cost-efficient, especially since we were still managing all our own systems and technologies for our organization, as a stovepipe.

Next, we moved to a managed services model, where we out-sourced vast chunks of our IT—from our helpdesk to desktop support, from data centers to applications development, and even to security and more.

Finally, the realization has emerged that we do not need to provide IT services with either our own or contracted staff; rather, we can rely on cloud providers who offer an array of IT services on demand, who manage our information technology and that of tens, hundreds, or thousands of others, and who provide it seamlessly over the Internet, so that we all benefit from a more scalable and unified service-provision model.

Of course, from a target architecture perspective, cloud computing really hits the mark, because it provides for many of the inherent architecture principles that we are looking to implement, such as service interoperability, component reuse, and technology standardization, simplification, and cost-efficiency. And on top of all that, using services on a subscription or metered basis is convenient for the end user.
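
That subscription/metered idea is easy to picture with a toy Python sketch of how a provider might tally a monthly bill. The rates and usage figures below are invented for illustration.

    # Toy metered-billing sketch: pay only for what you actually use.
    RATES = {
        "compute_hours": 0.10,  # dollars per instance-hour (made up)
        "storage_gb":    0.03,  # dollars per GB-month (made up)
        "egress_gb":     0.09,  # dollars per GB out (made up)
    }

    def monthly_bill(usage):
        return sum(RATES[item] * qty for item, qty in usage.items())

    # 720 instance-hours + 500 GB stored + 40 GB out = $90.60
    print(monthly_bill({"compute_hours": 720,
                        "storage_gb": 500,
                        "egress_gb": 40}))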

Just one last thing I would like to point out is that sound enterprise architecture and governance must be user-centric. That means that we only build decision products that are valuable and actionable to our users—no more ivory tower efforts or developing shelfware. We need to get the right information to the right decision makers to get the mission accomplished with the best, most agile and economical support framework available.



July 10, 2009

The Microgrid Versus The Cloud

It’s strange how the older you get, the more you come to realize that life is not black and white. However, when it comes to technology, I once held out hope that the way to the future was clear.

Then things started to get all gray again.

First, I read a piece a few weeks ago about the trends with wired and wireless technologies. On one hand, phones have been going from wired to wireless (many people are even giving up their landlines altogether). Yet on the other hand, television has been going the other way—from wireless (antennas) to wired (cable).

Okay, I thought this was an aberration; generally speaking, technology advances—maybe with some thrashing about—but altogether in a specific direction that we can clearly define and get our arms around.

Well, then I read another article—this one in Fast Company, July/August 2009—about the microgrid. Here’s what this is all about:

“The microgrid is simple. Imagine you could go to Home Depot and pick out a wind or solar appliance that’s as easy to install as a washer/dryer. It makes all the electricity your home needs and pays for itself in just a few years. Your home still connects to the existing wires and power plants, but it’s a two-way connection. You’re just as likely to be uploading power to the grid as downloading from it. Your power supply communicates with the rest of the system via a two-way digital smart meter, and you can view your energy use and generation in real time.”

Is this fantasy or reality for our energy markets?

Reality. “From the perspective of both our venture capital group and some senior people within GE Energy, distributed generation is going to happen in a big way.” IBM researchers agree—“IBM’s vision is achieving true distributed energy on a massive scale.”

And indeed, we see this beginning to happen in the energy industry with our own eyes, as “going green,” environmentalism, and alternative energy have become important to all of us.

The result, to summarize, is that in the energy markets we are going from centralized power generation to a distributed model. Yet there is another trend in the works on the information technology side of the house: in cloud computing, we are moving from distributed applications, platforms, storage, and so forth (in each organization) to a more centralized model, where these are provisioned by service providers such as Amazon, Google, Microsoft, and IBM, to name just a few. So in the energy markets, we will often be pushing energy back to the grid, while in information technology, we will be receiving metered services from the cloud.

The takeaway for me is that progress can be defined in many technological ways at one time. It’s not black or white. It’s not wired or wireless. It’s not distributed or centralized services. Rather, it’s whatever meets the needs of the particular problem at hand. Each must be analyzed on its own merits and solved accordingly.



August 23, 2008

Building Enterprise Architecture Momentum

Burton Group released a report entitled “Establishing and Maintaining Enterprise Architecture Momentum” on 8 August 2008.

Some key points and my thoughts on these:

  • How can we drive EA?

Value proposition—“Strong executive leadership helps establish the enterprise architecture, but…momentum is maintained as EA contributes value to ongoing activities.”

Completely agree: EA should not be a paper or documentation exercise, but must have a true value proposition where EA information products and governance services enable better decision making in the organization.

  • Where did the need for EA come from?

Standardization—“Back in the early days of centralized IT, when the mainframe was the primary platform, architecture planning was minimized and engineering ruled. All the IT resources were consolidated in a single mainframe computer…the architecture was largely standardized by the vendor…However distributed and decentralized implementation became the norm with the advent of personal computers and local area networks…[this] created architectural problems…integration issues…[and drove] the need to do architecture—to consider other perspectives, to collaboratively plan, and to optimize across process, information sources, and organizations.”

Agree. The distributed nature of modern computing has resulted in issues ranging from unnecessary redundancy to a lack of interoperability, component reuse, standards, information sharing, and data quality. Our computing environments have become overly complex and require a wide range of skill sets to build and maintain, and this carries an inherently high and spiraling cost. Hence the enterprise architecture imperative: break down the silos, plan and govern IT more effectively from an enterprise perspective, and link resources to results!

  • What are some obstacles to EA implementation?

Money rules—“Bag-O-Money Syndrome Still Prevails…a major factor inhibiting the adoption of collaborative decision-making is the funding model in which the part of the organization that brings the budget makes the rules.”

Agree. As long as IT funding is not centralized with the CIO, project managers with pockets of money will be able to go out and buy what they want, when they want, without following the enterprise architecture plans and governance processes. To enforce the EA and governance, we must centralize IT funding under the CIO and work with our procurement officials to ensure that IT procurements that do not have approval of the EA Board, IT Investment Review Board, and CIO are turned back and not allowed to proceed.

  • What should we focus on?

Focus on the target architecture—“Avoid ‘The Perfect Path’…[which] suggests capturing a current state, which is perceived as ‘analyze the world then figure out what to do with it.’ By the time the current state is collected, the ‘as-is’ has become the ‘as-was’ and a critical blow has been dealt to momentum…no matter what your starting point…when the program seems to be focused on studies and analysis…people outside of EA will not readily perceive its value.”

Disagree with this one. Collecting a solid baseline architecture is absolutely critical to forming a target architecture and transition plan. Remember the saying, “if you don’t know where you are going, then any road will get you there.” Similarly, if you don’t know where you are coming from, you can’t lay in a course to get there. For example, try getting directions on Google Maps with only a “to” and no “from” location. You can’t do it. Similarly, you can’t develop a real target and transition plan without identifying and understanding your current state and capabilities to determine gaps, redundancies, inefficiencies, and opportunities. Yes, the ‘as-is’ state is always changing. The organization is not static. But that does not mean we cannot capture a snapshot in time and build off of it. Just like configuration management, you need to know what you have in order to manage change to it. And the time spent on analysis (unless we’re talking analysis paralysis) is not wasted. It is precisely the analysis and recommendations to improve the business processes and enabling technologies that yield the true benefits of the enterprise architecture.
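
In that spirit, a simple Python sketch of why the baseline matters: once the “as-is” and “to-be” are captured, even as rough inventories, the gaps and redundancies fall out almost mechanically. The system names below are hypothetical.

    # Gap-analysis sketch: compare a baseline ("as-is") inventory
    # against a target ("to-be") architecture. Names are made up.
    baseline = {"HR-Payroll", "Legacy-CRM", "CRM-2", "File-Shares"}
    target   = {"HR-Payroll", "Shared-CRM", "Document-Mgmt"}

    gaps        = target - baseline   # capabilities to build or buy
    retirements = baseline - target   # systems to consolidate or retire
    unchanged   = baseline & target   # keep and maintain

    print(sorted(gaps))         # ['Document-Mgmt', 'Shared-CRM']
    print(sorted(retirements))  # ['CRM-2', 'File-Shares', 'Legacy-CRM']
    print(sorted(unchanged))    # ['HR-Payroll']

Without the baseline set, the subtraction is impossible; there is nothing to diff the target against.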

  • How can we show value?

Business-driven—“An enterprise architect’s ability to improve the organization’s use of technology comes through a deep understanding of the business side of the enterprise and from looking for those opportunities that provide the most business value. However, it is also about recognizing where change is possible and focusing on the areas where you have the best opportunity to influence the outcome.”

Agree. Business drives technology, rather than doing technology for technology’s sake. In the enterprise architecture, we must understand the performance results we are striving to achieve; the business functions, processes, activities, and tasks that produce those results; and the information required to perform those functions, before we can develop technology solutions. Further, the organization’s readiness for change and its maturity level often necessitate that we identify opportunities where change is possible, through genuine business interest, need, and desire to partner to solve business problems.



April 4, 2008

Lessons from Mainframes and Enterprise Architecture

As many organizations have transitioned from mainframe computing to more distributed platforms, they have become more decentralized and, in many cases, more lax in terms of managing changes, utilization rates, availability assurance, and standardization.

However, management best practices from the mainframe environment are now being applied to distributed computing.

DM Review, 21 March 2002, reports that “people from the two previously separate cultures—mainframe and distributed [systems]—are coming together to architect, develop, and manage the resulting unified infrastructure.”

  1. Virtualization—“Developers of distributed applications can learn from the approach mainframe developers use to build applications that operate effectively in virtualized environments…[such that] operating systems and applications bounce from one server to another as workloads change,” i.e., effective load balancing. This improves on distributed applications that “have traditionally been designed to run on dedicated servers,” resulting in data centers with thousands of servers running at low utilization rates, consuming lots of power, and yielding generally low return on investment.
  2. Clustering—“A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer [like a mainframe]…Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability” (Wikipedia). The strategy here is to “reduce the number of servers you need with virtualization, while providing scaling and redundancy with clustering.” (See the sketch after this list.)
  3. Standardization—Distributed computing has traditionally been known for freedom of choice, marked by “diversity of hardware, operating platforms, application programming languages, databases, and packaged applications—and an IT infrastructure that is complex to manage and maintain. It takes multiple skill sets to manage and support the diverse application stack…standardization can help you get a handle on [this].”
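
To make the clustering idea in item 2 concrete, here is a minimal Python sketch of round-robin load balancing with failover, so that a group of servers behaves like one highly available system. The node names and health check are illustrative assumptions.

    # Clustering sketch: rotate requests across nodes, skipping failures.
    from itertools import cycle

    class Cluster:
        def __init__(self, nodes):
            self._rotation = cycle(nodes)
            self._size = len(nodes)

        def pick_node(self, is_healthy):
            # Return the next healthy node, skipping failed ones.
            for _ in range(self._size):
                node = next(self._rotation)
                if is_healthy(node):
                    return node
            raise RuntimeError("no healthy nodes available")

    cluster = Cluster(["app-01", "app-02", "app-03"])
    healthy = {"app-01", "app-03"}                    # pretend app-02 is down
    print(cluster.pick_node(lambda n: n in healthy))  # app-01
    print(cluster.pick_node(lambda n: n in healthy))  # app-03 (skips app-02)

Virtualization reduces how many servers we need; clustering like this is what provides the scaling and redundancy on top.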

Thus, while we evolve the IT architecture from mainframe to distributed computing, the change in architecture is not so much revolutionary as evolutionary. The lessons of mainframe computing that protected our data, ensured efficient utilization and redundancy, and made for a cost-effective IT infrastructure do not have to be lost in a distributed environment. In fact, as architects, we need to ensure the best of both worlds.



January 30, 2008

Peer-to-Peer and Enterprise Architecture

Peer-to-peer (P2P)—“computer network uses diverse connectivity between participants in a network and the cumulative bandwidth of network participants rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. Peer-to-peer networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and realtime data, such as telephony traffic, is also passed using P2P technology. A pure peer-to-peer network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model where communication is usually to and from a central server. A typical example for a non peer-to-peer file transfer is an FTP server where the client and server programs are quite distinct, and the clients initiate the download/uploads and the servers react to and satisfy these requests… Peer-to-peer architecture embodies one of the key technical concepts of the Internet” (Wikipedia)

CNET news, 24 January 2008, reports that P2P technology is important for reducing network traffic and speeding up downloads from the web.

How does P2P help users?

P2P as a “distributed model is much more efficient and cost effective for distributing large files on the internet, than the traditional client-server model.”

P2P for media distribution helps companies so that they “don’t have to spend millions of dollars building out their own server farms and high-speed infrastructure.”

How does P2P work?

P2P “leverages ‘peers’ in the network to host pieces of content…P2P allows the file to be downloaded once and shared many times. In fact, distribution gets more efficient the more people who want the file.”

What is the next target architecture for P2P?

“The P2P solution adds network intelligence to the peering process, so that P2P applications can make smarter decisions about where they get the content…if a P2P service can understand how the network is configured to request the file at the closest peers rather than arbitrarily getting it from a peer across the country or around the globe, it could save a lot of network resources…what’s more, using peers that are closer also helps files download faster.”
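
A minimal Python sketch of that “network intelligence” idea: when several peers hold the same content, prefer the closest ones. The peer names and latency figures below are made up for illustration.

    # Locality-aware peer selection sketch: favor nearby peers.
    def pick_peers(peers, want=2):
        # Choose the 'want' lowest-latency peers holding the chunk.
        return sorted(peers, key=lambda p: p["latency_ms"])[:want]

    peers_with_chunk = [
        {"host": "peer-tokyo",  "latency_ms": 180},
        {"host": "peer-nyc",    "latency_ms": 20},
        {"host": "peer-dc",     "latency_ms": 8},
        {"host": "peer-berlin", "latency_ms": 95},
    ]

    for p in pick_peers(peers_with_chunk):
        print(p["host"])  # peer-dc, then peer-nyc: nearby peers first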

From a User-centric EA perspective, the ability to use bandwidth more efficiently and to download files faster is a positive development for satisfying user needs for transport of ever greater amounts of data, voice, and video over the internet. Moreover, as the technologies for carrying these converge, we will continue to see even greater requirements to move these communications more efficiently and effectively. P2P is a viable technology for accomplishing this.



September 30, 2007

Centralized, Distributed, & Hybrid IT Management and Enterprise Architecture

In User-centric EA, users’ IT needs are met (timely and with quality solutions), while governance ensures that those needs are aligned with the mission and prioritized against others across the organization. To achieve these goals, how should IT management best be organized in the enterprise—centrally or distributed?

The debate over a centralized or distributed management model is an age-old battle. A popular theory states that organizations vacillate in roughly three-year cycles between a strong centralization philosophy and a strong decentralization philosophy. The result is a management paradigm that shifts from standardization to autonomy, from corporate efficiency to local effectiveness, and from pressure on costs and resources to accommodation of specific local needs, and then shifts back again. The centralized system is perceived to be too slow to react to problems in the field or to issues within a particular department or division, and the decentralized operation is perceived as fragmented and inconsistent.

To address the pros and cons of each model, there is a hybrid model for IT management, which incorporates centralized IT governance and solutions along with distributed IT planning for the line of business and niche execution.

In the hybrid model for IT governance, an IT Investment Review Board (IRB) centrally directs, guides, and authorizes IT investments through enterprise architecture, IT policy and planning, and a CIO-governed, consolidated IT budget. At the same time, IT requirements come from the lines of business, and the lines of business develop their own segment (business) architectures. In some cases, the lines of business actually plan and execute niche IT projects for their areas, while the systems development life cycle for enterprise IT systems and customer support are handled centrally.

The hybrid model for IT management is a very workable and balanced solution that demonstrates true business acumen in that it recognizes the strengths and weaknesses of both approaches (centralized and distributed management), and capitalizes on the strengths of each in coming up with a best solution for the organization.

