
August 25, 2012

IT Security, The Frankenstein Way

Here's a riddle: When is a computer virus not a dangerous piece of malware? Answer: when it is hidden as Frankenstein code. 

The Economist (25 August 2012) describes how computer viruses can now be slipped secretly into computers: rather than sending the harmful code itself, attackers send just a blueprint for the virus; the code is then harvested from innocuous programs already on the machine and assembled to form the virus itself.

Like the fictional Frankenstein monster, stitched together out of scavenged body parts, the semantic blueprint pulls together code from host programs to form the virus.

The result is a polymorphic virus: because the actual code is drawn from other programs, each copy of the virus ends up looking a little different, potentially masking itself and slipping past antivirus software, firewalls, and other security barriers.

Flipping this strategy around, in a sense, Bloomberg Businessweek (20 June 2012) reports on a new IT security product by Bromium that keeps software downloads from reaching the computer as a whole. Instead, it sets aside a virtual compartment to contain the code and verify that it is not malicious; if the code is deemed dangerous, the cordoned-off compartment dissolves, preventing damage to the overall system.
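
Bromium's actual product does this with hardware-assisted micro-virtualization. As a rough illustration of the "cordoned-off compartment" idea only, here is a minimal Python sketch that runs untrusted code in a throwaway working directory under a time limit and then discards the directory no matter what happened; all the names and limits here are invented for the example, not Bromium's implementation.

```python
import shutil
import subprocess
import sys
import tempfile

def run_in_throwaway_compartment(script_text, timeout=5):
    """Run untrusted code in a disposable working directory, then discard it.

    Illustration only: a product like Bromium's isolates at the hypervisor
    level, while this sketch merely confines file writes and wall-clock time.
    """
    workdir = tempfile.mkdtemp(prefix="compartment-")
    try:
        result = subprocess.run(
            [sys.executable, "-c", script_text],
            cwd=workdir,               # file writes land in the throwaway dir
            capture_output=True, text=True,
            timeout=timeout,           # kill runaway code
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        return None, "killed: exceeded time limit"
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # the compartment "dissolves"

# The untrusted code below drops a file, but it vanishes with the compartment.
print(run_in_throwaway_compartment(
    "open('dropped_payload.txt', 'w').write('gone after cleanup')"))
```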

So while on the offensive side Frankenstein viruses stitch together parts of code to make a dangerous whole, on the defensive side we wall off potentially dangerous code before it can infect the whole computer.

Computer attacks are getting more sinister as they attempt to do an end-run around standardized security mechanisms, leading to continually evolving computer defenses to keep the Frankensteins out there harmless and at bay.

(Source Photo: here with attribution to Dougal McGuire)


November 11, 2010

Microsoft’s Three-headed Play

Computerworld, 8 November 2010, has an article called “Ozzie to Microsoft: Simplify, Simplify.” The message: unless Microsoft can become nimbler and less bureaucratic, they will not be able to keep pace with technology change in the marketplace.

Ray Ozzie, Microsoft’s departing Chief Software Architect (and Bill Gates’s successor since 2006), has prepared a five-year plan for the company that “exhorts the company to push further into the cloud—or perish.” (Hence, a recent Microsoft stock price that is half of what it was more than ten years ago!)

According to Ozzie—and I believe most technology architects today would agree—the future of computing is far less about the PC and Windows and much more about mobile devices and services, which are not traditional core competencies of Microsoft.

The new technology landscape is one that is based on:

  • Mobility—access anywhere (smartphones, tablets, and embedded appliances)
  • Pervasiveness—access anytime (24/7, “always on”)
  • Shared services—access that is hosted and shared, rather than device or enterprise-based.

Despite seeing the future, Microsoft is having trouble changing with the times, and many are questioning whether they are, in a sense, a “one-trick pony” that can no longer keep up with technology innovators such as Apple, Google, and Amazon, which seem to be riding the mobility and cloud wave.

Wes Miller, a technology analyst, states about Microsoft: “My frustration is that it’s a big ship, and the velocity with which the boat is going will make it hard” for them to move from a PC-centric to a cloud-oriented world. “You’re talking about competing with companies that are, if not out-innovating Microsoft, then outpacing them.”

With the deep bench of intellectual talent and investment dollars that Microsoft has, why are they apparently having difficulty adjusting to the changing technology landscape that their own chief architect is jumping up and down urging them to confront head-on?

To me, it certainly isn’t ignorance—they have some of the smartest technologists on the planet.

So what is the problem? Is it denial, complacency, arrogance, obstinacy, or lapses in accountability and leadership? Or is it a combination of these, coupled with the sheer size (about 89,000 employees) and organizational complexity of Microsoft that Ozzie and Miller point out, that is hampering their ability to effectively transform themselves?

This certainly wouldn’t be the first time that the small and nimble have outmaneuvered lumbering giants. That’s why, according to Fortune Magazine, only 62 of the Fortune 500 companies have appeared on the list every year since 1955, while another 1,952 have come and gone. It’s the David vs. Goliath story again and again.

While Microsoft is struggling to keep pace, they are fortunate to have had people like Ray Ozzie pointing them in the right direction, and they have made major inroads with cloud offerings such as Office 365 (Office, Exchange, SharePoint, and Lync, formerly OCS), Windows Azure (service hosting and management), and Hyper-V (server virtualization).

As I see it, Microsoft has 3 choices:

  1. Change leadership—find someone who can help the company adapt to the changing environment
  2. Break up the company into smaller, more nimble units or “sub-brands,” each with the autonomy to compete aggressively in their sphere
  3. Think completely outside the box, instead of focusing on the past (base product enhancements and the “next version”). Simply coming out with “Windows 13” is a bit ridiculous as a long-term strategy, as is mimicking competitors’ products and strategies.

As is often the case, this really isn’t so much a question of the technology, because Microsoft can certainly do technology; it is whether Microsoft can overcome their cultural challenges and once again innovate, and do it quickly, like their smaller and more agile rivals.



September 3, 2009

Zipcar = Cloud Computing

No, not exactly. But they do have a lot in common: both are about sharing resources to achieve cost savings and flexibility.

An article in Fortune Magazine (September 14, 2009) on Zipcars really got me thinking about this.

With cloud computing, we are sharing our IT infrastructure, storage, and/or applications with others and using the services of cloud providers. It is one big virtual environment, where instead of everyone having their own technologies and applications, we make use of shared resources and we meet our information technology needs on demand and pay only for what we use.

Zipcar has the same shared model as the cloud, and shifting toward this new paradigm is going to help preserve the environment.

Usage: Like cloud computing, Zipcar provides the use of an automobile when you need one, and you pay by the hour or day, according to what you use. It’s flexible, saves money, and cuts down on the number of vehicles on the road and therefore on the pollution associated with them.

Cost: Both Zipcar and cloud computing cost pennies on the dollar. For a basic $50 membership and $11.25 an hour you can drive a Zipcar (note: drivers who give up their own cars save an average of $800 per month). For 12-25 cents per gigabyte per month you can store data in the cloud, or for 10 cents to $1.25 an hour you can process tasks on Amazon’s Elastic Compute Cloud (EC2).
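
To make the "pennies on the dollar" point concrete, here is a quick back-of-the-envelope calculation in Python using the figures quoted above. The usage amounts plugged in are hypothetical, and I am assuming the $50 Zipcar membership is an annual fee.

```python
# Back-of-the-envelope costs using the rates quoted above.
# The hours and gigabytes below are invented sample usage, not article data.

zipcar_hours = 20                                # hypothetical hours driven per month
zipcar_cost = 50 / 12 + 11.25 * zipcar_hours     # assumes the $50 membership is annual
print(f"Zipcar, {zipcar_hours} hrs/month: ${zipcar_cost:,.2f}")

storage_gb = 500                                 # hypothetical data stored
print(f"Cloud storage, {storage_gb} GB/month: "
      f"${0.12 * storage_gb:,.2f}-${0.25 * storage_gb:,.2f}")

ec2_hours = 100                                  # hypothetical instance-hours
print(f"EC2, {ec2_hours} hrs: ${0.10 * ec2_hours:,.2f}-${1.25 * ec2_hours:,.2f}")
```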

Functionality: Zipcars move people around and cloud computing moves data.

Centralization: Zipcars are co-located in “company created ‘pods’” or groups of cars in parking lots or garages, and cloud computing services are centralized in the data centers of large cloud providers (like Google, Amazon, Microsoft, and IBM).

Market: Zipcar has already grown to 325,000 members and is growing 30% a year, with the overall market for shared vehicles expected to balloon to $800 million over the next five years (Fortune), while business IT spending on cloud computing is expected to rise from $16 billion last year to $42 billion by 2012 (IDC).

Users: Major companies (not just individuals) are using Zipcars—so far “about 8,500 companies have signed up, including Lockheed Martin, Gap, and Nike.” And brand-name companies are signing up for cloud computing, such as the NY Times, NASDAQ, Major League Baseball, ESPN, Hasbro, and more. (http://www.johnmwillis.com/other/top-10-entperises-in-the-cloud/)

Going green: Each shared Zipcar “takes up to 20 cars off the road as members sell their rides or decide not to buy new ones.” Each move to cloud computing likewise makes some or all of an organization’s unique servers, storage devices, and applications obsolete.

The trend: In the transportation market, the future will be “a blend of things like the Zipcar, public transportation, and private car ownership” (according to Bill Ford), and in the IT industry, the future will be a combination of cloud computing, managed services, and in-house IT service provision.

Zipcar and cloud computing are benefiting from the new shared-services model, driven by cost savings, flexibility, efficiency, and eco-consciousness. These forces are changing our use of transportation and computing for the better.



August 12, 2009

Andy's Cloud Computing Presentation on MeriTalk

Introduction

First let me start out by saying that cloud computing brings us closer than ever to providing IT as a utility, like electricity, where users no longer need to know or care about how IT services are provided, only that they are reliably there, just like turning on the light. This is the subscription approach to using information technology, where base services are hosted and shared, and you pay only for what you need and use.

In cloud computing, there are a number of basic models. First, in public clouds, we have a multi-tenant, shared-services environment with access provided over a secure Internet connection. In contrast, in a private cloud, the shared IT services sit behind the company’s firewall and are controlled by in-house staff. Then there is also a community cloud, an extension of the private cloud, where IT resources are shared by several organizations that make up a specific community.

The advantage to cloud computing—whether public or private—is that you have a shared, enterprise-wide solution that offers a number of distinct advantages:

  1. Efficiency–with cloud computing, we build once and reuse multiple times—i.e. we share resources—rather than everyone having their own.
  2. Flexibility–we are more nimble and agile when we can quickly expand or contract capacity on-demand, as needed—what some call rapid elasticity (a toy sketch of this follows the list). Moreover, by outsourcing the utility computing elements of our IT infrastructure, we can focus our internal efforts on building our core mission areas.
  3. Economy (or economy of scale)–it’s cheaper and more cost effective when we can tap into larger pools of common resources maintained by companies with subject matter expertise. They then are responsible for ensuring that IT products are patched, upgraded and modernized. Moreover, we pay only for what we actually use.
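
Here is a toy Python sketch of that rapid elasticity: the fleet grows and shrinks with hourly demand, and you pay only for the instance-hours actually provisioned. All the numbers are invented for illustration.

```python
import math

def instances_needed(demand, unit_capacity=100, headroom=1.2):
    """Instances required to serve this hour's demand with 20% headroom."""
    return max(1, math.ceil(demand * headroom / unit_capacity))

hourly_demand = [120, 300, 900, 2400, 1800, 600, 150]  # requests/sec, invented
rate = 0.10                                            # illustrative $/instance-hour

fleet = [instances_needed(d) for d in hourly_demand]
print("fleet size per hour:", fleet)
print(f"elastic cost: ${sum(fleet) * rate:.2f} vs "
      f"fixed peak-sized fleet: ${max(fleet) * len(fleet) * rate:.2f}")
```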

Issue

So cloud computing sounds pretty good, doesn’t it? What then is the big issue? Plain and simple, it comes down to this: Is cloud computing effective for the organization? And what I mean by that is a few things:

  • First is customization, personalization and service: when you buy IT computing services in this shared services model, do you really get what you need and want – or are you just getting a canned approach, like the Model T that came in one color, black? For example, when you purchase Software as a Service are you getting the solution you need for your agency or the one built for someone else?
  • Next is security, privacy, and disaster recovery. This is a big deal because in a public cloud, you are capturing, processing, sending, and storing data outside of your proprietary infrastructure. This opens the door to theft, manipulation, or other compromise of our data by criminals, cyber-terrorists, and even hostile nation-states.
  • Third, and maybe most important, is culture, especially in a very individualistic society like ours, where people are used to getting what they want, when they want it, without having to share. For example, we prefer owning our own vacation home to having a time-share. We love the concept of a personal home theater. Everyone now has a personal cell phone, and the old public telephones that were once on every corner are now practically extinct. And most people prefer driving their own cars to work rather than using mass transit—even though it’s not environmentally friendly. So the idea of giving up our proprietary data centers, our application systems, and the control of our data in a cloud computing model is alien to most and possibly even frightening to many.

The Reality

So how do we harmonize the distinct advantages of cloud computing—efficiency, flexibility, and economy—with the issues of customization, security, and culture?

The reality is that regardless of customization issues, we can simply no longer afford for everyone to have their own IT platforms—it’s wasteful. We are recovering from a deep financial recession, the nation has accumulated unprecedented levels of debt, and we are competing in a vast global economy, where others are constantly raising the bar—working faster, better, and cheaper.

Moreover, from a technology standpoint, we have advanced to where it is now possible to build an efficient cloud computing environment using distributed architecture, virtualization/consolidation, and grid computing.

Thirdly, on a cultural level, as individualistic as we are, it is also true that we now recognize the importance of information sharing and collaboration. We are well aware of the fact that we need to break the stovepiped verticals and build and work horizontally. This is exemplified by things like Google Docs, SharePoint, Wikipedia, and more.

In terms of security, I certainly understand people’s concern, and it is real. However, we are all already using the cloud. Are you using online banking? Are you ordering things online through Amazon, Overstock, or other e-commerce vendors? Do you use Yahoo or Google email? Then you are already using the cloud, and most of us don’t even realize it. The bottom line on security is that every agency has to decide for itself in terms of its mission and its ability to mitigate the risks.

How to Choose

So there are two questions then. Assuming—and I emphasize assuming—that we can solve the security issues with a “Trusted Cloud” that is certified and accredited, can we get over the anxiety of moving towards cloud computing as the new standard? I believe that since the use case—for flexibility, economy, and efficiency—is so compelling, that the answer is going to be a resounding yes.

The next question is, once we accept the need for a cloud computing environment, how do we filter our choices among the many available?

Of course I’m not going to recommend any particular vendor or solution, but what I will do is advocate for using enterprise architecture and sound IT governance as the framework for the decision process.

For too many years, we based our decisions on gut, intuition, politics, and subjective management whim, which is why statistics show that more than 82% of IT projects are failing or seriously challenged.

While a full discussion of the EA and governance process is outside the scope of this talk, I do want to point out that to appropriately evaluate our cloud computing options, we must use a strong framework of architecture planning and capital planning and investment control to ensure the strategic alignment, technical compliance, return on investment, and risk mitigation—including of course security and privacy—necessary for successful implementation.

How Cloud Computing fits with Enterprise Architecture:

As we move to cloud computing, we need to recognize that this is not something completely new, but rather an extension of Service Oriented Architecture (SOA), where there are service providers and consumers, and applications are built by assembling reusable, shared services that consumers can search, access, and utilize. Only now, with public cloud computing, we are sharing services beyond the enterprise, including applications, data, and infrastructure.
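
As a minimal sketch of that provider/consumer pattern, here is what a consumer of a reusable, shared service might look like in Python. The service URL and its response format are hypothetical; the point is that the consumer knows only the published interface, not the provider's implementation or location.

```python
import json
from urllib import request

# Hypothetical shared-service endpoint; the provider could live in any cloud.
SERVICE_URL = "https://services.example.gov/address-lookup"

def call_shared_service(zip_code):
    """Consume a shared service through its published interface only."""
    req = request.Request(f"{SERVICE_URL}?zip={zip_code}",
                          headers={"Accept": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp)   # consumer sees data, not infrastructure

# Usage (would require a real endpoint at SERVICE_URL):
# print(call_shared_service("20001"))
```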

In terms of a transition strategy, cloud computing is a natural evolution in IT service provision.

At first, we did everything in-house, ourselves—with our own employees, equipment, and facilities. This was generally very expensive in terms of finding and maintaining employees with the right skill sets, and developing and maintaining all our own systems and technology infrastructure, securing it, patching it, upgrading it, and so on.

So then came the hiring of contractors to support our in-house staff; this helped alleviate some of the hiring and training burden on the organization. But it wasn’t enough to make us cost-efficient, especially since we were still managing all our own systems and technologies for our organization, as a stovepipe.

Next, we moved to a managed services model, where we out-sourced vast chunks of our IT—from our helpdesk to desktop support, from data centers to applications development, and even to security and more.

Finally, the realization has emerged that we do not need to provide IT services either with our own or contracted staff, but rather we can rely on IT cloud providers who can offer an array of IT services, on demand, and who will manage our information technology and that of tens, hundreds, and thousands of others and provide it seamlessly over the Internet, so that we all benefit from a more scalable and unified service provision model.

Of course, from a target architecture perspective, cloud computing really hits the mark, because it provides for many of the inherent architecture principles that we are looking to implement, such as: services interoperability and component reuse, and technology standardization, simplification, and cost-efficiency. And on top of all that—using services on a subscription or metered basis is convenient for the end-user.

Just one last thing I would like to point out is that sound enterprise architecture and governance must be user-centric. That means that we only build decision products that are valuable and actionable to our users—no more ivory tower efforts or developing shelfware. We need to get the right information to the right decision makers to get the mission accomplished with the best, most agile and economical support framework available.



September 5, 2008

The Future of Cloud Computing

Cloud computing—“a style of computing where IT-related capabilities are provided ‘as a service’, allowing users to access technology-enabled services ‘in the cloud’ without knowledge of, expertise with, or control over the technology infrastructure that supports them.” (Wikipedia)

In an article in InfoWorld, 7 April 2008, called What Cloud Computing Really Means, Galen Gruman states that “Cloud computing encompasses any subscription-based or pay-per use service that, in real time over the Internet, extends IT capabilities.”

What’s an example of cloud computing?

An example of cloud computing is Google Apps, which provides common business applications (similar to traditional office suites) online.

How does cloud computing work?

In cloud computing, resources, whether hardware or software, are available on demand, as needed.

In the case of on-demand software, application service providers (ASPs) offer software as a service (SaaS). And for on-demand hardware or IT infrastructure (i.e. virtual data center capabilities such as servers or storage), the offering takes the form of utility computing. In both cases, technology resources are served up on a pay-as-you-go or metered basis, similar to the way a public utility would charge for electricity, oil/gas, telephone, water, and so on.
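
A minimal sketch of that metered, pay-as-you-go model might look like the following; the rates and resource names are illustrative, not any provider's actual price list.

```python
from collections import defaultdict

# Illustrative unit rates, in dollars; not any provider's actual prices.
RATES = {"compute_hours": 0.10, "storage_gb_month": 0.15}

class UsageMeter:
    """Record resource consumption as it happens, like a power meter."""

    def __init__(self):
        self.usage = defaultdict(float)

    def record(self, resource, amount):
        self.usage[resource] += amount

    def invoice(self):
        """Charge only for what was actually used."""
        return {r: round(amt * RATES[r], 2) for r, amt in self.usage.items()}

meter = UsageMeter()
meter.record("compute_hours", 72)        # a burst of processing
meter.record("storage_gb_month", 250)    # data kept in the cloud
print(meter.invoice())  # -> {'compute_hours': 7.2, 'storage_gb_month': 37.5}
```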

The cloud computing model is similar to service-oriented architecture, where there is a service provider and a consumer; here, the Internet functions as the basic service broker.

Cloud computing has a basis in technology virtualization, in which service providers "hide the physical characteristics of computing resources from their users [consumers]." (Wikipedia)

What are the major advantages of cloud computing?

Cost—one of the big advantages of this computing model is that the upfront IT investment cost is little to none, since the IT assets are in essence being rented.

Scalability—customers have the ability to use more resources when they have a surge in demand and can scale back or turn off the spigot when the resources are not needed.

Flexibility—As IT capabilities get updated by the service provider, consumers in the cloud model can make immediate use of them and benefit sooner than if they had to stand up the capabilities themselves.

Mission focus—The enterprise can stay focused on core mission and mission-support capabilities and in essence easily outsource business support functions, where the service provider is responsible for enabling the more generic (not strategic or differentiating) business capabilities.

What are the enterprise architecture implications?

Cloud computing can play an important role in focusing IT solutions on strategic mission requirements, simplifying and standardizing our IT infrastructures by outsourcing capabilities, utilizing a services oriented architecture (SOA) model where common business services are served up by providers and consumed by the enterprise, and more effectively managing costs.

What is the future of cloud computing?

Obviously, there are security implications, but as Galen Gruman states: “as SOA and virtualization permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should make every enterprise a node in the cloud. It’s a long-running trend with a far-out horizon. But among big metatrends, cloud computing is the hardest one to argue with in the long term.”



May 4, 2008

Virtualization and Enterprise Architecture

"[Virtualization is] a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple logical resources; or it can include making multiple physical resources (such as storage devices or servers) appear as a single logical resource." (Mann, Andi, Virtualization 101, Enterprise Management Associates (EMA), Retrieved on 29 October 2007 according to Wikipedia)

Virtualization places an intermediary between consumers and providers; it is an interface between the two. The interface allows a multiplicity of consumers to interact with one provider, or one consumer to interact with a multiplicity of providers, or both, with only the intermediary being aware of multiplicities. (adapted from Wikipedia)
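
Here is a small Python sketch of that intermediary: several physical stores are presented to consumers as one logical resource, and only the intermediary knows about the multiplicity. The class and method names are invented for illustration.

```python
class VirtualStore:
    """Present multiple physical stores to consumers as one logical resource."""

    def __init__(self, backends):
        self.backends = backends     # the multiplicity, hidden from callers

    def _pick(self, key):
        """Only the intermediary knows which physical store holds a key."""
        return self.backends[hash(key) % len(self.backends)]

    def put(self, key, value):
        self._pick(key)[key] = value

    def get(self, key):
        return self._pick(key).get(key)

store = VirtualStore([{}, {}, {}])   # three "physical" stores, one logical one
store.put("config", "value")
print(store.get("config"))           # the consumer never sees the backends
```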

ComputerWorld, 24 September 2007, reports in “Virtual Machines Deployed on the Sly” that according to an InfoPro survey, “28% of the respondents said they expect that half of all new servers installed at their companies this year will host virtual applications. And about 50% said that, by 2010, at least half of their new servers will likely host virtual software.”

What are the major concerns in going virtual?

  • Service levels—users are concerned that performance will suffer without having dedicated hardware to run their applications.
  • Security—there is concern that application and information security will be compromised in a virtual environment.
  • Vendor support—“some vendors won’t support their software at all if it’s run on virtual machines.”
  • Pricing—pricing for software licensing utilized in a virtual environment can be higher due to added complexity of support.

From a User-centric Enterprise Architecture perspective, plan on moving to virtual machines. There is potential for significant cost savings from consolidating IT infrastructure: reducing the number of servers, cutting related facility costs, and increasing the overall utilization rates of machines while balancing loads to achieve greater efficiency. Soon there will be no need for a dedicated server to host each application.
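
The consolidation arithmetic behind those savings is straightforward; here is a back-of-the-envelope sketch with invented inputs.

```python
import math

# Invented example: many lightly loaded physical servers collapse onto a few
# well-utilized virtualized hosts of comparable capacity.
physical_servers = 40
avg_utilization = 0.10      # one-app-per-server designs often idle around 10%
target_utilization = 0.60   # a safer ceiling for a consolidated virtual host

total_load = physical_servers * avg_utilization
hosts_needed = math.ceil(total_load / target_utilization)
print(f"{physical_servers} servers at {avg_utilization:.0%} busy -> "
      f"{hosts_needed} virtual hosts at {target_utilization:.0%}")
# -> 40 servers at 10% busy -> 7 virtual hosts at 60%
```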



April 4, 2008

Lessons from Mainframes and Enterprise Architecture

As many organizations have transitioned from mainframe computing to more distributed platforms, they have become more decentralized and in many cases more lax about managing changes, utilization rates, assuring availability, and standardization.

However, management best practices from the mainframe environment are now being applied to distributed computing.

DM Review, 21 March 2002, reports that “people from the two previously separate cultures—mainframe and distributed [systems]—are coming together to architect, develop, and manage the resulting unified infrastructure.”

  1. Virtualization—“Developers of distributed applications can learn from the approach mainframe developers use to build applications that operate effectively in virtualized environments…[such that] operating systems and applications bounce from one server to another as workloads change,” i.e., effective load balancing (a toy sketch follows this list). This improves on distributed applications that “have traditionally been designed to run on dedicated servers,” resulting in data centers with thousands of servers running at low utilization rates, consuming lots of power, and having generally low return on investment.
  2. Clustering—“A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer [like a mainframe]…Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.” (Wikipedia) The strategy here is to “reduce the number of servers you need with virtualization, while providing scaling and redundancy with clustering.”
  3. Standardization—Distributed computing has traditionally been known for freedom of choice, marked by “diversity of hardware, operating platforms, application programming languages, databases, and packaged applications—and an IT infrastructure that is complex to manage and maintain. It takes multiple skill sets to manage and support the diverse application stack…standardization can help you get a handle on [this].”
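
As a toy version of the load balancing described in item 1, the sketch below assigns each new unit of work to the least-loaded node, so a cluster of machines behaves like a single computer; all names and numbers are invented.

```python
import heapq

def assign(jobs, node_count=4):
    """Greedy least-loaded placement across a small cluster."""
    heap = [(0.0, n) for n in range(node_count)]   # (current load, node id)
    placement = []
    for job_load in jobs:
        load, node = heapq.heappop(heap)           # lightest node right now
        placement.append((f"node{node}", job_load))
        heapq.heappush(heap, (load + job_load, node))
    return placement

# Work "bounces" to whichever node has capacity, as on a virtualized mainframe.
print(assign([5, 3, 8, 2, 7, 4]))
```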

Thus, as we evolve the IT architecture from mainframe to distributed computing, the change is not so much revolutionary as evolutionary. The lessons of mainframe computing that protected our data, ensured efficient utilization and redundancy, and made for a cost-effective IT infrastructure do not have to be lost in a distributed environment. In fact, as architects, we need to ensure the best of both worlds.

