
March 17, 2008

The Evils of Computers and Enterprise Architecture

Computers and information technology have revolutionized how we do just about everything in our lives. Yet some people have demonized technology either out of fear, ignorance, or a belief that we will not be able to control the awesome power of the technology we are developing.

The Wall Street Journal (15-16 March 2008) reports that Joseph Weizenbaum, an MIT professor and gifted computer programmer during the 1960s and '70s, later came “to preach the evils of computers.”

Weizenbaum created a “computer program called Eliza that was designed to simulate a psychiatrist…but after test subjects told him the program really empathized with their problems, Mr. Weizenbaum became a digital Jeremiah, and spent decades preaching the computer apocalypse.”

Surely Weizenbaum isn’t alone in his concern that computers could become smarter (and stronger) than people and pose a dire threat to humankind’s very existence. Hollywood has portrayed these fears in 2001: A Space Odyssey, I, Robot, The Terminator, WarGames, and other hit movies.

Weizenbaum “soured on computers and condemned automated decision making as antihuman.”

“He raised questions that are as relevant today as they were when he first raised them” about 40 years ago.

As an enterprise architect, my job is to align technology solutions to business problems and requirements. Am I to consider the potential for the malevolent information system, database, storage server, or network router when trying to use technology to help achieve mission results?

OK. Maybe the question is a little too facetious. The truth is that computer processing power keeps growing at an accelerating pace, roughly in line with Moore’s Law. Computers can now perform trillions of calculations per second. Who can even imagine?
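To get a feel for that growth, here is a minimal sketch of Moore’s Law as compound doubling. The starting figure (roughly 2,300 transistors on Intel’s 4004 in 1971) is a commonly cited number; the two-year doubling period is the usual rule of thumb, not an exact law.

```python
def moores_law(initial_count, years, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)

# From ~2,300 transistors in 1971, project 37 years forward to 2008:
projected = moores_law(2_300, years=37)
print(f"{projected:,.0f}")  # on the order of a billion transistors
```

The exponent is what makes the growth hard to intuit: the same rule that took chips from thousands to billions of transistors keeps compounding.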

Is it possible that, at some point, a computer or robot will go loony and do the unthinkable? Of course it is. Don’t some people have pit bulls that are friendly to their owners and then go nutty and attack the neighbor’s poodle, or the neighbor himself? Don’t we all drive cars that are wonderful transportation mechanisms, but also hurt and kill thousands of people a year?

We raise and develop things that have tremendous capability to improve our way of life; however, they also have the potential to hurt us if not properly controlled.

A time is coming when we will have to worry about controlling the very machines we created to help us with our everyday tasks. We will have to architect safeguards to protect people from the very technologies we developed and deployed to aid them.


October 24, 2007

Terascale Computing and Enterprise Architecture

In MIT Technology Review (26 September 2007), Kate Green’s article “The Future of Computing, According to Intel” describes terascale computing: computational power beyond a teraflop (a trillion calculations per second).

“One very important benefit is to create the computing ability that's going to power unbelievable applications, both in terms of visual representations, such as this idea of traditional virtual reality, and also in terms of inference. The ability for devices to understand the world around them and what their human owners care about.”

How do computers learn inference?

“In order to figure out what you're doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously. The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated.”

What’s an example of how inference can be used in today’s consumer technologies?

For example, sensors in your phone could determine whether you should be interrupted for a phone call. “The intelligent system could be using sensors, analyzing speech, finding your mood, and determining your physical environment. Then it could decide [whether you need to take a call].”
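The quoted example can be sketched as a simple scoring rule. Everything here is invented for illustration (the sensor names, thresholds, and weights are hypothetical); a real system, as the article notes, would learn these relationships statistically from sensor data rather than hard-code them.

```python
def should_interrupt(ambient_noise_db, speech_detected, calendar_busy,
                     caller_priority):
    """Score whether an incoming call should ring through."""
    score = 0.0
    score += 2.0 if caller_priority == "high" else 0.0
    score -= 1.5 if calendar_busy else 0.0          # in a meeting?
    score -= 1.0 if speech_detected else 0.0        # already talking?
    score -= 0.5 if ambient_noise_db > 70 else 0.0  # loud environment?
    return score > 0

# A high-priority caller rings through even during a scheduled meeting:
print(should_interrupt(55, speech_detected=False, calendar_busy=True,
                       caller_priority="high"))  # True
```

The point of the sketch is the fusion step: no single sensor decides; the system combines several weak signals about mood and environment into one judgment.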

What is machine learning?

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn." At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. (Wikipedia)
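Inductive learning — extracting a rule from examples — can be shown in a few lines. This toy learner finds the single threshold that best separates labeled data; the data set (hours of sensor activity vs. an “at work” label) is invented for illustration.

```python
def learn_threshold(samples):
    """Find the split value that best separates labels 0 and 1.

    samples: list of (value, label) pairs; classifies value >= t as 1.
    """
    best_t, best_err = None, float("inf")
    for t in sorted(x for x, _ in samples):
        # Count mistakes this candidate threshold makes on the data set.
        err = sum((x >= t) != bool(y) for x, y in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hours of sensor activity vs. "user is at work" label:
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (8, 1)]
print(learn_threshold(data))  # 6 -- learned rule: "active >= 6 hours"
```

No one told the program where the boundary was; it induced the rule from the data — a miniature version of what the statistical methods in the quote do at scale.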

Where’s all this computational power taking us?

It seems we’re moving ever closer to what was portrayed as HAL 9000, the supercomputer from 2001: A Space Odyssey. HAL was “the pinnacle in artificial machine intelligence, with a remarkable, error-free performance record…designed to communicate and interact like a human, and even mimic (or reproduce) human emotions.” (Wikipedia) An amazing vision for a 1968 science-fiction film, no?

From a User-centric EA perspective, terascale computing, machine learning, and computer inference represent tremendous new technical capabilities for our organizations. They are a leap in computing power and end-user applications that can significantly alter our organizations’ business activities and processes and enable better, faster, and cheaper mission execution.