Showing posts with label Hal.

August 31, 2012

Can a Computer Run the Economy?

I am not talking about socialism or totalitarianism, but about computers and artificial intelligence.

For a long time, we have seen political infighting and finger-pointing stall progress on creating jobs, balancing trade, taming the deficits, and sparking innovation. 

But what if we somehow took out the quest for power and influence from navigating our prosperity?

In politics, unfortunately, no one seems to want to give the other side the upper hand--a political win with voters or a leg up for their platform.

But through the disciplines of economics, finance, organizational behavior, industrial psychology, sociology, geopolitics, and more--can we program a computer to steer the economy using facts rather than fighting and fear?

Every day, we need to make decisions, big and small, on everything from interest rates, tax rates, borrowing, defense spending, and entitlements to pricing strategies, regulation of critical industries, trade pacts, and more.

Left in the hands of politicians, we inject personal biases and even hatreds, power plays, grandstanding, bickering, and "pork-barrel" decision making, rather than rational action based on analysis of alternatives, cost-benefit studies, risk management, and underlying ethics.

We thumb our noses (rightfully) at global actors on the political stage, debating who is rational and who is perhaps just plain crazy enough to hit "the button."

But back here at home, we can argue about whether the button of economic destruction has already been hit, with the clock ticking down as the national deficit spirals upward, education scores plummet, and jobs are lost overseas.

Bloomberg BusinessWeek (30 August 2012) suggests using gaming as a way to get past the political infighting: small (diverse) groups were asked to make unambiguous trade-off decisions to guide the economy rather than to "get reelected"--and the results, pleasantly, were cooperation and collaboration.

Yes, a game is just a game, but there is a lesson we can learn from this--economic decision-making can be made (more) rationally by rewarding teamwork and compromise, rather than through all-or-nothing, fall-on-your-sword, party-against-party, take-no-prisoners politics.

I would suggest that gaming is a good example of how we can improve our economy, but I can see a time coming when "big data," analytics, artificial intelligence, modeling and simulation, and high-performance computing take this a whole lot further--where computers, guided and inspired by people, help us make rational economic choices, thereby trumping decisions made by gut, intuition, politics, and subjective whims.

True, computers are programmed by human beings--so won't we just introduce our biases and conflict into the systems we develop and deploy?

The idea here is to filter out those biases by using diverse teams of rational decision-makers who work together, applying subject matter expertise and best practices, and then having the computers learn over time to improve performance--all of this kept separate from the desire and process to get votes and get elected.

Running the economy should not be about catering to constituencies or getting and keeping power for power's sake, but rather about rational decision-making for society--where the greatest good is provided to the greatest number, where the future takes center stage, where individuals' preferences and rights are respected and upheld, and where ethics and morality underpin every decision we make.

The final question is whether we will be ready to course-correct--with collaboration and advances in technology--to get out of this economic mess before it gets even more serious.

(Source Photo: here with attribution to Erik Charlton)


September 15, 2009

Happy Birthday Internet

On September 2, 2009, the Internet celebrated its fortieth birthday.

ComputerWorld (14 Sept. 2009) reports that 40 years ago “computer scientists created the first network connection, a link between two computers at the University of California, Los Angeles.” This was the culmination of research funded by the Defense Advanced Research Projects Agency (DARPA) in the 1960s.

This information technology milestone was followed by another, less than two months later, on October 29, 1969, when Leonard Kleinrock "sent a message from UCLA to a node at the Stanford Research Institute in Palo Alto, California."

While the Internet conceptually became a reality four decades ago, it didn’t really go mainstream until the 1990s—with the founding of the World Wide Web project in 1989, AOL for DOS in 1991, and the Mosaic browser in 1993.

Now, I can barely remember what life was like before the Internet. Like the black-and-white pictures of yesteryear: life was simple and composed, but also sort of lifeless, more boring indeed, and less colorful for sure. In other words, I wouldn’t want to go back.

Also, before the Internet, the world was a lot smaller. Even with connections to others far away—by phone and by plane—people’s day-to-day connections were more limited to those in close proximity—on their block, down on Main Street, or in and around town. It took an extra effort to communicate, share, deal, and interchange with people beyond the immediate area.

At present, with the Internet, every email, chat, information share, e-commerce transaction, social media exchange, and application is a blast across the reaches of cyberspace. And like the vastness of outer space beyond planet Earth, cyberspace represents seemingly endless connectivity to others over the Internet.

What will the Next Generation Internet (NGI) bring us?

ComputerWorld suggests the following—many of which are already with us today:

  • Improved mobility—like “showing you things about where you are” (for example, where’s the nearest restaurant, restroom, or service station or even where are your friends and family members).
  • Greater information access—“point your mobile phone at a billboard, and you’ll see more information” about a particular advertisement.
  • Better e-commerce—“use the Internet to immediately pay for goods.”
  • Enhanced visualization—Internet will “take on a much more three-dimensional look.”

I believe the future Internet is going to be like Second Life on steroids with a virtual environment that is completely immersive—interactive with all five senses and like speaking with Hal the computer, answering your every question and responding to your every need.

It’s going to be great and I’m looking forward to saying “Happy Birthday Internet” for many more decades, assuming we don’t all blow ourselves out of the sky first.



October 24, 2007

Terascale Computing and Enterprise Architecture

In MIT Technology Review, 26 September 2007, in an article entitled “The Future of Computing, According to Intel” by Kate Green, the author describes terascale computing—computational power beyond a teraflop (a trillion calculations per second).
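As a back-of-the-envelope illustration of what "a trillion calculations per second" means in practice, here is a minimal sketch; the matrix-multiply workload is my own hypothetical example, not one from the article:

```python
# One teraflop = 10**12 floating-point operations per second.
TERAFLOP = 10**12

def seconds_at_one_teraflop(num_operations: float) -> float:
    """Time (in seconds) a 1-teraflop machine needs for a given op count."""
    return num_operations / TERAFLOP

# Multiplying two 10,000 x 10,000 matrices takes roughly 2 * n**3 operations.
n = 10_000
matmul_ops = 2 * n**3  # 2e12 operations
print(seconds_at_one_teraflop(matmul_ops))  # -> 2.0 (seconds)
```

In other words, a workload of two trillion operations that would tie up a sub-gigaflop desktop of that era for over an hour finishes in about two seconds at terascale.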

“One very important benefit is to create the computing ability that's going to power unbelievable applications, both in terms of visual representations, such as this idea of traditional virtual reality, and also in terms of inference. The ability for devices to understand the world around them and what their human owners care about.”

How do computers learn inference?

“In order to figure out what you're doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously. The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated.”

What’s an example of how inference can be used in today’s consumer technologies?

For example, sensors in your phone could determine whether you should be interrupted for a phone call. “The intelligent system could be using sensors, analyzing speech, finding your mood, and determining your physical environment. Then it could decide [whether you need to take a call].”
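The article doesn't spell out an algorithm, but a toy version of that decision might combine a few sensor-derived signals into a single score. All signal names and weights below are invented for illustration; a real system would learn the weights statistically from labeled sensor data rather than hard-coding them:

```python
# Toy interruptibility score: combine hypothetical sensor-derived signals.
# Signal names and weights are invented for illustration only.
WEIGHTS = {
    "in_meeting": -0.6,        # calendar/speech analysis: user is in a meeting
    "driving": -0.8,           # accelerometer/GPS: user appears to be driving
    "caller_is_family": +0.5,  # caller ID matched against contacts
    "user_mood_calm": +0.2,    # hypothetical mood estimate from voice analysis
}

def should_interrupt(signals, threshold=0.0):
    """Sum the weights of the active signals; interrupt if above threshold."""
    score = sum(WEIGHTS[name] for name, active in signals.items() if active)
    return score > threshold

# Driving, but the caller is family: -0.8 + 0.5 = -0.3 -> hold the call.
print(should_interrupt({"driving": True, "caller_is_family": True}))  # -> False
```

The interesting part, per the article, is that the correlations behind those weights come from rich statistical analysis of sensor feeds rather than from a programmer's guesses.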

What is machine learning?

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn." At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. (Wikipedia)
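To make the inductive case concrete, here is a minimal sketch of extracting a rule from data instead of hand-coding it; the sensor readings and the midpoint heuristic are my own invented example, assuming the positive and negative readings are cleanly separated:

```python
# Minimal sketch of inductive learning: derive a decision threshold from
# labeled examples rather than writing the rule by hand. Data is made up.

def learn_threshold(samples):
    """Midpoint between the highest 'normal' and lowest 'alarm' reading."""
    positives = [x for x, label in samples if label]
    negatives = [x for x, label in samples if not label]
    return (max(negatives) + min(positives)) / 2

# Hypothetical sensor readings labeled "alarm" (True) or "normal" (False).
data = [(0.2, False), (0.4, False), (0.9, True), (1.1, True)]
threshold = learn_threshold(data)  # midpoint between 0.4 and 0.9, i.e. ~0.65
print(0.8 > threshold)  # a new reading of 0.8 is classified "alarm" -> True
```

The learned rule ("alarm if the reading exceeds ~0.65") was extracted from the data set itself, which is the essence of the inductive methods described above, just at a vastly smaller scale.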

Where’s all this computational power taking us?

Seems like we’re moving ever closer to the reality of what was portrayed as HAL 9000, the supercomputer from 2001: A Space Odyssey—HAL was “the pinnacle in artificial machine intelligence, with a remarkable, error-free performance record…designed to communicate and interact like a human, and even mimic (or reproduce) human emotions.” (Wikipedia) An amazing vision for a 1968 science fiction film, no?

From a User-centric EA perspective, terascale computing, machine learning, and computer inference represent tremendous new technical capabilities for our organizations. They are a leap in computing power and end-user application that has the capability to significantly alter our organizations' business activities and processes and enable better, faster, and cheaper mission execution.