Showing posts with label Artificial Intelligence. Show all posts

October 1, 2009

Conversational Computing and Enterprise Architecture

In an article entitled “Intelligent, Chatty Machines” (MIT Technology Review, 19 September 2007), Kate Green describes advances in computers’ ability to understand and respond to conversation. No, really.

Conversational computing works by using a “set of algorithms that convert strings of words into concepts and formulate a wordy response.”

The software product that enables this is called SILVIA and it works like this: “during a conversation, words are turned into conceptual data…SILVIA takes these concepts and mixes them with other conceptual data that's stored in short-term memory (information from the current discussion) or long-term memory (information that has been established through prior training sessions). Then SILVIA transforms the resulting concepts back into human language. Sometimes the software might trigger programs to run on a computer or perform another task required to interact with the outside world. For example, it could save a file, query a search engine, or send an e-mail.”
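The flow the article describes can be sketched as a toy pipeline. This is purely illustrative, assuming a keyword-based notion of “concepts”; SILVIA’s actual internals are proprietary, and every name below is hypothetical.

```python
# Toy sketch of the concept pipeline described in the article: words become
# concepts, concepts mix with short- and long-term memory, and a response
# comes back out. All names here are hypothetical, not SILVIA's real design.

LONG_TERM = {"weather": "I was trained to check the forecast."}  # prior training

def extract_concepts(utterance: str) -> set:
    """Turn a string of words into crude 'concepts' (here, lowercase keywords)."""
    return {w.strip(".,?!").lower() for w in utterance.split()}

def respond(utterance: str, short_term: list) -> str:
    concepts = extract_concepts(utterance)
    short_term.append(concepts)          # short-term memory: the current discussion
    for concept in concepts:
        if concept in LONG_TERM:         # long-term memory: prior training sessions
            return LONG_TERM[concept]
    return "Tell me more."

memory = []
print(respond("What is the weather today?", memory))
```

A real system would also hook the concept layer to outside actions (saving a file, querying a search engine, sending an e-mail), which here would just be more entries in the long-term table mapped to function calls.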

There has been much research over the years in natural-language processing, but the results so far have not fully met expectations. Still, the time will come when we will be talking with our computers, just like on Star Trek, though I don’t know if we’ll quite be saying “Beam me up, Scotty” yet.

From an enterprise architecture standpoint, the vision of conversational artificial intelligence is absolutely incredible. Imagine the potential! This would change the way we do everyday mission and business tasks. Everything would be affected, from how we execute and support business functions and processes to how we use, access, and share information. Just say the word and it’s done! Won't that be sweet?

I find it marvelous to imagine the day when we can fully engage with our technology on a more human level, such as through conversation. Then we can say goodbye to the keyboard and mouse the way we said goodbye to the typewriter, which is just a museum piece now.



September 26, 2009

The Doomsday Machine is Real

There is a fascinating article in Wired (Oct. 2009) on a Doomsday Machine called “the Perimeter System” created by the Soviets. If anyone tries to attack them with a debilitating first strike, the doomsday machine will take over and make sure that the adversary is decimated in return.

“Even if the US crippled the USSR with a surprise attack, the Soviets could still hit back. It wouldn’t matter if the US blew up the Kremlin, took out the defense ministry, severed the communications network, and killed everyone with stars on their shoulders. Ground-based sensors would detect that a devastating blow had been struck and a counterattack would be launched.”

The Doomsday machine has supposedly been online since 1985, shortly after President Reagan proposed the Strategic Defense Initiative (SDI or “Star Wars”) in 1983. SDI was to shield the US from nuclear attack with space lasers (missile defense). “Star Wars would nullify the long-standing doctrine of mutually assured destruction.”

The logic of the Soviets’ Doomsday Machine was “you either launch first or convince the enemy that you can strike back even if you’re dead.”

The Soviets’ system “is designed to lie dormant until switched on by a high official in a crisis. Then it would begin monitoring a network of seismic, radiation, and air pressure sensors for signs of nuclear explosion.”

Perimeter had checks and balances intended to prevent a mistaken launch. There were four if/then propositions that had to be met before a launch.

Is it turned on?

Yes, then…

Has a nuclear weapon hit Soviet soil?

Yes, then…

Are there still communications links to the Soviet General Staff?

No, then launch authority is transferred to whoever is left in protected bunkers.

Will they press the button?

Yes, then devastating nuclear retaliation!
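The four gates above amount to a short decision chain, which can be sketched in code. This is a toy illustration only; the function and its inputs are hypothetical, not the actual Soviet logic.

```python
# Sketch of Perimeter's four if/then gates as described in the article.
# Names and return values are illustrative, not from any real system.

def perimeter_decision(activated: bool,
                       detonation_on_soviet_soil: bool,
                       general_staff_link_alive: bool,
                       bunker_officer_presses_button: bool) -> str:
    if not activated:
        return "dormant"                    # gate 1: switched on by a high official?
    if not detonation_on_soviet_soil:
        return "no launch"                  # gate 2: has a weapon hit Soviet soil?
    if general_staff_link_alive:
        return "defer to General Staff"     # gate 3: normal chain of command intact
    if bunker_officer_presses_button:       # gate 4: authority falls to the bunker
        return "retaliate"
    return "no launch"

print(perimeter_decision(True, True, False, True))   # -> retaliate
```

Note how the entire weight of the system comes down to that last boolean, which is exactly the point the article makes about the human in the bunker.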

The Perimeter System is the realization of the long-dreaded reality of machines taking over war.

The US never implemented this type of system for fear of “accidents and the one mistake that could end it all.”

“Instead, airborne American crews with the capacity and authority to launch retaliatory strikes were kept aloft throughout the Cold War.” This system relied more on people than on autonomous decision-making by machines.

To me, the Doomsday Machine brings the question of automation and computerization to the ultimate precipice of how far we are willing to go with technology. How much confidence do we have in computers to do what they are supposed to do, and also how much confidence do we have in people to program the computers correctly and with enough failsafe abilities not to make a mistake?

On one hand, automating decision-making can help prevent errors, such as a mistaken retaliatory missile launch in response to nothing more than a flock of geese or a malfunctioning radar. On the other hand, with the Soviets’ Perimeter System, once activated, the entire launch sequence rests with a machine, up until the final push of a button by a low-level duty officer, someone who has authority suddenly transferred to him or her and who is perhaps misinformed and blinded by fear, anger, and the urge to avenge the motherland, all within a 15-minute, do-or-die decision cycle.

The question of faith in technology is not going away. It will only grow more pressing as we continue down the road of computerization, automation, robotics, and artificial intelligence. Are we safer with or without the technology?

There seems to be no going back—the technology genie is out of the bottle.

Further, desperate nations will take desperate measures to protect themselves, and companies hungry for profits will continue to innovate and drive further technological advancement, including semi-autonomous and perhaps even fully autonomous decision-making.

As we continue to advance technologically, we must do so with astute planning, sound governance, thorough quality assurance and testing, and continual reexamination of the ethics of what we are embarking on and where we are headed.

It is up to us to make sure that we take the precautions to foolproof these devices or else we will face the final consequences of our technological prowess.



February 4, 2008

Web 3.0 and Enterprise Architecture

While the Web 1.0 is viewed as an information source, and Web 2.0 as participatory, Web 3.0 is envisioned as Semantic (or the Semantic Web).

MIT Technology Review, March 2007, reports in an article entitled “A Smarter Web” by John Borland that Web 3.0 will “give computers the ability—the seeming intelligence—to understand content on the World Wide Web.” The goal is to “take the web and make it…a system that can answer questions, not just get a pile of documents that might hold an answer.”

In The New York Times, November 2007, John Markoff defined Web 3.0 “as a set of technologies that offer efficient new ways to help computers organize and draw conclusions from online data.”

Not only individuals would benefit from the Semantic Web, but also companies that “are awash in inaccessible data on intranets, in unconnected databases, even on employees’ hard drives.” The idea is to bring the data together and make it useful.

Many of you have heard of the Dewey Decimal System for organizing information. Melvin “Dewey was no technologist, but the libraries of his time were as poorly organized as today’s Web. Books were often placed in simple alphabetical order, or even lined up by size…Dewey found this system appalling: order, he believed, made for smoother access to information.” (MIT Technology Review) In 1876, Dewey developed what became the Dewey Decimal System, a library classification that attempts to organize all knowledge (Wikipedia). In the Dewey system, books on similar subject matter are co-located, aiding the discovery of and access to information.

MIT Technology Review contends that like Melvin Dewey, web browser and search engine companies, like Microsoft and Google, want to help consumers locate information more efficiently.

“By the mid-1990s, the computing community as a whole was falling in love with the idea of metadata, a way of providing Web pages with computer-readable instruction or labels…metadata promised to add the missing signage. XML—the code underlying today’s complicated websites, which describes how to find and display content—emerged as one powerful variety.” The problem was that this was not a systematic way of labeling data, since each developer used “their own custom ‘tags’—as if different cities posted signs in related but mutually incomprehensible dialects.”

In 1999, the World Wide Web Consortium (W3C) came up with the Resource Description Framework (RDF) for locating and describing information. Since then, the vision has been for “a web that computers could browse and understand much as humans do…analogous to creating detailed road signs that cars themselves could understand and upon which they could act,” independent of human action. However, obstacles remain: how do you create ontologies that busy, everyday people will actually use to relate data across the web (data currently described in myriad ways) so that computers can then read and understand it?
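RDF’s core idea is simply that every statement is a subject–predicate–object triple, and that a machine can answer questions by pattern-matching over triples. A minimal sketch in plain Python, using tuples rather than a real RDF library or serialization (the W3C stack uses formats like RDF/XML and Turtle), and with made-up resource names:

```python
# Minimal illustration of RDF's subject-predicate-object triple model.
# The resource names are invented for the example, not real identifiers.

triples = {
    ("MIT_Technology_Review", "publishedArticle", "A_Smarter_Web"),
    ("A_Smarter_Web", "writtenBy", "John_Borland"),
    ("A_Smarter_Web", "topic", "Semantic_Web"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Ask: who wrote A_Smarter_Web?
print(query(subject="A_Smarter_Web", predicate="writtenBy"))
```

The hard part the article describes is not this mechanism, which is trivial, but getting the whole web to agree on shared vocabularies (ontologies) for the subjects and predicates.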

A second area of doubt about the feasibility of a Semantic Web is whether computers can truly understand the intricacies (or connotations) of human language. For example, can a computer realistically make sense of a word like marriage, which can carry subtle distinctions of “monogamy, polygamy, same-sex relationships, and civil unions?”

Despite the perceived obstacles, many remain not only fixated on, but enamored with, the notion of a Semantic Web that can not only provide amazing amounts of information, but also, like a human being, analyze the data holistically and provide actionable artificial intelligence (AI).

To enterprise architects, the Semantic Web (or Web 3.0) would be an incredible leap forward, enabling organizations and individuals to get more intelligence from the web, be more productive, and ultimately provide for more efficient and effective business processes, supported by a higher order of computing enablement. Additionally, for enterprise architects themselves, who deal with inordinate amounts of business and technical data, structured and unstructured, Web 3.0 technologies and methods for better mining and analyzing the data would be a welcome capability for advancing the discipline.



August 28, 2007

Data Architecture Done Right

Data architecture done right provides for the discovery and exchange of data assets between producers and consumers of data.

Data discovery is enabled by data that is understandable, trusted, and visible.

Data exchange is facilitated by data that is accessible and interoperable.

Together, data discovery and exchange are the necessary ingredients for information sharing.

Why is it so hard?

Primarily, it’s a coordination issue. We need to coordinate not only internally within our own organization (often already large and complex), but also externally, between organizations, horizontally and vertically. It’s quite a challenge to get everyone describing data (metadata) and cataloging data in the same way. Each of us, each office, each division, and so forth has its own standards and way of communicating. As the saying goes, “you say poTAYto, and I say poTAHto.”

Can we ever get everyone talking the same language? And even if we could, do we really want to limit the diversity and creativity with which we express ourselves? One way to state a Social Security number is helpful for interoperability, but is there really only one "right" way to say it? How do we create data interoperability without creating only one right way, and many wrong ways, to express ourselves?
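One practical middle ground is to let each office keep its local format and map everything to a single canonical form at the point of exchange. A small sketch using the Social Security number example, with the local formats invented for illustration:

```python
import re

# Hypothetical: three offices record the same Social Security number three
# different ways. A canonicalizer maps each local format to one shared form,
# so producers keep their conventions and consumers see consistent data.

def canonical_ssn(raw: str) -> str:
    digits = re.sub(r"\D", "", raw)      # strip dashes, spaces, dots, etc.
    if len(digits) != 9:
        raise ValueError(f"not a 9-digit SSN: {raw!r}")
    return f"{digits[0:3]}-{digits[3:5]}-{digits[5:9]}"

variants = ["123-45-6789", "123 45 6789", "123456789"]
assert len({canonical_ssn(v) for v in variants}) == 1   # all map to one form
print(canonical_ssn("123456789"))
```

This is easy for a rigidly structured field like an SSN; the open question in the post stands for free-form data, where no regular expression will rescue us.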

Perhaps, the future will bring artificial intelligence closer to being able to interpret many different ways of communicating and making them interoperable. Sort of like the universal translator on Star Trek.
