Showing posts with label Web 3.0.

November 16, 2013

Web 1-2-3

The real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning. 

What IBM has emerging, above the other cloud providers, is the real deal: Watson, its cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power. 

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process it to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all medical research, the latest scientific breakthroughs, or all financial reports, and process them into information intelligence. 

In terms of cognitive computing, think of Apple's Siri, but Watson doesn't just tell you where the local pizza parlors are; it can tell you how to make a better pizza. 

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This included all sorts of online information available anytime and anywhere, typically published to the masses by organizational webmasters. 

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be think-talk, Cognitive Computing. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.

In short, it's one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level. 

Soon we can usher in Kurzweil's Singularity with Watson leading the technology parade. ;-)

(Source Photo: Andy Blumenthal)

September 24, 2010

The User-centric Web

David Siegel has written a book called “Pull: The Power of the Semantic Web To Transform Your Business” (Dec. 2009).

The main idea is that businesses (suppliers) need to adapt to a new world, where rather than them “push” whatever data they want to us when they want, we (consumers) will be able to get to the information we want and “pull” it whenever we need it (i.e. on demand).

Siegel identifies three types of data online, of which less than 1% is currently visible as web pages:

  • Public Web—what “we normally see when searching and browsing for information online”: at least 21 billion pages indexed by search engines.
  • Deep Web—the “large data repositories that require their own internal searches,” such as Facebook, Craigslist, etc.—“about 6 trillion documents generally not seen by search engines.”
  • Private Web—data that “we can only get access to if we qualify: corporate intranets, private networks, subscription-based services, and so on”—about 3 trillion pages also not seen by search engines.

In the future, Siegel sees an end of push (i.e. viewing just the Public Web) and instead a new world of pull (i.e. access to the Deep Web).

Moreover, Siegel builds on the “Semantic Web” definition of Sir Tim Berners-Lee, who coined the term in the 1990s, describing a virtual world where:

  • Data is unambiguous (i.e. it means exactly the same thing to any person or any system).
  • Data is interconnected (i.e. it lives online in a web of databases, rather than in incompatible silos buried and inaccessible).
  • Data has an authoritative source (i.e. each piece of information has a unique name, single source, and specified terms of distribution).
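As an illustration, those three properties can be sketched in a few lines of plain Python (the URIs, facts, and sources here are hypothetical; real Semantic Web systems express this in RDF):

```python
# Sketch of the Semantic Web's three properties, per Berners-Lee:
# - unambiguous: each entity is named by a unique URI
# - interconnected: facts link entities to one another
# - authoritative: each fact records where it came from
# Each fact is a tuple: (subject, predicate, object, source).
facts = [
    ("http://example.org/person/ada", "worksFor",
     "http://example.org/org/acme", "hr-db"),
    ("http://example.org/org/acme", "locatedIn",
     "http://example.org/place/nyc", "registry"),
]

def describe(entity, facts):
    """Pull every fact about the given entity, with its source."""
    return [(p, o, src) for s, p, o, src in facts if s == entity]

print(describe("http://example.org/person/ada", facts))
```

Because the names are globally unique URIs rather than local column headers, any consumer can “pull” and combine facts from different sources without ambiguity.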

While I enjoyed browsing this book, I wasn’t completely satisfied:

  1. It’s not a tug of war between push and pull; they are not mutually exclusive. Providers push information out (i.e. make information available), and at the same time, consumers pull information in (i.e. access it on demand).
  2. It’s not just about data anymore—it’s also about the applications (“apps”). Like data, apps are pushed out by suppliers and are pulled down by consumers. The apps make the data friendly and usable to the consumer. Rather than providing raw data or information overload, apps can help ready the data for end-user consumption.

All semantics aside, getting to information on the web is important—through a combination of push and pull—but ultimately, making the information more helpful to people through countless innovative applications is the next phase of how the web is evolving.

I would call this next phase the “user-centric web.” It relies on a sound semantic web—where data is unambiguous, interconnected, and authoritative—but also takes it to the next level, serving up sound semantic information to the end user through a myriad of applications that make the information available in ever-changing and intelligent ways. This is more user-centric, and ultimately closer to where we want to be.



February 4, 2008

Web 3.0 and Enterprise Architecture

While the Web 1.0 is viewed as an information source, and Web 2.0 as participatory, Web 3.0 is envisioned as Semantic (or the Semantic Web).

MIT Technology Review (March 2007) reports, in an article entitled “A Smarter Web” by John Borland, that Web 3.0 will “give computers the ability—the seeming intelligence—to understand content on the World Wide Web.” The goal is to “take the web and make it…a system that can answer questions, not just get a pile of documents that might hold an answer.”

In The New York Times, November 2007, John Markoff defined Web 3.0 “as a set of technologies that offer efficient new ways to help computers organize and draw conclusions from online data.”

Not only individuals would benefit from the Semantic Web, but companies too that “are awash in inaccessible data on intranets, in unconnected databases, even on employees’ hard drives.” The idea is to bring the data together and make it useful.

Many of you have heard of the Dewey Decimal System for organizing information. Melvil “Dewey was no technologist, but the libraries of his time were as poorly organized as today’s Web. Books were often placed in simple alphabetical order, or even lined up by size…Dewey found this system appalling: order, he believed, made for smoother access to information.” (MIT Technology Review) In 1876, Dewey developed what became the Dewey Decimal System, a library classification that attempts to organize all knowledge. (Wikipedia) In the Dewey system, books on a similar subject are co-located, aiding discovery of and access to information.

MIT Technology Review contends that, like Melvil Dewey, web browser and search engine companies such as Microsoft and Google want to help consumers locate information more efficiently.

“By the mid-1990s, the computing community as a whole was falling in love with the idea of metadata, a way of providing Web pages with computer-readable instructions or labels…metadata promised to add the missing signage. XML—the code underlying today’s complicated websites, which describes how to find and display content—emerged as one powerful variety.” The problem was that this was not a systematic way of labeling data, since each developer used “their own custom ‘tags’—as if different cities posted signs in related but mutually incomprehensible dialects.”
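The dialect problem is easy to demonstrate (a hypothetical example with invented tag names): two sites mark up the same fact with different custom XML tags, and a parser can read both, yet has no way to know the tags mean the same thing.

```python
# Two sites describe the same book with different custom XML vocabularies.
# A parser extracts both values, but nothing tells the machine that
# <author> and <writer> are the same concept without a shared schema.
import xml.etree.ElementTree as ET

site_a = ET.fromstring("<book><author>Jane Austen</author></book>")
site_b = ET.fromstring("<item><writer>Jane Austen</writer></item>")

print(site_a.find("author").text)  # Jane Austen
print(site_b.find("writer").text)  # Jane Austen
# Same value, different vocabularies: equating them requires an ontology.
```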

In 1999, the World Wide Web Consortium (W3C) came up with the Resource Description Framework (RDF) for locating and describing information. Since then, the vision has been for “a web that computers could browse and understand much as humans do…analogous to creating detailed road signs that cars themselves could understand and upon which they could act,” independent of human action. However, obstacles remain: how do we get everyday, busy people to create the ontologies needed to relate data across the web (data that is currently described in a myriad of ways) so that computers can then read and understand it?
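The core RDF idea, expressing everything as (subject, predicate, object) triples that a program can traverse, can be sketched roughly as follows (the names and facts are hypothetical, and this is plain Python rather than real RDF syntax):

```python
# Everything is a (subject, predicate, object) triple; a program can
# "browse" the data by following links from one triple to the next,
# the way the W3C vision describes computers navigating the web.
triples = {
    ("Alice", "knows", "Bob"),
    ("Bob", "livesIn", "Boston"),
    ("Boston", "locatedIn", "USA"),
}

def objects(subject, predicate):
    """All objects linked from a subject by a given predicate."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Follow the links: who does Alice know, and where do they live?
for person in objects("Alice", "knows"):
    print(person, "lives in", objects(person, "livesIn"))
```

The hard part the article identifies is not the traversal but agreeing on the predicates: the machine can follow “livesIn” only if every publisher uses that same term, which is what a shared ontology provides.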

A second area of doubt on the realism of a Semantic Web is whether computers can truly understand the intricacies (or connotations) of human language. For example, can a computer realistically make sense of a word like marriage that can have subtle distinctions of “monogamy, polygamy, same-sex relationships, and civil unions?”

Despite the perceived obstacles, many remain not only fixated, but enamored with the notion of a Semantic Web that can not only provide amazing amounts of information, but also, like a human being, is able to analyze the data holistically, and provide actionable artificial intelligence (AI).

To enterprise architects, the Semantic Web (or Web 3.0) would be an incredible leap forward, enabling organizations and individuals to get more intelligence from the web, be more productive, and ultimately run more efficient and effective business processes, supported by a higher order of computing enablement. Additionally, for enterprise architects themselves, who deal with inordinate amounts of business and technical data—structured and unstructured—Web 3.0 technologies and methods for better mining and analyzing the data would be a welcome capability for advancing the discipline.

