
April 2, 2023

If Pharaoh Had AI

Please see my new article in The Times of Israel called "If Pharaoh Had AI."

From manufacturing to customer service to law enforcement and defense, AI could one day be in the driver’s seat while we are off sunning on some remote beach in the Caribbean. As an example, just imagine a future military in which wars are fought by autonomous drones in the air and sea and killer robots on land, led by a master AI core at the Pentagon in control of all global operations, including our triad of nuclear warheads.

In short, the message for Passover isn’t just the tremendous potential of AI for the good or even the threat it poses of becoming too powerful to control, but what happens when the bad guys (dictators, despots, and megalomaniacs), like the Pharaoh of yesteryear, are dangerously using AI to enslave the world to their vision of hate and contempt for democracy, human rights, and freedom for us all?

(Credit Photo: Ilnur Dulyanov via https://pixabay.com/illustrations/square-soldier-green-red-angry-7871431/)

December 30, 2015

Simplify Me

So here's the monitor in the "modern" and beautiful Fort Lauderdale International airport. 

Can you see the number of electrical plugs, wires, connections, input/output ports, etc. on this device?

Obviously, it is comical and a farce as we near the end of 2015. 

Think about the complexity in building this monitor...in connecting it...in keeping it operational.

Yes, we are moving more and more to cellular and wireless communications, to miniaturization, to simple and intuitive user interfaces, to paperless processing, to voice recognition, to natural language processing, and to artificial intelligence.

But we are not there yet.

And we need to continue to make major strides to simplify the complexity of today's technology. 

- Every technology device should be fully useful and usable by every user on first contact. 

- Every device should learn upon interacting with us and get better and better with time. 

- Every device should have basic diagnostic and self-healing capability (see the sketch below). 
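To make that last point a little more concrete, here is a minimal sketch in Python of what a self-diagnostic and self-healing loop could look like; the subsystems and repair actions are invented placeholders, not any real device's API.

```python
import time

def run_diagnostics():
    """Hypothetical self-checks; each returns True if the subsystem is healthy."""
    checks = {
        "network": lambda: True,   # e.g., ping a known host
        "display": lambda: True,   # e.g., query the display driver
        "storage": lambda: True,   # e.g., verify free space and a test read/write
    }
    return {name: check() for name, check in checks.items()}

def self_heal(subsystem):
    """Hypothetical repair action, e.g., restart the failed subsystem's service."""
    print(f"Restarting {subsystem}...")
    return True  # report whether the restart fixed the problem

for _ in range(3):  # a real device would loop forever as a background task
    for subsystem, healthy in run_diagnostics().items():
        if not healthy and not self_heal(subsystem):
            print(f"{subsystem} needs human attention")  # escalate only as a last resort
    time.sleep(60)  # re-check periodically
```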

Any instructions that are necessary should be provided by the device itself--such as the device telling you step by step what to do to accomplish the task at hand--no manual, no Google instructions, no Siri questions...just you and the device interacting as one. 

User-friendly isn't enough anymore...it should be completely user-centric, period. 

Someday...in 2016 or beyond, we will get there, please G-d. ;-)

(Source Photo: Andy Blumenthal)

June 15, 2015

Ex Machina Will Even Turn The Terminator

So this was a really cool display at the movie theater yesterday...

They had this head of the Terminator in an enclosed case and roped off. 

Shiny metal-alloy skull, bulging bright evil red eyes, and really grotesque yellowed teeth. 

This certainly gets the attention of passersby for the upcoming new movie, Terminator Genisys (coming out July 1). 

Anyway, the Terminator is the ugly dude, especially when compared with Ava, the robot/artificial intelligence in Ex Machina, which we saw yesterday. 

The Turing test is nothing for Ava!

She can not only fool them as to her humanity, but also outmaneuver them with her wit, sexuality, and a good dose of deceit and manipulation. 

Frankly, I think AI Ava could even turn the terrible Terminator to her side of things--my bet is that movie comes in 2017. 

(Source Photo: Andy Blumenthal)

September 29, 2014

Talk To The Hand

So you know the saying "Talk to the hand, because the face ain't home..."?

Well, IPSoft has an artificial intelligence agent called Amelia that handles service requests. 

Instead of talking to a human customer service rep, you get to talk to a computer. 

The question is whether Amelia is like talking to a hand, or is someone really home when using AI to adroitly address your service issues?

Now apparently, according to the Wall Street Journal, this computer is pretty smart and can ingest every single manual and prior service request and learn how to answer a myriad of questions from people. 
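The Journal doesn't spell out how Amelia works under the hood, but a toy version of "ingest the manuals and answer questions" can be sketched with simple text similarity. Here is a hedged example in Python using scikit-learn's TF-IDF; the knowledge-base entries are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up knowledge base of manual excerpts and resolved service tickets
knowledge_base = [
    "To reset your password, open Settings, choose Security, then Reset Password.",
    "If the printer shows offline, power-cycle it and re-add it under Devices.",
    "Refunds are processed within 5 business days after the return is received.",
]

vectorizer = TfidfVectorizer(stop_words="english")
kb_vectors = vectorizer.fit_transform(knowledge_base)

def answer(question):
    """Return the knowledge-base entry most similar to the customer's question."""
    q_vector = vectorizer.transform([question])
    scores = cosine_similarity(q_vector, kb_vectors)[0]
    return knowledge_base[scores.argmax()]

print(answer("How do I reset my password?"))
```

A real agent like Amelia presumably layers dialogue, ongoing learning, and escalation to humans on top of retrieval like this.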

On one hand, maybe you'll get better technical knowledge and more consistent responses by talking to a computerized service representative.

But on the other hand, if the interactive voice response systems--with their dead-end menus of call options, endless maze of "If you want to reach X, press Y now," and all the disconnects after being on for 10 minutes already--are any indication of what this will be like, I am leery to say the least. 

The Telegraph does say that Amelia can service customers in 20 languages and, after 2 months, can resolve 64% of "the most common queries" independently, so this is hopeful and maybe even inspiring as to what is to come. 

These days, based on how much time we spend online in the virtual world, I think most people would actually prefer to talk to a knowledgeable computer than to a smart-aleck human who doesn't want to be handling annoying customer calls all day, anyway. 

The key to whether Amelia and her computerized brothers and sisters of the future will be successful is not only how quickly they can find the correct answer to a problem, but also how well they can understand and address new issues that haven't necessarily come up the same way before, and how they handle the emotions of the customer on the line who wishes they didn't have the problem needing this call to begin with. ;-)

(Source Photo: here with attribution to Vernon Chen)

September 24, 2014

Dexterous Drones


OK, after the da Vinci System that uses robotics to conduct surgeries, this may not seem like such a feat, but think again.

While da Vinci is fully controlled by the surgeon, this drone from Drexel University, which can turn valves, doorknobs, and other controls, is on the road to doing it autonomously. 

Think of robots that can manipulate the environment around them, not on a stationary assembly line or doing repetitive tasks, but actually interacting in real time to open/close, turn things on/off, adjust control settings, pick things up/move them, and eventually even sit at a computer or with other people--like you or me--and interface with them. 

Drones and robots will be doing a lot more than surveillance and assembly-line work--with artificial intelligence and machine learning, they will be doing what we do--or close enough. ;-)

November 16, 2013

Web 1-2-3

The real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning. 

What IBM has emerging, above the other cloud providers, is the real deal: Watson, its cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power. 

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process it to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all medical research, up-to-date breakthroughs in science, or all financial reports and so on and process this to come up with information intelligence. 

In terms of cognitive computing, think of Apple's Siri, but Watson doesn't just tell you where the local pizza parlors are; it can tell you how to make a better pizza. 

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This includes all sorts of online information available anytime and anywhere. Typically, organizational webmasters published online content to the masses. 

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include: email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be think-talk, Cognitive Computing. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.

In short, it's one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level. 

Soon we can usher in Kurzweil's Singularity with Watson leading the technology parade. ;-)

(Source Photo: Andy Blumenthal)

August 31, 2012

Can a Computer Run the Economy?

I am not talking about socialism or totalitarianism, but about computers and artificial intelligence.

For a long time, we have seen political infighting and finger-pointing stall progress on creating jobs, balancing trade, taming the deficits, and sparking innovation. 

But what if we somehow took out the quest for power and influence from navigating our prosperity?

In politics, unfortunately, no one seems to want to give the other side the upper hand--a political win with voters or a leg up with their platform.

But through the disciplines of economics, finance, organizational behavior, industrial psychology, sociology, geopolitics, and more--can we program a computer to steer the economy using facts rather than fighting and fear?

Every day, we need to make decisions, big and small, on everything from interest rates, tax rates, borrowing, defense spending, entitlements, pricing strategies, regulating critical industries, trade pacts, and more.

Left in the hands of politicians, we inject personal biases and even hatreds, power plays, grandstanding, bickering, and "pork-barrel" decision making, rather than acting rationally based on analysis of alternatives, cost-benefits, risk management, and underlying ethics. 
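For illustration only, here is a tiny Python sketch of what "analysis of alternatives" with cost-benefit weighting could look like; the policy options, criteria, and numbers are all invented.

```python
# Invented criteria weights: how much each factor matters (sums to 1.0)
weights = {"jobs_created": 0.4, "deficit_impact": 0.35, "risk": 0.25}

# Invented scores per policy option on a 0-10 scale (higher is better)
options = {
    "infrastructure_spending": {"jobs_created": 8, "deficit_impact": 3, "risk": 6},
    "payroll_tax_cut":         {"jobs_created": 6, "deficit_impact": 4, "risk": 7},
    "do_nothing":              {"jobs_created": 2, "deficit_impact": 7, "risk": 5},
}

def weighted_score(scores):
    """Combine criterion scores into a single weighted total."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank the options by total score, best first
for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```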

We thumb our noses (rightfully) at global actors on the political stage, asking who is rational and who is perhaps just plain crazy enough to hit "the button."

But back here at home, we can argue about whether the button of economic destruction has already been hit, with the clock ticking down as the national deficit spirals upward, education scores plummet, and jobs are lost overseas.

Bloomberg BusinessWeek (30 August 2012) suggests using gaming as a way to get past the political infighting and instead have small (diverse) groups make unambiguous trade-off decisions to guide the economy rather than "get reelected"--pleasantly, the results were cooperation and collaboration.

Yes, a game is just a game, but there is a lesson we can learn from this--economic decision-making can be made (more) rationally by rewarding teamwork and compromise, rather than by all-or-nothing, fall-on-your-sword, party-against-party, winner-takes-no-prisoners politics. 

I would suggest that gaming is a good example of how we can improve our economy, but I can see a time coming where "big data," analytics, artificial intelligence, modeling and simulation, and high-performance computing take this a whole lot further--where computers, guided and inspired by people, help us make rational economic choices, thereby trumping decisions by gut, intuition, politics, and subjective whims.

True, computers are programmed by human beings--so won't we just introduce our biases and conflict into the systems we develop and deploy?

The idea here is to filter out those biases using diverse teams of rational decision-makers, working together applying subject matter expertise and best practices and then have the computers learn over time in order to improve performance--this, separate from the desire and process to get votes and get elected.

Running the economy should not be about catering to constituencies or getting and keeping power for power's sake, but rather about rational decision-making for society--where the greatest good is provided to the greatest numbers, where the future takes center stage, where individuals' preferences and rights are respected and upheld, and where ethics and morality underpin every decision we make.  

The final question is whether we will be ready to course-correct with collaboration and advances in technology to get out of this economic mess before it gets even more serious.

(Source Photo: here with attribution to Erik Charlton)


November 5, 2007

Semantic Web and Enterprise Architecture

MIT Technology Review, 29 October 2007 in an article entitled, “The Semantic Web Goes Mainstream,” reports that a new free web-based tool called Twine (by Radar Networks) will change the way people organize information.

Semantic Web—“a concept, long discussed in research circles, that can be described as a sort of smart network of information in which data is tagged, sorted, and searchable.”

Clay Shirky, professor in the Interactive Telecommunications Program at New York University, says, “At its most basic, the Semantic Web is a campaign to tag information with extra metadata that makes it easier to search.” At the upper limit, he says, it is about waiting for machines to become devastatingly intelligent.

Twine—“Twine is a website where people can dump information that's important to them, from strings of e-mails to YouTube videos. Or, if a user prefers, Twine can automatically collect all the web pages she visited, e-mails she sent and received, and so on. Once Twine has some information, it starts to analyze it and automatically sort it into categories that include the people involved, concepts discussed, and places, organizations, and companies. This way, when a user is searching for something, she can have quick access to related information about it. Twine also uses elements of social networking so that a user has access to information collected by others in her network. All this creates a sort of ‘collective intelligence,’ says Nova Spivack, CEO and founder of Radar Networks.”

“Twine is also using extremely advanced machine learning and natural-language processing algorithms that give it capabilities beyond anything that relies on manual tagging. The tool uses a combination of natural-language algorithms to automatically extract key concepts from collections of text, essentially automatically tagging them.”
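Radar Networks hasn't published Twine's algorithms, but the most basic form of automatically extracting key concepts is simply surfacing a text's most informative terms. Here is a toy, standard-library-only Python sketch of that idea (the stopword list and sample text are placeholders); real systems add linguistic analysis and entity recognition on top.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "for", "on", "with", "as", "by", "this", "from", "can", "so"}

def extract_concepts(text, top_n=5):
    """Naive concept extraction: the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

sample = ("The semantic web tags information with metadata so that machines "
          "can search and relate information across the web.")
print(extract_concepts(sample))  # e.g., ['information', 'semantic', 'tags', ...]
```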

A recent article in the Economist described the Semantic Web as follows:

“The semantic web is so called because it aspires to make the web readable by machines as well as humans, by adding special tags, technically known as metadata, to its pages. Whereas the web today provides links between documents which humans read and extract meaning from, the semantic web aims to provide computers with the means to extract useful information from data accessible on the internet, be it on web pages, in calendars or inside spreadsheets.”

So whereas a tool like Google sifts through web pages based on search criteria and serves them up for humans to recognize what they are looking for, the Semantic Web actually connects related information and adds metadata that a computer can understand.

It’s like relational databases on steroids! And with the intelligence built in to make meaning from the related information.

Like a human brain, the Semantic Web connects people, places, and events seamlessly into a unified and actionable ganglion of intelligence.

For User-centric EA, the Semantic Web could be a critical evolution in how enterprise architects analyze architecture information and come up with findings and recommendations for senior management. Using the Semantic Web, business and technology information (such as performance results, business function and activities, information requirements, applications systems, technologies, security, and human capital) would all be related, made machine readable, and automatically provide intelligence to decision-makers in terms of gaps, redundancies, inefficiencies, and opportunities—pinpointed without human intervention. Now that’s business intelligence for the CIO and other leaders, when and where they need it.
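For a flavor of what machine-readable architecture information could look like, here is a minimal sketch using Python's rdflib to express EA facts as subject-predicate-object (RDF) triples; the namespace and the systems named are invented for illustration.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/ea/")  # placeholder namespace

g = Graph()
g.add((EX.PayrollSystem, RDF.type, EX.ApplicationSystem))
g.add((EX.PayrollSystem, EX.supports, EX.HumanCapital))   # system -> business function
g.add((EX.BenefitsSystem, RDF.type, EX.ApplicationSystem))
g.add((EX.BenefitsSystem, EX.supports, EX.HumanCapital))  # second system, same function

# A machine can now answer: which systems support the Human Capital function?
# Two hits for one function could automatically flag a potential redundancy.
for system in g.subjects(EX.supports, EX.HumanCapital):
    print(system)
```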


October 24, 2007

Terascale Computing and Enterprise Architecture

In MIT Technology Review, 26 September 2007, in an article entitled “The Future of Computing, According to Intel” by Kate Green, the author describes terascale computing—computational power beyond a teraflop (a trillion calculations per second).

“One very important benefit is to create the computing ability that's going to power unbelievable applications, both in terms of visual representations, such as this idea of traditional virtual reality, and also in terms of inference. The ability for devices to understand the world around them and what their human owners care about.”

How do computers learn inference?

“In order to figure out what you're doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously. The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated.”

What’s an example of how inference can be used in today’s consumer technologies?

For example, sensors in your phone could determine whether you should be interrupted for a phone call. “The intelligent system could be using sensors, analyzing speech, finding your mood, and determining your physical environment. Then it could decide [whether you need to take a call].”
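As a toy illustration of that kind of inference, here is a hedged Python sketch (scikit-learn) that learns whether to put a call through from made-up sensor readings: ambient noise level, whether speech is detected, and whether the calendar shows a meeting.

```python
from sklearn.linear_model import LogisticRegression

# Made-up training data: [ambient_noise_db, speech_detected, in_meeting]
X = [
    [30, 0, 0], [35, 0, 0], [40, 0, 0],   # quiet and alone -> OK to interrupt
    [65, 1, 0], [70, 1, 1], [55, 1, 1],   # talking or in a meeting -> don't
    [45, 0, 1], [60, 1, 0],
]
y = [1, 1, 1, 0, 0, 0, 0, 0]  # 1 = put the call through, 0 = hold it

model = LogisticRegression().fit(X, y)

# New sensor reading: moderately quiet, no speech, no meeting
print(model.predict([[38, 0, 0]]))  # e.g., [1] -> interrupt with the call
```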

What is machine learning?

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn." At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets. The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. (Wikipedia)
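A tiny example of inductive learning "extracting rules and patterns out of data": the Python sketch below (scikit-learn) fits a decision tree on made-up thermostat readings and prints the if-then rules it induced.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up data: [temperature_f, humidity_pct] -> 1 if the AC should turn on
X = [[68, 40], [72, 45], [75, 50], [80, 60], [85, 70], [90, 80]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["temperature_f", "humidity_pct"]))
# The printed rules are the patterns induced from the data,
# e.g., "temperature_f <= 77.5 -> class 0"
```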

Where’s all this computational power taking us?

Seems like we’re moving ever closer to the reality of what was portrayed as HAL 9000, the supercomputer from 2001: A Space Odyssey—HAL was “the pinnacle in artificial machine intelligence, with a remarkable, error-free performance record…designed to communicate and interact like a human, and even mimic (or reproduce) human emotions.” (Wikipedia) An amazing vision for a 1968 science fiction film, no?

From a User-centric EA perspective, terascale computing, machine learning, and computer inference represent tremendous new technical capabilities for our organizations. They are a leap in computing power and end-user applications that has the capability to significantly alter our organizations’ business activities and processes and enable better, faster, and cheaper mission execution.