
December 30, 2015

Simplify Me

So here's the monitor in the "modern" and beautiful Fort Lauderdale International airport. 

Can you see the number of electrical plugs, wires, connections, input/output ports, etc. on this device?

Obviously, it is comical and a farce as we near the end of 2015. 

Think about the complexity in building this monitor...in connecting it...in keeping it operational.

Yes, we are moving more and more to cellular and wireless communications, to miniaturization, to simple and intuitive user interfaces, to paperless processing, to voice recognition, to natural language processing, and to artificial intelligence.

But we are not there yet.

And we need to continue to make major strides to simplify the complexity of today's technology. 

- Every technology device should be fully useful and usable by every user on first contact. 

- Every device should learn from interacting with us and get better and better over time.

- Every device should have basic diagnostic and self-healing capability. 

Any necessary instructions should be provided by the device itself--the device telling you, step by step, what to do to accomplish the task at hand--no manual, no Googling for instructions, no Siri questions...just you and the device interacting as one.
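To make this concrete, here is a minimal sketch (in Python) of what such a self-diagnosing, self-guiding device loop might look like. Everything here is hypothetical--the steps, the function names, and the assumption that repairs succeed--an illustration of the principle, not any real device's software:

```python
# Hypothetical sketch: a device that diagnoses itself, attempts self-healing,
# and then walks the user through the task step by step -- no manual needed.

TASK_STEPS = [
    "Press and hold the power button for two seconds.",
    "Select your language on the touchscreen.",
    "Confirm the displayed network and tap Connect.",
]

def self_diagnose() -> list[str]:
    """Return a list of detected faults (empty means healthy)."""
    # In a real device these would be hardware and network probes;
    # here we simulate a single recoverable fault.
    return ["wifi_adapter_unresponsive"]

def self_heal(fault: str) -> bool:
    """Attempt an automatic fix; return True on success."""
    print(f"Attempting automatic repair: {fault}")
    return True  # illustrative assumption: the repair works

def guide_user() -> None:
    for fault in self_diagnose():
        if not self_heal(fault):
            print(f"Please contact support about: {fault}")
            return
    for number, step in enumerate(TASK_STEPS, start=1):
        print(f"Step {number}: {step}")
        input("Press Enter when done...")
    print("All set -- no manual required.")

if __name__ == "__main__":
    guide_user()
```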

User friendly isn't enough anymore...it should be completely user-centric, period. 

Someday...in 2016 or beyond, we will get there, please G-d. ;-)

(Source Photo: Andy Blumenthal)

November 16, 2013

Web 1-2-3

The real cloud computing is not where we are today.

Utilizing infrastructure and apps on demand is only the beginning. 

What IBM has emerging, above the other cloud providers, is the real deal: Watson, its cognitive computing system.

In 2011, Watson beat the human champions of Jeopardy; today, according to CNBC, it is being put online with twice the power.

Using computational linguistics and machine learning, Watson is becoming a virtual encyclopedia of human knowledge, and that knowledge base is growing by the day.

Moreover, that knowledge can be leveraged by cloud systems such as Watson to link troves of information together, process it to find hidden meanings and insights, make diagnoses, provide recommendations, and generally interact with humans.

Watson can read all medical research, the latest breakthroughs in science, or all financial reports, and process them to come up with information intelligence.

In terms of cognitive computing, think of Apple's Siri--but Watson doesn't just tell you where the local pizza parlors are; it can tell you how to make a better pizza.

In short, we are entering the 3rd generation of the Internet:

Web 1.0 was the read-only, Web-based Information Source. This includes all sorts of online information available anytime and anywhere. Typically, organizational Webmasters published online content to the masses.

Web 2.0 is the read-write, Participatory Web. This is all forms of social computing and very basic information analytics. Examples include: email, messaging, texting, blogs, Twitter, wikis, crowdsourcing, online reviews, memes, and infographics.

Web 3.0 will be think-talk, Cognitive Computing. This incorporates artificial intelligence and natural language processing and interaction. Examples: Watson, or a good-natured HAL 9000.
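To make the progression concrete, here is a toy illustration (in Python) of the three generations. All of the data, functions, and the pizza "knowledge base" are made up; this is a sketch of the idea, not of Watson's actual architecture:

```python
# Illustrative contrast of the three Web generations described above.
# Every name and data point here is a hypothetical toy, not a real service.

KNOWLEDGE_BASE = {
    ("pizza", "where"): "Tony's Pizzeria, 2 blocks away",
    ("pizza", "how"): "Use a hotter stone and let the dough rest longer.",
}

# Web 1.0: read-only -- a webmaster publishes, everyone else just reads.
def web1_read(page: str) -> str:
    return f"<static page: {page}>"

# Web 2.0: read-write -- users contribute content themselves.
posts: list[str] = []
def web2_post(message: str) -> None:
    posts.append(message)

# Web 3.0: think-talk -- the system interprets a natural-language question
# and answers from accumulated knowledge (what Watson does at scale).
def web3_ask(question: str) -> str:
    q = question.lower()
    topic = "pizza" if "pizza" in q else ""
    intent = "how" if "how" in q else "where"
    return KNOWLEDGE_BASE.get((topic, intent), "I don't know yet -- still learning.")

web2_post("Just had great pizza!")
print(web1_read("about.html"))
print(web3_ask("Where can I get pizza?"))        # Siri-style lookup
print(web3_ask("How do I make a better pizza?")) # Watson-style insight
```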

In short, it's one thing to move data and processing to the cloud, but when we get to genuine artificial intelligence and natural interaction, we are at a whole new computing level.

Soon we can usher in Kurzweil's Singularity with Watson leading the technology parade. ;-)

(Source Photo: Andy Blumenthal)

August 7, 2011

Computer, Read This

In 2002, Tom Cruise waved his arms in swooping fashion to control his pre-crime-fighting computer in Minority Report, and that was just the tip of the iceberg when it comes to consumer interest in moving beyond traditional keyboards, trackpads, and mice to control our technology.

For example, there are the Nintendo Wii and Microsoft Kinect in the gaming arena, where we control the technology with our physical motions rather than hand-held devices. And consumers seem to really like having a controller-free gaming system: the Kinect sold so quickly--at a rate of roughly 133,000 units per day during its first three months--that it earned the Guinness World Record for fastest-selling consumer device. (Mashable, 9 March 2011)

Interacting with technology in varied and natural ways--outside the box--is not limited to gestures; there are many others, such as voice recognition, haptics, eye movements, and even telepathy (a toy sketch of unifying these inputs follows the list below).

- Gesture-driven--This is referred to as "spatial operating environments"--where cameras and sensors read our gestures and translate them into computer commands. Companies like Oblong Industries are developing a universal gesture-based language, so that we can communicate across computing platforms--"where you can walk up to any screen, anywhere in the world, gesture to it, and take control." (Popular Science, August 2011)

- Voice recognition--This is perhaps the most mature of the alternative technology control interfaces; products like Dragon NaturallySpeaking have become standard not only on many desktops but are also embedded in many smartphones, giving you the ability to do dictation, voice-to-text messaging, etc.

- Haptics--This includes touchscreens with tactile sensations. For example, Tactus Technology is "developing keyboards and game-controller knobs [that actually] grow out of touchscreens as needed and then fade away," and another company, Senseg, is making technology that produces tactile sensations, so users can feel vibrations, clicks, and textures and can use these for enhanced touchscreen control of their computers. (BusinessWeek, 20-26 June 2011)

- Eye-tracking--For example, new Lenovo computers are using eye-tracking software by Tobii to control the browser and desktop applications, including email and documents. (CNET, 1 March 2011)

- Telepathy--Tiny chips implanted in the brain, "the telepathy chip," are being used to sense electrical activity in the nerve cells and thereby "control a cursor on a computer screen, operate electronic gadgets [e.g. television, light switch, etc.], or steer an electronic wheelchair." (UK Daily Mail, 3 Sept. 2009)
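The Oblong item above describes, in essence, a universal mapping from many input channels onto one shared command vocabulary. Here is a toy sketch of that pattern in Python; the modalities, tokens, and commands are all hypothetical, and real spatial operating environments are far more involved:

```python
# Toy sketch of a "universal input language": events from different
# modalities map to the same commands. Entirely hypothetical.

from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # "gesture", "voice", "gaze", ...
    token: str      # normalized input symbol from that modality

# One shared command vocabulary, many input channels.
COMMAND_MAP = {
    ("gesture", "swipe_left"): "previous_page",
    ("voice", "go back"): "previous_page",
    ("gaze", "dwell_back_button"): "previous_page",
    ("gesture", "pinch"): "zoom_out",
    ("voice", "zoom out"): "zoom_out",
}

def dispatch(event: InputEvent) -> str:
    """Translate any modality's event into a platform command."""
    return COMMAND_MAP.get((event.modality, event.token), "noop")

print(dispatch(InputEvent("gesture", "swipe_left")))  # previous_page
print(dispatch(InputEvent("voice", "go back")))       # previous_page
```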

Clearly, consumers are not content to type away at keyboards and roll their mice...they want to interact with technology the way they do with other people.

It still seems a little way off before computers can understand us and communicate the way we really do. For example, can a computer read non-verbal cues, which communication experts say make up something like 70% of our communication? Obviously, this hasn't happened yet. But when the computer can read what I am really trying to say, in all the ways that I am saying it, we will definitely have a much more interesting conversation going on.

(Source Photo: here)


September 19, 2010

The Printer’s Dilemma

There is a lot of interest these days in managed print solutions (MPS)—sharing printers and managing these centrally—for many reasons.

Some of the benefits are: higher printer use rates; reductions in printing; cost savings; and various environmental benefits.

Government Computer News (5 April 2010) has an article called “Printing Money” that states: “managed printing is an obvious but overlooked way to cut costs, improve efficiency, and bolster security.”

But there are also a number of questions to consider:

- What’s the business model? Why are “printing companies” telling us to buy fewer printers and to print less? Do car companies tell us to buy fewer cars and drive less (maybe drive more fuel-efficient vehicles, but drive less or buy fewer?), or do food companies advise us to buy less food or eat less (maybe eat healthier food, but less food)? To some vendors, the business model is simple: if we use their printers and cartridges—rather than a competitor’s—then even if we use less overall, the managed print vendor is getting more business, so for them the business model makes sense.

- What’s the cost model? Analysts claim that agencies, by moving to managed print solutions, “could save at least 25 percent of their printing expenses,” and vendors claim hundreds of thousands, if not millions, in savings—and that is attractive (a back-of-envelope sketch of this comparison follows the list). However, the cost of commodity printers, even the multifunction ones with fax/copy/scan functions, has come way down, and so has the cost of print cartridges—although they are still priced too high—and we don’t change them all that often (I just changed one and can barely remember the last time that I did). As an offset to the cost savings, do we need to consider the potential impact on productivity and effectiveness, as well as morale—even if the latter is just the “annoyance factor”?

- What’s the consumer market doing? When we look at the consumer market, which in many analyst and consumer opinions has jumped ahead of the office environment technologically, most people have a printer sitting right next to them in their home office—don’t you? I’d venture to say that many people even have separate printers for other family members with their own computer setups, because cost- and convenience-wise, it just makes sense.

- What’s the cultural/technological trend? Culturally and technologically, we are in the “information age,” most people in this country are “information workers,” and we are a fast-paced (and ever faster-paced) society where things like turnaround time and convenience (e.g. “just in time” inventory, overnight delivery, microwave dinners, etc.) are really important. Moreover, I ask myself: is Generation Y—texting and tweeting and Facebooking here, there, and everywhere—going to move toward giving up their printers, or will they in fact want to print from wherever they are (using the cloud or other services) and get to their documents and information immediately?

- What’s the security impact? Understanding that printing to central printers is secure, especially with access cards or PINs to retrieve your print jobs, I ask whether, in an age where security and privacy of information (including corporate theft and identity theft) are huge issues, having a printer close by makes sense—especially when dealing with sensitive information like corporate strategy or “trade secrets,” mission security, personnel issues, or acquisition-sensitive matters. Additionally, can we still achieve the other security benefits of MPS—managing (securing, patching, etc.) and monitoring printers and print jobs—in a more decentralized model, through the same or similar network management functions that we use for our other end-user devices (computers, servers, storage, etc.)?

- What’s the environmental impact? There are lots of statistics about the carbon footprint from printing—and most of it, I believe, is from the paper, not the printers. So perhaps we can print smarter, not only by reducing printers, but also with ongoing education and sensitivity to our environment and the needs of future generations. It goes without saying that we can and should cut down (significantly) on what and how much we print (and drive, and generally consume) in a resource-constrained environment—planet Earth.
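As promised above, here is a back-of-envelope sketch (in Python) of the cost comparison. Every input is a made-up placeholder—headcount, hardware prices, amortization period—and only the “at least 25 percent” figure echoes the analysts’ claim; plug in your organization’s real numbers:

```python
# Back-of-envelope comparison: personal desktop printers vs. a managed
# print solution (MPS). All inputs are hypothetical placeholders.

users = 500                      # hypothetical headcount
desktop_printer_cost = 120       # per device, hypothetical
printer_life_years = 3           # hypothetical amortization period
cartridges_per_user_year = 2     # hypothetical
cartridge_cost = 60              # hypothetical

# Status quo: one commodity printer per user.
desktop_annual = users * (desktop_printer_cost / printer_life_years
                          + cartridges_per_user_year * cartridge_cost)

# The analysts' claim from the article: MPS saves "at least 25 percent."
mps_annual = desktop_annual * (1 - 0.25)

print(f"Desktop fleet:  ${desktop_annual:,.0f} per year")
print(f"MPS (claimed):  ${mps_annual:,.0f} per year")
print(f"Claimed saving: ${desktop_annual - mps_annual:,.0f} per year")
```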

In the end, there are a lot of considerations in moving to managed print solutions. Certainly, there is a valid and compelling case for moving to MPS, especially in terms of the potential cost savings to the organization (particularly important in tough economic environments, like now), but we should also weigh other considerations—such as productivity offsets, cultural and technological trends, and overall security and environmental impacts—and come up with what’s best for our organizations.


September 12, 2010

The Humanization of Computers

The Wall Street Journal recently reviewed (Sept. 10, 2010) “The Man Who Lied to His Laptop,” by Clifford Nass.

The book examines human-computer interactions in order to “teach us about human relationships.”

The reviewer, David Robinson, sums up with a question about computers (and relationships): “do we really think it’s just a machine?”

Answer: “A new field of research says no. The CASA paradigm—short for ‘computers as social actors’—takes as its starting point the observation that although we deny that we interact with a computer as we would with a human being, many of us actually do.”

The book review sums up human-computer interaction, as follows:

“Our brains can't fundamentally distinguish between interacting with people and interacting with devices. We will ‘protect’ a computer's feelings, feel flattered by a brown-nosing piece of software, and even do favors for technology that has been ‘nice’ to us. All without even realizing it.”

Some interesting examples of how we treat computers like people:

- Having a heart for your computer: People in studies giving feedback on computer software have shown themselves to “be afraid to offend the machine” if they are using their own computers for the evaluation rather than a separate ‘evaluation computer.’

- Sexualizing your computer: People sexualize computer voices, lauding a male-sounding tutor voice as better at teaching ‘technical subjects’ and a female-sounding voice as better at teaching ‘love and relationship’ material.

- A little empathy from your computer goes a long way: People are more forthcoming in typing messages about their own mistakes “if the computer first ‘apologizes’ for crashing so often.”

It seems to me that attributing human qualities (feelings, sexuality, and camaraderie) to an inanimate object like a computer is a social ill that we should all be concerned about.

Sure, we all spend a lot of time going back and forth between our physical realities, virtual realities, and now augmented realities, but in the process we seem to be losing perspective of what is real and what is not.

Perhaps to too many people, their computers have become their best friends, closest allies, and likely the biggest time hog of everything they do. They are:

- Doing their work at arm's length through computers rather than seriously working together with other people to solve the large and complex problems facing us all.

- Interacting virtually on social networks rather than with friends in real life, and similarly gaming online rather than meeting at the ballpark for some swings at the bat.

- Blogging and tweeting their thoughts and feelings on their keyboards and screens, rather than with loved ones who care and really want to share.

We have taken shelter behind our computers and, to some extent, are in love with them—both of these are hugely problematic. Computers are tools, not hideaways or surrogate lovers!

Of course, the risk of treating computers as people is that we in turn treat people as inanimate computers—or maybe we already have?

This is a dangerous game of mistaken reality we are playing.

[Photo Source: http://www.wilsoninfo.com/computerclipart.shtml]



September 11, 2010

A Boss that Looks Like a Vacuum Cleaner

This is too much…an article and picture in MIT Technology Review (September/October 2010) of a robotic boss, called Anybot—but this boss looks like a vacuum cleaner, costs $15,000, and is controlled remotely from a keyboard by your manager.

So much for the personal touch—does this count toward getting some face time with your superiors in the office?

With a robotic boss rolling up to check on employees, I guess we can forget about the chit-chat, going out for a Starbucks together, or seriously working through the issues.

Unless, of course, you can see yourself looking into the “eyes” of the vacuum cleaner and getting some meaningful dialogue going.

This is an example of technology divorced from the human reality and going in absolutely the wrong direction!


April 22, 2008

“Consumerized” IT and Enterprise Architecture

“Ergonomics (or human factors) is the application of scientific information concerning objects, systems and environment for human use…Ergonomics is commonly thought of as how companies design tasks and work areas to maximize the efficiency and quality of their employees’ work. However, ergonomics comes into everything which involves people. Work systems, sports and leisure, health and safety should all embody ergonomics principles if well designed. The goal of ergonomics and human factors is to make the interaction of humans with machines as smooth as possible, enhancing performance, reducing error, and increasing user satisfaction through comfort and aesthetics. It is the applied science of equipment design intended to maximize productivity by reducing operator fatigue and discomfort. The field is also called biotechnology, human engineering, and human factors engineering.”

“In the 19th century, Frederick Winslow Taylor pioneered the "Scientific Management" method, which proposed a way to find the optimum method for carrying out a given task… The dawn of the Information Age has resulted in the new ergonomics field of human-computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies including human factors in product design.” (Wikipedia)

Despite all the talk of ergonomics, we’ve all had the experience of getting a new IT gadget or using a new IT application that necessitated going through reams of instructions, user guides, manuals (some 3-4 inches thick), and online tutorials—and still we often end up having to call some IT support center (often in India these days) to walk us through the “technical difficulties.”

Not a very user-centric architecture.

Well, finally, companies are waking up and factoring in (and designing in) ergonomics and a more user-centric approach.

The Wall Street Journal, 22 April 2008, reports “Business Software’s Easy Feeling: Programs are Made Simpler to Learn, Navigate.”

“Many vendors have ‘consumerized’ their corporate software and online services making them easier to learn and navigate by borrowing heavily from sites such as Facebook or Amazon.com. They have also tried to make their products more intuitive by shying from extraneous features—a lesson learned from simple consumer products such as Apple Inc.’s iPod.”

Other vendors are developing products using “user experience teams” in order to build products that are user-friendly and require minimal to “no formal training to use.”

David Whorton, one of the backers of SuccessFactors, an online software company, stated: “We’ve moved into an environment where no one will tolerate manuals or training.”

Similarly, Donna Van Gundy, the human resources director for Belkin, a maker of electronic equipment said: “Employees just don’t want to be bothered with training courses.”

The bar has been raised, and consumers expect an intuitive, user-friendly experience and a simple user interface.

Go User-centric!!

