Why ‘good-enough’ computing could be a fluke

Will the concept of "good-enough" computing be relevant in the future? Or will it vanish as new technologies – and possibilities – arrive?

It’s no secret that worldwide computer sales have been on the decline, and one of the most commonly cited reasons for the slide is the rise of “good-enough” computing. The Good Enough revolution rode in on the popularity of netbooks, making its real debut in tech culture around 2009, when a roster of publications – including Wired, The Economist, Fortune, and PC World – published articles hailing “good enough” as the future of computing. These renowned publications argued that, given the chance, people would happily abandon the breakneck pace of the PC upgrade cycle.

There’s reason to believe today’s home computers have reached a plateau. The average home doesn’t need a new quad-core processor to run Word and check email, suggesting that today’s PCs are as powerful as they’ll ever need to be. Is this the permanent state of computing? Or has the hardware simply outpaced innovation?

The interface of the future demands horsepower

The interface you use when you sit in front of your PC today isn’t fundamentally different from the one used two decades ago. It’s a two-dimensional graphical user interface (GUI) that relies on fine manipulation of on-screen elements. Recent innovation has replaced the keyboard and mouse with touch, but that’s only half a step in a new direction; the 2D GUI remains the standard.

We’re just now beginning to see the possibilities beyond a standard GUI. They exist in technology like Microsoft’s Kinect, which can recognize gestures and voice as input. Both are rudimentary implementations that can interact with users in only limited ways, but the possibilities beyond them require only imagination. The days of computers being chained to a desk may be coming to an end.

But these new technologies need power. The Kinect’s task, simple though it may be, was demanding enough to require a dedicated processor inside the Kinect rather than relying on the hardware already in the Xbox 360. Even so, it can only handle a small number of inputs in limited situations, and its tracking is not perfect. Motion tracking is, in fact, a serious computing problem. A computer tracking motion captures successive images of an area and runs algorithms that compare them to determine where movement has occurred. Higher precision requires better images and more complex algorithms, both of which demand more power. That’s tough enough as it is, but to be appealing, a future gesture-based interface would need to detect motion in three dimensions, which, of course, only increases the power needed.
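To get a feel for why even crude motion tracking chews through processing power, here is a minimal frame-differencing sketch in Python with NumPy. It illustrates the general idea only – it is not the Kinect’s actual algorithm, and the detect_motion function and its threshold value are invented for the example.

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, next_frame: np.ndarray,
                  threshold: int = 25) -> np.ndarray:
    """Return a boolean mask marking pixels where motion likely occurred.

    Both frames are expected to be 2D arrays of grayscale intensities (0-255).
    This is a toy illustration of frame differencing, not any product's algorithm.
    """
    # Absolute per-pixel difference between the two frames
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    # Pixels whose intensity changed by more than the threshold count as motion
    return diff > threshold

# Example: two tiny synthetic 4x4 frames in which one bright "object" pixel moves
frame_a = np.zeros((4, 4), dtype=np.uint8)
frame_b = np.zeros((4, 4), dtype=np.uint8)
frame_a[1, 1] = 200   # bright spot in the first frame
frame_b[2, 2] = 200   # the spot has moved in the second frame

mask = detect_motion(frame_a, frame_b)
print(np.argwhere(mask))  # coordinates flagged as motion: [[1 1], [2 2]]
```

A real tracker would repeat this comparison dozens of times per second on frames with millions of pixels, filter out noise and lighting changes, and infer depth on top of it – which is where the processing demands quickly outgrow a “good-enough” machine.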

We’re still years away from gesture-based input becoming a common computing interface, but it’s already being researched by some of the world’s largest tech companies. The only question is how quickly it can be made practical for home PCs.

What are the system requirements of the holodeck?

Virtual reality has been the holy grail of geeks for decades. It’s widely agreed to be an awesome concept, though, like flying cars and truly intelligent AI, it’s still found in fiction more often than in real life. VR systems exist, but most are expensive simulators far outside the average person’s budget. Plus, the quality of the experience isn’t on par with the life-like virtual worlds found in sci-fi films and shows.

VR differs from flying cars, however, because its only obstacle is technology. Recreating reality – even an approximation of it – needs some form of immersive display paired with advanced inputs (like the motion-tracking we’ve already discussed). Both demand incredible computing power. Even modern 3D graphics are a long way from providing the realism that VR demands.

Oculus Rift

Of course, innovation in this area won’t impact computing if it never reaches the consumer market; but there’s reason to believe it will. Microsoft has submitted a patent application for a 360-degree projector that can be used alongside a television to create a simple virtual environment. Oculus is working on the Rift, a Kickstarter-supported VR headset. And Google is working on a pair of augmented-reality glasses that are small and light enough to wear every day.

None of these technologies are full virtual reality, but they’re an indication that technology is finally mature enough to support exploration of basic, mass-market VR. Once we head down this path, we’ll find that computing power is one of two major obstacles (the other being display technology) between us and ever-more-realistic VR. Good-enough computing will only be relevant once humans have literally recreated reality.

Not good enough for tomorrow

It’s not hard to see today’s computers as good-enough if your perspective is tied to their capabilities. Obviously, they’re more than capable of handling today’s tasks. What’s in contention is the assumption that today’s tasks are all computers will ever handle.

I don’t think that’s the case. Today’s home computer will eventually be replaced by a home that is a computer. It will activate pre-programmed functions when you come home from work. It will let you navigate a recipe book via gestures while you’re cooking. It will envelop you in 360-degree entertainment and games. The possibilities are endless and exciting.

This future relies on the evolution of technology that’s available today. Refining all of these elements into a complete vision, however, requires computing power and efficiency far beyond what’s available in today’s consumer PCs. Plateaus are often large, but they’re not infinite – and this one is only the base of the next summit.


Source: digitaltrends[dot]com
