29 August 2007

Why "Good Enough" Is Good Enough

From BusinessWeek September 3, 2007

NEWS & INSIGHTS/Commentary
By Stephen Baker

Why "Good Enough" Is Good Enough
Imperfect technology greases innovation--and the whole marketplace

Say you have a crucial conference call in an hour and your phone goes dead. What do you do? A generation ago, this wasn't much of an issue, at least in the U.S. Phones in the days of the Bell monopoly were engineered to be "mission critical." You picked up one of those heavy receivers back then, and the dial tone was as prompt and reliable as water from the tap. It worked.

Yet these days, even as we pack global multimedia in our pockets, phone service sometimes seems to march backward. Andy Beal was one of 220 million subscribers to Skype, the cut-rate Internet telephony service owned by eBay (EBAY ), who saw the service go dark on Aug. 16. A software glitch kept it down for the next two days. Founder of the Raleigh (N.C.) Internet marketing consultancy Marketing Pilgrim, Beal learned that Skype was out an hour before clients were to call him from Holland. He had to message them in a hurry, telling them to call his tenuous backup: the cell phone. "It was embarrassing," he says. But at least the cell phone worked--which isn't always the case.

Are communications getting worse? Not by a long shot. We're surrounded by miraculous machines and services, most of them calibrated to a level software engineers have long called "good enough." In the right circumstances, good enough is great for the entire economy. A marketplace that's not hung up on fail-safe standards is open to risk and innovation, and drives down prices. Ever since the dawn of the PC--the archetype for a good-enough machine--inventors have been freer than ever to piece together and launch their visions. Some are brilliant, some are half-baked, many are a blend of the two. A precious few are up and running 99.999% of the time--Bell's old standard. But they cost far less to build.
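
To put Bell's old standard in perspective, here is a quick back-of-the-envelope sketch in Python. The 99.999% figure comes from the paragraph above; the lower availability tiers are just illustrative "good enough" levels for comparison:

# Downtime per year implied by different availability levels.
# "Five nines" (99.999%) is Bell's old standard cited above; the
# other tiers are illustrative "good enough" levels.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for pct in (99.999, 99.9, 99.0):
    downtime_minutes = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{pct}% uptime allows about {downtime_minutes:,.0f} minutes of downtime per year")

# 99.999% -> about 5 minutes a year
# 99.9%   -> about 526 minutes (almost 9 hours) a year
# 99.0%   -> about 5,256 minutes (more than 3.5 days) a year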

The rise of good-enough technology raises different questions for do-it-yourselfers and major corporations alike. It's no longer whether we can afford a technology, but more often whether we can afford the disruption if and when it fails. Is it critical? Do we have backup in place? Many of us face this question every time we venture from our office with a cell phone. We don't have "one machine that works all the time," says Dave Morgan, chairman of Tacoda Inc., a New York advertising company. "We have lots of alternatives that work most of the time."

The upside of this sloppy status quo is enormous. Consider Andy Beal. He pays Skype about $60 a year, plus a couple cents for foreign calls. This gives him global telephony wherever he wanders with his laptop. He calls the service "seamless." He recently switched most of his office work--including e-mail, contacts, and calendar--to free Web services. This, of course, entails risk. In late July, an electrical outage in San Francisco brought some of the biggest sites, from Craigslist to Second Life, crashing down for 12 hours.

Beal's data reside on Google (GOOG ). The search giant is in fact an example of a major corporation that, like so many small fry, bets its business on good-enough technology. Google's data centers, the heart of the company's operations, consist of hundreds of thousands of commodity computers wired into a vast global network. These computers are little more reliable than yours or mine. Many die and are replaced every week. It could be that one of them at this very minute is issuing its dying blinks--and taking down Andy Beal's contact data with it. But if Google is working as designed, it links customers to another copy of those files or Web pages stored elsewhere on the network. Every computer has a legion of backups. Success, in the good-enough economy, means racing ahead even as the machines supporting us sputter and break.
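
The article doesn't describe Google's internal systems in any detail, but the replication idea it gestures at can be sketched in a few lines of Python. Everything below (the ReplicaSet class, the fetch_from helper) is hypothetical and only illustrates the "legion of backups" pattern, not any actual Google code:

import random

# Hypothetical sketch of the "legion of backups" idea: the same data lives
# on several unreliable commodity machines, and a read simply moves on to
# the next replica when one of them has died.

class ReplicaSet:
    def __init__(self, replicas):
        self.replicas = list(replicas)  # hosts that each hold a copy of the data

    def read(self, key, fetch_from):
        # fetch_from(host, key) is an assumed helper that returns the data,
        # or raises ConnectionError if that particular box is currently dead.
        for host in random.sample(self.replicas, len(self.replicas)):
            try:
                return fetch_from(host, key)
            except ConnectionError:
                continue  # this machine is issuing its dying blinks; try another copy
        raise RuntimeError(f"no replica could serve key {key!r}")

The point is simply that no single machine has to be reliable as long as enough copies exist and the routing layer knows where to find them.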

I am fortunate to have the name Steve. So when I remind myself to K.I.S.S., it means Keep It Simple, Steve. But how simple can you make something and still achieve the required objectives? After a little thought, I boiled the question down to the word "Precision". How "Precise" does something have to be?

Sometime later, I was sitting in a tavern famous for its Red Door, talking with a fellow IT guy about computer stuff. We each read several weekly "rags" and would exchange thoughts on what had "Legs" and what was "Flash". I asked him for his thoughts on how precise all this tech needs to be to work.

He answers, "Aristotle."

He informs me that I am a couple thousand years behind and that one of the original founders of the first "SAP" had been down this road. He is brief but explains that sometimes "less is more." Maybe the time is right for a cell phone that only makes calls. I have one. It resides in the console of my truck, turned off.

Or picture a digital solution that does everything I want (or need) and nothing else to clutter its beauty. I use about 2% of what is in any of the operating systems and applications on the machines in the lab. More than ninety percent of the available resources are neither needed nor wanted, but you get them regardless. Sometimes people may be asking for the wrong thing when they exclaim, "Why can it not just work?", when they should ask, "Why can I not get just what I need?" Sometimes more is less.

Until the next post,

Steve


Really Big Clusters Made From Lots of Small Servers

Is the real interest in "greener" data centers lowering energy costs (good for the environment and the bottom line), or is it that more efficient ways of computing simply require less energy? Maybe some of both.

From InfoQ:

Greener datacenters through Millicomputer clusters?

Posted by Johan Strandler on Aug 28, 2007 10:07 AM


One big problem with current large-scale enterprise computing and data centers is power consumption, and a lot of effort is going into reducing the power needs of current server platforms. Adrian Cockcroft addresses this problem by defining a new type of computer, the Millicomputer: a computer that requires less than 1 Watt. The idea is to build enterprise servers out of commodity components from the battery-powered mobile space. He presents a way to build an enterprise server using about 100 such Millicomputers clustered in a single 1U enclosure. This server consumes less than 160W, which is far less than comparable 1U enterprise servers of today. Cockcroft calls this disruptive innovation and predicts that by 2010 there could be a market for about 100,000 MilliClusters at $10K each, where each MilliCluster packages 100 Millicomputers into an enterprise server.

The Millicomputer and MilliCluster hardware is developed as "Open Hardware", which means that the hardware design won't be owned by a single vendor. The Millicomputer runs Linux as its operating system, and the hardware is based on a Freescale i.MX31 system-on-a-chip with microSDHC flash memory. While the Millicomputer doesn't require much power by itself, external Ethernet connections do. To save power, Cockcroft introduces the concept of the "Enterprise MilliCluster", which load-balances 14 Millicomputers behind one twin 1Gb external Ethernet interface by connecting them through a USB switch using the Linux USBNet transport. The form factor of such a MilliCluster makes it possible to put 8 clusters plus a power unit in a single 1U enclosure, which consumes less than 160W - probably much less.
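
To make the numbers concrete, here is a little arithmetic in Python. The per-node, per-cluster and price figures come from the article above; treating "less than 1 Watt" as exactly 1 W is my own simplification:

# Back-of-the-envelope math for the proposed 1U MilliCluster server:
# 8 Enterprise MilliClusters of 14 Millicomputers each, under a 160 W ceiling.

nodes_per_cluster = 14      # Millicomputers sharing one external Ethernet interface
clusters_per_1u   = 8       # plus a power unit in the same 1U enclosure
watts_per_node    = 1.0     # "less than 1 Watt" each, taken as an upper bound
power_ceiling_1u  = 160.0   # claimed ceiling for the whole 1U server

nodes_per_1u   = nodes_per_cluster * clusters_per_1u   # 112 nodes
compute_watts  = nodes_per_1u * watts_per_node         # at most ~112 W
overhead_watts = power_ceiling_1u - compute_watts      # Ethernet, USB switches, power conversion

print(f"{nodes_per_1u} Millicomputers per 1U, at most {compute_watts:.0f} W of compute")
print(f"leaving roughly {overhead_watts:.0f} W of headroom for networking and power conversion")

# Cockcroft's 2010 guess: 100,000 MilliClusters at $10K each
print(f"implied market size: ${100_000 * 10_000 / 1e9:.0f} billion")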

Comparing a MilliCluster-based 1U server with Sun's x4100 Opteron and T1000 Niagara servers, Cockcroft says:

"For the same 1U package size and similar cost per package power is much less than a Niagara, less than half of an Opteron system. Total RAM capacity is similar, the raw CPU GHz is double, worst case GHz per Watt is six times better than Opteron, three times better than Niagara. Flash storage is 1000x faster for both random and sequential IOPS.

Applications suitable to run on Millicomputers include:

Applications that can be broken into small chunks, small scale or horizontally scalable web workloads, legacy applications that used to run on 5 year old machines, graphical video caves and storage I/O intensive applications are the best candidates to run on Millicomputers."

Although still in very early development, Millicomputing appears to be quite a paradigm shift; could this be the enterprise hardware platform of the (greener) future?

Until the next post,

Steve

19 August 2007

FCC Sets Spectrum Auction

From PC World:

The long-awaited auction of 700MHz 'beachfront property' is scheduled to begin in January.

Stephen Lawson, IDG News Service

Saturday, August 18, 2007 3:00 PM PDT


The U.S. Federal Communications Commission will begin its long-awaited auction of 700MHz radio spectrum on Jan. 16, 2008, the agency said Friday.

The sale is expected to take in US$10 billion or more in bids for what has been called "beachfront property": licenses for frequencies that can carry mobile data and voice services over long distances and through walls much better than current cellular spectrum. The frequencies are currently used by analog television stations, which are scheduled to turn their channels over in 2009 as they move to digital broadcasting.

Google Inc. and others asked for rules in the auction that would help new entrants get into the national wireless business, such as a requirement that the winner sell some of the spectrum wholesale to other service providers. The FCC ultimately adopted watered-down openness rules, including a requirement that one part of the band be open to any device or application.

The agency is seeking public comments on the auction, designated Auction 73. They are due by Aug. 31.

Choice is good.

Until the next post,

Steve

Eric Schmidt talking about Web 2.0 vs 3.0

From YouTube.com.

When Eric speaks, a lot of people listen.




Until the next post,

Steve

Stephen Dukker, CEO NComputing, comments on OLPC

NComputing has for more than a year been one of the top three searches that bring users to this site. I personally operate their units at two sites without a problem.

From: Stephen Dukker, NComputing CNET News.com

Published: 07 Aug 2007 17:59 BST

via news.zdnet.co.uk

The last few years have witnessed an increasing focus on creating inexpensive, affordable computers for users in the developing world.

At the forefront of this movement is Professor Nicholas Negroponte, founder and former director of the MIT Media Lab. His not-for-profit One Laptop per Child (OLPC) project has been developing a laptop (targeted at $100 (£50) but currently struggling to break $200) suitable for use by every child in the developing world. Recently, Intel joined the board of OLPC and will even contribute funding to the project.

Helping people in the developing world cross the digital divide is a fundamental act of decency and generosity — and even self-interest — as these new markets grow, consumers spend and productivity surges. The need for technology among the under-served is so urgent, hopeful thinking goes, that even a computer with no commercial viability — no distribution channels, maintenance, training, programming services and, in fact, virtually no IT ecosystem at all — can meet that market's need.

As laudable as this dream is, the ideal unfortunately runs counter to a fundamental fact of life: a computer cannot exist independent of basic economic realities.

A computer is, rather, a creature of connectivity and collaboration. And, given the economic realities in the developing world, $200 computers cannot generate the profit essential for the creation of a robust IT ecosystem, which is essential to ensure successful deployment, ongoing operation and maintenance.

The price of a base-level personal computer today is about $400. That hasn't changed much in the last 10 years, although the power this computer delivers has increased profoundly. As a result, however, the world computer user base has been stuck at a largely saturated 850 million users for years. Unfortunately another billion potential users — most in developing and under-served markets like education — cannot afford the requisite $400. If we can merely squeeze down the price tag, have we solved their problem?

Only if you believe that OLPC's and Intel's $200 laptops, with their PDA-like, seven-inch screens and obsolete processors, are the answer. But the developing world is not just "village kids"; it is motivated, ambitious people engaged in business, agriculture, commerce, healthcare, finance and education.


For PCs to be productive in this business and educational landscape, they require a host of supporting services, plus reasonable features and capabilities. A PC must communicate, which mandates connectivity. That, in turn, demands configuration, maintenance, professional services, technical support, hardware and software upgradeability. Without a healthy ecosystem, a PC is not worth even $200.

Here in the developed world, the PC hardware makers have put up with profitless computing for years as a result of operating in a saturated, upgrade-driven market. We know our industry is in a sick condition and we have now driven the cost of "real PCs" down as far as it can go.

However, not everyone needs their own PC. What they do need is access to the functionality and benefits that the PC provides, delivered in an affordable and efficient way. That's where I believe multi-user computing fills the void.

This multi-user model is not new. During the 1960s, when computers were all mainframes and cost millions, multi-user computing, in the form of time-sharing (where we rented access by the hour using low-cost "dumb terminals"), was our first tool for expanding the market from the "Fortunate 500" to the rest of us. This model continued through the 1970s, with $100,000 and, ultimately, $10,000 minicomputers further expanding the market. In the 1980s came the PC and the world changed; ultimately, we all got our own computers.

Although the last 10 years have seen very little movement in the price of low-end PCs, technology advances have turned the 2007 entry-level PC into a very muscular piece of technology whose gigapower is more than 1,000 times that of a $400 box built in 1998. Only a fraction of today's PC users, such as computational scientists, extreme gamers, graphic artists and industrial designers, use more than a few percent of what these mainframes on a desk can offer.

As a result, the vast majority of those CPU cycles are wasted, burning energy (150 to 200 watts per box) that is costly and scarce in these markets and becoming ever more so. So why not harness and share this extra capacity, resurrecting these proven techniques and technologies from the past to take today's "mainframe on a desk" and put its power to work?

Enterprise computer users have been benefiting from the PC version of multi-user computing since 1990, something our industry has dubbed "server-based computing". Blade computing and virtualisation are the latest twists on this same multi-user concept.

However, these enterprise software and hardware components are expensive. The software licences alone often add up to more than the cost of the full or stripped-down PCs being used as the access terminals. These terminals (thin clients) are themselves as expensive as low-end PCs. It has been, thus far, a technology for the rich and fortunate.

A number of new firms, including my own company, NComputing, have reincarnated the thin client with non-CPU-based access terminals. Access terminals are being built today at costs as low as $11 and sold for well under $100 per user. At the same time, they provide manufacturers, distributors, resellers and maintenance partners with full commercial margins. The expensive software and high-end servers have been replaced by low-cost or free software and desktop PCs. These multi-user environments tap the power of low-end PCs to support 10 or more concurrent users, with power consumption of under six watts per user.
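
A quick per-seat calculation shows why the economics change. The round numbers below are the ones quoted in this piece (a roughly $400 host PC, terminals for under $100, ten or more users per host, under six watts per user); actual NComputing figures will differ:

# Per-seat hardware cost and power for the shared-PC model, using the
# round numbers quoted in the article. Purely illustrative arithmetic.

dedicated_pc_cost  = 400   # entry-level PC per user (USD)
dedicated_pc_watts = 150   # low end of the 150-200 W per box cited earlier

shared_pc_cost = 400       # one host PC shared by all users
terminal_cost  = 100       # access terminal, "well under $100" taken as an upper bound
users_per_host = 10        # "10 or more concurrent users"
watts_per_user = 6         # "under six watts per user" as claimed

cost_per_seat = shared_pc_cost / users_per_host + terminal_cost

print(f"hardware per seat: about ${cost_per_seat:.0f} vs ${dedicated_pc_cost} for a dedicated PC")
print(f"power per seat:    about {watts_per_user} W vs {dedicated_pc_watts}+ W for a dedicated PC")
print(f"power reduction:   roughly {dedicated_pc_watts / watts_per_user:.0f}x per user")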

All the evidence undercuts the widespread technology assumption about how best to liberate emerging regions of the globe from the energy-wasteful business model which is being foisted upon them today.

Stephen Dukker is chief executive of NComputing. He is also a founder and former chief executive of eMachines.

If you are really interested, perform a search on "NComputing Ndiyo Teradici". Leave me a comment and I will forward you the name of a computer manufacturer that is also interested.

Until the next post,

Steve

Clearwire and Sprint partner to build out WiMax in US

I was cleaning out my email box and came upon this article. T1 speed with wireless, covering about 300 million people in the US. I like it.

From: Stephen Lawson, IDG News Service

Sunday, July 22, 2007 4:00 PM PDT

With wider national coverage than either company could have had on its own, Sprint Nextel Corp. and Clearwire Corp. say they can achieve on their joint WiMax network some of what Google Inc. and others want to see in the prized 700MHz band.


The companies announced Thursday they will link their respective WiMax wireless broadband networks to give subscribers a seamless roaming experience across territories that eventually will cover 300 million U.S. residents. The network will deliver between 2M bps (bits per second) and 4M bps downstream and about half that speed upstream, they said.

Sprint and Clearwire plan to use WiMax so that subscribers can choose among a wide range of devices built to the open standard on which the technology is based. In addition, they intend to let users access any application or service on the Internet, said Atish Gude, senior vice president of mobile broadband operations at Sprint.

The upcoming auction of 700MHz radio spectrum around the U.S. has sparked a fierce debate between traditional carriers and Google Inc., Frontline Wireless LLC and others over how that spectrum should be used. Current mobile operators generally sell a limited set of devices locked to their networks and favor their own applications among the offerings their customers can access on their phones. Google told the FCC on Friday it won't bid unless the government requires any-device, any-application networks. It also wants a rule forcing the winners to sell wholesale network access to other service providers. Sprint doesn't have plans for wholesale access.

Sprint, which announced its WiMax plans last year, said Thursday it owns spectrum licenses for the WiMax band that cover 185 million U.S. residents. Clearwire has spectrum in the same band to serve 115 million people. The combined network should be fairly comprehensive, covering urban, suburban and rural areas across the country, which today has a population just over 302 million, the Census Bureau estimates.

After "soft" launches in Chicago and Washington, D.C., at the end of this year and commercial availability starting next year, the companies together aim to reach 100 million people by the end of 2008. This is the same 2008 goal Sprint had given previously by itself, but it was an aggressive goal then and is now a conservative estimate, Gude said. The companies did not estimate when the full network would be completed.

Given the higher frequency Sprint and Clearwire plan to use, at 2.5GHz, their network is likely to need more base stations than a similar network using 700MHz, which travels over long distances and through walls more easily.
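
The distance claim can be sanity-checked with the textbook free-space path loss formula, FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44. It ignores walls and terrain, so it understates the real-world gap, but it already shows why a 2.5GHz network needs a denser grid of base stations than a 700MHz one:

import math

# Free-space path loss in dB for a distance d_km (kilometres) and
# frequency f_mhz (MHz): a textbook simplification, not a coverage model.

def fspl_db(d_km, f_mhz):
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

for f_mhz in (700, 2500):
    print(f"{f_mhz} MHz over 5 km: {fspl_db(5, f_mhz):.1f} dB of path loss")

# The frequency terms alone differ by 20*log10(2500/700) ~ 11 dB,
# which is one reason a 2.5 GHz network needs more base stations
# than a 700 MHz network for comparable coverage.
print(f"extra loss at 2.5 GHz vs 700 MHz: {20 * math.log10(2500 / 700):.1f} dB")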


Video about Xohm

Clearwire already operates a wireless broadband service and has been planning to convert it to standard mobile WiMax, which is only now emerging as a commercial technology. The company is backed by heavy hitters including Intel Corp. and Motorola Inc.

Sprint, struggling against larger rivals AT&T Inc. and Verizon Wireless Inc., could use Clearwire's helping hand. The deal may let Sprint realize its WiMax dream at less expense, said IDC analyst Godfrey Chua. There seems to be little overlap between the two carriers' licenses, so the partnership won't really hurt competition and is likely to win government approval, Chua said.

The first users will access the network with standalone modems, notebook add-on cards or PCs and smaller Ultra Mobile PCs with embedded modems, Sprint said. But it sees mobility as the key driver of the network and believes WiMax handsets will arrive by 2009, Gude said.

However, Chua thinks ever-faster cellular technologies have the edge for mobility. The Sprint-Clearwire network will compete mainly against DSL (digital subscriber line) and cable modem services, with the advantage that subscribers can set up a notebook away from home and enjoy the same service. It could significantly boost broadband competition, he said.

"It's making the world a little bit more interesting now," Chua said.

Until the next post,

Steve

01 August 2007

Received this article from RED HERRING in my inbox.

PC Killer?

Desktone snares $17M, vies to convert big companies to virtual desktops.
July 30, 2007

By Ken Schachter


Desktone, which aims to get companies to junk their PCs in favor of thin clients with virtualized desktops, has landed $17 million in a series A funding round announced Monday.


Highland Capital Partners and Softbank Capital led the round, which also included China-based Tangee International and strategic investor Citrix Systems.


Desktone, whose software is designed to tie together client devices, operating systems, storage, applications, servers and network technology, is likely to face stiff competition. Some corporate IT departments prefer to build the virtualized desktop on their own. Startups like 2-year-old Kidaro, a New York City-based company backed by Genesis Partners, Storm Ventures and Opus Capital Ventures, also compete for a share of the market.


Microsoft, whose Windows operating system dominates the corporate workplace, in 2006 acquired Softricity, whose chief executive was Harry Ruda, now the chief executive of Desktone.


Peter Bell, a partner at Highland Capital and a Desktone board member, said Desktone, based in Chelmsford, Massachusetts, offers the IT managers of large corporations a respite from the complexities of managing thousands of networked PCs.


“Their software platform is removing some of the physical complexity where you might have thousands of desktops,” he said. Desktone will provide the IT organization with a single management console.


Ron Fisher, also a board member at Desktone and managing partner at Softbank, said the cost of maintaining a personal computer in a corporate environment can be several times the cost of acquisition.


“When you move people’s PCs, the cost of getting them set up is several hundred dollars,” he said.


Rather than charge companies one large fee for its software, Desktone is selling “software as a service,” charging companies a per-user fee per month or per year.

That model appeals to companies seeking to improve their return on assets, said Desktone Chief Operating Officer Paul Gaffney, who has served in senior positions at superstore chains Staples and Office Depot.


One-year-old Desktone already has some undisclosed corporate customers and intends to focus initially on financial services firms in the Boston-New York City corridor, Mr. Bell said.


Fort Lauderdale, Florida-based Citrix Systems’ infrastructure is used in some virtualized desktops to deliver software applications.


Though some companies will want desktop users to have “thin clients,” allowing the IT department to serve and store all information, others will be content to coordinate software patches and updates through a “virtual desktop” on PCs, Mr. Fisher said.


Until the next post,


Steve