Technology News



Another Perspective



BEARS' TURNS

You're not alone if you've become skeptical about the near-term prospects for banks that have overpaid for assets. The banks are having trouble selling these assets because they can't find trusting buyers. With so many people asking hard questions about bankers and financiers, this nosy, impolite attitude is becoming infectious. It is even spreading into other areas, such as information technology. Some customers are subjecting vendors' claims about value to serious examination, and while this prudence might be practical, it isn't pretty.

We are not only in a bear market for many kinds of securities, we seem to be in a bear market of faith in regard to financial institutions and regulatory authorities. One reason we are in a bear market is that we are not in a bull market, and there are only two choices. There are no bunny markets or sparrow markets or butterfly markets. Markets are tough, so they're named for tough creatures. It's not entirely clear where the terms bull and bear began, but it's easy to show they've been in use for more than two hundred years. What is probably more important right now than the etymology is the way people are experiencing connections between events on Wall Street and those on Main Street; the bear market is creating stress for individuals and businesses far from the financial capitals as they try to conduct their ordinary practices.

With so much going so wrong, it's not surprising that corporate bigwigs are looking for any excuse to duck out of the boardroom, but instead of heading for the golf course some of them are visiting the computer room. Whatever their intentions, chances are they will cause some mischief. Vendors seem to have spotted this phenomenon, and the wandering directors have become a new audience for sales pitches with a value theme. Incumbent lead vendors are suffering fresh attacks from rivals who claim their systems can deliver more for the money than the ones that large enterprises now have in place. If the incumbent has become complacent, it could be evicted.

New York's bronze bull
Bullish New York
This bronze bull proclaims Gotham's belief in
the ultimate prevalence of rising financial markets

The company with the most to lose, the one that has in the past had big wins, is IBM, which is the incumbent prime vendor at the majority of large enterprises. While IBM's enterprise customers use all its platforms, System z mainframes are the most prominent central machines, as they have been for decades.  (A number of big shops use System p and System i servers, and we suspect more will do so in the future.  Typically, the non-z shops brought in these other IBM systems to replace small mainframes that IBM no longer seems to love.)

Whatever the user's main platform, IBM's business case for the central server complex generally involves showing how Big Blue's virtualization technologies combined with very high utilization of server resources get the user organization more bang for the buck.  Not surprisingly, IBM's rivals, particularly Sun and HP, disagree.  Each says that its own mix of architectural features and virtualization technologies will give a customer better value than an IBM alternative, particularly an IBM System z mainframe.  To the extent they are right, IBM's rivals are pointing to a problem with the IBM value model that could undermine the trust customers have in Big Blue's big iron, much the way the slice of the mortgage market that's euphemistically called subprime but really should be called junk has brought the surrounding credit market to a state of ignominy.

Sun, HP, and commentators who believe mainframe alternatives almost always deliver better price/performance like to go after IBM's flagship z line because it is such a big, lucrative target.  Like Willie Sutton when asked why he robbed banks, IBM's rivals chase mainframe shops because that's where the money is.

Consequently, it is hardly surprising that big iron bears, hoping the tumultuous times will get their case a more favorable hearing, are highlighting the uncertainties and weak spots in IBM's business case and combing the market for situations in which they can talk to not only information processing executives but also general management and corporate directors.  Unlike the people who are closer to the computers, and who probably selected the systems that are in place and under fire, general managers and directors don't have to defend their past choices.  This means the contenders' criticisms of IBM's cost model might get a better reception than they would if they only reached ears that live in the glass house.

There's more to this than just one vendor claiming to be better than another.  Sun, HP, and others who say the mainframe is overpriced are sometimes right about issues that involve a lot of money and talent, issues that can actually affect a company's ability to deliver revenue and earnings.

Everyone in computing, whether on the buy side or the sell side, knows that there are sites where the mainframe is a great value and sites where it's not.

The problem for incumbent vendors like IBM is that under good conditions they can usually boil things down to what is probably an oversimplified analysis.  The more complex reality, exposed during a crunch, such as the one happening right now, is that most big shops have a mix of jobs that might best be hosted on a mix of platforms.  But that is in an ideal world.  Unfortunately, in the real world, user organizations simply don't have the diverse talent pool it takes to properly support a variety of disparate servers.  So some servers, particularly IBM central systems, are well supported while satellite servers may not get the same superior staffing.  This makes them look bad, and that is one reason IBM's centralization pitch can succeed where by some reckoning it ought to fail.

When an information technology department makes a choice that doesn't work out, even if a technical solution is at hand there may be no political solution.  Computing executives have a hard time explaining to corporate managers that some old problems might have to be solved.  It can be a lot easier to bury unsatisfactory results than to tear down inefficient systems and replace them with better ones.  Computer folk in this corner have a lot in common with bankers who have erred; it's nothing to be proud of.

When mortgage lenders began the practices that led to the current debacle, they knew from their history that most of their clients would pay off their mortgages without a hitch.  If a borrower hit a bad patch or some property lost its worth, well, then deals could turn sour.  But bad deals were a small minority.  The lenders' data showed that it was possible to forecast with some accuracy the number of defaults that might occur under various conditions if for any reason the benign economic climate changed for the worse, as long as lenders stuck to tried and true practices when they first arranged their loans.

Twenty years ago, lenders were pretty careful about how much they loaned and to whom.  In part this was because they were held to high standards and in part it was because they were tightly regulated.  More recently, when the US government, including most prominently the Federal Reserve Bank, decided that as a matter of policy it was going to encourage more people to put money into homes, the standards changed.  As low interest rates made mortgages cheaper, buyers started thinking house values could only go up up up.  And for a while this is just how events unfolded.

Now the housing bubble has burst and financial instruments based on mortgages have turned south.  This kind of bubble might have arisen under any set of rules, but for a long time, at least in the USA, many financial institutions were barred from practices that ultimately contributed to the current mess if the practices had to do with mortgages on homes, farms, and factories. 

Maltese Falcon
The Glass-Steagall
This 1933 law imposed regulations
to protect the public from banks
and banks from their own worst inclinations,
but in 1999 the USA decided its financial markets
had become pretty much foolproof
and it was repealed

Until 1999, the United States had a law that militated against certain kinds of developments in financial markets, such as the current real estate bubble.  The 1933 Glass-Steagall Act, which among other things created the Federal Deposit Insurance Corporation, defined groups of financiers.  It included rules that constrained the activities of savings and loan institutions, set somewhat different limits on the conduct of commercial banks, and provided other rules for merchant banks (or stock brokerages if you prefer that term).  S&Ls and to a considerable extent commercial banks were by one or another means basically prohibited from creating, investing in, or selling the kinds of mortgage mash-ups and corporate debt mash-ups that have lately become smash-ups.

What happened in 1999 was this: The USA got so smart that it knew it could never experience an economic storm like the one that began in 1929 and led to the Great Depression.  Because the USA had become so smart, the rules that prevented it from really making the most of its money, its financial institutions, its communications networks, its computers, and above all else its horde of energetic bankers armed with Atlas Shrugged, mood elevating chemicals, and hubris seemed silly.  So the rules were abolished.  Nine years later, a lot of wealth is getting abolished.

What is going on in finance has also happened in computing, which has produced bubbles of its own.  There was one at the turn of the century, when dot com outfits were setting up zillions of web sites.  Investors in startups that became shutdowns were shellacked.  The dot com suppliers, such as Sun, took a beating, too.  IBM mainframes were barely visible in the dot com boom, which used other platforms.  IBM realized it had been very fortunate to dodge two bullets.  First, it did not get badly hurt in the dot com collapse.  Then, after the bubble burst, the people who were capable of getting client/server technology to work alongside and instead of mainframes rarely got a chance to use their skills for this purpose.

Nevertheless, IBM didn't count on staying that lucky in the future.  The company put a really big effort into making Linux a credible guest on all its server platforms and supported the environment with improved virtualization and specialty engines. 

But IBM is still pitching its data processing cost model even when it uses words that are common in the client/server world.  This is nothing IBM should be ashamed about, because it is the point of view that makes Big Blue look best.  But it's not the only way to see things.  The result is an opening for IBM's rivals, if they can get a chance to pitch their wares, so they try to identify enterprises where computing is heavily interactive with large swings between peak and trough workloads.  Some of these companies may conclude that the models favored by Sun and HP might be more useful than the IBM data processing model.

Even where workloads are best analyzed using IBM's methods, knowing a bit about both models is important.  If nothing else, an educated customer can see why it is so hard to compare the two models.  There's no formula that translates mainframe data processing capacity measured in, say, MIPS, into Unix, Linux, or Windows capacity measured in, say, X86 or Power or Sparc GHz.  On the other hand, once you restrict your competitive analysis to realtime interactive jobs under Linux or Unix or Windows, you do have a chance to see relevant benchmark results and possibly to run some actual tests.

Some observers, including the provocative Paul Murphy, take a critical look at IBM's z Linux claims and raise most of the key points that users have to consider.  Murphy's published work was done before the z10 was announced; some of it even refers to the old Multiprise 3000, and it has to be read with that in mind.  For back-of-the-envelope calculations, though, it's safe to follow Murphy and try to compare Power speeds with X86 speeds by multiplying Power clock rates by two.  But if you do, be extra careful.  With the Power6 and z10, clock rates doubled compared to the z9 but engine capacity increased by something like fifty percent, not a hundred percent.  So by all means double the clock rate to make a comparison, but remember that you are probably overrating the engines in the z10.  At the same time, you might want to give IBM the benefit of slightly reduced cost for z10 memory; it's about $6,000 per GB, compared to at least $8,000 on a z9.
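The back-of-the-envelope arithmetic above can be sketched in a few lines of Python.  The clock figures match those used elsewhere in this piece; the 0.75 discount factor is our own reading of the fifty-percent capacity note, an estimate rather than an IBM number.

```python
# Murphy's rule of thumb: a Power core is roughly equivalent to an X86
# core at twice the clock rate.  The capacity_factor lets you discount
# that doubling, as the text suggests for the z10 generation, where
# capacity rose about 50 percent while the clock rate doubled.

def x86_equivalent_ghz(power_clock_ghz, capacity_factor=1.0):
    """Crude X86-equivalent capacity: double the Power clock rate,
    then scale by an optional capacity discount."""
    return power_clock_ghz * 2 * capacity_factor

# Taking the doubling at face value, a 4 GHz z10 core looks like
# 8 GHz worth of X86:
naive = x86_equivalent_ghz(4.0)

# Discounting by 1.5/2.0 = 0.75 to reflect the capacity caveat gives
# a more conservative 6 GHz:
adjusted = x86_equivalent_ghz(4.0, capacity_factor=0.75)

print(naive, adjusted)
```

On the conservative reading, a quad-core 2 GHz X86 box would be a slight overmatch for a single z10 engine rather than an even trade.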

Frankfurt's bull and bear
Frankfurt Frankness
Germany's financial capital is as pleased as New York
to host some major markets, but its public sculpture
reminds anyone who cares to notice that
both bulls and bears prowl its avenues

What this means is that a very crude comparison of z Linux with Linux on another platform might begin by equating a single 4 GHz z10 core to 8 GHz worth of X86 core.  That amount of X86 power can be found on a quad-core 2 GHz Opteron or Xeon.

Sun's attack on IBM's numbers highlights the way IBM claims z Linux can get high utilization while Linux on other platforms should be priced out at relatively low utilization.  If you can disregard Sun's understandably pugnacious attitude and what we at first believed was a pseudonymous posting (it's on a blog by Jeff Savit; savit is Hindustani for sun) you might be able to see a valid point.  [Weeks after we wrote and posted the piece we heard from Mr Savit, who assured us he and his name are real and also pointed out that our web site Contact page had zits.]  Still, the Sun piece has at least one big flaw, and it's in IBM's favor: Sun seems to price IFL processors on the z box without allowing for the price of memory and channels, and that may underestimate the cost of a z Linux system by as much as 50 percent.  (This would not be the case if a user got a big hardware discount from IBM.)  To see the possible error, suppose an IFL needs 20 GB of memory to deliver the kind of performance IBM suggests each engine can provide.  If so, that would bring the memory cost up to the price of a Linux core.  If that seems like a lot of memory, bear in mind that the system will be used to support dozens or hundreds of Linux images.  A four-way X86 server running a demanding virtualized workload would probably need 16 to 32 GB of main memory, too.
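To make the memory point concrete, here is a small Python sketch of the cost arithmetic.  The $6,000-per-GB memory figure comes from the text above; the IFL engine price and the 20 GB allotment per engine are illustrative assumptions for the sketch, not quoted IBM prices.

```python
# Illustration of how pricing an IFL engine without its share of memory
# can understate the cost of a z Linux configuration.  Dollar figures
# are assumptions except the $6,000/GB z10 memory price cited above.

IFL_PRICE = 125_000          # assumed IFL engine price, dollars
Z10_MEMORY_PER_GB = 6_000    # z10 memory price cited in the text
MEMORY_PER_IFL_GB = 20       # memory assumed per IFL for good performance

engine_only = IFL_PRICE
with_memory = IFL_PRICE + MEMORY_PER_IFL_GB * Z10_MEMORY_PER_GB

# Fraction of the true configuration cost hidden by ignoring memory:
understatement = 1 - engine_only / with_memory

print(f"engine only: ${engine_only:,}")
print(f"with memory: ${with_memory:,}")
print(f"cost understated by {understatement:.0%}")
```

With these assumed numbers the memory alone roughly matches the engine price, so an engine-only quote hides close to half the configuration cost, which is the order of magnitude the Sun critique overlooks.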

(Sun hasn't run all the numbers to compare its X86 boxes with IBM's p and i lines.  Chances are, the comparisons would not be so dramatic, because IBM's pricing of p and i systems is generally lower than what it charges for the same capacity on a mainframe.)

Any critical look at IBM central systems would not be complete without bringing in HP's pitch for Itanium servers, last revised during the z9 generation.  Like Sun, HP confines a lot of its competitive arguments to the IBM mainframe, largely ignoring IBM's p and i product lines.  When HP tries to show that its Itanium boxes can run Linux cheaper than IBM can run z Linux, it does manage to hit one target that IBM might wish it missed: power consumption.  HP says its Itanium systems are greener than z9 mainframes and that would make them greener than z10 boxes, which can draw more than 30 kW.  Itanium systems are not on the same price/performance curve as X86 platforms, but they are often still competitive.

IBM says that its IFL and z Linux business is growing very nicely, which seems to indicate that a lot of enterprise customers are comfortable, whether or not they believe in the mainframe model of computing costs.  IBM has not been as vocal about Linux on its p and i lines, but we have to give Big Blue the benefit of any doubt and guess that its sales pitch goes over in those segments, too.  Sun and HP have not smashed holes in IBM's glass house installed base.  So their arguments, however well constructed, have not yet won the hearts and minds of that many loyal IBM central systems customers.

Still, it's probably wise for users of big IBM computers to be careful when estimating the cost and performance of Linux on a machine that was installed primarily to perform data processing.  IBM's rivals and critics are not likely to be entirely wrong.  IBM's large systems, particularly z boxes, might choke on some Linux workloads.  If this happens, and the crowd from the boardroom is hanging around when the mainframe starts looking like a busted banker, computing executives could end up feeling pretty vulnerable.  They probably ought to consider how they might respond to challenging questions about computing costs and central system value that come from the boardroom.  The only way to do this is to evaluate the arguments made by both incumbents and challengers, which at large enterprises means studying the case presented by IBM and also taking seriously the kind of analysis presented by Sun and HP as well as their outside evangelists.

It can be a lot of work, but these days it just might be something that needs to be done.  Whether a careful cost analysis wins the hearts and minds of corporate bigwigs on the hunt for cost savings in the glass house will come down to individual cases.  For computing professionals, it might also depend on how much bull one can bear.

— Hesh Wiener April 2008


Copyright © 1990-2014 Technology News of America Co Inc.  All rights reserved.