IT 3.0

For as long as I can remember, the field in which I work has been called Information Technology (IT).  And while it is true that other terms have also been used (e.g. Data Processing), it is difficult to dispute the ubiquity (and longevity) of IT as the official term.  When I think of the term literally, I can’t help but feel that it is only in this decade that the field of Information Technology has truly arrived in a mainstream way.  In other words, IT has finally put “Information” front and center.

Consider Google.  They are not a hardware company, and their software offerings, while compelling in concept, are not how they got to where they are today.  They are, as they put it, all about organizing the world’s information.  Sure, computer-based technology has been about dealing in information since its inception, but there were always other assets to sell, such that the information itself would somehow take a back seat.

If you look at three broad shifts in the IT landscape, and at how the dominant players made money, the point becomes clearer.  The first dominant player, IBM, made its hay with hardware.  Sure, the hardware had software on it, which they too provided and serviced, but the “iron” was typically out in front.  Perhaps this focus on hardware is what let Microsoft essentially walk right in and own the software space.  IBM is still relevant in a huge way, but there is no secret about who won the IBM v. Microsoft software war.  By the time IBM realized that the hardware market had matured and that the next real battle was in the OS and software space, it was too late.  Knowing all too well the lessons of history, Microsoft was intent on not being left behind when the World Wide Web caught fire.  So much so that they were not going to let the upstart Netscape win the browser war.


Microsoft won the browser war all right, but it seems the real battle was elsewhere.  Sure, there was lots of talk about the browser platform and its capabilities, but what ended up being important was simply all of the information being created as a result of the WWW explosion.  Maybe this was crystal clear to Brin and Page on day one, maybe not.  In any case, their interface spoke volumes: one textbox, one button (though the latter has since doubled – Moore’s Law, I suppose).  In other words, Google was saying, as simply as technically possible, “tell me what you’re looking for.”  Perhaps that’s what first started to pull eyeballs away from Yahoo.  After all, Google didn’t seem too wrapped up in all of the talk about “portals” that seemed to be consuming Yahoo, Netscape and, yes, Microsoft.  For Google, it was (and still is) all about the information.  Everything about their DNA was about getting the most relevant information to the end user as fast as possible, UI be damned.  Perhaps that was the differentiator between Google and the other established players in the space.  For the other guys, this concept so tersely labeled “search” almost devolved into just one of many features in the cluttered software and portal spaces.  For Google, “search” wasn’t a feature but the entire platform.  And the measure of the platform was neither hardware specs nor software features but, quite simply, information.

So now that Google is the de facto leader in information brokerage, I wonder what the next tectonic shift will be.  Sure, we hear about cloud computing, mobile computing and the semantic web (i.e. Web x.0), but I can’t help but feel that these are all evolutions within the hardware, software and information stages.  Maybe that’s simply where we go from here, or maybe IT just needs a new name.


Interest in the Java Cloud?

Last Thursday, I sat in on an informal idea-session with some industry colleagues and, as a result, had the pleasure of meeting Tim Bray, currently at Sun Microsystems.  Like most tech thought leaders, he was instrumental in drawing lots of ideas and opinions out of the group (sometimes to amusing effect).  Somewhat expectedly, the topics of virtualization and cloud computing came up during the discussions.  Like most in the industry, I am fascinated to some degree by the still-untapped potential of each, albeit more so with the latter.  The majority of my career has been spent with large institutional customers with sprawling heterogeneous data centers, all suffering from capacity fragmentation.  All now use virtualization at some level, and a subset have also employed a compute grid of some kind.  In each case, the shortcomings have been almost as apparent as the significant benefits.  In the case of virtualization, the impending virtual sprawl and being limited to merely “slicing up” hardware are the obvious drawbacks.  With grid, the proprietary nature of the APIs is what struck me most as a limiting factor.  Notwithstanding the enormous benefits and game-changing potential of each technology (not to mention the claims of the leading vendors in the space), I am still left wanting by what exists currently (yes, techies are an insatiable bunch).

In the case of virtualization, I have often wondered what would happen if the hypervisor model could invert itself.  In other words, what if it could aggregate nodes into a cloud space while still offering the “commodity” image of an unmodified OS?  Would this not be the “chocolate and peanut butter” moment for virtualization and grid?  While not minimizing the complexity necessary to accomplish such a thing (and my somewhat limited view into what it would involve), I can’t help but think that we should be close.  Node-binding technologies are everywhere these days, and many of them would play a role.  Technologies like InfiniBand, and perhaps more specifically RDMA (with or without InfiniBand), must be building blocks, I would think.  With clustered databases becoming mainstream, I would imagine a cross-node, commodity-based hypervisor has to be in a lab somewhere.

I proposed this idea during the discussion last week and was a little surprised that the reaction from the group, while positive at times, was still somewhat mixed.  That is, until I fully understood the reasons for the alternate views.  Essentially, there was a suggestion that maybe the OS as we know it is not necessary to get to the next game-changer.  I then modified my view to be a bit more inclusive.  “Okay,” I said, “so what about a Java cloud?”  That question was greeted a bit more positively.  I was then taken back to a moment I had experienced with one of my customers about 18 months ago.  I recall the look on his face when I had mistakenly suggested that Azul Systems was based on commodity hardware.  It was the “let’s get it in a lab” look.  You see, this customer had already looked at Azul Systems and had essentially turned his attention elsewhere, largely because it was based on “proprietary” hardware.  Hearing the misleading suggestion that it had gone the “commodity” route renewed his interest for another look (NB: the quotes around both “proprietary” and “commodity” – these definitions can be philosophical rat-holes).  If this customer could fire up a few nodes of his own to kick the tires, he would certainly divert resources to put the product through its paces.  While one may argue whether the proprietary vs. commodity concern was a good enough reason, there’s something to be said for wanting a low barrier to entry and not being tied to a single vendor’s hardware.

What if there were a JVM that could work much the same way as Azul Systems’ JVM, except on hardware already existing in most data centers?  Sure, you would still need some first-class engineering to get it right (as in the case of the clustered DBs), but think about the uptake in this case.  Then imagine a service offering where you’re given your own slice of a JVM cloud (living on who knows how many nodes) and you drop your entire application, unmodified – web (e.g. Jetty), database (e.g. Derby) and whatever else it needs – into the Java cloud.  You’d still likely need some JMX-based services for management, some handle to your file system for your Java DB and other file-based content (ZFS might play a role here), as well as some encapsulation of how the network is exposed to you, but if it were done right, such a service offering would be compelling.  This is especially true when one considers that a JVM is not just about Java anymore (as Tim pointed out by citing the DaVinci project).  I would also imagine that since a cross-node JVM has already been made to work on one type of hardware, it should be possible to make it work on another (e.g. x86_64, Sparc CMT, etc.).  I also can’t help but think that this might be easier to achieve than the “unmodified” cloud OS (again, limited-view disclaimer applies – and by “unmodified” I mean that the dependent bits do not have to be recompiled to take advantage of the service).
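The JMX hooks mentioned above already ship with every JDK, which is part of why they are a natural management surface for a Java cloud.  As a rough illustration only – the CloudNode bean and its single attribute are hypothetical, invented here for the sketch – a node could expose itself through the platform MBean server like this:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class CloudNodeDemo {

    // Standard MBean convention: JMX pairs "CloudNode" with "CloudNodeMBean" by name.
    public interface CloudNodeMBean {
        int getActiveSessions();
    }

    // Trivial stand-in for a node; a real cloud node would report live figures.
    public static class CloudNode implements CloudNodeMBean {
        private volatile int activeSessions = 3;
        public int getActiveSessions() { return activeSessions; }
    }

    public static void main(String[] args) throws Exception {
        // Every JVM has a platform MBean server built in; no extra libraries needed.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("cloud.demo:type=CloudNode");
        server.registerMBean(new CloudNode(), name);

        // A management console (JConsole, or a provider's own tooling)
        // would read this same attribute remotely over RMI.
        Object sessions = server.getAttribute(name, "ActiveSessions");
        System.out.println("ActiveSessions = " + sessions);
    }
}
```

The appeal is that the same mechanism any app server already uses locally would scale up to the cloud slice: the provider’s tooling reads your attributes and invokes your operations without your code knowing how many physical nodes sit underneath.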

So whether you believe the unmodified cloud OS or the unmodified cloud JVM are the next real game-changers (I see roles for both), or you believe in something entirely different on the horizon, it would be great to hear from you.

Hello world!

For the digitally inclined, the etymology of “hello world” is well understood.  Being a proud owner of a well-marked-up first edition of K&R (from my college days), I count myself amongst the aforementioned.  So one might say that the auto-generated title (courtesy of WordPress) of this, my primordial blog post, is quite apropos for setting the tone of where the majority of my posts will veer.  I am a proud geek by trade, prone to occasional rambling, so it is somewhat surprising that Dec 5th, 2008 would be the date of my first-ever public blog posting.  I’ve been in the geek trade for quite some time – about 20 years – and though public blogging is not nearly as old, it is safe to say I am a bit late to this party.  That said, I will suffer the messy dip and make a go of this.

Which brings me to the first topic of discussion – the title of the blog itself.  The day before yesterday, while talking to my oldest son, I suffered an instance of this blog’s namesake.  My son had recently discovered a hidden talent for the art of picture-taking.  The photography teacher at his school was so impressed with his portfolio that she urged us to get him a film camera to help him “develop his eye.”  I dug out my old Minolta SLR with its 28-200 zoom lens and showed it to my son.  He looked it over, thumbed through the manual and really started to warm to it.

“So where do I preview the picture?”

“Um, you really can’t do that with these.”

“Are you serious?! How do I know how the picture came out?”

“Well, you have to get them developed when you’re done with the roll and then you’ll see.”

“Done with the roll?”

“Yeah, there’s a roll of film that gets developed in a dark room which by the way means you can’t expose it to sunlight…”

And on and on.  Eventually he sort of got it, but was of course shaking his head at how “lame” it must have been to take pictures “back in the day.”

And so was born the analog moment.  While this was technically not the first such moment for me (the vinyl/CD discussion had taken place years before), it was the first moment of mine so named (notwithstanding the possibility that others might have independently arrived at the same epiphany).  I had heard how aging could be marked by “senior moments” of brief forgetfulness, but felt comfortable, even hopeful, that mine were some years away.  The conversation with my son was an ironic age marker, especially considering my very digital trade and the fact that my son is favoring the fine arts as a field of study.  There I stood, the analog dinosaur, working in the very trade that makes these analog moments possible.  Across the baby-boomer-to-Gen-X continuum lies a large group of us who have fully experienced and understood both worlds.  Perhaps we are the only ones who will ever make such a claim.

So join me in saluting the analog moment by sharing some of yours.  Because after this post, my musings will be taking on a more distinctly digital flavor.