If you lump traditional data center virtualization, cloud computing, and SaaS together, it becomes clear that there are still a lot of opportunities in the area of "shared infrastructure". And that certainly includes the hardware.
In my case, the relationship between TelemetryWeb and companies like Feedlogic provides a good example of how the benefits of virtualization go beyond simply reducing the need for expensive on-site hardware (for the farmer, in this case). There are a number of twists and permutations to how we use "shared infrastructure" to lower barriers to new innovation. Virtualization tends to introduce something akin to a compounding effect, lowering cost and accelerating innovation well beyond the simple cost equation.
- The farmer leverages shared infrastructure that not only reduces the need for on-site storage of operational data on the farm, but also allows much of the analysis and other work to be done off-site. The cost and complexity of the actual widget that gets installed in the barn is significantly reduced.
- The device manufacturer itself leverages shared infrastructure for the cloud portion of its overall solution in the form of TelemetryWeb. By introducing a network connection, they were able not only to build a less complicated widget, but with TelemetryWeb to significantly reduce their own overhead and nearly eliminate their up-front R&D cost for the rest of the solution, too.
- TelemetryWeb also leverages shared infrastructure in our own service delivery, which in turn allows us to deliver our services at lower cost.
It is worth noting, though, that the effectiveness of virtualization, and of nearly every other form of shared infrastructure, boils down to the impact of the network. For example, what I find most interesting about so-called "smart" sensors and devices is that they are actually becoming more "dumb". We usually call something "smart" when it becomes network-aware. But TelemetryWeb is built on the premise that, as the availability, capacity, and speed of a given network improve, functionality will always continue to be aggregated and centralized.
In a perfect world, a sensor would consist of nothing but the ability to observe some sort of physical phenomenon or input and transmit it to a centralized location where all meaningful processing is done. Think about it: at some point, faster processors at any given point in the system are really just a (poor) substitute for insufficient connectivity.
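To make that division of labor concrete, here is a minimal sketch (in Python, using a hypothetical ingestion endpoint and made-up field names, not TelemetryWeb's actual API) of what a "dumb" sensor's job reduces to: sample a raw value and ship it upstream, leaving all interpretation to the centralized service.

```python
import json
import time
import urllib.request

# Hypothetical central ingestion endpoint -- all analysis happens server-side.
INGEST_URL = "https://example.com/ingest"  # placeholder, not a real TelemetryWeb URL
SENSOR_ID = "barn-7-feed-bin-3"            # made-up device identifier


def read_raw_value() -> float:
    """Stand-in for reading the physical input (e.g. a weight or level signal)."""
    # On a real device this would be an ADC or bus read; here we return a constant.
    return 42.0


def transmit(value: float) -> None:
    """Send the raw observation upstream; no local smoothing, thresholds, or storage."""
    payload = json.dumps({
        "sensor_id": SENSOR_ID,
        "timestamp": time.time(),
        "value": value,
    }).encode("utf-8")
    req = urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)


if __name__ == "__main__":
    while True:
        transmit(read_raw_value())
        time.sleep(60)  # sample once a minute; the cadence is arbitrary
```

The point of the sketch is what is missing: there is no local database, no analytics, no decision logic on the device. All of that lives behind the network, where it can be shared, upgraded, and scaled independently of the hardware in the barn.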
More evidence of this trend was delivered just the other day by Amazon. Their new Kindle Fire offloads the bulk of web-page rendering to the cloud services of EC2. This reduces the work that needs to be done on the device, lowers its cost, and lets them offer an Android tablet for $200.
For those of you having flashbacks to Sun Microsystems' vision of Network Computing from way back in the day, yes, this all sounds familiar. What is interesting now is that networks have finally expanded enough for the idea to start becoming viable. I'm sure someone who used to work at Sun is talking right now about how they were doing this over a decade ago. Ideas can only be valued within the context of their era, eh?