Tuesday, October 4, 2011

What does TelemetryWeb have in common with Tintri?

My friend Mark Gritter at Tintri and I were recently talking about the idea of "more than 100% virtualization." It is a pretty interesting concept. If you start virtualizing, in software, tasks that used to be performed in hardware, you're achieving more than full virtualization. The classic examples in the traditional IT data center are replacing a PBX (enterprise phone system) or a disk controller with pure software.

If you lump traditional data center virtualization, cloud computing, and SaaS together, it becomes clear that there are still a lot of opportunities in the area of "shared infrastructure". And that certainly includes the hardware.

In my case, the relationship between TelemetryWeb and companies like FeedLogic provides a good example of how the benefits of virtualization go beyond simply reducing the need for expensive on-site hardware (for the farmer, in this case). There are a number of twists and permutations to how we use "shared infrastructure" to lower barriers to new innovation. Virtualization tends to introduce something akin to a compounding effect that lowers cost and accelerates innovation at a level beyond the simple cost equation.
  • The farmer is leveraging shared infrastructure that not only reduces the need for on-site storage of operational data on the farm, but also allows lots of analysis and other work to be done off-site. The cost and complexity of the actual widget that gets installed in the barn is significantly reduced.
  • The device manufacturer, in turn, leverages shared infrastructure for the cloud portion of its overall solution in the form of TelemetryWeb. Not only could they build a less complicated widget, but with TelemetryWeb they also significantly reduced their own overhead and nearly eliminated their up-front R&D cost for the rest of the solution.
  • TelemetryWeb also leverages shared infrastructure in our own service delivery, which in turn allows us to deliver our services at lower cost.
If any of those layers of abstraction and shared infrastructure didn't exist, the cost of the equipment manufacturer's web-connected solution would likely rise beyond viability. Rewind the clock a bit, and you see this pattern repeat itself thousands upon thousands of times throughout the technology stack used in the solution. Many people take it for granted that things like network infrastructure, communications protocols, integrated circuits, and operating system software are accessible and inexpensive to leverage. But that wasn't always true, and before each of these things existed it was harder and more expensive to bring any given innovation to market.

It is worth noting, though, that the effectiveness of virtualization and nearly every other form of shared infrastructure boils down to the impact of the network. For example, what I find most interesting about what we call a "smart" sensor or device is that it is actually becoming more "dumb". We usually call something "smart" when it becomes network-aware. But TelemetryWeb is built on the premise that, as the availability, capacity, and speed of a given network improve, functionality will always continue to be aggregated, centralized, and virtualized. This reduces the need for domain-specific logic at the end points, reduces cost and complexity, and therefore lowers barriers to new innovations.

In a perfect world, a sensor will consist of nothing but the ability to observe some sort of physical phenomenon or input, and transmit it to a centralized location where all meaningful processing is done. Think about it: at some point, faster processors at any given point in the system are really just a (poor) substitute for insufficient connectivity.
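Here's a minimal sketch of what that might look like in practice, written in Python against a hypothetical ingest URL, with the hardware read stubbed out. The point is how little logic lives on the device:

    import time
    import urllib.request

    # Hypothetical ingest endpoint; a real deployment would have its own URL.
    INGEST_URL = "https://example.com/ingest/sensor-42"

    def read_raw_adc() -> int:
        """Stand-in for a hardware read; a real device samples an ADC register."""
        return 512

    # The device's entire job: observe, transmit, repeat. No smoothing, no unit
    # conversion, no thresholds; the server does all the thinking.
    while True:
        raw = read_raw_adc()
        req = urllib.request.Request(INGEST_URL, data=str(raw).encode(), method="POST")
        urllib.request.urlopen(req)
        time.sleep(60)  # one sample per minute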

More evidence of this trend was delivered just the other day by Amazon. Their new Kindle Fire uses EC2 cloud services to perform the bulk of the rendering of web pages. This reduces the work that needs to be done on the device, lowers its cost, and lets them offer an Android tablet for $200.

For those of you having flashbacks to Sun Microsystems' vision of Network Computing from way back in the day, it all sounds familiar. What's interesting is that the networks have finally expanded enough to make it start to become viable. I'm sure someone who used to work at Sun is talking right now about how they were doing this over a decade ago. Ideas can only be valued within the context of the current era, eh?

Friday, July 29, 2011

Yours Truly is Opening Act for TiE MN "How Did You Do It" Series

Just a quick post to mention that I'll be delivering a brief presentation as an opening act for Phil Soran, CEO of Minnesota-grown Compellent, which was just recently acquired by Dell for (...GASP...) $960 Million.

TiE Minnesota is kicking off a series called "How Did You Do It," which showcases successful entrepreneurs and asks them to explain how they got to where they are.

In addition to the "successful" entrepreneur, TiE will be inviting a "start-up" entrepreneur to kick things off with a short presentation of what life is like in the beginning, when the thought of a $960M exit provides extreme motivation, but sometimes feels impossible to achieve.

(In case it wasn't obvious, I'm the "start-up" in this equation!)

Wednesday, August 10, 2011 - 6:30pm - 9:00pm

University Hall, McNamara Alumni Center, U of M
200 Oak Street SE, #35
Minneapolis 55455
United States

Come check it out! Registration and more details here:

http://minnesota.tie.org/event/34/how-did-you-do-it-series-inaugeral-event-phil-soran-ceo-dellcompellent-1

Wednesday, July 27, 2011

The Economic Reality of Mobile Device Security

The smarty-pants over at iSEC Partners demonstrated a GSM network hack to break into a car, presumably one equipped with OnStar or a similar system.

http://news.cnet.com/8301-27080_3-20083906-245/expert-hacks-car-system-says-problems-reach-to-scada-systems/

This is a really important conversation that I've been tracking since 2005. They nailed about 1/3 of the problem with this statement:
One problem is that vendors are using smaller chipsets to save money and they don't have enough code space to handle large number cryptographic processing.
The simple economics are that faster chips are more expensive, require more electrical power, and tend to be larger in size. So if you really want to squeeze the last dollar out of an embedded system (which you do if you're going to build millions of them and sell them at the biggest margin possible), you use the cheapest chips possible. Similarly, if you want the device as small and/or low-powered as possible, you typically have to make compromises on speed as well. Combine all three of these requirements, and you tend to end up picking a pretty wimpy little CPU.

Encryption is really hard work, even for a computer. So much so that, for many embedded sensor devices and other systems, the actual work being done on the device is far less complex than the task of encrypting the data for transmission or storage. That makes encryption a very expensive proposition.

The other problem that they failed to mention is that encrypted messages tend to require MUCH more bandwidth. Sending a given message in an encrypted format requires more bytes than sending it in the clear. And when you want to scrunch the data down into the smallest possible M2M wireless data plan, encryption becomes very expensive here as well. Even if you have sufficient horsepower on board, the monthly overhead of a bigger data plan can represent millions of dollars added to the cost of your product. For something as widespread as OnStar, this might be tens or even hundreds of millions of dollars.
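Here's a back-of-the-envelope sketch of where those extra bytes come from, assuming a common AES-CBC-plus-HMAC framing (the exact numbers vary by protocol, but the shape of the problem doesn't):

    # Framing overhead for one encrypted message, assuming AES-CBC with
    # PKCS#7 padding and an HMAC-SHA256 authentication tag.
    reading = 20                        # bytes in the raw sensor message
    iv = 16                             # AES initialization vector, sent with each message
    padded = (reading // 16 + 1) * 16   # padding rounds up to the next 16-byte block: 32
    mac = 32                            # HMAC-SHA256 tag
    on_the_wire = iv + padded + mac
    print(on_the_wire)                  # 80 bytes encrypted vs. 20 in the clear: 4x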

The reason I think that the chip itself is only 1/3 of the problem is because the bandwidth costs are recurring and never-ending. Going from a $10 chip to a $20 chip might increase the cost of your product by $10 up front, but going from a $5/month data plan to a $7/month data plan will increase the cost of your product a lot more over time.
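A quick worked example, using made-up but plausible numbers, shows the scale of the difference:

    # Hypothetical fleet economics: one-time chip cost vs. recurring bandwidth cost.
    units = 1_000_000            # devices in the field
    chip_delta = 10              # extra dollars for the beefier chip, paid once
    plan_delta = 2 * 12 * 5      # extra $2/month over an assumed 5-year service life

    print(units * chip_delta)    # $10,000,000, once
    print(units * plan_delta)    # $120,000,000, paid out month after month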

But the fundamental problem with security is that it is what we in the software world call a "non-functional requirement".

Product requirements break down into two categories: functional and non-functional. Functional requirements represent the stuff your product needs to do in order to fulfill its intended purpose for your customer. "Unlock the door on the car" is a functional requirement. Non-functional requirements don't directly meet the customer's need, but are required in order to deliver the product. They represent "criteria that can be used to judge the operation of a system, rather than specific behaviors." (Wikipedia)

Unfortunately, when you start figuring out the cost of a system, the non-functional requirements are the first to take a hit. You can get the job done without them. Whether you can do the job well, or properly, may be up for debate. It is the difference between "must" and "should".

Most mobile embedded systems can do everything they need to do with a less powerful computer chip and a cheaper mobile data plan. When you do the cost analysis, meeting stronger non-functional requirements around security becomes really expensive. But these manufacturers are businesses, so they want to bring the product to market at a competitive price, with the highest margins they can manage. There are no laws covering this kind of information, and the consumer hasn't demanded stronger security. So they simply don't do it.

Like almost everything in the world, it all comes down to money and risk. If people really cared about security, they would demand it. But then the cost of many services we take for granted today would go up significantly. Some might no longer be economically feasible. So...how much do you want to pay for someone to say "Hello, this is OnStar, how can I help you?" at the push of a little red button?

The only good news in this is that both chips and bandwidth are becoming cheaper. Who knows...by the time the public starts demanding secure systems, it might actually be feasible to deliver them.

Friday, July 1, 2011

Stopping to Smell the Technological Roses

Interesting times. Here it comes again. Another cheesy "ain't life awesome?" moment from Neal. You've been warned...continue at your own risk.

I'm sitting on a plane writing this post using the brand-new Motorola XOOM I recently got, while listening to Andrew Bird's latest album in CD-quality stereo. I spent an hour or so scribbling some design notes for the next big feature to be implemented in TelemetryWeb, and just got done watching a couple of TED Talk videos in HD that I downloaded before the flight. I'm doing all this while traveling hundreds of miles per hour, 30,000 feet above the surface of the planet, in total comfort. Sure, I'm flying coach. But it sure beats trying to go from Atlanta to Minneapolis in a stagecoach on a dirt road.

As a technologist, these are my favorite moments. Those times when the power and potential of everything the human race is building crashes over you like a giant wave.

One of the TED Talks was by Ed Boyden. He showed how his team is beaming rays of light directly into a mouse's brain cells to alleviate problems ranging from depression to blindness. I like to think that I'm working on some pretty bleeding-edge stuff, but his technology is just plain nutty.

I dare anyone to watch a few TED talks and NOT feel good about all the smart people doing amazing things out there in the intersection between technology and society. Most of the technology we interact with simply didn't exist 150 years ago. Air conditioning. Internal combustion engines. Recorded sound and video. Computers. Airplanes. Space travel. The percentage of people globally who own a cell phone is staggering...even in some of the poorest parts of Africa.

Sure, there's a lot of bad stuff going on too. Sony's PlayStation Network got hacked. Heroku was under a DDoS attack. Amazon's cloud went down. Credit card information is being stolen from someone as we speak. An exploit kit for SCADA networks has been published to the wild. And the government still sucks at cyber security and protecting privacy. I'm not even going to start talking about recent wars and nuclear disasters.

It is true: the more we build, the more problems we create for ourselves. And maybe someday we'll create a problem so big that it will be the end of us all. But so far technology has an outstanding track record of fixing more problems than it makes. Our lives today are longer and more comfortable overall than at any previous time in history, by just about any measure. The planet's food-production capacity of 100 years ago couldn't possibly support the world's population today, yet the majority of humans worry less about finding their next meal than ever before.

We're going to have to double that food-production capacity again by 2050 to support the world's population growth, and figure out how to do it all with the same amount of water we have today. These are BIG problems to solve.

But take a step back and appreciate where we are, how far we've come, and how rapidly innovation and technology is accelerating. Give yourself just a moment to believe that we will find solutions for these problems. It'll make you feel good...at least until you turn on the 5 o'clock news.

Thursday, June 9, 2011

Defining The Cloud: You Say Tomato...

I was just reading this article about the current hacker attempts on cloud computing vendors. Interesting topic, for sure, and they touch on some items which I'll be posting about in the future.

But aside from the security discussion, their description of "the cloud" brought me back to a discussion I had with a potential business partner the other day. We were debating what it means to be "in the cloud". Someone had been making a big deal about the fact that Company X was hosting their own SaaS application on their own servers, and that because they weren't using Amazon, Force.com, or Google AppEngine to host their product, they shouldn't be able to declare themselves "in the cloud".

So I decided to consult the all-powerful, 100% accurate, ultimate-and-unquestioned authority on all matters in the world: Wikipedia.

http://en.wikipedia.org/wiki/Cloud_computing

(See, I may not be able to speak clearly with my tongue firmly in my cheek, but I can still type properly!!)

The only thing that seems to be consistent is that "the cloud" simply implies the use of remotely-hosted applications. In any case, it certainly does not restrict itself to only those vendors who leverage "Platform-as-a-Service" (PaaS) resources such as Amazon.

So by this definition, SaaS vendors who host their own product could certainly be "cloud". Heck...by Wikipedia's definition, even a single-instance-per-customer (non-multi-tenant) application could be "cloud" simply because it is hosted off-premise.

To me, "The Cloud" is clearly still mostly a marketing term. It is trendy and hip to say it, so people use the term as much as they can for anything that smells close. Ultimately, this reduces its value to the point where it has no inherent/direct value by itself.

What's important is describing the value proposition of your solution. That may include the service delivery model, for sure. But the most important concern is to meet a need. For most applications, once the customer realizes they don't have to run the IT infrastructure themselves, there just isn't a business reason to care deeply about whose logo is on the underlying hardware or what OS it's running...as long as it works...

Disagree? Tell me what you think.

Wednesday, April 20, 2011

And the Maker Shall Inherit the Earth

Nerds are cool. Or, at least, I like to believe that. And like nearly every other belief someone might have, I'm biased in my faith. In this case, because I like to think of myself as a nerd, too.

There's a relatively new term that is becoming popular for a certain kind of nerd: "Maker." A maker is someone with a DIY personality who likes building physical stuff. That could mean a bicycle-powered washing machine. But often it also implies some amount of electronics, which in turn frequently requires some software skills too. IMHO, it is the ultimate combination of three completely different, but complementary, skills: mechanical engineering, electrical engineering, and computer science. In this modern age, you can combine those three things and build just about anything you can imagine.

I was born to be a maker. It is in my blood. One of my grandfathers was a machinist who made parts for anything and everything, including the Space Shuttle. My other grandfather was a radioman for the Navy, and later for Western Union. He helped build, develop, and test things like "hot-lines" (yes, at one time that wasn't just a marketing term) and satellite up-links. My father was a mechanic/electrician in the elevator industry, and my mother was a computer programmer.

LEGO sets were obviously standard fare, but my earliest birthday presents as a toddler also included real tools like screwdriver sets. Apparently within hours I'd taken apart the door knob to the basement at Grandma's house. My mom started teaching me the BASIC programming language on an Apple III when I was 6 years old, and my father had taught me to solder by age 9. I was expected to keep my bicycle in good repair myself, and I had to fix the body and the brakes of the wrecked Ford Fiesta my father found at a garage sale before I was ever even allowed to drive it at 16.

These days, there's a perfect storm brewing. Integrated circuits and microchips have advanced to the point where you can build widgets that have really neat capabilities for only a few bucks. Meanwhile, open source software is making it easier and cheaper to make those chips do something useful. It is simply amazing what you can do with a PC and a soldering iron.

But even better stuff is on the way. MIT's "Fab Lab" concept has put together all the tools you need to do real, advanced electrical, mechanical, and software engineering into a package that fits into a single room and costs only about $50k. You can design a widget, machine the physical parts, build the circuit boards, and develop the software all in the same room. I participated in a tour of the Century College Fab Lab a couple weeks ago, and found myself geeking out with the students who had built their own CNC mill, long after the tour group had disappeared down the hall. Someday, I want my own Fab Lab in my garage.

But this all remains just a bunch of geeky/nerdy stuff if the "normal humans" out there don't understand what it means. Investors in early-stage companies are now becoming comfortable with what you can achieve rapidly and at low cost with open source software. But hardware device start-ups are still viewed as an incredibly expensive, high risk, and difficult business to launch. And that is becoming less and less true.

This is why I'm positively giddy to see that Wired Magazine's current issue is all about makers. Limor Fried, uber-maker and founder of Adafruit, has a kick-ass picture on the cover. Read the article: she already knows this is going to be the thing that brings the next round of technology innovation and start-ups out of the garage. This revolution is how small businesses can really compete.

I've been a customer of Adafruit for a while now, and we've used some of their products in prototyping and testing the product I'm currently building with FeedLogic. Wired is still a tech magazine, sure. But they're far more "mainstream" than most. Their attention means that the maker phenomenon is gaining significant traction in popular culture.

TelemetryWeb was built on this philosophy. We want to be part of this ecosystem. One of the remaining hurdles in bringing widgets to market is connecting them to the Internet. Sure, you can build a website that takes some data from a device and shows it on a graph. But things like scalability, availability, security, and real functionality are still really difficult for anything other than a software company to achieve. If we can enable small and mid-size companies to bring innovations to market and scale, we can eliminate another huge chunk of the cost of bringing Internet-connected devices to market.

And, most importantly of all, I get to play with some really cool stuff.

Thursday, March 31, 2011

The Internet of Missing-Some-Things: Open Standards

Can't believe my last post was over a month and a half ago. Where did all the time go? Of course, I know exactly where the last six weeks have gone, but I'm still amazed by how quickly they disappeared.

Insert all the usual cliches here about how life has been crazy lately, and so on. But it has been fantastic, really. Customers are building neat-o devices that are going to change the world, and we're helping them do it. In particular, I have to give FeedLogic a little plug. They're building a whole new range of products that will revolutionize the livestock industry, and I just held the first pre-production device in my hands. I must say it is really cool. They've also been invited to present at the first ever MinneDemo to be held outside of the Twin Cities, and they're one of only a few non-software companies that have ever been invited to present. This one will be in St. Cloud, MN, and I'm really excited for them. Go check it out, and cheer 'em on.

But back to the topic at hand: The Internet of Missing-Some-Things. We continue our coverage with a discussion about open standards.

This is a brand-new market, and companies like TelemetryWeb seem to be springing from the ground faster than the green shoots of grass in my lawn. But it is still very much the Wild West. There are no standards in this market that meet both of the necessary criteria: they must be both OPEN and USABLE.

It is probably too early to expect that there would be, but just like the lack of semantics that we covered last time, there's no way we'll ever reach the ultimate potential of this technology without them. Sure, Pachube has published EEML as an open spec. I'm happy that they feel it works for them, but it is a long way from being really useful, in my humble opinion. If publishing semantics can be summed up with a single tag called "private" that is set to "true" or "false", they must be dealing with a very different usage scenario than TelemetryWeb has encountered.

But on the flip side, you have the Open Geospatial Consortium. This group has solved every problem known to humanity with sensor platforms, going back to NASA missions that launched 20+ years ago. There's no doubt that lots of smart people have contributed to this standard. We're talking about rocket scientists here, and I'm not even exaggerating. But the end result is completely unusable...unless, of course, you happen to be a rocket scientist.

I'm not just talking about data here. I'm talking about devices, too. The ZigBee protocol has some promise, but the licensing is not open, so everyone who wants to make a device that implements the protocol stack has to "become a member". This feels like my first experience of going to a bar in a dry county. "Wait...you mean you CAN get a beer in this town?" Sure, but only "members" can buy it. Digi has some great stuff in their XBee devices, and they're a clear home-town favorite. But they're another layer on top of ZigBee, so you start with a non-open platform and then add more proprietary stuff on top of that.

There's a reason HTTP has become so widely adopted. It was both OPEN and USABLE. If devices are powerful and smart enough, they can simply use HTTP. But most of the "things" in the Internet of Things world are not that powerful...
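To see why, just count the bytes. Here's a sketch of roughly the smallest plausible HTTP POST for a single sensor reading (the host and payload format are hypothetical):

    body = b"t=21.5;unit=C;id=42"          # a 19-byte hypothetical reading
    request = (
        b"POST /ingest HTTP/1.1\r\n"
        b"Host: api.example.com\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: 19\r\n"
        b"\r\n" + body
    )
    print(len(request))   # 113 bytes of framing plus payload, before TCP/IP takes its cut

That's nothing to a PC, but to an 8-bit microcontroller with a few kilobytes of RAM, no TCP/IP stack, and a metered wireless plan, it's real weight.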

Saturday, February 12, 2011

$6B for Rural Broadband: It's About Time

If you've followed my Internet of Missing-Some-Things series, you know that one of the primary hurdles to the next phase of the M2M industry is a lack of "ubiquitous connectivity". Even if technology now allows for some sort of connectivity anywhere on earth, it is still too expensive to get a (good) Internet connection in a majority of the world. Even in the good ol' US-of-A.

Broadband networks are still out of reach in much of deep rural America. Cellular towers are spreading, but they're expensive. And satellite is not only massively expensive, in most forms it is also extremely cumbersome to use and limited in its capability to operate as a normal network connection.

But this will change, and the Obama Administration recently gave it a big shove in the right direction in the form of a six-billion-dollar proposal for rural broadband.

I don't care which side of the political fence you're on: It is pretty gosh-darn hard to argue that the FCC's Universal Service Fund hasn't played a huge role in America's technology leadership over the past century. The ability to build the first large-scale telephone networks would not have provided all the benefits it did if we had only included the urban population. Bringing phone lines to farmhouses in the middle of nowhere was simply too expensive even for a regulated monopoly like AT&T to handle back then.

But the world has moved on. I don't even HAVE a phone line at my house any more! Broadband is where everything is going.

Or has the world moved on? As someone who is currently trying to help a customer put smart devices in livestock barns, I've been re-introduced to the RJ-11. I can't remember the last time I worked on a system that used a regular modem. But while some of their customers can get a broadband connection, many still cannot.

As social/market pressure and new government regulation start forcing the agricultural industry to adopt more sophisticated mechanisms for tracking our food supply, getting data off the farm is going to be crucial. But the good ol' phone line isn't going to cut the mustard. It is time to stop putting government money into an antiquated technology.

It is about time that this happened. Be sure to cheer on your elected officials.

Tuesday, January 18, 2011

The Internet of Missing-Some-Things, Part 2: Semantics

I got a little side-tracked with other good topics over the past month, but the 2011 tech predictions podcast by Minnov8 reminded me that there's still much to talk about with regards to the "Internet of Missing-Some-Things." In the podcast, Steve Borsch puts the Internet of Things as his #1 item for 2011, but a discussion follows which asks the question, "why hasn't it already taken off?" I firmly believe one of the reasons is that there is a lot of hidden complexity in the problem area. Hence my "...Missing-Some-Things" series of posts.

Even before I started building TelemetryWeb, I knew there were a lot of very hard problems to solve when trying to combine the semantic web with the M2M market ("Machine-to-Machine"...the term for the market that existed before IoT became the buzz-word).

The Minnov8 gang really hit on a number of great points, but one of the best is that humans get frustrated with the rigid structure of taxonomies and ontologies. Give them loose, free-form tags, though, and they can figure out how to manage semantic information just fine.

Google, Facebook, and other companies have done pretty amazing things with semi-structured information, to be sure. Search engines are getting really good at finding relevant websites or potential connections between people. But these systems still rely on a human to look at the list of results and determine which of them are valid.

But machines require much more structure in order to produce reliable results. And reliability is what you expect when you are creating a closed-loop system between, say, industrial controls and monitoring sensors.

Despite the significant evolution of computers, humans still make computers look pretty dumb by comparison. We can combine and correlate lots of semi-structured information, all in real-time, all in our heads. But someone has to tell the machine that "C", "Celsius", "Centigrade", "Deg-C", etc. all mean the same thing before the computer can effectively correlate temperature data of the same type, or do the necessary conversions to "F", "Fahrenheit", "Kelvin", and so on. What? You mean this particular "C" is the unit of measure of amperage capacity for a lithium-polymer battery? Uh-oh!
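A minimal sketch of the kind of bookkeeping this requires is below. The alias table and conversions are purely illustrative, not TelemetryWeb's actual schema:

    # Map the many spellings a device might send to one canonical unit...
    ALIASES = {
        "C": "celsius", "Celsius": "celsius", "Centigrade": "celsius", "Deg-C": "celsius",
        "F": "fahrenheit", "Fahrenheit": "fahrenheit",
        "K": "kelvin", "Kelvin": "kelvin",
    }

    # ...and convert every canonical unit to a single base unit for comparison.
    TO_CELSIUS = {
        "celsius": lambda v: v,
        "fahrenheit": lambda v: (v - 32) * 5 / 9,
        "kelvin": lambda v: v - 273.15,
    }

    def normalize(value: float, unit_label: str) -> float:
        canonical = ALIASES[unit_label]  # a KeyError here means a human has to decide
        return TO_CELSIUS[canonical](value)

    print(normalize(72.0, "F"))      # 22.2...
    print(normalize(295.15, "K"))    # 22.0
    # And none of this saves you if this particular "C" was a battery's charge rate.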

The greatest potential in the Internet of Things is the ability to utilize the data in ways that were not previously anticipated. Once the data is freed from the physical constraints of the device, it can provide tremendous benefit (if utilized correctly/responsibly, of course).

But without these sorts of semantics in place, the data in the cloud is locked into purpose-specific software. That software is traditionally written with one goal in mind and makes use of a lot of assumptions regarding the devices and the data they send. The only way to correlate disparate data across systems like these is through difficult, expensive, point-to-point system integrations, and with n systems that's on the order of n-squared integration projects. That model simply doesn't scale effectively enough to provide the kind of value people are envisioning.

There are potential solutions for these sorts of problems. I happen to think I have a few in the works. But until someone figures out how to merge the semantic web within the context of machine-to-machine communication, we're going to be limited to single-purpose, non-integrated, hardware-plus-software solutions.