Monday, December 27, 2010

Smart Grids - Data, Policy, and Privacy. Oh my!

A couple weeks ago, I attended a seminar on smart grids at the University of Minnesota Digital Technology Center. The presentation was by William L. Glahn, Director, Office of Energy Security for the State of Minnesota.

The presentation was informative, and I was very happy to learn that our legislators are aware of the very important security, privacy, and policy issues that will need to be addressed. Mr. Glahn laid out the issues across three categories of "policy challenges":
  • Security 
    • The system must be secure from outside attack and manipulation
    • Personal information must be protected
  • Consumers
    • Costs must be low and there must be perceived value for any costs incurred
    • Nobody wants "big brother"
    • People want to know who owns the data, and how the data will be used
  • Utilities
    • Costs and investments must be recovered
    • Utilities do not want another mandate
    • Who "owns" the customer is important...utilities do not want to give up control of their regulated monopoly or find that policy mandates effectively subsidize their competition
    • There is a likelihood that a new business model must be developed
Besides the fact that all those issues can be boiled down to "fear of change", the thing that strikes me the most is how the utilities are clearly trying to figure out how to cling to an outdated business model...or at the very least control the speed of evolution. Their behavior seems awfully similar to that of the RIAA, but they have the added advantage of selling a utility rather than entertainment. The country runs on their power, and they know it. In the vast majority of the US, they have a captive audience. The power companies want no more costs or mandates from the regulators. But they're all for the regulations that keep them as the sole source of power for their consumers, and they're going to leverage that position to own as much of the smart grid as they can.

Most of the experiments with the "smart grid" that I'm aware of have been driven by the power companies themselves. And thus far consumers have given it a relatively lukewarm reception. The reason is simple: if all the technology and the devices are being pushed by the power company, nobody will buy the argument that the smart grid is there for the consumer.

But what if people bought smart grid appliances the same way they do any other electronic appliance, from PCs to DVD players to (for the most part) cell phones? What if power optimization features were simply built into every electronic device you bought, and didn't rely on any infrastructure investments from the power company? HVAC systems, lighting, and other power-hungry systems could particularly benefit from managing their own power consumption and being "internet enabled". But there's more to it than that. If standards were built and consumers had choice and control over the smart grid products they use, rather than being told what to use by the power company, it could help solve a number of issues.

First, the consumer bears the cost of the smart grid. They'll buy devices when they feel it is in their best interest to do so. Like any technology, you'll have early adopters, a chasm, mainstream adoption, another chasm, and the late adopters. It will take time, but it would nearly eliminate the capital investment by the public or the power companies.

Second, the consumers would be free to select brands they trust. Privacy concerns with the Google thermostat? Security issues with Microsoft controls on your air conditioner? Get the one from GE or HP instead.

Finally, it could result in a more "peer-to-peer-style" model in the smart grid, making the system as a whole more loosely coupled. This has the benefit of being more resilient to abuse and failure...whether that's from Big Brother, terrorists, or plain ol' software bugs.

I think the power industry will drag its feet on adoption of the smart grid for as long as possible. They'll pay it lip service because green is trendy. But expecting the power companies to lead the green charge is at least a little like expecting the tobacco companies to lobby away their own ability to market in places where they might influence kids. Their primary motivation is simply that the social pressure of being green is something they can't ignore.

Naturally, the power companies will approach this process in the way that benefits them most. For example, in many places you can have them install a device to turn down/off your air conditioner during peak loads. It's good for power consumption, of course...but only when it benefits the power company. If the power company can reduce the height of the peak load, they can operate a cheaper infrastructure at a higher level of efficiency. Green doesn't really have all that much to do with it...unless by "green" you mean "money".
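To put rough numbers on that incentive, here's a toy sketch (all figures invented) of "load factor", the ratio of average demand to peak demand. Capacity has to be built for the worst hour, so shaving even a little off the peak improves utilization of everything the utility already owns:

```python
# Hypothetical hourly demand for a small service area, in megawatts.
hourly_load_mw = [60, 55, 50, 55, 70, 90, 100, 95, 80, 70, 65, 60]

peak = max(hourly_load_mw)                     # capacity must cover this hour
average = sum(hourly_load_mw) / len(hourly_load_mw)
print(f"load factor before: {average / peak:.2f}")   # 0.71

# Demand response: cap the top 10 MW of the peak, e.g. by cycling
# air conditioners during the worst hours.
shaved = [min(mw, peak - 10) for mw in hourly_load_mw]
print(f"load factor after:  {(sum(shaved) / len(shaved)) / max(shaved):.2f}")  # 0.77
```

The total energy delivered barely changes, but the same plant now runs closer to its capacity...which is exactly the "cheaper infrastructure at a higher level of efficiency" argument.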

In the end, smart device manufacturers will produce devices the consumers want. There isn't all that much the power companies can do to stop it. But expect the power companies to try like heck to find a way to build the Smart Grid around themselves...using their technology...on their terms. They know they don't move very quickly, and they know the average consumer perceives that everything power-related has to come from the utility company. So if they roll out the smart grid too slowly, consumers will find someone else who can supply them with the smart widget to control their air conditioner.

If that happens, the power company will be out of the loop. They'll lose the revenue for selling the device. They'll lose revenue from any sort of subscription-based services. They'll lose the massively important data they can collect off devices like this. And most importantly, they'll lose "ownership" of the customer. At best, they'll have to actually compete for any value-add services they want to sell. Assuming they're not well-equipped to do that (they aren't today, at least) they'll be reduced to "only" supplying power, which is subject to the economics of a regulated commodity and therefore lower potential profits. But if they pull off owning the smart grid, they'll have the safety of a regulated monopoly and the profits of all those value-added smart grid services.

I'm not entirely sure what Mr. Glahn's perspective on the issue is. I was hoping the presentation would be a little more forward-looking than it was, but it was mostly a recap of the industry's past and didn't have much in the way of policy recommendations for the future. Some of the things he said during the presentation led me to believe that the power companies are simply stuck trying to preserve their business model. But when asked about the potential of consumer choice, he briefly stated that Best Buy would obviously love to start selling consumer smart grid electronics, and then took the next question.

Either way, I'm sure there's a lot of "strategery" going on...

Wednesday, December 15, 2010

Executable Speech: When Words and Action Are the Same Thing

Yesterday, ReadWriteWeb asked the question a lot of other people are asking: What if Operation Anonymous Attacked City Infrastructures & Power Grids? The SCADA systems used by power grids have been known to be horribly insecure for over a decade, and yet many people agree that precious little has been done about it. Thankfully, Operation Anonymous only launched some irritating DDoS attacks at credit card networks and other targets.

ReadWriteWeb is correct when they point out that the only people who really got hurt were innocent bystanders. But they hit the nail on the head when they start discussing the new forms of "civil disobedience" that are coming to life on the Net.

Is a DDoS attack really like a hippie-style sit-in at the front door of a bank? If it isn't, what is? Is a DDoS attack really "violent"? Nothing at all was broken or destroyed, and laptops are allowed through airport security, so it is hard to call them weapons.

If a large group of people wanted to get their point across in a non-violent way, how would they do it on the Internet? Creating a home page for your cause on Facebook and having a million people "like" it might be the rough equivalent of a petition, but probably doesn't qualify as civil disobedience.

One of the things that is so fascinating about computer technology is that computers really don't understand the difference between words and action. Most programmers, myself included, try to organize our software into "code" and "data", but the reality is that those distinctions are purely for our own convenience. The computer simply doesn't know the difference, or care.

Software code itself is simultaneously speech and action. The computer just stores bits and bytes. Some of those bytes are content, and the rest of those bytes are code that describes what to do with the content. If you add a few more bytes, you can even turn the content into code, too, by simply adding more instructions that tell the computer how to interpret it.
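A tiny Python sketch makes the point: the very same string sits there as inert data until one extra instruction tells the interpreter to treat it as code.

```python
# To Python, this is just a string of data: we can measure it, slice it, store it.
content = "print('Hello, world')"
print(len(content))   # 21

# Add one more instruction telling the interpreter how to interpret those
# same bytes, and the data becomes code.
exec(content)         # prints: Hello, world
```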

Software development technologies have continued to narrow the gap between the thoughts in your head and the actions of a computerized system. It is faster and easier than ever to write a computer program that does something useful. The better programming languages get and the faster computers become, the closer we get to the Star Trek scenario, where Geordi La Forge speaks aloud, "Computer...make me a ham sandwich," and one actually appears...automatically toasting it just the way he wants and adding a little Dijon mustard based on his preferences.

But if the words in a book are considered speech, and those words can become digitized and stored inside a computer, at what point does speech stop being speech? And how can speech be free if computers automatically start performing actions when words are uttered? Our legal system already recognizes the confusion by offering both copyright and patent rights to computer software. Unfortunately just about everybody agrees that our intellectual property laws are completely inadequate.

I'm awfully torn on this issue, myself. The power and convenience of computers is likely to keep this trend going, and I don't see how this can avoid someday being discussed in the Supreme Court. There are a few issues we'll need to figure out. At the very least, as the guy writing these words, I will need some way to identify that they're supposed to be just words. And there will have to be a legal test established for consistency and standardization purposes. That won't solve all the problems, but at least we'll have an agreed-upon definition of "speech" to base the debate upon.

And if you really want to see where this is going, start contemplating how we'll deal with forms of "speech" that don't even involve writing or speaking.

Thursday, December 9, 2010

The DoD's PlayStation Condor: When Sony Was So Almost-Cool

The Department of Defense just unveiled their latest (not secret) supercomputer, which is based on 1,760 PlayStation 3 gaming consoles.

(Official Announcement)

A friend sent me a link where John Herman interviews Mark Barnell about his involvement and the project's background.

Cool as heck. But kind of sad and anticlimactic in a way.

They mention it is built on the "original" PS3, not the more recent "Slim," because Sony removed from the Slim several key hardware components required to install an alternative operating system. And the latest software update for ALL PS3s removes the ability completely, even on the older consoles. The newest games and features require the latest PS3 software, so Sony is forcing everyone to decide whether they want a hackable PS3 or GT5 and streaming Netflix. And apparently you can't change your mind and downgrade later.

Sony started with such a promising idea when they released the PS3, promoting openness and "hackability" for projects just like this. Universities were buying up lots of PS3s, and there were several Linux distributions built specifically for the PS3, such as Yellow Dog Linux. But it appears they've backtracked toward their proprietary ways.

I suppose game consoles are a much bigger industry than supercomputing these days, but it is too bad they have to be mutually exclusive. I've never seen an explanation from Sony as to why they decided to do this. But I can guess.

If you're familiar with Sony's history of proprietary technology formats (MiniDisc, Memory Stick, Betamax, and others), it seems at least somewhat plausible that the only reason they offered this ability on the PS3 was to do everything they could to boost Blu-ray sales. Look at other things they were doing at the time as well. The PS3 was not only the most powerful gaming console available, but for quite some time it was also the cheapest Blu-ray player you could buy. So if you wanted a gaming console, you got a free Blu-ray player. And if you wanted a Blu-ray player, you got a free gaming console! I'm a long-time Gran Turismo junkie, but I have to admit, that's the biggest reason why there's a PS3 in my living room.

They won that war (even a blind squirrel...right?), and so now they have no obligation to sell anything but the cheapest gaming platform possible. Remove the special hardware, and stop paying the cost of maintaining and testing the firmware/software code. I'm sure they were able to simultaneously increase their margin while reducing the price of the game console. But still...it's a shame.

Monday, November 29, 2010

Nerd Comics Might Save Humanity, Too

It looks like Randall Munroe is back in action writing XKCD comics again, which hopefully implies good news.

But seeing his latest post reminded me of this one, which I briefly referenced back in July.

One of the things that I like about XKCD is the extra stuff he puts in the "hover text". It only appears when you hover your mouse over the image of the comic, so you can only really experience it by going to the actual site (http://xkcd.com/728/).

These extra bits of hover text are usually very funny. They've become a bonus that you eagerly await after reading the comic itself. Kind of like the prize at the bottom of the Cracker Jack box. But even when they don't initially seem completely related to the comic itself, they can throw a whole different twist onto the message. In this case, the comment turns what was already an awesome comic into one that will probably be my all-time favorite single comic strip:

"Maybe we're all gonna die, but we're gonna die in *really cool ways*."


This was supposed to be part of the Sci Fi Writers Will Save Humanity post, but it really deserved a post of its own.

Why is this one comic strip so important to me? Well, for one, I tend to be a bit visionary...to a fault. I'm the girl standing behind the chair. I enjoy thinking about where the world is going, but this comic is also a reminder to me that I should live in the present, too. After many years of being told this by people I trust, I'm finally learning that I have to live in today's reality in order to have any chance of helping build the grand future I see.

But the hover-text is the real gem, here. In one sentence, it has solidified my approach to a future that is incredibly exciting, but simultaneously frightening and filled with potential for disaster.

The future will come. That much is certain and unavoidable. Like it or not, there's a possibility that it will be really, really, bad. But if you take an active role in shaping it, you'll (a) hopefully help avoid the really bad stuff, and (b) have one heck of a ride while it all unfolds.

So I chose to be excited about it all, while doing the best I can to ensure it comes out in a good way. If it doesn't turn out the way I want, I'll still enjoy being part of this moment in history.

Reminds me of something my father said long ago, the day I came home from school after the Space Shuttle Challenger exploded. "You know, everyone has to go sometime. But if you could pick your ending, what an awesome way to go!"

Monday, November 22, 2010

Where Does Scientific Data Go to Die?

John Timmer has a fantastic series of articles going over at Ars Technica. It has really drawn me back into the thoughts that initially triggered my decision to start TelemetryWeb.

I don't want to regurgitate all of the information in his articles, but the gist of it is that there are no good solutions for capturing, storing, and archiving scientific data. It isn't hard to imagine the massive amounts of data that have been lost on floppy diskettes that got stashed in some research professor's desk. And even if you had the disk, do you have the complete technology stack required to read it? You'd need the disk, the correct drive, the right kind of computer, and a program to read the bits.

Switching gears a bit (but not really all that much, as you'll see), initiatives like Data.gov are really cool, because they encourage scientists to make their data available online. Data.gov serves as a directory for scientific data sets. Want to download raw data about the migrations of Canadian Geese? You might find a link to it there.

But that's part of the problem, too: All you'll get is a link. It is up to the research project to find a place to put the data online, and maintain it for all eternity. How often do researchers get grants to keep their data online? A friend who works at the University of Minnesota School of Agriculture says that doesn't happen very often. In fact, one of her recent projects had a five-year plan. The first four years were the bulk of the research, and the fifth year was building a system to get the findings online. Funding was suddenly dropped after the fourth year. So a publicly-funded institution spent four years doing some really useful research, which could help farmers save millions of dollars and reduce the amount of chemicals they use to combat disease. But all that research is sitting in a drawer somewhere. Unused.

But let's say that you found something on Data.gov that is actually available. Great! What then? Do you understand the format of the data? Do you need a proprietary software package to read it? Is there any information about how the data was collected? What instruments or techniques were used? Is the data applicable to the work you are trying to do? What are the error factors and quality metrics? Alas, Data.gov doesn't address those issues.

TelemetryWeb has thus far been focused on commercial applications simply because lots of smart people have told me that there's no viable business model in the scientific research community. They may be correct, but I'd love to have an opportunity to prove them wrong. But in either case, I'd love to see TelemetryWeb used to support scientific research. I've always been a bit of a science nerd, and started out as a physics major. It is simply a personal interest of mine, and it would make me feel good.

But the thing is, I really don't see the problems of the scientific community as being significantly different from the commercial problems that TelemetryWeb is trying to solve, anyway. Long-term data warehousing, good meta-data catalogs, owner control over data sharing or publication, and the ability to collaborate across geographical and organizational boundaries are all challenges that I've personally faced in my work developing commercial applications.

There are certainly a lot of scientific applications where TelemetryWeb won't always be a good fit, at least as it is designed currently. But I've already spoken with several people about scientific research projects, and I'd be happy to chat with more people on the subject.

Wednesday, November 17, 2010

Sci Fi Writers Will Save Humanity

I wrote a few weeks ago about the latest IBM Internet of Things video, called System of Systems, on YouTube. If you didn't read that post, the thing that surprised me the most was how the tone changed from their first video, clearly trying to address the fear that many people have when they first start comprehending the Internet of Things.

The worst thing that the Internet of Things industry can do is to ignore or trivialize these fears. Even if you believe that these fears don't have the power to stop the continuing evolution of technology (they don't), it doesn't mean that these fears should be allowed to simply exist without discussion, or that we can't learn something from them.

Cue the sci-fi novelists! Depending on your definition, science fiction has existed for hundreds of years. The Wikipedia article also describes the genre as being difficult to define. But one item that seems a likely common ground for identifying a work of science fiction is that the author takes a "what if" approach to technology. What if humans could travel faster than light (Star Trek)? What if we lived in a world of continual surveillance (Nineteen Eighty-Four)? What if we could actually travel under the sea for long distances and long periods of time (Twenty Thousand Leagues Under the Sea)?

These may be works of fiction, but they have an uncanny knack for picking out how technology impacts individuals and society. In fact, that's kind of the whole point of the genre, and what makes it interesting.

Of course, that doesn't mean they're going to be correct in their predictions! In fact, they probably aren't. For one thing, very few people seem to be very good at telling the future. But for another thing, we need to remember that these books are written to tell a story. They're entertainment, first. There's a double-edged sword in effect here. The best stories are about how the hero saves the world from a wide-spread evil. And that evil is usually either technology itself, or empowered by technology.

So we have to be careful about understanding where the realistic and interesting ideas stop and the story-telling begins. But that doesn't mean they're useless. The best ones frequently identify non-obvious interactions between societal traits and how the technology amplifies them. The way a good writer can trigger the imagination is very powerful in both shaping how we think and in giving other people new ideas.

But the most important aspect of the science fiction novel is how we can all relate to it. The popularity of the genre over the past fifty years has given the average person (in the affluent countries, at least) a keen awareness of technology that simply never existed before in history. We still have a long way to go in educating the public about the impacts of technology on privacy and other rights, but without science fiction, it is likely we wouldn't be able to hold a meaningful conversation on the topic at all.

Of course, a futurist is likely to have more meaningful and potentially accurate information about how technology will shape our lives. But unfortunately, they just don't usually tell a very good story!

Saturday, November 6, 2010

The Internet of Missing-Some-Things, Part 1

The Internet of Things buzz seems to be all over the place. It is certainly on the cusp of becoming a big market, and there's no doubt in my mind it will be huge. But before it can really rocket to the #1 technology revolution it promises to be, someone needs to solve a couple of key issues. Until then, it will still be the "Internet of Missing-Some-Things."

The IoT revolution was technically possible decades ago. When you can buy a device that can send data to the Internet, the technical possibilities expand greatly. But it wasn't economically or practically possible. It was too expensive to buy lots of devices like that. Even if you could afford the devices, bandwidth was expensive and difficult to find, and running them required a pretty hefty power source. So devices that connected to the Internet were typically (1) relatively expensive, like PCs and web servers, and (2) limited by proximity to high bandwidth, like...well...PCs and web servers installed on a wired network, and (3) limited by high power demands, like...you guessed it.

The three big drivers of IoT as it stands today are the huge downward cost pressures on "smart" device hardware, huge downward cost pressures on the cost of connectivity, and a relatively large increase in availability of network connectivity. Now that you can buy a cheap device that is smart enough to talk on the Internet, and the Internet is available in lots more places, IoT can become a reality.

But the real IoT revolution is still out of arm's reach. There are still two missing pieces of the puzzle: true ubiquitous connectivity, and ubiquitous electrical power for small devices.

As a server-side software developer who lives in a metropolitan area, it seems like there is power and connectivity everywhere. But talk to anyone who is building an embedded device, and you'll see that power and network connectivity are still relatively difficult to come by. Rural areas are an obvious example here. Farmers are still relatively limited in the monitoring devices they can use, simply because cell networks and power lines often don't cover their fields.

But even in places where you'd think it'd be easy to plop down a device, it isn't. It is still a relatively difficult conversation to convince a building owner to run power and Ethernet to the bowels of the elevator shaft, or into every corner of the warehouse.

Until power and connectivity are ubiquitous, the IoT revolution will always be somewhat limited. Fortunately, people are already working on those problems.

Friday, October 22, 2010

IBM on Systems of Systems - Combating Paranoia?

IBM's first Internet of Things video was pretty impressive. I thought it was a really good overview of the core concepts behind IoT. It had a creative presentation and I took it for what it is: a 5-minute video describing the sort of world we're creating. It doesn't answer any of the interesting questions that come to mind, but it clearly doesn't pretend to, either. Heck, my only real criticism of the video was that it didn't even ASK any of the really important ones. But it was still a neat presentation.

A few weeks ago, IBM introduced their next video in the series: Systems of Systems. Another pretty good video. It didn't seem to have the same creative vibe as the first one, but started asking (and preemptively answering) real questions. Note the particular attention paid to two messages: First, that IoT is simply the next evolution of what has been going on throughout history. And second, that computers can never take over the world or go out of control and cause total havoc and pandemonium, so IoT is nothing to worry about.

Before we even get to the discussion of those issues (which will be the topic of future posts), the main question to ask is why they took such a change of direction on this one. The answer is simple: read the comments posted for the first video. There's a pretty overwhelming percentage of posts by people who find IBM's IoT message to be just a bit naive and more than a bit scary. And most of the people who aren't saying that are asking why we would want such a thing to begin with.

The reality is that this is how technology (r)evolutions go. There is a vision for the future in the Internet of Things, and that vision is perceived as pretty scary by people who either don't understand it or get fixated on the potential negative sides. But that doesn't mean those fears are irrational or unfounded. In fact, it's critically important that these people speak out and make the technologists pay attention. If they didn't force a discussion about the tough questions, we'd surely be worse off.

Wednesday, July 7, 2010

ReadWriteWeb's 5 Key Trends of 2010

ReadWriteWeb has posted their "Half-Year Report for The Web". The five key trends?
  • Augmented Reality
  • Internet of Things
  • Mobile
  • Real-Time Web
  • Structured Data
If you haven't followed these trends, each one is pretty cool by itself. But the brain-bursting part of it all is what you get when you combine all five of them. Seriously, think about it. Your head will explode.

I'm not sure whether we should be terrified of the possibilities, or excited, or both. Makes me think of a recent XKCD comic that has become something of a personal anthem lately: http://www.xkcd.com/728/. (Be sure to hover your mouse over the image, too, and read the alt-text.)

Maybe it's just my biased perspective (partially due, no doubt, to spending months on the problem as part of building TelemetryWeb.com), but the issue of structured data is simultaneously the least glamorous and the biggest enabler of unlimited possibilities. Point solutions are possible in all the other categories. You don't necessarily need a particularly complex set of standards to make them work. For example, much of the social media genre accomplishes amazing things in the real-time web using super-simple data and information architectures. Twitter in particular has done some unbelievable things with nothing but random strings that are only 140 characters long. But in order to truly make use of the data across individual applications in complex "plug-n-play" scenarios, we'll need something more sophisticated than tags.
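As a sketch of the difference (every field name here is invented for illustration), compare a tag-style reading with a structured one:

```python
# A tag-style reading: humans can guess what it means, machines mostly can't.
tagged = "temp 72 livingroom #smarthome"

# A structured reading: units, time, location, and quality are explicit,
# so an unrelated application can consume it without guessing.
structured = {
    "quantity": "temperature",
    "value": 22.2,
    "unit": "degC",
    "timestamp": "2010-07-07T14:30:00Z",
    "location": {"site": "home", "room": "living_room"},
    "sensor": {"model": "hypothetical-t100", "accuracy_degC": 0.5},
}

# Plug-n-play becomes possible: convert units without asking what "72" meant.
fahrenheit = structured["value"] * 9 / 5 + 32
print(round(fahrenheit, 1))   # 72.0
```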

Discuss amongst yourselves. (or start a conversation below...)

Friday, June 25, 2010

EU & China on the IoT Bandwagon. Where's the US?

Maybe I'm biased, seeing as how I seek out this kind of stuff. But it seems to me as though chatter regarding the Internet of Things is building everywhere.

Everywhere, except in the US.

Governments have even been jumping on the bandwagon. The Chinese Premier himself, Wen Jiabao, has been calling for rapid development of IoT technology since last August, and apparently mentions it regularly in speeches. The latest news is that the European Parliament has endorsed development of the Internet of Things and called for increased funding for pilot projects.

So, why hasn't the US paid more attention? I'd have to guess it has something to do with fighting two wars, trying to figure out how to stop oil spewing out of the Gulf, and struggling to recover from one of the all-time worst economies.

But that's not the most important roadblock we face in development of IoT technology. The biggest challenge is fear.

The US Congress is still in terrorist-fighting mode, discussing bills that would allow the government to "shut down the Internet" in an emergency situation. And the US people, still trying to figure out how to deal with poor stewardship of personal information by companies like Google and Facebook, are highly concerned about the privacy issues that are involved when we start putting network-aware sensors on all the stuff we use every day. While the rest of the world is striving to innovate, we're trying to stop the train.

That's not to say that these fears are unfounded, irrational, or unreasonable. Quite the contrary! We desperately need to be concerned about these things. They are critically important to democratic society. But trying to put the brakes on technology development is not going to solve the problem.

The only thing we know for sure is that the technology will continue to advance. Within the next 10-20 years, the rest of the world will be powering their economies off the Internet of Things. The US simply needs to decide whether it wants to lead technological innovation in this space, or let someone else do it.

But what's the right way to go about it? We need to take these issues head-on. We need to design networks that are resilient to attack without having to pull the plug on the whole thing. We need to get this privacy dialogue rolling and start making some proactive decisions, rather than complaining after the fact when companies don't protect our data in the way we think they should.

This is a legislative issue, and it should be on the agenda. But we can't wait for Congress to get there, and we can't expect them to suddenly overcome their horrible track record of keeping pace with the social impacts of technology. We need to continue to set expectations for companies to use data in a responsible manner. If we don't, we have no one but ourselves to blame.

Wednesday, June 9, 2010

An Isle of Soup Cans and The Internet of Things

Picture an elementary school boy, maybe about six years old, standing with his father in the soup aisle of a grocery store, circa sometime in the very early '80s.

Back then, supermarkets weren't quite the massive demonstrations of industrial efficiency they are today. But even then the concept of a large store of mass-produced provisions was already going on 50 years old, so it was still impressive to look at. If you weren't a six-year-old boy, of course. For someone who grew up in the United States, it was...well...just a boring old grocery store.

"You see all those soup cans?" says Dad.

"Uh, yeah," says the boy, wondering what Dad was going to lecture about this time.

"All the ingredients in all those soup cans came from somewhere far away, and required huge amounts of effort to put in those cans."

The boy stood there, trying to figure out what he meant. It was just a can of soup. It was so cheap, it cost practically nothing, even to a blue-collar family.

"The noodles in that chicken noodle soup were once wheat that grew in a field somewhere. Someone had to plant the wheat, help it grow, harvest it, grind it up into flour, and make noodles out of it. Someone else had to grow corn, which was used to feed chickens, which were hatched and fed and looked after for a long time before they could be taken to the soup factory. Someone else at the factory cooked the chickens and combined them with the noodles and all the other ingredients to make the soup.

"Then they put the soup in cans, which are made of metal that someone else dug out of the ground somewhere and melted down into cans. And then the cans of soup were loaded on a truck, which was also made of metal, and was powered by fuel that was made from oil that someone pumped up from a hole deep in the ground on the other side of the earth. The driver of the truck drove hundreds of miles to deliver the cans of soup here, so yet another person could put those cans on these shelves. All so you and I can buy one and eat the soup for lunch tomorrow.

"And then, after we've eaten the soup, we'll throw the can away. Someone will take our garbage to the landfill, where it will be thrown in a big pile alongside all the other soup cans that little boys ate their lunch from. And everything else that everyone throws away. Can you imagine the mountain of metal you could make with all the soup cans that everyone has ever eaten from?"

Even as a six-year-old boy, putting it in those terms helped me begin to appreciate the scale of humankind's impact on the Earth. The funny thing about it all is that my father has never been someone you'd consider a big environmentalist. He's no Anglo-American neo-Buddhist, or a follower of Zen philosophy. Heck, he doesn't even have a college education. He just has an uncanny sense for how everything is interconnected, and the sheer enormity of it all. And it is one of those little life lessons that has stuck in my head from that moment on.

What does this have to do with software and the Internet of Things? Simply everything.

Imagine being able to provide more than a general description of how everything is interconnected. Contemplate being able to track our impact on the Earth with great precision, and see, in real time, how changes in our activities reflect in our ability to "tread lightly."

Software like Google PowerMeter and devices like Current Cost's ENVI are just the beginning. Some people estimate that there will be 100 billion intelligent sensor devices deployed across the Internet in the next 5-10 years. Those devices will be measuring the efficiency of factory equipment, the fuel mileage of trucks and tractors, crop yields, fertilizer usage, animal feed volumes, and countless other bits and pieces of data.

If we can figure out how to correlate all that data, we can begin to apply real, hard metrics to complex system interactions. Maybe we can appreciate even more what it really took to get that can of soup to the grocery store shelf, and what it will take to do something with the waste after a six-year-old boy eats lunch.
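To make that idea of correlating sensor data a little more concrete, here's a minimal sketch of what "hard metrics" for the soup can might look like. Every name and number below is invented for illustration — the sensor feeds, the emission factors, the production counts are all assumptions, not real data sources:

```python
# Hypothetical sensor totals aggregated over one day at a soup cannery.
# These names and values are illustrative only, not real data feeds.
readings = {
    "factory_kwh": 1200.0,       # smart-meter reading at the factory
    "truck_fuel_liters": 150.0,  # fleet telematics total for delivery runs
    "cans_produced": 24000,      # line counter on the canning machine
}

# Rough conversion factors (kg CO2 per unit) -- assumed for this sketch.
FACTORS = {
    "factory_kwh": 0.4,
    "truck_fuel_liters": 2.7,
}

def footprint_per_can(r):
    """Correlate the day's sensor totals into one per-can impact metric."""
    total_kg = sum(r[key] * factor for key, factor in FACTORS.items())
    return total_kg / r["cans_produced"]

per_can = footprint_per_can(readings)
print(f"{per_can:.4f} kg CO2 per can")
```

The point isn't the specific numbers — it's that once the factory meter, the truck telematics, and the line counter are all network-aware, a metric like this can be computed continuously instead of estimated once a year.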