Google Glass is going to need a new kind of cloud computing, and Google won’t be able to satisfy all the demand. If Google Glass is as big a deal as I think it will be, humans will generate much more data than they do today, whether from sensor tracking to play location-based games, to do health tracking, or more. Think about Waze, a traffic app, on Google Glass. The new developers will need new cloud computing; more on that later. Plus, I see Glass as part of a contextual system, one that uses an Internet of Things but also brings in data from your own businesses, along with big data computation that will find new patterns to display on our glass.
So, Glass and other mobile sensor technologies are going to generate so much data needing management that companies like Scoble’s Rackspace will be in serious danger if they ignore the commercial opportunities associated with the coming data deluge.
Not only will there be so much data floating around that even Google can’t handle it — thereby opening up market opportunities for companies like Rackspace — but the need will arise for new data management technologies that go beyond the traditional database management architectures at the root of successful companies like Oracle.
Having dealt with many different types of data during my career, my perspective on this might be viewed as quaint by those who envision a world built on “all data all the time” access concepts. As much as I love data and working with data, though, I have some serious questions about the coming “new data order”:
- Will all the data we generate as we go about our daily lives really be worth saving?
- What kinds of data management skills will be needed in order to avoid further disparities between the “data haves” and “data have-nots”?
- How much is all this going to cost, and who’s going to pay? (Broadband data caps, anyone?)
I don’t know what the answers are to these questions. For me, matching data to the needs of the anticipated user has always been a normal part of most systems development life cycles. That’s a design principle based on simple economics and efficiency, i.e., “Spend money only on the data that people will need to use.” Is that changing as we move into a world of “big data”?
Yes, things have changed. Data are now generated as a byproduct of many more types of activities by human as well as nonhuman (for example, industrial) “actors.” Corralling and making sense of these new data volumes is an emerging business that can justify the examination and use of new data management paradigms.
Still, one thing I have learned from a career involved with data is that it usually makes sense to anticipate not only how people will use the data from a system but also what they need to know in order to use it.
As we move into a new age of always-on and always-connected citizens, will people know what to do with all the data they can now access? Some very smart people I know, for example, still have a hard time dealing with basic concepts like file naming and nested folders on a computer. Will the systems (and apps) that emerge in our new age of data be usable only by a small number of “elites” who are comfortable navigating masses of data?
I can imagine new roles emerging with names like “data wrangler,” “data steward,” “data guide,” “data agent,” and the like. These will be professionals who, for a price, will cut through the clutter of vast amounts of competing and contradictory data sources to locate, organize, and deliver services ranging from answers to specific questions to ongoing advice, handholding, or even decision-making.
Science fiction writers such as Neal Stephenson have written about such roles, and schools are offering courses devoted to “data science.” People who are able to do all of this by themselves will be at the top of the heap and, because of their skills and agility, will be well positioned to benefit entrepreneurially.
There will also be those who must rely on others with such skills. Some of these services will be performed as commercial transactions, some will be provided as subsidized public services (for example, by public libraries), and for another category of user — the “data poor” — no such services will be available.
Also, of course, a market will develop for illegal uses of big data; the people who are really good at this will make a lot of money!
Copyright (c) 2013 by Dennis D. McDonald, Ph.D. Dennis is a Washington DC area consultant specializing in digital strategy, collaborative project management, and new technology adoption. His clients have included the US Department of Veterans Affairs, the US Environmental Protection Agency, Jive Software, the National Library of Medicine, the National Academy of Engineering, Social Media Today, Oracle, and the World Bank Group. His experience includes the management of projects involving the conversion or migration of financial and transaction data associated with large and small systems. Contact Dennis via email at firstname.lastname@example.org or by phone at 703-402-7382.