All in Data

While a platform such as GitHub is admittedly not designed to be as user-friendly as, say, Facebook, the sharing of technical expertise among mid-level IT staff and data administrators in different government agencies has probably been at least as important to open data progress as the Administration's top-down support.

Recommendations for Collaborative Management of Government Data Standardization Projects

Standardizing how the U.S. government collects, manages, and publishes budget and expenditure data, as required by the DATA Act currently before the U.S. Congress, is an example of a long-term and complex project. It will require careful planning, management, and sufficient resources to succeed.

How Important Is ‘Total Cost of Standardization’ to the DATA Act?

Last week I attended a meeting in DC sponsored by the Data Transparency Coalition, PwC, and Intel. Representatives of the Federal agencies likely to manage implementation of the evolving DATA Act presented their thoughts on meeting the Act's requirements for standardizing and reporting on Federal financial data.

When Does a Public Data Good Become a Private Data Resource?

It may be that the greatest challenge facing private entrepreneurs who develop new and valuable information products and services based at least partially on public data will be public resistance to paying for information, no matter how new, innovative, or unique those products or services are.

Perspectives on the NSA and PRISM: What Dyson Missed

In "NSA: The Decision Problem," George Dyson lays out a credible historical case for why what Snowden revealed about the NSA and PRISM was inevitable. While it's a pleasure to read such a thoughtful discussion of these issues, I think he misses one major theme that helps explain our current predicament over what to do about Snowden.

Learning from the World Bank’s “Big Data” Exploration Weekend

If you’re serious about data analysis there’s probably no substitute for getting “down and dirty” with real, live, messy data. Sometimes you just have to sift through the numbers with your “bare hands” if you really want to extract meaning from descriptive statistics, predictive models, and fancy visualizations.
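As a small illustration of what that "bare hands" sifting can look like in practice, here is a minimal sketch in Python. The file name and column names are hypothetical, invented for illustration rather than drawn from the World Bank event; the point is only the habit of checking what the raw data actually contains before trusting any statistics built on it.

    # A minimal sketch of hands-on exploration of a messy CSV.
    # "projects.csv", "amount", and "country" are hypothetical names.
    import pandas as pd

    # Load the raw data; treat common placeholder strings as missing.
    df = pd.read_csv("projects.csv", na_values=["", "N/A", "n/a", "--"])

    # Coerce a numeric column that arrives as text (e.g. "1,200" or "unknown").
    df["amount"] = pd.to_numeric(
        df["amount"].astype(str).str.replace(",", ""), errors="coerce"
    )

    # How much of each column is actually usable?
    print(df.isna().mean().sort_values(ascending=False))

    # Plain descriptive statistics: the starting point for any model or chart.
    print(df["amount"].describe())
    print(df.groupby("country")["amount"].median().sort_values())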