
Surviving the Cyber/Location Nexus

Originally published in CSO Online

When the science fiction author William Gibson popularized the term “cyberspace” in the early 1980s, he was describing an other-worldly domain, a parallel universe of networked computing in which people’s real-world lives had become inextricably embedded. In these intertwined universes, Gibson envisioned nefarious actors harnessing the cyber-domain to manipulate, harm, and even destroy the economic, social, and cultural value created or stored within cyberspace.

Surely, in some important respects, we have seen Gibson’s vision come to pass. Our society’s dependence on cyber-infrastructure, and the extent to which everyday people have become beings embedded in (and reliant on) networked computing and communication, has raised the issue of cyber-security to a Presidential priority. In the first several months of the Obama Administration, we have seen the Leader of the Free World make a top priority of protecting what Gibson, some 30 years ago, presciently painted as dominant threads in the fabric of our civilization’s future.

Gibson’s story operates at what I like to call the cyber/location nexus. He artfully wove several location-enabled story lines together, which not only emphasized the extent to which his original vision of the cyber-domain has come to pass, but which also shined a light on how real world location can be the anchor for cyber content and cyber experiences. In the reverse, Gibson also showed how events in the real world, through location-enabled technologies, can be tracked and monitored in cyber-space. This melding of the two worlds, that of the cyber domain and the real world (where geospatial location matters), not only made for great science fiction and great intrigue, but it also marked an historic inflection point at which the cyber domain and the real, geospatial world were understood to converge in popular culture.

The Dual Revolutions Fueling the Nexus

While Gibson is a recent arrival to the cyber/location nexus, the Geospatial Revolution actually began at the same moment as the Cyber Revolution. The Cold War (and its corollaries in the Space Race, Keyhole satellite reconnaissance, and nuclear command and control) was the impetus for massive investment by the U.S. Department of Defense (DoD) and Intelligence Community in large-scale waves of technology, two of which can be thought of as the Cyber Revolution and the Geospatial Revolution. While each of these dual revolutions is a complex story of complementary and intersecting families of technologies, it is instructive to examine the investments made by the U.S. defense and intelligence community in ARPANet and in the Global Positioning System.

In the wake of Sputnik, the DoD founded the Advanced Research Projects Agency (ARPA) in 1958 to ensure American military technological dominance. One of ARPA’s early projects was ARPANet, an experiment in computer networking and communications that held the promise of resilient nuclear command and control. Google’s Chief Internet Evangelist, Vint Cerf, managed ARPA’s early internetworking research and, with Robert Kahn, designed the TCP/IP protocol suite and its 32-bit IPv4 address space, which remain at the core of today’s cyber infrastructure. In important ways, Cerf was Gibson’s muse. ARPANet planted a seed which has grown into the global cyber infrastructure that permeates modern life. During that period of growth, the imaginations of some of the world’s brightest minds were captured by the template that Cerf and his team created and by the future patterns of life that Gibson saw Cerf’s template enabling. This has led to four decades of innovation that have fundamentally reshaped modern life, and their fruits have become so essential to everyday people and the basic institutions of civilization that the defense of this cyber infrastructure has become a top priority of the President of the United States. The Cyber Revolution played out in a very public manner, with all of Western society watching with great anticipation.

Every revolution is different. It is helpful to view the Geospatial Revolution through the lens of the U.S. Global Positioning System, the space-based “position and timing” solution designed and deployed by the U.S. defense and intelligence community. GPS was conceived to underpin a wide array of American Cold War capabilities, including precision geopositioning for Keyhole spy satellite imagery, precision munitions, satellite positioning, missile guidance, and military navigation. GPS served as a “secret sauce” for so many Cold War capabilities because location matters acutely in matters of national security.

What is important to grasp, but commonly misunderstood, is that a GPS receiver does not tell you where you are. The receiver derives your location from signals streamed by a constellation of 24 satellites, which in turn derive their own positions and time from atomic clocks and a whole lot of math grounded in Einstein’s theories of relativity. So it is a broad complex of “position and timing,” communications, and computing technologies that lets you figure out your location, let alone broadcast your position to the world. Just imagine the complex of technology, and all the graduate-level mathematics, required to understand the position and orientation of a spy satellite and the physics of its sensor so that an intelligence analyst, when she identifies a bad guy’s vehicle in a satellite image, can derive exactly where on the face of the earth that bad guy is located.
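At its core, the receiver’s job is a least-squares problem: given each satellite’s broadcast position and a measured range to it, find the one point consistent with all of them. The sketch below illustrates that geometry with a toy Gauss-Newton solver (the numbers and function are hypothetical; a real receiver also solves for its own clock bias as a fourth unknown and applies the relativistic corrections described above):

```python
import numpy as np

def trilaterate(sat_positions, ranges, iterations=10):
    """Recover a receiver position from satellite positions and measured
    ranges via iterative (Gauss-Newton) least squares.

    Toy model: assumes perfect range measurements and ignores the
    receiver clock bias that a real GPS fix must also estimate."""
    x = np.zeros(3)  # initial guess
    for _ in range(iterations):
        diffs = x - sat_positions               # vectors satellite -> guess
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges
        jacobian = diffs / dists[:, None]       # d(range)/d(position): unit vectors
        # Linearize: dists + J @ dx ~= ranges; solve for the correction dx
        dx, *_ = np.linalg.lstsq(jacobian, ranges - dists, rcond=None)
        x = x + dx
    return x
```

With four well-spread satellites, a handful of iterations recovers the receiver’s position to machine precision; the operational problem differs mainly in its error sources, not its geometry.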

But what was once impossibly complex national security technology rapidly, though stealthily, became the underpinning of everyday life. After the USSR shot down a commercial airliner that strayed into prohibited airspace, President Reagan issued a directive making GPS freely available for civilian use. Today, as everyone knows, location permeates modern life through GPS-enabled phones, cars, personal navigation devices, cameras, and sensors of all kinds.

The Geospatial Revolution, compared to the Cyber Revolution, was relatively quiet. GPS was designed for use in the U.S. national security community, and only slowly made its way out. Certainly geospatial technologies of all kinds had been around, supporting a relatively small community of geospatial specialists. One can Google the professions of photogrammetry, geodesy, remote sensing, and GIS to see that digital mapping and geospatial technologies have a long and rich history. But it is only relatively recently that everyday life has been transformed by location- or geo-enablement. And this is due to the extensive, and increasing, embeddedness of GPS in everything. The GPS constellation has become a public utility, and GPS chips have become commodities that designers are increasingly apt to add to everything by default.

Two Revolutions Walk Into a Bar…

If we reduce the Cyber Revolution to the ubiquitous IP-enablement that has seized the modern world, and if we reduce the Geospatial Revolution to the eponymous PBS documentary’s quip “the location of anything is quickly becoming everything,” we are then left to ponder the impact of these revolutions combining forces. Because, indeed, these dual trends of IP- and Geo-enablement are colliding. It seems that everything will eventually be IP-enabled, bringing the inexorable logic of the Cerf/Gibson paradigm to a near fusion of the cyber and physical world. But, it is the fact that all of these cyber- or IP-enabled things will also be location- or geo-enabled that will complete this fusion. These two revolutions are unwittingly combining forces because cyber-connectivity and location-awareness independently have functional value to us in national defense, business, life and love. This obvious value has necessitated both public and private investment of epic proportions. Yet, as these dual trends converge to a nexus, something new is happening.

In Spook Country, Gibson introduced the notion of locative art (also known as locative media)—think of large modern art installations that are conceived and crafted in virtual space and projected onto the real world terrain, oriented and anchored by GPS, but only viewable through a special set of GPS-enabled goggles. Imagine something so culturally rich, which could be hacked and either defaced or destroyed, and the value that would be lost to a particular geography (e.g., a property, community, etc.), if only to the people jacked into cyber-space through those location-enabled goggles.

While not as literary, the vulnerabilities that exist at the cyber/location nexus are much more disturbing as they reach out to us in the real world—the world in which our corporeal bodies live, and love, and die. One can think of all of these IP- and Geo-enabled devices as sensors, each capable of making some sort of observation over some part of the Earth, and perhaps attached to some sort of control point or process. This might be as trivial as an IP webcam which has a GPS-derived location, and a gyro divining the pitch, yaw, roll and angle of view that together characterize a very specific, and geospatially precise, chunk of the Earth. Or networked thermostats installed across a corporate campus, marshalling HVAC resources to various locations based on the occupants’ designated settings. It could be network-accessible imagery satellites, Predator UAVs, physical security access controls, stream gauges, traffic monitors, ocean buoys, automobiles, Supervisory Control and Data Acquisition (SCADA) systems, asset management systems, or mobile computing devices. Yes, it could be your Blackberry or iPhone. As IP- and Geo-enablement proliferate, this list simply gets longer. As they are IP-enabled, each one becomes vulnerable to hacking. But when they are also location- or Geo-enabled, they become susceptible to “space-time hacking”.
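To make concrete how a position fix plus orientation angles pin down “a very specific chunk of the Earth,” here is a toy flat-earth calculation (a hypothetical function of my own devising; real georeferencing works on the WGS84 ellipsoid and a full camera model) that finds where a camera’s optical axis meets the ground:

```python
import math

def ground_intersection(cam_x, cam_y, cam_height, yaw_deg, pitch_deg):
    """Where a camera's optical axis meets flat ground (z = 0).

    yaw_deg: compass heading of the view (0 = north, 90 = east).
    pitch_deg: angle of the view below the horizon; must be positive.
    Toy flat-earth model for illustration only."""
    if pitch_deg <= 0:
        raise ValueError("camera must point below the horizon")
    # Horizontal distance from the camera to where the axis hits the ground
    reach = cam_height / math.tan(math.radians(pitch_deg))
    yaw = math.radians(yaw_deg)
    return (cam_x + reach * math.sin(yaw),   # east offset
            cam_y + reach * math.cos(yaw))   # north offset
```

A webcam ten meters up, tilted 45 degrees down and facing east, is watching a spot ten meters east of its mast; invert the arithmetic and a position-plus-orientation report tells an attacker exactly what that sensor can see.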

The Dawn of “Space-Time Hacking”

Many everyday users of the World Wide Web have stumbled across a website capable of taking their IP address and telling them the city they live in. So, to some extent, it has seeped into the popular consciousness that one could locate an individual based on their unique address. But, until now, this mapping of the cyber world to the physical world has been of little consequence unless you are worried about stalkers or law enforcement. The worst consequence a hacker could mete out was the loss of your data, the unresurrectable death of your computer, the theft of your identity, or the denial of a cyber-space service you depend on. While admittedly these are consequences one would vigilantly seek to avoid, the threats at the cyber/location nexus make them look mild.

Let’s look at the world once it settles firmly at the cyber/location nexus—from the perspective of a nefarious actor. The cyber domain will have evolved into a medium through which bad actors can reach every IP-enabled resource (which at this point would be virtually everything that matters) within any particular geography, with precision geopositioning, and choose to either exploit, manipulate or destroy each individual or class of resource. Such foes will in effect have the ability to harness the functional power of this convergence, the convergence of IP- and Geo-enablement, against anyone at a time and place of their choosing.

Just imagine being able to exfiltrate, undermine, alter or end all networked computing within any arbitrarily small or large geography, at any moment in time, for any period of time, particularly during moments at which vital interests are at stake. In a national security context, one might term this “denial of mission.” But with such a rich context to hack, imaginations could reel in the definition of a nomenclature for all of the opprobrious acts bad actors could then perpetrate against commercial and non-commercial activities in the private sphere.
In essence, the cyber/location nexus serves as a comprehensive, geospatially-enabled “reverse-lookup” targeting infrastructure that allows an asymmetric adversary to quickly marshal all IP endpoints (which are all cyber vulnerabilities), fixed and increasingly mobile, within any arbitrary geography at any moment in time. Along with these endpoints, the adversary will be able to quickly gather sufficient information about these assets to categorize and prioritize them within an Order of Battle specially designed for his particular purposes. It will not just be a brute-force denial of mission. It will allow for scalpel-like sophistication in the attacks.

At the nexus, one can easily imagine a hacker denying mobile communications to response personnel within a geography, then shutting down a mission- or business-critical facility by toying with its HVAC and setting off its alarm system, then disabling the traffic signals at a few key chokepoint intersections to complicate the evacuation. All the while, he monitors the manufactured event through his target’s own surveillance cameras, streaming spoofed footage to the target’s security forces, in order to maximize the casualties from a remotely controlled chemical attack at that exact location. It doesn’t take much effort to imagine something much worse.

Widening the Cyber Aperture

Clearly, in this day and age, cyber security should be a major priority of an American President. The recent White House 60-day cyber security review, while a good start, has not envisioned the world as it will be when the cyber/location nexus comes to full fruition. The Gibson/Cerf paradigm has now evolved to encompass the cyber/location nexus, and as their complementary worldviews have done in the past, they will inspire a new wave of innovation that public policy can only hope to keep up with. It will also inspire a new wave of villainy. In this context, it is important that the President adopt a strategy in tune with the ways in which the cyber domain will serve as a pathway by which our adversaries will be empowered to fight a war, or just cause a whole world of hurt, at a time and place of their choosing.

The Watchmen and the Scientists: Reconciling the Tribes of Space-based Reconnaissance and Scientific Earth Observation

Originally published in Science Progress.


A comprehensive approach to developing, deploying, and utilizing our eyes in the sky can revolutionize national security and environmental sustainability.

Since the beginning of the American Republic, science and national security have had a long history of interplay. While the nature of their interaction has evolved over the centuries, the premise that strength in science is key to strong national security has long been broadly accepted. Traditional formulations of this premise have often focused on the role scientific advances and science-based technological progress played in supporting warriors with superior weapons and weapons platforms. However, science has also long played a role in expanding military capabilities in what experts call “intelligence, surveillance, and reconnaissance” technologies, or ISR.

Particularly in the post-WWII era, the United States witnessed an explosion of complex, science-based aerospace platforms (both airborne and space-based) and remote sensing technologies. This milieu, however, quickly diverged into two different communities: the Watchmen, who monitored the Earth to ensure security, and the Scientists, who monitored the Earth to develop a better fundamental understanding of Earth processes such as the atmosphere, weather, oceans, land dynamics, and later, meta-level phenomena like climate change. It is important to understand how these two communities emerged and diverged, and to recognize that, if they ally, they hold the potential to address some of the world’s most vexing problems. The opportunity is ripe to adapt American institutions to better exploit the synergies between the Watchmen and the Scientists.

It’s time for the Obama administration to leverage the independent reviews recently conducted for both NASA and the National Reconnaissance Office, or NRO, to develop a comprehensive Earth observation strategy. A comprehensive approach to developing, deploying, and utilizing our eyes in the sky can ensure more effective and efficient use of precious intellectual and financial resources as we struggle to address traditional national security challenges, the array of transnational threats that plague us, as well as the complex, looming menace posed by global climate change. But this will require significant attention paid to national security reform, the governance of Earth science, a fundamental rethinking of the programming and budgeting process, and—not least of all—leadership.

The Emergence of Science-Based Intelligence, Surveillance, and Reconnaissance

Scientists first tackled the ISR challenges of warfare during World War I. The United States military sponsored the development of infrared-sensitive photographic plates for the purposes of improved aerial photography capable of differentiating between camouflage and the vegetation that it was designed to imitate. Another WWI application was based on British/Canadian research, which led to what ultimately became known as SONAR, enabling the Allies to combat the rising threat of submarine warfare. The interwar period saw a maturation of such technologies, and a continuation of military-sponsored R&D into ISR capabilities. Based on such successes, President Franklin Roosevelt created the Office of Scientific Research and Development in June of 1941 in order to coordinate scientific research for military purposes during WWII. Vannevar Bush, OSRD’s first director, reported directly to the president and was given effectively unlimited resources to help the nation meet a seemingly unconquerable threat. Many of the notable OSRD innovations were weapons systems, like the famed Manhattan Project. ISR technology did, however, flow from OSRD sponsorship, including innovations like RADAR.

Based on the successful contribution of American science to both advanced weapons and ISR techniques and technologies, Bush, in his famous treatise Science—The Endless Frontier, argued to President Roosevelt for a post-war social compact with the science community that called for the long-term federal underwriting of science, with no strings attached. In return, science would yield unspecified and, under Bush’s mental model of basic science, unspecifiable benefits to society. Standing on this foundation, American basic science enjoyed a post-war expansion in support largely undirected by the state.

At the same time, even greater resources went into mission-oriented science and technology, particularly when the mission had benefits to the national security enterprise. The term “space race,” which is widely thought of as a response to the Soviet Sputnik launch, actually began with the American acquisition of Wernher von Braun and his German compatriots from the crumbling Nazi regime during the last days of the Third Reich, under the auspices of Project Paperclip. From their perch at Redstone Arsenal near Huntsville, Alabama, which continues to this day as a center of excellence for the American space enterprise, von Braun’s team served as the center of a vast space R&D network that simultaneously supported the American drive toward intercontinental ballistic missiles, the quest for space-based military/intelligence reconnaissance, the desire to better understand the science of the Earth’s processes, and ultimately President John F. Kennedy’s goal to land a man on the Moon.

The resources driving these mission-focused programs rapidly eclipsed the resources allocated to Vannevar Bush’s compact with the basic science community. The aerospace-industrial complex managed to institutionalize this resource bias, as everyone reached for space. The coterminous rise of both space-based ISR and space-based scientific Earth observation demonstrate this dynamic quite clearly.

In the wake of Sputnik’s October 1957 launch, the American Cold War rush to space resulted in many scientific successes, as a blinding array of satellites (Explorer 1, Vanguard 1C, Explorer 3, Explorer 4, Vanguard 2D, Vanguard II, Explorer 6, Vanguard IIIc, Explorer 7) demonstrated the ability to observe trapped radiation of various energies, galactic cosmic rays, geomagnetism, radio propagation in the upper atmosphere, solar x-ray radiation and its effects on the Earth’s atmosphere, the near-Earth micrometeoroid environment, and—with TIROS I (Television and InfraRed Observation Satellite)—the Earth’s cloud cover and weather patterns from space using television cameras.

During this same 1957-1960 period, some 12 military/intelligence photo reconnaissance satellites failed to demonstrate operational success, a streak that ended with Discoverer 14 / CORONA 9009 / KH-1 in August 1960. Though enormous resources were spent, the military/intelligence community could only claim the success of a data relay satellite, an electronic intelligence, or ELINT, sensor (the Galactic Radiation Background Experiment), and two navigation satellites (TRANSIT).

Launches for both scientific and military/intelligence satellites continued throughout the 1960s, leading to the curious episode, in 1972, of the Earth Resource Technology Satellite, later renamed Landsat.

The Divergence of ISR and Earth Observation

Scientific Earth remote sensing and space-based military intelligence ISR have been rivalrous twins from the start. In this light, the Landsat story is instructive. The U.S. Geological Survey, a bastion of science within the Department of the Interior, decided it needed space-based spectral land imaging that could provide a ground truth characterization of the human-scale processes driving change in the Earth’s landscape—e.g., land use and land cover.

In the face of bureaucratic foot dragging, the USGS convinced Secretary of the Interior Stewart Udall to simply announce their intent to design and launch the first multi-spectral land imaging satellite, though they had absolutely no relevant experience or capability to do so. The ploy succeeded in teasing out the reluctant support of NASA and the Department of Defense for Landsat. And, while at the beginning the National Security Council, the CIA and DOD did not believe that civilians should be capable of observing change on the Earth’s surface, they reversed course when Landsat demonstrated that all of their maps were out of date, and they promptly became the system’s heaviest users.

The absence of vocal support from the defense and intelligence community, along with equivocation from the White House Bureau of the Budget and NASA on the value of Landsat Earth observation, stalled the program. This silence from the defense and intelligence community was a problem, as it led to the serious mischaracterization of Landsat demand while the Carter and Reagan Administrations attempted to privatize Landsat—a distraction that placed the program in suspended animation for over a decade. It was only after DOD’s acknowledgment of the role that Landsat played in Desert Storm that the program received legislative support, a Defense Landsat Program Office, and a comfortable home at the intersection of civilian remote sensing and the national security ISR community. Disagreements over funding and frequent changes in NASA’s overall remote sensing plans saw DOD withdraw from Landsat in 1994. And while the American national security enterprise remained perhaps the heaviest user of Landsat data, the DOD and intelligence agencies have never again served as an advocate for civilian multi-spectral land imaging.

Sources of the Schism

So why did these rivalrous twins diverge in the United States? And why have other countries recognized and embraced the natural synergy between these two domains?

Some might argue that it comes down to “phenomenology,” a term that both Watchmen and Scientists use to describe the nature of the particular sensor that they launch into space, the way it works, and what it allows them to observe. More specifically, some argue that the schism is due to the U.S. defense and intelligence communities’ narrow, though not exclusive, focus on high-resolution, electro-optical imagery—the spy satellite imagery that average citizens identify with.

The Scientists have been the first to field a wide variety of complex phenomenologies in space, based on decades of scientific research. In this context, many have observed the limited success of the American national security community in developing exploitation workflows that embrace complex sensor phenomenologies, which require scientific knowledge. Instead, the Watchmen fall back on what they know best and what they can easily train their workforce to use—e.g., high-resolution, electro-optical imagery.

While doctrine and rhetoric in the national security community have evolved substantially to embrace such sensor capabilities, training and organizational standard operating procedures, by and large, have not. Others would suggest that there exists a culture within the U.S. national security community which views with suspicion any technology that is “not invented here.” Surely, the relatively larger bank account available to the DOD and intelligence agencies enabled them to simply go their own way, with no mandate from the White House or Congress to maximize resources under a “dual-use” regime. The sine qua non of the defense and intelligence ISR community, their high security clearances, also played a major role in this divergence. While a small number of individuals held adequate security clearances (which provided access and standing) to span both the worlds of Watchmen and Scientists, the policy, budgeting, program management, scientific, and technical communities were profoundly divided because of security concerns, real or imagined.

Though the exact source of the schism is unclear, it is quite clear that the Watchmen would benefit greatly if they could manage to exploit the power of the Scientists’ tools.

In the meantime, outside the United States, increasing involvement by other countries and private industry in the remote sensing domain evolved along a strong “dual use” path—e.g., the French SPOT 1 satellite in 1986 and the Canadian Radarsat in 1995. It is notable that the dual-use concept is baked into the program name the European Union chose for its Earth observation program back in 1998: Global Monitoring for Environment and Security. A notable exception in the United States that did embrace a dual-use strategy was the CIA’s MEDEA program, a joint CIA-private sector environmental task force involving academics and environmental scientists who were allowed to study environmental issues, including global warming, with U.S. spy satellite imagery.

During this same period, the NASA remote sensing portfolio expanded greatly, with Terra (1999) and the “A-Train” constellation of satellites—including Aqua (2002), Aura (2004), and CALIPSO (2006)—as well as EO-1 (2000) on a separate orbit, while defense remote sensing acquisition fell into disarray. Shortly thereafter, a coalition launched the Group on Earth Observations in response to calls for action by the G8 and the 2002 World Summit on Sustainable Development. In this context, many have recognized that collaboration across the international remote sensing community is essential if decision makers around the globe are going to be equipped to deal with an increasingly complex world stressed by natural disasters and crises in health, energy, climate, water, weather, ecosystems, agriculture, and biodiversity.

At the dawn of the 21st century, the U.S. defense/intelligence ISR community made it their goal to achieve active, purposeful “persistent surveillance,” if not globally, then at least over their major geographies of interest. The idea of persistent surveillance is exactly what it sounds like—the goal of “staring” at a particular geography and watching everything that happens—rather than simply collecting an image every so often. The disarray the U.S. defense/intelligence ISR community finds itself in, in terms of technology acquisition, has made this goal seem almost unachievable. Unfortunately, even in its ideal state, this thrust was still very limited in its phenomenologies.

Meanwhile, the global proliferation of both commercial and civilian scientific remote sensing capabilities has put the global community on an aggressive path toward what one might call “passive” persistent surveillance that spans a rich range of phenomenologies. This notion of passive persistent surveillance is admittedly different from the idea of staring at a particular geography. But with the sheer number of sensors globally scheduled for launch over the next decade, it certainly would be difficult to escape their gaze.

This passive persistent surveillance, if successfully coordinated by a vision such as that animating the EU’s Global Monitoring for Environment and Security program, will have enormous positive benefits not only for the “societal benefits areas” highlighted by the Group on Earth Observations, but also for the United States, Commonwealth, and Coalition security posture. That is, if everyone decides to share and participate in such a framework. Unfortunately, the Watchmen have a very bad history of sharing. Indeed, they have bred much distrust and enmity from their historic international partners who, once dependent on the United States for such support, are now coming of age, launching a vast array of remote sensing resources in dual use frameworks.

Reconciling the Tribes, Solving the World’s Problems

So, here we sit with NRO and NASA at huge respective crossroads. What if the deep pockets of the U.S. defense/intelligence ISR community were applied to the acceleration of this emerging model of dual-use, passive, persistent surveillance? What if the Watchmen recognized that we have reached an historic inflection point where their immense resources might better be spent underwriting the world’s quest to achieve a global Earth observing system of systems that could serve both security and environmental monitoring goals? After all, even our national security leaders have come to the conclusion that understanding and dealing with global climate change is perhaps one of our largest strategic national security challenges. Moreover, perhaps the Watchmen could see this as an opportunity to overcome the ongoing “resolution versus coverage” paradox inherent in reconnaissance systems (e.g., the higher the image resolution, the smaller the geographic coverage of the image), which can only be resolved with cross-cueing between both broad area and high-resolution sensors. In this context, doesn’t it only make sense for the Watchmen to make nice with the Scientists, and possibly even learn something that in turn could help national, indeed global security?
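The “resolution versus coverage” paradox is, at bottom, simple arithmetic: a detector has a fixed number of pixels, so finer ground resolution shrinks the imaged swath in direct proportion. A back-of-the-envelope sketch, with purely illustrative numbers rather than any specific system:

```python
def swath_km(detector_pixels, gsd_m):
    """Ground swath of a line-scanning sensor: pixel count times ground
    sample distance (GSD). With a fixed detector, finer resolution
    (smaller GSD) shrinks coverage proportionally."""
    return detector_pixels * gsd_m / 1000.0

# The same hypothetical 12,000-pixel detector:
narrow = swath_km(12000, 0.5)   # 6 km swath at spy-satellite-class 0.5 m GSD
wide = swath_km(12000, 30.0)    # 360 km swath at Landsat-class 30 m GSD
```

Sixty-fold finer resolution buys a sixty-fold narrower strip of ground, which is why cross-cueing between broad-area and high-resolution sensors is the only way out of the paradox.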

At the same time, Scientists could invest more vigorously in learning about national security missions, and how they might better support them. Leaders at NASA, the National Oceanic and Atmospheric Administration, and other science organizations could organize proactively to approach the Watchmen with a unified front—a rapprochement, which effectively integrated the Scientists into the increasingly complex national security mission, without surrendering their scientific mission.

Long gone are the days when the Watchmen could lean on their vast financial reserves, in near complete isolation, setting their own ISR priorities and investment strategies. Long gone are the days Scientists can launch Earth observation platforms and sensors outside of a coordinated national and international strategy. The next wave of our national investment for monitoring the planet—for scientific Earth observation, military/intelligence ISR, and increasingly for commercial and civil applications—will require a stark departure from the decision-making processes of the past.


Obama Should Finish What Nixon Failed to Do

Originally published in Directions Magazine

Little does the Obama Administration know it, but one of its most innovative policy, programming and budgeting initiatives is at risk of failure due to Richard Nixon’s hasty decision to resign the Presidency of the United States nearly four decades ago.

On August 11, the White House issued a memorandum to the entire Federal government entitled “Developing Effective Place-Based Policies for the FY 2011 Budget.” While the issuance of this memorandum escaped the notice of nearly everyone in official Washington during the intense August debates over healthcare and Afghanistan, the memo has spread like wildfire across the geospatial (technology, data, operations and policy) community, which understands its profound and positive implications. Who’s in the geospatial community? They are the people who brought you MapQuest, Google Earth/Google Maps, Bing Maps, the Global Positioning System, your Garmin and your TomTom, satellite imagery, Zillow, those city/county websites your spouse uses to monitor your neighbor’s declining home values, and so much more. This community has lived the Geospatial Revolution. It understands that “the location of anything is becoming everything,” and that this holds especially true for a Federal government responsible for people, programs and assets strewn across our nation and the world.

“Where” programs are implemented and the places they are intended to impact often represent the single most important dimension of a public program. Unfortunately, the current, sad state of the institutions and policies governing the Federal government’s geospatial data and capabilities puts Obama’s “place-based” planning and programming initiative at risk of failure. The Federal government lacks a functioning governance or operational structure to coordinate, deploy or utilize spatial data in a manner that could make “place-based” decision making effective. This fact has been documented in numerous studies and reports by Congress, the Government Accountability Office, the National Research Council and the National Academy of Public Administration. And Richard Nixon is to blame.

This White House memo offers a sweeping vision of how to use “place” or location to focus public investment to achieve the most effective conceivable outcomes:

“Place-based policies leverage investments by focusing resources in targeted places and drawing on the compounding effect of well-coordinated action. Effective place-based policies can influence how rural and metropolitan areas develop, how well they function as places to live, work, operate a business, preserve heritage, and more. Such policies can also streamline otherwise redundant and disconnected programs.”

But how is this supposed to be achieved when the management and operation of our “National Spatial Data Infrastructure” (as established in Executive Order 12906 in 1994, and OMB Circular A-16 in 2000) is splintered into a thousand pieces and coordinated only by the Federal Geographic Data Committee (FGDC), a creature of OMB, which successive administrations have virtually ignored? So unnoticed is the FGDC in the grand scheme of official Washington that it was never even asked to comment on the new White House “place-based” programming and budgeting initiative. As the initiative was formulated by the National Economic Council, Domestic Policy Council, Office of Urban Affairs and Office of Management and Budget, the FGDC was never consulted as to whether the Federal government had the requisite geospatial technology, data and programmatic capability in place to succeed. Obama’s OMB has just signed on to the geospatially enabled management of the Federal government – something new in the history of western civilization. It is an undertaking that has the potential to fundamentally remake how the public sector does business.

Once Peter Orszag (OMB), Melody Barnes (DPC), Adolfo Carrion (OUA), and Larry Summers (NEC) realize that their new place-based initiative is at risk of failure unless something is done soon about Federal geospatial operations and management, they would be well served to reflect upon the Nixon Administration. In 1973, OMB empaneled the Federal Task Force on Mapping, Charting, Geodesy and Surveying. The task force was convened on the heels of a defense and intelligence reorganization that consolidated place-based activities into the newly established Defense Mapping Agency (now known as the National Geospatial-Intelligence Agency, NGA). The OMB Task Force concluded with a recommendation that the U.S. establish a central civilian mapping, charting and geodesy agency, which it proposed be called the Federal Survey Administration (FSA). The decision memorandum to implement this reorganization was on President Nixon’s desk the day he resigned, August 8, 1974. Nixon signed his resignation first, and left the FSA memo in his in-box.

Geospatial technology and data have become so much more central to western industrial democracies than they were in 1974, rendering the idea of the FSA almost quaint. But, with Nixon’s failure to sign the Task Force memo before resigning, to this day we don’t even have an FSA. The American Federal geospatial domain is in disarray. Whether they know it or not, this disarray will directly impact the Obama Administration’s ability to realize its bold vision of place-based planning and programming. President Obama should finish what President Nixon didn’t. It is time for a new national geospatial governance structure in the Federal government.

Where is – A basic tenet of good government is knowing where our ‘stuff’ is

Originally published in Federal Computer Weekly

We have seen the administration’s first flagship web applications, but where is the portal that shows us where everything is? Clearly, the Obama administration understands the power of place, because it has already built interactive maps into those first two applications. Its commitment to place has even produced a powerful new approach to budget planning and programming, outlined in an Aug. 11 memo titled “Developing Effective Place-Based Policies for the FY 2011 Budget.”

So why do citizens, civil servants, our uniformed service members and political decision-makers, including the president of the United States, need to go to so many mapping portals to see where things are, only to come up short?

From the lowliest citizen to the president of the United States, we should all be empowered to fire up a single governmentwide “where is” portal. At that portal, you could draw a bounding box on a map, declare a slice of time and instantaneously discover everything our government knows about that place. And we should be able to marshal that data instantaneously to support our needs.
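The “bounding box plus time slice” query at the heart of that portal is easy to sketch. The record type and field names below are illustrative inventions, not any real federal data schema:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Record:
    """A single government data record anchored in space and time."""
    name: str
    lat: float
    lon: float
    timestamp: datetime


def query(records, min_lat, min_lon, max_lat, max_lon, start, end):
    """Return every record inside the bounding box and time slice."""
    return [
        r for r in records
        if min_lat <= r.lat <= max_lat
        and min_lon <= r.lon <= max_lon
        and start <= r.timestamp <= end
    ]


# Example: two records; only one falls inside the box and the time window.
records = [
    Record("levee inspection", 29.95, -90.07, datetime(2009, 8, 1)),
    Record("bridge retrofit", 40.71, -74.01, datetime(2009, 8, 15)),
]
hits = query(records, 29.0, -91.0, 31.0, -89.0,
             datetime(2009, 7, 1), datetime(2009, 9, 1))
```

The hard part, of course, is not the query but getting every agency’s data anchored in space and time in the first place.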

When bad things happen, they happen in places and at times you cannot anticipate. The ability to achieve situational awareness instantaneously is essential. Knowing what risks you face and what resources you have at your disposal at a specific location brings immediate cost savings: less time spent, fewer errors made and opportunity costs not incurred. Even outside a crisis environment, we are discovering that the location of anything is quickly becoming everything.

Knowing the location of our “stuff” is a basic ingredient of good government. The Obama administration came to Washington with a clarion call for transparency, accountability and transformation of how government does business. Such a portal could help achieve those goals.

It would quickly and clearly demonstrate to everyone which government organizations can properly locate their people, assets, mission challenges and the services they provide — and which cannot. The portal would immediately strike a major blow to the out-of-sight, out-of-mind habits of Washington. Our successes and failures would be placed on the map and made accountable to open and democratic processes, which would inevitably empower people to demand better, more responsive government and encourage public/private partnerships that could lead to a better tomorrow. It would be the ultimate Sunlight Foundation.

Vivek Kundra and Aneesh Chopra, our new federal chief information officer and chief technology officer, respectively, are barnstorming the country advocating the rapid transformation to a government that uses open standards and cloud computing. I couldn’t agree more. Such a portal could take their impulse and transform it into concrete guidance to agencies, telling them to publish all their data to the cloud via Open Geospatial Consortium standards, with security as appropriate.
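As one concrete flavor of what publishing via Open Geospatial Consortium standards means, the snippet below builds a standard OGC Web Feature Service (WFS) GetFeature request clipped to a bounding box. The endpoint and layer name are hypothetical; only the query parameters follow the WFS 1.1.0 convention:

```python
from urllib.parse import urlencode

# Hypothetical agency endpoint; the parameters themselves are standard WFS 1.1.0.
BASE_URL = "https://data.example.gov/wfs"


def getfeature_url(type_name: str, bbox: tuple, srs: str = "EPSG:4326") -> str:
    """Build an OGC WFS GetFeature request for one layer, clipped to a
    (min, min, max, max) bounding box in the given coordinate system."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,  # namespaced layer name, e.g. "agency:facilities"
        "srsName": srs,
        "bbox": ",".join(str(v) for v in bbox),
    }
    return BASE_URL + "?" + urlencode(params)


url = getfeature_url("agency:facilities", (29.0, -91.0, 31.0, -89.0))
```

Because every WFS endpoint answers the same request grammar, any client, from a citizen’s browser mashup to the Situation Room, can pull features from any agency that publishes this way.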

That approach would not be limited to traditional geospatial data. The guidance would finally communicate to agency leaders and their chief financial officers that a basic tenet of good government and effective management is knowing where your stuff is and understanding the places on which your mission must be focused. President Barack Obama understands that place matters in a fundamental way. It’s time we built that portal.

U.S. national security in the Digital Age: White House officials should rethink technology challenges of national security

Originally published in Federal Computer Weekly

It is heartening to many of us in the defense and intelligence communities to see such strong White House commitment to having a cyber czar who will report directly to the president and serve on the National Security Council. That is truly a step forward for our country as digital technology engulfs every aspect of our public and private lives.

However, cybersecurity is only one dimension of the strategic technology challenges facing the U.S. national security enterprise. It is therefore high time that the White House rethought the role the NSC should play in leading the charge to address those technology challenges. Indeed, it is high time that a deputy adviser for technology spearheaded national security technology issues. Not doing so would be a huge missed opportunity and a source of long-term strategic competitive disadvantage for the U.S. national security posture.

The deputy adviser for technology should have a steady hand and a vigilant eye on the convergence of five critical, complementary technologies at the core of U.S. national security: information and communications technologies (ICT), cybersecurity, sensors, platforms (i.e., space-based, airborne, mobile) and geospatial technologies.

Why those five? Many people have made the case for cybersecurity technology very well — including President Barack Obama, who has highlighted the cyber vulnerabilities in the ubiquitous ICT infrastructure on which the U.S. national security enterprise fundamentally depends. Beyond cybersecurity, however, the implementation of our ICT infrastructure is so disjointed that we suffer from chronic information-sharing and collaboration problems.

Atop that ICT infrastructure, there are enormous volumes of data generated by and derived from largely stand-alone sensor technologies of all sorts. They have been deployed across the U.S. national security enterprise under the names signals intelligence, measurement and signature intelligence, and geospatial intelligence.

The sensors make their observations from a dizzying array of sophisticated technology platforms operating in space and air, on the seas and ground, on our soldiers, and in our networks. All that sensor data — and information from open-source intelligence, human intelligence, all-source analyses and even everyday operations — should be commonly anchored both geospatially and temporally.

All those technologies ultimately converge in the president’s — that is, the commander-in-chief’s — ability to instantaneously marshal all the information related to a particular national security issue in a particular geographic region and related to a particular moment in time. But to the surprise of many movie-going Americans, that cannot be done with the president’s Situation Room map. And if he can’t do it, then you can be sure that it is no mean feat for any analyst or operator in the U.S. national security enterprise to do his or her job in a time-dominant fashion.

Without a deputy national security adviser for technology and without something as critical as the president’s map to serve as the place where the rubber meets the road, those technologies will not converge to achieve the transparency, accountability and transformation needed across our national security enterprise.