Assuming that Moore's Law still holds, by 2020 an off-the-shelf computer will provide more than 30 times the raw computing power of today's price equivalent. By 2030 the increase in computing power per system will be more than a thousand-fold, with a similar increase in storage capacity and network bandwidth. These highly powerful systems (by today's standards, anyway) will be pervasively deployed across the more developed regions of the globe using embedded software on mobile platforms. They will be able to aggregate their capacity, build ad-hoc networks on demand and provide the result as a commodity to various types of consumers, ranging from individual users to large organisations. Such aggregation and acquisition of computing resources will be available to all infosecurity practitioners, for both defensive and offensive purposes.
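The growth figures above follow from reading Moore's Law as a doubling of computing power roughly every two years. A minimal sketch of that arithmetic, with the 2010 baseline year and the two-year doubling period chosen here purely for illustration:

```python
# Projected growth factor in raw computing power under Moore's Law,
# assuming (hypothetically) a doubling every two years from a 2010 baseline.
def moore_multiplier(year, baseline=2010, doubling_period=2.0):
    """Return the projected growth factor relative to the baseline year."""
    return 2 ** ((year - baseline) / doubling_period)

print(round(moore_multiplier(2020)))  # 32  -> "more than 30 times" by 2020
print(round(moore_multiplier(2030)))  # 1024 -> "more than a thousand-fold" by 2030
```

With these assumptions the article's figures fall out directly: five doublings by 2020 give a factor of 32, ten doublings by 2030 give a factor of 1,024.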
Systematic discovery and exploitation of vulnerabilities will be commonly acknowledged and accepted as part of the cost of conducting business and of individual social interaction.
Distributed computing and distributed data storage will be a standard capability of even the simplest application, rendering the distinction between data-at-rest and data-in-transit as irrelevant as the definition of a network perimeter. Building a one-to-one correspondence between data assets and computing resources will be impossible; building many-to-one mappings may be neither useful nor feasible in the time necessary to assess risk and deploy avoidance or mitigation mechanisms. On-demand, real-time transfer of risk will thrive.
In an environment of relative abundance of computing power, bandwidth and ubiquitous data, information security operations will be focused mostly on supporting the continuous and intelligent acquisition and maintenance of a Quality of Computing Services capability: an organisation's ability to draw, in real time and in an economically efficient manner, aggregated computing power and data from a multitude of seemingly opaque providers.
While the distinction between data assets at rest and in transit may no longer be relevant, and attempts to enforce access control policy on them may be completely meaningless, the protection of the intellectual property used to create Data Derivatives of second order (information differentials) and third order (information about information differentials) from suitable and readily available data and computing resources will be of the greatest importance.
In that context, any definition of Information Security Risk as a function of a set of threats, vulnerabilities and assets identifiable and quantifiable at a given point in time will be either obsolete or severely limited. Today's risk management tools, based on (at best) simple linear regression models with additive risk calculations over that set, will seem as rudimentary as using an abacus for financial forecasting.
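For contrast, the kind of additive risk calculation the passage criticises can be sketched in a few lines. This is an illustrative toy, not any real tool's method; the asset names, weights and scores are assumptions invented for the example:

```python
# Minimal sketch of a traditional additive risk score:
# total risk = sum over assets of threat likelihood * vulnerability * asset value.
# All names and numbers below are illustrative assumptions.
def additive_risk(assets):
    return sum(a["threat"] * a["vulnerability"] * a["value"] for a in assets)

portfolio = [
    {"name": "customer_db", "threat": 0.6, "vulnerability": 0.3, "value": 100},
    {"name": "web_server",  "threat": 0.8, "vulnerability": 0.5, "value": 40},
]
print(additive_risk(portfolio))  # 18.0 + 16.0 = 34.0
```

The model's weakness is exactly what the article points to: it presumes a fixed, enumerable set of assets with independently quantifiable threats and vulnerabilities at a single point in time.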
By 2030 an organisation's or individual's information security risk posture will be better described as a probable trajectory in a constantly evolving, global, multi-dimensional risk landscape, and the tools used to measure and manage risk will be built on the foundations of modern physics, evolutionary biology, economic modelling and the social sciences rather than on technology-dependent abstractions. Effective global situational awareness capabilities will certainly be a major function of risk management systems.