Why invest in purchasing, housing and maintaining a set of computers when you can outsource all that worry to someone else? This has been the oft-used marketing slogan of cloud computing. And it works. It is much easier to offload the data hassle and focus your resources (especially if they’re limited) on your core operations.
During their astounding growth in the middle of the last decade, technology companies such as Amazon and Google built huge infrastructures to power their ever-growing needs. It is estimated that Amazon, for example, has more than 2m servers around the world, while Google is estimated to have 10 exabytes of data storage space. That’s 10 million terabytes, or 10 billion gigabytes.
Over time, they learned how to manage all the software and hardware assets in these infrastructures without significantly increasing costs. They also realised that the same infrastructures could be leased out to external companies to use as and when they wish. This significantly lowers the capital expenditure those client companies need to build their own server set-ups, and also allows them to scale up and down as their needs dictate.
This was the birth of cloud computing, so called because computer specialists commonly use a cloud cartoon in schematic diagrams to refer to parts in the system that are opaque. But while we know that it works – the global cloud computing market is forecast to reach $127 billion in the next two years – we are less sure exactly how it works.

What’s in store for our data? r2hox/flickr, CC BY-SA
Data in the wind
For example, we know that cloud providers typically store your data in different locations for reliability, but we don’t know where exactly or how many copies of it they keep. In fact, identifying the exact location of all your data in the cloud is a near-impossible feat. Only a few cloud providers allow users to choose which countries their data is stored in, although more providers are slowly catering to such needs. We do know that the highest density of cloud servers is in the United States and Ireland.
This means your data is subject to various changing national and international laws. Data held in the EU, for example, is subject to the EU Data Protection Directive, to which companies transferring data in and out of the EU must conform. Until recently, the EU-US “safe harbour” agreement made this straightforward, but Edward Snowden’s revelations regarding US surveillance led to it being invalidated by the European Court of Justice.
The matter of who actually owns your data is also quite complicated. The short answer is that you own the data you create, but the cloud service provider has ultimate control over it.
This is reflected in many providers’ terms of service, which state that they can hold on to the data to comply with legal regulations. They can also pass on the data to government organisations if requested (Dropbox, for example). On the upside, providers are responsible for securing the data they hold on your behalf against misuse, especially if it relates to credit card information – although there have been a number of large-scale data breaches.

Where do the wires end? Dave Herholz/flickr, CC BY-SA
Whose data is it anyway?
Moreover, many service providers, such as Facebook and Dropbox, say that there may be a delay before your data is deleted upon your request, but they do not specify how long this delay will be. A lot of this is likely to change, though. The European Commission is in the process of updating its regulations to provide more transparent control of personal data in the cloud.
Expanding the cloud model beyond massive data centres and integrating it into the fabric of residential and business buildings presents great opportunities; some refer to this as fog computing. Battery-operated micro-clouds could also act as important hubs to post and relay information within and between communities that have been cut off by natural disasters (floods and earthquakes, for example) or security crises (terrorist attacks and riots). But how big the cloud will become and how we will all navigate our way around it remains a murky topic.
Yehia Elkhatib does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Yehia Elkhatib, Lecturer in Distributed Computing, Lancaster University
This article was originally published on The Conversation. Read the original article.