The sheer amount of information that computer systems generate is becoming difficult to store and manage. In 2022, people send about 333.2 billion emails, post over 500 million tweets, and generate 4 petabytes of Facebook data every day.
Estimates for the amount of data created and consumed worldwide this year run to roughly 97 zettabytes. For reference, 1 zettabyte is 1 billion terabytes, which means data management is going to be a challenge for enterprises over the next few years.
One organisation working to manage this glut of data is Seagate, an American data storage company. Founded in the late 1970s, it remains one of the top producers of hard disk drives today.
We recently contacted Ravi Naik – its Chief Information Officer and Executive Vice President, Storage Services – to ask him about this data glut, how their Lyve cloud storage platform works, what data management will be like in a few years, and more.
What kind of data management challenges can businesses expect to face when it comes to applications like AI, especially with the current explosion of data? How can businesses address these concerns?
As the world becomes more interconnected and companies adopt more emerging technologies like AI, they will be faced with unprecedented data growth. By 2025, IDC predicts the global datasphere will grow to 180 zettabytes. Moreover, by 2025, 44% of data created in the core and edge will be driven by analytics, AI, and deep learning, and by an increasing number of IoT (Internet of Things) devices feeding data to the enterprise edge.
CIOs like myself are facing a data problem in this environment. There is more data created than ever before, and more use cases to unlock value from that data. However, many CIOs report that there is no easy way to store and activate data at scale securely and cost-effectively.
‘The more data we store, the less it should cost,’ they tell me. But that is not the case. Multi-cloud freedom is elusive, and data value suffers as a result.
There are exabytes of data trapped in silos that remain unutilised, either in customers’ own data centres or in segregated cloud environments. These architectures lock in customers’ data and charge them not only for storing that data, but also for accessing and moving it.
Customers are facing a situation where cloud economics fail to deliver at scale because of:
- Ever-increasing cloud bills;
- Data access fees dominating storage costs;
- Dependence on proprietary data enablement services that customers must pay for.
Businesses must rethink data management strategies and deploy technologies that allow them to capture the right data at the start of its lifecycle, to safely store the data, and to access it seamlessly when needed. They need data solutions that not only enable free movement of data, but also effective consumption of data via an open-source ecosystem.
Could you talk to us about the technology behind your Lyve cloud storage platform? What makes Lyve’s technology different from other storage-as-a-service platforms currently available for enterprises?
Lyve Cloud is Seagate’s cloud-storage-as-a-service platform, which lets organisations store data at scale and migrate large volumes of data on or off the cloud. It aims to help businesses and organisations manage the mass data sets created across a datasphere shaped by AI, ML (machine learning), and 5G, data that is typically unstructured, siloed, and underutilised.
The value of Lyve Cloud is its predictable economics. Customers pay for storage and nothing else. All data access, egress, operations, and retrieval charges are already included. There are no extras, there are no hidden fees, there is no small print. We also provide support and professional services at no extra cost.
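The difference between flat, storage-only pricing and metered cloud pricing can be illustrated with a toy cost model (all rates and volumes below are hypothetical, purely for illustration, not actual Lyve Cloud or competitor pricing):

```python
def monthly_cost(stored_gb, egress_gb, storage_rate, egress_rate=0.0, api_fee=0.0):
    """Estimate a monthly bill: storage plus optional egress and API charges."""
    return stored_gb * storage_rate + egress_gb * egress_rate + api_fee

# Hypothetical figures: 100 TB stored, 20 TB read back out per month.
flat = monthly_cost(100_000, 20_000, storage_rate=0.007)  # storage-only pricing
metered = monthly_cost(100_000, 20_000, storage_rate=0.005,
                       egress_rate=0.05, api_fee=30.0)    # storage + egress + API fees

print(f"flat: ${flat:,.2f}  metered: ${metered:,.2f}")
```

Even at a lower per-gigabyte storage rate, the metered bill in this sketch ends up more than double the flat one once egress and API fees are counted, which is the dynamic behind the “data access fees dominating storage costs” complaint.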
Lyve Cloud is vendor-agnostic, interoperable with other software, hardware, and services. This means businesses can move data into, around, and out of Lyve Cloud seamlessly.
Crafted with data security and privacy in mind, Lyve Cloud offers object immutability, which defends data from ransomware, corruption, or deletion. Object immutability enables users to specify how long they would like their data to be immutable, empowering users to take control of their data protection. It also keeps audit logs for detailed records of activity to support compliance and track suspicious activities.
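The retention behaviour described, where an object cannot be overwritten or deleted until its immutability window expires and every access attempt is logged, can be sketched as a toy in-memory model (an illustration of the general write-once-read-many idea, not Lyve Cloud’s actual implementation):

```python
from datetime import datetime, timedelta, timezone

class ImmutableObjectStore:
    """Toy WORM store: objects cannot be overwritten or deleted until their
    user-specified retention period expires. Every attempt is audit-logged."""

    def __init__(self):
        self._objects = {}   # key -> (data, retain_until)
        self.audit_log = []  # (timestamp, action, key, allowed)

    def _locked(self, key, now):
        return key in self._objects and self._objects[key][1] > now

    def put(self, key, data, retention_days):
        now = datetime.now(timezone.utc)
        locked = self._locked(key, now)
        self.audit_log.append((now, "put", key, not locked))
        if locked:
            raise PermissionError(f"{key} is immutable until {self._objects[key][1]}")
        self._objects[key] = (data, now + timedelta(days=retention_days))

    def delete(self, key):
        now = datetime.now(timezone.utc)
        locked = self._locked(key, now)
        self.audit_log.append((now, "delete", key, not locked))
        if locked:
            raise PermissionError(f"{key} is immutable until {self._objects[key][1]}")
        self._objects.pop(key, None)

store = ImmutableObjectStore()
store.put("backups/db.dump", b"...", retention_days=30)
try:
    store.delete("backups/db.dump")  # rejected: retention window still active
except PermissionError as e:
    print("blocked:", e)
```

The audit log captures denied operations as well as successful ones, which is what makes it useful for compliance review and for spotting suspicious activity such as repeated deletion attempts.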
What predictions do you foresee in the data management sector for the next three to five years? How will emerging technologies like 5G, AI, and ML affect its evolution?
Some of the trends that will animate the next few years are:
- Distributed storage networks for Web 3.0: We are seeing an emergence in the use of decentralised consensus protocols and distributed ledger technologies for new ways of storing data in decentralised storage networks. We believe these storage networks will eventually become the foundation for Web 3.0. However, the amount of capacity within the decentralised storage networks will also need to grow to be able to onboard hundreds of petabytes of data.
- Greater adoption of multi-cloud strategies: A multi-cloud approach makes it easier for businesses to combine services and resources from multiple cloud providers to increase flexible use and availability of various applications and services like StaaS (storage as a service), SaaS (software as a service), and CaaS (content as a service). However, this means businesses will face greater complexity in terms of infrastructure, security, and cost.
- Data security and protection are becoming more important than ever: The intensified movement of data creates vulnerabilities, and hence a need for greater protection. Applications and data are in motion across cloud environments, which requires strict policy adherence and device-level security that manages access to the device itself. Additionally, as the hybrid cloud connects to the edge, endpoints, and IoT ecosystems, it requires data storage near each of those devices as well, since moving large data sets over the network is expensive.
As a result, we will see continued reliance on secure data shuttles as alternatives to expensive and bandwidth-limited network traffic, as well as continued ML optimisation to pair compute with storage at the edge for inference and decision-making via intelligent appliances. Devices themselves will continue to see growth in secure at-rest encryption.
Today, newer centralised data management solutions can successfully leverage applications like AI and ML to identify sensitive data (e.g. personally identifiable information, private health information, credit card numbers, etc) and automatically mask them from the view of unauthorised personnel. This reduces the chance of a data breach or inadvertent data disclosure.
- Megatrends that are already vibrant, such as smart factories, smart cities, autonomous vehicles, and the study of human genomics, will also drive the need for mass-capacity storage. While 5G, AI, and ML will play a critical role in these megatrends, they will eventually demand new mass-capacity storage solutions from endpoints to edge and cloud.
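The automated masking of sensitive data described above can be sketched in miniature. Production systems use trained ML classifiers; the simple regex rules below are illustrative stand-ins that show only the redaction mechanics:

```python
import re

# Illustrative detection rules; real systems use trained models, not regexes.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text):
    """Replace each detected sensitive value with a labelled redaction marker."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

print(mask_pii("Card 4111 1111 1111 1111, contact jane@example.com"))
```

Unauthorised personnel would be served the masked text while the raw values stay in the protected store, reducing the blast radius of a breach or an inadvertent disclosure.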
What do you think are Seagate’s top technology challenges in this age of COVID-19, heightened cyberthreat risks, and massive data explosion?
The pandemic has accelerated a digital transformation that had been underway for decades. But it has also changed the threat landscape and created new risks to business continuity.
The foremost priorities for Seagate are network security, data availability, and protection for business continuity. We are reinforcing our resilience and reducing our vulnerability to data attacks while increasing productive data use. We also have to protect remote devices and provide secure network access for all our remote employees to ensure operational effectiveness amid new cyberthreats.