Thomas Robinson is the COO of Domino’s Data Lab, where he is responsible for revenue and go-to-market, sales, marketing, professional services, customer support and partnerships.
The great cloud migration revolutionized IT, but after a decade of cloud transformation, the most sophisticated enterprises are now making the next leap: rapidly moving workloads out of the cloud and back on-premises to support business-critical data science initiatives, and developing true hybrid strategies that span cloud and on-premises systems. Enterprises that have not started this process are already behind.
The great cloud migration
Ten years ago, the cloud was mostly used by small startups that lacked the resources to build and operate physical infrastructure, and by businesses that wanted to move their collaboration services to managed infrastructure. Public cloud services (and cheap capital in a low-interest-rate economy) meant that such customers could serve growing numbers of users relatively cheaply. This environment enabled cloud-native startups like Uber and Airbnb to scale and thrive.
However, cloud-first strategies can exceed the limits of their efficacy, and in many cases ROIs are falling short, triggering a major cloud backlash. Ubiquitous cloud adoption has created new challenges, namely out-of-control costs, deepening complexity and restrictive vendor lock-in. We call this cloud sprawl.
Cloud spending is skyrocketing due to the sheer volume of workloads running in the cloud. Enterprises are now running core compute workloads and massive storage volumes there, not to mention ML, AI and deep learning programs that require dozens or hundreds of GPUs and terabytes or even petabytes of data.
Costs keep rising with no end in sight. In fact, some companies are now spending twice as much on cloud services as they were before moving their workloads off on-premises systems. Nvidia estimates that moving large, specialized AI and ML workloads back on-premises could yield savings of 30%.