Cloud computing is a fuzzy concept by nature, since the whole purpose of the cloud is to create a hosted abstraction of computing services that can provide businesses and even consumers with an alternative to local technology resources. For the cloud service provider, however, the cloud concept can be anything but fuzzy.
Real dollars need to be spent on real equipment to create cloud services, but questions remain about what the cloud technology model should be. That is particularly true when cloud providers plan to offer Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS) or even Business Process as a Service (BPaaS). Thinking broadly, Everything-as-a-Service (XaaS) offerings are designed to provide application support to their users; the question is how that support will be hosted.
All cloud providers, like all computer users, build their offerings from the same basic combination of servers, storage, software and network tools. That means it’s convenient to look at the technology models of the cloud, starting with the computing options that are available, and then evaluate how the models selected might impact the services that can be profitably offered.
Three essential computing models mold cloud technology offerings
Three essential server models for a data center currently exist: discrete servers, virtual servers and multitasking hosts. The best one is the one that matches the provider’s cloud business model goals.
- The discrete server model is often used by Web hosting companies, as well as by enterprises with application-specific server resources, because applications are assigned to specific servers and run there continuously. The benefit is that it’s easy to administer, since the assignment of work to resources is static. The risk is that the work assigned to any server could under-utilize its resources. That would reduce return on investment (ROI) for the cloud provider, so a discrete server model is best suited to the XaaS offering with the highest profit margin, Software as a Service, or to services that provide “hot standby” resources to enterprises for backup or workflow offload.
- The virtual server model is based on the widely popular virtualization technology used by enterprises for server consolidation. The goal of virtualization is to create logical “containers” in each server; these containers, or virtual machines, appear to applications as dedicated servers. This lets applications run as before while keeping server resources as fully utilized as possible, increasing the total work a pool of servers can do and improving ROI. IaaS cloud services also present virtual machines as the service itself, so the virtual server model is ideal for IaaS services.
- The multitasking host model of data center organization is used primarily by organizations that have adopted the service-oriented architecture (SOA) software model. Software is divided into components, and these components are then assigned to run as tasks on one of several compatible host systems. Because PaaS offerings typically combine software components with server hosting, the PaaS model is readily adapted to multitasking host-based data centers. SaaS offerings are also likely best adapted to this model, since software services typically rely on a single set of operating system, database and middleware tools that can be hosted on multitasking server pools.
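The ROI difference between the discrete and virtual models above can be made concrete with a small sketch. The workload figures and the first-fit packing heuristic here are hypothetical illustrations, not anything prescribed by a particular virtualization product:

```python
# Illustrative comparison of average server utilization under the
# discrete model (one application per server) and the virtual model
# (applications packed into virtual machines on shared servers).
# Workload demands are hypothetical fractions of one server's capacity.

workloads = [0.20, 0.35, 0.15, 0.50, 0.10, 0.40]

# Discrete model: each application gets its own server.
discrete_servers = len(workloads)
discrete_util = sum(workloads) / discrete_servers

def first_fit(demands, capacity=1.0):
    """Pack demands onto as few servers as possible (first-fit heuristic)."""
    servers = []
    for d in demands:
        for i in range(len(servers)):
            if servers[i] + d <= capacity:
                servers[i] += d
                break
        else:
            servers.append(d)
    return servers

# Virtual model: the same applications consolidated onto shared servers.
virtual_servers = first_fit(workloads)
virtual_util = sum(workloads) / len(virtual_servers)

print(f"Discrete: {discrete_servers} servers, {discrete_util:.0%} average utilization")
print(f"Virtual:  {len(virtual_servers)} servers, {virtual_util:.0%} average utilization")
```

With these sample demands, six dedicated servers average well under a third of their capacity, while consolidation onto two shared servers pushes average utilization above 80 percent, which is the ROI argument for virtualization in a nutshell.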
Operators are increasingly looking at the most flexible approach, which is simply to use a large server/storage farm that can host all of the service options rather than bet on a single approach or build one on top of another. A cloud operator’s size allows it to secure a commanding economy of scale without committing to a single approach, and the broader range of services it can offer this way would potentially increase its total addressable market in the cloud services space.
Choosing which database model to support your cloud services
The issue of which database model to support may be finessed in a similar way. Operators can create storage networks in their cloud data centers and offer both high-level database services such as a SQL/relational DBMS, similar to Microsoft’s Azure, and block-level or file-level server I/O, which is more along the lines of the Amazon EC2 model.
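The two tiers can be sketched as interfaces over the same storage pool. The class and method names below are invented for illustration and do not correspond to any real provider API; sqlite3 simply stands in for a hosted relational service:

```python
# Hypothetical sketch of two storage service tiers a cloud operator
# could offer from one storage network: raw block I/O (EC2-style) and
# a relational database service (Azure-style).
import sqlite3

class BlockVolumeService:
    """Block tier: the tenant sees numbered blocks, nothing higher."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.store = {}                     # block number -> bytes

    def write_block(self, n, data):
        self.store[n] = data[:self.block_size]

    def read_block(self, n):
        return self.store.get(n, b"\x00" * self.block_size)

class SqlDatabaseService:
    """Relational tier: the tenant sees tables and queries, not devices."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")

    def execute(self, sql, params=()):
        cur = self.db.execute(sql, params)
        self.db.commit()
        return cur.fetchall()

# Both tiers are served from the same underlying data center storage.
vol = BlockVolumeService()
vol.write_block(0, b"raw tenant data")

db = SqlDatabaseService()
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.execute("INSERT INTO orders VALUES (?, ?)", (1, 99.50))
print(db.execute("SELECT total FROM orders WHERE id = ?", (1,)))
```

The design point is that the operator commits to the storage network, not to a single service abstraction; either tier can be layered on top as demand dictates.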
For operators, the solution to the database technology debate may come out of their service targets. At traditional data/storage pricing levels, there is little chance that enterprises would move large data warehouses into the cloud, and there is also little chance the cloud would be able to back up mission-critical applications, since those applications would need data access to run.
The Database as a Service (DBaaS) model of storage could be extended to allow cloud applications or components to “back-access” enterprise repositories still located within a customer data center. This would eliminate the cost of storing the data in the cloud and also reduce security and compliance concerns. That would open a wider range of applications to public cloud services, potentially enhancing the business case for providers.
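The back-access pattern described above can be sketched as follows. Everything here, including the repository interface and record layout, is invented for illustration; the point is only that the cloud component computes on data it fetches on demand, rather than on data stored in the cloud:

```python
# Hedged sketch of the "back-access" pattern: a cloud-hosted component
# reads records from a repository that stays in the customer's data
# center, so no enterprise data is stored at rest in the cloud.

class EnterpriseRepository:
    """Stands in for a DBMS that remains on the customer premises."""
    def __init__(self, records):
        self._records = records

    def fetch(self, key):
        return self._records.get(key)

class CloudAppComponent:
    """Runs in the provider's cloud but holds no customer data at rest."""
    def __init__(self, repo_link):
        self.repo_link = repo_link          # secure link back to the enterprise

    def process_order(self, order_id):
        record = self.repo_link.fetch(order_id)   # back-access on demand
        if record is None:
            return "unknown order"
        # Compute in the cloud; the source of record never leaves the enterprise.
        return f"order {order_id}: {record['qty']} x {record['sku']}"

on_prem = EnterpriseRepository({42: {"sku": "WIDGET-9", "qty": 3}})
app = CloudAppComponent(on_prem)
print(app.process_order(42))
```

Because the repository of record stays behind the customer’s own security and compliance controls, the provider avoids both the storage cost and much of the data governance exposure.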
Flat data center network connections may prove most flexible
In the meantime, the need for flexibility at the server-infrastructure level implies that the connection among data center elements should be similarly flexible. The greater the demands for bandwidth and interconnectivity at the server and storage level, the greater the need for a fast and relatively flat data center network.
Given that hierarchies of switches generally depend on successively faster trunk connections within and among switch layers, rapid growth and the need to create a quick market response may accelerate operator deployments faster than high-speed Ethernet standards can evolve. This suggests that data center fabric models of connection within the data center may be the best approach. By creating a deterministic model of interconnection among data center elements, operators could reduce the performance variations that would otherwise develop as they change their server/storage connectivity models to accommodate changes in cloud models or to support new service models and software components.
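The trunk-speed pressure on switch hierarchies can be seen in a back-of-the-envelope oversubscription calculation. All port counts and speeds below are hypothetical examples, not measurements:

```python
# Oversubscription at an aggregation switch: the ratio of downstream
# demand to upstream trunk capacity. 1.0 means non-blocking.

def oversubscription(edge_ports, edge_gbps, uplink_ports, uplink_gbps):
    """Ratio of total edge bandwidth to total uplink bandwidth."""
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# 48 servers at 10 Gbps feeding four 40 Gbps trunks: performance then
# varies with the traffic mix, which is the hierarchy's weakness.
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio:.1f}:1 oversubscribed")   # 3.0:1

# A flat, non-blocking fabric targets 1:1, making performance deterministic.
flat = oversubscription(48, 10, 12, 40)
print(f"{flat:.1f}:1")                   # 1.0:1
```

Closing that 3:1 gap inside a hierarchy requires ever-faster trunk links, which is exactly the dependence on Ethernet speed evolution that a flat fabric avoids.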
The market is in the early days of enterprise cloud services, and even earlier in the development of optimal models of cloud services for SMBs and consumers. For cloud providers, the risk of stranding capital through sub-optimal server, storage and network decisions made now to address market opportunity can be mitigated by optimizing for flexibility first, then focusing technology choices on the preferred models as those models can be identified with more confidence.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications and networking strategy issues. Check out his SearchCloudProvider.com networking blog, Uncommon Wisdom.