by Darryl Chantry
Summary: As economic pressure builds globally, many organizations are starting to look at cloud computing as a way to reduce the total cost of ownership for IT. In searching for ways to use cloud-computing technologies, enterprises have to ask which applications make good candidates for moving to the Cloud and which do not, as well as broader questions such as "Does the nature of the business itself allow for cloud computing to even be considered?"
This article provides a broad overview of what cloud computing is and discusses an approach to mapping enterprise applications to cloud platforms, to help you determine whether your applications or your business model are a good fit for the Cloud.
Which Came First: The Cloud or Cloud Computing?
Cloud computing has fired the imaginations of many information-technology (IT) professionals around the world, whether they are small independent software vendors (ISVs), Silicon Valley startups, or large corporations that are looking to cut costs. There seems to be an ever-increasing number of people who look to the Cloud hoping to hit upon the magic bullet that will solve any IT problem.
One interesting aspect of the hype that surrounds cloud computing is the lack of a clear definition as to what cloud computing is and, just as relevant, what it is not. If you were to ask 100 people to define the Cloud and what they believe cloud computing is, you would probably get 150 different answers (some people tend to answer twice, with the first answer contradicting the second). With this in mind, it seems only fitting to begin this article by discussing a general definition for cloud computing.
The Cloud (or the Internet, if you prefer) has been around for some time now, about 25 years; so without a doubt, the Cloud came first, right? Well, one could argue that the first servers on the Internet were really storage devices for data and applications to be shared and run globally; or to put it another way, to provide cloud computing resources in multiple locations globally with almost infinite scalability. Contrast that to today’s cloud-computing initiatives that are pretty much there to provide data, applications, and computing power globally with almost infinite scalability, and you quickly see the difference… Or is there really a difference?
The difference is that we are using new technologies to put a new spin on old ideas. Cloud computing is more about evolution than revolution, with technology allowing price points that take these ideas and make them available to all people — regardless of budget size — via a utility-based, pay-for-what-you-use model.
Utility computing refers to using computing resources (infrastructure, storage, core services) in the same way you would use electricity or water; that is, as a metered service in which you only pay for what you use. The utility can eliminate the need to purchase, run, and maintain hardware, server, and application platforms, and to develop core services — for example, billing or security services. Consider the following scenario.
A Web-based ISV that wants to make components available for Facebook or MySpace faces the following dilemma: The components they create could be adopted by thousands, or could struggle to find acceptance in any form. Most ISVs have limited capital, so they need to balance expenditure between developing their application and providing the infrastructure to support their software.
Such balancing acts can lead to poor applications with good platform support, or great applications that are rarely accessible due to poor platform support. Neither scenario is a path to success; this is where utility-based cloud platforms can help. Cloud utility platforms can provide a low-cost alternative that easily scales to meet the demand for the ISV's application, which allows the ISV to commit practically all of its resources toward building a great application.
As cloud services are essentially available as a utility offering, should the product fail, the ISV can simply shut down the services and stop all costs associated with the software.
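To make the utility trade-off concrete, here is a minimal sketch in Python of comparing a pay-for-what-you-use rate against owning and maintaining servers. All prices are purely illustrative assumptions, not any real provider's rates:

```python
# Hypothetical cost comparison between buying dedicated servers up front
# and paying a metered, utility-style rate. All figures are illustrative.

def fixed_cost(servers: int, price_per_server: float, monthly_upkeep: float,
               months: int) -> float:
    """Total cost of owning hardware: purchase plus ongoing upkeep."""
    return servers * price_per_server + servers * monthly_upkeep * months

def utility_cost(hours_used: float, rate_per_hour: float) -> float:
    """Total cost of a pay-for-what-you-use model: usage times rate."""
    return hours_used * rate_per_hour

# An ISV that only needs bursts of capacity pays for actual usage...
burst = utility_cost(hours_used=500, rate_per_hour=0.12)      # 60.0

# ...while owning two servers for a year costs far more, used or not.
owned = fixed_cost(servers=2, price_per_server=2000.0,
                   monthly_upkeep=150.0, months=12)           # 7600.0

print(burst < owned)  # True: the utility model wins at low utilization
```

The point of the sketch is the shutdown property described above: if the product fails, the utility cost simply stops accruing, whereas the fixed cost is already sunk.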
The utility model also allows organizations to offset some of the costs of running private data centers by providing additional infrastructure resources to manage peak loads; this is also known as cloud bursting.
Traditionally, to handle peak loading, organizations would often design data centers that had the processing power to manage peak loads; this meant that for the majority of the time the data center was underutilized. By using cloud bursting, an organization can build a data center to the specifications that will allow the entity to run all normal day-to-day workloads within their environment, and then use cloud providers to provide additional resources to manage peak loads.
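The bursting decision described above can be sketched as a simple routing rule: requests up to the data center's normal capacity stay on premises, and only the overflow is sent to a cloud provider. Capacity and request numbers here are illustrative assumptions:

```python
# A minimal sketch of a cloud-bursting decision. The data center is sized
# for normal day-to-day load; anything beyond it bursts to the Cloud.

def route_load(requests: int, on_prem_capacity: int) -> dict:
    """Split incoming load between the internal data center and the Cloud."""
    on_prem = min(requests, on_prem_capacity)
    burst = max(0, requests - on_prem_capacity)
    return {"on_prem": on_prem, "cloud_burst": burst}

# Normal day: everything fits in the internal data center.
print(route_load(800, on_prem_capacity=1000))   # {'on_prem': 800, 'cloud_burst': 0}

# Peak day: the extra 1,500 requests burst out to the cloud provider.
print(route_load(2500, on_prem_capacity=1000))  # {'on_prem': 1000, 'cloud_burst': 1500}
```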
Utility computing is often associated with some form of virtualized platform that allows an almost infinite amount of storage and/or computing power to be made available to the platform's users through large data centers. The evolution of cloud computing is now broadening the definition of utility computing to include services beyond those of pure infrastructure.
Will All Applications Move to the Cloud?
Will all applications run in the Cloud? Should you attempt to port all of your existing applications to the Cloud? Should all your new applications be developed in the Cloud? What is this Cloud thing, anyway? These are a few of the questions that arise whenever you start thinking about using cloud services.
Some applications will be ideal candidates to be ported to a cloud platform, developed on a cloud platform, or hosted on a cloud infrastructure, while other applications will be poor cloud candidates. In this case, the standard architectural answer "it depends" can be applied to all of the preceding questions. Practically every application could potentially exist either partially or fully in the Cloud; the only caveat to this is the trade-offs in an application's attributes — and, possibly, functionality — that you might be willing to make to move it to the Cloud.
The following pages discuss a few ideas for decomposing an application into its basic attributes, and decomposing the Cloud into its basic attributes, to help make decisions as to whether running your specific application in the Cloud is practical.
Mapping an Application to the Cloud
Every application is designed to fulfill some purpose, whether it is an order management system, a flight reservation system, a CRM application, or something else. To implement the function of the application, certain attributes need to be present: For example, with an order management system, transaction and locking support might be critical to the application, which would mean that cloud storage might not be suitable as a data store for that purpose. Determining the key attributes of any application, or of a subsystem of a larger application, is a key step in determining whether or not an application is suitable for the Cloud.
Figure 1. Attributes map of an application
Figure 1 shows a number of key high-level attributes (blue column) that could be relevant to any application. Not every potential attribute of a given application needs to be recorded; what you are attempting to determine is which attributes are critical to your application. This will likely produce a manageable list of attributes that can then be mapped to the Cloud. Selecting Data Management, for example, presents a list of secondary attributes that provide more detail for the high-level attribute. Selecting Access then allows you to specify whether you want online, offline, or both online and offline access to your data source.
Building on the Data Access example, you can start to see how this attribute could affect the choice as to whether or not to use a cloud provider for data storage. Should the application in question need purely online data, cloud storage would be an excellent choice; however, if offline data is all that is required, this could be a key indicator that the application is not suited for the Cloud. And if you decide that the application requires both an online and an offline model, the cost of developing the application to synchronize data between the application and the Cloud would need to be considered.
Choosing to support both an offline and online experience for the end users will add additional cost to the project; however, should another attribute, such as high scalability, be identified, the advantages that the Cloud provides in this area easily could offset the cost of developing an offline experience. (See Appendix A, Sample Application-Mapping Attributes, page 7.)
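The attribute-mapping exercise can be sketched in a few lines of Python. The attribute names and the set of Cloud-supported attributes below are hypothetical placeholders, not a definitive catalog:

```python
# A sketch of the attribute-mapping idea from Figure 1: record only the
# attributes that are critical to the application, then check them against
# what a cloud platform can satisfy. Attribute names are illustrative.

CLOUD_SUPPORTED = {"high_scalability", "online_access", "pay_per_use"}

def cloud_fit(critical_attributes: set) -> tuple:
    """Return attributes the Cloud satisfies and ones needing trade-offs."""
    satisfied = critical_attributes & CLOUD_SUPPORTED
    gaps = critical_attributes - CLOUD_SUPPORTED
    return satisfied, gaps

# An order management system that needs transactions and offline access
# surfaces two gaps that would have to be resolved (or traded away).
satisfied, gaps = cloud_fit({"high_scalability", "transactions", "offline_access"})
print(sorted(gaps))  # ['offline_access', 'transactions']
```

The gaps are exactly where the trade-off analysis above applies: each one either rules out the Cloud for that subsystem or carries an implementation cost (such as building data synchronization) that other attributes, like high scalability, may or may not offset.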
What Makes Up the Cloud?
After you have decomposed an application and determined its key attributes, you can begin work on a similar exercise for the Cloud — specifically, for cloud service providers. Splitting cloud attributes into broad categories can simplify the mapping process. The categories used in this example are cloud infrastructure, cloud storage, cloud platform, cloud applications, and core cloud services.
You could map any application attributes to cloud attributes in one or more categories, as depicted in Figure 2.
Figure 2. Mapping application attributes to cloud attributes
Cloud infrastructure is infrastructure, most commonly virtual servers, provided in the Cloud. Infrastructure offerings are the horsepower behind large-scale processes or applications. For large-scale applications, think Facebook or MySpace; for large-scale processing, think of a high-performance infrastructure cluster that is running engineering stress-test simulations for aircraft or automobile manufacturing.
The primary vehicle for cloud infrastructure is virtualization; more specifically, running virtual servers in large data centers, thereby removing the need to buy and maintain expensive hardware, and taking advantage of economies of scale by sharing infrastructure resources. Virtualization platforms are typically either full-virtualization or para-virtualization environments. (See Appendix B for a more detailed explanation of virtualization, page 8.)
Cloud storage refers to any type of data storage that resides in the Cloud, including: services that provide database-like functionality; unstructured data services (file storage of digital media, for example); data synchronization services; or Network Attached Storage (NAS) services. Data services are often consumed in a pay-as-you-go model or, in this case, a pay-per-GB model (including both stored and transferred data).
Cloud storage offers a number of benefits, such as the ability to store and retrieve large amounts of data in any location at any time. Data storage services are fast, inexpensive, and almost infinitely scalable; however, reliability can be an issue, as even the best services do sometimes fail. Transaction support is also an issue with cloud-based storage systems, a significant problem that needs to be addressed for storage services to be widely used in the enterprise.
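As a rough sketch of the pay-per-GB model, the following uses made-up rates (no real provider's pricing) to show how stored and transferred data both contribute to the monthly bill:

```python
# Illustrative pay-per-GB billing for cloud storage: you are charged for
# both the data you keep and the data you move. Rates are assumptions.

def storage_bill(stored_gb: float, transferred_gb: float,
                 stored_rate: float = 0.15, transfer_rate: float = 0.10) -> float:
    """Monthly bill: stored data plus transferred data, each metered."""
    return stored_gb * stored_rate + transferred_gb * transfer_rate

# 200 GB at rest and 50 GB of transfer, at the assumed rates above.
print(storage_bill(200, 50))  # 35.0
```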
A cloud platform is really the ability to build, test, deploy, run, and manage applications in the Cloud. Cloud platforms offer alternatives to these actions; for example, the build experience might be online only, offline only, or a combination of the two, while tools for testing applications might be nonexistent on some platforms, yet superb on others.
Cloud platforms, as a general rule, are low-cost, highly scalable hosting/development environments for Web-based applications and services. It is feasible (although an oversimplification) to consider cloud platforms as an advanced form of Web hosting, with more scalability and availability than the average Web host. There are pros and cons to any technology, and a con in the cloud-platform world is portability: As soon as an application is developed to run on a specific platform, moving it to another cloud platform or back to a traditional hosting environment is not really an option.
A cloud application exists either partially or fully within the Cloud, and uses cloud services to implement core features within the application. The architecture of cloud applications can differ significantly from traditional application models and, as such, implementing cloud applications can require a fundamental shift in application-design thought processes.
Cloud applications can often eliminate the need to install and run the application locally, thereby reducing the expenditure required for software maintenance, deployment, management, and support. This type of application would be considered a Software as a Service (SaaS) application.
An alternative to this would be the Software plus Services (S+S) model. This is the hybrid between traditional application development and a full SaaS implementation. S+S applications typically use rich client applications that are installed on a client’s PC as an interface into externally hosted services. S+S often includes the ability to interact with an application in an offline mode, and sync back to a central service when required.
Core Cloud Services
Core cloud services are services that support cloud-based solutions, such as identity management, service-to-service integration, mapping, billing/ payment systems, search, messaging, business process management, workflow, and so on. Core cloud services can be consumed directly by an individual, or indirectly through system-to-system integration.
The evolution of core cloud services potentially will mimic that of the telecommunications industry, with many services falling under the categories of Business Support Systems (BSS) or Operational Support Systems (OSS).
BSS services manage interactions with customers, typically handling tasks such as:
- Taking orders
- Processing bills
- Collecting payments.
OSS services manage the service itself and are responsible for items such as:
- Service monitoring
- Service provisioning
- Service configuration.
Attributes Map for Cloud Services
By using the five cloud categories, we can now develop a set of attributes for each of the categories. These attributes can be used in two ways:
- Mapping your application’s attributes to cloud attributes to validate whether cloud services are suitable for your application, and identifying which types of services to use
- Evaluating cloud service providers as possible candidates for hosting your applications, identifying which types of services are available from your chosen provider(s), and then determining specific implementation attributes of the services offered.
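The second use, evaluating providers, can be sketched as a simple attribute-coverage ranking. The provider names and attribute sets below are hypothetical, not descriptions of real offerings:

```python
# A sketch of ranking cloud providers by how many of the application's
# required storage attributes each one offers. Providers are fictional.

PROVIDERS = {
    "ProviderA": {"blob_storage", "queues", "sync", "transactions"},
    "ProviderB": {"blob_storage", "nas", "sync"},
}

def rank_providers(required: set, providers: dict) -> list:
    """Sort providers by how many of the required attributes they cover."""
    return sorted(providers,
                  key=lambda p: len(required & providers[p]),
                  reverse=True)

required = {"blob_storage", "sync", "transactions"}
print(rank_providers(required, PROVIDERS))  # ['ProviderA', 'ProviderB']
```

A real evaluation would weight attributes by importance and factor in implementation cost, as the text notes; coverage counting is only the first cut.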
Figure 3. Five cloud categories and attributes for cloud storage
Figure 3 shows the five cloud categories and a list of attributes for the cloud-storage category. Each cloud provider implements its cloud services in a slightly different way, with companies like Microsoft offering a number of different storage alternatives that developers can choose to use, depending on the required features, for a specific application.
Just like application attributes, cloud attributes must be weighed carefully when determining whether a cloud provider’s services are a good fit for your needs, as you will have to factor in implementation cost for each decision you make. (See Appendix A, Sample Cloud-Mapping Attributes, page 8.)
Overlaying the Cloud and Applications
Now that you have a complete understanding of the application and of what cloud services you could use to implement a solution, you can start to make decisions about what the eventual architecture could be. Should you find that the Cloud is a viable and cost-effective alternative to traditional application architecture, the next step would be to choose the most suitable cloud provider(s) for the application.
It is quite possible that no single vendor would match completely with your requirements; however, you might find that you can obtain all of the services that your application requires from multiple vendors.
Figure 4. Single application that uses multiple cloud services and vendors
Figure 4 depicts an application that uses a number of cloud services, from multiple cloud providers. The preceding example could represent an application that is built in ASP.NET and is running on the Azure platform (cloud platform); however, the application also needs components with full trust, which means that the components can run only in a full virtual environment (cloud infrastructure). Data is stored in a Microsoft cloud (cloud storage), with services such as Workflow and Identity (core cloud services) also provided via Azure. The last requirement for the application could be a billing/payment service (core cloud services), which could be provided by another cloud provider.
Although this scenario is feasible, the costs that are associated with having accounts with multiple providers, using a number of APIs, and then integrating all of the services into an application could be impractical. The likely solution would be to find a single vendor that delivers the majority of the services that are required by your application, and use this as the base platform for a hybrid solution.
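One way to keep such a hybrid solution manageable is a thin adapter layer between the application and each vendor's API, so that swapping a vendor does not ripple through the application. The classes below are a sketch under that assumption; they are hypothetical stand-ins, not real vendor SDKs:

```python
# A sketch of insulating application code from multiple cloud vendors
# behind one storage interface. Vendor classes are hypothetical; a real
# implementation would call each vendor's actual API inside put().

class StorageAdapter:
    def put(self, key: str, data: bytes) -> str:
        raise NotImplementedError

class VendorAStorage(StorageAdapter):
    def __init__(self):
        self.items = {}
    def put(self, key: str, data: bytes) -> str:
        self.items[key] = data          # would call vendor A's API here
        return f"vendorA://{key}"

class VendorBStorage(StorageAdapter):
    def __init__(self):
        self.items = {}
    def put(self, key: str, data: bytes) -> str:
        self.items[key] = data          # would call vendor B's API here
        return f"vendorB://{key}"

def save_order(storage: StorageAdapter, order_id: str, payload: bytes) -> str:
    """Application code depends only on the adapter, not on any vendor."""
    return storage.put(order_id, payload)

print(save_order(VendorAStorage(), "order-42", b"..."))  # vendorA://order-42
```

The design choice here is the classic one: the adapter adds a small amount of indirection up front in exchange for making the multi-vendor (or vendor-migration) scenario tractable later.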
One Cloud to Rule Them All
Is there one cloud, or are there multiple clouds? This is a debate that I have heard a number of times already. One side of this argument is public versus private clouds. Can a private implementation of cloud technologies be called a cloud at all, or is it something else? Are all of the public cloud offerings the same? And what about applications or systems that span both private and public clouds in a hybrid model? Where do these fit?
The honest answer is that the argument is irrelevant. Whether you subscribe to one theory or another, the desired outcome is the same: Build the most cost-effective system that you can that works. The previous section looked at ideas to help you make decisions; now, we will take a quick look at potential applications that could exist in the Cloud or as part of a hybrid cloud solution.
Architecting Solutions in the Cloud
This section describes three application scenarios for which solutions could be implemented by using cloud services. The following scenarios are by no means an exhaustive list of possible solutions that are suitable for cloud services; they are only indications of applications that could be feasible.
Concert Ticketing System
When discussing the benefits of a cloud infrastructure, there seems to be a consensus that a concert ticketing system would make an ideal candidate for a cloud scenario (Figure 5). On the surface, this type of application looks like a viable candidate: Ticketing systems are often subject to high demand over a short period of time (people scrambling to buy tickets for a concert or sporting event that will sell out rapidly), followed by long periods of low to moderate activity.
Figure 5. Using cloud infrastructure for ticketing system
Ticketing applications are often overloaded during periods of high demand, when the need for computing resources is extremely high. The ability to spin up instances of virtual machines to cover such periods would be beneficial. There are, however, a number of issues that must be taken into account before architecting such a solution:
- Ticketing systems are data intensive and highly transactional. Transactions can be required for the payments system as well as to reserve specific seats for a given event.
- Personally identifying information is almost certain to be collected, with many customers having an account with the ticketing company or, potentially, wanting to create an account with the organization.
- Validating credit-card payments can be time consuming, and is a potential source of bottlenecks in the ticketing process.
- Some virtual cloud-server platforms cannot save state, which means that once a server image is shut down, all changes or additions to data that is stored within the image are lost.
- Existing ticketing companies already will have significant investment in infrastructure and data management.
- Depending on the cloud service that is chosen, virtual cloud-server images that are not able to save state will need to be recreated for every event, which could well result in a significant amount of work being required to prepare an environment for each new instance.
With all of this in mind, there are a number of ways to use a cloud service to reduce the demand on an existing system during a peak loading time:
- Duplicate the internal system completely. This would require the most amount of work; once the application is ready to use, it still would need to be synchronized with the current system for every new instance. Permanently leaving the system on (even in a reduced capacity) could be expensive due to the cost of using a services platform.
- Split the workload between internal systems and a cloud service in real time. This would involve splitting the process of selecting seats and purchasing tickets across the two environments; for example, the transaction could begin on the internal ticketing system where customers log into their account, select the event they wish to attend, select the seats, and then are passed off to a cloud service for final processing of payment. This would mean creating a virtual cloud server that simply completes the final stage of processing and, as such, would not need to be synchronized with the main system — effectively being a stateless processing engine. Only minimal data would need to be transferred to the cloud service, and credit-card information could be collected on the external system for single use and then deleted.
- Split the workload between internal systems using batch processing. Similar to the preceding method, except that all personal information, including credit-card details, would be collected on the internal system. This information then could be placed in a process queue and shipped in batches to the cloud service for processing. This would mean that, should a payment fail, a secondary process for contacting the person who is attempting to purchase the tickets would need to be implemented if it does not already exist.
The preceding solutions are examples of how a ticketing system could split processing between a cloud service and a company’s internal systems during periods of heavy use.
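As an illustration of the batch-processing option, the following sketch drains a queue of orders in batches and separates successful payments from ones that need follow-up. The payment check is a deliberate stand-in for a real payment gateway, and the order fields are invented:

```python
# A sketch of batch payment processing for the ticketing scenario: the
# internal system queues orders, and the cloud service processes them in
# batches, flagging failed payments for the follow-up process.

from collections import deque

def process_batch(queue: deque, batch_size: int, charge) -> dict:
    """Drain up to batch_size orders; separate successes from failures."""
    results = {"paid": [], "follow_up": []}
    for _ in range(min(batch_size, len(queue))):
        order = queue.popleft()
        bucket = "paid" if charge(order) else "follow_up"
        results[bucket].append(order["id"])
    return results

orders = deque([{"id": 1, "card_ok": True},
                {"id": 2, "card_ok": False},
                {"id": 3, "card_ok": True}])

# charge() here just reads a flag; a real system would call the gateway.
print(process_batch(orders, batch_size=3, charge=lambda o: o["card_ok"]))
# {'paid': [1, 3], 'follow_up': [2]}
```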
Photo/Video Processing
This example shows how you can combine multiple services (infrastructure, storage, and queuing) to provide a solution for data processing (Figure 6). In this scenario, a chain of photo-processing stores makes use of a cloud service to render or reformat digital-media files.
Figure 6. Using cloud infrastructure for photo/video processing
The photo chain has a number of stores spread across the U.S. and wishes to centralize large image and video processing to reduce two aspects of the system: the amount of hardware in each store; and the complexity of maintaining and supporting the hardware.
When a customer comes into a store with a video that needs to be converted to a different format, the video file is first uploaded to a storage service, and then a message is placed in a queue service indicating that a file is on the storage platform and needs to be converted to a different format. An application controller that is running compute instances receives the message from the queue, and then either uses an existing instance of a virtual machine, or creates a new instance, to handle the reformatting of the video. As soon as this process is complete, the controller places a message in the queue to notify the store that the project is complete.
The preceding scenario easily can be converted to an online experience, so that customers could upload files for processing without having to go to a physical location.
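The store-to-cloud workflow above can be sketched with in-memory stand-ins for the storage and queue services; no real cloud APIs are used, and the "conversion" step is simulated:

```python
# A sketch of the queue-driven conversion workflow: upload to storage,
# enqueue a job, worker converts the file, completion message posted.
# Lists and dicts stand in for real queue and storage services.

work_queue, done_queue, storage = [], [], {}

def upload_and_request(name: str, data: bytes, target_format: str) -> None:
    storage[name] = data                      # 1. upload to storage service
    work_queue.append((name, target_format))  # 2. enqueue a conversion job

def worker_step() -> None:
    name, fmt = work_queue.pop(0)             # 3. controller takes a job
    new_name = name.rsplit(".", 1)[0] + "." + fmt
    storage[new_name] = storage[name]         # (real work: re-encode video)
    done_queue.append(new_name)               # 4. notify the store

upload_and_request("wedding.avi", b"<video bytes>", "mp4")
worker_step()
print(done_queue)  # ['wedding.mp4']
```

Because the queue decouples the stores from the workers, the controller is free to reuse a running virtual-machine instance or start a new one for each job, exactly as described above.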
Web Site Peak Loading
The final example that I will use is that of a Web site that has an extremely high amount of traffic on an irregular basis, which makes it impractical to build out the hosting infrastructure to support such peaks (Figure 7). Such sites could be news sites with breaking stories, game sites announcing a new game, or movie sites showing trailers of the next blockbuster.
Figure 7. Using cloud infrastructure for peak load coverage
The solution to this scenario involves creating a complete copy of the company’s Web site, or the part of the Web site that will experience the heavy traffic, on a cloud infrastructure service. The copy of the site would be a static instance running across a number of Web servers that could be configured as either a load-balanced set of servers or as a cluster. You make any changes you need on the original Web site, and then synchronize them out to the cloud servers. This introduces latency, but greatly reduces the effort needed to maintain the Web servers and Web sites, and eliminates the problem of maintaining state between the internally and externally hosted Web sites.
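The synchronization step can be sketched as a change-only push from the master site to its static mirror. Dictionaries stand in for real Web servers, and the page paths and content are illustrative:

```python
# A sketch of synchronizing the master Web site to its static cloud copy:
# only pages whose content has changed (or is new) are pushed, accepting
# some latency in exchange for a simple, stateless mirror.

def sync_site(master: dict, mirror: dict) -> list:
    """Copy changed or new pages from the master site to the cloud mirror."""
    pushed = []
    for path, content in master.items():
        if mirror.get(path) != content:
            mirror[path] = content
            pushed.append(path)
    return pushed

master = {"/index.html": "v2", "/trailer.html": "v1"}
mirror = {"/index.html": "v1"}

print(sorted(sync_site(master, mirror)))  # ['/index.html', '/trailer.html']
print(sync_site(master, mirror))          # []  (already in sync)
```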
There are many ways to architect solutions for the preceding scenarios, as there are many more scenarios in which you can use a cloud service. The goal of this article is merely to highlight a few of the alternatives and uses for the services that are emerging.
The fascination with the Cloud and cloud technologies is driving many developers, ISVs, start-ups, and enterprises to scrutinize cloud services and assess their suitability for adoption. The promises of lower cost of ownership — and of almost limitless scalability, in both storage and infrastructure power — are hard to ignore. The promise of the Cloud definitely warrants inspection; however, you must manage the adoption of cloud services carefully and realize that not all applications are suited for the Cloud. Many applications will work in the Cloud; however, the hidden costs of hosting some solutions in the Cloud could see projects being delivered with much higher development and running costs than would be true of more traditional and well-defined architectures and technologies.
Appendix B: Cloud-Infrastructure Platforms and Virtualization Types
One of the key enabling technologies for cloud-computing platforms is virtualization, which is the ability to provide an abstraction of computing resources. When we look at cloud-infrastructure platforms as they stand today, they predominantly come in two flavors: fully virtualized or para-virtualized environments.
There are many more variations of virtualization than the two that I have just mentioned; so, in this appendix, I will discuss some of the virtualization methods that exist and that could well find their way into a cloud-infrastructure offering.
Emulation
In emulation, the virtual environment emulates a hardware architecture that an unmodified guest operating system (OS) requires. One of the common instances in which you encounter emulated hardware is with mobile devices: Application developers will use an emulated environment to test applications that are designed to run on smart phones or on PDAs, for example. (See Figure 8.)
Figure 8. Emulated-virtualization environment
Pros: Simulates a hardware environment, which is completely different from the underlying hardware. An example of this would be a mobile device such as a smart phone emulated on a desktop PC.
Cons: Poor performance and high resource usage.
Full Virtualization
In full virtualization, an image of a complete, unmodified guest OS is made and run within a virtualized environment. The difference between full virtualization and emulation is that all of the virtualized guests run on the same hardware architecture as the host. Because all of the guests support the same hardware, the guest can execute many instructions directly on the hardware, thereby providing improved performance. (See Figure 9.)
Figure 9. Full-virtualization environment
Pros: The ability to run multiple OS versions from multiple vendors: Microsoft Windows Server 2003, Windows Server 2008, Linux, and UNIX, for example.
Cons: Virtualized images are complete OS installations and can be extremely large files. Significant performance hits can occur (particularly on commodity hardware), and input/output-intensive applications can be adversely affected in such environments.
Para-Virtualization
In para-virtualization, a hypervisor exports a modified copy of the physical hardware. The exported layer has the same architecture as the server hardware; however, specific modifications are made to this layer that allow the guest OS to perform at near-native speeds. To take advantage of these modified calls, the guest OS must itself have small modifications made to it. For example, you might modify the guest OS to use a hypercall that provides the same functionality that you would expect from the physical hardware; by using the hypercall, the guest is significantly more efficient when it is run in a virtualized environment. (See Figure 10.)
Figure 10. Para-virtualization environment
Pros: Lightweight and fast. Image sizes are significantly smaller, and performance can reach near-native speeds. Allows for the virtualization of architectures that would not normally support full virtualization.
Cons: Requires modifications to the guest OS so that it supports hypercalls in place of native functions.
OS Virtualization
In OS virtualization, there is no virtual machine; the virtualization is done completely within a single OS. The guest systems share common features and drivers of the underlying OS, while looking and feeling like completely separate computers. Each guest instance has its own file system, IP address, and server configuration, and can run completely different applications. (See Figure 11.)
Figure 11. OS-virtualization environment
Pros: Fast, lightweight, and efficient, with the ability to support a large number of virtual instances.
Cons: Isolation of instances and security concerns around data are significant issues. All virtual instances must support the same OS.
Application Virtualization
Application virtualization, as with any other type of virtualization, requires a virtualization layer to be present. The virtualization layer intercepts all calls that are made by the virtualized application to the underlying file systems, redirecting them to a virtual location. The application is completely abstracted from the physical platform and interacts only with the virtualization layer. This allows applications that are incompatible with each other to run side by side: Microsoft Internet Information Services 4.0, 5.0, and 6.0 could all run side by side, for example. It also improves the portability of applications by allowing them to run seamlessly on an OS for which they were not designed. (See Figure 12.)
Figure 12. Application-virtualization environment
Pros: Improves the portability of applications, allowing them to run in different operating environments. Allows incompatible applications to run side by side. Allows accelerated application deployment through on-demand application streaming.
Cons: The overhead of the virtualization layer can lead to much slower execution of applications than when they run natively. Not all software can be virtualized, so application virtualization is not a complete solution.
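The interception idea behind application virtualization can be sketched as a path-redirection rule. The scheme below is purely illustrative and does not model any specific product's behavior:

```python
# A sketch of file-system call interception: every path an application
# requests is redirected into that application's own virtual root, so
# incompatible applications never touch the same real files.

def redirect(app_name: str, requested_path: str,
             virtual_root: str = "/virtual") -> str:
    """Map an application's file request into its own virtual location."""
    return f"{virtual_root}/{app_name}{requested_path}"

# Two versions of the same application ask for the "same" config file,
# but the virtualization layer keeps their copies apart.
print(redirect("iis4", "/etc/app.conf"))  # /virtual/iis4/etc/app.conf
print(redirect("iis6", "/etc/app.conf"))  # /virtual/iis6/etc/app.conf
```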