Monday, November 30, 2009

Model-Driven SOA with “Oslo”

Summary: Service-oriented architecture (SOA) must evolve toward a more agile design, development, and deployment process. Most of all, it must close the gap between IT and business. This article presents a map between modeling theory (model-driven development and model-driven SOA) and possible implementations with the next wave of Microsoft modeling technology, codename “Oslo.”

Introduction

The following article presents a map between modeling theory (model-driven development and model-driven SOA) and possible implementations with the next wave of Microsoft modeling technology, codename “Oslo.”

Microsoft has been disclosing much information about “Oslo” while delivering several Community Technology Previews (CTPs), or early betas. However, most of the information that is available on “Oslo” is very technology-focused (internal “M”-language implementation, SDK, and so on). This is why I want to present a higher-level approach. Basically, I want to discuss Why modeling, instead of only How.


Problem: Increase in SOA Complexity

What is the most important problem in IT? Is it languages, tools, programmers? Well, according to researchers and business users, it is software complexity. And this has been the main problem since computers were born. Application development is really a costly process, and software requirements are only increasing. Integration, availability, reliability, scalability, security, and integrity/compliancy are becoming more complicated issues, even as they become more critical. Most solutions today require the use of a collection of technologies, not only one. At the same time, the cost to maintain existing software is rising.

In many aspects, enterprise applications have evolved into something that is too complex to be really effective and agile.

With regard to service-oriented architecture (SOA), when an organization has many connected services, the logical network can become extremely difficult to manage—something that is similar to what is shown in Figure 1, in which each circle would be a service and each box a whole application.



Figure 1. Point-to-point services network



The problem with this services network is that all of these services are directly connected; we have too many point-to-point connections between services. Using point-to-point services connections is fine, when we have only a few services; however, when the SOA complexity of our organization evolves and grows (having too many services), this approach is unmanageable.

Sure, you will say that we can improve the previous model by using an enterprise-service-bus (ESB) approach and service-orchestration platforms, as I show in Figure 2. Even then, however, the complexity is very high: Implementation is based on a low level (programming in languages such as the Microsoft .NET languages and Java); therefore, its maintenance and evolution costs are quite high, although not as high as having non-SOA applications.

Figure 2. SOA architecture, based on ESB as central point



SOA foundations are right; however, we must improve the work when we design, build, and maintain.


Does SOA Really Solve Complexity, and Is It Agile?

On the other hand, SOA has been the “promised land” and a main IT objective for a long time. There are many examples in SOA in which the theory is perfect, but its implementation is not. The reality is that there are too many obstacles and problems in the implementation of SOA. Interoperability among different platforms, as well as real autonomous services being consumed from unknown applications, really are headaches and slow processes.

SOA promised a functionality separation of a higher level than object-oriented programming and, especially, a much better decoupled components/services architecture that would decrease external complexity. Additionally, separation of services implementation from services orchestration results in subsystems being much more interchangeable during orchestration. The SOA theory seems correct.

But, what is the reality? In the experiences of most organizations, SOA in its pure essence has not properly worked out. Don’t get me wrong; services orientation has been very beneficial, in most situations—for instance, when using services to connect presentation layers to business layers, or even when connecting different services and applications.

However, when we talk about the spirit of SOA (such as the four tenets), the ultimate goals are the following:

In SOA, we can have many autonomous services independently evolving—without knowing who is going to use my services or how my services are going to be consumed—and those services should even be naturally connected.

I think that this is a different story. This theory has proven to be very difficult; SOA is not as agile or oriented toward business experts as organizations would like.


Is SOA Dead?

A few months ago, a question arose in many architectural forums. It probably was started by Anne Thomas Manes (Burton Group) in a blog post called “SOA Is Dead; Long Live Services.”

So, is SOA dead? I truly don’t think so. SOA foundations are completely necessary, and we have moved forward in aspects such as decoupling and interoperability (when compared with “separate worlds” such as CORBA and COM). So, don’t step back; I am convinced that service orientation is very beneficial to the industry.

The question and thoughts from Anne were really very interesting, and she really was shaking the architecture and IT communities. She did not really mean that service orientation is out of order; basically, what she said is the following:

“SOA was supposed to reduce costs and increase agility on a massive scale. Except in rare situations, SOA has failed to deliver its promised benefits... .

“Although the word ‘SOA’ is dead, the requirement for service-oriented architecture is stronger than ever.

“But perhaps that’s the challenge: The acronym got in the way. People forgot what SOA stands for. They were too wrapped up in silly technology debates (e.g., ‘What’s the best ESB?’ or ‘WS-* vs. REST’), and they missed the important stuff: architecture and services.”

So, I don’t believe that SOA is dead at all, nor do I think that Anne meant that. She was speaking out against the misused SOA word. Note that she even said that “the requirement for service-oriented architecture is stronger than ever.” The important points are the architecture and services. Overall, however, I think that we need something more. SOA must become much more agile.

Furthermore, at a business level, companies every day are requiring a shorter time to market (TTM). In theory, SOA was going to be the solution to that problem, as it promised flexibility and quick changes. But the reality is a bit sad—probably not because the SOA theory is wrong, but because the SOA implementation is far from being very agile. We still have a long way to go.

As a result of these SOA pain points, business people can often feel very confused.

As Gartner says, SOA pillars are right, but organizations are not measuring the time to achieve a return on investment (ROI). One of the reasons is that the business side really does not understand SOA.

To sum up, we need much more agility and an easier way to model our services and consumer applications. However, we will not achieve this until business experts have the capacity to verify directly and even model their business processes, services orchestration, or even each service. Nowadays, this is completely utopian. But who knows what will start happening in the near future?


Model-Driven SOA: Is That the Solution?

Ultimately, organizations must close the gap between IT and business by improving the communication and collaboration between them. The question is, “Who has to come up?” Probably, both. IT has to come closer to business and be much more accessible and friendly. At the same time, business experts have to reach a higher level to be able to leverage their knowledge more directly. They have to manipulate technology, but in a new way, because they still have to focus on the business; they do not have to learn how to program a service or application by using a low-level language such as C#, VB, or Java.

Someday, model-driven SOA might solve this problem. I am not talking only about services orchestration (by using platforms such as Microsoft BizTalk Server); I mean something else—a one-step-forward level—in which orchestration has to be managed not by technical people, but by business experts who have only a pinch of technical talent. This is something far from the context of today, in which we need technical people for most application changes. I mean a new point of view—one that is closer to the business side.

Model-driven SOA will have many advantages, although we will have to face many challenges. But the goal is to solve most typical SOA problems, because model-driven development (MDD) makes the following claim: “The model is the code.” (See Figure 3.)

Figure 3. MDD: “The model is the code.”



The advantages and claims of model-driven SOA are the following:

1. The model is the code. Neither compilation nor translation should be necessary. This is a great advantage over code generators and finally drives us to maintain/update the code directly.
2. Solutions that model-driven SOA creates will be compliant with SOA principles, because we are still relying on SOA. What we are changing is only the way in which we will create and orchestrate services.
3. Models have to be defined by business languages. At that very moment, we will have achieved our goal: to close the gap between IT and business, between CIO and CEO.
4. We will consequently get our desired flexibility and agility (promised by SOA, but rarely achieved nowadays) when most core business processes are transformed to the new model. When we use model-driven SOA, changes in business processes have to be made in the model; then, the software automatically changes its behavior. This is the required alignment between IT and business. This is the new promise, the new challenge.


One Objective with “Oslo”: Is “Model-Driven SOA” Possible?

Microsoft is currently building the foundations to achieve MDD through a base technology, codename “Oslo.” Microsoft has been disclosing what “Oslo” is since its very early stages, so as to get customer and partner feedback and create what organizations need.

Of course, there are many problems to solve. The most difficult is for “Oslo” to achieve the modeling of specific domains—models that business experts can review and even execute—while, at the same time, producing interconnected executables from the different models.


Basic Concepts of “Oslo”

“Oslo” is the code name for the forthcoming Microsoft modeling platform. Modeling is used across a wide range of domains; it allows more people to participate in application design and allows developers to write applications at a much higher level of abstraction.

Figure 4. Architecture of “Oslo” and related runtimes



So, what is “Oslo”? It consists of three main pillars:

* A language (or set of languages) that helps people create and use textual, domain-specific languages (DSLs) and data models. I am talking about the “M” language and its sublanguages (MSchema, MGraph, MGrammar, and so on). By using these high-level languages, we can create our own textual DSLs (a small, illustrative sketch follows this list).
* A tool that helps people define and interact with models in a rich and visual manner. The tool is called “Quadrant.” By using “Quadrant,” we will be able to work with any kind of visual DSL. This visual tool, by the way, is based on the “M” language as its foundation technology.
* A relational repository that makes models available to both tools and platform components. This is called simply the “Oslo” repository.
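To give a flavor of the textual approach, here is a rough sketch of what a small “M” model looked like in the early CTPs. It is illustrative only: the syntax changed between CTPs, and the module, type, and extent names are invented for this example.

module CustomerData
{
    // A simple entity type.
    type Customer
    {
        Id : Integer32;
        Name : Text;
        Email : Text;
    }

    // An extent (roughly, a table) that holds Customer instances.
    Customers : Customer*;
}

From a definition like this, the “Oslo” tool chain could generate the corresponding SQL schema for the repository, which is what makes the model itself the primary artifact.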

I am sure that Microsoft will be able to close the loop in anything that is related to those key parts (“M,” “Quadrant,” and the repository) for modeling metadata. In my opinion, however, the key part in this architecture is the runtime-integration with “Oslo,” such as integration with the next Microsoft application-server capabilities (codename “Dublin”), ASP.NET (Web development), Windows Communication Foundation (WCF), and Workflow Foundation (WF). This is the key for success in model-driven SOA (and MDD in general).

Will we get generated source code (C#/ VB.NET) or generated .NET MSIL assemblies? Will it be deployed through “Dublin”? Will it be so easy that business experts will be able to implement/change services? Those are the key points. And this is really a must, if “Oslo” wants to be successful in MDE/MDD.

In this regard, that is the vision of Bob Muglia (Senior Vice President, Microsoft Server & Tools Business), who promises that “Oslo” will be deeply integrated in the .NET platform (.NET runtimes):

“The benefits of modeling have always been clear; but, traditionally, only large enterprises have been able to take advantage of it, and [only] on a limited scale. We are making great strides in extending these benefits to a broader audience by focusing on [certain] areas. First, we are deeply integrating modeling into our core .NET platform. Second, on top of the platform, we build a very rich set of perspectives that help specific persons in the life cycle get involved.”


“Oslo” to Bring Great Benefits to the Arena of Development

In my opinion, it is good to target high goals. That is why we are trying to use MDD and MD-SOA to close the gap with business. However, even if we cannot reach our business-related goals, “Oslo” will bring many benefits to the development arena. In any case, we architects and developers will get a metadata-centric applications factory. To quote Don Box (Principal Architect in the “Oslo” product group):

“We’re building ‘Oslo’ to simplify the process of developing, deploying, and managing software. Our goal is to reduce the gap between the intention of the developer and the actual artifacts that get deployed and executed.

“Our goal is to make it possible to build real apps purely out of data. For some apps, we’ll succeed. For others, the goal is to make the transition to traditional code as natural as possible.”


MDD and “Oslo”

In the context of “Oslo,” MDD indicates a development process that revolves around building applications primarily through metadata. In fact, this follows the evolution that all development languages and platforms have taken: With each new version of a development platform, more of the application is expressed as metadata and less as hardcoded/compiled code (consider WF, WPF, XAML, and even HTML). With MDD and “Oslo,” however, we go several steps further, moving more of the definition of an application out of the world of code and into the world of data. As data, the application definition can be easily viewed and quickly edited in a variety of forms (even queried)—making all of the design and implementation details much more accessible. This is what the “Oslo” modeling technology is all about.

Figure 5. MDD with “Oslo”



On the other hand, models (especially in “Oslo”) are relevant to the entire application life cycle. The term model-driven implies a level of intent and longevity for the data in question—a level of conscious design that bridges the gaps between design, development, deployment, and versioning.

For instance, Douglas Purdy (Product Unit Manager for “Oslo”) says the following:

“For me, personally, ‘Oslo’ is the first step in my vision to make everyone a programmer (even if they don’t know it).” (See Doug’s blog.)

This is really a key point, if we are talking about MDD; and, from Doug’s statement, I can see that the “Oslo” team supports it. Almost everyone should be able to model applications or SOA—especially business experts, who have knowledge about their business processes. This will provide real agility to application development.


Model-Driven SOA with “Oslo”

Now, we get to the point: model-driven SOA with “Oslo.”

Model-driven SOA is simply a special case within MDD/MDE. Therefore, a main goal within “Oslo” must be the ability to model SOA at a very high level—even for business experts (see Figure 5).

The key point in Figure 5 is the implementation and deployment of the model (the highlighted square that is on the right). “Oslo” must achieve this transparently. With regard to model-driven SOA (and MDD in general), the success of “Oslo” depends on having good support from base-technology runtimes (such as “Dublin,” ASP.NET, WCF, and WF) and, therefore, support and commitment from every base-technology product group.

Possible Scenarios in MD-SOA with “Oslo”

Keep in mind that the following scenarios are only my thoughts with regard to how I think we could use “Oslo” in the future to model SOA applications. These are neither official Microsoft statements nor promises; it is just how I think it could be—even how I wish it should be.

Modeling an Online Finance Solution

In the future, one of the key functions of “Oslo” will be to automate and articulate the agility that SOA promised. So, imagine that you are the chief executive of a large financial company. Your major competitor has started offering a new set of solutions to customers. You want to compete against that; but, instead of taking several months to study, analyze, and develop from scratch a new application by using new business logic, you can manage it in a few weeks—even only several days. By using a hypothetical SOA flavor of “Oslo,” your product team collects the needed services (which are currently available in your company), such as pricing and promotions, to create a new finance solution/ product.

Your delivery team rapidly assembles the services; your business experts can even verify the business processes in the same modeling tool that was used to compose the solution; and, finally, you present the new competitive product to the public.

The key in this MD-SOA process is to have the right infrastructure to support your business model, which should not be based on “dark code” that is supported and understood only by the “geek guys” (that is, the traditional development team). The software that supports your business model should be based on models that are aligned with business capabilities that bring value to the organization. These models are based also on standards—based on reusable Web services that are really your software composite components. These services should be granular enough for you to be able to reuse them in different business applications/services.

Also, your organization needs tools (such as a hypothetical SOA flavor of “Oslo”) to automate business-service models, even workflows—aligning those with new process models and composing existing or new ones to support the business process quickly. As I said, the key point will be how well-integrated “Oslo” will be with all of the plumbing runtimes (ASP.NET, WCF, .NET, and so on).

Finally, by using those model-driven tools, we could even deploy the whole solution to a future scalable infrastructure that is based on lower-level technologies such as “Dublin” (the next version of Microsoft application-server capabilities).

What Will “Oslo” Reach in the Arena of Model-Driven SOA?

The vision for model-driven SOA is that the software that supports your business model must be based on models and those models are aligned with business capabilities that bring value to the organization. Underneath, the models also must be aligned with SOA standards and interoperable, reusable Web services. Overall, however, business users must be able to play directly with those SOA models.

The question is not, “Will we be using Oslo?”, because I am sure that we architects and IT people will use it in many flavors, such as modeling data and metadata—perhaps, embedding “Oslo” in many other developer applications such as modeling UML and layer diagrams in the next version of Microsoft Visual Studio Team System (although not the 2010 version, which is still based on the DSL toolkit); modeling workflows; and modeling any custom DSL—all based on this mainstream modeling technology. However, when we talk about model-driven SOA, we are not talking about that. The key question is, “Will business-expert users be using ‘Oslo’ to model SOA or any application?”

If Microsoft is able to achieve this vision and its goals—taking into account the required huge support and integration among all of the technical runtimes and “Oslo” (the “Oslo” product team really has to get support from many different Microsoft product teams)—it could really be the start of a revolution in the applications-development field. It will be like changing from assembler and native code bits to high-level programming languages—even better, because, as Doug says, every person (business user) will be, in a certain way, a programmer, because business-expert users will be creating applications. Of course, this has to be a slow evolution toward the business, but “Oslo” could be the start.


Conclusion

SOA has to evolve toward a more agile design, development, and deployment process. Most of all, however, SOA must close the gap between IT and business.

Model-driven SOA can be the solution to these problems. So, with regard to MDD implementations, “Oslo” is Microsoft’s best bet to reach future paradigms for both MDD and model-driven SOA (a subset of MDD).

Source: The Architecture Journal

Protecting Oracle Databases

INTRODUCTION
One of the more recent evolutions in network security has been the movement away from protecting the perimeter of the network to protecting data at the source. This is evident in the emergence of the personal firewall. The reason behind this change has been that perimeter security no longer works in today's environment. Today more than just employees need access to data. It's imperative that partners and customers have access to this data as well. This means that your database cannot simply be hidden behind a firewall.
Of course, if you are going to open up your database to the world, it's critical that you properly secure it from the threats and vulnerabilities of the outside world. Securing your database involves not only establishing a strong password policy, but also adequate access controls. In this paper, we will cover various ways databases are attacked and how to prevent them from being “hacked.”

CURRENT ORACLE SECURITY ENVIRONMENT
It is very easy in the security community to create an air of fear, uncertainty, and doubt (FUD). As Oracle professionals, it's important to see through the FUD, determine the actual risks, and investigate what can be done about the situation. The truth is that most Oracle databases are configured in a way that can be broken into relatively easily. This is not to say that Oracle cannot be properly secured – only that the information to properly lock down these databases has not been made available, and that the proper lockdown procedures have not been taken.
On the other hand, the number of Oracle databases compromised so far has not been nearly on the scale that we have seen web servers being attacked and compromised. The reasons for this are several.
• There are fewer Oracle databases than web servers.
• The knowledge of Oracle security is limited.
• Getting a version of Oracle was difficult.
• Oracle was traditionally behind a firewall.
These factors have changed significantly over the past year.
First, there is an increasing interest in databases within the Black Hat hacker community. The number of talks on database security has grown significantly over the past two years at the infamous Defcon and Black Hat conferences in Las Vegas. The number of exploits reported on security news groups such as www.SecurityFocus.com has increased tenfold over the last year.
Downloading Oracle's software has also become much simpler. The latest version is available for download from the Oracle web site for anyone with a fast enough Internet connection, and the installation process has become increasingly simple.
The point is not that the world is going to end. However, we do need to start taking database security seriously. Start by taking a proactive approach: understand the risks and secure your databases.

WHY SHOULD I CARE ABOUT ORACLE SECURITY?
The most common point of network attack is the web server and other devices connected directly to the Internet. Usually, these systems do not store a company's most valuable assets. The biggest issue from a defaced web site is usually the publicity and the loss of customers' trust in the company.
A hacked database is an entirely different story. Databases store a company's most valuable assets – credit card information, medical records, payroll information, and trade secrets. If your database is compromised, it could have serious repercussions on the viability of your business.
Security is also about the weakest link. Your network is only as secure as the weakest computer on the network. If you have a secure network with an insecure database, the operating system or other devices on the network can be attacked or compromised through the database. Databases should not provide a point of weakness.
Also, Oracle databases have become the backbone of most web applications. They are becoming more and more Internet-enabled, meaning they are opened up to the world of bad guys, not just your employees. This is especially the case with Oracle9i Application Server, which is being pushed heavily by Oracle.

UNDERSTANDING VULNERABILITIES
In order to understand vulnerabilities, we should start by listing and describing the various classes of vulnerabilities.
• Vendor bugs
• Poor architecture
• Misconfigurations
• Incorrect usage

VENDOR BUGS
Vendor bugs are buffer overflows and other programming errors that result in malformed commands doing things they should not have been allowed to do. Downloading and applying patches usually fixes vendor bugs. To ensure that you are not vulnerable to one of these problems, you must stay aware of the patches and install them immediately when they are released.

POOR ARCHITECTURE
Poor architecture is the result of not properly factoring security into the design of how an application works. These are typically the hardest to fix because they require a major rework by the vendor. An example of poor architecture would be when a vendor uses a weak form of encryption.

MISCONFIGURATIONS
Misconfigurations are caused by not properly locking down Oracle. Many of the configuration options of Oracle can be set in a way that compromises security. Some of these parameters are set insecurely by default. Most are not a problem unless you unsuspectingly change the configuration. An example of this in Oracle is the REMOTE_OS_AUTHENT parameter. By setting REMOTE_OS_AUTHENT to TRUE, you are allowing unauthenticated users to connect to your database.
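As an illustration, checking and correcting this particular parameter from SQL*Plus might look like the following sketch. It assumes an Oracle9i instance that uses an spfile; on Oracle8i, you would instead edit the REMOTE_OS_AUTHENT line in init.ora. Because the parameter is static, the change takes effect only after the instance is restarted:

SQL> SHOW PARAMETER remote_os_authent
SQL> ALTER SYSTEM SET remote_os_authent = FALSE SCOPE = SPFILE;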

INCORRECT USAGE
Incorrect usage refers to building programs using developer tools in ways that can be used to break into a system. Later in this paper, we are going to cover one example of this – SQL Injection.

LISTENER SERVICE

A good place to start delving into Oracle security is the Listener service, a single component in the Oracle subsystem. The listener service is a proxy that sets up the connection between the client and the database. The client directs a connection to the listener, which in turn hands the connection off to the database.
One of the security concerns of the listener is that it uses a separate authentication system and is controlled and administered outside of the database. The listener runs in a separate process under the context of a privileged account such as 'oracle'. The listener accepts commands and performs other tasks besides handing connections to the database.

LISTENER SECURITY IS NOT DATABASE SECURITY
Why is the separation of listener and database security a potential problem? There are a few reasons.
First is that many DBAs do not realize that a password must be set on the listener service. The listener service can be remotely administered just as it can be administered locally. This is not a feature that is clearly documented and is not well known by most database administrators.
Secondly, setting the password on the listener service is not straightforward. Several of the Oracle8i versions of the listener controller contain a bug that causes the listener controller to crash when attempting to set a password. You can manually set the password in the listener.ora configuration file, but most people don't know how to, or have no idea that they should. The password itself is stored either in clear text or as a password hash in the listener.ora file. If it's hashed, the password cannot be set manually in the listener.ora file. If it is in clear text, anyone with access to read the $ORACLE_HOME/network/admin directory will be able to read the password.

KNOWN LISTENER PROBLEMS
So what are the known problems with the listener service? To investigate these problems, let's pull up the listener controller and run the help command. This gives us a list of the commands we have at our disposal.
To start the listener controller from UNIX, enter the following command at a UNIX shell.
$ORACLE_HOME/bin/lsnrctl
To list the commands that are available, run the help command at the listener controller prompt.

Note the command 'set password'. This command is used to log us onto a listener. There are a couple of problems with this password. Namely, there is no lockout functionality for this password, the auditing of these commands is separate from the standard Oracle audit data, and the password does not expire (basically, there are no password-management features for the listener password). This means that writing a simple script to brute-force this password, even if it is set strongly, is not very difficult.
Another problem is that the connection process to the listener is not based on a challenge-response protocol. Basically, whatever you send across the wire is in clear text. Of course, if you look at the traffic, you might notice that a password hash is sent across the wire, but this password hash is actually a password equivalent – knowledge of it is enough to log in.
So what can a hacker accomplish once they have the listener password? There is an option to log the data sent to the listener to an operating system file. Once you have the password, you can set which file the logging data is written to, such as .profile, .rhosts, or autoexec.bat. Below is a typical command sent to the listener service.
(CONNECT_DATA=(COMMAND=ping))
Instead, a hacker can send a packet containing a maliciously constructed payload, such as the following:
• "+ +" if the log file has been set to .rhosts
• "$ORACLE_HOME/bin/svrmgrl" followed by "CONNECT INTERNAL" and "ALTER USER SYS IDENTIFIED BY NEW_PASSWORD" if the log file has been set to .profile.
Oracle released a patch for this issue, which basically provides a configuration option you can set so that parameters cannot be changed dynamically. By setting the option, you disable a hacker's ability to change the log file. Of course, if you do not set this option, this problem is not fixed. By default, this option is not set, and it is the database administrator's responsibility to recognize and fix this problem.
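For reference, the relevant lockdown settings live in listener.ora. The sketch below shows the general idea for a listener named LISTENER; if I recall correctly, ADMIN_RESTRICTIONS_<listener_name> is the option that refuses dynamic SET commands (changes then require editing the file and running RELOAD), while PASSWORDS_<listener_name> holds the listener password. The password value here is obviously a placeholder, and you should verify both parameters against your Oracle version:

# Illustrative additions to listener.ora for a listener named LISTENER
PASSWORDS_LISTENER = (mysecretpassword)
ADMIN_RESTRICTIONS_LISTENER = ON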

TNS LEAKS DATA TO ATTACKER
Another problem with the listener service is that it leaks information. This problem was first made public by James Abendschan. A full description of the problem can be found at http://www.jammed.com/~jwa/hacks/security/tnscmd/tns-advisory.txt.
The format of a listener packet is something like the following:

TNS Header – Size of packet – Protocol Version – Length of Command – Actual Command

If you create a packet with an incorrect value in the 'size of packet' field, the listener will return to you any data in its command buffer, up to the size that you claimed. In other words, if the previous command submitted by another user was 100 characters long and the command you send is 10 characters long, the first 10 characters of the buffer will be overwritten by the listener, the command will not be correctly null-terminated, and the listener returns to you your command plus the last 90 characters of the previous command.
For example, a typical packet sent to the listener looks as follows:
.T.......6.,...............:................4.............(CONNECT_DATA=.)
In this case we are sending a 16-byte command – (CONNECT_DATA=.). One of the periods is actually the hex representation of the value 16, which indicates the command length. Instead we can change 16 to 32 and observe the results. Below is the response packet:
......."...(DESCRIPTION=(ERR=1153)(VSNNUM=135290880)(ERROR_STACK=(ERROR=(COD
E=1153)(EMFI=4)(ARGS='(CONNECT_DATA=.)ervices))CONNECT'))(ERROR=(CODE=3
03)(EMFI=1))))

This return packet says that Oracle does not understand our command, and the command that it does not understand is returned in the ARGS value. Notice that the ARGS value is as follows:
ARGS='(CONNECT_DATA=.)ervices))CONNECT'
The ARGS value has returned our command plus an additional 16 characters. At this point, it is not clear what those last 16 bytes are. So we then increase the lie and tell the listener that our command is 200 bytes long. Below is the return value we get from the listener.
........"..>.H.......@(DESCRIPTION=(ERR=1153)(VSNNUM=135290880)(ERROR_STACK=
(ERROR=(CODE=1153)(EMFI=4)(ARGS='(CONNECT_DATA=.)ervices))CONNECT_DATA=(SID=
orcl)(global_dbname=test.com)(CID=(PROGRAM=C:\Oracle\bin\sqlplus.exe)(HOST=a
newman)(USER=aaron))')) (ERROR=(CODE=303)(EMFI=1))))
Notice this time the ARGS parameter is a little longer.
(CONNECT_DATA=.)ervices))CONNECT_DATA=(SID=orcl)(global_dbname=test.com)(CID
=(PROGRAM=C:\Oracle\bin\sqlplus.exe)(HOST=anewman)(USER=aaron))
Now it is a bit clearer what is being returned – previous commands that were submitted by other users to the database. You can even notice that the HOST and USER of the other user are displayed in this buffer.
This information is useful to an attacker in several ways. It can be used to gather a list of database usernames: an attacker who continually retrieves the buffer will, over a matter of a few days, collect a list of all the users that have logged in during that time. More dangerous: if the database administrator logs in to the listener using the listener password, an attacker will be able to retrieve the listener password from the buffer.
This problem has been fixed in the latest patch sets (patch set 2 for Oracle version 8.1.7). It is also a good idea to deal with this problem by limiting which machines can connect to Oracle, using a firewall or another packet-filtering device.
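If a dedicated firewall is not available, Oracle Net itself offers a simple packet-filtering mechanism called valid node checking. As a rough sketch (these lines go in protocol.ora on Oracle8i and in the server's sqlnet.ora on Oracle9i and later; the addresses below are placeholders for your own application servers):

tcp.validnode_checking = yes
tcp.invited_nodes = (192.168.1.20, appserver01.example.com)

Alternatively, tcp.excluded_nodes can be used to blacklist specific hosts, although an invited-nodes whitelist is usually the safer choice.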

BUFFER OVERFLOW IN LISTENER
Using the same techniques from the previous vulnerability, we can send a large connection string to the listener. If the packet contains more than 1 kilobyte of data, the listener crashes. Using a connection string of 4 kilobytes results in a core dump. An example of what this packet would look like follows:
.T.......6.,...............:................4.............(CONNECT_DATA=
XXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/0x12/0x54/0x5/0x34/0x12/0x54/0
x5/0x34/0x12/0x54/0x5/0x34/0x12/0x54/0x5/0x34/0x12/0x54/0x5/0x34/0x12/0x54/0
x5/0x34/0x12/0x54/0x5/0x34)
In the example above we have clipped most of the Xs. The funny characters at the end of the command are opcodes. Opcodes are low-level machine commands used by the hacker to inject commands that will be run on the database. By overflowing the stack with all the Xs, an attacker can cause the execution of arbitrary code by manipulating the SEH (Structured Exception Handling) mechanism.

EXTERNAL PROCEDURE SERVICE
External procedures are operating-system functions that can be called from PL/SQL. Oracle provides this facility to allow PL/SQL code to load and call functions in DLLs (for Windows) or shared libraries (for UNIX). The functionality greatly enhances the capability of PL/SQL, allowing it to perform any function that the operating system can perform. With this flexibility comes an increase in risk. Because external procedures are so powerful, the ability to create and use them should be controlled tightly and restricted to administrators only.
External procedures are set up using a combination of libraries, packages, functions, and procedures.
Below is an example of creating an external procedure that maps the system() function in the DLL msvcrt.dll to a PL/SQL procedure named exec. The system() function runs operating-system commands as if at an operating-system console. The commands execute under the operating-system context that Oracle runs under:


-- Expose the C function system() in msvcrt.dll as the PL/SQL procedure test_function.exec()
CREATE LIBRARY test AS 'msvcrt.dll';

CREATE PACKAGE test_function IS
PROCEDURE exec(command IN CHAR);
END test_function;

CREATE PACKAGE BODY test_function IS
PROCEDURE exec(command IN CHAR)
IS EXTERNAL
NAME "system"
LIBRARY test
LANGUAGE C;
END test_function;
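Once a package like this is in place, running an operating-system command from SQL is a one-line call. The call below is purely illustrative (it assumes the package above compiled and that the caller has EXECUTE privilege on it):

BEGIN
  test_function.exec('dir > c:\temp\oracle_dirs.txt');
END;
/

This is exactly why the ability to create libraries and external procedures should be restricted to administrators.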

External procedures are configured by creating the appropriate entries in the listener.ora file, through which the commands are sent. Below is a sample listener.ora file from a default Oracle8i installation. By default, an EXTPROC service is created in Oracle8i:

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = S0023605)(PORT = 1521))
)
)
(DESCRIPTION =
(PROTOCOL_STACK =
(PRESENTATION = GIOP)
(SESSION = RAW)
)
(ADDRESS = (PROTOCOL = TCP)(HOST = S0023605)(PORT = 2481))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = E:\oracle\ora81)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = aaron)
(ORACLE_HOME = E:\oracle\ora81)
(SID_NAME = aaron)
)
)


Notice the sections that apply to external procedures: the IPC address with KEY = EXTPROC0 and the SID entry for PLSExtProc. To understand how this works, notice the entry "PROGRAM = extproc". This tells the listener which executable to run when a command is sent. Several command-line parameters are passed to this executable, including the DLL to load and the function to call within that DLL.
This listener.ora file creates a listener service that accepts commands sent to port 1521 or over the IPC protocol. It will accept commands sent to the SID "aaron" or to the EXTPROC0 key.
A feature of external procedures is that they can be called remotely. This feature is not officially supported, but it does work. What this means is that the database may reside on one physical server while the listener and EXTPROC service exist on a different physical server. While this is great for distributing computing power across servers, the fact is that there is no authentication between the database and the EXTPROC service. This means that any remote user can connect to an external procedure service and cause it to load arbitrary DLLs and call functions in them. This gives an unauthenticated user the ability to execute arbitrary commands on the server.

In Oracle9i, EXTPROC services are not configured by default. This improves security out of the box, but it does not address the issue if you actually need to use this feature. The correct way to use this feature securely is to set up a callout listener: basically, a second listener service that listens only on the IPC protocol. This prevents anonymous users from making TCP/IP connections to the listener and sending it commands. Below is an example of configuring a callout listener.

callout_listener =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = IPC)
(KEY = extproc_key)
)
)
sid_list_callout_listener =
(SID_LIST =
(SID_DESC =
(SID_NAME = extproc_agent)
(ORACLE_HOME = oraclehomedir)
(PROGRAM = extproc)
)
)
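For PL/SQL to reach this callout listener, the database also needs a matching alias in tnsnames.ora. The sketch below reuses the key and SID names from the example above; EXTPROC_CONNECTION_DATA is the conventional alias name, but check the exact requirements (including any default domain suffix) for your Oracle version:

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = extproc_key))
    (CONNECT_DATA = (SID = extproc_agent))
  )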

SQL INJECTION
Just because your database is behind a firewall does not mean that you do not need to worry about it being attacked. There are several other forms of attack that can be made through the firewall. The most common of these attacks today is SQL Injection. SQL Injection is not an attack directly on the database.
SQL Injection is caused by the way in which web applications are developed. Unfortunately since you are trying to protect the database, you need to be aware of these issues and understand how to detect and fix the problems.
SQL Injection works by attempting to modify the parameters passed to a web application in order to change the SQL statements that are passed to the database. For instance, you may want the web application to select from the orders table for a specific customer. If the hacker enters a single quote into the field on the web form and then enters another query into the field, it may be possible to cause the second query to execute.
The simplest way to verify whether you are vulnerable is to embed a single quote into each field on each form and verify the results. Some sites will return error results reporting a syntax error. Some sites will catch the error and not report anything. Of course, these sites are still vulnerable, but they are much harder to exploit if you do not get feedback from the error messages.
This attack works against any database, not just Oracle. How this attack works varies slightly from database to database, but the fundamental problem is the same for all databases.

SQL INJECTION SAMPLE1
So how does the exploit work? How would an attacker append a SQL statement to another SQL statement?

SQL Injection is based on a hacker attempting to modify a query, such as:
Select * from my_table where column_x = '1'
to:
Select * from my_table where column_x = '1' UNION select password from DBA_USERS where 'q'='q'
In the preceding example, we see a single query being converted into two queries. There are also ways to modify the WHERE criteria to update or delete rows that were not meant to be updated or deleted. With other databases, you can embed a second command into the query; Oracle does not allow you to do this. Instead, an attacker would need to figure out how to supplement the end of the query. Note the 'q'='q' at the end. This is used because we must handle the second single quote that the page's code adds onto the end of the query. This clause simply evaluates to TRUE.

Here is an example of Java servlet code that you might typically find in a web application. Here we have the case of a typical authentication mechanism used to log in to a web site. You must enter your password and your username. Using these two fields, we build a SQL statement that selects from the table where the username and password match the input. If a match is found, the user is authenticated.
If the result set in our code is empty, then an invalid username or password must have been provided and the login is denied. Of course, a better idea would be to use the authentication built into the web server, but this form of "home grown" authentication is very common.

package myservlets;
// ... (servlet plumbing omitted)
String sql = "SELECT * FROM WebUsers WHERE Username='" +
    request.getParameter("username") + "' AND Password='" +
    request.getParameter("password") + "'";
stmt = conn.prepareStatement(sql);
rs = stmt.executeQuery();
Exploiting the problem is much simpler if you can access the source of the web page. You should not be able to see the source code; however, there are many bugs in most of the common web servers that allow an attacker to view the source of scripts, and I'm sure there are still many that have not yet been discovered.
The problem with our servlet code is that we are concatenating our SQL statement together without escaping any single quotes. Escaping single quotes is a good first step, but it is recommended that you actually use parameterized SQL statements instead.

For the following web page, I set the username to:

Bob
I also set the password to:
Hardtoguesspassword

The SQL statement for these parameters resolves to:
SELECT * FROM WebUsers WHERE Username='Bob' AND Password='Hardtoguesspassword'
What if an attacker, instead of using a regular password, enters a few letters, uses a single quote to end the string literal, and then inserts another Boolean expression in the WHERE clause? This Boolean expression always evaluates to TRUE, which returns all of the rows in the table. For instance, suppose the attacker enters the password as:
Aa' OR 'A'='A

The SQL statement now becomes:
SELECT * FROM WebUsers WHERE Username='Bob' AND Password='Aa' OR 'A'='A'
As you can see, this query will always return all of the rows in the table, and the attacker will have convinced the web application that a correct username and password were passed in. The kicker is that, when the result set contains the entire set of users, the first entry in the list will typically be the Administrator of the system, so there is a good chance that the attacker will be authenticated with full administrative rights to the application.

SQL INJECTION SAMPLE2
Various twists on SQL Injections can also be performed. An attacker can select data other than the rows from the table being selected from by using a UNION. Here’s another example of how to pull data back from other tables that are not directly involved in the current query. The best way to exploit this issue is to find a screen that contains a dynamic list of items, such as a list of open orders or the results of a search.
The trick here for the attacker is to turn the single query into two queries and UNION them. This is somewhat difficult, because you must match up the number of columns and the column types. However, if the server returns the error messages, the task is relatively simple. The error returned will be something to the effect of:
Number of columns do not match
Or:
2nd column in UNION statement does not match the type of the first statement.
This time we will look at a sample Active Server Page that might typically be found in an application.

Dim sql
sql = "SELECT * FROM PRODUCT WHERE ProductName='" & product_name & "'"
Set rs = Conn.Execute(sql)
' return the rows to the browser
Once again, we'll say we have access to the source code. An attacker does not really need the source code, but it does make our lives easier for demonstration purposes. Once again we are not using parameterized queries, but instead are concatenating a string to build our SQL statement.

We try entering valid input by setting the product_name to:
DVD Player

The SQL Statement is now:
SELECT * FROM PRODUCT WHERE ProductName='DVD Player'
An attacker would instead want to get a copy of the password hashes from your databases. Once he has these hashes, he can start brute-forcing them. The hacker would set the product_name to:
test' UNION select username, password from dba_users where 'a' = 'a
The SQL Statement is now:
SELECT * FROM PRODUCT WHERE ProductName='test' UNION select username, password from dba_users where 'a'='a'
Instead of entering a single word, the attacker uses a single quote to end the string literal, then adds a UNION command and a second statement. Notice at the end that he must still handle the fact that the code will place another single quote at the end, so we end our second SQL query with:
'a'='a
This last clause evaluates to TRUE causing all rows to be returned from the dba_users table.

PREVENTING SQL INJECTION
Preventing SQL injection attacks is simple once you understand the problem. Really, there are two strategies you can use to prevent the attacks:
• Validate user input
• Use parameterized queries
Validating user input involves parsing each field to restrict the valid characters that are accepted. In most cases, fields should accept only alphanumeric characters.
Also, you can escape single quotes by doubling them into two single quotes, although this method is riskier, since it is much easier to miss parsing input somewhere.
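As a small illustration of these two ideas, the following Java helper (the class name and the exact character whitelist are invented for the example) accepts only letters, digits, and spaces, and also shows the riskier quote-doubling fallback:

import java.util.regex.Pattern;

public class InputValidator {
    // Whitelist: letters, digits, and spaces only, 1 to 50 characters.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9 ]{1,50}");

    public static boolean isSafe(String value) {
        return value != null && SAFE.matcher(value).matches();
    }

    // Riskier fallback: double any single quote so it cannot terminate a string literal.
    public static String escapeQuotes(String value) {
        return value == null ? null : value.replace("'", "''");
    }
}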
Using parameterized queries means using bind variables rather than concatenating SQL statements together as strings.
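Here is a sketch of the earlier login query rewritten with bind variables in JDBC. The WebUsers table comes from the example above; the class and method names are invented, and connection handling is simplified for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeLogin {
    public static boolean isValidUser(Connection conn, String username, String password)
            throws SQLException {
        // The SQL text contains only placeholders; the values are bound separately,
        // so a single quote in the input can no longer change the statement.
        String sql = "SELECT 1 FROM WebUsers WHERE Username = ? AND Password = ?";
        PreparedStatement stmt = conn.prepareStatement(sql);
        try {
            stmt.setString(1, username);
            stmt.setString(2, password);
            ResultSet rs = stmt.executeQuery();
            try {
                return rs.next();   // a row exists only for a matching user
            } finally {
                rs.close();
            }
        } finally {
            stmt.close();
        }
    }
}

On Oracle, bind variables have the added benefit of letting the server reuse the parsed statement, so the safer code is usually also the faster code.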
The biggest challenge will be reviewing and updating all of the old CGI scripts, ASP pages, and so on in your web applications to remove any instance of this vulnerability. It is also suggested that you set up programming guidelines for web programmers that emphasize using parameterized queries and not constructing SQL by concatenating strings with input values.

Tuesday, November 24, 2009

Design Considerations for S+S and Cloud Computing

Summary: The purpose of this article is to share our thoughts about the design patterns for a new generation of applications that are referred to as Software plus Services, cloud computing, or hybrid computing. The article provides a view into S+S architectural considerations and patterns as they affect common architectural domains such as enterprise, software, and infrastructure architecture.

Introduction

Many enterprises have IT infrastructures that grew organically to meet immediate requirements, instead of following a systematic master plan. Organically grown enterprise systems have a tendency to develop into large, monolithic structures that consist of many subsystems that are either tightly coupled or completely segregated (sometimes referred to as a “siloed” system). Typically, these systems have arcane and inconsistent interfaces. Their complexity and inefficiency slows down business innovation and can force IT managers to focus on operational and firefighting processes instead of on how information technology can support the core business. Furthermore, some enterprise IT systems have partially duplicated functions that lead to fragmented and inconsistent views of business information, which affects the ability of an enterprise to make sound financial decisions.

Software plus Services (S+S) is an extension of Software as a Service (SaaS) that offers organizations more options for outsourcing development, management, deployment, and operational aspects of the technologies that run their businesses. S+S works in conjunction with principles of service-oriented architecture (SOA). S+S helps an SOA-enabled enterprise increase its technology choices by providing multiple modes of sourcing, financing, and deploying application software and services. To make informed decisions and take full advantage of the potential benefits of adopting an S+S model, IT architects and decision makers should weigh the business drivers and technical requirements against the economic, regulatory, political, and financial forces that are at work from both inside and outside the company.

This article is based on practical experience that was gained by the Microsoft Worldwide Services consulting organization during the design and delivery of S+S and Cloud-based applications. It provides a view into S+S architectural considerations and patterns as they affect common architectural domains, such as enterprise, software, and infrastructure architecture.


SOA, S+S, and Cloud Computing

During the mid-2000s, SOA practices were introduced to help bring sanity to enterprises that were burdened with complex IT infrastructures. Since then, SOA has gone from being a hot industry buzzword to recent pronouncements that SOA is dead. Regardless, SOA instigated key paradigm shifts that remain relevant today.

At its technical core, the key impact of SOA is the set of SOA principles, patterns, and analysis processes that enable an enterprise to inventory and refactor its IT portfolio into modular and essential service capabilities for supporting day-to-day business operations. The key objectives of SOA are to align enterprise IT capabilities with business goals, and to enable enterprise IT to react with greater agility as business needs demand. Some key SOA principles that promote agile IT solutions include loose coupling, separation of concerns, standards-based technologies, and coarse-grain service design.

While SOA helps the enterprise identify key service capabilities and architect its business and IT alignment for agility, S+S provides the computing model for organizations to optimize their IT investments through cloud computing and solutions that are deployed in-house. S+S does not invalidate the need for SOA; instead, it empowers an SOA-enabled enterprise to optimize its technology choices by making available multiple modes of sourcing, financing, and deploying application software and services.

The SOA, S+S, and cloud-computing stack relationship is shown in Figure 1.



Figure 1. Optimizing IT with SOA, S+S, and cloud-computing stack


Because there is not one universally correct IT portfolio for every organization, what is best for an organization depends on its current set of business objectives and requirements. For this reason, the S+S computing model helps an enterprise optimize its IT portfolio by making specific technology choices that are based on decision filters such as cost, relevancy to the core mission, user experience and value for innovation, and business differentiation. S+S offers greater choices for the design of effective hybrid distributed architectures that combine the best features of on-premises software (for example, low latency and rich functionality) with the best features of cloud computing (for example, elastic scalability and outsourcing).

Cloud computing refers to a collection of service offerings. Currently, cloud computing includes vendor solutions for:

* Infrastructure as a Service (IaaS). IaaS usually refers to a computing environment in which dynamically scalable and virtualized computation and storage resources are offered as a service. This service frees service consumers from the need to invest in low-level hardware, such as servers and storage devices.
* Platform as a service (PaaS). PaaS provides operating system and application platform–level abstractions to service consumers. PaaS provides system resource–management functions to schedule processing time, allocate memory space, and ensure system and application integrity within a multitenant environment. PaaS application-development tools enable service consumers to build cloud applications that run on the hosted platform.
* Software as a service (SaaS). SaaS refers to business and consumer applications that are hosted by third-party service providers. Service consumers might use Web browsers or installed desktop applications to interact with the hosted applications. In some cases, SaaS providers also offer headless (that is, without a UI) Web services so that enterprises can integrate data and business processes with SaaS applications.
Cloud-computing solutions complement enterprise-managed infrastructures and offer various benefits to businesses, including the following:

+ The ability to allocate resources dynamically, such as additional computing and storage capacity, enables an enterprise to adjust IT expenditures flexibly according to business demands.
+ Transaction and subscription–based cloud platforms allow enterprises to develop innovative application solutions quickly for testing new business and operation models, without huge IT investments.
+ Outsourced solutions reduce ongoing IT costs and the responsibilities of managing and operating nondifferentiating IT assets (which are opportunity costs to the company).



Design Considerations

This section provides a summary of the business and technical challenges that a business should consider during the design or adoption of an S+S–based solution. Figure 2 illustrates the frame that is used to organize this document. The frame is organized around specific architectural perspectives and identifies the crosscutting concerns, to provide an end-to-end perspective on the types of scenarios, design considerations, and patterns to consider as part of an S+S strategy.




Figure 2. Architectural-perspectives framework



This information provides a basis for evaluating the end-to-end implications of adopting S+S strategies.



Enterprise Architecture

One of the most demanding aspects of the enterprise-architect role is to balance the constantly changing business needs with the ability of the IT organization to meet those needs consistently. S+S introduces new technology-deployment patterns that reduce operational expense by consolidating—and, in some cases outsourcing—IT platforms, applications, or application (business) services. In addition, S+S can enable organizations to integrate systems across the organization with less friction. Organizations can provide information services to existing business relationships, often by combining existing channels.

At the highest level, enterprise architects must establish a means of determining the core competencies of the organization and then establish a process for determining which applications support those core competencies, as well as which should perhaps remain in-house and which should not.

The following is a model that is used by several large organizations:

* Proprietary and mission-critical systems—Systems that are proprietary or mission-critical in nature or that provide competitive advantages are often considered too important to risk outsourcing to an off-premises service provider. As a result, these systems are usually designed, developed, operated, and managed by the existing IT department of an organization.
* Nonproprietary and mission-critical systems—Systems that are nonproprietary yet still mission-critical might be developed by another company, but might still be designed, operated, and managed by the existing IT department of an organization.
* Nonproprietary systems—Systems that are nonproprietary and deliver standardized functionality and interfaces are often good candidates for outsourcing to a cloud-service provider if appropriate service-level agreements (SLAs) can be established with the service providers. E-mail, calendaring, and content-management tools are examples of such systems.

This model provides a starting point to evaluate applications and systems, but organizations should take into account their individual organizational differences. For example, if an organization is unable to manage its core systems effectively because of cost or missing expertise, it might consider outsourcing them. Likewise, putting some mission-critical systems in the Cloud might offer additional capabilities at little cost that can offset the drawbacks that are introduced. An example might be allowing access to the system by trusted partners or company branches, without having to build an in-house dedicated infrastructure.

However, simply identifying opportunities for moving applications off-premises is insufficient. To leverage S+S opportunities, decision makers must have a clear understanding of the IT maturity of the organization. This understanding allows them to determine what changes in IT infrastructure and processes should be made to optimize the return on investment (ROI) or cost savings that can be gained through S+S adoption.

Figure 3 illustrates the ease of adoption for S+S at varying levels of IT maturity (maturity model based on “Enterprise Architecture as Strategy”[1]) and demonstrates that, without determining organizational maturity, the envisioned ROI might be estimated incorrectly.



Figure 3. S+S impact, depending on IT maturity



Software Architecture, Integration Design

Few enterprise applications exist in isolation. Most are connected to other applications and form complex systems that are interconnected through a variety of techniques, such as data integration, functional integration, and presentation integration.

In most cases, organizations use a variety of integration techniques, which results in tightly coupled systems that are difficult to separate and replace with off-premises capabilities. Typically, in such cases, the organization either establishes coarse-grained facades around subsets of functionality within its subsystems or adopts integration technologies that provide a bridge between legacy applications and services that could be hosted locally or off-premises.

When an organization integrates at the data layer and allows off-premises applications to use the same data as on-premises applications, it must consider a variety of factors, such as where the master data should reside. If the data is read-only or reference data, it might be possible to use push-or-pull replication techniques to keep the data synchronized. For business or transactional data, the organization must consider other techniques.
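
As a rough illustration of the read-only case, the following Python sketch performs a one-way pull of reference data from an on-premises master into a cloud-hosted copy. The in-memory master_store and cloud_store dictionaries are stand-ins for whatever data stores and access APIs the two environments actually expose.

```python
from datetime import datetime, timezone

# In-memory stand-ins for the on-premises master and the cloud-hosted copy.
master_store = {
    "US": {"name": "United States", "updated": datetime(2009, 1, 1, tzinfo=timezone.utc)},
    "DE": {"name": "Germany", "updated": datetime(2009, 6, 1, tzinfo=timezone.utc)},
}
cloud_store = {}

def sync_reference_data(last_synced):
    """One pull cycle: copy rows changed since last_synced to the cloud store.

    Appropriate for read-only reference data; transactional data would need
    conflict detection that this one-way pull does not attempt.
    """
    now = datetime.now(timezone.utc)
    for key, row in master_store.items():
        if row["updated"] > last_synced:
            cloud_store[key] = dict(row)   # overwrite is safe for reference data
    return now  # new watermark for the next cycle

# Example: run one cycle from the beginning of time, then report the copy.
watermark = sync_reference_data(datetime.min.replace(tzinfo=timezone.utc))
print(sorted(cloud_store))  # ['DE', 'US']
```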

Organizations that use functional SOA-based business services can consider migrating these services to the Cloud, which is discussed in greater detail in the next section. However, in some cases, business applications cannot easily be partitioned into service contract–driven clients and service-provider components. This might be the case when the system involves complex legacy processes and human-driven workflows. In these cases, it might be possible to move the workflow into the Cloud and support a hybrid mode of operation in which the workflow can span both online and offline scenarios.

Traditional atomic approaches to managing transactions might no longer be possible, which would require the organization to examine alternative models that can ensure data consistency. The information-design section of this article describes such processes in greater detail.

Applications that have been developed more recently might use a service-oriented approach to functional integration, which can simplify the adoption of S+S. Applications that use a common service directory might be able to update the location and binding requirements of destination services within the service directory, and the clients might be able to reconfigure themselves dynamically. This can work where a service with the same contract has been relocated off-premises. However, in many cases, the client must interact with a service that has a different contract. In this case, the problem could be mitigated by using service-virtualization techniques[2] by which an intermediary intercepts and transforms requests to meet the needs of the new destination service.
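
The sketch below illustrates these two ideas under simplified assumptions: a service_directory dictionary stands in for a real service registry, and virtualize_request shows an intermediary adapting a legacy request shape to a relocated service's newer contract. The endpoint, contract names, and field names are invented for illustration.

```python
# Minimal sketch of directory-based binding plus a virtualization intermediary.

service_directory = {
    # logical name -> current physical endpoint (could be on- or off-premises)
    "CustomerService": "https://cloud.example.com/customers/v2",
}

def resolve(logical_name):
    """Clients look up endpoints at call time instead of hard-coding them."""
    return service_directory[logical_name]

def virtualize_request(v1_request):
    """Intermediary: adapt a legacy v1 request to the new v2 contract."""
    return {
        "customerId": v1_request["CustId"],          # renamed field
        "includeOrders": v1_request.get("Orders", False),
    }

# A relocation or contract change is absorbed by updating the directory and
# the transform, not by redeploying every client.
legacy_call = {"CustId": 42, "Orders": True}
endpoint = resolve("CustomerService")
payload = virtualize_request(legacy_call)
print(endpoint, payload)
```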

As organizations move applications into the Cloud and become more dependent on services that are provided by multiple service providers, existing centralized message-bus technologies might also be insufficient and, thus, require an organization to consider Internet service bus[3] technologies.

Software Architecture, Application Design

Over the last decade, we have seen many organizations move away from centralized, mainframe-based applications to distributed computing models that are based predominantly on service-oriented and Internet-oriented architectural styles. Applications that are designed according to the principles of service orientation provide a solid foundation for the adoption or integration of S+S applications.

However, we should not assume that this alone is sufficient. Organizations that develop client applications must design these applications to be more resilient when a remote service fails, because remote services are usually outside the control of the consuming organizations. Techniques such as caching of reference data and store-and-forward mechanisms can allow client applications to survive service-provider failures. Additionally, traditional atomic transactions might not be appropriate when interacting with remote services, which requires developers to consider alternative mechanisms, such as compensating transactions.
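
A minimal Python sketch of these resilience techniques follows, assuming hypothetical fetch_fn and send_fn callbacks for the remote calls: reads fall back to cached reference data when the provider is unreachable, and writes are accepted into a local queue (store-and-forward) and retried later.

```python
import time
from collections import deque

class ResilientServiceClient:
    """Sketch of a client that tolerates remote-service failures.

    Reads fall back to locally cached reference data; writes are queued
    (store-and-forward) and retried rather than failing the user operation.
    The send_fn parameter stands in for the real remote call.
    """

    def __init__(self, send_fn):
        self.send_fn = send_fn
        self.reference_cache = {}      # last known-good reference data
        self.outbox = deque()          # pending writes awaiting delivery

    def get_reference(self, key, fetch_fn):
        try:
            value = fetch_fn(key)
            self.reference_cache[key] = value
        except Exception:
            value = self.reference_cache.get(key)  # serve stale data on failure
        return value

    def submit(self, message):
        self.outbox.append(message)    # accept the work immediately
        self.flush()

    def flush(self, max_attempts=3, delay_seconds=1):
        while self.outbox:
            message = self.outbox[0]
            for attempt in range(max_attempts):
                try:
                    self.send_fn(message)
                    self.outbox.popleft()
                    break
                except Exception:
                    time.sleep(delay_seconds)
            else:
                return  # provider still down; keep the message for a later flush
```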

As a result of services being moved outside of organizational boundaries, the time to access a remote service might also increase. Application developers might need to consider alternative messaging strategies, including asynchronous-messaging techniques. Service providers will almost certainly want to consider using asynchronous-messaging strategies to increase the scalability of their systems.

Alternatively, enabling a client application to interact with alternate service providers, depending on their availability and response times, might require it to resolve the location of such services dynamically and even to modify the protocols that are used for interaction. For large organizations that have large numbers of client applications or services that depend on external services, the configuration information must be centralized to ensure consistent management.
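
The following sketch illustrates one way such failover might look, assuming a centrally managed PROVIDERS list and an invoke callback that wraps the protocol-specific call; the endpoints and latency threshold are placeholders.

```python
import time

# Centralized configuration of candidate providers for one logical service.
# In practice this would come from a shared configuration service, not a literal.
PROVIDERS = [
    {"name": "primary",   "endpoint": "https://svc-a.example.com/quote"},
    {"name": "secondary", "endpoint": "https://svc-b.example.com/quote"},
]

def call_with_failover(invoke, providers=PROVIDERS, max_latency_seconds=2.0):
    """Try providers in order; skip any that fail or respond too slowly.

    invoke(endpoint) stands in for the actual protocol-specific call.
    """
    errors = []
    for provider in providers:
        started = time.monotonic()
        try:
            result = invoke(provider["endpoint"])
            if time.monotonic() - started <= max_latency_seconds:
                return provider["name"], result
            errors.append((provider["name"], "too slow"))
        except Exception as exc:
            errors.append((provider["name"], str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```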

Change management requires more attention for S+S applications, because the applications often support multiple tenants. This is further complicated if the application is required to run at or very close to 100 percent availability, which provides little room for upgrades. Rolling updates or application upgrades that use update domains[4] require careful application design, in addition to demanding that the service providers include support for high-availability deployment patterns.

Changes in the design or implementation of a service might inadvertently affect consumers of the services, and client applications might need to be updated to remain functional. Additionally, the client might need to be modified in S+S scenarios in which services are provided by cloud-service providers; therefore, explicit versioning strategies are critical. In some cases, Web-service intermediaries might provide some ability to safeguard against such changes by allowing message transformations to occur transparently to clients. Service providers must have a good understanding of client-usage scenarios to ensure that changes in service contracts or behaviors do not result in unexpected changes in client behavior.

Testing of S+S applications also requires careful attention. Client applications must have the ability to simulate different environments—including development, user acceptance, and performance test environments—with the assurance that these environments are configured identically to the production environments.

Established principles, such as separation of concerns, can make it simpler for the security model of an application to change, accommodating new authentication and authorization mechanisms with minimal impact on the application.

Architects have traditionally focused on the design of the application and underlying data stores with the assumption that the organization is the only consumer of the application. When designing S+S applications, they can no longer make this assumption, and they must consider multitenancy and different approaches for scaling out database designs.

Software Architecture, Information Design

Information design is associated with the structure, management, storage, and distribution of data that is used by applications and services. Traditionally, enterprise applications have focused on data consistency, transactional reliability, and increased throughput. They have usually relied on relational data models and relational database-management systems that use the atomicity, consistency, isolation, and durability (ACID) principles as the measure of a reliable database design. S+S forces organizations to think about their information-design process very differently.

SOA adoption has led to the notion of data as a service, in which ubiquitous access to data can be offered independently of the platform that hosts the source data. This capability requires operational data stores that can verify and ensure the cleanliness and integrity of the data, while also considering the privacy and security implications of exposing the data.

Designing a service that will run in the Cloud requires a service provider to consider requirements that are related to multitenant applications. Multitenant applications require alternative schema designs that must be flexible, secure, and versioned. In some areas, there has also been a movement toward generalized nonrelational data models that provide greater flexibility for tenant-specific schema changes but leave management of data redundancy and possible inconsistencies to the application. Increasingly, systems are processing semistructured or unstructured data (such as documents and media) that are not well-suited to structured relational data models and, therefore, require generalized data models such as name-value stores and entity stores instead. Supporting structures such as queues, blobs for large amounts of unstructured data, and tables with limited relational semantics are also helpful.
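
The following Python sketch suggests what a generalized, tenant-aware name-value entity store might look like; the TenantEntityStore class and its in-memory dictionary are illustrative stand-ins, not a description of any particular cloud storage service.

```python
from collections import defaultdict

class TenantEntityStore:
    """Sketch of a schema-flexible entity store for multitenant data.

    Each entity is a bag of name-value pairs keyed by (tenant, kind, id), so one
    tenant can add custom properties without a relational schema change. The
    application, not the store, is responsible for consistency and redundancy.
    """

    def __init__(self):
        self._data = defaultdict(dict)   # (tenant_id, kind) -> {entity_id: properties}

    def put(self, tenant_id, kind, entity_id, properties):
        self._data[(tenant_id, kind)][entity_id] = dict(properties)

    def get(self, tenant_id, kind, entity_id):
        return self._data[(tenant_id, kind)].get(entity_id)

# Two tenants storing the same kind of entity with different shapes.
store = TenantEntityStore()
store.put("contoso",  "customer", "c1", {"name": "Ana", "segment": "retail"})
store.put("fabrikam", "customer", "c1", {"name": "Bob", "creditLimit": 5000, "vatId": "DE123"})
print(store.get("fabrikam", "customer", "c1"))
```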

Services and underlying data structures must be designed to support much greater volumes of transactions and/or they must manage much greater volumes of data than in the past. This makes changes to schema designs and data-partitioning strategies necessary. Partitioning strategies must support scaling out of the underlying databases, usually by functional segmentation or horizontal partitioning. Such strategies, however, might affect the ability to obtain optimal performance. This explains why some high-performance systems are moving away from ACID reliability[5] and toward Basically Available, Soft State, Eventually Consistent (BASE) consistency,[6] as well as toward decoupling logical partitioning from physical partitioning schemes.
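
As a simplified illustration of decoupling logical from physical partitioning, the sketch below hashes a key to one of a fixed number of logical partitions and then maps each partition to a physical shard through a separate, rebalanceable table; the partition count and database names are hypothetical.

```python
import hashlib

# Hypothetical shard map: logical partitions are decoupled from physical databases,
# so partitions can be rebalanced without changing the application's routing rule.
LOGICAL_PARTITIONS = 16
PARTITION_TO_SHARD = {p: f"customers-db-{p % 4}" for p in range(LOGICAL_PARTITIONS)}

def route(customer_id):
    """Map a customer to a logical partition, then to the physical shard hosting it."""
    digest = hashlib.sha1(customer_id.encode("utf-8")).hexdigest()
    partition = int(digest, 16) % LOGICAL_PARTITIONS
    return partition, PARTITION_TO_SHARD[partition]

print(route("customer-42"))   # prints the (logical partition, physical shard) pair
```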

Infrastructure Architecture

Typically, computing infrastructure represents a significant proportion of the enterprise IT investment. Before the era of cloud computing, an enterprise did not have many alternatives to spending significant amounts of capital to acquire desktops, server hardware, storage devices, and networking equipment to meet its infrastructure needs. Larger enterprises might have also invested in building and operating private data centers to host their hardware and service personnel.

Now, with IaaS, private cloud services, and various forms of virtualization technology, enterprises have alternatives and are able to reassess their IT-infrastructure investment strategy.

With IaaS, enterprises can choose to pay for computing resources by using transaction or subscription payment schemes. An infrastructure that scales dynamically enables an enterprise to adjust its infrastructure expenditure quickly, according to business needs. When the infrastructure budget evolves into an operating expense that can be increased or decreased as the demand for compute cycles and storage fluctuates, enterprises gain new financial flexibility. Such infrastructure services are useful for e-commerce sites that have spikes in computation needs during peak seasons or time of day, followed by dips in resource usage during other periods. IaaS can simplify the capacity-planning tasks for IT architects, because computing resources can now be added or retired without over- or under-investing in infrastructure hardware.

Additionally, IaaS enables enterprises to be more agile in launching and testing new online-business services. Traditionally, the business would need to weigh the business case of upfront investment in hardware for online-service experiments. Even more challenging were the struggles with internal IT to reconfigure corporate networks and firewalls to deploy the trial services. Internal policy–compliance issues have been a frequent cause of delay for new online-business ventures. Now, IaaS can help expedite the process and reduce infrastructure-related complications.

User desktops can be delivered as a service, and users can access them from any location that has network connectivity to the virtualization service. Server-virtualization technology helps reduce the server-hardware footprint in the enterprise. By using IaaS, an enterprise can derive immediate infrastructure cost savings by replicating virtual server instances to run on the cloud infrastructure as business needs require.

While cloud infrastructure–related services can bring many benefits that were not previously available to enterprises, the advantages do not come for free. IT architects must continue to weigh design considerations that concern availability, scalability, security, reliability, and manageability while they plan and implement a hybrid S+S infrastructure.

Infrastructure service disruptions will ultimately affect the availability of application services. Because a number of higher-level application services might be running on an outsourced cloud infrastructure, an outage at the service-provider infrastructure could affect more than one business function—resulting in the loss of business productivity or revenue in multiple areas. Therefore, enterprises should know whether their infrastructure-service providers can help mitigate such failures. Alternatively, an enterprise might want to use multiple infrastructure-service providers so that it can activate computing resources at the secondary provider in the event of service failure at the primary provider.

When desktop-virtualization technology is delivered through a centralized hosting infrastructure, it is important to consider the scalability and performance of the solution. Often, desktop-virtualization services are at peak load during office hours, when employees log on and perform office tasks. The load gradually tapers off after hours. Combining virtualization technology with a dynamically expandable computing infrastructure service can be a good approach when the computational demands of an organization fluctuate.

Infrastructure security has always been part of the defense-in-depth strategy for securing the enterprise. For example, some enterprises rely on IPSec for protecting machine-to-machine communications within the intranet. This mechanism can add another layer of defense to protect against unauthorized information access by non-corporate-owned computing devices. To continue using existing infrastructure-level security mechanisms with cloud-infrastructure services, an enterprise might need to reconfigure its network, public-key, and name-resolution infrastructure.

When the server infrastructure is deployed to run as virtualized instances, IT architects should think about organizing the virtual instances into atomic units, where a service failure is isolated within each unit and does not affect other atomic collections of virtualized services. This infrastructure-design practice enables higher-level application services to be deployed as atomic units that can be swapped in when the virtualized instances in another unit fail to operate.

When higher-level applications are deployed across a hybrid S+S infrastructure, it can be difficult to debug application failures that occur because of infrastructure malfunction. Traditional network-monitoring and tracing tools that are used within the enterprise might cease to work across the boundaries of corporate and service-provider firewalls. Therefore, an enterprise should ask its cloud-infrastructure providers to supply diagnostic tools that can help inspect cloud-infrastructure traffic.

Security

Security has been a key area of enterprise computing focus since the late 1990s, when businesses began using the Internet as a mainstream channel of commerce and customer service. In the current computing era of S+S, the security best practices and the technology developed to serve the business Web not only remain relevant, but are even more important to observe.

S+S security covers a broad spectrum of topics, ranging from the provisioning of identities and their entitlements, to enabling enterprise single sign-on between on-premises systems and cloud services, to protecting data in transit and at rest, to hardening application code deployed on cloud platforms against malware and penetration attacks.

User provisioning is a key task in the life-cycle management of user identities. When an enterprise adopts a cloud service, it must consider how its enterprise users are provisioned with the cloud-service providers. In addition, as a user’s organizational role changes, the identity management processes should ensure that the user’s application permissions are adjusted accordingly at the cloud service. When a user leaves the enterprise, access to the cloud service should also be deactivated. The user provisioning activities for S+S should be automated as much as possible to reduce manual provisioning errors and prevent loss of employee productivity that is due to service-access issues.
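
The sketch below outlines how such automation might be structured, with joiner, role-change, and leaver events driving the permissions held at the provider. The CloudServiceAdmin class and ROLE_PERMISSIONS mapping are hypothetical placeholders for the provider's real provisioning API and the enterprise's role model.

```python
# Role model assumed for illustration only.
ROLE_PERMISSIONS = {
    "sales":   {"crm": ["read", "write"]},
    "finance": {"crm": ["read"], "billing": ["read", "write"]},
}

class CloudServiceAdmin:
    """Stand-in for a cloud provider's provisioning interface."""

    def __init__(self):
        self.accounts = {}

    def handle_joiner(self, user_id, role):
        self.accounts[user_id] = {"active": True,
                                  "permissions": ROLE_PERMISSIONS.get(role, {})}

    def handle_role_change(self, user_id, new_role):
        # Permissions follow the role, so a transfer cannot retain stale rights.
        self.accounts[user_id]["permissions"] = ROLE_PERMISSIONS.get(new_role, {})

    def handle_leaver(self, user_id):
        # Deactivate rather than delete, preserving audit history.
        self.accounts[user_id]["active"] = False
        self.accounts[user_id]["permissions"] = {}

admin = CloudServiceAdmin()
admin.handle_joiner("jdoe", "sales")
admin.handle_role_change("jdoe", "finance")
admin.handle_leaver("jdoe")
print(admin.accounts["jdoe"])
```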

Enabling single sign-on (SSO) by using existing corporate identities is a key requirement and priority for many enterprises that adopt cloud services. The reasons are obvious. SSO provides convenience and better application experiences to end users and can reduce security issues that arise from having to manage multiple security credentials. Rationalizing and consolidating multiple identity systems within the enterprise is usually the first step in meeting the SSO challenge. New identity-federation technology can also improve the portability of existing user credentials and permissions and should definitely be a key part of the SSO strategy with cloud-service providers.

Data is critically important in every business. Therefore, an enterprise should set a high bar for ensuring that its business information continues to be secure in the S+S world. The key security issues that concern data are confidentiality and integrity when data is transmitted over the Internet and when information is stored at the cloud-service provider. Security mechanisms such as encryption and signing can help ensure that data is not being viewed or modified by unauthorized personnel.
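
As a minimal illustration of protecting a payload's confidentiality and integrity before it leaves the enterprise, the sketch below uses authenticated symmetric encryption from the third-party Python cryptography package; key management and rotation, which are the harder problems in practice, are deliberately out of scope here.

```python
# Requires the third-party 'cryptography' package (pip install cryptography).
# Fernet provides authenticated symmetric encryption, covering both
# confidentiality and integrity of the payload.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # in practice, managed by a key-management service
cipher = Fernet(key)

record = b'{"customerId": 42, "creditLimit": 5000}'
protected = cipher.encrypt(record)   # safe to transmit or store at the provider

try:
    original = cipher.decrypt(protected)
    assert original == record
except InvalidToken:
    # Raised if the payload was tampered with or encrypted under a different key.
    raise
```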

New security threats, exposures, and mitigation approaches must be factored into the security strategy for enterprise applications that are developed and hosted on Internet-accessible cloud-computing platforms. The potential security threats range from service disruptions caused by Internet attacks to the risk that proprietary business logic in application code and trade-secret content will be discovered and stolen. The practice of a secure-by-design security-review process becomes even more crucial during the delivery of applications to run on cloud-computing platforms.

Finally, IT decision makers and implementers should be mindful that any system is only as secure as its weakest link. Therefore, companies should always examine new risk exposures that arise from using cloud providers and take appropriate measures to mitigate those risks. If the weakest link is the outsourced provider, it can invalidate many security measures that the company has put in place.

Management

IT management deals with the end-to-end life-cycle management of software applications and services that are used by the enterprise to accomplish its business objectives. The life-cycle stages include planning, implementing, operating, and supporting an IT portfolio that consists of the hardware, network, infrastructure, software, and services that support day-to-day business operations.

Typically, IT management includes the following activities:

* Defining policies to guide project implementation and operation procedures
* Putting processes in place to systematize execution
* Identifying organizational roles with clearly defined accountabilities
* Implementing and maintaining the tools that automate IT-management operations

Best practices for the first three activities can be found in existing industry management frameworks such as the Information Technology Infrastructure Library (ITIL)[7] and the Microsoft Operations Framework (MOF),[8] while architecture principles and IT-management solutions are the key ingredients for automating IT operations.

S+S extends the enterprise IT environment beyond its corporate firewall—not only from a deployed technology perspective, but also from the perspectives of IT roles and accountabilities, operational procedures, and policies that govern the use and operation of deployed software and services.

For example, applications that are outsourced to an SaaS provider are now maintained by administrators and operators who are not employees of the enterprise. In the S+S world, traditional IT roles and accountabilities might need to be collapsed into a single service-provider role that is contractually responsible for the duties that are specified in an SLA. Legally enforceable liability clauses should also be clearly defined to mitigate any negative result that might occur because a service provider cannot perform its responsibilities satisfactorily. Similarly, IT-management processes for resolving user issues and technical problems are now handled by the service provider. Establishing clear escalation procedures and integrating effective communication channels into the end user–support process of the enterprise are vital for the minimization of service disruptions.

Although the enterprise no longer controls the implementation details of the outsourced services, the company should be aware of any of the mechanisms and procedures of the service provider that might affect the accountabilities and liabilities between the enterprise organization and its customers. Some service providers will provide documentation that complies with auditing standards such as SAS 70, which can help the enterprise determine if the IT-management practices at the service provider meet their business and industry requirements.

Enterprise organizations should plan to deploy IT-management solutions for monitoring services that run in the Cloud. The operation strategy should outline the performance indicators and management rules that are required to gain visibility into the performance and availability of the external services. Operational monitoring systems should raise management notifications and alerts, so that any service anomaly can be detected early.
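
A bare-bones monitoring loop along these lines might look like the following sketch, which polls hypothetical health endpoints and flags slow or failed responses; in practice the results would feed the enterprise's existing monitoring and alerting system rather than being printed.

```python
import time
import urllib.request

# Hypothetical health endpoints and threshold for outsourced services.
MONITORED_SERVICES = {
    "crm-online":  "https://crm.example.com/health",
    "mail-online": "https://mail.example.com/health",
}
MAX_RESPONSE_SECONDS = 2.0

def check(name, url):
    started = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=MAX_RESPONSE_SECONDS) as response:
            healthy = response.status == 200
    except Exception:
        healthy = False
    elapsed = time.monotonic() - started
    return {"service": name, "healthy": healthy, "seconds": round(elapsed, 2)}

def run_checks():
    for name, url in MONITORED_SERVICES.items():
        result = check(name, url)
        if not result["healthy"] or result["seconds"] > MAX_RESPONSE_SECONDS:
            print("ALERT:", result)      # raise a management alert or notification
        else:
            print("ok:", result)

run_checks()
```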

Additionally, both outsourced service providers and enterprises that are developing cloud services should implement operation-related service interfaces to automate management tasks such as provisioning user accounts, setting user permissions, changing service run state, and initiating data backups.

In summary, IT management in the S+S world must continue to embrace the end-to-end strategy of planning, delivering, and operating the IT capabilities that are needed to support business operations. Existing IT-management frameworks are still relevant. Enterprises, however, should consider the impact that arises as they integrate external operation processes, personnel, and tools into the existing IT-management practices.

With external service providers taking responsibility for systems, organizations lose much of the direct control that they used to have over the people, processes, and technology. Instead, the organizations must provide effective management through clearly defined and mutually agreed-upon SLAs, policies and procedures, key performance indicators, management rules, and service-control interfaces. Design for operation is the architecture mantra for delivering manageable software and services. Ultimately, the outsourcing of operational details to cloud-service providers should empower existing IT staff to focus on new, high-priority computing projects that deliver greater business value.

Operations

Operations are a specific stage of the IT-management life cycle. This stage involves the day-to-day activities of monitoring software and services, taking corrective actions when problems arise, managing customer helpdesks to resolve user issues, performing routine tasks such as backing up data, and controlling and maintaining consistent service run states to meet the required quality of service. Operational procedures are governed by IT policies, and the outcome is measured by precise system- and application-health metrics such as availability and response times.

For example, the MOF outlines a best-practices model for these activities.[8]

As enterprises adopt an S+S strategy, they must consider the business impact of outsourcing IT operational roles and responsibilities. Business continuity, liability, and employee and customer satisfaction are all key concerns that must be addressed by establishing clear SLAs with reliable cloud-service providers.

The enterprise should continue to play a proactive role in IT operations for its hybrid software-and-services environment. However, instead of focusing on execution details, enterprises should put monitoring systems in place that enable them to detect technical issues at the outsourced services. Enterprises should also establish operational procedures to ensure that problems are resolved by the service providers as quickly as possible.

Both enterprises and cloud-service providers can increase their S+S operational effectiveness by designing for operational best practices. “Designing for operation” requires architecture and execution discipline over the course of planning, delivering, and operating software and services. Architects should be aware of the transition points in their applications when stability, consistency, reliability, security, and other quality factors are affected, and should include instrumentation features within the applications to notify monitoring tools of such events. Architecture concerns and patterns such as application health state, performance counters, management events, logs, and synthetic transactions can help provide operation-ready software and services.
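
The sketch below shows what such instrumentation might look like in miniature: per-operation counters, a logged management event on failure, a health-state flag, and a synthetic transaction that exercises the full path. The service name, counter names, and downstream call are all invented for illustration.

```python
import logging
import time
from collections import Counter

logger = logging.getLogger("orders-service")

# Simple in-process counters; a real service would surface these through
# whatever performance-counter or metrics facility the platform provides.
counters = Counter()
health = {"state": "starting"}

def process_order(order, downstream_call):
    counters["orders.received"] += 1
    started = time.monotonic()
    try:
        result = downstream_call(order)
        counters["orders.succeeded"] += 1
        return result
    except Exception:
        counters["orders.failed"] += 1
        logger.exception("order processing failed")   # management event for operators
        raise
    finally:
        counters["orders.latency_ms_total"] += int((time.monotonic() - started) * 1000)

def synthetic_transaction():
    """Periodic dummy order that exercises the full path, so operators can detect
    degradation before real users do; it also refreshes the health state."""
    try:
        process_order({"synthetic": True}, downstream_call=lambda o: {"status": "ok"})
        health["state"] = "healthy"
    except Exception:
        health["state"] = "degraded"

synthetic_transaction()
print(health, dict(counters))
```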

During their evaluation of a cloud service, enterprises should determine if the service providers offer application-performance information and operation service interfaces that can be consumed by standard off-the-shelf IT monitoring solutions. Enterprises should also architect their systems so that the failure of a service is compartmentalized. Only the parts of the solution that are dependent on that service should be affected. This operational strategy helps maximize business continuity.
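
One common way to compartmentalize such failures is a circuit breaker, sketched below under simplified assumptions: after repeated failures the breaker opens, and subsequent calls go straight to a fallback until a reset interval has passed.

```python
import time

class CircuitBreaker:
    """Sketch of compartmentalizing a dependency: after repeated failures the
    circuit opens and calls fail fast to a fallback, so one broken external
    service does not drag down the rest of the solution."""

    def __init__(self, failure_threshold=3, reset_after_seconds=30):
        self.failure_threshold = failure_threshold
        self.reset_after_seconds = reset_after_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_seconds:
                return fallback()          # fail fast while the circuit is open
            self.opened_at = None          # trial period: let one call through
            self.failures = 0
        try:
            result = fn()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            return fallback()

# Usage: a failing dependency is replaced by a cached or degraded response.
breaker = CircuitBreaker()
print(breaker.call(lambda: 1 / 0, fallback=lambda: "cached response"))
```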



Conclusion

S+S brings new opportunities for everyone. It provides new options for optimizing business and IT assets, and enables organizations to save cost, increase productivity, innovate, and reach new markets.

There are three main ways to think about extending the current portfolios of on-premises technology with cloud computing: consume the Cloud, use the Cloud, and embrace the Cloud:

Consume the Cloud is fundamentally about an enterprise outsourcing applications and IT services to third-party cloud providers. The key business drivers that push enterprises to consume online services are reducing IT expenses and refocusing valuable bandwidth on enabling core business capabilities. Cloud providers can usually offer commodity services at lower cost and higher quality because of their economies of scale, and they can pass on the cost savings and efficiency to enterprise customers. For Microsoft customers, some good examples are the Microsoft Business Productivity Online Suite (consisting of Exchange Online and Office SharePoint Online), CRM Online, and Live Meeting services.
Use the Cloud enables enterprises to tap into cloud platforms and infrastructure services and get an unlimited amount of compute and storage capacity when they need it, without having to make large, upfront capital investments in hardware and infrastructure software. Such a utility computing model gives enterprises more agility in acquiring IT resources to meet dynamic business demands. In addition, by using cloud services, enterprises can avoid affecting the existing corporate infrastructure and speed up the deployment of Web-based applications to support new business initiatives that seek closer communication with customers and partners. For Microsoft customers, some good examples include Windows Azure and SQL Azure.
Embrace the Cloud occurs when enterprises deploy technology that enables them to offer cloud services to their customers and partners. Service-oriented enterprises are best positioned to take advantage of this model by transforming their core business assets into information cloud services that differentiate them from their competitors. For Microsoft customers, on-premises technologies such as the BizTalk Server Enterprise Service Bus Toolkit can integrate data feeds and orchestrate workflows that process information exchange via cloud services.


Source: The Architecture Journal.