Wednesday, December 30, 2009

In 2009, Web goes on a diet

Hello everybody,

I found this interesting article looking back at 2009. Happy 2010, everybody...

2009 was, in many ways, a good year for the Web and the technologies that help us access it. Companies big and small had to re-evaluate what was important: an ethos that translated into more focused product launches and notable improvements to existing software and services.

That refocus meant tech giants spent the early part of 2009 trimming the fat on services that were too costly to run, or simply underused. Google cut a number of its offerings, shelving microblogging service Jaiku, its social network Dodgeball, Google Video, catalog search, "shared stuff," and its notebook service. Yahoo followed suit, dropping the ax on its Briefcase online storage service and closing off access to its Jumpcut Web video editor and 360 blogging tool. Yahoo also pulled the plug on GeoCities--one of the Web's early relics. Other notable discontinuations included Microsoft killing off its online encyclopedia Encarta and HP getting rid of its Upline backup solution.

Services that were not shut down saw improvements. Google's Gmail finally left beta, and gained a feature that lets users access it offline. The company also launched Google Wave--a somewhat experimental real-time collaboration service. Microsoft's Windows Live Search was relaunched as a new product called Bing, which was received well both by the press and users. Bing, along with Yahoo and Google, also integrated real-time results from social networks like Facebook and Twitter.

deal signing
Credit: Yahoo/Microsoft
Microsoft CEO Steve Ballmer signs a Microhoo
pact alongside Yahoo CEO Carol Bartz, though
the deal was not as it was originally intended.

Speaking of Microsoft and Yahoo, Microhoo finally happened--though not as it was originally intended. In late July, Microsoft and Yahoo entered a 10-year search deal that gave Yahoo Microsoft's search engine technology, while Microsoft got Yahoo's ad sales force and partners. The result was quite different from 2008's $44.6 billion unsolicited bid that would have given Microsoft complete control of the company.

2009 also brought new location-based tools, some of which, by some accounts, are a little creepy. Microsoft's Bing got its own version of local maps, complete with a street-level view. And at the South by Southwest tech and music conference in Austin, Texas, Foursquare debuted. The service lets people show where they are to their friends, and vice versa. The month prior, Google launched a similar service called Latitude that would put a user's exact location on a map--right down to the city block. Google also expanded its Maps and Earth services, taking street view outside of the U.S., and Google Earth took users to the Earth's oceans, the moon, and Mars.

Along with search and location, 2009 was a banner year for social networks. Facebook in particular saw huge gains in registered users: it began the year with 150 million and is now well past 350 million. That's no small feat, given that recent projections put the much-hyped Twitter at somewhere close to 60 million users, up from less than 10 million at the beginning of the year. Twitter also gained celebrity traction, netting accounts from Oprah Winfrey and Ashton Kutcher. Kutcher went on to become the first Twitter user to hit 1 million followers, beating out news network CNN. He's since blown past 4 million.

Both Twitter and Facebook also continued to show that they are an integral part of the spread of information. Controversy over Iran's presidential election, and the government censorship that followed, made the social networks one of the few places Iranians could go to vent frustrations and pass along news that would have otherwise gone unseen. Twitter even skipped scheduled maintenance to stay up, at the request of the U.S. State Department. Facebook, in turn, rushed to provide support for Farsi so Iranian users could join.

Twitter was also the first place to go to see photos of US Airways Flight 1549, which had to make an emergency landing in New York's Hudson River. Nearby ferry riders snapped the first shots of the crash and uploaded them to photo host Twitpic, which ended up crashing because of the sudden and massive traffic spike.

Besides social networks, voice services and VoIP telephony were big in 2009. E-commerce giant eBay sold off its Skype services to an investor group that now runs it as its own product, with hopes of an IPO in 2010. Google redesigned its GrandCentral service as a product called Google Voice, which was opened up to users after a year and a half of dormancy. Google also snatched up Web-based VoIP service Gizmo5, which could end up being integrated into Google Voice. Other notable telephony launches include 3Jam, which does voice forwarding and transcription, and Ribbit's mobile service. Both of those companies, along with Google, are trying to get users to manage their calls and voice mails online, functionality that is likely to expand in 2010.

Even with a flashy relaunch, Google Voice had its own share of controversy. This year the service got into hot water with AT&T. It started when Apple pulled all the third-party Google Voice applications from its App Store, along with rejecting Google's submission of its own Google Voice application. This action caused the FCC to launch an inquiry to see why the apps were removed, as well as why Google's Voice application was not allowed onto Apple's store.

It turns out AT&T was not having any part of Google blocking phone calls to certain parts of the country that would have cost Google more money to connect. In late October Google bounced back, announcing that it had reduced the number of blocked numbers to fewer than 100. Despite this, 2009 closed out without any Google Voice apps (including Google's own) making it back onto the App Store.

Finally, 2009 saw a continuation of the browser wars. Mozilla iterated on the third version of its Firefox browser several times, while Microsoft, Apple, and Opera introduced brand new versions of Internet Explorer, Safari, and Opera, respectively. Google took the crown, though--its Chrome browser managed to jump two version numbers, going from version one to three, with version four currently in developer testing.

Chrome also jumped from being just a browser to a full-fledged operating system. In late November, Google publicly demoed Chrome OS, an instant-on, browser-based operating system designed for Netbooks. Users, however, won't be getting their hands on hardware that will run Chrome OS until mid- to late 2010.


GSM Encryption Cracked!

A German computer engineer said Monday, Dec. 28, that he had deciphered the code that encrypts most of the world's digital mobile phone calls, saying it was his attempt to expose weaknesses in the security of global wireless systems.

Karsten Nohl, a 28-year-old encryption expert, cracked the 21-year-old GSM algorithm, a code developed in 1988 that is still used to protect the privacy of 80 percent of mobile calls around the world. His goal, reports The New York Times, was to expose holes in the security of global wireless systems.

In a presentation given at the Chaos Communication Conference in Berlin, the researcher said that he had compiled 2 terabytes worth of data -- cracking tables that can be used as a kind of reverse phone-book to determine the encryption key used to secure a GSM (Global System for Mobile communications) telephone conversation or text message.
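The "reverse phone-book" idea behind such cracking tables can be sketched in a few lines. The following is an illustrative Python toy, not Nohl's actual attack: it uses a hash as a stand-in for the A5/1 keystream generator and a tiny 16-bit keyspace, whereas the real tables cover A5/1's 64-bit keys and rely on time-memory trade-offs to fit into 2 terabytes.

```python
import hashlib

def keystream(key: int) -> bytes:
    """Toy stand-in for the A5/1 keystream generator (illustrative only)."""
    return hashlib.sha256(key.to_bytes(8, "big")).digest()[:8]

# Precompute the lookup table over a toy 16-bit keyspace. The real attack
# compresses a 64-bit keyspace into ~2 TB of rainbow tables instead of
# storing every entry directly, but the lookup idea is the same.
TABLE = {keystream(k): k for k in range(2**16)}

def recover_key(observed: bytes):
    """Reverse lookup: observed keystream -> session key,
    like reading a phone book backwards."""
    return TABLE.get(observed)

# An eavesdropper who captures the keystream recovers the key instantly:
assert recover_key(keystream(12345)) == 12345
```

The point of the sketch is that once the tables exist, "decrypting" a call reduces to a table lookup; the expensive work is the one-time precomputation.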

While Nohl stopped short of releasing a GSM-cracking device (which would be illegal in many countries, including the U.S.), he said he took information that has been common knowledge in academic circles and made it "practically useable."

Intercepting mobile phone calls is illegal in many countries, including the U.S., but GSM-cracking tools are already available to law enforcement. Nohl believes that criminals are probably using them too. "We have just basically copied what you can already buy in a commercial product," he said.

There are about 3.5 billion GSM phones worldwide, making up about 80 percent of the mobile market, according to data from the GSM Alliance, a communications industry association representing operators and phone-makers.

" 2010 will face the hottest struggle for innovating a more secure encryption algorithm" said Yahia Megahed, Vice President of Symbyo Technologies USA - a Global leader in Mobile software application development - " GSM organizations are at risk now, as within few months if their developers don't reach to a more secure encryption algorithm they will be in a hard situation and the whole industry will face its biggest challenge" he said in a statement.

Monday, December 21, 2009

Egypt Signs up for 1st Arabic Domain Name on the Web

Minister of Communications and Information Technology Dr. Tarek Kamel and Minister of Higher Education and Scientific Research Dr. Hani Helal announced that Egypt had signed up to acquire the first Arabic domain name, suffixed ".misr". The announcement was made during a session on managing internet resources at the Internet Governance Forum (IGF) meeting, attended by Rod Beckstrom, CEO of the Internet Corporation for Assigned Names and Numbers (ICANN). At the 4th annual IGF meeting, held in the resort of Sharm El-Sheikh, ICANN opened the registration of domain names in several languages including Arabic, Korean and Chinese, in addition to Latin letters. ICANN declared the initiative in Egypt in recognition of the country's leading role in spreading the culture of internet usage nationwide.

Dr. Kamel stated that Egypt was the first Arab country to sign up for this system, which will boost traffic on Arabic websites as well as open new investment horizons. Domain names ending in “.misr” will then be available on search engines for internet users to find. The National Telecommunications Regulatory Authority (NTRA) will be the entity in charge of finalizing the procedures for Egypt's registration. Coordination is underway with the Egyptian Universities Network at the Supreme Council of Universities, which undertakes the registration of academic and governmental websites to be suffixed “.misr” in Arabic letters. The registration of companies, non-governmental organizations and other civil organizations will be carried out through specialized Egyptian companies. The sign-up process is expected to cover thousands, and eventually millions, of Arabic domain names in the coming years. Egypt has been actively involved in the process as a member of the Governmental Advisory Committee of ICANN, through the participation of Eng. Manal Ismail from the NTRA, and is also an active member of the Arabic Domain Names Task Force. Last October, at its annual meeting in Seoul, Korea, the ICANN board approved an executive plan to support international domain names in non-Latin scripts.
Egypt sees such a decision as a positive step toward multilingualism on the web, which supports the spread of the internet in different societies’ native languages. The announcement followed a decision by the US-based ICANN to end the exclusive use of Latin characters for website addresses, allowing Internet users to write an entire website address in any of the world's language scripts. IGF 2009 brings together over 1,500 representatives of government, non-governmental organizations, advocacy groups and the private sector to discuss the future of the Internet. Under the banner "Creating Opportunities for All", this year's forum will discuss increasing accessibility to the Network, the development of local content and the encouragement of cultural and language diversity, the promotion of safe use of the Internet, means of combating cybercrime and managing critical Internet resources.
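The machinery behind non-Latin addresses is the IDNA/Punycode mapping, which turns a Unicode label into an ASCII-compatible form that the existing DNS can carry. A small sketch using Python's built-in IDNA codec (the label here is the Arabic word for Egypt; whether a given registry accepts it is of course outside the code's control):

```python
# An Arabic domain label must be converted to an ASCII-compatible
# "Punycode" form before it can live in the DNS. Python's built-in
# IDNA codec (IDNA 2003) performs this mapping.
label = "مصر"  # "Egypt" in Arabic

ascii_form = label.encode("idna")
print(ascii_form)  # ASCII-compatible encoding, prefixed with "xn--"
assert ascii_form.startswith(b"xn--")

# The round trip recovers the original Unicode label:
assert ascii_form.decode("idna") == label
```

Browsers and resolvers apply this conversion transparently, which is why ICANN could enable non-Latin addresses without changing the DNS protocol itself.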

This step will also give a boost to the outsourcing business in Egypt. One example is Cartel Capital, a private equity firm that owns Symbyo Technologies, an IT outsourcing company based in Tampa, FL, with one of its offshore development centers in Egypt.
Yahia Megahed, one of the General Partners of Cartel Group, commented as follows: "With the high qualifications of Egyptian software developers, Egypt will succeed in attracting a bigger share of outsourcing from the Gulf and from other Arabic speakers worldwide who need to develop and build their own IT infrastructure without facing the burden of language. Instead of Egypt exporting highly qualified manpower to work in the Gulf as developers and in other IT-related roles, workers can work from their own motherland at a lower cost, saving on living allowances while getting the work done easily and efficiently." Many countries, especially the United States, outsource IT work offshore to countries such as Egypt, India and China to get high-quality software at competitive rates.

Tuesday, December 15, 2009

Microsoft's server chief talks cloud (Q&A)

Hello there,
I found that interesting article and would like to share it with our precious Symbyo blog readers:

It's been a busy year for Bob Muglia.

Microsoft's server and tools boss shipped an update to Windows Server, got promoted to division president, and prepared Microsoft's operating system in the clouds--Windows Azure--for its commercial launch.

Bob Muglia

(Credit: Microsoft)

In what has become a bit of a year-end ritual, Muglia sat down with CNET for a year-end interview. We hit on a range of topics, from the future of Windows Server, to why his bank won't be moving to Windows Azure any time soon, to the changing life of an IT manager, to Microsoft's consumer future. (Spoiler alert: Muglia thinks it is bright.)

Here's an edited transcript of our interview:

A few years out, how much does Windows Server, the server operating system, start to resemble Windows Azure?
Muglia: Well, making them as similar as possible is clearly the goal, and the goal is to take all the things that we do in Windows Server and make those capable to be done in Windows Azure, and then take the learning we have in Windows Azure and bring it back to Windows Server.

We just took the step of bringing the Windows Azure team, Amitabh (Srivastava) and his group, and putting that in my organization.

Now, what we also did as a part of that, is we merged the Windows Azure and Windows Server teams together. I just talked to Amitabh and he's really excited about the synergies that he can build across the organization and making these things as similar as possible.

In our own services, obviously we choose the hardware, and so there's a more limited set of things that work together. In some senses that gives us a bit of agility on the services side, because we can make something work in one very particular way, but what we've got a long term history of doing is understanding how to do that, and then abstracting that out to work in a much more general purpose way, to work with the hardware that our customers have.

One of the things we're looking at is how do we take the ideas that we're bringing to market in the form of Windows Azure service, and then build those into Windows Server and make them available to our on-premises customers and our hosters.

Do you expect there to be sort of an interim option? I wouldn't be surprised to see you guys do something in the intermediate term where you have the traditional Windows Server that runs on any hardware, you have Azure very customized for your data center, but then make a version of Windows Azure that they can write Azure apps for, but run in their own data centers.
Muglia: We're looking at those sorts of options. I think the trick to the thing is to understand what are the workloads that are most appropriate for doing that, and how would we structure that, and honestly we're still looking at that.

There are some interesting thoughts in that sense, and you can see how, for example, in a high-performance computing environment, where people could use hundreds or thousands of computers in one cluster, you know, Azure is really very, very helpful for something like that. But we're still looking at understanding exactly how we might bring some of those things to market.

"It's probably a fair thing to say that we understand the nerd. I think we understand more than the nerd, but it's certainly true that we do a good job with that audience."

I know you're the enterprise guy, but I thought I would give you the opportunity to come to the rescue of your consumer colleagues. One of the analysts pretty prominently said in The New York Times that it's kind of "game over" for Microsoft on the consumer side, particularly phones. I'm curious your thoughts on this.
Muglia: Well, I read your blog. You did, generally speaking, come to our defense. I mean, it's probably a fair thing to say that we understand the nerd. I think we understand more than the nerd, but it's certainly true that we do a good job with that audience, and that's actually my customer in a lot of senses, because I've got the developers and the IT pros.

You know, I feel like as a company we have a lot of focus on the consumer, and are doing a lot of great things that are quite revolutionary to consumers, and we're going to continue to do it. I mean, obviously if you look at what's happened even with Windows 7 and the success of Windows 7, most of the short term success has been in the consumer marketplace. The business marketplace is going to happen, but business moves slower than the consumer does.
Obviously we've had products like Xbox that have been very successful with consumer, and I think the new Zune work has been really fantastic, and obviously Bing has been really great.

I also think that people are going to be very pleasantly surprised to see the work we're doing in phones, and that will become visible next year. So, I think even in areas where there's been some concern, there is some really substantive, very, very innovative work coming down the line.

It seems a little weird to be saying this, but we're almost at 2010. Are there things in technology that you thought would have happened by now that haven't yet happened?
Muglia: Well, I certainly thought that the 787 would have flown by now. I hear it's supposed to fly Tuesday or Wednesday, though, so that's a good thing. I'm glad to hear that.

I think, if you had asked me in 2000, "Would we be further along on reading on devices than we are now?" I would have said we would be. And we're really starting to see that now with some very special purpose readers, but I would have thought it would have hit across more general purpose devices by now. So, that's probably been one thing.

You know, I think that that's related to how fast tablets or slates or whatever you might want to call them might have taken off. I might have thought that would have gone a little faster than it has. Those two are somewhat related with each other.

But those things always kind of come at different speeds, and I'm pretty confident both of them will become very important as we move forward.

If you could show the world as it is now to your 2000 self, what do you think would be most surprising?
Muglia: We've shifted to a world where all kinds of media are delivered digitally.

You think about what's happened with newspapers and magazines and things, I guess I wouldn't have predicted that shift would happen as dramatically. Not that it's delivered digitally, I think we would have expected that, but the kind of reporting and the real-time nature associated with it.

And I guess, similarly, some of the way social networks have developed, and the impact that they have had on the way people communicate. I find it so fascinating as an example to see the impact that Facebook has on what's happening in other countries like Iran, and getting information out, those sorts of things. You know, I'm not surprised by them, but I certainly wouldn't have predicted them back then.

So, probably your 2000 self might be surprised that in 2009 you're in touch with more people from your high school than you were in 2000?
Muglia: Right, exactly, things like that, exactly. That's a perfect way to say it.

Obviously, your focus is the IT world, and I'm curious, how different is the life of an IT manager? How much has changed in the last decade for what they do on a day-to-day basis?
Muglia: I actually think it's quite different for an IT manager. I think IT managers used to be expected to build the systems and do everything, and now I think they're much more focused on providing the infrastructure for the business teams and the people within business to do things.

One other thing that's missing at least from Microsoft at the end of this decade, as compared to the beginning is Bill Gates at least in a full-time sense. As one of the people kind of at the top of the technical ranks, I'm curious how have you noticed his absence in the last 18 months since he's left full-time work?
Muglia: I watched while Bill was here how his role shifted over a period of time. There was a period of time when the company was at a size and scope where Bill really was able to do the direction, the technical direction of very large parts of the company.

"I'd like to think that I learned a lot from Bill (Gates) and I'm able to do some of the things that Bill would have done."

During the 1990s, we grew to a point where that was just not possible for a human being to do, although Bill's capacity is far beyond most. And so over time, Bill shifted into much more of an advisor role, and he provided advice and guidance.

While Bill's advice was always incredibly useful, he did a great job of also building a lot of people within the company that could also think in a similar sort of way to him. I mean, I'd like to think that I learned a lot from Bill and I'm able to do some of the things that Bill would have done.

I view part of my job is to take up and do many of the things that Bill did, and to do it in my area. My area of scope is broad but it's to a level that I think I can still do that effectively.

But I think what's happened is we now have a hierarchy of people. You've got Ray doing some significant cross-group things, and then you've got people like myself or Steven Sinofsky or folks in Stephen Elop's world, J Allard, all playing subset roles of what Bill used to do.

For a long time, Microsoft and you talked about this notion of autonomic computing. One of the things that conveyed was the sense that the IT would just sort of manage itself. Now you guys talk more about Dynamic IT. It seems like some of the idea that it's just going to magically happen has been dropped from the notion.
Muglia: Let's just sort of kind of go back and talk about the time from where all those things sort of emerged in the 2002, 2003 sort of time. People sometimes called things autonomic computing, they sometimes called it utility computing. You know, our name for it was always Dynamic IT. It actually started out as the Dynamic Systems Initiative, and we sort of broadened it a bit with Dynamic IT a few years ago.

And the idea being that operational resources should be largely self-managing, and the process, the lifecycle of developing an application should be connected from the point of requirements, definition, through development, all the way to operations. That vision has not changed. We said it was a 10-year vision in 2003, and it will probably take us 10 years to really fully fulfill it. I think every year, as we release new products, we take substantive steps forward.

Now, the thing I didn't understand back then is that all of this would lead so naturally to the cloud application model, and that's what we've kind of put in place over the last year or two. I would very clearly say that is the next thing.

When you think about some of the biggest trends in the coming few years, I imagine cloud computing is a big one?
Muglia: Yeah. The thing that to me is so exciting about that is the impact I think it's going to have on business, most importantly allowing people to write applications more rapidly that really meet the needs of tier one enterprise apps, and do so at a fraction of the cost, both from a hardware perspective and from an operational perspective.

"A container is essentially the mainframe of the future. The difference is that it's thousands of times more powerful than a mainframe at a fraction of the cost."

So, the kinds of things that would have required a mainframe in the past.
Muglia: Absolutely. Obviously we built software for very high-end systems. I mean, being straight and honest, that's not the main thing that our software is used for. We have a fairly small number of our systems running on these big $100,000 plus machines. But it's an important segment of the market, and clearly UNIX has a significant segment there, and the mainframe is a significant segment there.

When I talk to the Windows Azure guys, and I talk the software architecture that they've put in place in terms of having isolation units and understanding how to contain failures, and then I look at the way we are building the next generation data center systems, built inside these containers, I recognize that a container is essentially the mainframe of the future. The difference is that it's thousands of times more powerful than a mainframe at a fraction of the cost.

Do you have a sense of how many people are actually doing real cloud computing today, what percentage of businesses, and what that might look like in the next two or three years?
Muglia: I think Forrester has done some work that it's a really small number of people, like 4 percent of folks that are actually deploying things right now. So, it's still very nascent. But we expect a really large number of folks to start to (use cloud computing) over the next 12 to 18 months.

We're right at that inflection point where people are going to begin to start building real applications and begin deploying those applications into their environment, both for internal use and for their external customers. Certainly if you go out three to five years, we expect it to become very mainstream.

One of the constant debates is when people ask how much work will be done in a company's own data centers as compared to some sort of public cloud like Microsoft is running with Azure. Do you have a sense of where that mix might be a couple years from now or five years from now?
Muglia: I think certainly over the next five years we'll still see more work done in-house than in a public cloud. I mean, you'd have to move an awful lot of work out in order to shift that.

The question will be what is the cost and the effect, and at one level how much can this be done for people in a public cloud environment at a lower cost, and what level of security and trust can be established so that people feel comfortable moving their workloads to the cloud. I don't expect my bank will be moving their core financial systems to a public cloud environment in a five-year horizon, and that's probably a fine decision on their part.

I want to make, enable, and build the technology infrastructure to allow people to move their most sensitive data into the cloud so that some day it will become possible for a bank to do that, but I think it will take a little while for it to actually happen.

Last week, Microsoft bought a company called Opalis that specializes in software to manage data centers. Is this a big company? What made you interested in them?
Muglia: It's a moderate sized company. I'm really excited about this acquisition. I mean, what they do is something that's called "run book automation," and what they've done is they've built a very strong base of understanding of how to automate tasks that are happening within the data center.

And by the way, that's quite heterogeneous in its nature. Although they run fully on Windows, they're not limited and restricted to data center tasks that happen simply in Windows, but they can reach out and work with Linux and UNIX systems, et cetera.

Being able to automate a set of tasks is one of the key things that's going to be necessary to simplify the operations of any of these data center environments, and Opalis is a fantastic acquisition for us because they bring a ton of expertise and real world customer experience in that space. We think our customers will see value from this literally from day one.


You can learn more about Microsoft .NET outsourcing at the software outsourcing company website:

Wednesday, December 9, 2009

Enabling Business Capabilities with SOA


Evolving from business-process reengineering (BPR), business-process management (BPM) was an established discipline well before service-oriented architecture (SOA). Initially, enterprises viewed the two as distinct—often establishing separate teams that leveraged disparate technologies. Since then, SOA has become less about leveraging Web services for faster integration and more about providing an abstraction of information-technology (IT) resources to support business activities directly. This maturity in SOA thinking brings it closer into alignment with BPM.

Enterprises now see this alignment and frequently combine new SOA and BPM projects; however, challenges still exist. While activities in a business process can be implemented as discrete services, there is usually no direct connection between BPM model artifacts and SOA model artifacts, which makes traceability difficult. Enterprise architects have a strong desire to see which processes would be affected if a given service were modified. At the same time, business owners want to see how their investments in SOA are faring. Ideally, they would like to gain visibility into which processes and services support a given business capability such as “Order fulfillment.”

This article explains how Microsoft Services might provide enterprises with the visibility that they want through the application of service offerings (see Figure 1) that would leverage any existing Microsoft Services Business Architecture deliverables. Specifically, the article focuses on how Architecture and Planning services might feed directly into technology-optimization services. By weaving existing Integration (SOA) offerings—part of the Application Platform Optimization (APO) model under Business Technology Optimization (BTO) services—with emerging concepts that are used to drive future requirements, we hope to paint a picture of how you can enable business capabilities today on our stack and how this will only get easier over time.

Figure 1. Services offered by Microsoft Consulting

The Integration offerings are built around the concept of enterprise layers, which describes relationships among processes, services, and data in the enterprise. The specific mesh of enterprise-layer items and their relationships in a given environment is referred to as an enterprise service model (ESM). To deliver this ability, we start by mapping items in our ESM to capabilities in a capability model, as described by Martin Sykes and Brad Clayton in their article titled “Surviving Turbulent Times: Prioritizing IT Initiatives Using Business Architecture” in the July 2009 issue of The Architecture Journal. When the mapping is complete, we have a dependency map that we can use to identify which IT resources support a given capability and to understand which capabilities are affected by which IT resources. In this case, an IT resource might be a Web service; but it might also be a message queue (MQ) or a mainframe Customer Information Control System (CICS) transaction.
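A hypothetical sketch of such a dependency map (all resource names below are invented for illustration; only "Order fulfillment" and "HR Registration" come from the article): the capability-to-resource mapping is stored in one direction and inverted to answer the enterprise architect's question in the other.

```python
from collections import defaultdict

# Business capability -> IT resources that support it.
# Resource names are hypothetical examples of the kinds mentioned
# in the text: Web services, MQ queues, CICS transactions.
supports = {
    "Order fulfillment": [
        "OrderService (Web service)",
        "ORDERS.IN (MQ queue)",
        "CICS transaction O001",
    ],
    "HR Registration": [
        "BenefitsService (Web service)",
        "PayrollService (Web service)",
    ],
}

# Invert the map to answer: "Which capabilities are affected
# if this IT resource is modified?"
affected_by = defaultdict(list)
for capability, resources in supports.items():
    for resource in resources:
        affected_by[resource].append(capability)

print(supports["Order fulfillment"])                 # capability -> resources
print(affected_by["PayrollService (Web service)"])   # resource -> capabilities
```

In a real ESM the edges would come from model repositories rather than a hand-written dictionary, but the two queries, forward for business owners and inverted for architects, are exactly the traceability the article describes.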

We will continue to use the fictional retail bank, Contoso Bank, which was previously introduced to provide a context for discussing the concepts of enterprise layers and the ESM. As with all businesses, Contoso Bank must perform employee onboarding. Onboarding begins when a start date is established, after a candidate has accepted an offer and cleared the background check. It ends when the employee has a telephone number, e-mail address, laptop computer, and cubicle; has registered for HR benefits; and is set up in payroll. Several multistep processes are executed by different groups in the organization to fulfill employee onboarding. In an effort to reduce costs and increase productivity, Contoso Bank has a strong desire to reduce the time that it takes to onboard employees, so that they can start their jobs more quickly.

Enterprise Layers

The EA team at Contoso Bank structures its decisions and activities around an enterprise-layer model, as shown in Figure 2.

Figure 2. Enterprise layers

The model that is shown in Figure 3 is an evolution of the distributed- (or n-tier–) application architecture paradigm:

Figure 3. Application layers

The enterprise-layer concept extends these areas of concern across multiple applications. Whereas the presentation layer in a single application represents the actionable interface that invokes business logic, in an enterprise ecosystem it is often asynchronous events in a process that invoke business logic. The business layer encapsulates business logic that is specific to an application. Frequently, enterprise services must coordinate interaction with multiple business services to fulfill an activity step in a process.

The integration layer is where traditional data jobs and enterprise application-integration (EAI) activities take place. The enterprise-layer concept provides enterprise architects with an opportunity to prescribe policies across a given layer and gives them a framework in which to think about dependencies across the entire IT landscape.

After several years, the application portfolio of Contoso Bank is more consistent in approach and architecture, but it still struggles to align with the business. An additional relationship is missing: capabilities.

Business Capabilities

Capabilities do not represent an IT resource or a group of IT resources. They are purely business abstractions that can be accomplished or that are wanted. A given capability might depend on additional capabilities that are to be delivered. The Contoso Bank capability model includes the Onboarding capability. As shown in Figure 4, this capability depends on the following child capabilities:

Figure 4. Capabilities of Contoso Bank

The business might choose to model these capabilities at a lower level of granularity and include further child capabilities. In our example, the HR Registration capability could be composed of a Benefit Registration capability and a Payroll Setup capability. These capabilities are pure business abstractions, so there are no assumptions about how they are implemented, or even whether they are implemented at all.
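As a concrete sketch, the parent/child structure of a capability model can be represented as plain data. This is an illustrative reconstruction, not part of any Microsoft offering; the child-capability names are inferred from the onboarding description above:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a capability model as pure business abstractions.
// No IT resources appear here: a capability is only a named business
// concept with optional child capabilities.
public class CapabilityModel {
    static class Capability {
        final String name;
        final List<Capability> children = new ArrayList<>();
        Capability(String name) { this.name = name; }
        Capability addChild(String childName) {
            children.add(new Capability(childName));
            return this;
        }
    }

    // Builds an Onboarding capability with child capabilities inferred
    // from the text (names are illustrative, not from Figure 4).
    public static Capability buildOnboarding() {
        return new Capability("Onboarding")
                .addChild("Telephone Setup")
                .addChild("E-mail Setup")
                .addChild("Hardware Setup")
                .addChild("HR Registration")
                .addChild("Payroll Setup");
    }

    public static void main(String[] args) {
        Capability onboarding = buildOnboarding();
        System.out.println(onboarding.name + " has "
                + onboarding.children.size() + " child capabilities");
    }
}
```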

Modeling Enterprise Layers and Capabilities

Artifacts in each enterprise layer can be modeled independently using existing tools in the disciplines of business-process modeling, capability models, and service modeling. Most enterprises would agree that there is benefit in developing their skills in these disciplines. Mature organizations that have established modeling methodologies face new challenges. Often, the business-process models, capability models, and service models are created by using different notations and tools. More importantly (at this point in the discussion), while the models are intended to represent the business and IT landscape, they drift apart quickly and lose value over time.

The Enterprise Service Model

Originally, the concept of an enterprise service model (ESM) existed to rationalize a portfolio of services. The ESM would include a conceptual model of services in the IT environment and services that were planned to be developed and deployed. The model immediately faced the challenge of becoming outdated, because IT employees modeled changes in the environment manually. Also, as enterprises saw SOA aligning more closely with BPM, the model had to account for processes and business capabilities—which it often did not, so its value diminished. The concept of an ESM had to be expanded to keep the model in sync with reality and to include capabilities and processes. Today, our concept of an ESM aims to represent the instances of artifacts that are found in the enterprise layers and their relationships with capabilities that are found in a capability model (see Figure 5). Not only are the relationships for specific systems, resources, operations, and capabilities defined, but the ability to affect the real instances is an important goal.

Figure 5. Mapping capabilities through the ESM to enterprise layers

Each capability can be enabled by one or more processes or service operations. Each step (activity) in a process can be mapped to a service operation. In cases in which an activity maps to multiple services, a façade should be created to manage the coordination or aggregation, so that the activity logically maps to a single service. In our Contoso Bank example, we are attempting to enable “onboarding.” As described earlier, the Onboarding capability comprises child capabilities. However, not all capabilities need to be mapped. If a process or service can be identified that fulfills the entire Onboarding capability, it would be the only one that is mapped. Our customer does not have a single service to fulfill the Onboarding capability.
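The façade idea described above can be sketched in a few lines: an activity that would otherwise map to several services is exposed as a single coordinating operation. All names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: when a process activity maps to multiple services, a façade
// coordinates them so the activity logically maps to a single operation.
public class OnboardingFacade {
    interface Service { String invoke(String employee); }

    private final List<Service> services = new ArrayList<>();

    public OnboardingFacade add(Service s) { services.add(s); return this; }

    // The single operation the activity maps to; it coordinates every
    // underlying service call and aggregates the results.
    public List<String> onboard(String employee) {
        List<String> results = new ArrayList<>();
        for (Service s : services) results.add(s.invoke(employee));
        return results;
    }

    public static void main(String[] args) {
        OnboardingFacade facade = new OnboardingFacade()
            .add(e -> "hardware ordered for " + e)
            .add(e -> "HR record created for " + e)
            .add(e -> "payroll set up for " + e);
        System.out.println(facade.onboard("alice"));
    }
}
```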

In the future, Contoso Bank might invest in the creation of an onboarding process that would execute in its data center. At this point, the problem is that it is completely conceptual; it might exist in diagrams or be described in documents, but there is no concrete IT-resource representation or execution of capabilities. Capabilities exist for traceability and for demonstrating a return on investment (ROI) to business stakeholders. They also aid in strategic architecture planning, by helping IT identify which processes might be leveraged to enable a capability and identify new processes that should be created.

Microsoft Service-Oriented Infrastructure

The process of mapping capabilities to IT resources is powerful. The mapping might help identify gaps in a service portfolio and provide business traceability to service development. As this model becomes more concrete, moving from diagrams to metadata, it becomes more powerful. Enterprise assets can now develop stronger links—not just at design time, but at runtime.

This is one of the goals of the “Oslo” modeling technologies. “Oslo” is the code name for Microsoft’s forthcoming modeling platform. “Oslo” extends beyond squares and rectangles that cannot be shared across tools to a set of technologies that are focused on enabling gains in productivity. Gains in productivity will come in the form of sharing metadata that is captured through diagrams and runtime environments that will execute metadata that is stored in the repository.

The Integration offering that we introduced earlier includes the service-oriented-infrastructure (SOI) offering that consists of the Microsoft Services Engine (MSE), which allows customers to start establishing their ESM today. The MSE stores information that is related to IT resources in its metadata repository. This metadata includes the location, types, and policies for Web services. It can also include the metadata that is necessary to invoke a CICS transaction through Microsoft Host Integration Server or the metadata that is required to drop a message in an MQ.

As more metadata is imported into the MSE, a more complete picture of the ESM is created. Certainly, there is value in understanding what is in the environment. However, the value in MSE does not come from storing metadata, but from the ability to take this metadata and affect how the environment is constructed virtually. The MSE accomplishes this through its implementation of service virtualization. In this context, service virtualization is the abstraction of the address, binding, and contract between the service consumer and the service provider. Because we can hide where the service operation actually resides, change how we communicate with the service operation, and manipulate what the service operation looks like, we can create virtual services.

Adoption of the MSE by Contoso Bank allows them to take operations from disparate services and combine them to create a new endpoint. While Contoso Bank did not possess a single operation or process entry point to fulfill the Onboarding capability, they did have operations to start the hardware, HR, and payroll processes. By leveraging the MSE, Contoso Bank was able to project a virtual service that contained each operation—providing a simple façade that could be developed against, to fulfill the Onboarding capability.
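A minimal sketch of the service-virtualization idea, assuming a simple routing table from virtual operation names to physical endpoints. The endpoint addresses and bindings are invented for illustration; the real MSE stores far richer metadata:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of service virtualization: a virtual service hides the address
// and binding of each physical operation behind a routing table, so the
// consumer never sees where an operation actually resides.
public class VirtualService {
    record Endpoint(String address, String binding) {}

    private final Map<String, Endpoint> routes = new HashMap<>();

    public void map(String virtualOperation, Endpoint physical) {
        routes.put(virtualOperation, physical);
    }

    // Resolves a virtual operation to its current physical endpoint.
    public Endpoint resolve(String virtualOperation) {
        Endpoint e = routes.get(virtualOperation);
        if (e == null) throw new IllegalArgumentException("unmapped: " + virtualOperation);
        return e;
    }

    public static void main(String[] args) {
        VirtualService onboarding = new VirtualService();
        // Hypothetical physical endpoints behind one virtual façade."StartHardwareProcess", new Endpoint("http://intranet/hw", "SOAP/HTTP"));"StartHRProcess", new Endpoint("mq://hrqueue", "MQ"));"StartPayrollProcess", new Endpoint("cics://payroll", "CICS"));
        System.out.println(onboarding.resolve("StartHRProcess").address());
    }
}
```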

Coming Enhancements

The MSE solution was developed over a four-year period by a team of architects in Microsoft Services who were working with customers and their scenarios. During this period, the MSE solution underwent several releases that added features; updated the ESM; and supported new operating systems, as well as new versions of Microsoft .NET Framework and Microsoft SQL Server. New features were developed in consultation with customers and as a result of frequent meetings with various Microsoft product groups. These meetings helped align the solution with product releases and validate the solution.

The result was the filing of a joint patent application that was related to service virtualization and acceptance of the MSE solution into the codename “Dublin” Microsoft Technology Adoption Program (TAP). “Dublin” is an extension of Windows Server that will provide enhanced hosting and management for Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) applications.

In the coming year, MCS plans to extend the ESM to include capabilities as first-class citizens. The ESM will be extended so that business capabilities can be associated with operations. When this is complete, we will be able to get end-to-end visibility—from the business capability all the way to the IT resources that deliver that capability. In our example, we find that the child capabilities are mapped to operations and that those operations are projected as virtual services. Someone who was examining the model would see the need to develop a workflow service to coordinate interaction between the three operations. After the development of that service, it too could be managed by the MSE and mapped to the Onboarding capability for dependency tracking.

As “Oslo” gets closer to release, the MSE will take it as a dependency and leverage the repository to store the metadata that represents the ESM. By utilizing “Oslo,” we will map the virtualized service and operations to the Contoso Bank Onboarding business capabilities. A business analyst or domain expert can start with business capabilities and associate the services and resources that are utilized to perform the specified business capability. The addition of the “Oslo” modeling technology will be a natural extension of the ESM and MSE.

For additional information, please refer to César de la Torre Llorente’s article titled “Model-Driven SOA with ‘Oslo’” in this issue of The Architecture Journal.


The services and applications at Contoso Bank were developed over a period of months or even years, using a variety of different technologies. The developers at Contoso Bank followed common architectural patterns for distributed- and n-tier–application development. However, as their business evolves, they face the continued struggle of aligning applications with their business.

By focusing on a business capability such as “Onboarding,” Contoso Bank is better able to align with the applications and business processes. Instead of having to write new applications or application adapters to map to the business capability, Contoso Bank found that by leveraging an MCS solution (the managed-services engine, or MSE), their developers could create new virtual services that align with the business capability. This is possible by modeling the existing services, endpoints, operations, and data entities with the use of the ESM. This process is quickly performed by importing existing services or applications into the ESM; then, new virtual services that align with the business capabilities are defined. Service virtualization enables the composition of multiple physical resources and operations into a virtual service that aligns with the business capability.

Source: The Architecture Journal

Sunday, December 6, 2009

Oracle Rac - Failover And Load Balancing


Oracle RAC systems provide two methods of failover to provide reliable access to data:

Connection failover.

If a connection failure occurs at connect time, the application can fail over the connection to another active node in the cluster. Connection failover ensures that an open route to your data is always available, even when server downtime occurs.

Transparent Application Failover (TAF).

If a communication link failure occurs after a connection is established, the connection fails over to another active node. Any disrupted transactions are rolled back, and session properties and server-side program variables are lost. In some cases, if the statement executing at the time of the failover is a Select statement, that statement may be automatically re-executed on the new connection with the cursor positioned on the row on which it was positioned prior to the failover.

The primary difference between connection failover and TAF is that the former method provides protection for connections at connect time and the latter method provides protection for connections that have already been established. Also, because the state of the transaction must be stored at all times, TAF requires more performance overhead than connection failover.

Connection Failover

Enabling connection failover allows a driver to connect to another node if a connection attempt on one node fails. When an application requests a connection to an Oracle database server through the driver, the driver does not connect to the database server directly. Instead, the driver sends a connection request to a listener process, which forwards the request to the appropriate Oracle database instance. In an Oracle RAC system, each active Oracle database instance in the RAC system registers with each listener configured for the Oracle RAC.
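Connect-time failover can be sketched as a simple loop over the active nodes; the connector function and node names here are stand-ins for a real driver's connection logic:

```java
import java.util.List;
import java.util.function.Function;

// Sketch of connect-time failover: try each active node in turn until a
// connection attempt succeeds. The connector abstracts whatever the
// driver does to open a session against one node.
public class ConnectionFailover {
    public static <C> C connect(List<String> nodes, Function<String, C> connector) {
        RuntimeException last = null;
        for (String node : nodes) {
            try {
                return connector.apply(node);   // first successful node wins
            } catch (RuntimeException e) {
                last = e;                        // remember the failure, try the next node
            }
        }
        throw new IllegalStateException("all nodes failed", last);
    }

    public static void main(String[] args) {
        // nodeA is down, so the connection fails over to nodeB.
        String session = connect(List.of("nodeA", "nodeB", "nodeC"), node -> {
            if (node.equals("nodeA")) throw new RuntimeException(node + " down");
            return "connected to " + node;
        });
        System.out.println(session);
    }
}
```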

Transparent Application Failover (TAF)

With TAF, if a communication link failure occurs after a connection is established, the connection is moved to another active Oracle RAC node in the cluster without the application having to re-establish the connection.

Load Balancing

Oracle RAC systems provide two types of load balancing for automatic workload management:
Server load balancing distributes processing workload among Oracle RAC nodes.
Client load balancing distributes new connections among Oracle RAC nodes so that no one server is overwhelmed with connection requests. For example, when a connection fails over to another node because of hardware failure, client load balancing ensures that the redirected connection requests are distributed among the other nodes in the RAC.
The primary difference between these two methods is that the former method distributes processing and the latter method distributes connection attempts.

Server Load Balancing

With Oracle9i RAC systems, a listener service provides automatic load balancing across nodes. The query optimizer determines the optimal distribution of workload across the nodes in the RAC based on the number of processors and current load.
Oracle 10g also provides load-balancing options that allow the database administrator to configure rules for load balancing based on application requirements and Service Level Agreements (SLAs). For example, rules can be defined so that when Oracle 10g instances running critical services fail, the workload is automatically shifted to instances running less critical workloads. Or, rules can be defined so that Accounts Receivable services are given priority over Order Entry services.
The DataDirect Connect for ODBC Oracle drivers can transparently take advantage of server load balancing provided by an Oracle RAC without any changes to the application. If you do not want to use server load balancing, you can bypass it by connecting to the service name that identifies a particular RAC node.

Client Load Balancing

Client load balancing helps distribute new connections in your environment so that no one server is overwhelmed with connection requests. When client load balancing is enabled, connection attempts are made randomly among RAC nodes. You can enable client load balancing for DataDirect Connect for ODBC drivers through a driver connection string, using the Load Balancing connection string attribute.
Suppose you have the Oracle RAC environment shown in Figure 4 with multiple Oracle RAC nodes, A, B, C, and D. Without client load balancing enabled, connection attempts may be front-loaded, meaning that most connection attempts would try Node A first, then Node B, and so on until a connection attempt is successful. This creates a situation where Node A and Node B can become overloaded with connection requests.

With client load balancing enabled, the driver randomly selects the order of the connection attempts to nodes throughout the Oracle RAC system. For example, Node B may be tried first, followed by Nodes D, C, and A. Subsequent connection retry attempts will continue to use this order. Using a randomly determined order makes it less likely that any one node in the Oracle RAC system will be so overwhelmed with connection requests that it may start refusing connections.
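The random attempt order can be sketched as a seeded shuffle of the node list. This illustrates the idea only; it is not the DataDirect driver's actual algorithm:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of client load balancing: instead of always trying nodes
// front-to-back, randomize the attempt order so no single node is
// flooded with connection requests.
public class ClientLoadBalancing {
    public static List<String> attemptOrder(List<String> nodes, Random random) {
        List<String> order = new ArrayList<>(nodes);
        Collections.shuffle(order, random);
        return order;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("A", "B", "C", "D");
        // Two clients will usually get different attempt orders.
        System.out.println(attemptOrder(nodes, new Random(1)));
        System.out.println(attemptOrder(nodes, new Random(2)));
    }
}
```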


Wednesday, December 2, 2009

Interoperability Between Oracle and Microsoft Technologies, Using RESTful Web Services


I found that interesting article while reading:

Interoperability Between Oracle and Microsoft Technologies, Using RESTful Web Services

by John Charles (Juan Carlos) Olamendy Turruellas ACE

A guide to developing REST Web services using the Jersey framework and Oracle JDeveloper 11g

Published December 2009

RESTful Web services are the latest revolution in the development of Web applications and distributed programming for integrating a great number of enterprise applications running on different platforms. Representational state transfer (REST) is the architectural principle for defining and addressing Web resources without using the heavy SOAP stack of protocols (WS-* stack). From the REST perspective, every Web application is a service; thus it's very easy to develop Web services with basic Web technologies such as HTTP, the URI naming standard, and XML and JSON parsers. (The story of RESTful Web services begins with Chapter 5 of Roy Fielding's Ph.D. dissertation, Architectural Styles and the Design of Network-based Software Architectures, although Fielding, one of the authors of the HTTP spec, presents REST not as a reference architecture but as an approach to judging distributed architectures.)

The key to RESTful Web services is that application state and functionality are abstracted as resources on the server side. These resources are uniquely referenced by global identifiers via URI naming, and they share a uniform interface for communication with the client, consisting of a set of well-defined operations and content types. The traditional HTTP methods—POST, GET, PUT, and DELETE (also known as verbs in REST terminology)—encompass every create, read, update, and delete (CRUD) operation that can be performed on a piece of data. The GET method is used to perform a read operation that returns the contents of the resource. The POST method is used to create a resource on the server and assign a reference to this resource. The PUT method updates the resources when the client submits content to the server. And finally, the DELETE method is used to delete a resource from the server.
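The verb-to-CRUD mapping above can be made concrete with the JDK's HttpClient API (Java 11+). The URIs reuse the article's illustrative http://server examples, and no request is actually sent here:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Each HTTP verb paired with its CRUD operation, expressed as JDK
// HttpRequest objects. Building a request does not send it.
public class RestVerbs {
    static HttpRequest read = HttpRequest                       // GET = read
            .newBuilder(URI.create("http://server/customers/1234")).GET().build();
    static HttpRequest create = HttpRequest                     // POST = create
            .newBuilder(URI.create("http://server/customers"))
            .POST(HttpRequest.BodyPublishers.ofString("<customer/>")).build();
    static HttpRequest update = HttpRequest                     // PUT = update
            .newBuilder(URI.create("http://server/customers/1234"))
            .PUT(HttpRequest.BodyPublishers.ofString("<customer/>")).build();
    static HttpRequest delete = HttpRequest                     // DELETE = delete
            .newBuilder(URI.create("http://server/customers/1234")).DELETE().build();

    public static void main(String[] args) {
        System.out.println(read.method() + " " + create.method() + " "
                + update.method() + " " + delete.method());
    }
}
```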

URI naming must be meaningful and well structured, so that clients can go directly to any state of the application through resource URIs, without passing through intermediate layers. It's recommended to use path variables to separate the elements of the path in a hierarchical way. For example, to get a list of customers, the URI can be http://server/customers, and to get the customer whose identifier is 1234, the URI can be http://server/customers/1234. You must use punctuation characters to separate multiple pieces of data at the same level in the hierarchy. For example, to get customers whose identifiers are 1234 and 5678, the URI can be http://server/customers/1234;5678, with a semicolon separating the identifiers. The last tip is to use query variables to name parameters—URI naming is for designating resources, not operations, so it's not appropriate to put operation names in the URI. For example, if you want to delete customer 1234, you should avoid the URI http://server/deletecustomers/1234; the solution is to overload the HTTP methods (POST, PUT, and DELETE).
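These naming conventions can be sketched as a few helper methods; the base URI and resource names are the article's illustrative examples:

```java
// Sketch of the hierarchical URI-naming conventions described above.
public class ResourceUris {
    // A collection resource: http://server/customers
    static String collection(String base, String resource) {
        return base + "/" + resource;
    }

    // A single item addressed by a path variable: http://server/customers/1234
    static String item(String base, String resource, String id) {
        return collection(base, resource) + "/" + id;
    }

    // Multiple same-level items separated by semicolons:
    // http://server/customers/1234;5678
    static String items(String base, String resource, String... ids) {
        return collection(base, resource) + "/" + String.join(";", ids);
    }

    public static void main(String[] args) {
        System.out.println(items("http://server", "customers", "1234", "5678"));
    }
}
```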

The data exchanged across resources can be represented with MIME types such as XML or JSON documents as well as images, plain texts, or other content formats. This is specified via the Content-Type header in the requests. The format used in the application will depend on your requirements. If you want to convey structured data, the format might be XML, YAML, JSON, or CSV. If you want to transfer documents, you might use a format such as HTML, DocBook, SGML, ODF, PDF, or PostScript. You can also deal with different content for manipulating photos (JPG, PNG, BMP), calendar information (iCal), and categorized links (OPML).

And finally, RESTful Web services need transfer protocols that are client/server, stateless, cacheable, and layered, so there can be any number of connectors (clients, servers, caches, tunnels, firewalls, gateways, routers) that transparently mediate the request.

There are two types of states in RESTful Web services: resource and application. The resource state is information about resources, stays on the server side, and is sent to the client only in the form of representation. The application state is information about the path the client has taken through the application, and it stays on the client side until it can be used to create, modify, and delete resources.

A RESTful Web service is by nature stateless; if the client wants state to be part of the request, it must be submitted as part of the underlying request. Statelessness is a very important feature for supporting scalability in your solution, because no information is stored on the server (that is the responsibility of the client) and none of it is implied from previous requests. If you have a workload balancer and a request cannot be handled by one server, another one can process it, because message requests are self-contained, and we don't need to refactor the solution architecture.

Ruby on Rails is an open source Web development framework for the Ruby programming language. You can create a Web application very easily to expose relational data, using REST principles. Django, another open source Web development framework, was written for the Python programming language, following the model-view-controller (MVC) design pattern.

And finally, for Java developers, JAX-RS (JSR 311) provides an API for creating RESTful Web services according to REST principles. JAX-RS uses annotations (Java 5 and above) to simplify the development effort for Web services artifacts, so you can expose simple Plain Old Java Objects (POJOs) as Web resources. There are several JAX-RS-based implementations, such as Jersey, JBoss RESTEasy, Restlet, Apache CXF, and Triaxrs. This article explains how to develop REST Web services by using the Jersey framework (the Sun reference implementation for JAX-RS) and Oracle JDeveloper 11g. (At the time of this writing, Oracle is planning to support JAX-RS in the near future by using the Jersey framework approach, integrated with Oracle JDeveloper tools and Oracle WebLogic Server.)

To set up the environment to develop the RESTful Web service with the Jersey framework, download the Jersey libraries, with all the necessary dependencies, from the project's download site. After you reach this site, you can see that Jersey contains several major parts:

  • Core server. A set of annotations and APIs (standardized in JSR-311) to develop a RESTful Web service
  • Core client. The client API for communicating with REST services
  • Integration. A set of libraries for integrating Jersey with Spring, Guice, Apache Abdera, and so on

For our demonstration REST solution, we need only the following three Java Archive (JAR) files from the Jersey download: jersey.jar, asm-3.1.jar, and jsr311-api.jar.

Developing the RESTful Web Service with Oracle Technologies

This article's example application features a common business scenario in which a client needs to search for detailed information about business entities by using a Web service and the client applications and server applications are running on different platforms. The RESTful Web service is developed with Oracle JDeveloper 11g and the Jersey framework running on the Oracle platform, and the front-end application is a console application, developed with Microsoft Visual Studio .NET 2008 and Microsoft .NET Framework 3.5, that consumes the Web service and displays the underlying client data. We're going to use RESTful Web service technologies as the key integration layer between the client and the server.
Let's start with the server-side application. Open the Oracle JDeveloper 11g IDE, and create a new application by entering the application name and the directory for storing the files in the dialog box.

Figure 1: Creating a new application

The next step is to add a Web project to the application by choosing File -> New. When the New Gallery dialog box appears, select the Projects node from the Categories pane and select the Web Project item.

Figure 2: Creating a Web project

The Create Web Project wizard appears. Click Next. On the Location page, enter the project name and the directory of the project within the application's working directory. Then click Next.

Figure 3: The Create Web Project wizard's Location page

On the Web Application page, select Servlet 2.5\JSP 2.1 (Java EE 1.5) to set up the Web technology to be used by the application. Click Next. On the Page Flow Technology page, select None and click Next. On the Tag Libraries page, click Next to go to the next page without choosing any tag library. The last page of the wizard is the Web Project Profile page, where you enter the document root, Web application name, and context name. Click Finish.

Figure 4: The Create Web Project wizard's Web Project Profile page

Now we need to include the Jersey framework in the libraries of Oracle JDeveloper 11g by selecting Tools -> Manage Libraries, launching the Manage Libraries dialog box, and clicking New to open the Create Library dialog box. The next step is to enter a meaningful name for this managed library, such as Jersey Framework, and then click Add Entry and browse to the directory where the Jersey libraries are deployed. For this example, we need to include the asm-3.1.jar, jersey.jar, and jsr311-api.jar JARs. Remember to check the Deployed by Default check box.

Figure 5: Adding JARs for the Jersey framework

After clicking OK, return to the Manage Libraries dialog box. The setting is shown below.

Figure 6: The Jersey framework is added to the Oracle JDeveloper 11g managed libraries.

The next step is to right-click the Web project and select Properties. Go to the Libraries and Classpath node, click Add Library, and select the Jersey Framework library. The Libraries and Classpath setting is shown below.
Figure 7: Libraries and Classpath settings
Now, according to the programming model for RESTful Web services, we need to define an application whose state and functionality are abstracted into resources. Every resource is uniquely addressable with URIs. All resources share a uniform interface for transferring state and functionality from resources and clients by using basic HTTP operations and by sending the object data, using any established content format (string, XML, JSON, and so on). Each resource invokes the business services to process the request and then create the underlying response.

In the JAX-RS standard, a resource class is a POJO class annotated with an @Path annotation to represent a particular REST resource and at least a method annotated with @Path or a request method designator such as @GET, @PUT, @POST, or @DELETE to handle requests and specific operations such as creating, reading, updating, and deleting resources.

The first step is to define the business entity whose state will be persisted in the payload of the RESTful message. In our case, we want to send information concerning our customers, so let's create a Customer class in the domainobjects package (see below).

Figure 8: Creation of the Customer class in the domainobjects package

Next, let's define the attributes as well as the setter and getter properties for the customer entity type, such as the identifier, full name, and age fields. In real-world situations, we must characterize a customer entity with a lot of fields. We also have to override the toString method in order to return an XML representation of a customer. Our definition of the Customer class is shown below.

Figure 9: Definition of the Customer class
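Since the figure is not reproduced here, the following is a hedged reconstruction of what such a Customer class might look like, based on the fields named in the text (identifier, full name, and age); the XML element names are assumptions:

```java
// Hedged reconstruction of the Customer business entity: attributes with
// getter and setter properties, and toString overridden to return an XML
// representation of a customer.
public class Customer {
    private int identifier;
    private String fullName;
    private int age;

    public int getIdentifier() { return identifier; }
    public void setIdentifier(int identifier) { this.identifier = identifier; }
    public String getFullName() { return fullName; }
    public void setFullName(String fullName) { this.fullName = fullName; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public String toString() {
        // Element names are assumptions for illustration.
        return "<customer><identifier>" + identifier + "</identifier>"
             + "<fullname>" + fullName + "</fullname>"
             + "<age>" + age + "</age></customer>";
    }
}
```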

We also need to define an entity manager for the customers. The entity manager is responsible for the main data operations, acting as a gateway to the database on behalf of the customer entity. In order to focus on the development of the RESTful Web service and not on the creation of data access objects, this article avoids any interaction with database systems and defines a Java array of customers to simulate the underlying querying operation. Then the entity manager will return a list of available customers by using the getCustomers method and enable getting a particular customer, given that customer's identifier, with the getCustomerById method.

Figure 10: Definition of the CustomerEntityManager class
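Again as a hedged reconstruction of the figure: an entity manager that avoids database interaction and serves a fixed array of customers, as the text describes. The sample customer data is invented for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Hedged reconstruction of the entity manager: it acts as a gateway on
// behalf of the customer entity but, to keep the focus on the RESTful
// service, simulates querying with a fixed in-memory list.
public class CustomerEntityManager {
    record Customer(int identifier, String fullName, int age) {}

    private static final List<Customer> CUSTOMERS = Arrays.asList(
        new Customer(1, "Ana Trujillo", 34),
        new Customer(2, "Thomas Hardy", 51),
        new Customer(3, "Yang Wang", 28));

    // Returns the list of available customers.
    public static List<Customer> getCustomers() { return CUSTOMERS; }

    // Returns a particular customer, given that customer's identifier.
    public static Optional<Customer> getCustomerById(int identifier) {
        return CUSTOMERS.stream()
                .filter(c -> c.identifier() == identifier)
                .findFirst();
    }

    public static void main(String[] args) {
        System.out.println(getCustomerById(1).map(Customer::fullName).orElse("not found"));
    }
}
```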

For a real-world scenario requiring interaction with relational datasources, it's recommended to use built-in features of Oracle JDeveloper 11g and Oracle Application Development Framework (Oracle ADF) technologies (data control and data bindings) for developing a robust persistent layer.

The next step is to specify that all REST requests should be redirected to the Jersey container. This involves defining a servlet dispatcher in the application's web.xml file. In this case, the ServletRestfulWS servlet will map to the pattern /resources/*, and the base URL for accessing the REST resources is http://{remote_server}:{remote_port}/{app_context}/resources/. We also include the index.jsp page as the welcome page for the application's entry point, although it's not necessary. Besides declaring the Jersey servlet, we also need to define an initialization parameter indicating the Java packages that contain the resources. In this case, that is the restfulresources package, where our resources, developed in Java, reside (see below).

Figure 11: Configuring the Jersey dispatcher in the web.xml file
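The figure is not reproduced here, so the following is a sketch of the web.xml entries the text describes. The servlet name, URL pattern, welcome page, and package name come from the article; the servlet class and init-param name are those of the Jersey 1.x servlet container:

```xml
<!-- Sketch of the Jersey dispatcher configuration described above. -->
<web-app>
  <servlet>
    <servlet-name>ServletRestfulWS</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <!-- Tells Jersey which package to scan for resource classes. -->
    <init-param>
      <param-value>restfulresources</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>ServletRestfulWS</servlet-name>
    <url-pattern>/resources/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
</web-app>
```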
The next step is to define a resource named CustomerResource for accepting HTTP GET requests and sending back information about the customers. Let's create the CustomerResource class in the restfulresources package (see below).

Figure 12: Creation of the CustomerResource class in the restfulresources package

The CustomerResource class is the root resource class for our application. Root resource classes are POJO classes that are either annotated with an @Path annotation or have at least one method annotated with an @Path annotation or a request method designator such as @GET, @PUT, @POST, or @DELETE. In this case, we're going to get information about customer entities. The code for the CustomerResource class is shown below.

Figure 13: Definition of the CustomerResource class
The business logic exposed by the CustomerResource class through REST Web services is to get a list of customers (by invoking the getCustomerList method) and detailed information about a particular customer (by invoking the getCustomer method and passing its identifier as a parameter).
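A sketch of this class, consistent with the annotations discussed in the following paragraphs, might look like this; the calls to the entity manager's getCustomers and getCustomerById methods come from the text above, while the use of a JAXB-serializable Customer type is an assumption.

```java
// Sketch of the root resource class; the Customer type and the
// CustomerEntityManager are assumed to exist as described in the article.
package restfulresources;

import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("customers")
public class CustomerResource {

    // Responds to GET {base}/resources/customers/list with an XML payload.
    @GET
    @Path("list")
    @Produces("application/xml")
    public List<Customer> getCustomerList() {
        return CustomerEntityManager.getCustomers();
    }

    // Responds to GET {base}/resources/customers/customer/{identifier};
    // Jersey substitutes the {identifier} template variable at runtime.
    @GET
    @Path("customer/{identifier}")
    @Produces("application/xml")
    public Customer getCustomer(@PathParam("identifier") int identifier) {
        return CustomerEntityManager.getCustomerById(identifier);
    }
}
```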

Now I'll explain the annotations used in this example. First, all the annotations are defined in the javax.ws.rs package, part of the JAX-RS (JSR 311) specification. Because resources are first-class citizens in the REST world, we need a way to access them. The @Path annotation identifies the URI path template to which resources respond. The URI path template is relative to the base URI of the server and port (in this case, localhost and 7101), the context root of the Web application (in this case, RestfulWebserviceApp-CustomerLookupRestfulWebService-context-root), and the URI pattern to which the Jersey servlet responds (in this case, resources, as defined in web.xml). This URI path template approach is very useful for dynamically building URIs in a REST application. In our application, the root resource is annotated with @Path("customers"), so this is the entry point to operations involving the customer entities of our application. Because a resource can have subresources (a way to access attributes and methods of the underlying resource class), we can define a subresource for accessing the getCustomerList method by using the @Path("list") annotation and access the method by using the http://{remote_server}:{remote_port}/{app_context}/resources/customers/list URI.

We can also define a subresource for accessing the getCustomer method by using the @Path("customer/{identifier}") annotation and access the customer whose identifier is 1 by using the http://{remote_server}:{remote_port}/{app_context}/resources/customers/customer/1 URI. In this case, the URI path template includes variables, which are denoted with curly braces and substituted for by the Jersey framework at runtime. To obtain the values of these variables and match them with the underlying request method, we need to use the @PathParam("identifier") annotation in the definition of the method parameter.

The @GET annotation in both methods is a request method designator, along with @POST, @PUT, @DELETE, and @HEAD, that is defined by JAX-RS and corresponds to the similarly named HTTP methods. It means that these methods process only HTTP GET requests.

The @Produces annotation specifies which MIME types are supported for responses. In this case, it's application/xml, although it could be any content format such as an image, JSON, HTML, or plain text.

To successfully deploy and run the RESTful application in the embedded Oracle WebLogic Server, we need to do some tricky work here, because Oracle JDeveloper 11g does not know how to deploy Jersey applications to Oracle WebLogic Server. We need to create a weblogic.xml file along with web.xml in the WEB-INF directory of the Web application (see below).

Figure 14: Oracle WebLogic Server deployment file
Finally, we're going to use another trick by adding an empty index.jsp page to the project. After that, right-click this JSP page and select the Run option from the context menu. When the application server is launched and the application is deployed there, you can see the results.

When you browse to the RESTful application with Internet Explorer, you get the results shown below.

Figure 15: List of customers returned from the RESTful Web service
The screenshot below displays the detailed information for the customer with identifier 1.
Figure 16: The customer with identifier 1 returned by the RESTful Web service

Consuming the RESTful Web Services with Microsoft .NET Technologies

To create the client-side part of the solution, let's open Visual Studio .NET 2008, select File -> New -> Project, and navigate in the Project Types tree to the Windows node on the Visual C# subtree. Select Console Application from Templates, and then enter descriptive names for the project, solution, and directory where you'll store the underlying files (see below).
Figure 17: Creation of a new Console Application project

In Microsoft .NET 3.5 and Windows Communication Foundation (WCF) 3.5, there are two methods for consuming a RESTful Web service. The first one is by using the new WebHttpBinding, which is used to configure endpoints that are exposed through HTTP requests instead of SOAP messages. So we can simply invoke a service by using a URI, sending an HTTP request, and deserializing the response to an object model. WCF supports different message formats such as XML, JSON, and raw binary data. Another way to consume RESTful services is to manually create an HTTP request including all the parameters as part of the URI, get the response, and parse the data on the response payload.

In the sample application for this article, I will use the second strategy to consume the RESTful Web service. The System.Net.HttpWebRequest class implements the logic for processing the request and the response to a Web server. To create the Web request, we use the WebRequest.Create factory method, which returns an HttpWebRequest instance based on the URI passed as a parameter. In this case, the URI instance is dynamically generated from the arguments passed to the application. If there are no arguments and we want a list of customers, we will use the http://{remote_server}:{remote_port}/{app_context}/resources/customers/list address. Otherwise, the address will be built from the argument representing the customer identifier, such as http://{remote_server}:{remote_port}/{app_context}/resources/customers/customer/1 for the customer with identifier 1.

After that, the request is sent when we invoke the GetResponse method, which returns an instance of the System.Net.HttpWebResponse class. Because the HttpWebResponse class implements the System.IDisposable interface to release external resources, the GetResponse call is placed in a using block so that the Dispose method is called when the response instance is no longer needed, permitting the network connection to be closed.

If the Web request to the server results in an HTTP error code (4xx or 5xx), a WebException instance will be thrown. Otherwise, the GetResponseStream method on the HttpWebResponse instance will be called, returning a System.IO.Stream instance that can be read from to process the payload of the response.

In this case, the message payload contains an XML document representing the list of customers. To process the XML document directly, we need to load the stream (representing the XML payload) into an XPathDocument.

There are two options for parsing the XML document. The first strategy is to deserialize the XML document to an object model representing the business entities. This is the more elegant solution, because it establishes a clear separation between the persistent medium and the business logic. The other way is to parse the XML document directly and display the result as it appears in the parsing process. The inconvenience of this strategy is a semantic mismatch, because we don't know the meaning of the data and its structure. In this example, we're just displaying the payload of the response message and not taking any semantic issues into consideration, so we're going to follow the second strategy in order not to lose focus on the RESTful solution (see the code snippet below).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Net;
using System.Xml.XPath;

namespace RestfulWSClientConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            Uri uriRestfulWS = null;

            // Base URI uses the host, port, and context root given earlier in the article
            if (args.Length > 0)
            {
                string strUri = String.Format("http://localhost:7101/RestfulWebserviceApp-CustomerLookupRestfulWebService-context-root/resources/customers/customer/{0}", args[0]);
                uriRestfulWS = new Uri(strUri);
            }
            else
            {
                uriRestfulWS = new Uri("http://localhost:7101/RestfulWebserviceApp-CustomerLookupRestfulWebService-context-root/resources/customers/list");
            }

            try
            {
                HttpWebRequest objWebRequest = (HttpWebRequest)WebRequest.Create(uriRestfulWS);
                using (HttpWebResponse objWebResponse = (HttpWebResponse)objWebRequest.GetResponse())
                {
                    XPathDocument objXmlDoc = new XPathDocument(objWebResponse.GetResponseStream());
                    XPathNavigator objXPathNav = objXmlDoc.CreateNavigator();
                    foreach (XPathNavigator objNode in objXPathNav.Select("/Customers/Customer"))
                    {
                        string strCustomerId = objNode.SelectSingleNode("CustomerId").ToString();
                        string strFullname = objNode.SelectSingleNode("Fullname").ToString();
                        string strAge = objNode.SelectSingleNode("Age").ToString();

                        System.Console.WriteLine("Customer Information. Id={0}, Fullname={1}, Age={2}", strCustomerId, strFullname, strAge);
                    }
                }
            }
            catch (WebException objEx)
            {
                System.Console.WriteLine("Web Exception calling the RESTful Web service. Message={0}", objEx.Message);
            }
        }
    }
}
Finally, let's run the console application, passing as an argument the customer identifier 1. The output resembles that shown below.


This article has explained how to create RESTful Web services by using Oracle technologies such as Oracle JDeveloper 11g, the Jersey framework (the reference implementation of the JAX-RS [JSR 311] specification), and Oracle WebLogic Server, as well as how to consume the Web service by using Microsoft technologies such as Visual Studio .NET 2008 and the .NET 3.5 framework. You can now adapt your own Web solutions to be extended with this revolutionary approach.
Juan Carlos (John Charles) Olamendy Turruellas is a senior integration solutions architect, developer, and consultant. His primary focus is object-oriented analysis and design, database design and refactoring, enterprise application architecture and integration using design patterns, and management of software development processes. He has extensive experience in the development of enterprise applications using Microsoft and Oracle platforms as well as in distributed systems programming, business process integration, and messaging with principles of service-oriented architecture (SOA) and related technologies. He has been awarded Most Valuable Professional (MVP) status by Microsoft several times and is an Oracle ACE.


Tuesday, December 1, 2009

Building Twitter Search using the ASP.NET Ajax Library Beta – Part 1

While going around the web, I've found that interesting article, here is part 1, I will publish the rest once the author launches it:

Building Twitter Search using the ASP.NET Ajax Library Beta – Part 1

by James 30. November 2009 10:00

Last week we launched the ASP.NET Ajax Library Beta during PDC, oh and we donated it to the CodePlex Foundation under the new BSD license (FTW). As the email volume has been fading away running up to Thanksgiving in the US and everyone at work is recovering from conferences, I took this golden opportunity to sit down and build a small sample with the new library now that we are in Beta.

Since the ASP.NET Ajax Library takes care of JSONP requests for me (which enables cross-domain service requests), it is really easy to hit a service like the Twitter Search API, which in turn provides me with a JSON result and a callback to trigger functionality that does something with the result.

This allows us to build a Twitter Search application that runs completely on the client side, depending on no servers apart from those at Twitter HQ (we kind of need them for the search results, remember). In Part 1 of this 2-part series I will look at how to call the Twitter Search service using the WebServiceProxy.invoke method, and then in subsequent posts I will look at using the data in a DataView and how we can use client-side templates to render the results.

First we start by exploring the WebServiceProxy.invoke method, which is how we call the Twitter APIs.

Sys.Net.WebServiceProxy.invoke("", null, true, null, doSomething);

The above code will make the required call to the Twitter Search API and takes a few parameters, including an onSuccess callback function (doSomething) so we can then do something with the result set. We can also specify things like the methodName (for webservice requests), query parameters, timeout, etc. For the full set of params, check out the MicrosoftAjaxWebServices.debug.js file. Behind the scenes, the invoke method figures out if you are making a cross-domain request, in which case the call needs to be JSONP so we can receive the callback on our end.

To get access to WebServiceProxy.invoke, we need to have referenced a number of scripts from the ASP.NET Ajax Library, including MicrosoftAjaxWebServices.js. The most sensible way to do this is to use the new Script Loader, which takes care of loading not only this particular script but also any others from the library on which it is dependent. It does this in a really efficient way, both in parallel and asynchronously, allowing scripts to be loaded but not executed according to a dependency tree. Furthermore, you don’t even need to reference the Script Loader from a local folder or web server; you can grab it direct from the Microsoft Ajax CDN:

You’ll notice that we do versioning based on year and month so that your apps won’t break when we bring out new versions.

Once you’ve got the Script Loader referenced from the CDN, you can start bringing in the components you need; in this sample, so far, we need Sys.scripts.WebServices. We use Sys.require to tell the Script Loader that’s what we need, and we can also provide a callback function for it to call once everything has been loaded and we are good to start using the script.

Sys.require([Sys.scripts.WebServices], callback);

If we put all this code together and push one of the result tweets into a simple alert to show that it works, we get the following code:
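A hedged sketch of such a page, assembled from the snippets above, might look like the following; the CDN start-script URL, the Twitter Search URL and query, and the shape of the JSON result are all assumptions for illustration.

```html
<!-- Sketch only: the start.js URL, search URL, and result shape are assumptions -->
<!DOCTYPE html>
<html>
<head>
  <title>Twitter Search sample</title>
  <!-- Script Loader from the Microsoft Ajax CDN (URL assumed) -->
  <script src="http://ajax.microsoft.com/ajax/beta/0911/start.js"></script>
  <script type="text/javascript">
    // onSuccess callback: push the first result tweet into an alert
    function doSomething(result) {
      alert(result.results[0].text);
    }

    // Called by the Script Loader once MicrosoftAjaxWebServices.js is ready
    function callback() {
      Sys.Net.WebServiceProxy.invoke(
        "http://search.twitter.com/search.json?q=asp.net", // assumed URL
        null, true, null, doSomething);
    }

    // Load the WebServices script (and its dependencies), then run callback
    Sys.require([Sys.scripts.WebServices], callback);
  </script>
</head>
<body></body>
</html>
```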

Try it yourself by copying and pasting it into a blank HTML document; it’s that easy.

In the next post, I’ll show you how to do something useful with the result set by using the DataView component and client-templates. Stay tuned.

For more information about the ASP.NET Ajax Library including samples, downloads and docs check out the wiki here:

There is a bug in the beta where you need to include the following method to get the sample working. Sorry to those who’ve been having trouble getting it working. Add this to your javascript code and all will be right in the world.
// Workaround for a bug in ASP.NET Ajax Beta, you don't need this in the final version
function createElement(tag) { return document.createElement(tag); }