Very often, virtualization is confused with cloud computing. Is the ability to provision and release a set of virtual machines at will the same as IaaS? Not really.

The reason people want machines on demand, via a portal or an automated API, is that they need more computing capacity. Computing capacity, at its most basic, is about CPU and memory. But with services like Amazon EC2, although you can allocate AMIs with a nominal amount of CPU and memory, when it comes to actually running a system, the effective computing capacity you get is MUCH lower than what you nominally pay for.

In short, Amazon EC2 sucks!

Amazon EC2 is good for running simple applications and for offloading sporadic traffic bursts, but NOT good for much else.

At the top of the list of Infrastructure as a Service problems is unacceptably poor I/O. We have tried using Amazon EC2 for various real-world scenarios, and found that it is not really usable for any of the following cases.

1. Running incremental builds for WSO2 Carbon.

This is a very CPU-intensive as well as I/O-intensive task. We use Maven to build our Java projects, and the build involves:
- Downloads – these use the network heavily, and Amazon EC2 is unpredictable when it comes to download and upload bandwidth 
- CPU-intensive compilation – an Amazon EC2 instance's load average goes through the roof, and operations are very slow compared to native hardware 
- Lots of disk writes, as compiled classes are written to disk – Amazon EC2 is unbelievably slow here

Our observation is that this task runs two to three times slower on Amazon EC2 than on native hardware with an average Internet connection. A build that takes two to three hours in total on native hardware takes four to eight hours on Amazon.
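Disk-write speed in particular is easy to sanity-check on your own instance. The following is only a rough sketch, not anything we used in the actual Carbon builds: the chunk size, total volume and temp-file name are arbitrary choices. It writes a fixed amount of data, forces it to the device, and reports the throughput, so the same class run on an EC2 instance and on a native box gives directly comparable numbers.

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Crude sequential disk-write benchmark: writes ~512 MB in 1 MB chunks
 * and prints the effective throughput. Run it on an EC2 instance and on
 * native hardware to compare the numbers for yourself.
 */
public class DiskWriteBenchmark {
    public static void main(String[] args) throws IOException {
        final int chunkSize = 1024 * 1024;      // 1 MB per write
        final int chunks = 512;                 // ~512 MB in total
        byte[] buffer = new byte[chunkSize];    // zero-filled payload

        File target = File.createTempFile("io-bench", ".dat");
        long start = System.nanoTime();
        FileOutputStream fos = new FileOutputStream(target);
        try (BufferedOutputStream out = new BufferedOutputStream(fos)) {
            for (int i = 0; i < chunks; i++) {
                out.write(buffer);
            }
            out.flush();
            fos.getFD().sync();  // force data to the device so the page cache cannot hide slow I/O
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        double mb = (double) chunkSize * chunks / (1024 * 1024);
        System.out.printf("Wrote %.0f MB in %d ms (%.1f MB/s)%n",
                mb, elapsedMs, mb / (elapsedMs / 1000.0));
        target.delete();
    }
}
```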

2. Hosting WSO2 Oxygen Tank (OT) Developer Portal.

We use Drupal, the PHP-based CMS, to host OT. The setup involves three machines: two running httpd with Drupal, and one running MySQL. 
When we ran this Web site on Amazon EC2, we faced service outages at least three to four times a month. The main problem was that the MySQL instance could not keep up with the queries: the Amazon machine instance simply could not sustain the rate of I/O required by the queries the two Drupal instances were running against MySQL. 
The simple decision to move this hosting out of Amazon EC2 and onto much smaller boxes with native hardware solved the problem, and we have had no issues with the Web site since.
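In our case MySQL's own slow-query log was enough to point at I/O starvation, but a few lines of JDBC are all it takes to watch the latency degrade from the outside. The sketch below is purely illustrative: the host, credentials and the COUNT over Drupal's node table are placeholder choices, and it assumes MySQL Connector/J is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Repeatedly runs one representative query and prints its latency, so a
 * sudden jump in response time (an I/O-starved database) is easy to spot.
 * Connection details and the query itself are placeholders.
 */
public class MySqlLatencyProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://db.example.org:3306/drupal"; // placeholder host/schema
        try (Connection conn = DriverManager.getConnection(url, "probe", "secret")) {
            for (int i = 0; i < 60; i++) {
                long start = System.nanoTime();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM node")) {
                    rs.next();
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("query " + i + ": " + ms + " ms");
                Thread.sleep(1000); // one probe per second
            }
        }
    }
}
```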

3. Hosting WSO2 StratosLive PaaS.

It is natural to assume that a PaaS should run on an IaaS. But we have learned, over two to three years of experience with Amazon EC2, that nominal cloud benefits like auto-scaling and elastic computing are not really useful when it comes to running a demanding platform like the StratosLive cloud Java PaaS. 
The biggest challenge, as we saw with the Oxygen Tank hosting experience, is that the I/O bottlenecks on the Amazon IaaS are really prohibitive when it comes to running a decent database-driven application. Unlike in the case of OT, where we ran our own MySQL instance, for StratosLive we even tried Amazon RDS as our database solution, and it fails terribly when measured against enterprise PaaS aspirations. 
To add to the troubles, network performance on Amazon EC2 is quite unpredictable. We have experienced prolonged periods, as long as 24 hours, during which the connectivity between services hosted within EC2 broke down badly with read timeouts and broken pipes. Then it would suddenly become OK, and then fluctuate in an unpredictable manner. We had to spend a considerable number of man-hours troubleshooting problems that were never in our software. Moving out of Amazon EC2 and hosting on native hardware showed that our software was stable; it was Amazon EC2 that was not. On top of the Amazon bill for using their computing resources only to discover that they were broken, we also had a frustrated and tired team that spent tons of hours chasing unpredictable behavior in the Amazon cloud. The TCO is much more than the amount Amazon deducted from our credit cards.
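Much of that troubleshooting effort went into proving that the failures were in the network and not in our code. A long-running probe along the lines of the sketch below (the peer URL and timeout values are made-up examples, not our actual monitoring setup) is usually enough: left running for a day, its log shows plainly whether connectivity between two instances is stable or flapping.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.util.Date;

/**
 * Polls a peer service once per second and logs how long each request took,
 * or why it failed. Read timeouts and connection resets show up in a plain
 * log file instead of as mysterious application errors.
 */
public class ConnectivityProbe {
    public static void main(String[] args) throws InterruptedException {
        String target = args.length > 0 ? args[0] : "http://10.0.0.12/health"; // placeholder peer
        while (true) {
            long start = System.nanoTime();
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(target).openConnection();
                conn.setConnectTimeout(5_000);   // fail fast instead of hanging
                conn.setReadTimeout(10_000);     // surface the read timeouts explicitly
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) { /* drain response */ }
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(new Date() + " OK in " + ms + " ms");
            } catch (SocketTimeoutException e) {
                System.out.println(new Date() + " READ TIMEOUT: " + e.getMessage());
            } catch (IOException e) {
                System.out.println(new Date() + " FAILED: " + e.getMessage());
            }
            Thread.sleep(1_000);
        }
    }
}
```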

When people think of cloud computing problems, they think about outages such as the one in April 2011, and assume that those are being resolved. But outages are not the key problem; virtual availability is. The sheer fact that the computing resources are up and running, and that you can create and shut them down at will, is NOT the real benefit of cloud computing. In fact, our Nagios-based monitoring system tells us all the time which Amazon instances are healthy. But whether you get the real CPU time, whether your disk I/O or network bandwidth is just trickling along, and whether the application you host can actually be reached by end users when they want it: those are the real problems.
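Whether you get the real CPU time is, at least on Linux guests, a measurable thing: time the hypervisor takes away from the instance shows up as the "steal" field in /proc/stat, the same number top reports as %st. The sketch below is only an illustration of the idea (Linux-only, aggregate CPU line, a fixed five-second sample): it reports the percentage of CPU the instance asked for but never received.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

/**
 * Samples the aggregate "cpu" line of /proc/stat twice and reports what
 * fraction of the elapsed CPU time was "stolen" by the hypervisor, i.e.
 * cycles the virtual machine wanted but did not get. Linux-only; very old
 * kernels may not expose the steal field.
 */
public class CpuStealSampler {
    // Fields on the "cpu" line: user nice system idle iowait irq softirq steal ...
    private static long[] readCpuLine() throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/proc/stat"));
        String[] parts = lines.get(0).trim().split("\\s+");
        long[] ticks = new long[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            ticks[i - 1] = Long.parseLong(parts[i]);
        }
        return ticks;
    }

    public static void main(String[] args) throws Exception {
        long[] before = readCpuLine();
        Thread.sleep(5_000);                     // sampling interval
        long[] after = readCpuLine();

        long total = 0;
        for (int i = 0; i < before.length; i++) {
            total += after[i] - before[i];
        }
        long steal = after[7] - before[7];       // 8th field is "steal"
        System.out.printf("CPU steal over 5 s: %.1f%%%n", 100.0 * steal / total);
    }
}
```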

It is one thing to pay only for the CPU you actually use; it is quite another to actually get the CPU capacity you need to serve your customers or users when they need your service.

Why pay three times the money for IT infrastructure that is three times slower? It has got to be much more effective, and much cheaper, before it makes sense to rationalize infrastructure onto the cloud.

With all due respect to Amazon for igniting and initiating the cloud computing hype, I hope the Amazon EC2 team takes this as real and positive feedback for the betterment of cloud computing in the future!

 
The cloud remains dominant in software and document distribution, and there is an increasing number of online solutions that enable managers to run their projects in a timely and efficient manner. Trello is a prominent example of this technology, using cloud principles to connect business owners and contractors regardless of where in the world they are situated. With this in mind, what are the exact benefits of Trello, and how can it allow businesses to operate more proficiently while reducing their costs?

  • Real Time Interaction and Document Sharing: In days gone by, sharing documents was an all too complicated process, and the reliance on flawed technology such as fax machines and postal mail often caused hold-ups that could significantly delay a project's completion. With Trello, documents and information can be shared in real time, and collaborators can interact live and maximize the efficiency of any individual project.

  • An Innovative and Flexible Online Solution: The purpose of technology is to provide flexible and innovative solutions to problems, and this has held true across numerous niche sectors and industries. Just as firms such as Ukhost4U have delivered evolving web hosting solutions to clients, so too has Trello emerged as a malleable tool that can be used to house multiple projects simultaneously.

  • Cost Effective Project Management: Given the tough and unforgiving economic climate, it is little wonder that businesses nationwide are striving hard to save money and minimize their monthly costs. Trello allows them to do this effectively, as project managers and coordinators require far less equipment and staff to share documents, distribute tasks and ultimately complete a project in its entirety.

Conclusion
Trello is a tremendously useful and resourceful online tool, and it continues to revolutionize project management in the UK. It is also simple to use, and free to access for small, medium and large businesses alike.

 
 
Cloud computing is the product of extensive research and of the collaboration of IT professionals seeking to improve the computer hosting services that came before it. Many of us ask when and where it really started. Believe it or not, the fundamental idea behind cloud computing can be traced back to the 1960s, when John McCarthy suggested that computation may someday be organized as a public utility. Almost all of its modern characteristics, including the comparison with the electricity industry and the use of private, public, community, and government forms, were systematically explored in the 1966 book The Challenge of the Computer Utility by Douglas Parkhill.
The actual word 'cloud' came into use in the 1990s, when telecommunications companies, which had previously offered dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services with comparable quality of service at a much lower cost. By steering traffic to balance utilization as they saw fit, they were able to use their overall network bandwidth more efficiently. The cloud symbol was used to mark the demarcation point between what the cloud provider was responsible for and what the cloud user was responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure. It is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, and autonomic and utility computing. The details are abstracted away from cloud users, who no longer need expertise in, or direct control over, the technology infrastructure that supports them.