ec2
Installing RPM Packages from AWS EC2 User Data
I am trying to install an RPM package (amazon-ssm-agent) on a new F5 AMI in AWS using EC2 User Data. My User Data script handles many other configuration tasks without a problem, but the rpm -ivh command fails with the message:

    installing package amazon-ssm-agent-2.3.1319.0-1.x86_64 needs 126MB on the /usr filesystem

In the User Data script I can run df -h immediately before the install and see that there is plenty of free space, and if I later SSH into the server as "admin" I can install the package just fine. I ran the install with -vv to compare the two cases and have pinpointed the exact difference, but I have no idea why it happens. Here's a piece of the output of a successful install via SSH:

    Preparing...
    D: computing file dispositions
    D: 0x0000fd05  1024  231280  104882  /
    D: 0x0000fd06  4096  174346  251236  /usr
    D: 0x0000fd08  4096  618487  185679  /var

And here's an unsuccessful attempt via User Data:

    Preparing...
    D: computing file dispositions
    D: 0x0000fd05  1024  231280  104882  /
    D: 0x0000fd06  4096  0       251236  /usr
    D: 0x0000fd08  4096  618487  185679  /var

If I'm reading this correctly, when the install runs from User Data the system sees 0 free space in the /usr filesystem, which is not really true. Has anyone had luck installing RPM packages from User Data before?
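For anyone hitting the same symptom, one low-risk workaround is to defer the install until rpm's view of /usr matches what df reports. The sketch below is a minimal User Data fragment that polls the free-block count on /usr (the same statfs figure rpm's disk-space check reads) and only attempts the install once it is non-zero. The package path, retry count, sleep interval, and log file are illustrative assumptions, not details from the original post.

```bash
#!/bin/bash
# Hedged sketch: wait for /usr to report free blocks before installing the RPM.
# /var/tmp/amazon-ssm-agent.rpm and /var/log/ssm-install.log are assumed paths.
PKG=/var/tmp/amazon-ssm-agent.rpm

for attempt in $(seq 1 30); do
    # stat -f %a prints free blocks available to unprivileged users,
    # i.e. the statfs value rpm consults before installing.
    free_blocks=$(stat -f --format=%a /usr 2>/dev/null || echo 0)
    if [ "$free_blocks" -gt 0 ]; then
        rpm -ivh "$PKG" && break
    fi
    echo "attempt $attempt: /usr reports $free_blocks free blocks, retrying" >> /var/log/ssm-install.log
    sleep 10
done
```

One other possibility worth checking on an F5 image is whether /usr is still mounted read-only at the point User Data runs; remounting it read-write (mount -o remount,rw /usr) before the install, and read-only again afterwards, is a common step on BIG-IP, though the post does not confirm this is the cause here.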
To the Cloud! (On a Wing and a Prayer)

Being the incredibly horrible planner I am, I started to order invitations early last week for a party I’m throwing for my wife’s graduation, and it turns out they wanted double the cost of the invitations in overnight shipping! So…I sent evites. It took a day, however, to actually get them out. I started the process but was interrupted by the EC2 outage. I only know that for sure because the evite site I used was very quick to tell me in their error message that the problem was with the “Amazon EC2 Datacenter.” Was Amazon down? Yes. Is it Amazon’s fault the evite site couldn’t deliver? Absolutely not. The only failure that’s really noteworthy is that the issues they faced cascaded beyond a single availability zone and impacted others. That shouldn’t happen—Amazon has some explaining to do on that front. Infrastructure as a service is a platform, not a design. To set it and forget it in EC2 is just begging for problems, as hundreds of app owners found out last week. “The Cloud” is hot, trendy, sexy, whatever you want to call it, but it’s not a panacea. It’s difficult enough to find all the hard and soft points of failure in your own datacenter, and the problem is only exacerbated when most of the systems your application runs on are abstracted and inaccessible, making it harder to isolate problems.

“Everything fails, all the time” --Werner Vogels, CTO, Amazon.com

So for a better experience in deploying applications to the cloud, you must assume that everything will break at every point. That means that multiple availability zones in a single region is probably not a smart move. If your application is mission critical, perhaps even multiple regions with a single vendor is not a smart move. It’s time to stop looking to the cloud as the “easy button” and face reality—you still need people with solid network and systems design skills to get you from an application in the cloud to a cloud application. (A minimal cross-region health-probe sketch follows the links below.)

Resources
EC2 Outage Reactions Showcase Widespread Ignorance Regarding the Cloud
The AWS Outage: The Cloud’s Shining Moment
Three Things We Can Learn from AWS Failure
Cloud Computing Podcast Episode 144
How to “Think Cloud”: Architectural Design Patterns for Cloud Computing

Related Articles
Maybe Ubuntu Enterprise Cloud Makes Cloud Computing Too Easy
On Cloud, Integration and Performance
Lori MacVittie - cloud computing
Cloud Computing: Location is important, but not the way you think
Cloud is the How not the What
Dynamic Infrastructure: The Cloud within the Cloud
Cloud Computing: The Last Definition You'll Ever Need
Infrastructure Matters: Challenges of Cloud-based Testing
Load balancing is key to successful cloud-based (dynamic ...
F5 and the Cloud
Get your SaaS off my cloud
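To make the “assume everything will break” advice a little more concrete, here is a minimal sketch of the kind of cross-region health probe that design style implies: the same application deployed independently in two regions (or providers), each checked from the outside, with the result feeding whatever failover mechanism you actually use (GSLB, DNS update, manual cutover). The endpoint URLs and timeout are illustrative assumptions, not anything from the article.

```bash
#!/bin/bash
# Hedged sketch: probe the same application deployed in two regions/providers
# and report which copies are healthy. The failover action itself (DNS/GSLB
# update) is deliberately left out because it is provider-specific.
ENDPOINTS=(
    "https://app-us-east.example.com/health"   # assumed primary deployment
    "https://app-eu-west.example.com/health"   # assumed secondary deployment
)

for url in "${ENDPOINTS[@]}"; do
    # Any response below HTTP 400 within 5 seconds counts as healthy.
    if curl -fsS --max-time 5 -o /dev/null "$url"; then
        echo "$(date -u +%FT%TZ) HEALTHY   $url"
    else
        echo "$(date -u +%FT%TZ) UNHEALTHY $url"
    fi
done
```

Run from cron or a monitoring system that lives outside either region, even something this simple is enough to tell you when the “redundant” copy you are counting on has itself quietly failed.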
Despite Good Intentions PaaS Interoperability Still Only Skin Deep

Salesforce and Google have teamed up with VMware to promote cloud portability, but like beauty, that portability is only skin deep.

VMware has been moving of late to form strategic partnerships that enable greater portability of applications across cloud computing providers. The latest is an announcement that Google and VMware have joined forces to allow Java application “portability” with Google’s App Engine. It is important to note that the portability resulting from this latest partnership, and from VMware’s previous strategic alliance with Salesforce.com, will be the ability to deploy Java-based applications within Google and Force.com’s “cloud” environments. It is not about mobility, but portability. The former implies the ability to migrate from one environment to another without modification, while the latter allows for cross-platform (or in this case, cross-cloud) deployment. Mobility should require no recompilation, no retargeting of the application itself, while portability may, in fact, require both. The announcements surrounding these partnerships are about PaaS portability and, even more limiting, only for Java-based applications. In and of itself that’s a good thing, as both afford developers a choice. But it is not mobility in the sense that Intercloud as a concept defines mobility and portability, and the choice afforded developers is only skin deep.