In the first article we configured salt-master and created a Cloud Server. In this article we will start building up the Marconi environment and while doing so shape what our salt configuration will look like.
We have two goals in mind. First, we have to be capable of creating several Marconi environments with little effort. As an example, we should have servers under dev, test and production environments managed under one configuration. Taking it a step further, we may have these in different locations. So the ability to manage multiple environments is essential. Second, we will try to build generic configurations (SLS Formulas) that we can use for different projects. For example, we could have a generic firewall formula that will set proper iptables rules on Linux servers based on the role and environment they are in.
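To make the idea of a generic formula concrete, here is a minimal sketch of what such a firewall state might look like, using Salt's built-in `iptables.append` state. The file path (`firewall/init.sls`) and state ID are illustrative, not the formula the series actually builds:

```yaml
# firewall/init.sls — hypothetical sketch of a generic firewall formula
# Allows inbound SSH; a real formula would derive ports from pillar data
# based on the server's role and environment.
allow_ssh:
  iptables.append:
    - table: filter
    - chain: INPUT
    - protocol: tcp
    - dport: 22
    - jump: ACCEPT
    - save: True
```

Because the rule values would come from pillar rather than being hard-coded, the same formula could serve dev, test and production with different port sets.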
At Rackspace, we're working very hard to support the ever-growing mobile platform. We're designing a cloud-based mobile platform for developers and building the next generation of our mobile applications. Over the years we have developed a number of mobile applications to interact with our services, but we recently made a conscious decision to improve them to more "fanatical" standards.
Possibly one of the hardest things about mobile testing is the infrastructure needed to support it. There are various vendors that provide some of this infrastructure, but there are few established best practices for how to build things in-house if you want to do more than run your tests on a local simulator. While we plan to rely on some of the work our friends at Sauce Labs have been cooking up, we also built a sizable chunk of testing infrastructure in-house.
Because of the lack of shared knowledge, we spent a good deal of time figuring things out the hard way. We now want to share with you some of the nasty details that we have overcome in this battle.
If you need to run MySQL on the Rackspace Cloud, you have two fundamental choices: run MySQL on a Cloud Server, or run MySQL as a Cloud Database instance. This naturally raises a few questions: What are the features and benefits of each? Which performs better? Which will be more cost effective? As with every application, the answer is "it depends"; however, the information below should help you make the right choice based on your needs.
One of the cool things we do on the [Cloud Databases](http://www.rackspace.com/cloud/databases/) operations side of the house is come up with statistics that can help us gain insight into hardware performance and identify issues with systems. We use some really cool tools, but one of the most versatile tools we work with is [logstash](http://www.logstash.net/). The goal of this article is to get you started pushing metrics you may already collect to [Graphite](http://graphite.wikidot.com/) with logstash. Along the way, I'll show you how to get started with logstash, test your configuration locally and then start pushing your first metrics to Graphite, with several different examples.
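As a taste of what that pipeline looks like, here is a minimal sketch of a logstash configuration that reads lines from stdin, extracts a number with grok, and ships it to Graphite. The metric name `test.value` and the grok pattern are placeholders for illustration:

```
# logstash.conf — a minimal stdin-to-Graphite sketch
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => "%{NUMBER:value}" }
  }
}
output {
  graphite {
    host    => "localhost"
    port    => 2003
    metrics => { "test.value" => "%{value}" }
  }
}
```

Running logstash with this file and typing numbers on stdin should send them to a Graphite carbon listener on localhost:2003.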
As of July 16th 2015, this client has been updated to use JSON requests ONLY. All XML references have been removed at this time. Several new updates have been introduced:
Note that on July 20, 2015, Rackspace (in line with OpenStack developments) will disable XML support within the Cloud Servers API. All PowerClient users should upgrade to the new release. See the following for more information:
https://github.com/drmmarsunited/rackspacecloud_powershell/wiki
Adding Redis to your application stack is a fantastic way to gain speed with existing applications. Many of our customers aren't running the latest and greatest new hotness NoSQL-using cloud thing. A lot of them port over the full stack of an existing application that once lived only on bare metal servers, or use a hybrid environment with a big MySQL configuration on bare metal and web/app servers in the cloud.
In any case, we advise that customers use caching… EVERYWHERE. Adding Redis to your application stack can greatly improve site speeds when used as a cache.
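The standard pattern for this is cache-aside: check Redis first, fall back to the real data source on a miss, and store the result for next time. Below is a minimal sketch; `cached_query` and `fetch` are illustrative names, and `cache` can be any client exposing Redis-style `get`/`setex` methods (such as redis-py's client):

```python
import json

def cached_query(cache, key, fetch, ttl=300):
    """Cache-aside lookup: return the cached value if present,
    otherwise call fetch() for the real data and cache it with a TTL."""
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)          # cache hit: skip the slow query
    value = fetch()                      # cache miss: hit the database
    cache.setex(key, ttl, json.dumps(value))
    return value
```

The TTL keeps stale data bounded; for write-heavy data you would also delete or update the key when the underlying row changes.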
Getting started with a distributed system like Hadoop can be a daunting task for developers. From installing and configuring Hadoop to learning the basics of MapReduce and other add-on tools, the learning curve is steep.
I’m a fan of giving code snippets together with working demonstrations. I’m much more likely to trust code if I can see it and watch it work, as opposed to just reading it and hoping it still works. Has it been deprecated since it was written? Will it throw warnings? Did the author write this from memory, perhaps never even trying it? With a simple demonstration these questions disappear.
When I was asked to explain some Android features to my colleagues, I planned to compose demo apps with prepared, read-only code snippets. But as you can imagine, just dumping Java code into a TextView was a mess: the formatting was all wrong, and (at least for me personally) reading code without syntax highlighting is a pain. Fortunately, there is a way to fix this.
You can think of message queues as one way to achieve parallel computing in an application. In this post, we dive into the different message queues out there and how to implement a message queue in an application.
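The core idea — producers put work on a queue and a pool of workers consumes it in parallel — can be sketched in a few lines with Python's standard library. The function name `run_workers` and the thread count are illustrative choices, not a specific library's API:

```python
import queue
import threading

def run_workers(jobs, handler, num_workers=4):
    """Fan jobs out to worker threads via a queue and collect results."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()   # queue empty means no work left
            except queue.Empty:
                return
            out = handler(job)
            with lock:                  # results list is shared state
                results.append(out)
            q.task_done()

    for job in jobs:                    # producer: enqueue all work first
        q.put(job)
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A real deployment would swap the in-process queue for a broker (RabbitMQ, Redis, etc.) so producers and consumers can live on different machines, but the shape of the code stays the same.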
Establishing a new SSH connection usually takes only a few seconds, but if you’re connecting to a server multiple times in succession the overhead starts to add up. If you do a lot of Git pushing and pulling or frequently need to SSH to a dev server, you’ve probably felt the pain of waiting for SSH to connect so you can get back to doing work.
One of SSH’s lesser known features is the ability to reuse an already-established connection when creating a new SSH session. This means you only have to pay the connection overhead once, making future sessions incredibly quick to start.