Blog

Recent Posts by Anthony_Green

MITM and Why JS Encryption is Worthless

You build this great web app loaded with JavaScript-based features and a spectacular AJAX setup where content is pulled in instantly as the user wants it. The application is a real work of art.

Your development version works flawlessly and your client loves the new website, but, as always happens, they want a few changes. This button gets moved from the bottom of the page to the top of the page. The order of that menu gets rearranged. Every web designer has experienced these general design and layout changes, which take a bit of time to complete, but are technologically easy and take client satisfaction to a whole new level.

The client then asks about security, and how their customer information is protected. They heard about a friend’s website getting hacked and want to make sure it doesn’t happen to their new beauty. You tell them how the server can only be accessed over an FTP connection via cPanel and how you used this great input filtering class so bad stuff cannot be uploaded. The client misunderstands your conversion of programming jargon into “Real English”, and infers that all the data their customers send is protected.

You know the correct answer is to serve the website over HTTPS and use TLS to encrypt data between the customer’s browser and the server. The problem with this particular client is that they went cheap on hosting and use one of those ultra-discount shared hosting providers, so deploying TLS on their site would add $10 a month to their hosting bill, in addition to the cost of the certificate, its setup and its annual renewal. You know the client is not going to like this added expense, so you run through every option you can think of to protect customer data between the browser and the server. Having done all this great work with JavaScript already, the obvious solution is to use JavaScript to encrypt the customer data in the browser and then just decrypt it on the server.

WRONG!

First, encryption via JavaScript in the browser is nearly worthless. There is a single, very limited situation where encryption can be performed in the browser, and I will discuss it at the end. In general, however, JavaScript-based encryption provides no real security. It does not matter how good an algorithm you use, how long the key is, or whether you use RSA/asymmetric encryption or AES/symmetric encryption: JavaScript-based encryption provides no real security.

The weakness in every situation is the classic man-in-the-middle (MITM) attack. A hacker sits between your client’s server and the customer’s browser, making changes as they see fit and capturing any data they want.

This is how it would work:

  1. Visitor connects to the internet through a compromised router (think unencrypted coffee house wi-fi)
  2. Hacker monitors the request data going through the router and flags the Visitor’s connection as an attack point
  3. Visitor requests a page on the standard, http domain
  4. Hacker intercepts the request and passes it along unadulterated after recording all the data tied to the request (like the URL and cookies)
  5. Server hosting domain builds up web page normally and sends it back to Visitor
  6. Hacker intercepts the response html and makes a change to remove the JavaScript encryption mechanism and records any encryption keys
  7. The Visitor gets the web page which looks perfectly normal and proceeds to enter their address, credit card or sensitive data in a form and send it back
  8. Hacker captures this sensitive data, then mimics the JavaScript encryption mechanism and forwards the correctly encrypted data on to the server
  9. The server hosting the domain decrypts the data and continues on, never realizing someone intercepted the data

This general methodology works in any situation where a response is served over HTTP, without the protections offered by HTTPS via TLS. Unless the HTML response and all JavaScript associated with the browser-based encryption mechanism are served over TLS, there is no guarantee that the end user received the correct algorithm, ran the algorithm and sent only encrypted data back to the server.

This guarantee cannot be short-circuited by serving the JavaScript files over HTTPS but not the original HTML content, as a hacker would either remove the JavaScript or simply change the URL of the JavaScript files to a version they control. Serving the HTML content via HTTPS but not the JavaScript allows the hacker to modify the JavaScript in transit, and will also create mixed-content warnings for the user when the browser sees content being served over both HTTP and HTTPS.

The Crux

The crux of the problem with encryption via JavaScript is disseminating an encryption key that only the user can create, and keeping that key local to the user. Even assuming the key can be securely disseminated to the user via a secondary channel, like email, SMS or even physically offline, how do you keep the key only with the user when they need to use it on various pages of the website?

The short answer is you cannot do it.

Without HTTPS, the developer does not control the authenticity of the content being delivered. If you cannot control the authenticity of the content being delivered, you have no way to make sure additional methods of data extraction are not coupled with your intended methods.

Hashing is not Encryption

Hashing is generally thought of as one-way encryption, while symmetric and asymmetric encryption are viewed as two-way encryption. Hashing, however, has the same issues as two-way encryption when used in the browser. For it to provide any value, the entire connection has to occur over TLS, which largely negates the value hashing was hoped to create.

A few months ago, after giving a talk at Atlanta PHP on API security, I was asked about the concept of hashing a password in the browser, transmitting the digest (hexadecimal version of the hash) to the server and querying the database for the digest. After he broke down exactly what was happening, he realized that the whole process still must occur over TLS and that it provided no more security than transmitting the raw password to the server and having the server do the hashing. From an attack standpoint, the digest simply becomes the password, and due to the performance issues of running JavaScript hashing on a variety of platforms, users will only accept the delay of certain hashing algorithms and a certain number of iterations of the given algorithm.

The gentleman next suggested using a two-factor key as the salt for the hash and sending this new digest to the server. This actually makes the situation less secure, because in order to continually validate the password, the server must store the password in plain text (or symmetrically encrypted, which is only marginally better). If or when the database is hacked, all the passwords are immediately compromised, rather than after the significant delay provided by current key-lengthening techniques with robust hashing algorithms.

I have actually seen another situation where hashing in the browser reduced the overall security of the application. In this circumstance, the username was concatenated to the password, the combination was hashed, and the digest was sent to the server for validation. The app did not even send the raw username, it simply sent the digest. The digest was then queried in the database, and whichever user happened to have that digest as their password became the authenticated user. I should correct that: the active user. Authentication is much too strong a word to describe what was happening. This methodology created a significant reduction in the entropy of the user credentials, allowing for a real chance of digest collisions where User B has the same credentials as User A, and therefore the system thinks User B is User A.

Minimal Value

At the very beginning I mentioned one particular situation where JavaScript encryption (or hashing for that matter) has some minimal value: when the server and the browser communicate 100% over HTTPS, with all the content encrypted in transit and authenticated at the user end, but the JavaScript in the browser must communicate with a third-party server which does not support TLS. In this situation, JavaScript can be trusted to encrypt the data being sent over HTTP to the third-party server, which can then securely decrypt it. This whole setup only makes sense if the third party does not support TLS but your server supports it completely. I have seen this setup once, in a payment processing situation.

LivingSocial applies the third-party server principle to their internal subnet. The browser receives everything over TLS and uses asymmetric encryption to encrypt the customer’s credit card data. The browser then posts this encrypted data to the LivingSocial domain, which is really just an entry point into their internal subnet. The data is then directed all the way to their gateway processor (Braintree) without ever being decrypted within their subnet. This effectively provides their customers full end-to-end encryption of their credit card data, without having to deal with the redirects and other tricks common in the payment processing industry.

JavaScript-based hashing has a different situation where value can be created: weakening brute-force attacks. As I have mentioned before, hashing public form data prior to submitting the data can increase the cost of form spam to spammers, while the hash can be validated on the server at minimal cost.

Summary

Do not expect JavaScript to impart any security to your web application. Encryption and hashing in the browser, for the purposes of security, is a pointless task, and simply results in another case of security theater.

Security Theater Coined: https://www.schneier.com/crypto-gram/archives/2003/1115.html

Living Social Payment Processing: https://www.braintreepayments.com/blog/client-side-encryption/

Defending Against Spambots - Dynamic Fields

One of the things spambots often cannot do is run JavaScript. A simple preventative measure, therefore, is to dynamically create a form field via JavaScript that requires some kind of user interaction to pass the server-side validation.

Initially this concept was applied to a simple check box with the label “Check if you are human.” Spambots would neither create nor check the box, so the presence of the checkbox field in the submission was used to determine whether the form was submitted by a human.

More advanced spambots utilize the V8 JavaScript engine and can mimic the loading of the page, during which the dynamic field is created. The bot then uses this dynamically created DOM as the source from which to pull the form element and the associated field names to be submitted. This level of sophistication is relatively rare in comment-spam bots, but for spambots focused on user account forms (login, password reset and account setup) it is becoming more common due to the increased value associated with bypassing these forms’ validation methodologies.

The big caveat with this defense is that the 10% or so of users who have JavaScript disabled will never see the dynamic field and will submit the form without it, just like a spambot. An alternative to JavaScript-created fields is to use the new HTML5 range input and have the user move the slider from left to right, or to the center, depending on the instructions in the associated label. This only works for newer browsers, but helps reduce some of that 10%.

Request Based Field Names

Merging the underlying concepts behind honey pot fields, form expirations and dynamic fields creates request-based field names. In this situation, every request has a unique set of field names, and the field names are validated against the source of the request. If the field names have been reused, the submission is deemed spam. This forces the form to be fetched individually for every submission, which often isn’t the case with spam bots. Parsing the HTML each time requires significant processing power (from a computer or person) and limits the cost effectiveness of spam, whose value proposition is often based upon volume.
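A minimal sketch of how this could work in PHP, assuming sessions are in use (session_start() has been called) and reusing the $gvServerPublicFormKey from the form expiration example; the function names and the 'f-' prefix are hypothetical:

// minimal sketch: request-based field names derived from a per-request nonce
function requestFieldName($baseName) {
    global $gvServerPublicFormKey;

    // one nonce per rendered form, kept in the session for later validation
    if(empty($_SESSION['form_nonce'])) {
        $_SESSION['form_nonce'] = bin2hex(openssl_random_pseudo_bytes(16));
    }

    // derive a field name unique to this request from the logical name and the nonce
    $raw = $baseName . ':' . $_SESSION['form_nonce'];
    return 'f-' . substr(hash_hmac('sha256', $raw, $gvServerPublicFormKey), 0, 20);
}

function requestFieldValue($baseName) {
    // recompute the expected field name and look it up in the submission;
    // a missing field means the form was reused or never fetched (likely spam)
    $name = requestFieldName($baseName);
    return isset($_POST[$name]) ? $_POST[$name] : FALSE;
}

// after processing a submission, rotate the nonce so the field names cannot be reused
// unset($_SESSION['form_nonce']);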

Defending Against Spambots - CAPTCHAs

CAPTCHA is a backronym for “Completely Automated Public Turing test to tell Computers and Humans Apart” and is generally the bane of any user trying to submit a public form. The concept involves displaying an image containing characters and has the human retype the characters into a text box. Computers are supposed to not be able to understand the characters in the image while humans can easily understand the characters.

This worked well in 1997, when the concept was developed, but advances in image processing have forced the images to become more and more obscured. Adding colors and lines, as well as distorting the shapes of the letters, keeps image processing applications from detecting the text. This obscurity also makes it challenging for anyone with visual impairments to read the text and get the CAPTCHA response correct.

The user experience issues make CAPTCHAs an undesirable solution to spambots, but one that can be implemented when the other solutions are inadequate. UX-focused sites often use a CAPTCHA only in situations where other protections have returned multiple failures but the system does not want to lock out a potentially legitimate user. These are situations like password resets, login screens, account creation and search pages.

Integration of a CAPTCHA solution involves either integrating a third-party source into your form or generating the images yourself. Generating the images locally via an image manipulation library sounds like a good, cheap method of implementing CAPTCHA; however, significant effort has been put into defeating the protection, and everything you can think of doing to the text to prevent analysis while keeping it readable by a human has been reverse-engineered. Good CAPTCHA solutions test their image database against the best analysis tools on a regular basis, eliminating the images those tools defeat. Consequently, homebrew CAPTCHAs are often little better than having no protection at all while providing a noticeable degradation in the user experience.

Integrating a third-party solution generally involves embedding a JavaScript in your form which fetches the image and a unique ID code from the provider’s servers. The user then provides you with the plain text version and you check this, along with the image ID code which was submitted as a hidden form field, with the provider to get a pass or failure response. All of the good CAPTCHA providers have nice clear documentation about this process and attempt to make it as easy as possible to integrate their solution.
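As a rough illustration, the server-side verification step might look something like the sketch below; the endpoint URL, the API key, the field names and the JSON response shape are all hypothetical, since each provider documents its own API:

//! Verify a CAPTCHA response with a third-party provider (minimal sketch)
//! @param string $imageId Hidden form field identifying the CAPTCHA image
//! @param string $userText What the user typed
//! @return bool TRUE if the provider confirms the response
function verifyCaptcha($imageId, $userText) {
    $ch = curl_init('https://captcha-provider.example/api/verify'); // hypothetical endpoint
    curl_setopt_array($ch, array(
        CURLOPT_POST           => TRUE,
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_TIMEOUT        => 5,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'key'      => 'YOUR-API-KEY',   // hypothetical credential
            'image_id' => $imageId,
            'response' => $userText,
        )),
    ));
    $body = curl_exec($ch);
    curl_close($ch);

    if($body === FALSE) return FALSE;

    // assume the provider answers with JSON containing a "success" flag
    $result = json_decode($body, TRUE);
    return !empty($result['success']);
}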

I have avoided CAPTCHAs primarily due to the poor user experience factor. Different combinations of the other methods, especially the hashcash and the Bayesian analysis have provided good protection so far.

Defending Against Spambots - Form Expirations

Humans are inherently slower than computers when it comes to reading and filling out a form. Even a simple login form where everything is auto-completed and you just have to click the “login” button takes a second, while a computer can do it in milliseconds. More complex forms require even more time for a human to read, understand and complete. Recording the timestamp of the form request and requiring the response to occur within a set range makes automatic completion of the form more expensive for a spambot.

The timestamp can be sent along with the normal form fields as a hidden input field, so long as the validity of the timestamp is checked when validating the form submission. The easiest method is an HMAC check with a server-specific key. This also allows additional data, like the requester’s IP address and user agent, to be folded into the timestamp field.

Example Creation and Validation of a Form Timestamp

// globals
$gvServerPublicFormKey = '5f4dcc3b5aa765d61d8327deb882cf99';

//! Create Timestamp Field
//! @return string HTML of timestamp field
function createTimestampField() {
    global $gvServerPublicFormKey;

    // get current unix timestamp
    $t = time();

    // compose raw value as time:IP:user agent
    $data = $t . ':' . $_SERVER['REMOTE_ADDR'] . ':' . $_SERVER['HTTP_USER_AGENT'];

    // generate HMAC hash
    $hash = hash_hmac('sha512', $data, $gvServerPublicFormKey);

    // build hidden input: the field name carries the timestamp, the value carries the hash
    $html = '<input type="hidden" name="ts-' . $t . '" value="' . $hash . '"/>';

    return $html;
}

//! Validate Timestamp Input
//! @param int $min Minimum delay time in seconds @default[5]
//! @param int $max Maximum delay time in seconds @default[1200]
//! @returns bool Returns the validity of the timestamp input field
function validateTimestampInput($min = 5, $max = 1200) {
    global $gvServerPublicFormKey;

    $t = 0;
    $hash = '';

    // find timestamp field
    foreach($_REQUEST as $key => $val) {
        if(strpos($key, 'ts-') !== 0) continue;

        $t = substr($key, 3);

        // validate potential timestamp value: the submission must arrive
        // between $min and $max seconds after the form was created
        if(!$t || intval($t) != $t || $t + $min > time() || $t + $max < time()) {
            continue;
        }

        $hash = $val;
        break;
    }

    // potentially valid timestamp not found
    if(!$hash) return FALSE;

    // generate hash based upon timestamp value
    $data = $t . ':' . $_SERVER['REMOTE_ADDR'] . ':' . $_SERVER['HTTP_USER_AGENT'];
    $correctHash = hash_hmac('sha512', $data, $gvServerPublicFormKey);

    // return validity of hmac hash
    return hash_equals($correctHash, $hash);
}

Defending Against Spambots - Honeypots

Honeypots are a concept taken straight from email spam prevention and come in 2 types: honey pot fields and honey pot forms. Honeypots are basically a very tempting submission location that should never receive real data. Any submissions to the honeypot are automatically labeled as spam.

Honey pot fields are fields within a form that should always be left blank and are indicated as such to the user via a label. When a form is submitted with that field completed, it can be quickly marked as spam, discarded and the submitter fingerprint recorded for tracking. In order to make the field tempting, the field name and field type should be chosen wisely. An input field with a name of “website” and a type of “url” is more tempting to a spambot than an input field with a name of “honeypot” and a type of “text”. Good spambots will detect the field type and name and try to inject appropriate content to bypass automated validation mechanisms.

Example Honey pot field

<style>
  form > div#form_hp {
    position: absolute;
    left: -99999px;
    z-index: -99999;
  }
</style>
<form method="POST" action="">
  <div id="form_hp">
    <label for="another_email">Leave this field blank</label>
    <input id="another_email" name="another_email" type="email" value=""/>
  </div>
  <!-- the real form content -->
</form>

When hiding the honey pot field, the best method is to use embedded CSS to shift the field wrapper off the screen. A good quality bot will check to see which fields are natively displayed and only submit information to those displayed. Fields with “display:none” or “visibility:hidden” can be easily marked as hidden. Even situations where the field itself is absolutely positioned off screen can be detected without too much difficulty. Moving the wrapper off screen via CSS requires considerably more programming to detect, as all the CSS needs to be parsed and applied prior to evaluating the display nature of any field. The CSS should be embedded in the HTML to prevent loading issues where an external CSS file fails to load and the wrapper, along with the honey pot fields, is displayed to the user.

Honey pot forms are entire forms that a real user should never find or submit information to, but that are easily found by automated scripts. Hidden links to the page containing the form are embedded in the footer or header and marked so that they should not be followed by bots. The page then contains a description that clearly states the form should not be used, plus a bunch of tempting fields to submit. Any submission to this form is consequently deemed to come from a bot and appropriate measures are taken. This type of honeypot can be integrated into a web-server-layer filter (via a web application firewall like ModSecurity) where submissions are tracked before they reach the application layer and attacks are mitigated at the web server.

The biggest concern with honey pot forms is search engines and their bots finding the pages and then displaying them in search results. Appropriate steps should be taken to keep bots off the honeypot links via the rel="nofollow" attribute on the hidden links, a robots meta tag with a "noindex" value in the HTML head section of the form page, and clear text on the page saying not to submit the form.

Defending Against Spambots - Request & Response Header Validation

Taking a step back from the HTML side is validation of the HTTP headers of the request for the form HTML and of the headers that accompany the posting of the form values. Running some basic checks on the HTTP headers can provide an early warning of the presence of a spambot.

Before serving the form HTML, the server validates that the request has the headers a real browser would send. If the “Host”, “User-Agent” or “Accept” headers are not sent, it is likely a cURL attempt to access the web page, and therefore a script to harvest and attack the form. This provides some basic security by obscurity, and as such should be viewed as an attack-limiting approach, not actual security of the form. An attacker can just as easily open the actual page in a web browser, cut and paste the HTML into their script and attack the form. Limiting the display of the form simply limits how much of this process can be done via scripts, particularly poorly written spambots.

The other side of the coin is the headers sent when the form is posted back. In addition to checking for the headers required when initially serving the form, you should also check for the correct HTTP method (GET vs POST), the “Cookie” header (if applicable) and the “Referer” header (yes, the Referer header is misspelled in the HTTP specification). A real browser will never mix up the HTTP method and switch between GET and POST, while a spambot may default to the wrong method. Bots are also often mediocre at managing cookies, so the lack of a cookie header can be indicative of a spambot, or of a paranoid user.

The “Referer” header should not be used conclusively to determine whether the page was sent from a web browser. Some internet security suites and browser plugins mess with the “Referer” header, either erasing it or replacing it with the destination domain. Further, pages not served over TLS should not receive the “Referer” header when the visitor arrives from a page served over TLS. (Forms served over TLS should never post to a page not served over TLS anyway.) Lastly, the HTML5 standard includes a “referrer” meta tag that can be set to “no-referrer”, in which case the browser is not supposed to send the Referer header from the source page.

The last check that should be performed is geolocation of the source IP address within the context of the form. For example, if the form is for online ordering at a pizzeria in Chicago, a request or submission from an IP address geolocated in Australia has a very low probability of being legitimate.

There is one caveat to filtering based upon IP addresses: VPNs and proxies. Smartphones in particular should be the biggest concern for most implementations, since the IP address of a phone on a mobile network is often geolocated to the carrier’s corporate headquarters rather than the location of the user.
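A geolocation check could be sketched roughly as below, assuming the PECL geoip extension is available; any geo-IP database would work the same way, and for the reasons above the result should be treated as a soft signal rather than a hard block:

//! Flag submissions from outside the expected countries (minimal sketch)
//! @param array $allowedCountries Two-letter country codes considered plausible
//! @return bool TRUE if the request IP looks plausible for this form
function checkRequestGeolocation($allowedCountries = array('US')) {
    // cannot evaluate without the geoip extension or a remote address
    if(!function_exists('geoip_country_code_by_name') || empty($_SERVER['REMOTE_ADDR'])) {
        return TRUE; // fail open; this is a soft signal, not a hard block
    }

    $country = @geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);

    // unknown IPs (private ranges, new allocations) should not be hard-blocked either
    if($country === FALSE) return TRUE;

    return in_array($country, $allowedCountries, TRUE);
}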

Example HTTP Header Validation Function

//! Checks for HTTP Request Headers of a submitted form page
//! @param string $formUrl URL of the form page
//! @return bool Returns TRUE if all the checks passed, else FALSE
function checkRequestHeaders($formUrl = '') {
    // make sure $formUrl is a legit url
    if(!empty($formUrl)
        && !filter_var($formUrl, FILTER_VALIDATE_URL, FILTER_FLAG_SCHEME_REQUIRED | FILTER_FLAG_HOST_REQUIRED)
    ) {
        return FALSE;
    }

    // verify presence of basic headers
    if(empty($_SERVER['HTTP_USER_AGENT'])
        || empty($_SERVER['REMOTE_ADDR'])
        || empty($_SERVER['HTTP_ACCEPT'])
        || empty($_SERVER['HTTP_ACCEPT_LANGUAGE'])
    ) {
        return FALSE;
    }

    return TRUE;
}

A complete list of HTTP header fields can be found on Wikipedia:

https://en.wikipedia.org/wiki/List_of_HTTP_header_fields

W3C Referrer Policy Working Draft

https://www.w3.org/TR/referrer-policy/

Defending Against Spambots - Field Specificity & Validation

Field specificity and absolute validation of the replied values should be the first level of defense. Whenever a public form is created, you create the inputs with as much specificity as possible, then validate strictly against this specificity. The HTML5 specification has made this much easier with the expansion of the types of input fields.

For example, if you are asking for a user's age, use an input with 'type="number"' and 'step="1" min="5" max="120"' instead of a simple 'type="text"'. This forces the user to input an integer between 5 and 120 (the range of legitimate possible ages of a user); otherwise the form field should indicate it is an illegal value and prevent submission of the form. Then on the server side, you validate strictly against these criteria, immediately tossing any submission that contains an invalid value. There is an added bonus: the error messages for HTML5-compliant browsers don't need to be as robust, since the user should already have received an error when they first attempted to fill in the field.

Example Validation Function

//! Validate input value of a Number Input
//! @param string $input Inputted value
//! @param int $min Minimum Value @default[0]
//! @param int $max Maximum Value @default[100]
//! @param string $step Incremental increase between minimum and maximum value @default[1]
//! @success string Returns inputted value on success (including potentially 0)
//! @failure FALSE Returns FALSE on validation failure
function validateInputNumber($input, $min = 0, $max = 100, $step = 1) {
    // verify all inputs are numbers
    if(!is_numeric($input)
        || !is_numeric($min)
        || !is_numeric($max)
        || !is_numeric($step)
    ) {
        return FALSE;
    }

    // verify $input is within appropriate range
    if($input < $min || $input > $max) return FALSE;

    // check that $input is at a valid step position
    $inc = ($input - $min) / $step;
    if($inc != intval($inc)) return FALSE;

    // all checks passed, return $input
    return $input;
}

// example pass ($input == '32.5')
$input = validateInputNumber('32.5', 0, 100, 2.5);

// example fail ($input === FALSE)
$input = validateInputNumber('32', 0, 100, 2.5);

A complete list of the HTML5 input fields can be found at MDN:

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input

Defending Against SpamBots

SPAM is THE four-letter word of IT. Nothing makes users, developers and IT managers more annoyed than filtering this frivolous data coming into their systems. Email spam has some really good utilities that can block a large amount of the unwanted messages while having relatively low false positive and false negative rates. Most of these utilities are so mature that you simply install them, configure a few settings and generally forget about them, with the filter taking care of everything.

Comment or form spam, on the other hand, does not have a drop-in solution because of the level of integration a form has within the larger system. The field types and names of each form vary drastically, compared to the MIME headers of an email. Drop-in solutions have been attempted for form spam, but they often have limited success when run independently of more integrated methods.

The various form spam prevention methods can be grouped into one of 10 general categories.

Field Specificity & Validation

Field specificity and absolute validation of the replied values should be the first level of defense. Whenever a public form is created, you create the inputs with as much specificity as possible, then validate strictly against this specificity. The HTML5 specification has made this much easier with the expansion of the types of input fields.

Request & Response Header Validation

Taking a step back from the HTML side is validation of the HTTP headers of the request for the form HTML and of the headers that accompany the posting of the form values. Running some basic checks on the HTTP headers can provide an early warning of the presence of a spambot.

Honeypots

Honeypots are a concept taken straight from email spam prevention and come in 2 types: honey pot fields and honey pot forms. Honeypots are basically a very tempting submission location that should never receive real data. Any submissions to the honeypot are automatically labeled as spam.

Form Expirations

Humans are inherently slower than computers when it comes to reading and filling out a form. Even a simple login form where everything is auto-completed and you just have to click the “login” button takes a second, while a computer can do it in milliseconds. More complex forms require even more time for a human to read, understand and complete. Recording the timestamp of the form request and requiring the response to occur within a set range makes automatic completion of the form more expensive for a spambot.

Dynamic Fields

One of the things spambots often cannot do is run JavaScript. A simple preventative measure, therefore, is to dynamically create a form field via JavaScript that requires some kind of user interaction to pass the server-side validation. This can be as simple as a check box that the user needs to check to indicate they are human or a slider that needs to be moved to a specific position.

Request Based Field Names

Merging the underlying concepts behind honey pot fields, form expirations and dynamic fields creates request-based field names. In this situation, every request has a unique set of field names, and the field names are validated against the source of the request. If the field names have been reused, the submission is deemed spam. This forces the form to be fetched individually for every submission, which often isn’t the case with spam bots. Parsing the HTML each time requires significant processing power (from a computer or person) and limits the cost effectiveness of spam, whose value proposition is often based upon volume.

CAPTCHAs

CAPTCHA is a backronym for “Completely Automated Public Turing test to tell Computers and Humans Apart” and is generally the bane of any user trying to submit a public form. The concept involves displaying an image containing characters and has the human retype the characters into a text box. Computers are supposed to not be able to understand the characters in the image while humans can easily understand the characters.

Hashcash

Hashcash is an iterative hashing scheme that requires the client (i.e., the web browser) to repeatedly hash a set of data (the serialized form fields plus a changing nonce) until the output clears a bitmask. The iterative nature of hashcash requires the web browser to expend a measurable amount of work to find the correct output, while the server simply needs to take the inputs and perform the hash once.
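A rough sketch of the server-side check is below; it assumes the browser iterates a nonce until the SHA-256 hash of the serialized fields plus the nonce starts with a chosen number of zero bits, and the serialization and difficulty are assumptions that must match the client-side JavaScript:

//! Verify a hashcash proof-of-work submitted with the form (minimal sketch)
//! @param array  $fields   Submitted form fields, excluding the nonce
//! @param string $nonce    Value the browser iterated until the bitmask cleared
//! @param int    $zeroBits Number of leading zero bits required (difficulty)
//! @return bool TRUE if the proof of work checks out
function verifyHashcash($fields, $nonce, $zeroBits = 16) {
    // serialize the form fields exactly as the client-side JavaScript did
    ksort($fields);
    $data = http_build_query($fields) . ':' . $nonce;

    // one hash on the server; the client needed many attempts to clear the bitmask
    $hash = hash('sha256', $data, TRUE); // raw binary output

    // verify the first $zeroBits bits of the digest are zero
    $fullBytes = (int) ($zeroBits / 8);
    for($i = 0; $i < $fullBytes; $i++) {
        if(ord($hash[$i]) !== 0) return FALSE;
    }
    $remainder = $zeroBits % 8;
    if($remainder && (ord($hash[$fullBytes]) >> (8 - $remainder)) !== 0) {
        return FALSE;
    }
    return TRUE;
}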

Blacklists & Keyword Filtering

Blacklists and keyword filtering involve running regular expressions against the submitted content to extract HTML tags, URLs, email addresses and specific keywords. The results of the regular expressions are checked against a blacklist of banned results, with any match indicating a spammy submission. This method is strictly dependent upon the quality and completeness of the blacklist database.
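A minimal sketch of the idea; the regular expressions and the blacklist arrays are only placeholders for a maintained database:

//! Check submitted text against simple blacklists (minimal sketch)
//! @param string $text Submitted content
//! @return bool TRUE if the content looks spammy
function isBlacklisted($text) {
    $bannedDomains  = array('example-spam.com', 'cheap-pills.example'); // placeholder list
    $bannedKeywords = array('viagra', 'casino bonus');                  // placeholder list

    // raw html tags in a plain-text field are suspicious on their own
    if(preg_match('/<[a-z][^>]*>/i', $text)) return TRUE;

    // extract urls and compare their hosts against the banned domains
    if(preg_match_all('#https?://([^/\s"\']+)#i', $text, $matches)) {
        foreach($matches[1] as $host) {
            if(in_array(strtolower($host), $bannedDomains, TRUE)) return TRUE;
        }
    }

    // simple keyword scan
    foreach($bannedKeywords as $keyword) {
        if(stripos($text, $keyword) !== FALSE) return TRUE;
    }

    return FALSE;
}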

Bayesian Analytics

Bayesian analysis is the basis for most of the good email spam filters. The overall principle is to run statistical analysis on the response headers and the posted values against a database of known good content and known spam. The Bayesian analysis outputs a probability that the content is spam, which is then compared against a set threshold, and the submission is discarded if the probability is too high. Bayesian analysis can be the most effective method since it is based upon the actual content of the form submission, but its effectiveness is highly dependent upon the training against good and bad content. It is also by far the most complex to implement and requires the most resources to run.
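As a very rough sketch of the classification step only (training, which produces the per-token probabilities from known good and spam submissions, is omitted):

//! Combine per-token spam probabilities into one score (minimal sketch)
//! @param string $text               Submitted content
//! @param array  $tokenProbabilities Map of token => probability the token appears in spam
//! @return float Probability the submission is spam (0..1)
function spamProbability($text, $tokenProbabilities) {
    // tokenize on anything that is not a letter or digit
    $tokens = preg_split('/[^a-z0-9]+/i', strtolower($text), -1, PREG_SPLIT_NO_EMPTY);

    $spamProduct = 1.0;
    $hamProduct  = 1.0;
    foreach($tokens as $token) {
        // unknown tokens are treated as neutral
        $p = isset($tokenProbabilities[$token]) ? $tokenProbabilities[$token] : 0.5;
        $spamProduct *= $p;
        $hamProduct  *= (1.0 - $p);
    }

    // naive Bayes combination of the individual probabilities;
    // a real implementation should work in log space to avoid underflow on long texts
    if($spamProduct + $hamProduct == 0) return 0.5;
    return $spamProduct / ($spamProduct + $hamProduct);
}

// discard the submission if the returned probability exceeds a chosen threshold, e.g. 0.9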

The source code required to implement some of these methods can be long and a little complex. So, over the next month, I will be publishing posts with more details on how to implement each of these protections as well as some notes on when each methodology should be implemented.

CSS for Mobile Devices - Media Query Essentials

In the modern age of web design, understanding media queries is essential if you want a website to be functional on multiple device platforms. Media queries allow you to conditionally apply CSS selectors based upon the viewport, screen size and resolution.

Implementation Theory - Smallest to Largest

The most common implementation theory for responsive website design is the mobile-first approach. The concept is that you start with the smallest screen and build your core CSS file for those dimensions. Then, for each subsequently larger screen size, you add media queries and additional CSS selectors. Applying selectors from the smallest to largest screen size allows you to minimize the bandwidth requirements for smartphones by including only small, low-resolution images while allowing large, high-resolution images to be included on larger screens.

Media Query Syntax

Media queries are implemented using the @media rule followed by the constraints. The CSS selectors to be applied by the media query are grouped within curly brackets, just like any other CSS selector.

@media screen and (max-width: 461px) {

  body { background-color:red; }

}

This would change the background of the body element to red on screens 461px wide or narrower.

If all of the constraints are true, the media query evaluates to true and the enclosed selectors are applied to the page. Individual constraints can be coupled together with the "and" keyword to require multiple constraints to be true. Multiple sets of constraints can be combined in a comma-separated list (just like multiple CSS selectors) to create a logical OR. Unless otherwise specified, the "all" media type is assumed for every media query, which means an empty constraint set is the same as having no media query wrapper at all: the selectors are applied in all situations.
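For example, the following (arbitrary) query applies its selector either on screens at least 768px wide or on landscape print output:

@media only screen and (min-width: 768px), print and (orientation: landscape) {

  body { font-size: 1.1em; }

}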

Constructing Media Queries

Media query constraints use some of the CSS properties and add a few more to be applicable on the device level. Note, constraints do not always need a value to be applied.

Viewport

The viewport is the box in which the page is constructed. This is not always the same as the device width or the browser width. Users can set automatic zoom features which change the ratio between the browser width and the viewport, causing web developers headaches. You can set a scaling ratio for the viewport using the viewport meta tag in the HTML head section.
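A typical declaration, placed in the head of the document, looks like this:

<meta name="viewport" content="width=device-width, initial-scale=1">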

Constraints

Constraint Value* Effect
all - Apply to all media types. This is the default behavior of any media query, so it is only needed when using complex constraints.
screen - Apply to only screen media types
print - Apply to only print media types. This is useful for creating a custom layout when a visitor wants to print the site
handheld - Apply to only handheld media types.
only - Limits application of the media query, particularly in the situation of older browsers which do not properly support the queries, as these browsers do not recognize the keyword, causing an error in the processing.
not - Apply to all situations except the identified one.
width px/em/cm Limit to browsers with a specific RENDERING width. This turns out to be less useful than min-width or max-width.
min-width px/em/cm  Limit to browsers with a RENDERING width of at least the set amount. Used when applying media queries from smallest to largest.
max-width px/em/cm Limit to browsers with a RENDERING width up to set amount. Used when applying media queries from largest to smallest, or to further constrain selectors when used with min-width.
height px/em/cm Limit to browsers with a specific RENDERING height. This turns out to be less useful than min-height or max-height. Height is not often used since width can often dictate the specific device and height becomes less important for vertically scrolling pages.
min-height px/em/cm Limit to browsers with a RENDERING height of at least the set amount.
max-height px/em/cm Limit to browsers with a RENDERING height up to set amount.
device-width px/em/cm Limit to browsers with a specific SCREEN width. This turns out to be less useful than min-device-width or max-device-width.
min-device-width px/em/cm Limit to browsers with a SCREEN width of at least the set amount. Used when applying media queries from smallest to largest.
max-device-width px/em/cm Limit to browsers with a SCREEN width up to set amount. Used when applying media queries from largest to smallest, or to further constrain selectors when used with min-device-width.
device-height px/em/cm Limit to browsers with a specific SCREEN height. This turns out to be less useful than min-device-height or max-device-height.
min-device-height px/em/cm Limit to browsers with a SCREEN height of at least the set amount.
max-device-height px/em/cm Limit to browsers with a SCREEN height up to set amount.
orientation portrait landscape Limit to browsers with a particular orientation. This is effectively only used when dealing with mobile devices, which are orientation conscious.
aspect-ratio ratio Limit to a ratio between the "width" and the "height" values. 
min-aspect-ratio ratio Limit to a minimum ratio between the "width" and the "height" values.  
max-aspect-ratio ratio Limit to a maximum ratio between the "width" and the "height" values.
device-aspect-ratio ratio Limit to a ratio between the "device-width" and the "device-height" values. Common values include 1/1, 4/3, 5/3, 16/9, 16/10.
min-device-aspect-ratio ratio Limit to a minimum ratio between the "device-width" and the "device-height" values. 
max-device-aspect-ratio ratio Limit to a maximum ratio between the "device-width" and the "device-height" values.
resolution dpi/dpcm Limit to devices with a specified resolution. dpi = dots per CSS inch, dpcm = dots per CSS centimeter.
min-resolution dpi/dpcm Limit to devices with a minimum resolution.
max-resolution dpi/dpcm Limit to devices with a maximum resolution.
color -/integer Limit to a specific color depth per component. For example, 0 would indicate monochrome while 2 would indicate 8 bit colors (256-color palette) and 8 indicates the standard full RGB palette.
min-color integer Limit to a minimum color depth per component.
max-color integer Limit to a maximum color depth per component.
color-index -/integer Limit to a specific total color depth. For example, 1 would be monochrome while 8 would indicate 8 bit colors (256-color palette) and 24 indicates the standard full RGB palette.
min-color-index integer Limit to a minimum total color depth. This is most effective for displaying different background images based upon the displayable colors, saving bandwidth on monochrome and greyscale displays.
max-color-index integer Limit to a maximum total color depth.
monochrome -/integer Limit to a specific greyscale color depth on a monochrome device. This is valuable when creating a custom display for printing out the page.
min-monochrome integer Limit to a minimum greyscale color depth.
max-monochrome integer Limit to a maximum greyscale color depth.
scan progressive interlace Limits to TV media with progressive scanning or interlace scanning. Seldom used.
grid -/0/1 Limit to displays running on a pure grid. Seldom used.

* Dashes (-) indicate the value can be omitted and still work fine.

Legacy and Browser-Specific Constraints

Legacy Constraint Browser Modern Constraint
-moz-images-in-menus Firefox 3.6+ none; Used to determine if images can appear in menus. Accepts 0/1. Corresponds to constraint "-moz-system-metric(images-in-menus)".
-moz-mac-graphite-theme Firefox 3.6+ none; Used to determine if user is using the "Graphite" appearance on Mac OS X. Accepts 0/1.Corresponds to constraint "-moz-system-metric(mac-graphite-theme)".
-moz-device-pixel-ratio -webkit-device-pixel-ratio Firefox 4-15 resolution
-moz-os-version Firefox 25+ none; Used to determine which operating system is running the browser. Currently only implemented on Windows, with values of "windows-xp", "windows-vista","windows-win7","windows-win8"
-moz-scrollbar-end-backward Firefox 3.6+ none; Used to determine if user's interface displays a backward arrow at the end of the scrollbar. Accepts 0/1. Corresponds to constraint "-moz-system-metric(scrollbar-end-backward)".
-moz-scrollbar-start-forward Firefox 3.6+  none; Used to determine if user's interface displays a forward arrow at the start of the scrollbar. Accepts 0/1. Corresponds to constraint "-moz-system-metric(scrollbar-start-forward)".

Screen Sizes

Device Display (WxH) Viewport (WxH) Resolution Render
iPhone 2G, 3G, 3GS 320x480 320x480   163 dpi 1 dppx
iPhone 4, 4S 640x960 320x480  326 dpi 2 dppx
iPhone 5, 5C, 5S 640x1136 320x568 326 dpi 2 dppx
iPhone 6 750x1334 375x667 326 dpi 2 dppx
iPhone 6 Plus 1080x1920 414x736 401 dpi 3 dppx
iPad, iPad 2 768x1024 768x1024  132 dpi 1 dppx
iPad Air, iPad Air 2 1536x2048 768x1024 264 dpi 2 dppx
iPad mini 2, 3  1536x2048 768x1024 326 dpi 2 dppx
iPad mini 768x1024 768x1024  163 dpi 1 dppx
iMac 2560x1440 2560x1440 109 dpi 1 dppx
iMac Retina 5120x2880 5120x2880 218 dpi 1 dppx
MacBook Pro Retina -13.3" 2560x1600 1280x800 227 dpi 2 dppx
MacBook Pro Retina -15.4" 1800x2880 900x1440 220 dpi 3 dppx
Galaxy Nexus 720x1280 720x1280 316 dpi 1 dppx
Galaxy Mini 2 320x480 320x480 176 dpi 1 dppx
Galaxy S3 720x1280 360x640 306 dpi 2 dppx
Galaxy S4 1080x1920 360x640 441 dpi 3 dppx
Galaxy S5 1080x1920 360x640  432 dpi 3 dppx
Galaxy Tab 7 Plus 600x1024 600x1024  169 dpi 1 dppx
Galaxy Tab 8.9 800x1280 800x1280  169 dpi 1 dppx
Galaxy Tab 10.1 800x1280 800x1280 149.45 dpi 1 dppx
Google Nexus 4 768x1280 768x1280  318 dpi 1 dppx
Google Nexus 5 1080x1920 360x640 445 dpi 3 dppx
Google Nexus 6 1440x2560 1440x2560  493 dpi 1 dppx
Google Nexus 7 1200x1920 600x960 323 dpi 2 dppx
Google Nexus 9 1536x2048 1536x2048  288 dpi 1 dppx
Google Nexus 10 1600x2560 800x1280 300 dpi 2 dppx
HTC Evo 480x800 480x800 217 dpi 1 dppx
HTC One V 480x800 480x800 252 dpi 1 dppx
HTC One X 720x1280 720x1280 312 dpi 1 dppx
HTC One 1080x1920 360x640 469 dpi 3 dppx
HTC One Mini 720x1280 720x1280  342 dpi 1 dppx
HTC One Max 1080x1920 1080x1920  373 dpi 1 dppx
HTC Pure 480x800 480x800  292 dpi 1 dppx
HTC Desire Z, T-Mobile G2 480x800 480x800 252 dpi 1 dppx
Blackberry Q5, Q10 720x720 360x360 330 dpi 2 dppx
Blackberry Z10 768x1280 384x640 356 dpi 2 dppx
Blackberry Z30 720x1280 360x640 295 dpi 2 dppx
Blackberry Passport 1440x1440 1440x1440 453 dpi 1 dppx
Lumia 520, 521 480x800 480x800 233 dpi 1 dppx
Lumia 620 480x800 480x800 246 dpi 1 dppx
Lumia 625 480x800 480x800 199 dpi 1 dppx
Lumia 720, 820, 822 480x800 480x800 217 dpi 1 dppx
Lumia 920, 928, 1020 768x1280 480x800 332 dpi 1.6 dppx
Moto X 720x1280 360x640 312 dpi 2 dppx
Moto G 720x1280 360x640 326 dpi 2 dppx
Kindle Fire 600x1024 600x1024 169 dpi 1 dppx
Kindle Fire HD - 7" 800x1280 800x1280 216 dpi 1 dppx
Kindle Fire HD - 8.9" 1200x1920 1200x1920 254 dpi 1 dppx
Kindle Fire HDX - 8.9" 1600x2560 1600x2560 339 dpi 1 dppx
Kindle Fire HDX - 7" 1200x1920 1200x1920 323 dpi 1 dppx
Surface 768x1366 768x1366 148 dpi 1 dppx
Surface 2, Pro, Pro 2 1080x1920 1080x1920 208 dpi 1 dppx
Surface Pro 3 1440x2160 1440x2160 216 dpi 1 dppx
Yoga 2 Pro 1800x3200 1800x3200 276 dpi 1 dppx
ThinkPad Edge E531 1920x1080 1920x1080 141 dpi 1 dppx
IdeaPad U310 1366x768 1366x768 118 dpi 1 dppx
UltraSharp UP2414Q 3840x2160 3840x2160 185 dpi 1 dppx
UltraSharp U2412M 1920x1200 1920x1200 94 dpi 1 dppx
UltraSharp U2414H 1920x1080 1920x1080  93 dpi 1 dppx

If you set the viewport scale to 1, the display dimension is the maximum size of an image you want to include while the viewport dimension is the one you use for your media queries.

In Practice

It is not practical to create a media query for every single device, specifying its particular dimensions and resolution. Creating a set of breakpoints allows you to style a group of devices instead of each individual device or screen. Also, setting the viewport scale to 1 collapses the artificial display dimensions into a handful of sizes. Adding a min-resolution constraint allows special styling for high-resolution smart phones.
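A mobile-first skeleton might look roughly like this; the breakpoint values are only examples, not the ones used in our framework:

/* base (mobile) styles live outside any media query */

@media screen and (min-width: 768px) {
  /* tablets and up: wider layout, larger images */
}

@media screen and (min-width: 1200px) {
  /* desktops and up: full layout, high-resolution assets */
}

@media screen and (min-resolution: 192dpi) {
  /* high-density screens: swap in 2x background images */
}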

To assist in the development process, we start with a mobile framework which already has the major break points identified.

Download our framework

Reference

The specifications for media queries can be found at:

Managing the Postfix Mail Queue

Postfix is one of the most common open-source mail transfer agents (MTAs), and is the one we run for ourselves and our clients. Like just about every other MTA, once Postfix accepts an email from any source, it places the email in a queue. Another Postfix process then runs through the queue and handles each email according to the Postfix settings (typically delivering it to a local mailbox, handing it to SpamAssassin, or relaying it to an outside mail server).

Types of Mail Queues

  • maildrop The maildrop queue is the temporary queue for all incoming mail from the local server. Messages submitted directly to the maildrop or sendmail scripts are placed here until they can be added to the active queue.
  • hold The hold queue is basically a quarantine queue created via access restrictions in the Postfix settings. Normally this queue is not used unless explicitly set up by the admin.
  • incoming Arriving emails are placed in the incoming queue immediately upon arrival. Normally once an email hits the incoming queue it is moved to the active queue, however this depends upon the resources available to the active queue.
  • active Emails that the queue manager has opened and scheduled for delivery are placed in the active queue. This is the only in-memory queue; the other queues operate as files on the hard disk.
  • deferred Emails which could not be delivered but were not bounced are placed in the deferred queue for future delivery attempts.

 

Mail Queue Operations

Viewing a Queue

The mail queue can be viewed with the mailq command.

~ mailq

~ QUEUE_ID MSG_SIZE ARRIVAL_TIME SENDER RECIPIENT,RECIPIENT...

The mailq command outputs a table with the queue ID, message size, arrival time, sender and outstanding recipients.

 

Flush Queue

Flushing the queue forces the queue manager to attempt to process and deliver every message in the queue. Unless the active queue has crashed, you will typically only flush the deferred or hold queues, since the other queues seldom hold messages for longer than a few seconds.

~ postfix flush

 

Clear Queue

Clearing the queue forces the mail manager to delete all the messages in the particular queue.

~ postsuper -d QUEUE

Substitute QUEUE for the mail queue you want to clear, or 'ALL' to delete all the messages in all the queues.

Fixing Stored User Names on Chrome

Recently a client had a problem logging in. She had the correct username, and the password was correct because she had just reset it. When I tried, after changing the password to something I knew, I logged in with no problem. After repeating the process about a dozen times, and digging through way too many debugging files, the problem turned out to be a single space at the end of her username. This is a common problem I have run into myself when copying and pasting usernames or passwords. I told her about the problem and she tried again, but got the same result: authentication failed.

The real problem turned out to be a saved password in Chrome. Chrome had saved her username with an additional space, and when she went to enter her username, it automatically filled in the saved one. Since a trailing space is invisible in the text box, the only way she could tell it was happening was to click at the end of the text box and then delete the extra space. Chrome, while trying to be smart, was actually causing the authentication failure.

While saving passwords in your browser is a bad idea, occasionally it is necessary, or you just happen to hit the "Save" button instead of the "Don't Save" button. No matter the reason for the saved username and password, getting rid of the bad combination is more challenging than it should be. In my client's situation, she had to first remove the password before she could remove the erroneous username.

Removing a Saved Password

  1. Click on the Menu button to the right of the address bar
  2. Click "Settings"
  3. Go down the list until you get to the "Passwords and forms" section. If it doesn't appear, click on the link "Show advanced settings" to display all of the settings
  4. Click on "Manage passwords" link
  5. Find the website where you want to remove the saved password and click the little "X" to the right of the appropriate username.
  6. Click "Done" and exit the settings. For security reasons, you should remove all the passwords and uncheck the box on the Settings page "Offer to save your web passwords." This will prevent Chrome from asking you to save a password in the future.

Removing the Username

  1. Right click on the text box for the username. This should bring up a list of usernames previously entered into the text box.
  2. Press the down arrow key until you select the username you want to delete.
  3. Press Shift+Delete (for Windows & Linux) or Shift+Fn+Delete (Mac) to remove the username from the saved list

 

What is a Timing Attack

A timing attack is a cryptographic attack where a hacker uses the difference in processing time between two actions to gain information. Individually, the information gained is the amount of time it took to run the process with a particular set of input. Performed once, the processing time is insignificant; performed in a series, however, the hacker can adjust the input (i.e., the password for a particular username) and use the increase or decrease in time to drastically reduce the potential search space.

Normally, when software compares two strings it starts with the first character of each string and determines if the characters are the same. If the first characters are the same, it moves to the second characters and tests them. This process continues until two characters are found to be different or one string runs out of characters. Conceptually, if it took 1 second to compare two 1000 character strings and it took 1 millisecond to determine the first character, it would be useful to know that it took 54 milliseconds to compare two strings. The implication is the 54th character is the first character different between the two strings.

In a cryptographic sense, the hacker would only know one of the two strings: the password they submitted. If the password on the server is properly secured and hashed in the database, then every failed password will take roughly the same path through the system, with the comparison between the stored hash and the hashed version of the submitted password being the only real difference. Systematically changing the input password while comparing response times could allow the hacker to break the password in a practical amount of time by reducing the password combinations exponentially as each character is determined.

Fixes

Some programming languages and operating systems have special functions for comparing password hashes (PHP's hash_equals(), for example). Outside of these special functions, there are three ways to disrupt a timing attack.

1) Rehash the hashes

By hashing the hashes and comparing the doubly hashed values, you are performing the comparison of two "strong password" style strings rather than the potentially "weak password" style strings. This shift drastically increases the number of potential possibilities, increasing the time it takes to perform the attack, potentially to impractical levels.
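A minimal sketch of the idea in PHP (the choice of SHA-256 is arbitrary):

// compare digests of the hashes instead of the hashes themselves, so any timing
// leak reveals characters of the digests rather than of the stored value
function compare_rehashed($known, $input) {
    return hash('sha256', $known) === hash('sha256', $input);
}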

The problem with this approach is that you are only reducing the usefulness of the leaked information, not completely eliminating it. Depending on the computing power available to the hacker, this approach can theoretically still be overcome.

2) Inject a random delay

Inserting a random microsecond delay in the processing would distort the leaked information, making the information insignificant.

The problem with this approach is the speed of your own processing. This approach depends on the system having microsecond-scale processing time. If, for example, the comparison operation took tens of microseconds and you were adding between 0 and 9 microseconds, you have only distorted, not hidden, the leaked information. This means you would have to adjust the random factor to the processing speed of the current system, or set the range so high that you would be wasting resources (i.e., making the script run tremendously longer than necessary).

3) Manually compare the entire strings

Because the basic string comparison operator stops at the first difference, manually running through every character in the string (via a for loop and direct access to each character) removes the time difference created by the simple string comparison operator. Once a different character is identified, a flag is set and the comparison continues until every character has been compared.

This is the preferred method and the basis for any of the special functions for password comparison.

In PHP this function would be roughly

function compare_hashes($hash1, $hash2) {
    // only compare non-empty strings
    if( !is_string($hash1)
            || !is_string($hash2)
            || empty($hash1)
            || empty($hash2)
           ) {
        return FALSE;
    }

    // lengths must match
    if( ($len = strlen($hash1)) != strlen($hash2) ) {
        return FALSE;
    }

    // compare every character, accumulating any difference in a flag
    $flag = 0;
    for( $i = 0; $i < $len; $i++) {
        $flag |= ord($hash1[$i]) ^ ord($hash2[$i]);
    }
    return (!$flag);
}

 

Other Notes

In a practical sense, timing attacks are less of an issue than a copy of the password table from the database being leaked. Normal methods of blocking brute force attacks defeat most of the risks associated with Timing Attacks, but the ease of the fix is such that implementing a proper hash comparison is worth it.

All Hail King Content

This is the second part of our series on effective blogging. The series starts with "Managing Your Blog – The 5 Cs."

"Content is King" is a common sediment of digital marketing folks. You see it everywhere; if you want to have a successful blog, you must create good original content. It is easy to understand what original means, but what is "good content" and if you are starting out, how do you make sure you are creating good content? We all are not authors, so it may not necessarily be great content, but at least good content.

Like other small business owners, I was forced to deal with this problem, so I let my science background take over and created a semi-scientific study to determine what good content is. Reviewing blogs with different levels of success, as well as testing some of the theories on a blog created on a free blogging platform, we worked out that every blog post can be segmented into one of six types.

Public Relations

Public Relations posts simply provide information to the world about the workings of the company. They are generally simple, to the point and provide little useful information beyond the fact that the company exists and something happened. Seldom are these posts more than a couple hundred words, and they rarely contain anything more than the bare announcement.

Large corporations commonly put out PR statements in their blogs, though the practice appears to be much less prevalent with smaller firms. Some large public firms have set up "blogs" just for the purpose of sending out PR announcements, particularly associated with corporate filings and other financial issues.

For small businesses, these post types provide no real value for their readers, nor do they provide any substantial effect on search engine optimization, and consequently they should be avoided. If you feel like you have an announcement which fits the PR post profile, try changing the message to fit one of the other post types.

Sales

Sales posts are focused on a product or service of the company, and attempt to convince the reader to purchase said product or service. They provide little useful information beyond the fact that the company sells the product or service. Most younger readers will see the post for exactly what it is, an advertisement, and be turned off by the whole organization.

These post types should be avoided by small businesses in all situations except when you are introducing a brand new product or service. When introducing a new offering, focus on how it is better than previous offerings and avoid any implication that the reader needs to buy it.

Editorial

Editorial posts clearly state an opinion of the company or its management. These posts should not try to sound like they are anything but what they are: opinions. They should include statements like "I think" or "we believe" to remove any doubt in the reader's mind that the post is an opinion rather than an objective report.

Editorial posts play an important role in a small business blog by creating a sense of humanity within the company. Large corporations are often viewed as cold, dry institutions where everything is dictated by a lawyer-approved process. People work with small businesses because they want the personal touch. Creating this personal touch through the anonymity of the internet is difficult, but the opinion expressed in an Editorial reinforces the fact that your company has real humans working there and taking care of the customers.

The biggest caveat with Editorial posts is the liability that comes with them. You are publicly putting your weight behind a stance, which can backfire. The safest bet is to choose a topic which is only controversial within your industry, or a topic where you can offer a new approach that does not carry the political baggage of the more entrenched positions.

Educational

Educational posts involve you teaching your readers about a subject. These posts generally sacrifice details and instead strive for a solid conceptual understanding by the reader. Your goal is to teach your readers, and nothing else. These posts should use examples from unrelated topics that the reader may already understand.

The hardest part of educational posts is figuring out the topic. Systematically going through your company and identifying all the places where customers have asked questions is a great starting point for material. For every customer who asked a question, there are often a dozen more who were either too scared to ask or did not know enough to ask it.

Technical

Technical posts detail the minutiae behind a process or product. These posts show that you are the expert in your field and reinforce the idea that you should be contacted when someone has an intricate problem.

We have created Technical posts detailing the steps of setting up WinSCP to access an AWS server and compressing HTML served using Apache. These posts do not hawk any of our products or services, but rather try to help readers by going through the explicit details needed to accomplish a goal. It may seem like we are giving away our 'secret sauce' here, but when a Do-It-Yourselfer becomes stuck with a problem you just explained, they will find your post and either fix it themselves or give up and ask you to do it.

For service-based small businesses, think about a very technical process you occasionally perform for your clients and walk through the steps, providing explicit details and considerations. For product-based small businesses, do the same thing but detail everything about a particular product, from its dimensions to the situations where you should and should not use it.

Story

Story posts are possibly the hardest for small businesses to sincerely create, because in order for them to be effective they must be entertaining. Most small businesses do not perform tasks which are entertaining so much as functional, which makes it challenging to create an engaging story. However, if you can create one, you will hook your readers into reading the complete story and then looking for your other stories.

One way for a small business to create Story posts is to tell its customers' stories. The problem is that, while these are legitimate stories, they tend to feel forced or insincere. A better approach is telling a story from your own perspective, that of an employee, or that of a trusted customer. These stories are considerably more sincere and endearing to customers.

 

Small businesses should focus on creating Educational and Technical posts while throwing in an occasional Editorial or Story post to create a sense of humanity at the company. Just make sure you are focused on the lay person within your target audience, and try to simplify everything to that common level of understanding. This approach will annoy a few of the more knowledgeable readers, but it will be endearing both to those with the lowest level of understanding (i.e. your potential customers) and to those with a middle level of understanding (i.e. more potential customers in other areas). Subconsciously, the middle and lower level readers are thinking: if this company can explain this complex topic so I can understand it, they can explain other associated topics, regardless of my level of understanding, and I'm going to go back there next time I have a question.

Other Technical Considerations

There are also a few technical considerations to blog content. To have a real impact on search engines, your posts need to be at least a thousand (1,000) words long and have a decent keyword density. The word count is not a hard minimum; rather, it is a good target for generating a diversity of words while still being able to maintain a good keyword density. Keyword density is how many times a particular keyword or key concept (and its variations) is mentioned in the post relative to the total word count. Once you eliminate filler words (articles, prepositions, etc.), you should aim for a 2-5% keyword density.
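
To make that concrete, here is a rough sketch of how you might estimate keyword density for a draft post; the sample text, the stop-word list and the keyword 'espresso' are all placeholders, and a real list of filler words would be much longer.

  • // Rough keyword density estimate for a draft post (illustrative only)
  • $draft_html = '<p>Espresso tips: pull the espresso shot slowly for a richer espresso flavor.</p>';
  • $words = str_word_count(strtolower(strip_tags($draft_html)), 1); // strip markup, normalize case, split into words
  • $stop_words = array('the', 'a', 'an', 'of', 'to', 'in', 'for', 'and', 'or'); // filler words to ignore
  • $words = array_diff($words, $stop_words);
  • $keyword_count = 0;
  • foreach( $words as $word ) {
  •     if( strpos($word, 'espresso') !== FALSE ) { // count the keyword and its variations
  •         $keyword_count++;
  •     }
  • }
  • $density = 100 * $keyword_count / max(1, count($words));
  • echo round($density, 1) . '% keyword density';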

Managing Your Blog – The 5 Cs

Blogging is a really easy way to start spreading your special message online. A good blog will educate, entertain and encourage your website's visitors while generating more traffic and therefore more potential customers. Over the next month we will be publishing a series of posts on techniques for successfully managing your blog, and hopefully break down some of the 'black magic' out there associated with blogs and search engine optimization.

First, blogging should be fun and easy. If it weren't, why would so many people have their own personal blogs? Your company should already have the interface set up, and you are just expected to write content, manage comments, and generally build marketing pull for the company. Initially, this can seem like an overwhelming prospect, but it shouldn't be.

To simplify blogging, we have broken it down into five sections: Content, Connections, Comments, Creativity and Commitment. Each of these 5 C's is equally important to the success of your blog, though they each require different amounts of effort as the blog matures. Over the next few weeks we will thoroughly explain each section, why it is important and some simple ways to succeed.

Content

“Content is King!” This is the battlecry of many digital marketing agencies and is commonly used as a simplification of search engine optimization strategies. However, not all content is created equal. Why do some company blogs take off while others fall by the wayside?

We spent the last year investigating the differences in content and found all blog posts could be segmented into one of six types: Public Relations, Sales, Editorial, Educational, Technical and Story. The first two types (Public Relations and Sales) are not productive on any blog, and we suspect they can actually be detrimental to small businesses. Story posts also generally do not fit well with a small business profile, which leaves Editorial, Educational and Technical. Our research found that regularly cycling through these post types can generate real, sustained viewership for any blog.

There are also a few technical considerations to blog content. To have a real impact on search engines, your posts need to be at least a thousand (1,000) words long and have a decent keyword density. The word count is not a hard minimum; rather, it is a good target for generating a diversity of words while still being able to maintain a good keyword density. Keyword density is how many times a particular keyword or key concept (and its variations) is mentioned in the post relative to the total word count.

Connections

Connections are all the methods for distributing the blog posts on other platforms, primarily social media. Posting links to your blog posts on social media highlights the topics you are discussing and encourages others to read and share your posts.

Another issue to consider is your sources of traffic. A large number of visitors from Facebook means something different than a large number from Google. Some connections generate more traffic than others, and analyzing where your traffic originates helps determine how to allocate your publicity efforts and where to spend time building more connections.

The top search engines have a set of tools for webmasters to view and manage their website's profile. An important feature of these tools is the search keyword analysis report, where you can see which search terms your website appears for.

Comments

Feedback is essential in determining the subject of new content. Comments provide an instant, albeit subjective, window into what your visitors are thinking. Their often naive questions or statements can indicate areas needing explanation that you, as an expert in your field, overlook as being obvious.

Encouraging users to comment can be hard for smaller, independent blogs. There are a variety of tricks, from allowing anonymous comments to integrating your blog with major social media platforms. Anything which minimizes spam while reducing a hurdle for your visitors to engage is worth investigating and using if appropriate.

Once you start getting comments, you will start getting spam comments. Most spam comments exist only to hawk the spammer's products. Most of the time these comments are obvious, though there is a growing trend of using legitimate-sounding generic comments to get a link back to a website. The website associated with the comment often tells you whether the comment is just link spam.

Creativity

Your blog must demonstrate the company’s inner passion without saying a single word. The color scheme, the imagery, the navigation options and the general ease must resonate with your target viewership without them having to read anything.

Our blog, for example, is simple and to the point. We don’t have a hundred little things in the sidebar to distract you; rather we focus on the content. Our content can be complex at times, and we would rather you focus on understanding the topics than reading one paragraph and getting distracted by a flashing gif image.

Creativity should impact your content in your choice of subjects as well as your perspective on those subjects.

Commitment

Creating a blog, posting a bunch of content over a short period of time, and then ignoring the blog for a few months is common. It is easy to put creating blog posts on the back burner when other things seem more pressing.

Commitment applies to two parts of the blogging process. First, it means placing a priority on regularly interacting with visitors by posting new content and responding to comments. Content can often be created in batches when you have available time and then set to post automatically in the future. Responding to comments, though, requires action within a short period of the comment being made.

The second portion of commitment is to your subject matter. A blog posting about dessert recipes should not deviate to a political topic, no matter the importance of said political topic. Your visitors are coming for recipes, not rants, so feed them recipes.

Conclusion

Conclusion could be thought of as the sixth C, but it is really just good writing practice. When you are writing blog posts, think about who you want reading your posts and what encourages them to trust you. Blogging is about creating trust in others that you are an expert and someone they want to work with.

Over the next month or so of Fridays, we will dig deeper into each of these concepts and hopefully provide you with clear examples of what you should be doing and not doing.

Half of Internet Users at Risk

This weekend, the technology security firm FireEye revealed a flaw in Microsoft Internet Explorer which compromises the user's entire system. Microsoft has announced they are rushing to fix the bug, though that doesn't help the 55% of internet users who use the browser to go online via their computer.

Security Flaw Details

The flaw exploits a memory re-allocation issue which allows for data corruption and bypasses Windows' ASLR (address space layout randomization) and DEP (data execution prevention) protections. This basically allows malicious scripts to insert a virus into the system and execute it, taking complete control of the computer. The flaw affects all versions of IE from 6 to 11, with current reports of it being successfully used against IE 9, IE 10 and IE 11.

Windows XP Users

With the end of support for Windows XP, Microsoft has chosen to release updates for IE only on the Vista, 7 and 8 versions of Windows. That leaves the roughly 25% of all desktop computers still running XP without a patched version of Internet Explorer. And Microsoft's suggested solution of upgrading to Windows 7 or Windows 8 is not an option for many of these older (though still good) machines.

Solution: Switch Browsers

If you didn't already know it, Internet Explorer is notorious in tech circles for being full of security bugs as well as rendering web designs inconsistently. In those circles, IE is only ever used for testing the cross-browser design of websites. If you regularly use IE, you should consider switching to a more robust browser. These alternative browsers are just as easy to use and will generally make websites run faster while looking better.

If you do not switch browsers, DO NOT USE IE until the patch is released.

Mozilla Firefox

Firefox is the gold standard of web browsers. It is available on just about every operating system (Windows XP, Vista, 7, 8, OS X, iOS, Android, BlackBerry 10, Linux, etc.) and is FREE. There are numerous skins, plugins and extensions for Firefox, allowing you to customize everything. It also utilizes an open-source model, which limits the number of these fundamental security flaws and gets such flaws patched extremely quickly.

Mozilla Firefox is available here.

Opera

Opera is a free web browser for personal use. It is developed by Opera Software ASA of Norway as a bridge between a commercial browser and a personal browser. Opera provides some of the best adherence to web standards for compatibility and security.

Opera is available here.

Apple Safari

Safari is Apple's stock web browser, included with OS X and iOS, and it has also been available for Windows (XP and later). Safari has a distinctly Apple feel to its interface, which can be a little confusing for Windows users at first. Its rendering engine, WebKit, is open source, and the browser itself is free for personal and commercial use.

Apple Safari is available here

Google Chrome

Chrome is the Google alternative to Internet Explorer. I generally do not recommend Google products other than the search engine because of Google's tracking and lack of control, but Chrome is an improvement over IE. Chrome is generally considered one of the fastest web browsers and has good adherence to web standards for compatibility and security.

Google Chrome is available here

 

Conclusion

If you are still using Internet Explorer, this is a great reason to switch. Once you make the change, you will be happy. For those concerned about making the transition, contact us and we can go through everything needed to make the change.

Security Release: https://technet.microsoft.com/en-us/library/security/2963983.aspx

Impact of a tiny bug

As you may have heard, on Monday, a group of security engineers reported the existence of the 'Heartbleed' bug in OpenSSL. It has since sent waves through the internet community, and for good reason; OpenSSL is used by most of the internet for encryption.

Why is this bug so important?

Most people outside of tech circles had never heard of OpenSSL before this week and still do not understand what the software does. SSL (Secure Sockets Layer), which you probably have heard of, is the encryption protocol used to securely transfer data between an internet server and a client (like a web browser). During the SSL handshake, the server sends the client a public encryption key, which can encrypt data but cannot decrypt it; the server keeps a separate private key which can decrypt that data. The two sides use this asymmetric exchange to agree on the keys that protect the rest of the session in both directions.

OpenSSL is an open-source library which is used by various tools to encrypt and decrypt data. This asymmetric system is valuable for a wide range of applications, including securing email transmission, FTPS transfers and file uploads.

The actual error in the code was traced back to a change made roughly two years ago.

How to fix

Fixing the bug requires you to update OpenSSL to version 1.0.1g. This is the newest release which came out with the announcement of the bug on Monday.

AWS Server

These instructions assume you have already setup an instance and have an SSH client available.

  1. Log in to your instance via the SSH client and transfer to the root user.
  2. Run the YUM update command for the "openssl" package
  3. Press "Y" when it asks if you want to update the package
  4. Verify the installation occurred correctly by starting/restarting the httpd service
  5. Revoke and reissue all SSL certificates (including self-signed ones).

Summary of command line inputs

  • $ sudo su
  • $ yum update openssl
  • .....
  • $ service httpd restart
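
If you want to confirm the update actually took, check the version OpenSSL reports. The stock 1.0.1g build identifies itself with an April 7, 2014 date; note that some distribution packages instead back-port the fix into an older version number, in which case the package changelog should mention CVE-2014-0160 (the Heartbleed CVE).

  • $ openssl version
  • OpenSSL 1.0.1g 7 Apr 2014
  • $ rpm -q --changelog openssl | grep CVE-2014-0160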

Other Software

  • Apache (via mod_ssl or apache-ssl)
  • nginx
  • cURL (including php curl extension)
  • WinSCP
  • cryptozilla
  • Wordpress
  • Wu-FTPd/SSL
  • RaidenFTPD for Windows
  • sNFS
  • JavaSSL
  • SSLJava
  • Samba
  • Kermit
  • Roxen Challenger
  • OpenCA
  • Postfix
  • QMail
  • SSA (Secure Sockets Agent)
  • slush
  • nsyslog
  • CashCow
  • pyCA
  • MySSL
  • M2Crypto
  • Sendmail
  • SafeGossip
  • KeyNote
  • sslproxy
  • OpenSSH
  • FISH
  • mini_httpd
  • Pavuk
  • ntop
  • start_tls-telnet
  • Fetchmail
  • Lynx
  • Courier-Imap
  • BIND
  • RubyPKI
  • TinySSL
  • XMLSec
  • OpenTSA (Open Time Stamping Authority app)
  • CSP
  • XCA
  • DelphiImport

This is not an all encompassing list. I'll update it as I find more applications which use OpenSSL.

Secondary Impact

Most people who use the internet do not directly use any of the aforementioned software. The biggest implication of this security hole is its use by Apache and nginx, which combined run about 66%[1] of all websites in the world. If you have recently (within the last two years) used any website which runs on Apache or nginx, your username and password might have been visible. This means you need to change EVERY username/password pair you have.

Popular Websites Known to be Vulnerable

  • Google (including gmail)
  • YouTube
  • Facebook
  • Yahoo
  • Instagram
  • Pinterest
  • Tumblr
  • Etsy
  • GoDaddy
  • Flickr
  • Minecraft
  • Netflix
  • SoundCloud
  • USAA
  • Box.com
  • DropBox
  • GitHub
  • IFTT
  • OkCupid
  • Wordpress.com
  • Wunderlist

Some of these websites use 2-step authentication to validate users' identities, which makes it more challenging to use a stolen username/password to impersonate a user, but it does not prevent it. The risk is especially high if you use the same username/password pairing in multiple locations (like your email account and Facebook).

Simply, you should replace ALL passwords this weekend.

 

Problems Updating Desktop Software

Many open source (i.e. free) desktop applications also use OpenSSL for security purposes. This software needs to be updated as well; however, the urgency is much lower, because this software is regularly shut down (unlike Apache, which must be running whenever someone requests a website), which clears the memory, and the information it holds in memory is often just the content sent by the server, which does not include username/password pairs.

Also, if someone has hacked your computer, there are much easier ways to obtain sensitive information than the Heartbleed bug. Simple keystroke loggers and screen scrapers would gain much more valuable information. You still need to update these applications, but if it doesn't happen for a week or two, you do not need to freak out.

References:

  1. Netcraft's April 2014 Web Server Survey

Compressing all HTML pages with Apache2 on AWS

The Apache2 web server has two mods which can be used to compress data sent to the client (i.e. the browser): mod_deflate and mod_gzip. The gzip mod is more versatile but more challenging to set up. For simple compression of HTML, CSS and JavaScript files, the deflate mod works just fine. Compression is particularly important on Amazon Web Services (AWS) because:

  • HTML is very redundant and bulky
  • Smaller files are sent to the client faster
  • AWS charges you based upon OUTPUT bandwidth; smaller files = less bandwidth usage per file

Simple activation of mod_deflate

These instructions assume you have already set up an AWS instance and have an SSH client (like PuTTY) available and an SCP client (like WinSCP) to use when editing the configuration files.

  1. Log in to your instance via the SCP client then open the apache2 virtual hosts configuration file ("/etc/httpd/conf.d/vhosts.conf" for the default setup mentioned in other instructions here).
  2. Add the "AddOutputFilterByType DEFLATE text/html text/plain text/xml" filter to each virtual host (virtual hosts are the groupings starting with "<VirtualHost "). You should enclose the filter in a conditional module statement ("<IfModule mod_deflate.c>") to make sure your web server keeps running even if you happen to remove the deflate module.
  3. Save the virtual hosts configuration file.
  4. Open the SSH client and transfer to the root user ("sudo su")
  5. Restart the apache2 service ("service httpd restart").

The changes to the virtual hosts configuration file

  • <VirtualHost *:80>
  • ....
  • <IfModule mod_deflate.c>
  • AddOutputFilterByType DEFLATE text/html text/plain text/xml
  • </IfModule>
  • ...
  • </VirtualHost>

Summary of command line inputs

  • $ sudo su
  • $ service httpd restart
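
A quick way to check that the filter is actually working is to request a page with an Accept-Encoding header and look for a Content-Encoding header in the response; example.com below is a stand-in for your own domain.

  • $ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://example.com/ | grep Content-Encoding
  • Content-Encoding: gzip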

Error with phpMyAdmin 3.5.4 showing Blank Screen

Earlier this week, I was going to update some database tables and attempted to log in to phpMyAdmin when I got a blank screen. If you've ever programmed much in PHP, a blank screen almost always means one of two things:

  1. You never accessed the PHP file
  2. The PHP script had a fatal error and error display is turned off

After some debugging (detailed below), it turns out phpMyAdmin v3.5.4 has a fatal error where the script files are loaded in the wrong order. With PHP errors fully on, PHP kicked out "Fatal error: Call to undefined function PMA_sanitize() in /usr/share/phpMyAdmin/libraries/Message.class.php on line 540". All it took to fix was adding a line to load the sanitizing library before the Message class is loaded. Hopefully Amazon's repository will be updated with v3.5.5 soon, so no one else encounters this problem.
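
For reference, a rough sketch of what the top of Message.class.php looks like after the patch; the surrounding comment lines are paraphrased, and only the require_once line is the addition.

  • <?php
  • /* vim: set expandtab sw=4 ts=4 sts=4: */
  • require_once('./libraries/sanitizing.lib.php'); // the one added line
  • /**
  •  * ...original header comments and the Message class continue unchanged...
  •  */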

Debugging Blank Screen

Verifying PHP is Being Accessed

For me, I found out after the fact that this step was not even necessary, but that is how debugging goes.

  1. Log into your AWS via SCP (like WinSCP)
  2. Find your installation of phpMyAdmin (the default YUM installed phpMyAdmin on an AWS Linux system is /usr/share/phpMyAdmin)
  3. Open the file "index.php" and add the following two lines on two new lines directly after the opening "<?php" tag. (The last two lines shown below are the existing start of the file, for reference.)
    • echo "I AM phpMyAdmin";
    • exit;
    • /* vim: set expandtab sw=4 ts=4 sts=4: */
    • /**
  4. Attempt to access phpMyAdmin as you normally would. You should see a white screen with "I AM phpMyAdmin" on it. If you do, delete the two lines you just added, save the file and try to access phpMyAdmin again. If you get a blank screen this time then skip to the next section, since the web server is accessing phpMyAdmin.
  5. Log into your AWS server via an SSH client (like PuTTY)
  6. Type "sudo su" to transfer to the root user
  7. Restart the Apache2 web server (type "service httpd restart"). You should get two "OK"s
  8. Attempt to access phpMyAdmin as you normally would. You should see a white screen with "I AM phpMyAdmin" on it. If you do, delete the two lines you just added, save the file and try to access phpMyAdmin again. If you get a blank screen this time then skip to the next section, since the web server is accessing phpMyAdmin.
  9. Open the "phpMyAdmin.conf" file for apache2. The default AWS Linux location is /etc/httpd/conf.d/phpMyAdmin.conf.
  10. The default installation prevents everything but the localhost from accessing phpMyAdmin. Most likely you will add an exception for your computer's IP address, or that of your VPN system. DO NOT, no matter what other instructions suggest, open access to everyone with lines like "Require all granted" or "Allow from all". These settings create significant security holes; phpMyAdmin's resilience to brute force attacks is minimal and you will be hacked eventually.
  11. Restart the Apache2 web server (type "service httpd restart"). You should get two "OK"s
  12. Attempt to access phpMyAdmin as you normally would. You should see a white screen with "I AM phpMyAdmin" on it. If you do, delete the two lines you just added, save the file and try to access phpMyAdmin again. If you get a blank screen this time then skip to the next section, since the web server is accessing phpMyAdmin.
  13. Remove phpMyAdmin and reinstall it.

Identifying Fatal PHP Error

These steps identified the real problem and allowed for the quick patch.

  1. Log into your AWS server via an SCP client (like WinSCP)
  2. Open the apache2 configuration file for phpMyAdmin ("/etc/httpd/conf.d/phpMyAdmin.conf"), add the following lines inside the "<Directory /usr/share/phpMyAdmin/>" block, then save the file.
    • php_admin_flag engine on
    • php_admin_value display_errors on
    • php_admin_value error_reporting 30711
    • php_admin_flag ini_set on
  3. Log in to your AWS server via SSH and restart apache2 ("service httpd restart")
  4. Attempt to access phpMyAdmin as you normally would. Instead of a blank screen, you should get an error message along the lines of "Fatal error: Call to undefined function PMA_sanitize() in /usr/share/phpMyAdmin/libraries/Message.class.php on line 540"
  5. Open the file "/usr/share/phpMyAdmin/libraries/Message.class.php"
  6. At the top of the header comments, add the line "require_once('./libraries/sanitizing.lib.php');"
  7. Save the Message.class.php file.
  8. Attempt to access phpMyAdmin as you normally would. It should work fine now. If you want to, you can go back to the apache2 phpMyAdmin configuration file (/etc/httpd/conf.d/phpMyAdmin.conf) and remove the lines you entered. If you have a public installation of phpMyAdmin, then you should remove them for security reasons.

Installing and Configuring phpMyAdmin on AWS Amazon Linux AMI running Apache2 PHP and MySQL

This is actually really easy, assuming you are using the base version of PHP (5.3.x) from the AWS package repository. YUM has phpMyAdmin as a package, and most of the default settings work just fine. The first time I installed it on an AWS instance, it took maybe 15 minutes to complete.

Installing phpMyAdmin

These instructions assume you have already set up an AWS instance and have an SSH client (like PuTTY) available and an SCP client (like WinSCP) to use when editing the configuration files.

  1. Log in to your instance via the SSH client. Transfer to the root user ("sudo su").
  2. Use YUM to install phpMyAdmin
  3. Press "Y" when it asks if you want to install phpMyAdmin
  4. Open the SCP client and go to the apache2 configuration files directory (default is "/etc/httpd/conf.d")
  5. Open the "phpMyAdmin.conf" file.
  6. Add an access exception to the apache2 authentication configuration. There are three safe ways to allow access to phpMyAdmin:
    1. Allow an exception for a static IP address. Under the Directory tag "/usr/share/phpMyAdmin/", add the line "Require ip XXX.XXX.XXX.XXX" at the end of the Apache 2.4 block and the line "Allow from XXX.XXX.XXX.XXX" at the end of the Apache 2.2 block. In each case, replace XXX.XXX.XXX.XXX with your actual IP address.
    2. Allow access from a VPN. You will need a Virtual Private Network set up already, which is well beyond these instructions. Under the Directory tag "/usr/share/phpMyAdmin/", add the line "Require ip XXX.XXX.XXX.XXX" at the end of the Apache 2.4 block and the line "Allow from XXX.XXX.XXX.XXX" at the end of the Apache 2.2 block, replacing XXX.XXX.XXX.XXX with the VPN's IP address.
    3. Use SSL Certificate for authentication These instructions are not complete yet.
  7. Save the edited "phpMyAdmin.conf" file.
  8. Verify the installation occurred correctly by starting/restarting the httpd service (in SSH "service httpd restart")

Summary of command line inputs

  • $ sudo su
  • $ yum install phpmyadmin
  • .....
  • Do you want to install phpMyAdmin 5.x (Y/N): Y
  • $ service httpd restart

First few lines of phpMyAdmin.conf file with default installation path, edited for access by a single IP address

  • # phpMyAdmin - Web based MySQL browser written in php
  • #
  • # Allows only localhost by default
  • #
  • # But allowing phpMyAdmin to anyone other than localhost should be considered
  • # dangerous unless properly secured by SSL
  • Alias /phpMyAdmin /usr/share/phpMyAdmin
  • Alias /phpmyadmin /usr/share/phpMyAdmin
  • <Directory /usr/share/phpMyAdmin/>
  • <IfModule mod_authz_core.c>
  • # Apache 2.4
  • <RequireAny>
  • Require ip 127.0.0.1
  • Require ip ::1
  • Require ip XXX.XXX.XXX.XXX
  • </RequireAny>
  • </IfModule>
  • <IfModule !mod_authz_core.c>
  • # Apache 2.2
  • Order Deny,Allow
  • Deny from All
  • Allow from 127.0.0.1
  • Allow from ::1
  • Allow from XXX.XXX.XXX.XXX
  • </IfModule>
  • </Directory>
  • ...

Configuring phpMyAdmin

The default configuration of phpMyAdmin needs only a few changes to get it working correctly.

  1. Log in to your instance via the SCP client (like WinSCP)
  2. Open the phpMyAdmin base directory (default AWS installation directory is "/usr/share/phpMyAdmin")
  3. Open the file "config.sample.inc.php"
  4. Go down to the line "$cfg['blowfish_secret'] = 'XXXXXXXX';" where XXXXXX is some alphanumeric combination. Add a bunch more letters and numbers within the single quotes.
  5. Go down to the line "$cfg['Servers'][$i]['controlhost']" and make sure it is uncommented. After it, add "= 'localhost';"
  6. The next line should be "$cfg['Servers'][$i]['controluser']"; make sure it is uncommented. After it, add "= 'USERNAME';" where USERNAME is the username you want to use to log into phpMyAdmin.
  7. The next line should be "$cfg['Servers'][$i]['controlpass']"; make sure it is uncommented. After it, add "= 'PASSWORD';" where PASSWORD is the password associated with the previously entered username.
  8. Save the file as "config.inc.php". A sketch of the edited lines is shown after this list.
  9. Direct your browser to "http://XXX.XXX.XXX.XXX/phpMyAdmin" where XXX.XXX.XXX.XXX is the IP address of your server. You should be prompted for a username and password. Enter the pair you just saved in the config file and you should be running phpMyAdmin.
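
For reference, here is a rough sketch of how the edited lines in "config.inc.php" end up looking; the secret, USERNAME and PASSWORD values are placeholders you must replace with your own.

  • $cfg['blowfish_secret'] = 'XXXXXXXXa1B2c3D4e5F6g7H8'; // original value plus your added characters
  • $cfg['Servers'][$i]['controlhost'] = 'localhost';
  • $cfg['Servers'][$i]['controluser'] = 'USERNAME';
  • $cfg['Servers'][$i]['controlpass'] = 'PASSWORD';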

Creating a New Cron Job on AWS Linux AMI

Cron is a time-based scheduler used to initiate other programs at particular times on a Linux system. The AWS Linux AMI comes with cron pre-installed and configured, like every other modern Linux installation. The base configuration allows you to set up tasks that run hourly, daily, weekly or monthly, as well as at any other interval.

Quick Job Setup

Setting up a job to run hourly, daily, weekly or monthly is very quick. These instructions assume you have already setup an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Go to the '/etc' directory
  3. Open the appropriate 'cron.XXXXX' directory. For example, if you want to add an hourly task, open the 'cron.hourly' directory.
  4. Create a new file (shift+F4)
  5. Add a single line for each job you want run. For example, if you want to run a PHP script, type "/usr/bin/php -q /path/to/script/script.php", substituting the correct paths to PHP and to the script file. (Starting the file with a "#!/bin/sh" line keeps the shell explicit.)
  6. Save the file and make sure it is executable ("chmod 755 filename"); run-parts skips files it cannot execute. The name doesn't really matter so long as it is unique. Cron will call every file in the directory at the appropriate time and run the commands in each file.
Custom Job Setup

Setting up a job to run at a custom time requires you to understand the crontab syntax. The syntax is not terribly complex (it is simpler than Regular Expressions), but it is complex enough that you don't want to deal with it if you do not need to.

Crontab Syntax

Crontab is the configuration structure for cron jobs. Simply, the files are composed of two parts: the settings followed by the jobs.

Crontab settings

The settings section states what should be run, where it is located, who should run it (as in Linux User) and a few other special commands. In the below example, the first 4 lines are settings and the last line is a job.

Example crontab file

  • SHELL=/bin/bash
  • PATH=/sbin:/bin:/usr/sbin:/usr/bin
  • MAILTO=root
  • HOME=/
  • 01 * * * * root run-parts /etc/cron.hourly
  • "SHELL" identifies which shell state you want to run the scripts under. If it is not included, most systems will default the shell indicated in '/etc/passwd' or just fail to run.
  • "PATH" is location of the cron initiated scripts. If you are regularly running scripts in '/usr/bob/scripts' you could add the path here to avoid having to type '/usr/bob/scripts' for every script. In the above example, the PATH line would become "PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/bob/scripts"
  • "MAILTO" is the address of the webmaster who will receive an email every time these scripts are run.
  • "HOME" is the home directory for cron. It is basically prefixed to any relative path name you use in any script. It is only actually useful if you are going to be running many scripts from the same crontab file.

Crontab jobs

The settings section is the easy part of crontab files. Each job is composed of seven fields, each separated by a space. Any field that is not set is identified with an asterisk (*). In order, the fields are: minute hour day month weekday user cmd

  • minute is the minute the job is to be run.
  • hour is the hour (on a 24 hour clock) when the job is to be run.
  • day is the numerical representation of the day of the month. The values range from 1-31.
  • month is the month's numerical representation. The values range from 1-12.
  • weekday is the day of the week you want the job to be run on. The values range from 0-7, with Sunday being 0 & 7. If the day and weekday are both specified, the command runs when EITHER is true.
  • user is the Linux user you want to run the command.
  • cmd is the command to be run. This is no different than the commands in the quick job setup section above.

With the exception of day and weekday, all of the other time fields must match in order for the job to run. Crontab also supports step values and ranges. Ranges work just like page ranges do in most word processors, with commas (,) separating individual values and dashes (-) representing all values from the first number to the second number. For example, "3-5,7" would fire on 3, 4, 5 and 7. Step values use a forward slash and fire whenever the current value divides evenly; for example, a minute value of "*/15" fires every 15 minutes, whenever the current minute is divisible by 15.
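
To tie the fields together, here are a couple of illustrative job lines in the /etc/cron.d style; the script paths are placeholders.

  • # Run backup.php at 2:30 AM every day as root (minute=30, hour=2, remaining time fields are wildcards)
  • 30 2 * * * root /usr/bin/php -q /path/to/script/backup.php
  • # Run cleanup.php every 15 minutes, Monday through Friday only (weekday range 1-5)
  • */15 * * * 1-5 root /usr/bin/php -q /path/to/script/cleanup.php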

Creating a new Crontab File

These instructions assume you have already setup an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Go to the '/etc/cron.d' directory
  3. Create a new file (shift+F4)
  4. Add each setting value as its own line at the beginning of the file. The above example file's values should work on the AWS Linux AMI installation.
  5. Add crontab job lines for each job you want to run.
  6. Save the file. The name doesn't really matter so long as it is unique. Cron will call every file in the directory at the appropriate time and run the commands in each file.

Installing Common PHP Extensions

PHP is a relatively simple programming language which offers much of the power of the more complex object-oriented languages without some of their more complex data management issues. PHP is commonly used to develop dynamic web content, especially content based upon a database like MySQL. PHP is compiled on demand: the Apache2 web server hands the PHP code to the PHP interpreter each time the script is run.

In a practical sense, you must have Apache installed to use PHP on your server. If you do not have Apache currently installed, instructions can be found here. Instructions for installing PHP after you have installed Apache can be found here.

The base PHP distribution comes with a lot of core features, but only core-type features. You will often run across situations where you need a PHP extension or application library. PHP extensions are divided between two repositories: PECL and PEAR. The difference between the repositories is the type of files each contains: PECL extensions are written in C and compiled, while PEAR packages are special PHP classes. This makes the PECL extensions faster and more powerful than PEAR packages; however, they can have robustness issues, since programming in C is much more challenging.

Installing PECL Extensions

These instructions assume you have already setup an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Use PECL to install the extension
  3. Press "Y" when it asks if you want to install the extension. Depending on the extension, there may be multiple options you can choose during the installation.
  4. Verify the installation occurred correctly by starting/restarting the httpd service

Summary of command line inputs (example uses pecl_http extension)

  • $ sudo su
  • $ pecl install pecl_http
  • .....
  • $ service httpd restart

Popular PECL extensions

Extension | Description | Requires PHP-Devel?
pecl_http | HTTP request & response processing | Yes
mailparse | Parsing email messages | No
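
Depending on the extension, the installer may or may not register the module with PHP automatically. If the extension's functions are still undefined after installing and restarting httpd, you may need to add a line like the following to php.ini (or to a file under /etc/php.d/); the module name shown is only an example for pecl_http.

  • extension=http.so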

Installing PHP-Devel

Some of the extensions, the PECL ones in particular, require the php-devel package to work properly. If you get an error like "needs php-devel to be installed" when you attempt to install a package, you will need to install the php-devel package. These instructions assume you have already set up an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Use YUM to install PHP-Devel
  3. Press "Y" when it asks if you want to install the extension.
  4. Verify the installation occurred correctly by starting/restarting the httpd service

Summary of command line inputs

  • $ sudo su
  • $ yum install php-devel
  • .....
  • Do you want to install PHP-Devel (Y/N): Y
  • $ service httpd restart

Note about PHP-Devel

When I installed PHP-Devel, it changed the ownership and permissions of my session directory. This caused session_start() to fail with a "Permission Denied (13)" error. To fix the error, I had to change the ownership of the session directory back to the Apache user/group that PHP runs as.
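
In my case the fix looked roughly like the commands below; the session path and the apache user/group are the Amazon Linux defaults, so adjust them if your php.ini points somewhere else.

  • $ sudo su
  • $ chown -R apache:apache /var/lib/php/session
  • $ service httpd restart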

Installing and Configuring PHP on AWS Amazon Linux AMI with Apache2

Apache2 is the standard Linux web server. It deals with all of the http and https requests sent to the server and compiles PHP scripts. PHP is a relatively simple programming language which offers much of the power of the more complex object-oriented languages without some of their more complex data management issues. PHP is commonly used to develop dynamic web content, especially content based upon a database like MySQL.

In a practical sense, you must have Apache installed to use PHP on your server. If you do not have Apache currently installed, instructions can be found here.

Installing PHP

These instructions assume you have already setup an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Use YUM to install php
  3. Press "Y" when it asks if you want to install PHP
  4. Verify the installation occurred correctly by starting/restarting the httpd service

Summary of command line inputs

  • $ sudo su
  • $ yum install php
  • .....
  • Do you want to install PHP 5.x (Y/N): Y
  • $ service httpd restart

Configuring PHP

The default configuration of PHP is just fine for 90% of applications. If you are going to be doing development on the server, it is appropriate to make a few changes to the PHP configuration for that particular development server. These changes should occur in the Apache2 hosting configurations ("/etc/httpd/conf.d/vhosts.conf" in the previous Apache2 instructions) rather than in the global php.ini. The major setting you would want to change is turning off safe mode.
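
As a rough sketch, per-host overrides like these can go inside the development site's <VirtualHost> block in vhosts.conf (safe mode off, plus error display turned on, which is a common companion change on a development box); they are ordinary mod_php directives, so adjust the values to your needs.

  • <VirtualHost *:80>
  •     ...
  •     php_admin_flag safe_mode off
  •     php_admin_flag display_errors on
  •     ...
  • </VirtualHost>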

Installing and Configuring Apache2 on AWS Amazon Linux AMI

Apache2 is the standard Linux web server. It deals with all of the http and https requests sent to the server. Apache2 modules are also used to compile php scripts.

Installing Apache2

These instructions assume you have already setup an AWS instance and have an SSH client (like PuTTY) available.

  1. Log in to your instance via the SSH client. Transfer to the root user.
  2. Use YUM to install httpd (the apache2 web server application)
  3. Press "Y" when it asks if you want to install Apache
  4. Verify the installation occurred correctly by starting the httpd service

Summary of command line inputs

  • $ sudo su
  • $ yum install httpd
  • .....
  • Do you want to install httpd (Y/N): Y
  • $ service httpd start
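
One extra step worth doing on a fresh instance: "service httpd start" only starts Apache for the current boot, so if you want it to come back automatically after a reboot, enable it with chkconfig as well.

  • $ chkconfig httpd on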

Configuring Apache2

Configuring Apache2 is most easily done with a visual text editor, like the one included in WinSCP, rather than through the command line and vi. You will need to restart the httpd daemon after changing the configuration files in order for the settings to take effect.

Example settings
Apache system user: webserv
System group: webcln
Domain 1: example.com
Domain 1 subdomain: sub.example.com

Basic Configuration

These settings will need to be changed whether you use a single domain or virtual domains.

  1. Open the file "/etc/httpd/conf/httpd.conf". Httpd uses shell-style commenting, so any line starting with a "#" is commented out and not used in configuring apache2
  2. Make sure "Listen 80" is uncommented.
  3. Change "User" to the desired linux user that you want apache to run as. The example user is "webserv"
  4. Change "Group" to the desired linux user that you want apache to run as. The example group is "webcln"
  5. Set the "ServerAdmin" to the server admin's email address.
  6. Add any other index files to "DirectoryIndex" list. Apache will search for the files in order they are listed. Separate multiple file names with spaces.
  7. Finish the configuration via the Single Domain Configuration OR the Virtual Domains Configuration. I recommend using the Virtual Domains Configuration model, because it easily allows for adding subdomains or redirecting other domains.

Single Domain Configuration

  1. Open the file "/etc/httpd/conf/httpd.conf". A single domain is set up fully within the core configuration file.
  2. Uncomment and make the appropriate changes to the following directives.
  3. Log in to your instance via the SSH client. Transfer to the root user ("sudo su").
  4. Verify the installation occurred correctly by starting the httpd service.
  5. Log in to your domain hosting account and change the DNS records to point to the correct IP address.
  • ServerAdmin webmaster@example.com
  • ServerName www.example.com:80
  • ServerAlias www.example.com
  • UseCanonicalName off
  • DocumentRoot "/var/www/html"
  • ErrorLog /var/logs/error_log

Virtual Domains Configuration

  1. Open the directory "/etc/httpd/conf.d/" and create a new file called "vhosts.conf"
  2. Copy the below configurations and exchange the example values for your server's values. You should leave a copy of the 'default' server at the top of the vhosts file. The first listing of either port (80 for http and 443 for https) will be used when a request does not match any other server name or server alias.
    Meaning of each parameter
    • NameVirtualHost - Indicates that the particular IP:PORT combination is used for virtual hosts. You need this to use the VirtualHost tags later. The value should be structured as IP:PORT. The wildcard "*" can be used to match any IP address. Port 80 is used for http connections while port 443 is used for https (secure) connections.
    • IfModule - Checks to see if a module is installed and usable. Anything within the tags will be processed only if the module indicated in the open tag is installed and usable.
    • VirtualHost - This tag identifies a particular virtual host. The contents of the tag must contain the parameters ServerName, and DocumentRoot in order to work. The IP:PORT combination listed in the opening tag must be initiated using the NameVirtualHost parameter.
    • ServerName - The name of the web server, which is normally the web address, in quotes. The user's browser asks Apache for this ServerName. Note: I use the value "default:80" as a catchall for incorrect inquiries to the server. If a user queries your server on port 80 for a ServerName which doesn't exist, the first VirtualHost will be returned as a default. A DNS error can create this situation, but a user can also create it intentionally by accessing the server's IP address directly and spoofing the HTTP header with a different web address. You can actually test your own settings this way.
    • UseCanonicalName - This is a name allocation directive for self-referential URLs. Setting it to "on" forces Apache to use the hostname and port specified by ServerName, while setting it to "off" allows Apache to first try the hostname and port supplied by the user and then fall back to the server values. Setting it to "off" can be a slight security issue, but it generally allows for faster processing of complex situations, especially those involving intranets.
    • ServerAdmin - This is the email address of the admin for the particular server, in quotes. This is not essential, but should be included to control the distribution of spam.
    • DocumentRoot - This is the directory apache will look for the appropriate web files.
    • ErrorLog - This is the error log file to be used for errors occurring with this virtual host.
    • SSLEngine - This enables the Apache mod_ssl engine, which allows for secure connections and encryption of the information sent to the user. You have to use this if you want to use the https protocol.
    • SSLVerifyClient - This forces the client to provide the certificate confirmation before receiving any information. This is impractical for most situations, except when using a company intranet. The client must already have the correct certificate in order to authenticate with the server.
    • SSLCertificateFile - The location of the ssl certificate file.
    • SSLCertificateKeyFile - The location of the ssl certificate key file.
  3. Create the directories for each virtual account. The example uses the home directory of "/var/www/vhosts" for all of the virtual hosts. Within this directory there is a directory for each domain and within each of those is a directory for the http files (httpdocs), the https files (httpsdocs) and the server files (var). You also need to create a blank "index.html" file in the http and https directories and an error log in the logs directory.
    • /var/www/vhosts/example.com/httpdocs/
    • /var/www/vhosts/example.com/httpsdocs/
    • /var/www/vhosts/example.com/var/logs/
    • /var/www/vhosts/example.com/var/certificates/
  4. Log in to your instance via the SSH client (PuTTY). Transfer to the root user ("sudo su").
  5. Verify the installation occurred correctly by starting the httpd service ("service httpd start").
  6. Log in to your domain hosting account and change the DNS records to point to the correct IP address.
Example vhost.conf file
  • NameVirtualHost *:80
  • <IfModule mod_ssl.c>
    • NameVirtualHost *:443
  • </IfModule>
  • <VirtualHost *:80>
    • ServerName "default:80"
    • UseCanonicalName off
    • ServerAdmin "webmaster@example.com"
    • DocumentRoot "/var/www/vhosts/default/httpdocs"
    • ErrorLog "/var/www/vhosts/default/var/logs/error_log"
    • <IfModule mod_ssl.c>
      • SSLEngine off
    • </IfModule>
  • </VirtualHost>
  • <IfModule mod_ssl.c>
    • <VirtualHost *:443>
      • ServerName "default:443"
      • UseCanonicalName off
      • ServerAdmin "webmaster@example.com"
      • DocumentRoot "/var/www/vhosts/default/httpsdocs"
      • ErrorLog "/var/www/vhosts/default/var/logs/error_log"
      • SSLEngine on
      • SSLVerifyClient none
      • SSLCertificateFile /var/www/vhosts/default/var/certificates/default.crt
      • SSLCertificateKeyFile /var/www/vhosts/default/var/certificates/default.key
    • </VirtualHost>
  • </IfModule>

Setting up WinSCP for AWS access

I am assuming you have already set up PuTTY for AWS access. If you haven't yet, please follow the instructions at Setting up PuTTY for AWS access. Also, obviously, you need to have an AWS Instance set up. If you haven't set up an AWS Instance, you can find help at Setting up a Free Tier Amazon EC2 Instance.

These instructions assume you have already installed WinSCP on your computer. If you need WinSCP, it can be found at www.winscp.net. It is really easy to install on windows machines.

Configuration for AWS Instance access

You need to access your AWS dashboard as well as WinSCP.

  1. Open your AWS Console (go to http://aws.amazon.com and login)
  2. Go to "EC2" under "Compute and Networking"
  3. Click on "Instances" under the "Instances" section of the Navigation pane. This will display all of the instances you currently have running. Clicking on the name of the instance will show the details of that instance below. Select the instance you want to configure WinSCP for then find the "Key Pair Name" and "Security Groups" values under the "Description" tab. If you haven't already done so for PuTTY, you will need to edit the security group in order to allow an SSH client (WinSCP in this case) to access your instance then confirm the security key with the key pair name.
  4. Find the value for "Public DNS" under the "Description" tab then highlight it (shift+ left click while selecting the text) and press CTRL+C to copy the text. You will need this value when setting up WinSCP and I find copy & pasting a whole lot easier than retyping something.
  5. Click on "Security Groups" under the "Networking & Security" section of the Navigation pane. This will show your security groups for this region. Click on the instance's security group to see the details of that group.
  6. Click on the "Inbound" tab to edit the firewall associated with this security group.
  7. SSH clients use port 22 for access, so you will need to verify that TCP port 22 (SSH) is listed on the table to the right. If it is not listed, or there is no table, select "SSH" under for "Create a new rule" then add your computer's ip address to the source line followed by "/32". AWS security groups use CIDR notation for IP address ranges. Simply, "/32" limits the range to a single IP address. Click "Add Rule" then click "Apply Rule Changes"
  8. Click on "Key Pairs" under the "Networking & Security" section of the Navigation pane. The "Fingerprint" for the "Key Pair Name" will be needed later to confirm your connection to the AWS Instance.
  9. Open WinSCP.
  10. Click on "New" to add a new session. Note, if this is the first time you've used WinSCP, you will automatically be prompted for a new session.image
  11. Choose "SCP" as the "File protocol"
  12. Choose "22" for "Port number". Note, you can actually use a different port than the default 22 to connect with the AWS Instance. You would have to make the appropriate adjustments to the ssh shell and the AWS Security Group. This can be good from a security standpoint, but is extremely risky from a setup standpoint. If you mess up the settings you will be permanently locked out of SSH access to the instance, generally making it worthless.
  13. Paste your instances' "Public DNS" value in the "Host name" box.
  14. Enter "ec2-user" as the "User name" and leave the "Password" box blank..
  15. Click on the "..." button in the "Private key file" box and open your private key that corresponds to the Key Pair Name" you generated when setting up the instance. This was the same file you opened in the PuTTYGen program earlier.
  16. Click "Save". There's no point in reentering this info every time you want to login.
  17. The first time you log in you will get a security fingerprint confirmation. This value should be the same as the one provided through the AWS console.
  18. Click "Login". This will log you in as the ec2-user user. This is fine for some stuff, but you won't be able to change to the root user without completing the last few steps.
  19. Open the file "/etc/sudoers"
  20. Find the line "Defaults requiretty" and add "Defaults:ec2-user !requiretty" as the next line. This will allow WinSCP to transfer itself to the root user after logging on by using sudo su, just like in PuTTY.
  21. Disconnect. The disconnect option can be found under the "Sessions" menu.
  22. Click on the session you just created then click "Edit"
  23. Click on "SCP/Shell" on the left options. Note, "SCP/Shell" isn't listed under "Environment" check the "Advanced options" box at the bottom to display the option.
  24. For "Shell:" select "sudo su -" as the option. Make sure "Return code variable" is set to "Autodetect".image
  25. Click "Save"

When you log in, your shell access will automatically be changed to the root user allowing for complete access to all files. For most web development activities, root access isn't needed, however it makes life easier AND is essential for installing and configuring most of the software.

Setting up PuTTY for AWS access

PuTTY is a free, open source SSH client. You will need to install it (basically, download the installer and run it) if you have not already done so. Make sure you get both the PuTTY and PuTTYgen programs.

Configuration for AWS Instance access

These instructions assume you have already setup an AWS instance. If you haven't setup an AWS Instance, you can find help at Setting up a Free Tier Amazon EC2 Instance.

  1. Open your AWS Console (go to http://aws.amazon.com and login)
  2. Go to "EC2" under "Compute and Networking"
  3. Click on "Instances" under the "Instances" section of the Navigation pane. This will display all of the instances you currently have running. Clicking on the name of the instance will show the details of that instance below. Select the instance you want to configure PuTTY for then find the "Key Pair Name" and "Security Groups" values under the "Description" tab. You will need to edit the security group in order to allow PuTTY to access your instance then confirm the security key with the key pair name.
  4. Find the value for "Public DNS" under the "Description" tab then highlight it (shift+ left click while selecting the text) and press CTRL+C to copy the text. You will need this value when setting up PuTTY and I find copy & pasting a whole lot easier than retyping something.
  5. Click on "Security Groups" under the "Networking & Security" section of the Navigation pane. This will show your security groups for this region. Click on the instance's security group to see the details of that group.
  6. Click on the "Inbound" tab to edit the firewall associated with this security group.
  7. SSH clients use port 22 for access, so you will need to verify that TCP port 22 (SSH) is listed on the table to the right. If it is not listed, or there is no table, select "SSH" under for "Create a new rule" then add your computer's ip address to the source line followed by "/32". AWS security groups use CIDR notation for IP address ranges. Simply, "/32" limits the range to a single IP address. Click "Add Rule" then click "Apply Rule Changes"
  8. Click on "Key Pairs" under the "Networking & Security" section of the Navigation pane. The "Fingerprint" for the "Key Pair Name" will be needed later to confirm your connection to the AWS Instance.
  9. Open PuTTYgen. Click on "Load" then choose the Key Pair file for the "Key Pair Name" of the instance. If you just created the instance following the above instructions, the key file is the one you had to save after you generated the "Key Pair Name."
  10. Click "Generate" to create the PuTTY usable security key. Save the file somewhere you will remember and can control, since access to this file will allow access to the AWS instance. Close PuTTYgen.
  11. Open PuTTY. The default "Category " should be "Session." If "Session" is not selected, select it.
  12. Click on the "Host Name (or IP address)" input and press CTRL+C to past your "Public DNS" address as the host name. Make sure "Port" is set to "22" and "SSH" is selected as the "Connection type:".
  13. Expand the "Connection" Category and expand the "SSH" section and click on "Auth."
  14. Click on "Browse" and open the Putty key you just created with PuTTYgen.
  15. Click on the "Session" Category again and choose "Save." This way you won't have to repeat setting up PuTTY every time you want to use it. NOTE: This is security weakness, because anyone with access to your computer would then be able to access your AWS Instance, however most people have their own private computer which limits the security risk. I just find it a pain to redo everything every time I want to access the server.
  16. Click "Open" to open the SSH connection.
  17. Type "ec2-user" at the "login as:" prompt. The "ec2-user" is the default user for the Amazon Linux AMI. You cannot login as "root" as a security measure.
  18. To transfer to the "root" user, type:
    $ sudo su
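
If you are on a Mac or Linux machine (or simply prefer OpenSSH to PuTTY), the same connection can be made with the original key pair file instead of the converted .ppk key. A minimal sketch, where the key file name and Public DNS value are placeholders for your own:

    $ chmod 400 my-key-pair.pem       # OpenSSH refuses private keys that other users can read
    $ ssh -i my-key-pair.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com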

Installing the necessary software on an AWS Amazon Linux AMI server

There is a variety of software you will need to get your new AWS web server up and running. You probably already have the desktop clients if you have ever done any server work before; the core server software, however, will need to be installed, depending on your purposes for the server. This page will be updated from time to time as new installation and configuration guides are added.

Desktop Clients

Software | Description
PuTTY | Free SSH client. Uses a basic command-line style interface.
WinSCP | Free SCP/SFTP/FTP client for Windows. Offers a graphical user interface to move and edit files.

I am biased toward Windows software. All of these programs run on Windows XP and Windows 7 (32-bit & 64-bit systems). If you are running a Linux or Mac system....well...they may work. The program's name links to instructions on configuring the software to access your AWS Instance.

Core Server Software

Software | Usage | Description
Apache2 | Website hosting | The basic web server which handles internet (http/https) traffic to the server.
PHP | Dynamic websites (optional; requires Apache2) | Scripting language for creating dynamic web pages. Used by most CMS, wiki & blog systems to manage content.
MySQL | Database | The basic free SQL database server. Used by many CMS, wiki & blog systems to store content.
phpMyAdmin | Database administration (optional; requires Apache2, PHP & MySQL) | Graphical, HTML-based admin tool for accessing and managing MySQL databases.
Postfix | Mail Transfer Agent (i.e. email server) | Accepts and sends email. Versatile and can be used with a variety of database structures.
Courier | Email client portal (optional; requires Postfix) | Offers a portal to access email from any client, including MS Outlook, Thunderbird & smart phones. Provides IMAP and POP3 access.
Spamassassin | Email spam filter (optional; requires Postfix) | Works with the MTA to prevent spam from arriving on the server.
BIND9 | DNS server (optional) | DNS server which allows you to create your own DNS records.

Note that all of these programs are free, and most are open source. All of the installation instructions are specific to the Amazon Linux AMI. This stripped-down version of Linux is a special Amazon derivative of Fedora. When I was originally setting up our servers, some of the differences between RedHat, Ubuntu, Debian and this version of Linux drove me crazy, so all of these instructions were tested against the newest Amazon Linux AMI version (currently 2012.03).
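
To give a rough idea of what the installs look like, most of the core packages can be pulled straight from the Amazon Linux AMI's yum repositories. A sketch only; exact package names (particularly for phpMyAdmin and Courier) vary by repository and AMI version:

    $ sudo yum update -y                                                # bring the AMI up to date first
    $ sudo yum install -y httpd php mysql-server postfix spamassassin bind
    $ sudo service httpd start                                          # start Apache now...
    $ sudo chkconfig httpd on                                           # ...and have it start on every boot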

Setting up a Free Tier Amazon EC2 Instance

Amazon AWS is currently offering a 'free tier' for 1 year. Simply put, you get a micro instance to get your server up and running, play with different settings and such. It is the standard free trial offer, but with a virtual server. If you've never used AWS before, I recommend using the free tier server to get acquainted with the capabilities of AWS, then move to a real server later. Also, once you have all the settings working on the free tier instance, you can transfer to a paid instance in 15 minutes. AWS Free Tier

  • 750  hrs/month Micro instance (613 MB of RAM, Linux or Windows)
  • 750 hrs/month Elastic Load Balancer (15 GB of data processing)
  • 30GB of EBS space
  • 5 GB of Amazon S3 standard storage

Setting up a New Instance

  1. Go to http://aws.amazon.com and log in to your AWS Console.
  2. Click on "EC2" under the Compute & Networking section. (Note you may have to choose your region at this point if it hasn't been setup yet.)
  3. At the "Amazon EC2 Console Dashboard" there should be a button in the middle of screen called "Launce Instance", Click it.
  4. The wizard will pop up asking you to choose a type of wizard. Select "Classic Wizard" and click "Continue" at the bottom right.
  5. Select the "Amazon Linux AMI ####.##"  AMI. It should be the top option under the "Quick Start" tab. Note, you can use any of the AMIs with a yellow star next to the select button for the free tier. The 32-bit version will be slightly easier to deal with later, but 64-bit version works just fine also.
  6. You will now need to determine the basic instance details. For the free tier, make sure the "Number of Instances" is set to 1 and the "Instance Type" is "Micro  (t1.micro, 613 MiB)". The "Availability Zone" doesn't matter right now so "No Preference" is fine. Click the "Continue" when the settings are correct.
  7. You now can determine some of the advanced options. The only thing you need to be concerned about is the "Shutdown Behavior" which should be set to "Stop". Click the "Continue" when the settings are correct.
  8. The next page covers the storage details. New instances default to a "Root Volume", which is effectively a new, blank, standard EBS volume. I recommend you uncheck the "Delete on Termination" checkbox to prevent you from accidentally erasing your data when the instance dies. Click "Continue" when the settings are correct.
  9. Now you can set metadata you want to correspond with this instance. These key/value pairs will help with searching and administering large clouds of multiple servers. In addition to the "Name" key, we generally place an "admin" key with the value set to the programmer who administers the instance. You can place up to 10, and you can always change them later. Enter something for the "Name" key's value then press "Continue".
  10. The next step is absolutely essential to run a secure instance and have access via an SSH client. Instead of using usernames and passwords, AWS uses usernames and encryption keys, called "Key Pairs." Key-based logins prevent brute force attacks against your instance. Enter a name (alphanumeric without spaces) then press the "Create and Download your Key Pair" button. You will be expected to save the key file somewhere on your local computer; remember where, because you will need this file later when setting up your SSH and SCP clients. Click "Continue" once you have created your Key Pair.
  11. The last setting you need to determine is the firewall. Amazon allows you to create an off-instance firewall to limit access to your instance. Click on the "Create a New Security Group" radio button, then enter a "Group Name" and "Group Description". Leave the "Inbound Rules" empty for now; you will add rules later (see the sketch after this list). Typically, when you create an instance, you will use a pre-created security group that you already set up for the purpose of the instance. Click "Continue" once the new security group is created.
  12. This last page is just a review of the settings for your new instance. Look over them and make sure everything is correct then click "Launch." A few moments later your simple Free Tier instance will be up and running. The next step is getting access to it, then installing software and configuring everything. These will be discussed in future posts.
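
The inbound rules left empty in step 11 can be added later through the console (as described in the PuTTY instructions) or from the command line. A hedged sketch using the AWS CLI; the group name and IP address below are placeholders, and "/32" restricts the rule to that single address:

    $ aws ec2 authorize-security-group-ingress \
        --group-name my-web-server-sg \
        --protocol tcp --port 22 \
        --cidr 203.0.113.25/32        # allow SSH only from this one IP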

Amazon Cloud Hosting

Amazon is a huge player in the cloud hosting space. Cloud hosting is basically where a company fills a server farm with racks upon racks of physical computers, hard drives and routers. The company then uses software to combine the individual computers into a super computer, which is then partitioned off into a series of virtual servers of varying sizes and types. The company then resells usage of these virtual servers to its clients. Amazon Web Services (the division which provides the service) offers a variety of different types of virtual servers, but the basic, and most flexible, is called Elastic Compute Cloud (EC2).

Amazon EC2

Instances

Instances can be thought of as the virtual processor, motherboard and RAM of the virtual server. Amazon offers three different types of Instances (On-Demand, Reserved, and Spot) in a variety of sizes.

On-Demand Instances

On-Demand Instances are those you intend to use on a temporary basis. You pay only for the amount of time you actually use the instance, so they are excellent for short-term projects and for getting settings worked out.

Reserved Instances

Reserved Instances are instances which are dedicated to your account. They do not go away if you stop or terminate them. Well, that is not quite correct: you are actually reserving usage of a particular type of instance, rather than a particular instance. The different levels of Reserved Instances are basically usage structures. You prepay to reserve an instance and in exchange get a discount on the hourly rate. Reserved Instances are ideal for long-term server applications, like website hosts, email servers, etc.
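
To illustrate the prepay-plus-discount structure with made-up numbers (not Amazon's actual prices): if an on-demand instance costs $0.10/hour, running it all year costs about 8,760 × $0.10 = $876. A comparable reservation might cost $200 up front plus $0.04/hour, or $200 + 8,760 × $0.04 ≈ $550 for the same year, so the reservation pays off whenever the instance runs most of the time.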

Reserved Instances Utilization Rates
  • Heavy Utilization - These instances are used 80%+ of the month. Think core website and email servers.
  • Medium Utilization - These instances are used for 40-79% utilization rates. If you run a few heavy traffic websites, then these instances would be the load-balanced servers to support demand during peak times like the evenings and weekends.
  • Light Utilization - These instances are used for 17-30% utilization rates. This time frame corresponds really well with development servers that are started in the morning, run for 7-8 hours then turned off in the evening.

Spot Instances

Spot Instances are similar to On-Demand Instances, but are designed for special-project type circumstances. Amazon obviously wants to keep all of their servers running all the time (i.e. 100% utilization); however, with the on-demand structure, there are times when some servers are not being used. During these slow times, Amazon would rather sell time on them temporarily at a discount than let them sit empty. These temporarily discounted servers are the Spot Instances. Spot Instances work really well for periodic maintenance activities. To use a Spot Instance, you indicate the size of instance and the maximum price you are willing to bid for usage of that instance. Once the going price for that size of instance drops below your bid price, the instance starts up and you keep it until the price goes back over your maximum bid. Note you are only charged the actual spot price, not your bid price, so you can often pay less per hour than you bid.
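
A quick illustration with hypothetical numbers (not real AWS prices): suppose you bid a maximum of $0.10/hour and the spot price sits at $0.03/hour for the 5 hours your job runs. You pay 5 × $0.03 = $0.15, not 5 × $0.10 = $0.50, and the instance is only reclaimed if the spot price climbs above your $0.10 bid.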

EC2 Resources

Elastic Block Store Volumes

Elastic Block Storage (EBS) volumes are the virtual hard drives of the virtual server. There are two types of EBS Volumes, Standard and Provisioned  IOPS (Input/output Operations Per Second).

Standard EBS Volumes

Standard EBS volumes correspond best to physical hard disks. You can read and write to them at average rates; they deliver about 100 IOPS. Unless you need high read/write capabilities, a standard EBS volume is what you'd use.

Provisioned IOPS Volumes

Provisioned IOPS volumes are for high read/write situations; the most common example is a database server. These volumes are very powerful, but also very expensive (relatively).

There are other AWS services offered, like S3, SES and RDS, but I currently don't use them, so I will avoid going into detail on those services until I do.