Blog

Recent Posts

MITM and Why JS Encryption is Worthless

You build this great web app loaded with JavaScript-based features, with a spectacular AJAX setup where content is pulled in instantly as the user wants it. The application is a real work of art.

Your development version works flawlessly and your client loves the new website, but, as always happens, they want a few changes. This button gets moved from the bottom of the page to the top of the page. The order of that menu gets rearranged. Every web designer has experienced these general design and layout changes, which take a bit of time to complete, but are technologically easy and take client satisfaction to a whole new level.

The client then asks about security and how their customers' information is protected. They had heard about a friend's website getting hacked and want to make sure it doesn't happen to their new beauty. You tell them how the server can only be accessed over an FTP connection via cPanel and how you used this great input-filtering class so bad stuff cannot be uploaded. The client misunderstands your conversion of programming jargon into “Real English” and infers that all the data their customers send is protected.

You know the correct answer is to serve the website over HTTPS and use TLS to encrypt data between the customer's browser and the server. The problem with doing that for this particular client is that they went cheap on hosting and use one of those ultra-discount shared-hosting plans, so deploying TLS on their site would add $10 a month to their hosting bill, on top of the cost of the certificate, its setup and its annual renewal. You know the client is not going to like this added expense, so you run through every option you can think of to protect customer data between the browser and the server. Having done all this great work with JavaScript already, the obvious solution is to use JavaScript to encrypt the customer data in the browser and then just decrypt it on the server.

WRONG!

First, encryption via JavaScript in the browser is nearly worthless. There is a single, very limited situation where encryption can be performed in the browser, and I will discuss that at the end. In general, however, JavaScript-based encryption provides no real level of security. It does not matter how good the algorithm is, how long the key is, or whether you use RSA/asymmetric encryption or AES/symmetric encryption: JavaScript-based encryption provides no real level of security.

The weakness in every situation revolves around the classic man-in-the-middle (MITM) attack. A hacker filters the connection between your client's server and the customer's browser, making changes as they see fit and capturing all the data they want.

This is how it would work:

  1. Visitor connects to the internet through a compromised router (think unencrypted coffee house wi-fi)
  2. Hacker monitors the request data going through the router and flags the Visitor's connection as an attack point
  3. Visitor requests a page on the standard HTTP domain
  4. Hacker intercepts the request and passes it along unadulterated, after recording all the data tied to the request (like the URL and cookies)
  5. Server hosting domain builds up web page normally and sends it back to Visitor
  6. Hacker intercepts the response html and makes a change to remove the JavaScript encryption mechanism and records any encryption keys
  7. The Visitor gets the web page which looks perfectly normal and proceeds to enter their address, credit card or sensitive data in a form and send it back
  8. Hacker captures this sensitive data, then mimics the JavaScript encryption mechanism and forwards the correctly encrypted data on to the server
  9. The server hosting the domain decrypts the data and continues on, never realizing someone intercepted the data

This general methodology will work in any situation where a response is served over HTTP, without the protections offered by HTTPS via TLS. Unless the HTML response and all JavaScript associated with the browser-based encryption mechanism are served over TLS, there is no guarantee that the end user received the correct algorithm, ran the algorithm and sent only encrypted data back to the server.

This guarantee cannot be short-cut by serving the JavaScript files over HTTPS but not the original HTML content, as a hacker could either remove the JavaScript or simply change the URL of the JavaScript files to a version they control. Serving the HTML content over HTTPS but not the JavaScript allows the hacker to modify the JavaScript in transit, and it will also create mixed-content errors for the user when the browser sees content being served over both HTTP and HTTPS.

The Crux

The crux of the problem with encryption via JavaScript is disseminating an encryption key that only the user can create and keeping that key local to the user. Assuming the encryption key can be securely disseminated to the user via a secondary channel, like email, SMS or even physically offline, how do you keep the key only with the user when they need to use it on various pages of the website?

The short answer is you cannot do it.

Without HTTPS, the developer does not control the authenticity of the content being delivered. If you cannot control the authenticity of the content being delivered, you have no way to make sure additional methods of data extraction are not coupled with your intended methods.

Hashing is not Encryption

Hashing is generally thought of as one-way encryption, while symmetric and asymmetric encryption are viewed as two-way encryption. Hashing, however, has the same issues as two-way encryption when used in the browser: for it to provide any value, the entire connection has to occur over TLS, which largely negates the value the hashing was hoped to provide.

A few months ago, after giving a talk at Atlanta PHP on API Security, I was asked about the concept of hashing a password in the browser and then transmitting the digest (the hexadecimal version of the hash) to the server and querying the database for that digest. After having him break down exactly what was happening, he realized that the whole process still has to occur over TLS and that it provides no more security than transmitting the raw password to the server and having the server do the hashing. From an attack standpoint, the digest simply becomes the password; and because of the performance of JavaScript hashing across a variety of platforms, users will only accept the time delay of certain hashing algorithms and a certain number of iterations of the given algorithm.
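
For comparison, here is a minimal sketch of doing the hashing where it belongs, on the server, using PHP's built-in password API (password_hash/password_verify, PHP 5.5+). The table and column names are only illustrative; the point is that the raw password travels over TLS and only a salted, slow hash is ever stored.

//! Store a user's password as a salted bcrypt hash (server-side)
function storePassword(PDO $db, $username, $rawPassword) {
    // PASSWORD_DEFAULT is currently bcrypt and includes a per-user salt
    $hash = password_hash($rawPassword, PASSWORD_DEFAULT);
    $stmt = $db->prepare('UPDATE users SET password_hash = ? WHERE username = ?');
    $stmt->execute(array($hash, $username));
}

//! Check a submitted password against the stored hash
function checkPassword(PDO $db, $username, $rawPassword) {
    $stmt = $db->prepare('SELECT password_hash FROM users WHERE username = ?');
    $stmt->execute(array($username));
    $hash = $stmt->fetchColumn();
    return $hash !== FALSE && password_verify($rawPassword, $hash);
}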

The gentleman next suggested using a two-factor key as the salt for the hash and sending this new digest to the server. This actually makes the situation less secure, because in order to continually validate the password, the server must store the password in plain text (or symmetrically encrypted, which is only marginally better). If/when the database is hacked, all the passwords are immediately compromised, rather than after the significant delay provided by current key-stretching techniques and robust hashing algorithms.

I have actually seen another situation where hashing in the browser reduced the overall security of the application. In this circumstance, the username was concatenated to the password, the result was hashed, and the digest was sent to the server for validation. The app did not even send the raw username; it simply sent the digest. The digest was then queried in the database, and whichever user happened to have that digest as their password became the authenticated user. I should correct that: the active user. Authentication is much too strong a word for what was happening. This methodology significantly reduced the entropy of the user credentials, allowing for real chances of digest collisions where User B ends up with the same digest as User A, and therefore the system thinks User B is User A.

Minimal Value

At the very beginning I mentioned one particular situation where JavaScript encryption (or hashing, for that matter) has some minimal value: when the server and the browser communicate 100% over HTTPS, with all the content encrypted during transmission and authenticated at the user end, but the JavaScript in the browser must communicate with a third-party server which does not support TLS. In this situation, JavaScript can be trusted to securely encrypt the data being sent over HTTP to the third-party server, which can then securely decrypt it. This whole setup only makes sense if the third party does not support TLS while your server supports it completely. I have seen this setup once, in a payment processing situation.

LivingSocial applies the third-party-server principle to their internal subnet. The browser receives everything over TLS and uses asymmetric encryption to encrypt the customer's credit card data. The browser then posts this encrypted data to the LivingSocial domain, which is really just a network address translation (NAT) point into their internal subnet. The data is then directed all the way to their gateway processor (Braintree) without ever being decrypted within their subnet. This effectively provides their customers full end-to-end encryption of their credit card data, without having to deal with the redirects and other tricks common in the payment processing industry.

JavaScript-based hashing has a different situation where it can create value: weakening brute-force attacks. As I mentioned earlier, hashing public form data prior to submitting it can increase the cost of form spam to spammers, while the hash can be validated on the server at minimal cost.

Summary

Do not expect JavaScript to impart any security to your web application. Encryption and hashing in the browser, for the purposes of security, is a pointless task and simply results in another case of security theater.

Security Theater Coined: https://www.schneier.com/crypto-gram/archives/2003/1115.html

Living Social Payment Processing: https://www.braintreepayments.com/blog/client-side-encryption/

Defending Against Spambots - Dynamic Fields

One of the things spambots often cannot do is run JavaScript. A simple preventative measure, therefore, is to dynamically create a form field via JavaScript that requires some kind of user interaction to pass the server-side validation.

Initially this concept was applied to a simple check box with the label “Check if you are human.” Spambots would neither create nor check the box, and the presence of the checked box was used to determine whether the form was submitted by a human.

More advanced spambots utilize the V8 JavaScript engine and can mimic the loading of the page, where the dynamic field is created. The bot then uses this dynamically created DOM as the source from which to pull the form element, and the associated field names, to be submitted. This level of sophistication is relatively rare in comment-spam bots, but for spambots focused on user account forms (login, password reset and account setup) it is becoming more common due to the increased value associated with bypassing these forms' validation methods.

The big caveat with this defense is that the 10% or so of users who have JavaScript disabled will never see the dynamic field and will submit the form without it, just like a spambot. An alternative to having JavaScript create the fields is to use the new HTML5 range input and have the user move the slider from left to right, or to the center, depending on the instructions in the associated label. This only works in newer browsers, but helps reduce some of that 10%.
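
The server-side half of the check is trivial. Here is a minimal sketch, assuming the page's JavaScript adds a field named human_check once the user interacts with it (the field name and value are purely illustrative):

//! Returns TRUE if the JavaScript-created field came back with the expected value
function validateDynamicField() {
    // the field does not exist in the raw HTML; only a JavaScript-capable
    // client that actually ran the page script will submit it
    return isset($_POST['human_check']) && $_POST['human_check'] === 'yes';
}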

Request Based Field Names

Merging the underlying concepts behind honey pot fields, form expirations and dynamic fields creates request-based field names. In this situation, every request has a unique set of field names, and the field names are validated against the source of the request. If the field names have been reused, the submission is deemed spam. This requires the form to be fetched fresh for every submission, which often isn't the case with spambots. Parsing the HTML requires significant processing power (from a computer or person), which limits the cost effectiveness of spam, whose value proposition is often based upon volume.
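
A minimal sketch of the idea in PHP: derive each field name from a secret token tied to the visitor, so that a scraped or replayed form no longer lines up with what the server expects. The helper names and the session-based token are illustrative assumptions; rotating the token after each render would make the names truly per-request.

session_start();

//! Map a logical field name to the obfuscated, per-visitor field name
function requestFieldName($baseName) {
    if (empty($_SESSION['form_token'])) {
        $_SESSION['form_token'] = bin2hex(openssl_random_pseudo_bytes(16));
    }
    return 'f_' . substr(hash_hmac('sha256', $baseName, $_SESSION['form_token']), 0, 12);
}

//! Fetch the submitted value for a logical field name, or NULL if it is missing
function requestFieldValue($baseName) {
    $name = requestFieldName($baseName);
    // a missing field means the form was stale, scraped or replayed
    return isset($_POST[$name]) ? $_POST[$name] : NULL;
}

// when rendering the form:
// echo '<input type="email" name="' . requestFieldName('email') . '"/>';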

Defending Against Spambots - CAPTCHAs

CAPTCHA is a backronym for “Completely Automated Public Turing test to tell Computers and Humans Apart” and is generally the bane of any user trying to submit a public form. The concept involves displaying an image containing characters and having the human retype the characters into a text box. Computers are not supposed to be able to understand the characters in the image, while humans can read them easily.

This worked well in 1997, when the concept was developed, but advances in image processing have required the images to become more and more obscured from simple plain text. Adding colors and lines, as well as distorting the shapes of the letters, keeps image-processing applications from detecting the text. This obscurity also makes it challenging for anyone with visual impairments to read the text and get the CAPTCHA response correct.

These user experience issues make CAPTCHAs an undesirable solution to spambots, but one that can be implemented when the other solutions are inadequate. UX-focused sites often use a CAPTCHA only when other protections have returned multiple failures but the system does not want to lock out a potentially legitimate user: password resets, login screens, account creation and search pages.

Integration of a CAPTCHA solution involves either integrating a third-party source into your form or generating the images yourself. Generating the images locally via an image manipulation library sounds like a good, cheap way to implement CAPTCHA; however, significant effort has been put into defeating the protection, and everything you can think of doing to the text to prevent analysis while keeping it readable by a human has been reverse-engineered. Good CAPTCHA solutions test their image database against the best analysis tools on a regular basis, eliminating the images those tools defeat. Consequently, homebrew CAPTCHAs are often little better than having no protection at all, while providing a noticeable degradation in the user experience.

Integrating a third-party solution generally involves embedding a JavaScript snippet in your form which fetches the image and a unique ID code from the provider's servers. The user then provides you with the plain-text answer, and you check it, along with the image ID code submitted as a hidden form field, against the provider to get a pass or fail response. All of the good CAPTCHA providers have clear documentation about this process and try to make their solution as easy as possible to integrate.
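
The server-side verification step usually looks something like the sketch below. The endpoint URL, field names and JSON response format are placeholders, not any particular provider's API; consult your provider's documentation for the real values.

//! Ask the CAPTCHA provider whether the user's answer matches the challenge
function verifyCaptcha($challengeId, $userResponse, $apiKey) {
    $ch = curl_init('https://captcha-provider.example.com/verify'); // placeholder endpoint
    curl_setopt_array($ch, array(
        CURLOPT_POST           => TRUE,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'key'       => $apiKey,
            'challenge' => $challengeId,
            'response'  => $userResponse,
        )),
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_TIMEOUT        => 5,
    ));
    $body = curl_exec($ch);
    curl_close($ch);

    // assume the provider answers with a JSON body like {"success":true}
    $result = json_decode($body, TRUE);
    return isset($result['success']) && $result['success'] === TRUE;
}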

I have avoided CAPTCHAs primarily due to the poor user experience factor. Different combinations of the other methods, especially the hashcash and the Bayesian analysis have provided good protection so far.

Defending Against Spambots - Form Expirations

Humans are inherently slower than computers when it comes to reading and filling out a form. Even a simple login form where everything is auto-completed and you just have to click the “login” button takes a second, while a computer can do it in milliseconds. More complex forms require even more time for a human to read, understand and complete. Recording the timestamp of the form request and requiring the response to occur within a set range makes automatic completion of the form more expensive for a spambot.

The timestamp can be sent along with the normal form fields as a hidden input field, so long as the validity of the timestamp is checked when validating the form submission. The easiest method is an HMAC check with a server-specific key. This also allows additional data to be folded into the timestamp field, like the requester's IP address and user agent.

Example Creation and Validation of a Form Timestamp

// globals
$gvServerPublicFormKey = '5f4dcc3b5aa765d61d8327deb882cf99';

//! Create Timestamp Field
//! @return string HTML of timestamp field
function createTimestampField() {
    global $gvServerPublicFormKey;

    // get current unix timestamp
    $t = time();

    // compose raw value as time:IP:user agent
    $data = $t . ':' . $_SERVER['REMOTE_ADDR'] . ':' . $_SERVER['HTTP_USER_AGENT'];

    // generate HMAC hash
    $hash = hash_hmac('sha512', $data, $gvServerPublicFormKey);

    // build hidden input, with the timestamp embedded in the field name
    $html = '<input type="hidden" name="ts-' . $t . '" value="' . $hash . '"/>';

    return $html;
}

//! Validate Timestamp Input
//! @param int $min Minimum delay time in seconds @default[5]
//! @param int $max Maximum delay time in seconds @default[1200]
//! @returns bool Returns the validity of the timestamp input field
function validateTimestampInput($min = 5, $max = 1200) {
    global $gvServerPublicFormKey;

    $t = 0;
    $hash = '';

    // find the timestamp field
    foreach($_REQUEST as $key => $val) {
        if(strpos($key, 'ts-') !== 0) continue;

        $t = substr($key, 3);

        // validate potential timestamp value: an integer submitted no sooner
        // than $min seconds after creation and no later than $max seconds after
        if(!$t || intval($t) != $t || $t + $min > time() || $t + $max < time()) {
            continue;
        }

        $hash = $val;
        break;
    }

    // potentially valid timestamp not found
    if(!$hash) return FALSE;

    // regenerate hash based upon the timestamp value
    $data = $t . ':' . $_SERVER['REMOTE_ADDR'] . ':' . $_SERVER['HTTP_USER_AGENT'];
    $correctHash = hash_hmac('sha512', $data, $gvServerPublicFormKey);

    // return validity of hmac hash
    return hash_equals($correctHash, $hash);
}

Defending Against Spambots - Honeypots

Honeypots are a concept taken straight from email spam prevention and come in 2 types: honey pot fields and honey pot forms. Honeypots are basically a very tempting submission location that should never receive real data. Any submissions to the honeypot are automatically labeled as spam.

Honey pot fields are fields within a form that should always be left blank and are indicated as such to the user via a label. When a form is submitted with that field completed, it can be quickly marked as spam, discarded and the submitter fingerprint recorded for tracking. In order to make the field tempting, the field name and field type should be chosen wisely. An input field with a name of “website” and a type of “url” is more tempting to a spambot than an input field with a name of “honeypot” and a type of “text”. Good spambots will detect the field type and name and try to inject appropriate content to bypass automated validation mechanisms.

Example Honey pot field

<style>
  form > div#form_hp {
    position: absolute;
    left: -99999px;
    z-index: -99999;
  }
</style>
<form method="POST" action="">
  <div id="form_hp">
    <label for="another_email">Leave this field blank</label>
    <input id="another_email" name="another_email" type="email" value=""/>
  </div>
  <!-- the real form content -->
</form>

When hiding the honey pot field, the best method is to use embedded CSS to shift the field's wrapper off the screen. A good quality bot will check which fields are natively displayed and only submit information to those. Fields with "display:none" or "visibility:hidden" can easily be flagged as hidden. Even situations where the field itself is absolutely positioned off screen can be detected without too much difficulty. Moving the wrapper off screen via CSS requires considerably more programming to detect, as all the CSS needs to be parsed and applied before the display state of any field can be evaluated. The CSS should be embedded in the HTML to prevent loading issues where an external CSS file fails to load and the wrapper, along with the honey pot fields, is displayed to the user.

Honey pot forms are entire forms that a real user should never find or submit information to, but that are easily found by automated scripts. Hidden links to the page containing the form are embedded in the footer or header and marked so they should not be followed by bots. The page then contains a description that clearly states the form should not be used, along with a bunch of tempting fields to submit. Any submission of this form is consequently deemed to come from a bot, and appropriate measures are taken. This type of honey pot can be integrated into a web-server-layer filter (via a web application firewall like ModSecurity) where submissions are tracked before they reach the application layer and attacks are mitigated at the web server.

The biggest concern with honey pot forms is search engines finding the pages and then displaying them in search results. Appropriate steps should be taken to minimize bots following the honeypot links: use the rel="nofollow" attribute on the hidden links, a robots meta tag (e.g. <meta name="robots" content="noindex, nofollow">) in the HTML head section of the form page, and clear text on the page saying not to submit the form.

Defending Against Spambots - Request & Response Header Validation

Taking a step back from the HTML side is validation of the HTTP headers of the request for the form HTML and of the headers accompanying the subsequent submission of the form values. Running some basic checks on these HTTP headers can provide an early warning of the presence of a spambot.

Before serving the form HTML, the server validates that the request has the appropriate headers for a real browser. If the "Host", "User-Agent" or "Accept" headers are not sent, it's likely a cURL attempt to access the web page, and therefore a script to harvest and attack the form. This provides some basic security by obscurity, and as such should be viewed as an attack-limiting approach, not actual security for the form. An attacker can just as easily visit the actual page with a web browser, cut & paste the HTML into their script and attack the form. Limiting the display of the form simply limits how much of this process can be done by scripts, particularly poorly written spambots.

The other side of the coin is the set of headers that accompany the posting of the form values. In addition to checking for the headers required when initially serving the form, you should also check for the correct HTTP method (GET vs POST), the "Cookie" header (if applicable) and the "Referer" header (yes, the Referer header is misspelled in the standard). A real browser will never mess up the HTTP method and switch between GET and POST, while a spambot may default to the wrong method. Bots are also often mediocre at managing cookies, so the lack of a cookie header can be indicative of a spambot, or of a paranoid user.

The "Referer" header should not be used conclusively to determine whether the submission came from a web browser. Some internet security suites and browser plugins mess with the "Referer" header, either erasing it or replacing it with the destination domain. Further, pages not served over TLS will not receive the "Referer" header when the user arrives from a page served over TLS. (Forms served over TLS should never post to a page not served over TLS anyway.) Lastly, the HTML5 standard includes a 'referrer' meta tag that can be set to 'no-referrer', in which case the browser is not supposed to send the Referer header from the source page.
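
A rough sketch of these submission-side checks is below. Treat a failure as one more signal rather than proof of a bot, since, as noted above, privacy tools can legitimately strip or rewrite the Referer header.

//! Basic sanity checks on the headers accompanying a form submission
//! @param string $expectedHost Hostname the form page was served from
//! @return bool TRUE if nothing looks wrong, else FALSE
function checkSubmissionHeaders($expectedHost) {
    // the form should always come back as a POST
    if ($_SERVER['REQUEST_METHOD'] !== 'POST') return FALSE;

    // if a Referer is present, it should point back at our own host
    if (!empty($_SERVER['HTTP_REFERER'])) {
        $host = parse_url($_SERVER['HTTP_REFERER'], PHP_URL_HOST);
        if ($host && strcasecmp($host, $expectedHost) !== 0) return FALSE;
    }

    // the "Cookie" header could be checked here as well, but remember that a
    // missing cookie may just be a paranoid user rather than a bot
    return TRUE;
}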

The last check that should be performed is the geolocation of the source IP address within the context of the form. For example, if the form is for online ordering at a pizzeria in Chicago, a request or submission from an IP address geolocated in Australia has a very low probability of being legitimate.
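
If the PECL geoip extension is available, a minimal country-level check might look like the sketch below. The allowed-country list is obviously application-specific, and a missing extension or unknown address should fail open rather than block a possible customer.

//! Returns TRUE if the client IP geolocates to an allowed country (or cannot be checked)
function isPlausibleOrigin($ip, array $allowedCountries = array('US')) {
    // fail open if the geoip extension is not installed
    if (!function_exists('geoip_country_code_by_name')) return TRUE;

    $country = @geoip_country_code_by_name($ip);

    // unknown addresses also fail open; a match against the list passes
    return $country === FALSE || in_array($country, $allowedCountries, TRUE);
}

// example: flag the Australian request to the Chicago pizzeria for extra scrutiny
$suspicious = !isPlausibleOrigin($_SERVER['REMOTE_ADDR'], array('US'));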

There is one caveat to filtering based upon IP addresses: VPNs and proxies. Smartphones in particular should be the biggest concern for most implementations, since the IP address of a phone on the mobile network is often geolocated to the carrier's corporate headquarters rather than the location of the user.

Example HTTP Header Validation Function

//! Checks for HTTP Request Headers of a submitted form page
//! @param string $formUrl URL of the form page
//! @return bool Returns TRUE if all the checks passed, else FALSE
function checkRequestHeaders($formUrl = '') {
    // make sure $formUrl is a legit url
    if(!empty($formUrl)
        && !filter_var($formUrl, FILTER_VALIDATE_URL, FILTER_FLAG_SCHEME_REQUIRED | FILTER_FLAG_HOST_REQUIRED)
    ) {
        return FALSE;
    }

    // verify presence of basic headers
    if(empty($_SERVER['HTTP_USER_AGENT'])
        || empty($_SERVER['REMOTE_ADDR'])
        || empty($_SERVER['HTTP_ACCEPT'])
        || empty($_SERVER['HTTP_ACCEPT_LANGUAGE'])
    ) {
        return FALSE;
    }

    return TRUE;
}

A complete list of HTTP header fields can be found on Wikipedia:

https://en.wikipedia.org/wiki/List_of_HTTP_header_fields

W3C Referrer Policy Working Draft

https://www.w3.org/TR/referrer-policy/

Defending Against Spambots - Field Specificity & Validation

Field specificity and absolute validation of the replied values should be the first level of defense. Whenever a public form is created, you create the inputs with as much specificity as possible, then validate strictly against this specificity. The HTML5 specification has made this much easier with the expansion of the types of input fields.

For example, if you are asking for a user's age, use an input with type="number" and step="1" min="5" max="120" instead of a simple type="text". This forces the user to input an integer between 5 and 120 (the range of plausible ages for a user); otherwise the form field indicates the value is illegal and prevents submission of the form. Then on the server side, you validate strictly against the same criteria, immediately tossing any submission that contains an invalid value. There is an added bonus: the error messages for HTML5-compliant browsers don't need to be as robust, since the user should already have received an error when they first attempted to fill in the field.

Example Validation Function

//! Validate input value of a Number Input
//! @param string $input Inputted value
//! @param int $min Minimum Value @default[0]
//! @param int $max Maximum Value @default[100]
//! @param string $step Incremental increase between minimum and maximum value @default[1]
//! @success string Returns inputted value on success (including potentially 0)
//! @failure FALSE Returns FALSE on validation failure
function validateInputNumber($input, $min = 0, $max = 100, $step = 1) {
    // verify all inputs are numbers
    if(!is_numeric($input)
        || !is_numeric($min)
        || !is_numeric($max)
        || !is_numeric($step)
    ) {
        return FALSE;
    }

    // verify $input is within appropriate range
    if($input < $min || $input > $max) return FALSE;

    // check that $input is at a valid step position
    $inc = ($input - $min) / $step;
    if($inc != intval($inc)) return FALSE;

    // all checks passed, return $input
    return $input;
}

// example pass ($input == '32.5')
$input = validateInputNumber('32.5', 0, 100, 2.5);

// example fail ($input === FALSE)
$input = validateInputNumber('32', 0, 100, 2.5);

A complete list of the HTML5 input fields can be found at MDN:

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input

Defending Against SpamBots

SPAM is THE four-letter word of IT. Nothing makes users, developers and IT managers more annoyed than filtering this frivolous data coming into their systems. Spam as it relates to email has some really good utilities that can block a large amount of the unwanted messages while keeping relatively low false positive and false negative rates. Most of these utilities are so mature that you simply install them, configure a few settings and generally forget about them, with the filter taking care of everything.

Comment or Form Spam, on the other hand, does not have a drop-in type solution because of the level of integration a form has within the larger system. The field types and names of each form vary drastically when compared to the MIME headers of an email. Drop-in solutions have been attempted for form spam, however they often have limited success when they are run independent of more integrated methods.

The various form spam prevention methods can be grouped into one of 10 general categories.

Field Specificity & Validation

Field specificity and absolute validation of the replied values should be the first level of defense. Whenever a public form is created, you create the inputs with as much specificity as possible, then validate strictly against this specificity. The HTML5 specification has made this much easier with the expansion of the types of input fields.

Request & Response Header Validation

Taking a step back from the HTML side is validation of the HTTP headers of the request for the form HTML and of the headers accompanying the subsequent submission of the form values. Running some basic checks on these HTTP headers can provide an early warning of the presence of a spambot.

Honeypots

Honeypots are a concept taken straight from email spam prevention and come in 2 types: honey pot fields and honey pot forms. Honeypots are basically a very tempting submission location that should never receive real data. Any submissions to the honeypot are automatically labeled as spam.

Form Expirations

Humans are inherently slower than computers when it comes to reading and filling out a form. Even a simple login form where everything is auto-completed and you just have to click the “login” button takes a second, while a computer can do it in milliseconds. More complex forms require even more time for a human to read, understand and complete. Recording the timestamp of the form request and requiring the response to occur within a set range makes automatic completion of the form more expensive for a spambot.

Dynamic Fields

One of the things spambots often cannot do is run JavaScript. A simple preventative measure, therefore, is to dynamically create a form field via JavaScript that requires some kind of user interaction to pass the server-side validation. This can be as simple as a check box that the user needs to check to indicate they are human or a slider that needs to be moved to a specific position.

Request Based Field Names

Merging the underlying concepts behind honey pot fields, form expirations and dynamic fields creates request-based field names. In this situation, every request has a unique set of field names, and the field names are validated against the source of the request. If the field names have been reused, the submission is deemed spam. This requires the form to be fetched fresh for every submission, which often isn't the case with spambots. Parsing the HTML requires significant processing power (from a computer or person), which limits the cost effectiveness of spam, whose value proposition is often based upon volume.

CAPTCHAs

CAPTCHA is a backronym for “Completely Automated Public Turing test to tell Computers and Humans Apart” and is generally the bane of any user trying to submit a public form. The concept involves displaying an image containing characters and having the human retype the characters into a text box. Computers are not supposed to be able to understand the characters in the image, while humans can read them easily.

Hashcash

Hashcash is an iterative hashing scheme that requires the client (i.e. the web browser) to repeatedly hash a set of data (the serialized form fields plus a changing nonce) until a bitmask can be cleared. This requires the web browser to expend a noticeable amount of work to find a qualifying hash, while the server simply needs to take the inputs and perform the hash once to verify it.
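
The server-side half is cheap to sketch. Assuming the client serializes the form fields in sorted order and appends the nonce it found, the server only has to hash once and check the leading zero bits; the serialization format, the separator and the 16-bit difficulty below are illustrative choices that simply have to match the client code.

//! Verify a hashcash proof-of-work for a form submission
//! @param array  $fields   Submitted form fields (minus the nonce itself)
//! @param string $nonce    Nonce found by the browser
//! @param int    $zeroBits Number of leading zero bits required
//! @return bool
function validateHashcash(array $fields, $nonce, $zeroBits = 16) {
    ksort($fields); // canonical ordering must match the client-side serialization
    $digest = hash('sha256', http_build_query($fields) . ':' . $nonce, TRUE);

    // whole leading bytes must be zero
    $fullBytes = (int) ($zeroBits / 8);
    if (substr($digest, 0, $fullBytes) !== str_repeat("\0", $fullBytes)) return FALSE;

    // then the top bits of the next byte
    $remainder = $zeroBits % 8;
    if ($remainder === 0) return TRUE;
    return (ord($digest[$fullBytes]) >> (8 - $remainder)) === 0;
}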

Blacklists & Keyword Filtering

Blacklists and keyword filtering involve running regular expressions against the submitted content to extract HTML tags, URLs, email addresses and specific keywords. The results of the regular expressions are checked against a blacklist of banned values, with any match indicating a spammy submission. This method is strictly dependent upon the quality and completeness of the blacklist database.
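
A stripped-down sketch of the idea: pull out anything that looks like a URL or email address, then scan the extracted strings and the raw text against a banned list. The regular expressions, the field name and the sample blacklist entries here are only illustrative.

//! Returns TRUE if the submitted text trips the blacklist
function containsBlacklistedContent($text, array $blacklist) {
    // extract urls and email addresses from the submission
    preg_match_all('~https?://[^\s"\'<>]+~i', $text, $urls);
    preg_match_all('~[\w.+-]+@[\w.-]+\.[a-z]{2,}~i', $text, $emails);

    $candidates = array_merge($urls[0], $emails[0], array($text));

    foreach ($candidates as $candidate) {
        foreach ($blacklist as $banned) {
            if (stripos($candidate, $banned) !== FALSE) return TRUE;
        }
    }
    return FALSE;
}

// usage with a purely illustrative blacklist; 'comment' is whatever field holds the user's text
$blacklist = array('viagra', 'casino', 'spam-domain.example');
$isSpam = containsBlacklistedContent($_POST['comment'], $blacklist);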

Bayesian Analytics

Bayesian analysis is the basis for most of the good email spam filters. The overall principle is to run statistical analysis on the submission headers and the posted values against a database of known good content and known spam content. The Bayesian analysis outputs a probability of the content being spam, which is compared against a set threshold; the submission is discarded if the probability is too high. Bayesian analysis can be the most effective method, since it is based upon the actual content of the form submission, but its effectiveness is highly dependent upon training against good and bad content. It is also by far the most complex to implement and requires the most resources to run.
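
Training a filter is beyond a blog snippet, but the scoring step itself is compact. Here is a rough sketch, assuming you already have per-token counts from a corpus of known spam and known good ("ham") submissions:

//! Estimate the probability that a submission is spam from token frequencies
//! @param string $text       Submitted content
//! @param array  $spamCounts token => number of spam submissions containing it
//! @param array  $hamCounts  token => number of good submissions containing it
//! @return float Probability between 0 and 1
function bayesianSpamScore($text, array $spamCounts, array $hamCounts) {
    preg_match_all('~[a-z0-9$]+~', strtolower($text), $matches);

    $logSpam = 0.0;
    $logHam  = 0.0;
    foreach (array_unique($matches[0]) as $token) {
        $s = isset($spamCounts[$token]) ? $spamCounts[$token] : 0;
        $h = isset($hamCounts[$token])  ? $hamCounts[$token]  : 0;
        if ($s + $h == 0) continue; // never seen this token, no evidence

        // per-token spam probability with mild smoothing
        $p = ($s + 1) / ($s + $h + 2);
        $logSpam += log($p);
        $logHam  += log(1 - $p);
    }

    if ($logSpam == 0.0 && $logHam == 0.0) return 0.5; // no usable tokens

    // combine naively in log space to avoid floating point underflow
    return 1 / (1 + exp($logHam - $logSpam));
}

// discard or quarantine when the score crosses your chosen threshold, e.g. 0.9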

The source code required to implement some of these methods can be long and a little complex. So, over the next month, I will be publishing posts with more details on how to implement each of these protections as well as some notes on when each methodology should be implemented.

CSS for Mobile Devices - Media Query Essentials

In the modern age of web design, understanding media queries is essential if you want a website to be functional across multiple device platforms. Media queries allow you to conditionally apply CSS selectors based upon the viewport, screen size and resolution.

Implementation Theory - Smallest to Largest

The most common implementation theory for responsive website design is the mobile-first approach. The concept is that you start with the smallest screen and build your core CSS file for those dimensions. Then, for each subsequently larger screen size, you add media queries and additional CSS selectors. Applying selectors from the smallest to the largest screen size allows you to minimize bandwidth requirements for smartphones by including only small, low-resolution images, while allowing large, high-resolution images to be included on larger screens.

Media Query Syntax

Media queries are implemented using the @media rule followed by its constraints. The CSS selectors to be applied under the media query are grouped within curly brackets, just like any other CSS selector.

@media screen and (max-width: 461px) {

  body { background-color:red; }

}

This changes the background of the body element to red on screens 461px wide or narrower.

If all of the constraints are true, the media query evaluates to true and its content is applied to the page. Individual constraints can be coupled together with the "and" keyword to require multiple constraints to be true. Multiple sets of constraints can be coupled together in a comma-separated list (just like multiple CSS selectors) to create a logical OR. Unless otherwise specified, the "all" media type is assumed for every media query, which means an empty constraint set is the same as having no media query wrapper at all: the selectors are applied in all situations.

Constructing Media Queries

Media query constraints use some of the CSS properties and add a few more to be applicable on the device level. Note, constraints do not always need a value to be applied.

Viewport

The viewport is the box in which the page is constructed. This is not always the same as the device width or the browser width. Users can set automatic zoom features which change the ratio between the browser width and the viewport, causing web developers headaches. You can set a scaling ratio for the viewport using the viewport meta tag in the HTML head section.

Constraints

Constraint Value* Effect
all - Apply to all media types. This is the default behavior of any media query, so it is only needed when using complex constraints.
screen - Apply to only screen media types
print - Apply to only print media types. This is useful for creating a custom layout when a visitor want to print the site
handheld - Apply to only handheld media types.
only - Limits application of the media query to browsers that properly support media queries; older browsers do not recognize the keyword and skip the rule entirely rather than misapplying it.
not - Apply to all situations except the identified one.
width px/em/cm Limit to browsers with a specific RENDERING width. This turns out to be less useful than min-width or max-width.
min-width px/em/cm  Limit to browsers with a RENDERING width of at least the set amount. Used when applying media queries from smallest to largest.
max-width px/em/cm Limit to browsers with a RENDERING width up to set amount. Used when applying media queries from largest to smallest, or to further constrain selectors when used with min-width.
height px/em/cm Limit to browsers with a specific RENDERING height. This turns out to be less useful than min-height or max-height. Height is not often used since width can often dictate the specific device and height becomes less important for vertically scrolling pages.
min-height px/em/cm Limit to browsers with a RENDERING height of at least the set amount.
max-height px/em/cm Limit to browsers with a RENDERING height up to set amount.
device-width px/em/cm Limit to browsers with a specific SCREEN width. This turns out to be less useful than min-device-width or max-device-width.
min-device-width px/em/cm Limit to browsers with a SCREEN width of at least the set amount. Used when applying media queries from smallest to largest.
max-device-width px/em/cm Limit to browsers with a SCREEN width up to set amount. Used when applying media queries from largest to smallest, or to further constrain selectors when used with min-device-width.
device-height px/em/cm Limit to browsers with a specific SCREEN height. This turns out to be less useful than min-device-height or max-device-height.
min-device-height px/em/cm Limit to browsers with a SCREEN height of at least the set amount.
max-device-height px/em/cm Limit to browsers with a SCREEN height up to set amount.
orientation portrait | landscape Limit to browsers with a particular orientation. This is effectively only used when dealing with mobile devices, which are orientation conscious.
aspect-ratio ratio Limit to a ratio between the "width" and the "height" values. 
min-aspect-ratio ratio Limit to a minimum ratio between the "width" and the "height" values.  
max-aspect-ratio ratio Limit to a maximum ratio between the "width" and the "height" values.
device-aspect-ratio ratio Limit to a ratio between the "device-width" and the "device-height" values. Common values include 1/1, 4/3, 5/3, 16/9, 16/10.
min-device-aspect-ratio ratio Limit to a minimum ratio between the "device-width" and the "device-height" values. 
max-device-aspect-ratio ratio Limit to a maximum ratio between the "device-width" and the "device-height" values.
resolution dpi/dpcm Limit to devices with a specified resolution. dpi = dots per CSS inch, dpcm = dots per CSS centimeter.
min-resolution dpi/dpcm Limit to devices with a minimum resolution.
max-resolution dpi/dpcm Limit to devices with a maximum resolution.
color -/integer Limit to a specific color depth per component. For example, 0 indicates a monochrome device, while 8 indicates the standard 24-bit RGB palette (8 bits per component).
min-color integer Limit to a minimum color depth per component.
max-color integer Limit to a maximum color depth per component.
color-index -/integer Limit to a specific number of entries in the device's color lookup table. For example, 256 indicates an 8-bit (256-color) palette.
min-color-index integer Limit to a minimum total color depth. This is most effective at serving different background images based upon the displayable colors, saving bandwidth on monochrome and greyscale displays.
max-color-index integer Limit to a maximum total color depth.
monochrome -/integer Limit to a specific greyscale color depth on a monochrome device. This is valuable when creating a custom display for printing out the page.
min-monochrome integer Limit to a minimum greyscale color depth.
max-monochrome integer Limit to a maximum greyscale color depth.
scan progressive | interlace Limits to TV media with progressive or interlaced scanning. Seldom used.
grid -/0/1 Limit to grid-based displays (such as character terminals) rather than bitmap displays. Seldom used.

* Dashes (-) indicate the value can be omitted and still work fine.

Legacy and Browser-Specific Constraints

Legacy Constraint Browser Modern Constraint
-moz-images-in-menus Firefox 3.6+ none; Used to determine if images can appear in menus. Accepts 0/1. Corresponds to constraint "-moz-system-metric(images-in-menus)".
-moz-mac-graphite-theme Firefox 3.6+ none; Used to determine if user is using the "Graphite" appearance on Mac OS X. Accepts 0/1.Corresponds to constraint "-moz-system-metric(mac-graphite-theme)".
-moz-device-pixel-ratio -webkit-device-pixel-ratio Firefox 4-15 resolution
-moz-os-version Firefox 25+ none; Used to determine which operating system is running the browser. Currently only implemented on Windows, with values of "windows-xp", "windows-vista", "windows-win7", "windows-win8".
-moz-scrollbar-end-backward Firefox 3.6+ none; Used to determine if user's interface displays a backward arrow at the end of the scrollbar. Accepts 0/1. Corresponds to constraint "-moz-system-metric(scrollbar-end-backward)".
-moz-scrollbar-start-forward Firefox 3.6+  none; Used to determine if user's interface displays a forward arrow at the start of the scrollbar. Accepts 0/1. Corresponds to constraint "-moz-system-metric(scrollbar-start-forward)".

Screen Sizes

Device Display (WxH) Viewport (WxH) Resolution Render
iPhone 2G, 3G, 3GS 320x480 320x480   163 dpi 1 dppx
iPhone 4, 4S 640x960 320x480  326 dpi 2 dppx
iPhone 5, 5C, 5S 640x1136 320x568 326 dpi 2 dppx
iPhone 6 750x1334 375x667 326 dpi 2 dppx
iPhone 6 Plus 1080x1920 414x736 401 dpi 3 dppx
iPad, iPad 2 768x1024 768x1024  132 dpi 1 dppx
iPad Air, iPad Air 2 1536x2048 768x1024 264 dpi 2 dppx
iPad mini 2, 3  1536x2048 768x1024 326 dpi 2 dppx
iPad mini 768x1024 768x1024  163 dpi 1 dppx
iMac 2560x1440 2560x1440 109 dpi 1 dppx
iMac Retina 5120x2880 5120x2880 218 dpi 1 dppx
MacBook Pro Retina -13.3" 2560x1600 1280x800 227 dpi 2 dppx
MacBook Pro Retina -15.4" 2880x1800 1440x900 220 dpi 2 dppx
Galaxy Nexus 720x1280 720x1280 316 dpi 1 dppx
Galaxy Mini 2 320x480 320x480 176 dpi 1 dppx
Galaxy S3 720x1280 360x640 306 dpi 2 dppx
Galaxy S4 1080x1920 360x640 441 dpi 3 dppx
Galaxy S5 1080x1920 360x640  432 dpi 3 dppx
Galaxy Tab 7 Plus 600x1024 600x1024  169 dpi 1 dppx
Galaxy Tab 8.9 800x1280 800x1280  169 dpi 1 dppx
Galaxy Tab 10.1 800x1280 800x1280 149.45 dpi 1 dppx
Google Nexus 4 768x1280 768x1280  318 dpi 1 dppx
Google Nexus 5 1080x1920 360x640 445 dpi 3 dppx
Google Nexus 6 1440x2560 1440x2560  493 dpi 1 dppx
Google Nexus 7 1200x1920 600x960 323 dpi 2 dppx
Google Nexus 9 1536x2048 1536x2048  288 dpi 1 dppx
Google Nexus 10 1600x2560 800x1280 300 dpi 2 dppx
HTC Evo 480x800 480x800 217 dpi 1 dppx
HTC One V 480x800 480x800 252 dpi 1 dppx
HTC One X 720x1280 720x1280 312 dpi 1 dppx
HTC One 1080x1920 360x640 469 dpi 3 dppx
HTC One Mini 720x1280 720x1280  342 dpi 1 dppx
HTC One Max 1080x1920 1080x1920  373 dpi 1 dppx
HTC Pure 480x800 480x800  292 dpi 1 dppx
HTC Desire Z, T-Mobile G2 480x800 480x800 252 dpi 1 dppx
Blackberry Q5, Q10 720x720 360x360 330 dpi 2 dppx
Blackberry Z10 768x1280 384x640 356 dpi 2 dppx
Blackberry Z30 720x1280 360x640 295 dpi 2 dppx
Blackberry Passport 1440x1440 1440x1440 453 dpi 1 dppx
Lumia 520, 521 480x800 480x800 233 dpi 1 dppx
Lumia 620 480x800 480x800 246 dpi 1 dppx
Lumia 625 480x800 480x800 199 dpi 1 dppx
Lumia 720, 820, 822 480x800 480x800 217 dpi 1 dppx
Lumia 920, 928, 1020 768x1280 480x800 332 dpi 1.6 dppx
Moto X 720x1280 360x640 312 dpi 2 dppx
Moto G 720x1280 360x640 326 dpi 2 dppx
Kindle Fire 600x1024 600x1024 169 dpi 1 dppx
Kindle Fire HD - 7" 800x1280 800x1280 216 dpi 1 dppx
Kindle Fire HD - 8.9" 1200x1920 1200x1920 254 dpi 1 dppx
Kindle Fire HDX - 8.9" 1600x2560 1600x2560 339 dpi 1 dppx
Kindle Fire HDX - 7" 1200x1920 1200x1920 323 dpi 1 dppx
Surface 768x1366 768x1366 148 dpi 1 dppx
Surface 2, Pro, Pro 2 1080x1920 1080x1920 208 dpi 1 dppx
Surface Pro 3 1440x2160 1440x2160 216 dpi 1 dppx
Yoga 2 Pro 1800x3200 1800x3200 276 dpi 1 dppx
ThinkPad Edge E531 1920x1080 1920x1080 141 dpi 1 dppx
IdeaPad U310 1366x768 1366x768 118 dpi 1 dppx
UltraSharp UP2414Q 3840x2160 3840x2160 185 dpi 1 dppx
UltraSharp U2412M 1920x1200 1920x1200 94 dpi 1 dppx
UltraSharp U2414H 1920x1080 1920x1080  93 dpi 1 dppx

If you set the viewport scale to 1, the display dimension is the maximum size of an image you want to include while the viewport dimension is the one you use for your media queries.

In Practice

It is not practical to create a media query for every single device, specifying its particular dimensions and resolution. Creating a set of breakpoints allows you to format a group of devices instead of each individual device/screen. Also, setting the viewport scale to 1 simplifies all the artificial display dimensions into a handful of sizes. Adding a min-resolution constraint allows for special styling for high-resolution smartphones.

To assist in the development process, we start with a mobile framework which already has the major break points identified.

Download our framework

Reference

The specifications for media queries can be found at:

Managing the Postfix Mail Queue

Postfix is one of the most common open-source mail transfer agents (MTAs), and is the one we run for ourselves and clients. Like just about every other MTA, once Postfix accepts an email from any source, it places the email in a queue. Another Postfix process then runs through the queue and processes the email according to the Postfix settings (typically deliver it to a local mailbox, spamassassin or an outside mail server).

Types of Mail Queues

  • maildrop The maildrop queue is the temporary queue for all mail submitted on the local server. Messages handed directly to the maildrop directory or to the sendmail command are placed here until they can be picked up and moved toward the active queue.
  • hold The hold queue is basically a quarantine queue, populated via access restrictions configured in the Postfix settings. Normally this queue is not used unless explicitly set up by the admin.
  • incoming Arriving emails are placed in the incoming queue immediately upon arrival. Normally an email that hits the incoming queue moves straight to the active queue, though this depends on the resources available to the active queue.
  • active Emails that the queue manager has picked up and is attempting to deliver sit in the active queue. This is the only queue kept small enough to hold in memory; the other queues operate as files on the hard disk.
  • deferred Emails which could not be delivered but were not necessarily bounced are placed in the deferred queue for a future delivery attempt.


Mail Queue Operations

Viewing a Queue

The mail queue can be viewed with the mailq command.

~ mailq

~ QUEUE_ID MSG_SIZE ARRIVAL_TIME SENDER RECIPIENT,RECIPIENT...

The mailq command outputs a table with the queue ID, message size, arrival time, sender and outstanding recipients.


Flush Queue

Flushing the queue forces the queue manager to attempt to process and deliver every message in the queue. Unless the active queue has crashed, you will typically only flush the deferred or hold queues, since the other queues seldom hold messages for longer than a few seconds.

~ postfix flush


Clear Queue

Clearing the queue forces the mail manager to delete all the messages in the particular queue.

~ postsuper -d ALL QUEUE

Substitute QUEUE with the name of the queue you want to clear (for example, deferred), or leave it off to delete the messages in every queue. To delete a single message, replace ALL with its queue ID.