By dmitrizzle
Header image credit: dmitrizzle
Pixel and dot density are the building blocks of anything represented through color and graphics in modern technology. The subject has been discussed and explained in depth both online and offline, but my take includes some concepts and ideas I've picked up over a decade and a half of working as a web and print designer, concepts that aren't typically obvious or covered in the usual explanations.
Quick facts about PPI (often confused with DPI):
Each of the little squares that make up an image is a pixel. The more of them there are, the more detail is visible on the device. On a screen of a given physical size, more pixels means a higher PPI (pixels per inch), and therefore a sharper picture. For example, an image with a resolution of 1920 (width) x 1080 (height) pixels has more pixels than one at 1024x768, so displayed on the same device it would have a higher effective PPI and look sharper.
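To make the relationship concrete, a display's PPI can be computed from its pixel dimensions and physical diagonal. The 24-inch monitor below is an assumption for illustration, not a figure from the article:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: the diagonal pixel count divided by the diagonal size."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_inches

# The same hypothetical 24-inch screen at two resolutions:
print(round(ppi(1920, 1080, 24), 1))  # 91.8 PPI
print(round(ppi(1024, 768, 24), 1))   # 53.3 PPI
```

More pixels on the same glass means each pixel is smaller, which is exactly why the higher-resolution image looks sharper there.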
Although resolution is usually stated as two numbers (width and height), multiplying them gives you the total pixel count. This way of counting is common in spec sheets for devices like screens and cameras. For example, 1920x1080 pixel dimensions yield 2,073,600 pixels in total (I just multiplied the two numbers). Rounded off, that's about 2 million pixels: 2 megapixels. Sound familiar?
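The arithmetic above can be sketched in a couple of lines:

```python
def megapixels(width_px, height_px):
    """Total pixel count expressed in megapixels (millions of pixels)."""
    return width_px * height_px / 1_000_000

print(1920 * 1080)              # 2073600 pixels in total
print(megapixels(1920, 1080))   # 2.0736, i.e. roughly 2 MP
```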
More isn't always better
I learned this lesson while choosing my first digital camera back in the early 2000s. Film was just starting to be replaced by digital, which was still in its infancy. Memory was expensive, which meant that your typical MP3 player held about 10 songs, and most DSLRs could not hold more shots than a regular film camera.
The same was true for light sensors. The number of pixels you could capture and store was almost directly proportional to the price, and in many cases it still is. It does not, however, dictate quality. Quality depends on more than a single tech stat.
It hit me when I realized that the camera I bought packed an enormous number of pixels into every frame, each of which was shit. I was just starting out in photography, so my lack of skill, imagination and style played its part in my failure to capture the kind of images I wanted. But as much as I'd love to blame myself, something else was going on.
Simply put, resolution contributes much less to overall quality than the sales folks would have us believe. Can't blame them, of course. Novices buy most of the mid-range cameras, so the easiest way to justify a high price to the non-savvy is to pivot on a single stat: the megapixel count.
"This one's got more, it costs more, but it's better." - Sales guy
In reality, the type and power of the processor and the quality and physical size of the light sensor are just as important, as are the design of the electronics and software inside, the mechanical parts, the body itself and, of course, the lenses.
I don't want this post to be about shitting on the ad and tech industries. But I feel like we still haven't learned how to judge the quality of digital imaging products. Android and Apple devices now ship with built-in cameras that have a higher resolution than the DSLR I bought just a couple of years ago. I am not saying you cannot create amazing images with them; that is up to the photographer. But their tiny lenses and microscopic sensors simply can't compare to what a dedicated camera with quality optics can do, no matter how many dots they can produce per inch.
More isn't always better for digital files either, not just for the tools that created them. High pixel density does not increase the overall quality of an image on its own. Neither does file size alone.
More pixels won't make a bad image better, only sharper (and not even that if the photo was blurry to begin with). Nor do they have to make a file larger. Image compression can cut file size many times over with little or no visible effect on the output. Of course, some data is bound to be lost in the process (as when converting RAW to JPEG), and sometimes the result is appalling. In fact, a higher-resolution image can end up looking much worse than a lower-resolution one that hasn't been compressed as aggressively. Compression is also not consistent across the visual spectrum; reds typically suffer the most.
On paper, 300 PPI files are recommended for printing. That ensures your business cards and postcards won't show any visible dots, even up close. But this is not always reasonable, or even possible, for large-format printing.
Imagine a huge ad poster hanging down the side of a building somewhere. It could be as large as 400 square meters. That is about 620,001 square inches, which at 300 PPI would require roughly 55.8 billion pixels: a 55,800-megapixel file. Completely unreasonable.
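The back-of-the-envelope math can be checked in code. Note that PPI is a linear measure, so it has to be squared when working with areas (the conversion constant is the standard 39.3701 inches per meter):

```python
SQIN_PER_SQM = 39.3701 ** 2  # about 1,550 square inches in a square meter

def pixels_needed(area_sqm, ppi):
    """Total pixels required to cover a printed area at a given density."""
    area_sqin = area_sqm * SQIN_PER_SQM
    return area_sqin * ppi ** 2  # square the PPI: it's pixels per linear inch

px = pixels_needed(400, 300)
print(f"{px / 1e6:,.0f} megapixels")  # roughly 55,800 megapixels
```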
Instead, designers and print shops try to operate within reason. Since large prints are meant to be seen from a distance, their pixel density requirements are relaxed. For example, 150 DPI could be deemed acceptable for anything over 12 inches, and 50 DPI could work if the print is 5 feet or larger (in my subjective opinion).
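These rules of thumb, which are my loose suggestions rather than an industry standard, can be expressed as a tiny lookup function:

```python
def suggested_ppi(longest_side_inches):
    """Rough print-density guideline by print size; thresholds are opinion."""
    if longest_side_inches <= 12:
        return 300  # small prints (business cards, postcards) viewed up close
    if longest_side_inches < 60:
        return 150  # medium prints, typically seen at arm's length or further
    return 50       # 5 feet and over, meant to be seen from a distance

print(suggested_ppi(3.5))  # 300, a business card
print(suggested_ppi(120))  # 50, a 10-foot banner
```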
Good shops have advanced interpolation software that can artificially increase the number of pixels a photograph contains. The medium also affects perceived quality: more porous materials (like canvas or extra-matte paper) can hide some of the pixelation.

This article is an edited version of what was originally posted on January 31, 2013.