What is the Difference Between a CCD and CMOS Sensor?

A lot of words have been written and exchanged about the difference between — and possible advantages or disadvantages of — CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) camera sensors. What really is the difference between them?

It is a debate that has existed since CMOS first began its journey toward becoming the industry's dominant camera sensor technology. That shift happened gradually throughout the 2000s, and by the middle — and particularly the end — of that decade, it was clear which technology would win out for both stills and video.

What we will attempt to do is add a bit of clarity to this issue by covering the scientific differences — in language that has hopefully been distilled down to be accessible and concise yet also informative and detailed — as well as addressing some of the most common subjective talking points which have floated around the internet for the better part of two decades.


It’s important to note that we are limiting this discussion to non-scientific sensors (no specialty astro, medical, or similar applications) and non-video sensors. In other words, we are talking about CCD and CMOS sensor tech in stills cameras for the sake of brevity. An expanded conversation covering the tech across multiple disciplines would be the length of a book. Bear in mind, then, that some of the statements below do not hold true for applications outside the still photography space.

A Bit of History

To simplify things, let us start in the early to mid-2000s, by which time digital photography had established itself as a worthy alternative to film for many professionals. Both CCD and CMOS technology existed well before that point, but we want to get down to brass tacks here.


In the very late 90s and early to mid-2000s, the camera industry was in upheaval in quite a few ways. Companies were competing for dominance as they juggled the analog-to-digital transition, and digital sensor technology was all over the place. What is germane to this topic is that we saw a variety of different and unique sensor technologies being used across the different manufacturers, fluctuating and evolving frequently.

Today we have essentially four types of sensors: CMOS with a Bayer CFA, CMOS without a CFA (monochrome sensors), CMOS with Fujifilm’s X-Trans CFA, and Foveon sensors. As you may be able to guess, that really means we have just two types of actual sensor technology: CMOS and Foveon.

While there were unique sensors here and there (like the Nikon JFET-LBCAST), most cameras produced in the early to mid-2000s were fitted with CCD sensors. This gradually began to shift over the course of the decade. Undoubtedly, that shift was driven by the market leader, Canon, which implemented the first full-frame CMOS sensor in the EOS-1Ds in 2002 and continued to use CMOS technology in the majority of its cameras moving forward.

The debut of the Nikon D3 and Sony a700 in mid-2007 firmly cemented CMOS as the dominant technology for photographic cameras — not surprisingly, it was this same year that CMOS sales surpassed CCD sales. The only exception was the medium format arena, which would continue using CCD sensors until the release of the Hasselblad H5D-50c in 2014. Camera technology tends to trickle upward, after all.

Naturally, the big question is “why?” Why did companies abandon CCD in favor of CMOS?

Objective Differences: The Science

Sensors themselves are completely monochromatic. In other words, they measure only the intensity of light, not its color — it isn’t until a color filter array (CFA) is installed over the sensor that they can capture color information. This is usually done with an RGB Bayer mosaic, whether the sensor itself is CCD or CMOS.
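For the code-inclined, here is a minimal sketch (in Python with NumPy, using a made-up 4×4 scene) of what an RGGB Bayer mosaic actually records: each photosite keeps just one of the three color channels, and demosaicing must interpolate the rest back from neighbors.

```python
import numpy as np

# The 'scene' is a made-up 4x4 RGB image with values in [0, 1].
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))

mosaic = np.zeros((4, 4))                  # what the sensor actually records
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]  # R at even rows / even cols
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]  # G at even rows / odd cols
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]  # G at odd rows / even cols
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]  # B at odd rows / odd cols

# Two of every three color samples are gone; demosaicing has to
# interpolate them back from neighboring photosites.
print(mosaic)
```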

Both types of sensors are built with arrays of silicon photosites, also known as pixels. In digital cameras, there will be millions of these pixels — one million pixels is better known as a “megapixel.” These pixels are oriented in a pattern of rows and columns, ultimately coming together to form the rectangular shape we know as a sensor. When light passes through a lens and strikes these silicon pixels, photons from the light interact with atoms in the silicon substrate. As this happens, electrons get kicked into higher energy states and are sent moving through the structure.
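To make that photon-to-electron step concrete, here is a tiny sketch with assumed numbers — the photon count and quantum efficiency are purely illustrative:

```python
import numpy as np

# Hypothetical numbers: a pixel receives ~1,000 photons during the
# exposure, and the silicon converts roughly 50% of them (its quantum
# efficiency) into freed electrons. Photon arrival is random, so the
# collected charge follows a Poisson distribution ("shot noise").
rng = np.random.default_rng(1)
photons = 1000
quantum_efficiency = 0.5

electrons = rng.poisson(photons * quantum_efficiency)
print(f"collected charge: {electrons} e-")
```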

That is the nuts and bolts of a basic sensor, whether it’s CCD or CMOS. After this point, the way in which each of them turns these photons into a digital image reveals their differences — this process is otherwise known as “reading the sensor” or a “readout” and is when the translation of physical electric activity into digital data occurs.

CCD

In a CCD sensor, each pixel contains a potential well, which is often likened to a bucket. During the exposure, as light strikes the sensor, photons liberate electrons from the silicon, and the potential well collects this charge. The electrons amass during the exposure, constrained within the “bucket” by electrodes and vertical clocks.

After exposure, electrons migrate down each row of the CCD, and the charge is gathered from each pixel along the way. Eventually, each packet reaches a “container” at the end of the row known as the output amplifier. This amplifier measures the number of electrons that were collected in each well and converts that charge into a voltage. The process continues from there onto the gain stage and then to the ADC (analog-to-digital converter).
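As a rough illustration of that serial, bucket-brigade readout, here is a toy simulation — the sensor size and amplifier conversion gain are invented for the example:

```python
import numpy as np

# Toy model of a CCD readout: every charge packet is shifted, bucket-
# brigade style, toward a single output amplifier, which converts each
# packet to a voltage one pixel at a time. Values are electron counts.
rng = np.random.default_rng(2)
sensor = rng.poisson(500, size=(4, 6)).astype(float)  # 4 rows x 6 cols

MICROVOLTS_PER_ELECTRON = 2.0  # assumed amplifier conversion gain

voltages = []
for row in sensor:         # rows shift toward the serial register
    for charge in row:     # packets shift toward the single amplifier
        voltages.append(charge * MICROVOLTS_PER_ELECTRON)

# One amplifier, one packet at a time: readout time scales with the
# total pixel count, which is why large CCDs read out slowly.
print(len(voltages), "packets measured serially")
```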

With most photographic CCD sensors, a mechanical shutter is necessary to avoid potential “smear” — since the sensor is read out one line at a time, any light that falls on photosites during the process can create vertical smear-type artifacts. This effectively precludes CCD sensors from offering true live view. As a reminder, we are specifically referring to photographic stills cameras — CCD cinema cameras use a different design.


You may at this point say, “hey, early compact digital cameras with CCD sensors had live view!”

Yes and no.

These cameras did not have a true live view as we know it today. Instead, they displayed considerable lag, particularly noticeable when the camera was moved. You might chalk this up to the slow technology of the time, but it was really a limitation of the slow readout speed of the CCD chips — each frame had to be binned and transferred to the LCD screen or EVF, which could take up to a second or more. So, you end up with a quasi-live image with an abysmal framerate, though it’s decent enough for framing static or mostly static subjects.

CMOS

Jumping over to CMOS sensors, everything above remains true as far as pixels collecting light (photons). However, the two technologies diverge at the readout stage: every individual pixel in a CMOS sensor has its own readout circuit — a photodiode-amplifier pair that converts the collected charge into a voltage right at the pixel. From there, each column of the CMOS sensor has its own ADC. One upshot of this is significantly lower production costs, since both the ADCs and the imaging array are on the same silicon die. It also allows for a more compact design, which is particularly beneficial for smartphones and very compact cameras.
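Some back-of-the-envelope arithmetic shows why column-parallel readout matters; the clock rates below are illustrative assumptions, not specs from any real sensor:

```python
# Rough readout times for a 24 MP sensor (6000 x 4000). A CCD pushes
# every packet through one output stage; a CMOS sensor digitizes an
# entire row at once because every column has its own ADC.
ROWS, COLS = 4000, 6000

ccd_pixel_rate = 25e6   # assumed: 25 Mpixel/s through one amplifier
cmos_row_rate = 250e3   # assumed: 250k rows/s with column-parallel ADCs

ccd_readout = (ROWS * COLS) / ccd_pixel_rate   # ~0.96 s per frame
cmos_readout = ROWS / cmos_row_rate            # ~16 ms per frame

print(f"CCD (serial):           {ccd_readout * 1000:.0f} ms")
print(f"CMOS (column-parallel): {cmos_readout * 1000:.0f} ms")
```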


As you would expect, since pixels are read out in parallel, CMOS sensors can be much faster. Today, this is particularly important for both video and the use of silent electronic shutters — faster sensor readout means less distortion of moving objects (“rolling shutter”) as well as the potential for uninterrupted live view. Cameras like the Canon EOS R5 and R6 and the Sony Alpha 1 can read out the sensor fast enough that even high-speed subjects like race cars or athletes in motion do not warp or distort when using the electronic shutter. It also aids in the use of flash: the Sony Alpha 1 can flash sync with the e-shutter at the same speed as many mechanical shutters.

None of this would be remotely possible in a CCD sensor.
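If you want to put rolling shutter into numbers: the last row of the sensor is exposed later than the first by the full readout time, so anything moving sideways shifts by speed multiplied by that delay. The figures below are illustrative assumptions:

```python
# How much a moving subject skews during a rolling-shutter readout.
# All numbers here are invented for illustration.
readout_time_slow = 1 / 30    # ~33 ms, typical of older/slower CMOS
readout_time_fast = 1 / 250   # ~4 ms, stacked sensors like the Alpha 1

subject_speed = 2000          # horizontal motion in pixels per second

print(f"slow sensor skew: {subject_speed * readout_time_slow:.0f} px")
print(f"fast sensor skew: {subject_speed * readout_time_fast:.0f} px")
```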

CMOS sensors also require less power and produce less heat. This is one reason the global-shutter, frame-transfer CCDs found in some digital cinema cameras never made it into stills cameras — the large body and hefty batteries of a cinema camera mitigated the heat and power issues, but that was not possible in a significantly smaller package.


Sony’s introduction of BSI (backside illumination) technology in 2009 with its Exmor R CMOS sensor further entrenched the dominance of CMOS. Traditional (front-side illuminated) sensors have their active matrix and wiring on the front surface of the imaging sensor. Detrimentally, this wiring reflects and blocks some of the incoming light, which reduces the amount of captured light. BSI moves this matrix behind the photodiodes, allowing roughly half a stop (around 50%) more light to be collected. BSI allowed CMOS technology to pull even further ahead of CCD.
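In case the stop math is unfamiliar: stops are the base-2 logarithm of the light ratio, so a 50% increase works out to slightly more than half a stop. A quick check:

```python
import math

# stops = log2(light ratio); a true half stop is a 2**0.5 ~= 1.41x gain.
print(f"1.5x more light = {math.log2(1.5):.2f} stops")   # ~0.58
print(f"half a stop     = {2 ** 0.5:.2f}x more light")   # ~1.41
```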

So What is it Good For?

CCD did have its advantages over CMOS, though most of them have been solved in the years since CMOS took over. Take the Nikon D1 from 1999: it sports an APS-C CCD sensor and delivers 2.7-megapixel images — however, the sensor itself has 10.8 million photosites (i.e., 10.8 megapixels). Because of the serial readout of the pixels, it is very simple to implement on-sensor pixel binning to combine charges from neighboring pixels in a CCD design — this results in higher sensitivity and a greater signal-to-noise ratio. While you can pixel bin with a CMOS sensor, it must happen off-sensor and you can’t combine charges from neighboring photosites.
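A toy model (in NumPy, with invented noise figures) makes the advantage clear: charge binning pays the amplifier’s read-noise penalty once per binned block, while off-sensor digital binning pays it once per pixel before summing.

```python
import numpy as np

# 2x2 binning comparison. READ_NOISE and signal levels are
# illustrative assumptions, not measurements of any real sensor.
rng = np.random.default_rng(3)
READ_NOISE = 10.0               # e- RMS per read, assumed
signal = np.full(4, 100.0)      # 100 e- in each of the 4 pixels

# CCD: sum the four charge packets on-chip, then read once.
ccd_binned = signal.sum() + rng.normal(0, READ_NOISE)

# CMOS: read each pixel (noise added four times), then sum digitally.
cmos_binned = (signal + rng.normal(0, READ_NOISE, 4)).sum()

# Read-noise contribution: 10 e- for the charge-binned path vs.
# sqrt(4) * 10 = 20 e- for the digital path, so the CCD result is cleaner.
print(f"CCD charge bin:   {ccd_binned:.0f} e-")
print(f"CMOS digital bin: {cmos_binned:.0f} e-")
```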

Sigma’s Foveon sensors, which capture full color information at every photosite, were developed partly to combat a related problem: color interpolation.

A good example of this in a slightly more modern camera is 2008’s Sony F35 CineAlta. It contained a single Super 35 (roughly the size of APS-C) CCD chip with a resolution of 12.4 megapixels, yet it only produced a 1920×1080 (HD) file. This is the result of on-chip pixel binning, and it allowed, among other things, the camera to output true RGB 4:4:4 data — no interpolation necessary. It is possible to do this with CMOS technology, but it has to happen off-chip. For example, it is possible to downsample a high-resolution 4:2:0 video file to a lower-resolution 4:4:4 file in software. Furthermore, many stills cameras with in-body image stabilization (IBIS) offer pixel shift, which can be used to generate a high-resolution file or a true-color file at native resolution. But these are not ideal alternatives to on-chip binning.
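The arithmetic behind that software trick is simple enough to sketch: in 4:2:0, chroma is stored at half the luma resolution in each dimension, which at UHD (3840×2160) is already 1920×1080. Bin only the luma down to 1080p and every output pixel has its own chroma sample.

```python
import numpy as np

# Placeholder planes (all ones) standing in for real video data.
luma_4k = np.ones((2160, 3840))        # Y at full UHD resolution
chroma_4k = np.ones((1080, 1920, 2))   # Cb/Cr at half resolution (4:2:0)

# Average each 2x2 luma block to reach 1920x1080.
luma_1080 = luma_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(luma_1080.shape, chroma_4k.shape)  # both 1080p -> effectively 4:4:4
```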

CCD sensors also have a non-linearity that is often (though not always) lacking in the more linear response of CMOS sensors. This means a pleasing, more natural roll-off in the quarter-tones and highlights — however, it comes at the expense of a higher noise floor, which is particularly noticeable in the shadows, even at base ISO. It also requires careful and precise exposure due to the unforgiving latitude of CCD sensors, but when done properly, it results in what many consider to be more film-like image quality. Film, after all, is also extremely non-linear, with exceptional highlight latitude but little tolerance for pushing the shadows without aggressive pattern noise or color shifts.
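To see what “roll-off” means numerically, here is a minimal sketch comparing a hard linear clip with a smooth shoulder — the exponential curve is chosen purely for illustration, not as a model of any particular sensor or film stock:

```python
import numpy as np

exposure = np.array([0.25, 0.5, 1.0, 2.0, 4.0])  # relative scene light

linear = np.clip(exposure, 0, 1)     # hard clip at sensor full-well
shoulder = 1 - np.exp(-exposure)     # gradual approach to white

# The linear response slams into 1.0 and stays there; the shoulder
# compresses highlights smoothly as they approach white.
for e, lin, sh in zip(exposure, linear, shoulder):
    print(f"{e:4.2f}x light -> linear {lin:.2f}, roll-off {sh:.2f}")
```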

Subjective Differences: The CCD vs. CMOS Debate

This is the area where things get complicated, but it’s also the issue at the heart of the CCD vs. CMOS debates raging across the depths of Internet forums. On one side are those who feel that CCD sensor cameras produce superior images. On the other side are those who tout the many benefits of CMOS technology, with some arguing that there isn’t much difference in the image output between the two.

From my perspective, there are certainly merits to the argument that CCD sensors can and do produce more pleasing files — but of course, the entire concept of “pleasing” is a subjective one. A lot of it is related to the aforementioned tonal curves inherent to each sensor type. Non-linearity produces files that more closely mimic human vision — it is incredibly common for our vision to clip totally to black, but we almost never see completely blown highlights. Hypothetically, if we could see twenty stops of dynamic range, the spread might look something like 12 stops over and 8 stops under middle grey. Contrast that to a hypothetical 20-stop CMOS sensor, which would likely be the exact opposite.

As an aside, this is one reason the Arri Alexa is so popular for cinema and considered the most “film-like” — at its base ISO of 800 it allocates more dynamic range above middle grey than below, something which is not found in almost any other cinema camera.

Unusually for a digital camera — stills or cinema — the Arri Alexa’s ALEV III sensor is biased toward highlights at its native ISO 800.

Some argue CCD sensors produce more natural and accurate colors. Their color output is undoubtedly different, and I think there is some merit to the idea of greater color accuracy, at least based on my experience with many CCD cameras. Some speculate this has to do with CFA designs, and perhaps it does — it certainly is the case with some cameras, like Fujifilm’s SuperCCD models. But we also see extremely accurate and neutral colors from many CMOS sensors — Hasselblad is the king of neutral color, in my opinion. Numerous blind tests have also shown that photos from CMOS sensors can easily be matched to images from a CCD (and vice versa), at least as far as color goes.

From my perspective and experience, CCD output in optimal conditions (good directional light, low ISO, punchy colors) will result in deeper blues, surprisingly accurate reds, warm midtones, neutral and cool shadows, and very pleasing tonal transitions from the quartertones into the highlights — if those highlights aren’t clipped. If a scene is going to have clipped highlights, then results will favor the CMOS because the roll-off avoids some of the harsher, sharp edges you find in clipped CCD highlights.

Almost all of these things, given that each image is properly exposed, can be matched relatively easily with some judicious use of HSL (hue, saturation, luminance) sliders.
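As a sketch of what such HSL moves might look like in code — the hue ranges and amounts here are invented for illustration, not a recipe; real matching is done by eye on real files:

```python
import colorsys

# Deepen and saturate blues, nudge reds warmer: a crude stand-in for
# the kind of slider moves used to chase a "CCD look" in post.
def tweak(r, g, b):
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    if 0.55 < h < 0.72:          # blue hues: darken and saturate
        l *= 0.9
        s = min(1.0, s * 1.15)
    elif h < 0.06 or h > 0.94:   # red hues: shift slightly warmer
        h = (h + 0.01) % 1.0
    return colorsys.hls_to_rgb(h, l, s)

sky_blue = (0.35, 0.55, 0.85)
print(tweak(*sky_blue))  # a deeper, more saturated blue
```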

What Does it All Mean?

So, is there a difference between CCD and CMOS images? Absolutely, there is no doubt — both in design and output.

Are those differences important? That depends.

If you are a fan of using straight-out-of-camera files, then you’ll likely find the output of CCD sensors to be more pleasing — images are punchier, more colorful, and can work very well without much adjustment. Then again, the same is true of many CMOS-based cameras with excellent, adjustable JPEG engines — Fujifilm and Olympus are the most notable, though far from the only, examples.


But if you shoot and process RAW? Not only can you mimic the output of CCD in that case, but the wider latitude of CMOS allows you a much greater range of options.

There is one thing that is without doubt: CMOS technology has outgrown and outpaced CCD, at least for stills and video imaging. But perhaps you love the output from your Leica M9 and don’t need live view, a silent electronic shutter, wide dynamic range, or exceedingly impressive low-light capabilities. In that case, cherish and use your M9.

But if your camera is the worse for wear and needs an upgrade, there’s no reason to fret over which sensor is in your replacement.


Image credits: Header image graphic made from Creative Commons elements and those licensed via Depositphotos.
