
The Rule of the Techno-Illiterate: Rittenhouse Lawyer Claims Pinch-to-Zoom Fakes Video Footage

Screenshot of the drone footage the prosecution couldn’t zoom in on, showing Kyle Rittenhouse, possibly raising his weapon. Via the Washington Post’s livestream of the trial, November 10th, 2021.

In a move that would likely only work if the judge, and maybe even the prosecution, were already on your side (or everyone involved were very ignorant), the defense attorney for Kyle Rittenhouse in his murder trial claimed that the prosecution’s video couldn’t be zoomed in on because the iPad changes the image as you zoom. It was quick thinking, and the kind of claim that shouldn’t work on anyone with any technical know-how, unless that person is already sympathetic to your case and has questionable reasoning.

Anyway, it worked.

So does pinch-to-zoom on an iPad change the image you’re displaying? The short answer is yes, but not in any way that matters here. So let’s look into why yet another of the defense’s claims is bunk, and leave the question of why it worked to those who will be analyzing this case for years to come.

Digital Zoom

At its most basic level, digital zoom or “blowing up” an image works by taking a single pixel and changing it into four pixels. This “doubles” the pixel in every direction. So, if you imagine one red square block, there are now four red square blocks to represent that. Congrats, you’ve doubled the size of your image.
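To make that concrete, here’s a minimal sketch of that pixel-doubling step (nearest-neighbor upscaling) in Python with NumPy; the function and variable names are just for illustration:

```python
import numpy as np

def double_pixels(image: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x zoom: each pixel becomes a 2x2 block."""
    # Repeat every row, then every column, so one pixel
    # turns into four identical pixels.
    return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

# One red pixel (RGB) becomes a 2x2 block of red pixels.
red_pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
zoomed = double_pixels(red_pixel)
print(zoomed.shape)  # (2, 2, 3) -- four red pixels
```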

However, doing this will make the image look pixelated and grainy. That’s where processing comes in. By averaging and blending those doubled pixels with their neighbors, you get something that “fills in the blanks” between pixels. In doing so, you’ve added pixels that weren’t there before, but they’re based entirely on the pixels you started with. The underlying image doesn’t change; it’s impossible to add information to a photo using this method. However, it’ll look bigger to the human eye, and your brain can process those averaged pixels more easily. Sure, it’ll look blurry, but you can make out smaller details better now.
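That averaging step is exactly what interpolation methods like bilinear resampling do. Here’s a short sketch using Pillow that contrasts the two approaches; the file name is hypothetical, and note that no new information enters the image either way:

```python
from PIL import Image

# Hypothetical file name, for illustration only.
original = Image.open("drone_frame.png")
w, h = original.size

# Nearest-neighbor: pure pixel doubling, blocky result.
blocky = original.resize((w * 2, h * 2), Image.NEAREST)

# Bilinear: averages neighboring pixels to fill in the new
# ones -- smoother, but built only from existing data.
smooth = original.resize((w * 2, h * 2), Image.BILINEAR)
```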

This is an oversimplification, of course. There are other algorithms (not “logarithms,” as the defense called them) that create a sharper image using only the data already present in the image. There are second and third passes you can do to sharpen up those edges. But none of this adds anything to the photo, contrary to what Rittenhouse’s lawyer claimed. However, there are zooming techniques that do add information.
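One common sharpening pass of that kind is an unsharp mask, which only exaggerates edges that are already in the image. A minimal sketch, again with Pillow (the file name and filter settings are arbitrary examples):

```python
from PIL import Image, ImageFilter

# Hypothetical file name, for illustration only.
zoomed = Image.open("zoomed_frame.png")

# Unsharp mask: blur a copy, take the difference from the
# original, and add it back to exaggerate existing edges.
# Every output pixel is derived from the source image alone.
sharpened = zoomed.filter(
    ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
)
```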

AI Digital Zoom

AI-assisted digital zoom is a bit different. It still does the same things to blow up the image, but it also tries to identify what’s on screen. It can then make assumptions about blurry details and enhance them based on what they’re supposed to look like. The AI knows what metal looks like, so it can make a blurry photo of something metallic look sharper. These techniques aren’t there to manipulate or wildly change anything in the photo. Instead, they enhance details that are already there in ways appropriate to what those details are. A photo of a bridge from far away, without that digital enhancement, would have a person squinting at it and saying, “Oh, they’re rivets!” But with AI-enhanced digital zoom, the AI has already said, “Oh, they’re rivets!” and applied filters over those rivets to improve their sharpness so you can identify them at a glance.

Google’s RAISR made zoomed photos on the single-lens Pixel 2 look incredible, rivaling phones with optical zoom, like the iPhone X. Photo via Google, on a Pixel 2.

AI digital zoom adds details where they didn’t previously exist, mostly through standard procedures like sharpening, that is, darkening the areas between contrasting pixels. However, it can also apply textures. For example, it could look at skin and understand that human skin has pores, so when it sees darker areas that are most likely pores, it sharpens them specifically the way you’d sharpen a pore, even if all the detail isn’t there to support that assumption. The image it creates is far clearer than what it started with, but it could add small details that weren’t there before.
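Under the hood, learned super-resolution of this kind is typically a small neural network trained to predict plausible high-resolution detail. This is not Google’s or Apple’s actual implementation, just a minimal sketch of the well-known sub-pixel convolution approach (ESPCN-style) in PyTorch, with illustrative names throughout:

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """Minimal ESPCN-style 2x super-resolution network (illustrative)."""

    def __init__(self, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # Predict scale^2 sub-pixel channels per output pixel...
            nn.Conv2d(32, 3 * scale ** 2, kernel_size=3, padding=1),
            # ...then rearrange them into a larger image.
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A 1x3x64x64 low-res frame becomes 1x3x128x128.
model = TinySuperRes()
low_res = torch.rand(1, 3, 64, 64)
print(model(low_res).shape)  # torch.Size([1, 3, 128, 128])
```

The “detail” such a network outputs is whatever its training data taught it plausible detail looks like, which is exactly why it can invent pore-like or metal-like texture without changing what the scene actually shows.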

These are small details. A metallic texture on a metallic item. These aren’t details that would change a scene; they just provide better clarity for items blurred by the zooming process. It’s so accurate that Google’s RAISR (Rapid and Accurate Image Super-Resolution) doesn’t even store the modified image. It stores the smaller image, then uses RAISR on the fly to blow it up when the user views it. It’s an incredible and innovative way to save storage while keeping a full-quality image from a lower-quality one.

Note that the smoothing still left the fringing on the ear. All the details are still there, just smoother. Via Google.

And here’s the thing: your camera is already doing this out of the box. If you zoom in on something with your iPhone or Android phone, this is what’s happening. Apple’s Night mode does it to add details in the dark, and its low-light functionality does as well. These enhancements just help textures look right after a long exposure, which is prone to blur from hand shake. They don’t actually change the contents of the photo, just its sharpness.

Does It Matter?

In short? No. AI-assisted digital zoom may add details, but it does so on things like textures, to make items appear less blurry. Zooming in with this technology does not fundamentally alter the image or video. It can’t change what’s happening. It doesn’t add items or change angles. Zooming wouldn’t, say, make it look like someone aimed at and shot an unarmed protestor on video if they didn’t do just that. All AI-assisted zoom does is perform normal zoom operations, then add finer texture details.

Furthermore, zooming in on an iPad video isn’t doing this to any extreme degree. Google is better known for this technique, and applies it to photos when they’re taken. Doing it for video is far too complex for a simple pinch gesture. In other words, while AI zoom does exist, it doesn’t change things as much as the defense claims, and it isn’t happening here anyway.

Binger: “Mr. Rittenhouse, you told us earlier everything that you did when you first got to this location, correct?”

Rittenhouse: “Yes.”

Binger: “What you didn’t tell us is that right here on the video, you have your gun raised. Don’t you?”

Rittenhouse: “I can’t see it.”

– Testimony after the judge refused to allow the prosecution to zoom in on a video that counters Rittenhouse’s claim that he did not raise his weapon.

The claims made by the defense were falsehoods, knowingly or not, and forced a jury to squint at a small image on a surprisingly small TV. Those claims also allowed Rittenhouse to claim he couldn’t tell what he was doing on camera, and therefore couldn’t say whether or not he had contradicted his previous testimony. It forced the jury to make up their own minds about the evidence, rather than getting to actually see that evidence clearly for themselves. The logic of this defense would throw out any digital image, since all digital images go through some computation to improve clarity and remove noise. Digital sensors are plagued with noise; cameras just know how to filter it out. No digital image is ever in a “pure” state, just as developing analog photos can fundamentally change what you can see in them.
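That noise filtering isn’t exotic, either. Something as simple as a median filter, sketched below with Pillow (the file name is hypothetical), changes pixel values in essentially every digital photo pipeline without anyone calling the result faked:

```python
from PIL import Image, ImageFilter

# Hypothetical file name, for illustration only.
frame = Image.open("sensor_frame.png")

# A median filter replaces each pixel with the median of its
# 3x3 neighborhood, suppressing sensor noise. Every pixel
# value can change, yet the scene's contents do not.
denoised = frame.filter(ImageFilter.MedianFilter(size=3))
```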

The argument was made in bad faith, based on a lie, but it worked. It worked because either the judge wanted it to work, or he was too ignorant to know better. Perhaps a combination of both. Either way, by taking the defense’s claims at face value, without requiring them to prove anything about Apple’s pinch-to-zoom, the judge picked a side based on something other than evidence.

I suppose you, like the jury squinting at a screen, will have to make of that what you will.

