Automatic contrast and brightness adjustment of a color photo of a sheet of paper with OpenCV

When photographing a sheet of paper (e.g. with a phone camera), I get the following result (left image) (download the jpg here). The desired result (processed manually with an image editing software) is on the right:

I would like to process the original image with OpenCV to get a better brightness/contrast automatically (so that the background is whiter).

Assumptions: the image has an A4 portrait format (we don't need to perspective-warp it in this topic here), and the sheet of paper is white, possibly with text/images in black or colors.

What I have tried so far:

  1. Various adaptive thresholding methods, such as Gaussian and OTSU (see OpenCV doc Image Thresholding). It usually works well with OTSU:

    ret, gray = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)
    

    But it only works for grayscale images, not directly for color images. Moreover, the output is binary (white or black), which I don't want: I would prefer to keep a color, non-binary image as output.

  2. Histogram equalization

    • applied on Y (after RGB => YUV transform)
    • or applied on V (after RGB => HSV transform),

    as suggested by this answer (Histogram equalization not working on color image - OpenCV) or this one (OpenCV Python equalizeHist colored image):

    img3 = cv2.imread(f)
    img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2YUV)
    img_transf[:,:,0] = cv2.equalizeHist(img_transf[:,:,0])
    img4 = cv2.cvtColor(img_transf, cv2.COLOR_YUV2BGR)
    cv2.imwrite('test.jpg', img4)
    

    or on HSV:

    img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV)
    img_transf[:,:,2] = cv2.equalizeHist(img_transf[:,:,2])
    img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR)
    

    Unfortunately, the result is quite bad since it creates awful micro-contrasts locally (?):

    I also tried YCbCr instead; it was similar.

  3. I also tried CLAHE (Contrast Limited Adaptive Histogram Equalization) with various tileGridSize values from 1 to 1000:

    img3 = cv2.imread(f)
    img_transf = cv2.cvtColor(img3, cv2.COLOR_BGR2HSV)
    clahe = cv2.createCLAHE(tileGridSize=(100,100))
    img_transf[:,:,2] = clahe.apply(img_transf[:,:,2])
    img4 = cv2.cvtColor(img_transf, cv2.COLOR_HSV2BGR)
    cv2.imwrite('test.jpg', img4)
    

    but the result was equally bad.

  4. Doing this CLAHE method with the LAB color space, as suggested in the question How to apply CLAHE on RGB color images:

    import cv2, numpy as np
    bgr = cv2.imread('_example.jpg')
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    lab_planes = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0,tileGridSize=(100,100))
    lab_planes[0] = clahe.apply(lab_planes[0])
    lab = cv2.merge(lab_planes)
    bgr = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    cv2.imwrite('_example111.jpg', bgr)
    

    Output image:

  5. Doing an adaptive thresholding or histogram equalization separately on each channel (R, G, B) is not an option since it would mess up the color balance, as explained here.

  6. The "contrast stretching" method from scikit-image's tutorial on Histogram Equalization:

    the image is rescaled to include all intensities that fall within the 2nd and 98th percentiles

    is a little bit better, but still far from the desired result (see image at the top of this question); a minimal sketch of this attempt follows below.
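    For reference, a minimal sketch of that contrast-stretching attempt, following the scikit-image tutorial (the filename is a placeholder):

    import numpy as np
    from skimage import exposure
    from skimage.io import imread, imsave

    img = imread('paper.jpg')  # placeholder filename

    # Rescale so that intensities between the 2nd and 98th percentiles
    # span the full output range
    p2, p98 = np.percentile(img, (2, 98))
    img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
    imsave('stretched.jpg', img_rescale)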


TL;DR: how to get an automatic brightness/contrast optimization of a color photo of a sheet of paper with OpenCV/Python? What kind of thresholding/histogram equalization/other technique could be used?



Contrast and brightness can be adjusted using alpha (α) and beta (β), respectively. These variables are often called the gain and bias parameters. The expression can be written as

g(i,j) = α * f(i,j) + β

OpenCV already implements this as cv2.convertScaleAbs() so we can just use this function with user defined alpha and beta values.

import cv2

image = cv2.imread('1.jpg')

alpha = 1.95 # Contrast control (1.0-3.0)
beta = 0 # Brightness control (0-100)

manual_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)

cv2.imshow('original', image)
cv2.imshow('manual_result', manual_result)
cv2.waitKey()

But the question was

How to get an automatic brightness/contrast optimization of a color photo?

Essentially the question is how to automatically calculate alpha and beta. To do this, we can look at the histogram of the image. Automatic brightness and contrast optimization calculates alpha and beta so that the output range is [0...255]. We calculate the cumulative distribution to determine where color frequency is less than some threshold value (say 1%) and cut the right and left sides of the histogram. This gives us our minimum and maximum ranges. Here's a visualization of the histogram before (blue) and after clipping (orange). Notice how the more "interesting" sections of the image are more pronounced after clipping.
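As a rough shorthand, the same cut points can be read off with NumPy percentiles instead of walking the histogram accumulator; with a 1% clipping threshold, 0.5% is cut from each tail (a sketch, equivalent in spirit to the loop in the full code below):

import cv2
import numpy as np

image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Cut 0.5% from each tail of the cumulative distribution
minimum_gray, maximum_gray = np.percentile(gray, (0.5, 99.5))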

To calculate alpha, we take the minimum and maximum grayscale values after clipping and divide our desired output range of 255 by their difference

α = 255 / (maximum_gray - minimum_gray)

To calculate beta, we plug it into the formula where g(i, j)=0 and f(i, j)=minimum_gray

g(i,j) = α * f(i,j) + β

which after solving results in this

β = -minimum_gray * α

For your image we get this

Alpha: 3.75

Beta: -311.25
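(Sanity check, working backwards from these numbers: alpha = 3.75 means maximum_gray - minimum_gray = 255/3.75 = 68, and beta = -311.25 means minimum_gray = 311.25/3.75 = 83, so the clipped range here was [83, 151].)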

You may have to adjust the clipping threshold value to refine the results. Here are some example results using a 1% threshold with other images: Before -> After

Automated brightness and contrast code

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Automatic brightness and contrast optimization with optional histogram clipping
def automatic_brightness_and_contrast(image, clip_hist_percent=1):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Calculate grayscale histogram
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    hist_size = len(hist)

    # Calculate cumulative distribution from the histogram
    accumulator = []
    accumulator.append(float(hist[0]))
    for index in range(1, hist_size):
        accumulator.append(accumulator[index - 1] + float(hist[index]))

    # Locate points to clip
    maximum = accumulator[-1]
    clip_hist_percent *= (maximum / 100.0)
    clip_hist_percent /= 2.0

    # Locate left cut
    minimum_gray = 0
    while accumulator[minimum_gray] < clip_hist_percent:
        minimum_gray += 1

    # Locate right cut
    maximum_gray = hist_size - 1
    while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
        maximum_gray -= 1

    # Calculate alpha and beta values
    alpha = 255 / (maximum_gray - minimum_gray)
    beta = -minimum_gray * alpha

    '''
    # Calculate new histogram with desired range and show histogram
    new_hist = cv2.calcHist([gray], [0], None, [256], [minimum_gray, maximum_gray])
    plt.plot(hist)
    plt.plot(new_hist)
    plt.xlim([0, 256])
    plt.show()
    '''

    auto_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    return (auto_result, alpha, beta)

image = cv2.imread('1.jpg')
auto_result, alpha, beta = automatic_brightness_and_contrast(image)
print('alpha', alpha)
print('beta', beta)
cv2.imshow('auto_result', auto_result)
cv2.waitKey()

Result image with this code:


Results with other images using a 1% threshold



An alternative version is to add gain and bias to an image using saturation arithmetic instead of OpenCV's cv2.convertScaleAbs(). The built-in method takes an absolute value, which leads to nonsensical results (e.g., a pixel at 44 with alpha = 3 and beta = -210 becomes 78 with OpenCV, when in fact it should become 0).
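A quick standalone check of that example (illustration only, not part of the pipeline):

import cv2
import numpy as np

pixel = np.uint8([[44]])
print(cv2.convertScaleAbs(pixel, alpha=3, beta=-210))  # [[78]]: abs(44*3 - 210) = abs(-78)
print(np.clip(44 * 3 - 210, 0, 255))                   # 0: saturation arithmetic clamps instead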

import cv2
import numpy as np
# from matplotlib import pyplot as plt

def convertScale(img, alpha, beta):
    """Add bias and gain to an image with saturation arithmetic. Unlike
    cv2.convertScaleAbs, it does not take an absolute value, which would lead to
    nonsensical results (e.g., a pixel at 44 with alpha = 3 and beta = -210
    becomes 78 with OpenCV, when in fact it should become 0).
    """
    new_img = img * alpha + beta
    new_img[new_img < 0] = 0
    new_img[new_img > 255] = 255
    return new_img.astype(np.uint8)

# Automatic brightness and contrast optimization with optional histogram clipping
def automatic_brightness_and_contrast(image, clip_hist_percent=25):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Calculate grayscale histogram
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    hist_size = len(hist)

    # Calculate cumulative distribution from the histogram
    accumulator = []
    accumulator.append(float(hist[0]))
    for index in range(1, hist_size):
        accumulator.append(accumulator[index - 1] + float(hist[index]))

    # Locate points to clip
    maximum = accumulator[-1]
    clip_hist_percent *= (maximum / 100.0)
    clip_hist_percent /= 2.0

    # Locate left cut
    minimum_gray = 0
    while accumulator[minimum_gray] < clip_hist_percent:
        minimum_gray += 1

    # Locate right cut
    maximum_gray = hist_size - 1
    while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
        maximum_gray -= 1

    # Calculate alpha and beta values
    alpha = 255 / (maximum_gray - minimum_gray)
    beta = -minimum_gray * alpha

    '''
    # Calculate new histogram with desired range and show histogram
    new_hist = cv2.calcHist([gray], [0], None, [256], [minimum_gray, maximum_gray])
    plt.plot(hist)
    plt.plot(new_hist)
    plt.xlim([0, 256])
    plt.show()
    '''

    auto_result = convertScale(image, alpha=alpha, beta=beta)
    return (auto_result, alpha, beta)

image = cv2.imread('1.jpg')
auto_result, alpha, beta = automatic_brightness_and_contrast(image)
print('alpha', alpha)
print('beta', beta)
cv2.imshow('auto_result', auto_result)
cv2.imwrite('auto_result.png', auto_result)
cv2.imshow('image', image)
cv2.waitKey()

I think the way to do that is:

  1. Extract the chroma (saturation) channel from the HCL colorspace. (HCL works better than HSL or HSV.) Only colors should have non-zero saturation, so white, black, and gray shades will be dark.

  2. Threshold that result using Otsu thresholding to use as a mask.

  3. Convert your input to grayscale and apply local area (i.e., adaptive) thresholding.

  4. Put the mask into the alpha channel of the original and then composite the local-area-thresholded result with the original, so that it keeps the colored areas from the original and everywhere else uses the local-area-thresholded result.

(A rough OpenCV translation of this pipeline is sketched after the Wand code below.)

Sorry, I do not know OpenCV that well, but here are the steps using ImageMagick.

Note that channels are numbered starting with 0. (H=0 or red, C=1 or green, L=2 or blue)

Input:


magick image.jpg -colorspace HCL -channel 1 -separate +channel tmp1.png



magick tmp1.png -auto-threshold otsu tmp2.png



magick image.jpg -colorspace gray -negate -lat 20x20+10% -negate tmp3.png



magick tmp3.png \( image.jpg tmp2.png -alpha off -compose copy_opacity -composite \) -compose over -composite result.png



ADDITION:

Here is Python Wand code, which produces the same output result. It needs ImageMagick 7 and Wand 0.5.5.

#!/bin/python3.7

from wand.image import Image
from wand.display import display
from wand.version import QUANTUM_RANGE

with Image(filename='text.jpg') as img:
    with img.clone() as copied:
        with img.clone() as hcl:
            hcl.transform_colorspace('hcl')
            with hcl.channel_images['green'] as mask:
                mask.auto_threshold(method='otsu')
                copied.composite(mask, left=0, top=0, operator='copy_alpha')
        img.transform_colorspace('gray')
        img.negate()
        img.adaptive_threshold(width=20, height=20, offset=0.1*QUANTUM_RANGE)
        img.negate()
        img.composite(copied, left=0, top=0, operator='over')
        img.save(filename='text_process.jpg')

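For readers who prefer to stay in Python/OpenCV, here is a rough sketch of the same four-step pipeline. It is an approximation, not a port: OpenCV has no HCL conversion, so the HSV saturation channel stands in for chroma, and cv2.adaptiveThreshold stands in for -lat; the filename and the threshold parameters are placeholders.

import cv2
import numpy as np

image = cv2.imread('image.jpg')  # placeholder filename

# 1) Saturation channel (HSV stands in for HCL, which OpenCV lacks)
sat = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)[:, :, 1]

# 2) Otsu threshold on saturation -> mask of colored areas
_, color_mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Local adaptive thresholding of the grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
text = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 41, 10)

# 4) Composite: colored areas come from the original, everything else
#    from the thresholded text image
text_bgr = cv2.cvtColor(text, cv2.COLOR_GRAY2BGR)
out = np.where(color_mask[:, :, None] == 255, image, text_bgr)
cv2.imwrite('result.png', out)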

This method should work well for your application. First you find a threshold value that separates the distribution modes well in the intensity histogram, then rescale the intensity using that value.

from skimage.filters import threshold_yen
from skimage.exposure import rescale_intensity
from skimage.io import imread, imsave

img = imread('mY7ep.jpg')

yen_threshold = threshold_yen(img)
bright = rescale_intensity(img, (0, yen_threshold), (0, 255))

imsave('out.jpg', bright)

I'm using Yen's method here; you can learn more about it on this page.

Robust Locally-Adaptive Soft Binarization! That's what I call it.

I've done similar stuff before, for a slightly different purpose, so this may not perfectly fit your needs, but I hope it helps (also I wrote this code at night for personal use, so it's ugly). In a sense, this code was intended to solve a more general case than yours, where we can have a lot of structured noise on the background (see demo below).

What does this code do? Given a photo of a sheet of paper, it will whiten it so that it can be perfectly printable. See example images below.

Teaser: that's how your pages will look after this algorithm (before and after). Notice that even the color marker annotations are gone, so I don't know if this will fit your use case, but the code might be useful:

To get perfectly clean results, you might need to toy around with the filtering parameters a bit, but as you can see, it works quite well even with the default parameters.


Step 0: Cut the images to fit closely to the page

Let's assume you somehow did this step (it seems like that in the examples you provided). If you need a manual annotate-and-rewarp tool, just pm me! ^^ The results of this step are below (the examples I use here are arguably harder than the one you provided, though they may not exactly match your case):

From this we can immediately see the following problems:

  • Lighting conditions are not even. This means all simple binarization methods won't work. I tried a lot of solutions available in OpenCV, as well as their combinations; none of them worked!
  • A lot of background noise. In my case, I needed to remove the grid of the paper, and also the ink from the other side of the paper that is visible through the thin sheet.

Step 1: Gamma correction

The reasoning for this step is to balance out the contrast of the whole image (since your image can be slightly overexposed/underexposed depending on the lighting conditions).

This may seem unnecessary at first, but its importance cannot be overestimated: in a sense, it normalizes the images to similar distributions of exposure, so that you can choose meaningful hyper-parameters later (e.g. the DELTA parameter in the next section, the noise filtering parameters, parameters for morphological operations, etc.)

# Somehow I found the value of `gamma=1.2` to be the best in my case
def adjust_gamma(image, gamma=1.2):
    # build a lookup table mapping the pixel values [0, 255] to
    # their adjusted gamma values
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
        for i in np.arange(0, 256)]).astype("uint8")

    # apply gamma correction using the lookup table
    return cv2.LUT(image, table)
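A minimal usage sketch (the filename is a placeholder; imports as in the other snippets):

import cv2
import numpy as np

image = cv2.imread('page.jpg')  # placeholder filename
image = adjust_gamma(image, gamma=1.2)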

Here are the results of gamma adjustment:

You can see that it is a bit more... "balanced" now. Without this step, all the parameters that you pick by hand in later steps would become less robust!


Step 2: Adaptive Binarization to Detect the Text Blobs

In this step, we will adaptively binarize out the text blobs. I will add more comments later, but the idea is basically the following:

  • We divide the image into blocks of size BLOCK_SIZE. The trick is to choose this size large enough so that you still get a large chunk of text and background (i.e. larger than any symbols that you have), but small enough to not suffer from any lighting condition variations (i.e. "large, but still local").
  • Inside each block, we do locally-adaptive binarization: we look at the median value and hypothesize that it is the background (because we chose BLOCK_SIZE large enough for the majority of it to be background). Then we define DELTA: basically just a threshold of "how far away from the median do we still consider a pixel background?".

So, the function process_image gets the job done. Moreover, you can modify the preprocess and postprocess functions to fit your needs (however, as you can see from the example above, the algorithm is pretty robust, i.e. it works quite well out-of-the-box without modifying the parameters too much).

The code of this part assumes the foreground to be darker than the background (i.e. ink on paper). But you can easily change that by tweaking the preprocess function: instead of 255 - image, return just image.

# These are probably the only important parameters in the
# whole pipeline (steps 0 through 3).
BLOCK_SIZE = 40
DELTA = 25

# Do the necessary noise cleaning and other stuff.
# I just do a simple blurring here, but you can optionally
# add more steps.
def preprocess(image):
    image = cv2.medianBlur(image, 3)
    return 255 - image

# Again, this step is fully optional and you can even keep
# the body empty. I just did some opening. The algorithm is
# pretty robust, so this step won't affect much.
def postprocess(image):
    kernel = np.ones((3,3), np.uint8)
    image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
    return image

# Just a helper function that generates box coordinates
def get_block_index(image_shape, yx, block_size):
    y = np.arange(max(0, yx[0]-block_size), min(image_shape[0], yx[0]+block_size))
    x = np.arange(max(0, yx[1]-block_size), min(image_shape[1], yx[1]+block_size))
    return np.meshgrid(y, x)

# Here is where the trick begins. We perform binarization from the
# median value locally (the img_in is actually a slice of the image).
# Here, the following assumptions are held:
#   1.  The majority of pixels in the slice is background
#   2.  The median value of the intensity histogram probably
#       belongs to the background. We allow a soft margin DELTA
#       to account for any irregularities.
#   3.  We need to keep everything other than the background.
#
# We also do simple morphological operations here. It was just
# something that I empirically found to be "useful", but I assume
# this is pretty robust across different datasets.
def adaptive_median_threshold(img_in):
    med = np.median(img_in)
    img_out = np.zeros_like(img_in)
    img_out[img_in - med < DELTA] = 255
    kernel = np.ones((3,3), np.uint8)
    img_out = 255 - cv2.dilate(255 - img_out, kernel, iterations=2)
    return img_out

# This function just divides the image into local regions (blocks),
# and performs the `adaptive_median_threshold(...)` function on each
# of the regions.
def block_image_process(image, block_size):
    out_image = np.zeros_like(image)
    for row in range(0, image.shape[0], block_size):
        for col in range(0, image.shape[1], block_size):
            idx = (row, col)
            block_idx = get_block_index(image.shape, idx, block_size)
            out_image[block_idx] = adaptive_median_threshold(image[block_idx])
    return out_image

# This function invokes the whole pipeline of Step 2.
def process_image(img):
    image_in = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    image_in = preprocess(image_in)
    image_out = block_image_process(image_in, BLOCK_SIZE)
    image_out = postprocess(image_out)
    return image_out

The results are nice blobs like this, closely following the ink trace:


Step 3: The "Soft" Part of Binarization

Having the blobs that cover the symbols and a little bit more, we can finally do the whitening procedure.

If we look more closely at photos of sheets of paper with text (especially those with handwriting), the transformation from "background" (white paper) to "foreground" (the dark ink) is not sharp, but very gradual. Other binarization-based answers in this section propose simple thresholding (even if it is locally-adaptive, it is still a threshold), which works okay for printed text but produces not-so-pretty results with handwriting.

So, the motivation of this section is that we want to preserve that effect of gradual transition from black to white, just as in natural photos of sheets of paper with natural ink. The final purpose is to make it printable.

The main idea is simple: the more the pixel value (after the thresholding above) differs from the local min value, the more likely it is to belong to the background. We can express this using a family of sigmoid functions, re-scaled to the range of the local block (so that this function is adaptively scaled through the image).

# This is the function used for composing
def sigmoid(x, orig, rad):
    k = np.exp((x - orig) * 5 / rad)
    return k / (k + 1.)

# Here, we combine the local blocks. A bit lengthy, so please
# follow the local comments.
def combine_block(img_in, mask):
    # First, we pre-fill the masked region of img_out with white
    # (i.e. background). The mask is retrieved from the previous section.
    img_out = np.zeros_like(img_in)
    img_out[mask == 255] = 255
    fimg_in = img_in.astype(np.float32)

    # Then, we store the foreground (letters written with ink)
    # in the `idx` array. If there is none (i.e. just background),
    # we move on to the next block.
    idx = np.where(mask == 0)
    if idx[0].shape[0] == 0:
        img_out[idx] = img_in[idx]
        return img_out

    # We find the intensity range of our pixels in this local part
    # and clip the image block to that range, locally.
    lo = fimg_in[idx].min()
    hi = fimg_in[idx].max()
    v = fimg_in[idx] - lo
    r = hi - lo

    # Now we use good old Otsu binarization to get a rough estimation
    # of foreground and background regions.
    img_in_idx = img_in[idx]
    ret3, th3 = cv2.threshold(img_in[idx], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Then we normalize the values and apply the sigmoid to gradually
    # combine them.
    bound_value = np.min(img_in_idx[th3[:, 0] == 255])
    bound_value = (bound_value - lo) / (r + 1e-5)
    f = (v / (r + 1e-5))
    f = sigmoid(f, bound_value + 0.05, 0.2)

    # Finally, we re-normalize the result to the range [0..255]
    img_out[idx] = (255. * f).astype(np.uint8)
    return img_out

# We do the combination routine on local blocks, so that the scaling
# parameters of the sigmoid function can be adjusted to the local setting
def combine_block_image_process(image, mask, block_size):
    out_image = np.zeros_like(image)
    for row in range(0, image.shape[0], block_size):
        for col in range(0, image.shape[1], block_size):
            idx = (row, col)
            block_idx = get_block_index(image.shape, idx, block_size)
            out_image[block_idx] = combine_block(
                image[block_idx], mask[block_idx])
    return out_image

# Postprocessing (should be robust even without it, but I recommend
# you play around a bit and find what works best for your data).
# I just left it blank.
def combine_postprocess(image):
    return image

# The main function of this section. Executes the whole pipeline.
def combine_process(img, mask):
    image_in = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    image_out = combine_block_image_process(image_in, mask, 20)
    image_out = combine_postprocess(image_out)
    return image_out

Some parts are commented out since they are optional. The combine_process function takes the mask from the previous step and executes the whole composition pipeline. You can try to toy with them for your specific data (images). The results are neat:

Probably I will add more comments and explanations to the code in this answer. Will upload the whole thing (together with cropping and warping code) on Github.
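For convenience, here is an end-to-end driver tying the three steps together (a sketch: the filenames are placeholders, and it assumes the functions above are defined in the same module):

import cv2

img = cv2.imread('page.jpg')            # placeholder input
img = adjust_gamma(img, gamma=1.2)      # Step 1: gamma correction
mask = process_image(img)               # Step 2: text-blob mask
result = combine_process(img, mask)     # Step 3: soft binarization
cv2.imwrite('whitened.png', result)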

First we separate the text and the color markings. This can be done in a color space with a color saturation channel. I instead used a very simple method inspired by this paper: the ratio min(R,G,B) / max(R,G,B) will be near 1 for (light) gray areas and << 1 for colored areas. For dark gray areas we get anything between 0 and 1, but this doesn't matter: either these areas go to the color mask and are then added as is, or they are not included in the mask and contribute to the output from the binarized text. For black we use the fact that 0/0 becomes 0 when converted to uint8.

The grayscale text image gets locally thresholded to produce a black-and-white image. You can pick your favorite technique from this comparison or that survey. I chose the NICK technique, which copes well with low contrast and is rather robust, i.e. the choice of the parameter k between about -0.3 and -0.1 works well for a very wide range of conditions, which is good for automatic processing. For the sample document provided, the chosen technique doesn't play a big role as it is relatively uniformly illuminated, but in order to cope with non-uniformly illuminated images it should be a local thresholding technique.

In the final step, the color areas are added back to the binarized text image.

So this solution is very similar to @fmw42's solution (all credit for the idea to him), with the exception of the different color detection and binarization methods.

image = cv2.imread('mY7ep.jpg')

# make mask and inverted mask for colored areas
b, g, r = cv2.split(cv2.blur(image, (5,5)))
np.seterr(divide='ignore', invalid='ignore') # 0/0 --> 0
m = (np.fmin(np.fmin(b, g), r) / np.fmax(np.fmax(b, g), r)) * 255
_, mask_inv = cv2.threshold(np.uint8(m), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.bitwise_not(mask_inv)

# local thresholding of grayscale image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
text = cv2.ximgproc.niBlackThreshold(gray, 255, cv2.THRESH_BINARY, 41, -0.1, binarizationMethod=cv2.ximgproc.BINARIZATION_NICK)

# create background (text) and foreground (color markings)
bg = cv2.bitwise_and(text, text, mask=mask_inv)
fg = cv2.bitwise_and(image, image, mask=mask)

out = cv2.add(cv2.cvtColor(bg, cv2.COLOR_GRAY2BGR), fg)


If you don't need the color markings, you can simply binarize the grayscale image:

image = cv2.imread('mY7ep.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
text = cv2.ximgproc.niBlackThreshold(gray, 255, cv2.THRESH_BINARY, 41, -0.3, binarizationMethod=cv2.ximgproc.BINARIZATION_NICK)  # window size 41 as above
