Get a pixel array from an image

I'm looking for the fastest way to get pixel data (in the form int[][]) from a BufferedImage. My goal is to be able to address pixel (x, y) of the image using int[x][y]. All the methods I have found do not do this (most of them return int[]).


Something like this?

int[][] pixels = new int[w][h];

for (int i = 0; i < w; i++)
    for (int j = 0; j < h; j++)
        pixels[i][j] = img.getRGB(i, j);
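To make the snippet above copy-paste runnable, here is a self-contained version (the 4x3 in-memory image is just a stand-in for your own img, w and h):

```java
import java.awt.image.BufferedImage;

public class GetRGBLoop {
    public static void main(String[] args) {
        // Stand-in image; in practice this would come from ImageIO.read(...)
        BufferedImage img = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
        int w = img.getWidth(), h = img.getHeight();

        int[][] pixels = new int[w][h];
        for (int i = 0; i < w; i++)
            for (int j = 0; j < h; j++)
                pixels[i][j] = img.getRGB(i, j);

        // pixels[x][y] now addresses the pixel at (x, y)
        System.out.println(pixels.length + "x" + pixels[0].length);
    }
}
```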

I was just playing around with this same subject: what is the fastest way to access the pixels? I currently know of two methods for doing this:

  1. Using BufferedImage's getRGB() method, as described in @tskuzzy's answer.
  2. By accessing the pixel array directly, using:

    byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
    

If you are working with large images and performance is an issue, the first method is absolutely not the way to go. The getRGB() method combines the alpha, red, green and blue values into one int and then returns the result, which in most cases you will then have to unpack again to get these values back.

The second method will return the red, green and blue values directly for each pixel, and if there is an alpha channel it will add the alpha value. Using this method is harder in terms of calculating indices, but is much faster than the first approach.

In my application I was able to reduce the time of processing the pixels by more than 90% by just switching from the first approach to the second!
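For reference, this is the unpacking you end up doing with the first method. A minimal sketch (the image size and pixel value are made up for illustration):

```java
import java.awt.image.BufferedImage;

public class UnpackDemo {
    public static void main(String[] args) {
        // Tiny ARGB image with one known pixel: alpha=255, r=0x10, g=0x20, b=0x30
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(0, 0, 0xFF102030);

        // getRGB packs all four channels into one int...
        int argb = img.getRGB(0, 0);

        // ...so getting the individual values back means shifting and masking
        int alpha = (argb >> 24) & 0xFF;
        int red   = (argb >> 16) & 0xFF;
        int green = (argb >> 8)  & 0xFF;
        int blue  =  argb        & 0xFF;

        System.out.println(alpha + " " + red + " " + green + " " + blue);
    }
}
```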

Here is a comparison I've set up to compare the two approaches:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PerformanceTest {

    public static void main(String[] args) throws IOException {
        BufferedImage hugeImage = ImageIO.read(PerformanceTest.class.getResource("12000X12000.jpg"));

        System.out.println("Testing convertTo2DUsingGetRGB:");
        for (int i = 0; i < 10; i++) {
            long startTime = System.nanoTime();
            int[][] result = convertTo2DUsingGetRGB(hugeImage);
            long endTime = System.nanoTime();
            System.out.println(String.format("%-2d: %s", (i + 1), toString(endTime - startTime)));
        }

        System.out.println("");

        System.out.println("Testing convertTo2DWithoutUsingGetRGB:");
        for (int i = 0; i < 10; i++) {
            long startTime = System.nanoTime();
            int[][] result = convertTo2DWithoutUsingGetRGB(hugeImage);
            long endTime = System.nanoTime();
            System.out.println(String.format("%-2d: %s", (i + 1), toString(endTime - startTime)));
        }
    }

    private static int[][] convertTo2DUsingGetRGB(BufferedImage image) {
        int width = image.getWidth();
        int height = image.getHeight();
        int[][] result = new int[height][width];

        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                result[row][col] = image.getRGB(col, row);
            }
        }

        return result;
    }

    private static int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;

        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel + 3 < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel + 2 < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }

        return result;
    }

    private static String toString(long nanoSecs) {
        int minutes   = (int) (nanoSecs / 60000000000.0);
        int seconds   = (int) (nanoSecs / 1000000000.0) - (minutes * 60);
        int millisecs = (int) (((nanoSecs / 1000000000.0) - (seconds + minutes * 60)) * 1000);

        if (minutes == 0 && seconds == 0)
            return millisecs + "ms";
        else if (minutes == 0 && millisecs == 0)
            return seconds + "s";
        else if (seconds == 0 && millisecs == 0)
            return minutes + "min";
        else if (minutes == 0)
            return seconds + "s " + millisecs + "ms";
        else if (seconds == 0)
            return minutes + "min " + millisecs + "ms";
        else if (millisecs == 0)
            return minutes + "min " + seconds + "s";

        return minutes + "min " + seconds + "s " + millisecs + "ms";
    }
}

Can you guess the results? ;)

Testing convertTo2DUsingGetRGB:
1 : 16s 911ms
2 : 16s 730ms
3 : 16s 512ms
4 : 16s 476ms
5 : 16s 503ms
6 : 16s 683ms
7 : 16s 477ms
8 : 16s 373ms
9 : 16s 367ms
10: 16s 446ms


Testing convertTo2DWithoutUsingGetRGB:
1 : 1s 487ms
2 : 1s 940ms
3 : 1s 785ms
4 : 1s 848ms
5 : 1s 624ms
6 : 2s 13ms
7 : 1s 968ms
8 : 1s 864ms
9 : 1s 673ms
10: 2s 86ms


BUILD SUCCESSFUL (total time: 3 minutes 10 seconds)

Try this, if it helps:

BufferedImage imgBuffer = ImageIO.read(new File("c:\\image.bmp"));


byte[] pixels = (byte[])imgBuffer.getRaster().getDataElements(0, 0, imgBuffer.getWidth(), imgBuffer.getHeight(), null);

This worked for me:

BufferedImage bufImgs = ImageIO.read(new File("c:\\adi.bmp"));
int numBands = bufImgs.getData().getNumBands();
double[] data = new double[bufImgs.getWidth() * bufImgs.getHeight() * numBands];
bufImgs.getData().getPixels(0, 0, bufImgs.getWidth(), bufImgs.getHeight(), data);
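Here is a self-contained variant of the same getPixels() approach, using an in-memory image instead of a file (the image size and pixel value are mine, not from the snippet above):

```java
import java.awt.image.BufferedImage;
import java.awt.image.Raster;

public class GetPixelsDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_3BYTE_BGR);
        img.setRGB(0, 0, 0x00FF8040); // red=0xFF, green=0x80, blue=0x40

        Raster raster = img.getData();
        int bands = raster.getNumBands(); // 3 for an RGB image
        double[] data = new double[img.getWidth() * img.getHeight() * bands];
        raster.getPixels(0, 0, img.getWidth(), img.getHeight(), data);

        // getPixels returns one sample per band per pixel, in band order;
        // for TYPE_3BYTE_BGR the bands are exposed as R, G, B
        System.out.println((int) data[0] + " " + (int) data[1] + " " + (int) data[2]);
    }
}
```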

I found Mota's answer gave me a 10x speed increase, so thanks Mota.

I've wrapped the code up in a convenient class that takes a BufferedImage in the constructor and exposes an equivalent getRGB(x, y) method, which makes it a drop-in replacement for code that uses BufferedImage.getRGB(x, y).

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class FastRGB
{
    private int width;
    private int height;
    private boolean hasAlphaChannel;
    private int pixelLength;
    private byte[] pixels;

    FastRGB(BufferedImage image)
    {
        pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        width = image.getWidth();
        height = image.getHeight();
        hasAlphaChannel = image.getAlphaRaster() != null;
        pixelLength = 3;
        if (hasAlphaChannel)
        {
            pixelLength = 4;
        }
    }

    int getRGB(int x, int y)
    {
        int pos = (y * pixelLength * width) + (x * pixelLength);

        int argb = -16777216; // 255 alpha
        if (hasAlphaChannel)
        {
            argb = (((int) pixels[pos++] & 0xff) << 24); // alpha
        }

        argb += ((int) pixels[pos++] & 0xff); // blue
        argb += (((int) pixels[pos++] & 0xff) << 8); // green
        argb += (((int) pixels[pos++] & 0xff) << 16); // red
        return argb;
    }
}
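As a sanity check that this index math really matches BufferedImage.getRGB(), here is a minimal standalone sketch of the same decoding (the image size and color are arbitrary, chosen just for this check):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class FastRGBCheck {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(3, 2, BufferedImage.TYPE_4BYTE_ABGR);
        img.setRGB(2, 1, 0xFF336699);

        byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        int pixelLength = 4; // TYPE_4BYTE_ABGR has an alpha channel
        int x = 2, y = 1;
        int pos = (y * pixelLength * img.getWidth()) + (x * pixelLength);

        // Same byte order as FastRGB: alpha, blue, green, red
        int argb = ((pixels[pos]     & 0xFF) << 24) // alpha
                 | ( pixels[pos + 1] & 0xFF)        // blue
                 | ((pixels[pos + 2] & 0xFF) << 8)  // green
                 | ((pixels[pos + 3] & 0xFF) << 16); // red

        System.out.println(argb == img.getRGB(2, 1)); // should print true
    }
}
```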

Mota's answer is great, unless your BufferedImage came from a monochrome bitmap. A monochrome bitmap has only 2 possible values for its pixels (for example, 0 = black and 1 = white). When a monochrome bitmap is used, the

final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();

call returns the raw pixel array data in such a way that each byte contains more than one pixel.

So when you use a monochrome bitmap image to create your BufferedImage object, this is the algorithm you want to use:

/**
 * This returns a true bitmap where each element in the grid is either a 0
 * or a 1. A 1 means the pixel is white and a 0 means the pixel is black.
 *
 * If the incoming image doesn't have any pixels in it then this method
 * returns null;
 *
 * @param image
 * @return
 */
public static int[][] convertToArray(BufferedImage image)
{
    if (image == null || image.getWidth() == 0 || image.getHeight() == 0)
        return null;

    // This returns bytes of data starting from the top left of the bitmap
    // image and goes down.
    // Top to bottom. Left to right.
    final byte[] pixels = ((DataBufferByte) image.getRaster()
            .getDataBuffer()).getData();

    final int width = image.getWidth();
    final int height = image.getHeight();

    int[][] result = new int[height][width];

    boolean done = false;
    boolean alreadyWentToNextByte = false;
    int byteIndex = 0;
    int row = 0;
    int col = 0;
    int numBits = 0;
    byte currentByte = pixels[byteIndex];
    while (!done)
    {
        alreadyWentToNextByte = false;

        result[row][col] = (currentByte & 0x80) >> 7;
        currentByte = (byte) (((int) currentByte) << 1);
        numBits++;

        if ((row == height - 1) && (col == width - 1))
        {
            done = true;
        }
        else
        {
            col++;

            if (numBits == 8)
            {
                currentByte = pixels[++byteIndex];
                numBits = 0;
                alreadyWentToNextByte = true;
            }

            if (col == width)
            {
                row++;
                col = 0;

                if (!alreadyWentToNextByte)
                {
                    currentByte = pixels[++byteIndex];
                    numBits = 0;
                }
            }
        }
    }

    return result;
}
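To see the bit packing this algorithm deals with, here is a small standalone sketch using a TYPE_BYTE_BINARY image (the 8x1 size is chosen purely for illustration, so one byte holds a whole row):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class MonochromeDemo {
    public static void main(String[] args) {
        // 8x1 binary image: all 8 pixels fit into a single byte
        BufferedImage img = new BufferedImage(8, 1, BufferedImage.TYPE_BYTE_BINARY);
        img.setRGB(0, 0, 0xFFFFFFFF); // leftmost pixel white, the rest stay black

        byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();

        // Unpack the bits MSB-first, as convertToArray above does
        StringBuilder bits = new StringBuilder();
        byte b = pixels[0];
        for (int i = 0; i < 8; i++) {
            bits.append((b & 0x80) >> 7);
            b = (byte) (b << 1);
        }
        System.out.println(pixels.length + " " + bits);
    }
}
```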

Here is another FastRGB implementation:

public class FastRGB {
    public int width;
    public int height;
    private boolean hasAlphaChannel;
    private int pixelLength;
    private byte[] pixels;

    FastRGB(BufferedImage image) {
        pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        width = image.getWidth();
        height = image.getHeight();
        hasAlphaChannel = image.getAlphaRaster() != null;
        pixelLength = 3;
        if (hasAlphaChannel)
            pixelLength = 4;
    }

    short[] getRGB(int x, int y) {
        int pos = (y * pixelLength * width) + (x * pixelLength);
        short rgb[] = new short[4];
        if (hasAlphaChannel)
            rgb[3] = (short) (pixels[pos++] & 0xFF); // Alpha
        rgb[2] = (short) (pixels[pos++] & 0xFF); // Blue
        rgb[1] = (short) (pixels[pos++] & 0xFF); // Green
        rgb[0] = (short) (pixels[pos++] & 0xFF); // Red
        return rgb;
    }
}

What is this?

Reading an image pixel by pixel through BufferedImage's getRGB method is quite slow; this class is the solution to that.

The idea is to construct the object by feeding it a BufferedImage instance; it then reads all the data at once and stores it in an array. Once you need to get a pixel, you call getRGB.

Dependencies

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

Considerations

While FastRGB makes reading pixels much faster, it can lead to high memory usage, since it simply stores a copy of the image's data. So if you have a 4 MB BufferedImage in memory, once you create a FastRGB instance the memory usage becomes 8 MB. You can, however, recycle the BufferedImage instance after creating the FastRGB.

Be careful not to run into an OutOfMemory exception when using this on devices such as Android phones, where RAM is often a bottleneck.