What is "semantic segmentation" compared to "segmentation" and "scene labeling"?

Is "semantic segmentation" just a redundant way of saying "segmentation", or is there a difference between the two? Is there a difference between "scene labeling" and "scene parsing"?

What is the difference between pixel-level and pixelwise segmentation?

(Side question: when you have this kind of pixel-level annotation, do you get object detection for free, or is there still work to do?)

Please give a source for your definitions.

Sources which use "semantic segmentation"

  • Jonathan Long, Evan Shelhamer, Trevor Darrell: Fully Convolutional Networks for Semantic Segmentation. CVPR 2015 and PAMI 2016.
  • Hong, Seunghoon, Hyeonwoo Noh, and Bohyung Han: "Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation." arXiv preprint arXiv:1506.04924, 2015.
  • V. Lempitsky, A. Vedaldi, and A. Zisserman: A Pylon Model for Semantic Segmentation. Advances in Neural Information Processing Systems, 2011.

Sources which use "scene labeling"

  • Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun: Learning Hierarchical Features for Scene Labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.

Sources which use "pixel-level"

  • Pinheiro, Pedro O., and Ronan Collobert: "From Image-level to Pixel-level Labeling with Convolutional Networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. (See http://arxiv.org/abs/1411.6228)

Sources which use "pixelwise"

Google Ngram

"Semantic segmentation" seems to have been used more often than "scene labeling" recently.

(image: Google Ngram chart comparing "semantic segmentation" and "scene labeling")


"segmentation" is a partition of an image into several "coherent" parts, but without any attempt at understanding what these parts represent. One of the most famous works (but definitely not the first) is Shi and Malik "Normalized Cuts and Image Segmentation" PAMI 2000. These works attempt to define "coherence" in terms of low-level cues such as color, texture and smoothness of boundary. You can trace back these works to the Gestalt theory.

On the other hand "semantic segmentation" attempts to partition the image into semantically meaningful parts, and to classify each part into one of the pre-determined classes. You can also achieve the same goal by classifying each pixel (rather than the entire image/segment). In that case you are doing pixel-wise classification, which leads to the same end result but in a slightly different path...

So, I suppose you can say that "semantic segmentation", "scene labeling" and "pixelwise classification" are basically trying to achieve the same goal: semantically understanding the role of each pixel in the image. You can take many paths to reach that goal, and these paths lead to slight nuances in the terminology.
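As a minimal sketch of that pixel-wise route (the tiny architecture and class count below are illustrative assumptions, not taken from any particular paper): a fully convolutional model outputs one score per class at every pixel, and the argmax over classes yields a dense label map, i.e. a semantic segmentation.

```python
# Pixel-wise classification with a toy fully convolutional network:
# per-pixel class scores -> argmax over classes -> dense label map.
import torch
import torch.nn as nn

NUM_CLASSES = 21                          # e.g. 20 object classes + background (assumption)

net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),   # per-pixel class scores
)

image = torch.randn(1, 3, 128, 128)       # dummy RGB image batch
scores = net(image)                       # (1, NUM_CLASSES, 128, 128)
label_map = scores.argmax(dim=1)          # (1, 128, 128): one class id per pixel
print(label_map.shape)
```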

I have read a lot of papers about Object Detection, Object Recognition, Object Segmentation, Image Segmentation and Semantic Image Segmentation, and here are my conclusions, which may not be entirely correct:

Object Recognition: In a given image you have to detect all objects (a restricted set of classes, depending on your dataset), localize them with bounding boxes, and label each bounding box. The image below shows a simple output of a state-of-the-art object recognition system.

(image: object recognition example)

Object Detection: like object recognition, but here there are only two classes: object bounding boxes and non-object bounding boxes. For example, in car detection you have to detect all cars in a given image with their bounding boxes.

(image: object detection example)

Object Segmentation: as in object recognition, you recognize all objects in an image, but the output marks each object by classifying the pixels of the image that belong to it.

(image: object segmentation example)

Image Segmentation: In image segmentation you segment regions of the image. The output does not label the segments; regions of the image that are consistent with each other should end up in the same segment. Extracting superpixels from an image and foreground-background segmentation are examples of this task.

(image: image segmentation example)

Semantic Segmentation: In semantic segmentation you have to label each pixel with a class of objects (Car, Person, Dog, ...) or non-objects (Water, Sky, Road, ...). In other words, in semantic segmentation you label every region of the image.

(image: semantic segmentation example)

I think pixel-level and pixelwise labeling are basically the same thing and can refer to either image segmentation or semantic segmentation. I've also answered your question in this link in the same way.
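Regarding the side question about getting object detection "for free" from pixel-level annotation: roughly, you can read boxes off a per-pixel label map by taking connected components per class, but touching instances of the same class are not separated, so it is not full instance-level detection. A minimal sketch (the label ids and array shapes below are made-up examples):

```python
# Deriving candidate bounding boxes from a per-pixel class label map:
# per-class connected components -> box extents per component.
import numpy as np
from scipy import ndimage

label_map = np.zeros((100, 100), dtype=int)   # 0 = background (assumption)
label_map[10:40, 10:40] = 1                   # class 1, e.g. "car"
label_map[60:90, 50:95] = 2                   # class 2, e.g. "person"

for class_id in np.unique(label_map):
    if class_id == 0:
        continue                              # skip background
    components, n = ndimage.label(label_map == class_id)
    for sl in ndimage.find_objects(components):
        y, x = sl
        print(f"class {class_id}: box "
              f"(x0={x.start}, y0={y.start}, x1={x.stop}, y1={y.stop})")
```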

The previous answers are really great; I would just like to add a few points:

Object Segmentation

One of the reasons this term has fallen out of favor in the research community is that it is problematically vague. Object segmentation used to simply mean finding a single object or a small number of objects in an image and drawing a boundary around them, and for most purposes you can still assume it means this. However, it also began to be used to mean segmentation of blobs that might be objects, segmentation of objects from the background (more commonly now called background subtraction, background segmentation, or foreground detection), and in some cases it was even used interchangeably with object recognition using bounding boxes (this quickly stopped with the advent of deep neural network approaches to object recognition, but beforehand object recognition could also mean simply labeling an entire image with the object in it).

What makes "segmentation" "semantic"?

Simply put, each segment, or in the case of deep methods each pixel, is given a class label based on a category. Segmentation in general is just the division of the image by some rule. Mean-shift segmentation, for example, viewed from a very high level, divides the data according to changes in the energy of the image. Graph-cut-based segmentation is similarly not learned but derived directly from the properties of each image, separately from the rest. More recent (neural-network-based) methods use labeled pixels to learn to identify the local features associated with specific classes, and then classify each pixel according to which class has the highest confidence for that pixel. In this way, "pixel labeling" is actually a more honest name for the task, and the "segmentation" component is emergent, as the sketch below illustrates.
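A minimal sketch of that emergence (the shapes, class count, and use of scikit-image here are illustrative assumptions, not any particular method): pick the most confident class per pixel, then read off the segments as connected groups of identically labeled pixels.

```python
# Pixel labeling: argmax over per-pixel class confidences; segments then
# emerge as connected components of same-label pixels.
import numpy as np
from skimage.measure import label as connected_components

H, W, NUM_CLASSES = 64, 64, 5
probs = np.random.rand(NUM_CLASSES, H, W)        # per-pixel class confidences
probs /= probs.sum(axis=0, keepdims=True)

pixel_labels = probs.argmax(axis=0)              # (H, W) class ids, one per pixel

for class_id in range(NUM_CLASSES):
    segments = connected_components(pixel_labels == class_id)
    print(f"class {class_id}: {segments.max()} connected segment(s)")
```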

Instance Segmentation

Arguably the most difficult, relevant, and original meaning of object segmentation, "instance segmentation" means the segmentation of the individual objects within a scene, regardless of whether they are of the same type. However, one of the reasons this is so difficult is that from a vision perspective (and in some ways a philosophical one) what makes an "object" instance is not entirely clear. Are body parts objects? Should such "part-objects" be segmented at all by an instance segmentation algorithm? Should they only be segmented if they are seen separate from the whole? What about compound objects: should two things that are clearly adjoined but separable be one object or two (is a rock glued to the top of a stick an axe, a hammer, or just a stick and a rock unless properly made)? It also isn't clear how to distinguish instances. Is a wall a separate instance from the other walls it is attached to? In what order should instances be counted? As they appear? By proximity to the viewpoint? In spite of these difficulties, segmentation of objects is still a big deal, because as humans we interact with objects all the time regardless of their "class label" (using random objects around you as paperweights, sitting on things that are not chairs), and so some datasets do attempt to get at this problem, but the main reason there isn't much attention given to it yet is that it isn't well enough defined.

Scene Parsing/Scene labeling

Scene parsing is the strictly segmentation-based approach to scene labeling, which has some vagueness problems of its own. Historically, scene labeling meant dividing the entire "scene" (image) up into segments and giving them all a class label. However, it was also used to mean giving class labels to areas of the image without explicitly segmenting them. With respect to segmentation, "semantic segmentation" does not imply dividing up the entire scene: the algorithm is intended to segment only the objects it knows, and it will be penalized by its loss function for labeling pixels that don't have any label. For example, the MS-COCO dataset is a dataset for semantic segmentation where only some objects are segmented.

(image: MS-COCO sample images)
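As a minimal sketch of how such a loss is commonly set up (a PyTorch-style example, not tied to any specific paper; the class count and label ids are assumptions): unlabeled pixels can either be mapped to a background class, so that predicting an object class there is penalized as described above, or excluded entirely via an ignore index.

```python
# Per-pixel cross-entropy for semantic segmentation, with two common ways of
# handling pixels that carry no annotation.
import torch
import torch.nn as nn

NUM_CLASSES = 21                 # 20 object classes + background (assumption)
BACKGROUND, VOID = 0, 255        # illustrative label ids

logits = torch.randn(1, NUM_CLASSES, 64, 64)          # per-pixel class scores
target = torch.randint(0, NUM_CLASSES, (1, 64, 64))   # ground-truth label map
target[0, :8, :8] = VOID                              # some pixels were never annotated

# Option 1: treat unlabeled pixels as background, so object predictions there cost loss.
penalized = target.clone()
penalized[penalized == VOID] = BACKGROUND
loss_a = nn.CrossEntropyLoss()(logits, penalized)

# Option 2: exclude unlabeled pixels from the loss entirely.
loss_b = nn.CrossEntropyLoss(ignore_index=VOID)(logits, target)
print(loss_a.item(), loss_b.item())
```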