There are three main types of existing iris segmentation methods: boundary-based methods, pixel-based methods, and deep learning-based methods. Boundary-based methods try to locate the iris region by finding the pupil boundary and the limbus boundary, while pixel-based methods directly determine whether each pixel belongs to the iris region. Deep learning-based methods can be roughly classified as pixel-based methods, but due to their outstanding performance we list them separately.

Boundary-based methods. Boundary-based methods try to locate the iris region by finding the pupil boundary and the limbus boundary. It is natural to think of the iris region as an annulus, but in reality the inner and outer boundaries of the iris are usually not concentric. To avoid this mistake, Daugman's early work [5] simply modeled the two iris boundaries as circles that are not constrained to share a center. He proposed a two-step iris segmentation method that repeatedly applies an integro-differential operator to search over the image domain, finding first the outer boundary and then the inner boundary of the iris region. Wildes et al. [6] adopted gradient-based edge detection and then located the outer and inner circles using the Hough transform [24,25]. Nearly all early methods were based on the assumption that the iris has annulus-like boundaries. However, due to noise and occlusion, the iris boundaries, especially the outer boundary, are usually not circular. Wildes et al. [6] therefore parameterized the upper and lower eyelids as parabolic arcs and located them with a gradient-based edge detector that favors horizontal edges (based on the assumption that the head is upright). The methods proposed by Shah [7] and Daugman [22] used active contours to enhance iris segmentation, because active contours allow non-circular boundaries and enable flexible coordinate systems. Moreover, many techniques, such as reflection removal [3], illumination normalization and coarse iris localization [26], and a boundary fitting model [27], have been introduced to improve the performance of boundary-based iris segmentation methods.
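For concreteness, the integro-differential operator of [5] is commonly written as

\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) \ast \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \right|,

where I(x, y) is the image intensity, G_\sigma(r) is a Gaussian smoothing function of scale \sigma, and the contour integral is taken over the circle of radius r centered at (x_0, y_0). The operator searches the three-dimensional circle parameter space for the maximum of the blurred partial derivative, with respect to increasing radius, of the normalized contour integral, and is applied once per boundary in the two-step search described above.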
Pixel-based methods. Essentially, pixel-based methods construct classifiers that determine whether each pixel lies within the iris region. An early promising pixel-based method proposed by Pundlik et al. [8] adopted a step-wise procedure based on image intensities. In this method, a graph-cut-based energy minimization algorithm was used to separate first the eyelashes and then the pupil, iris, and background. Tan and Kumar [9] exploited localized Zernike moments features [28,29] at different radii to classify each pixel into the iris or non-iris category using support vector machines (SVMs). Proença [10] first introduced neural networks to classify iris pixels. In this method, a shallow neural network containing only one hidden layer was adopted for first sclera and then iris training/classification. The idea behind the first stage comes from the insight that the sclera is the most distinguishable region in non-ideal images, and the mandatory adjacency of the sclera and the iris was exploited to detect noise-free iris regions.

Basically, boundary-based iris segmentation methods require prominent contrast between structural components, so gradient and contour information matters most in such methods, whereas pixel-based methods rely heavily on discriminative features such as image texture, color, and intensity. Approaches such as [30,31] integrate these two kinds of methods. The method in [30] first roughly clusters image pixels into iris and non-iris regions by thresholding on brightness, and then, on the resulting coarse iris location image, adopts an integro-differential model to locate the iris boundary. The method in [31] first uses Random Walker to locate a coarse boundary circle of the iris region, and then applies a series of operations based on statistical gray-level intensity information for pixel-level boundary refinement.
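As a rough illustration of this coarse-to-fine idea, the sketch below first thresholds pixels on brightness to obtain a coarse dark-region mask and then fits a circular boundary on that region with a Hough transform. It is a minimal sketch only, not the pipeline of [30] or [31]: the function name, the OpenCV calls, the threshold value, and the radius bounds are all assumptions chosen for typical near-infrared iris images.

import cv2

def coarse_to_fine_iris_localization(gray):
    """Return (x, y, r) of a fitted iris boundary circle, or None."""
    # Stage 1 (pixel level): brightness thresholding yields a coarse mask of
    # dark pixels that roughly covers the pupil and iris.
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    _, coarse_mask = cv2.threshold(blurred, 80, 255, cv2.THRESH_BINARY_INV)

    # Stage 2 (boundary level): keep only the coarse region and fit the limbus
    # circle with the Hough transform, as in gradient/Hough boundary methods.
    masked = cv2.bitwise_and(blurred, blurred, mask=coarse_mask)
    circles = cv2.HoughCircles(
        masked, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
        param1=100, param2=30, minRadius=40, maxRadius=150)
    if circles is None:
        return None
    x, y, r = circles[0, 0]  # strongest circle candidate
    return int(x), int(y), int(r)

In an actual system the second stage would be replaced by the integro-differential model of [30] or the statistical refinement of [31]; the sketch only shows how a cheap pixel-level decision can constrain the subsequent boundary search.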