Journal of Mathematical Imaging and Vision manuscript No. (will be inserted by the editor)

Automatic Radial Distortion Estimation from a Single Image

Faisal Bukhari · Matthew N. Dailey

the date of receipt and acceptance should be inserted later

Abstract  Many computer vision algorithms rely on the assumptions of the pinhole camera model, but lens distortion with off-the-shelf cameras is usually significant enough to violate this assumption. Many methods for radial distortion estimation have been proposed, but they all have limitations. Robust automatic radial distortion estimation from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. We propose a new method for automatic radial distortion estimation based on the plumb-line approach. The method works from a single image and does not require a special calibration pattern. It is based on Fitzgibbon's division model, robust estimation of circular arcs, and robust estimation of distortion parameters. We perform an extensive empirical study of the method on synthetic images. We include a comparative statistical analysis of how different circle fitting methods contribute to accurate distortion parameter estimation. We finally provide qualitative results on a wide variety of challenging real images. The experiments demonstrate the method's ability to accurately identify distortion parameters and remove distortion from images.

Faisal Bukhari · Matthew N. Dailey
The authors are with the Computer Science and Information Management program, School of Engineering and Technology, Asian Institute of Technology, P.O. Box 4, Klong Luang,

Pathumthani 12120, Thailand

Fax: +66 2 524 5721

E-mail: syed.faisal.bukhari@ait.ac.th, mdailey@ait.ac.th

Keywords  Camera calibration · Distortion correction · Radial distortion · Barrel distortion · Pincushion distortion · Plumb-line methods · Division model · Algebraic circle fitting · Geometric circle fitting

1 Introduction

Most computer vision algorithms, particularly structure from motion algorithms, rely critically on the assumption of a linear pinhole camera model. However, most commercially available cameras introduce sufficiently severe optical distortion that the pinhole assumption is invalid, making distortion correction a must. Radial distortion is the most significant type of distortion in today's cameras [28,43,54]. It is most evident in images produced with low-cost, wide-angle lenses [23]. Such lenses are being widely deployed, for example, in automotive driver assistance applications [22,26]. But radial distortion is also significant enough in higher-quality cameras to introduce error into 3D reconstruction processes. Radial distortion bends straight lines into circular arcs [43,50], violating the main invariance preserved in the pinhole camera model: that straight lines in the world map to straight lines in the image plane [17,25]. Radial distortion may appear as barrel distortion, usually arising at short focal lengths, or pincushion distortion, usually arising at longer focal lengths. Besides radial distortion, another type of distortion is tangential distortion. We do not have experience with real cameras that introduce significant tangential distortion, so like most previous work [2,28,38,43,47,54], we ignore tangential distortion.

Methods for radial distortion estimation fall into three major categories: point correspondence [7,49,54],


multiple view autocalibration [4,21,24,36,42], and plumb-line [2,8,9,17,43,44,47,50].

Point correspondence based methods [7,49,54] are

ideal for distortion estimation during pre-calibration of a camera with a fixed focal length. They identify image points with known 3D positions in multiple images using a known pattern such as a chessboard and then estimate the parameters of an undistortion function. The parameterized undistortion function can then be used to undistort specific images or point positions. These point correspondence methods are highly reliable and accurate; radial distortion estimation and removal is a solved problem for cameras that are pre-calibrated at a fixed focal length.

Manual camera calibration, however, is a tedious

process that is not always possible, for example, when we want to process an existing image sequence acquired with an unknown camera, when we want to change the focal length dynamically during image sequence acquisition, or when we want to get a mobile robot experiment up and running quickly.

Multiple view auto-calibration is an active area of computer vision research that aims to extract camera parameters automatically from natural images. Auto-calibration methods use a sequence of arbitrary natural images without any special pattern or information about the scene. Although many auto-calibration methods assume a pinhole camera, others do attempt to simultaneously estimate radial distortion parameters and pinhole parameters [21,24,28,36,42]. Auto-calibration is a mature area of research, but the main limitation of this class of methods is that it requires multiple images under camera motion. For fixed cameras and for situations where immediate online estimation is desirable, multiple view methods are inappropriate.

In view of the limitations of the point correspondence and auto-calibration methods, robust automatic distortion estimation and removal from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. Plumb-line methods are the most promising for robust distortion estimation from a single image or a small number of images. Rather than using a known pattern or sequence of images under camera motion, they estimate distortion parameters directly from distorted straight lines in one or more images. Straight lines are frequent enough in most human-made environments to make distortion estimation from a single image possible [43,47,50]. The main limitations of this class of methods are that straight lines must be visible in the image and that images of actual curved lines may disrupt estimation. Some methods address these issues simply by utilizing human supervision to select the lines (see, e.g., [2,9,44]). But when human supervision is not used, plumb-line methods depend critically on the robustness and accuracy of the line detection algorithms. Some plumb-line approaches do not use all available lines for distortion estimation despite the fact that additional lines could minimize estimation error [43,47,50], or assume the distortion center is the center of the image [2,8,21,28,43], which is in contrast to some researchers' recommendations [24,46]. The Devernay and Faugeras [17] method is the only existing method that overcomes all of these limitations. However, it requires a complex process of polygonal approximation of the distorted lines. As we shall see, the distorted line detection process can be dramatically simplified by using an alternative distortion model.

We propose a new method based on the plumb-

line approach that addresses the aforementioned limitations. The method works from a single image if the image contains a sufficient number of distorted straight lines. It does not require a calibration pattern or human intervention. We use Fitzgibbon's division model of radial distortion [21] with a single parameter. Our estimator is similar to that of Strand and Hayman [43] and Wang et al. [50] in that we estimate the parameters of the distortion model from the parameters of circular arcs identified in the distorted image, based on the fact that distorted straight lines can be modeled as circular under the division model [4,43,50]. Our contribution is to make the process fully automatic and robust to outliers using a two-step random sampling process. For the first step, we introduce a sampling algorithm to search the input image for subsequences of contour pixels that can be modeled as circular arcs. For the second step, we introduce a sampling algorithm that finds the distortion parameters consistent with the largest number of arcs. Based on these parameters, we undistort the input image. Some preliminary results from our method have previously appeared in a conference paper [11].

In this paper, to evaluate the new algorithm, we

perform a comprehensive quantitative study of its performance on distorted synthetic images and provide extensive examples of its ability to remove distortion from a large, challenging set of real images taken from Web sites and previous papers on distortion estimation. We find that the algorithm performs very well, with excellent reconstruction of the original synthetic images even under severe barrel distortion and pincushion distortion. We also find that the method is able to eliminate nearly all of the visible distortion in the real images, including those acquired with wide angle and fish-eye lenses. Finally, we perform a direct comparison of our method with that of Alvarez et al. [2], the only researchers who have provided a publicly accessible implementation of their method, on synthetic images. The Alvarez et al. method exploits user assistance in identifying points on straight lines, but we nevertheless find that our fully automatic method provides superior reconstruction of the original undistorted image. Our method is thus a practical solution to the important problem of radial distortion estimation and removal.

2 Mathematical Model

In this section, we outline the mathematical model of radial distortion assumed in the rest of the paper and show how to estimate the parameters of this model.

2.1 Distortion model

The most commonly used radial distortion model is the even-order polynomial model

x_u = x_d (1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \lambda_3 r_d^6 + \cdots)
y_u = y_d (1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \lambda_3 r_d^6 + \cdots),    (1)

where (x_u, y_u) and (x_d, y_d) are the corresponding coordinates of an undistorted point and a distorted point, respectively. r_d is the Euclidean distance of the distorted point to the distortion center. If the distortion center is the origin of the distorted image, we can simply write

r_d^2 = x_d^2 + y_d^2.    (2)

However, if (x_0, y_0) is the center of distortion (in the distorted image), we write

r_d^2 = (x_d - x_0)^2 + (y_d - y_0)^2    (3)

and replace x_d and y_d in Equation (1) with (x_d - x_0) and (y_d - y_0), respectively. In the model, x_0, y_0, \lambda_1, \lambda_2, \lambda_3, ... are the distortion parameters, which must be estimated from image measurements.

There have been objections to the even-order polynomial model. According to Wang et al. [50], the model performs well for small distortion, but for severe distortion a prohibitively large number of non-zero distortion parameters are required. Fitzgibbon [21] proposes an alternative model, the division model, as a more accurate approximation to the typical camera's true undistortion function:

x_u = x_d / (1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots)
y_u = y_d / (1 + \lambda_1 r_d^2 + \lambda_2 r_d^4 + \cdots).

The division model is preferred over the polynomial model because it requires fewer terms than the polynomial model in the case of severe distortion [50]. It is also slightly easier to work with; inverting the single-parameter division model, for example, requires solution of a polynomial of degree two, whereas inverting the single-parameter polynomial model leads to a polynomial of degree three. In our work, we use the single-parameter division model (fixing \lambda_2 = \cdots = 0), because for most cameras a single term is sufficient [17,21,50,51]. When the center of distortion is not the origin, we can write the single-parameter division model in the form

x_u = x_0 + (x_d - x_0) / (1 + \lambda r_d^2)
y_u = y_0 + (y_d - y_0) / (1 + \lambda r_d^2),    (4)

with r_d^2 defined according to Equation (3). Strand and Hayman [43] find that for the typical case of relatively small barrel distortion (small negative values of \lambda), the single-parameter division model is highly correlated with the single-parameter polynomial model.
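As a concrete illustration, the following Python sketch applies the single-parameter division model of Equation (4) to map distorted pixel coordinates to undistorted coordinates. The distortion center (320, 240) and the value lambda = -1e-6 in the usage example are arbitrary illustrative values, not parameters taken from the paper.

import numpy as np

def undistort_points(xd, yd, x0, y0, lam):
    # Single-parameter division model, Equation (4): maps distorted
    # coordinates (xd, yd) to undistorted coordinates (xu, yu).
    rd2 = (xd - x0) ** 2 + (yd - y0) ** 2   # squared distance to the distortion center
    denom = 1.0 + lam * rd2
    xu = x0 + (xd - x0) / denom
    yu = y0 + (yd - y0) / denom
    return xu, yu

# Usage example with illustrative values (barrel distortion: lambda < 0).
xd = np.array([10.0, 600.0])
yd = np.array([20.0, 470.0])
xu, yu = undistort_points(xd, yd, x0=320.0, y0=240.0, lam=-1e-6)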

2.2 Distortion of a line under the single-parameter division model

Wang et al. [50] show that under the single-parameter division model, the distorted image of a straight line is a circular arc. However, they use the slope-y-intercept form of the equation of a line, which we avoid due to its inability to model vertical lines and its undesirable numerical properties. It is also easy to show that the general line

a x_u + b y_u + c = 0    (5)

is imaged as a circular arc under the single-parameter division model. To avoid the degenerate case a = b = 0, we impose the constraint that a^2 + b^2 > 0. (When convenient we will further assume the line parameters are normalized so that a^2 + b^2 = 1.) By substituting the image coordinates from Equation (4) into Equation (5), replacing r_d^2 by its definition from Equation (3), and simplifying, we obtain the circle

x_d^2 + y_d^2 + e x_d + f y_d + g = 0,    (6)


where

e = a / (\lambda c) - 2 x_0
f = b / (\lambda c) - 2 y_0
g = x_0^2 + y_0^2 - (a / (\lambda c)) x_0 - (b / (\lambda c)) y_0 + 1 / \lambda.    (7)

It is also possible to come to the conclusion that straight lines are imaged as circles using the parametric form of a straight line [43].

2.3 Inverse mapping

When undistorting an image, it is necessary to compute, for each pixel in the output undistorted image, the corresponding pixel position in the distorted image, then perform interpolation to determine the actual pixel color or intensity in the output undistorted image (we use simple bilinear interpolation in all of the experiments reported in this paper). However, while every distorted image point (x_d, y_d) is mapped to a unique undistorted image point (x_u, y_u) by Equation (4), the reverse is not true. To invert Equation (4) and find the values of x_d and y_d as functions of x_u and y_u, we first square and add the individual equations to obtain

(x_u - x_0)^2 + (y_u - y_0)^2 = \frac{1}{(1 + \lambda r_d^2)^2} ((x_d - x_0)^2 + (y_d - y_0)^2).    (8)

We then let r_u be the distance of (x_u, y_u) to the distortion center:

r_u^2 = (x_u - x_0)^2 + (y_u - y_0)^2.    (9)

This lets us simplify Equation (8) to

\lambda r_d^2 - (1 / r_u) r_d + 1 = 0.    (10)

For positive \lambda (pincushion distortion), given 0 < r_u^2 < 1 / (4\lambda), Equation (10) has two positive real roots. We use the smaller of the two. For negative \lambda (barrel distortion), given any r_u^2 > 0, there are always two real solutions, but one is negative. We use the positive solution. After solving for r_d in terms of r_u, the distorted image coordinates corresponding to (x_u, y_u) can be obtained as

x_d = x_0 + (r_d / r_u)(x_u - x_0)
y_d = y_0 + (r_d / r_u)(y_u - y_0).    (11)
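This inverse mapping is straightforward to implement for a single point; a minimal Python sketch follows, solving Equation (10) for r_d and applying Equation (11). The root selection follows the discussion above; the function and variable names are ours, and a full undistortion routine would additionally loop over output pixels and bilinearly interpolate in the distorted image.

import math

def distort_point(xu, yu, x0, y0, lam):
    # Inverse mapping: given an undistorted point (xu, yu), recover the
    # distorted point (xd, yd) via Equations (10) and (11).
    if lam == 0.0:
        return xu, yu                       # no distortion
    ru = math.hypot(xu - x0, yu - y0)
    if ru == 0.0:
        return x0, y0                       # the distortion center maps to itself
    # Roots of lam * rd^2 - (1/ru) * rd + 1 = 0
    disc = 1.0 / ru ** 2 - 4.0 * lam
    if disc < 0.0:
        raise ValueError("r_u is outside the invertible range for this lambda")
    sqrt_disc = math.sqrt(disc)
    r1 = (1.0 / ru - sqrt_disc) / (2.0 * lam)
    r2 = (1.0 / ru + sqrt_disc) / (2.0 * lam)
    # Pincushion (lam > 0): two positive roots, take the smaller.
    # Barrel (lam < 0): one positive and one negative root, take the positive one.
    rd = min(r1, r2) if lam > 0.0 else max(r1, r2)
    s = rd / ru
    return x0 + s * (xu - x0), y0 + s * (yu - y0)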

2.4 Estimating distortion parameters from circular arcs

Strand and Hayman [43] and Wang et al. [50] show that it is possible to estimate \lambda from the parameters of circular arcs identified in an image. Wang et al. [50] further show how both \lambda and the distortion center (if not assumed to be the center of the image) can be estimated from the parameters of three circular arcs identified in an image. We use their formulation. In Equation (7), multiplying the equation for e by x_0 and the equation for f by y_0, then adding the equations for e x_0, f y_0, and g, we obtain

x_0^2 + y_0^2 + e x_0 + f y_0 + g - 1/\lambda = 0.    (12)

For each of the three arcs i \in \{1, 2, 3\}, we use Equation (12) to obtain coefficients e_i, f_i, and g_i; the distortion center can then be estimated by solving the linear system

(e_1 - e_2) x_0 + (f_1 - f_2) y_0 + (g_1 - g_2) = 0
(e_1 - e_3) x_0 + (f_1 - f_3) y_0 + (g_1 - g_3) = 0,    (13)

and an estimate of \lambda can be obtained from

1/\lambda = x_0^2 + y_0^2 + e x_0 + f y_0 + g    (14)

using any of the three arcs' parameters in place of e, f, and g. See Wang et al. [50] for details.
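The following Python sketch implements Equations (13) and (14), assuming each of the three arcs has already been fit as a circle with center (x_c, y_c) and radius r; the conversion to the coefficients (e, f, g) of Equation (6) uses the standard expansion of the circle equation. This is an illustrative implementation of the formulation, not the authors' code.

import numpy as np

def circle_to_efg(xc, yc, r):
    # Coefficients of x^2 + y^2 + e*x + f*y + g = 0 for a circle
    # with center (xc, yc) and radius r.
    return -2.0 * xc, -2.0 * yc, xc ** 2 + yc ** 2 - r ** 2

def estimate_center_and_lambda(arcs):
    # arcs: three (xc, yc, r) tuples from fitted circular arcs.
    (e1, f1, g1), (e2, f2, g2), (e3, f3, g3) = [circle_to_efg(*a) for a in arcs]
    # Distortion center from the linear system of Equation (13).
    A = np.array([[e1 - e2, f1 - f2],
                  [e1 - e3, f1 - f3]])
    b = -np.array([g1 - g2, g1 - g3])
    x0, y0 = np.linalg.solve(A, b)
    # Equation (14): 1/lambda from any one arc's coefficients.
    inv_lam = x0 ** 2 + y0 ** 2 + e1 * x0 + f1 * y0 + g1
    return x0, y0, 1.0 / inv_lam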

3 Robust Radial Distortion Estimation

In this section, we provide a detailed algorithm for estimating the parameters of the mathematical model introduced in Section 2.

3.1 Identifying circular arcs

The first step in our method is to robustly identify as many non-overlapping circular arcs as possible in the distorted input image. Each arc is identified by a circle center, a circle radius, and the contiguous sequence of pixels consistent with that circle.

To find arcs, we first extract edges and link adjacent edge pixels into contours. We discard any contour whose length is less than l_min pixels (we use l_min = 10 pixels), and then we attempt to find long pixel subsequences within each contour that can be fit by circular arcs. Our method is based on random sampling and inspired by RANSAC [20], but, rather than finding a single model for all the data, we preserve all models (candidate circular arcs) not overlapping with other arcs in the same contour that have more support.

Input: Contours C_1, C_2, ...
Output: A, the output arc set

A ← ∅
foreach contour C_i do
    if |C_i| ≥ l_min then
        N ← f(l_min, |C_i|)
        for n ← 1 to N do
            Sample three points x_1, x_2, x_3 from C_i
            if x_1, x_2, x_3 are not collinear then
                Calculate x_c, y_c, r from x_1, x_2, x_3
                A_new ← arc for longest subsequence of C_i consistent with x_c, y_c, r
                if |A_new| ≥ l_min then
                    if A_new does not overlap with any arc in A then
                        A ← A ∪ {A_new}
                    else if A_new is longer than every overlapping arc in A then
                        Remove arcs overlapping with A_new from A
                        A ← A ∪ {A_new}
                    end
                end
            end
        end
    end
end

Algorithm 1: Robust arc identification.

The termination criterion is to stop once the probability that an arc of minimal length has not yet been found is small. In Algorithm 1, we provide the details of the method.

To determine the number of sampling iterations required, the algorithm uses a function f(l, n), which gives the number of trials required to ensure that the probability of not sampling three of l inliers from a set of n points is small. This ensures that we sample a sufficient number of times to find, with high probability, all arcs of sufficient length in each contour.
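The paper does not spell out the exact form of f(l, n); the sketch below uses the standard RANSAC trial-count formula as a plausible stand-in (an assumption on our part), together with the usual three-point circle construction used in the sampling loop of Algorithm 1.

import math
import numpy as np

def num_trials(l, n, eta=0.01):
    # A plausible f(l, n): number of trials so that the probability of never
    # sampling three of l inliers from n contour points is at most eta.
    # (Assumed form; the paper does not give the formula explicitly.)
    w = min(l / float(n), 1.0 - 1e-12)
    return int(math.ceil(math.log(eta) / math.log(1.0 - w ** 3)))

def circle_from_three_points(p1, p2, p3, eps=1e-9):
    # Circle (xc, yc, r) through three points, or None if (nearly) collinear.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    if abs(np.linalg.det(A)) < eps:
        return None
    b = np.array([x2 ** 2 + y2 ** 2 - x1 ** 2 - y1 ** 2,
                  x3 ** 2 + y3 ** 2 - x1 ** 2 - y1 ** 2])
    xc, yc = np.linalg.solve(A, b)
    r = math.hypot(x1 - xc, y1 - yc)
    return xc, yc, r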

3.2 Refining circular arc estimates

After the initial arc identification process is complete, each resulting arc, whose parameters have been calculated directly from the minimum sample of three points, is refined using the inlier pixel contour subsegment supporting that model. The gold standard objective function for circle fitting is

\sum_{i=1}^{N} d(x_i, y_i, x_c, y_c, r)^2,    (15)

where (x_c, y_c) is the center of the circle, r is its radius, and d(x, y, x_c, y_c, r) is the orthogonal distance of the measured point (x, y) to the hypothetical circle. N is the number of pixels in an inlier contour. Since there is no closed-form solution for minimizing this objective function [1], we use an initial guess and the Levenberg-Marquardt (LM) iterative nonlinear least squares method to find a local minimum. To obtain the initial guess, we use a variety of methods as detailed in the next section.
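As an illustration of this refinement step, the sketch below minimizes the orthogonal-distance objective of Equation (15) with a Levenberg-Marquardt solver (here SciPy's least_squares with method="lm"); the paper does not prescribe a particular solver or library, so this choice is our assumption.

import numpy as np
from scipy.optimize import least_squares

def refine_circle(points, xc0, yc0, r0):
    # Geometric (orthogonal-distance) circle fit of Equation (15), started
    # from an initial guess (xc0, yc0, r0), e.g. from an algebraic fit.
    pts = np.asarray(points, dtype=float)

    def residuals(params):
        xc, yc, r = params
        return np.hypot(pts[:, 0] - xc, pts[:, 1] - yc) - r

    sol = least_squares(residuals, x0=[xc0, yc0, r0], method="lm")
    return tuple(sol.x)   # refined (xc, yc, r)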

3.2.1 Algebraic circle fitting methods

As the initial estimate of the circle's parameters, we use either the parameters calculated during the sampling procedure or one of three circle fitting methods based on algebraic error minimization: Pratt [35], Taubin [45], and Kukush-Markovsky-van Huffel (KMvH) [29]. Both Pratt and Taubin use four parameters to specify a circle:

a(x^2 + y^2) + b x + c y + d = 0,    (16)

with a \neq 0. The center of the circle is (-b/(2a), -c/(2a)) and the radius is given by r = \sqrt{(b/(2a))^2 + (c/(2a))^2 - d/a}. The Pratt method minimizes the objective function

\sum_{i=1}^{N} \left[ a(x_i^2 + y_i^2) + b x_i + c y_i + d \right]^2,    (17)

subject to the constraint that b^2 + c^2 - 4ad = 1, to ensure that the parameterized equation represents an actual circle. The Taubin method minimizes the same objective function as the Pratt method, but imposes the constraint 4a^2 \bar{z} + 4ab \bar{x} + 4ac \bar{y} + b^2 + c^2 = 1, where \bar{x} is the mean of the sample points' x coordinates, \bar{y} is the mean of the sample points' y coordinates, and \bar{z} = (1/N) \sum_{i=1}^{N} (x_i^2 + y_i^2). The additional constraint improves the convergence of the optimization [15].
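To make the algebraic fitting step concrete, the following sketch implements a common SVD-based formulation of the Taubin fit, returning the circle's center and radius. It is an illustrative reimplementation under our own naming, not the authors' code; the Pratt and KMvH fits would be implemented analogously with their respective constraints.

import numpy as np

def taubin_circle_fit(points):
    # Algebraic circle fit with the Taubin constraint, in the common
    # SVD-based formulation; returns (xc, yc, r).
    pts = np.asarray(points, dtype=float)
    xm, ym = pts.mean(axis=0)
    u = pts[:, 0] - xm                      # centered coordinates
    v = pts[:, 1] - ym
    z = u * u + v * v
    zm = z.mean()
    z0 = (z - zm) / (2.0 * np.sqrt(zm))
    _, _, vt = np.linalg.svd(np.column_stack([z0, u, v]), full_matrices=False)
    a, b, c = vt[2, :]                      # right singular vector for the smallest singular value
    a = a / (2.0 * np.sqrt(zm))
    d = -zm * a
    xc = -b / (2.0 * a) + xm
    yc = -c / (2.0 * a) + ym
    r = np.sqrt(b * b + c * c - 4.0 * a * d) / (2.0 * abs(a))
    return xc, yc, r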