
3DCapture: 3D Reconstruction for a Smartphone

Oleg Muratov  Yury Slynko  Vitaly Chernov  Maria Lyubimtseva  Artem Shamsuarov  Victor Bucha

Samsung R&D Institute RUS

{o.muratov, y.slynko, v.chernov, l.maria, v.bucha}@samsung.com

Abstract

We propose a method of reconstruction of a 3D representation (a mesh with a texture) of an object on a smartphone with a monocular camera. The reconstruction consists of two parts: real-time scanning around the object and post-processing. At the scanning stage IMU sensor data are acquired along with tracks of features in video. Special care is taken to comply with the 360° scan requirement. All these data are used to build a camera trajectory using bundle adjustment techniques after scanning is completed. This trajectory is used in calculation of depth maps, which then are used to construct a polygonal mesh with overlaid textures. The proposed method ensures tracking at 30 fps on a modern smartphone while the post-processing part is completed within 1 minute using an OpenCL-compatible mobile GPU. In addition, we show that with a few modifications this algorithm can be adopted for human face reconstruction.

1. Introduction

Many recent works [19, 14] have shown that 3D reconstruction on a smartphone became a reality. However, most solutions are still quite computationally heavy and do not produce a high-quality model. These methods generate a dense point cloud or a 3D color volume. Another disadvantage of most solutions is a tricky initialization procedure [13].

In this paper, we propose a method of reconstruction of a 3D model of an object as a mesh with a texture. A high frame rate is crucial for stable tracking (due to small pose changes between sequential frames at high frame rate), thus we are using a lightweight 2D feature point tracker coupled with IMU sensor integration. This approach also allows us to exclude an initialization step. Then point tracks and IMU data are used in a novel structure from motion algorithm to calculate a trajectory of a camera. During 360° scanning, points are visible for a short period of time and strong occlusion occurs, making pose estimation difficult. But we can rely on the fact that the end of the trajectory lies close to its start, thus a loop closure technique is used to improve structure from motion accuracy.

Given camera poses and keyframes it is straightforward to create depth maps and fuse them into a voxel volume. This volume is converted into a mesh and complemented with textures. Thanks to GPGPU computations for the depth processing stages our implementation generates a high-resolution mesh within 1 minute on a smartphone.

Our key contributions can be summarized as:

• A novel structure-from-motion algorithm based on robust real-time feature tracking, IMU data integration and appropriate offline bundle adjustment.

• An optimized pipeline for reconstructing a 3D representation of an object as a mesh with a texture completely on a mobile phone.

1.1. Related Work

The current work deals with monocular 3D reconstruction, which has been addressed by a great number of works. However, few of them are applicable to a mobile device due to limited computational resources. Thereby, below we will focus solely on works that address mobile device application of 3D reconstruction algorithms.

One of the first works on 3D reconstruction from a mobile phone is [19]. It is based on the well-known PTAM algorithm [6] complemented with inertial data integration. Keyframes with poses produced by the SLAM system are used for depth map computation in real time, which are fused into a dense 3D color point cloud. A more advanced approach has been described in [14]. It utilizes direct model-based image alignment for camera pose estimation. At the same time the reconstructed volumetric model is continuously updated with new depth measurements. Another approach has been presented in [15]. It performs simultaneous visual-inertial shape-based tracking and 3D shape estimation.

All abovementioned methods use online¹ tracking and mapping, which is a computationally heavy task. This leads to two significant drawbacks of such approaches. Firstly, due to limited CPU/GPU/memory resources they compromise reconstruction quality to achieve real-time performance. Secondly, such systems require accurate and smooth scanning that is hard to achieve for an untrained user.

As opposed to these methods, a structure from motion (SfM) approach can be used for the same task with no need for real-time performance. For instance, the OpenMVG [12] framework is used in [18]. However, in order to recover a camera trajectory such approaches usually use local feature matching that results in quite long offline processing. In many cases, as in [1, 21], these tasks are done using cloud-based processing.

Our approach is close to classic SfM approaches. However, we use a motion estimation method that combines optical flow and inertial data, which allows faster runtime and higher accuracy as compared to pure SfM approaches such as [18]. Moreover, to the best of our knowledge this is the first work that presents a complete 3D reconstruction pipeline (from capture to textured mesh) for mobile devices.

1.2. Structure of the Paper

This paper is organized as follows: we present an overview of the proposed method in Sec. 2. The motion estimation method and the 3D reconstruction pipeline are described in Sec. 3 and Sec. 4 respectively. Finally, we evaluate the proposed solution in Sec. 5.

2. System Overview

The overall pipeline of the proposed method is shown in Fig. 1. We start from capturing a video during which a user is asked to make a loop around an object of interest. During this scanning IMU measurements are collected and visual feature tracking is performed.

After capturing is completed, keyframes with tracked features and IMU data are used to estimate the camera trajectory and scene structure. Next, the keyframes with poses and sparse scene structure are passed to the depth map estimation module, where a depth map is computed for each keyframe. These depth maps are fused robustly into a single voxel volume. After that a 3D mesh is extracted from this volume. Finally, given the keyframes with poses and the 3D mesh, a texture is generated.

The result of the proposed method is the 3D textured mesh model of an object of interest. The proposed method tracks online at 30 fps on a modern smartphone while the offline part takes less than a minute using an OpenCL-compatible mobile GPU.

In the following sections each step of the algorithm is described in detail.

¹ Here and below, the online and offline terms correspond to the time period during and after capturing respectively.

Figure 1. System Overview

3. Motion Estimation

A motion estimation module calculates a trajectory of a camera. As typical for such problems it also calculates some scene geometry information (a sparse point cloud). This module is divided into online and offline parts. The inertial rotation estimation and feature tracking parts gather information online in two parallel threads while structure estimation and loop closure are done offline.

Unlike traditional SLAM systems there is no tight coupling of the tracking and mapping parts: feature tracking is not dependent on the results of structure estimation, and the latter does not need to meet real-time requirements and can be performed after scene capturing has been finished. This leads to three key properties of the proposed method:

1. There is no initialization step, in contrast to a typical SLAM system.

2. The online part is quite lightweight and performance does not degrade with time.

3. Lack of a feedback loop between the tracking and mapping parts makes tracking very stable since outliers do not affect the online parts of the algorithm.

The inertial rotation estimation thread performs gyroscope measurement integration to provide a rough rotation estimate. The gyroscope sensor is sampled at the highest available rate, which is 100 Hz in our case. For the sake of efficiency rotation data are represented in a quaternion form. New measurements are applied to the current rotation estimate using a first-order quaternion integration [20]. Here we do not compensate for the possible drift due to bias. In our computation we use only relative rotation between keyframes that are very close in time, which makes the effect of this drift negligible.

The feature tracking thread selects keyframes for further processing and establishes visual correspondences between them. In order to get correspondences we select a set of features in keyframes. As the features we use grayscale 8×8 square patches taken at FAST corners [16]. We align them in frames with a 2D Lucas-Kanade tracker [2] using the position of the patch at the previous frame as an initial guess. For the sake of efficiency alignment is performed in an inverse-compositional manner on a pyramidal image in a coarse-to-fine fashion.

For better robustness against outliers a bidirectional alignment is performed: a patch from a keyframe is aligned in the current frame, and then this aligned patch from the current frame is aligned back to the keyframe. If such bidirectional alignment does not return to the same position in the keyframe from where it started, then alignment is treated as failed.

It is computationally expensive to track all detected features, thereby only a subset of detected features is selected for tracking. A grid-based filtering is applied: an image is divided into cells, and the feature with the highest Shi-Tomasi response [17] is taken from each cell. Such a filtering ensures feature points are evenly distributed over an image. This minimizes a possible gauge freedom effect.
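The grid-based selection described above admits a compact sketch. This is an illustration only: the function name, the cell size, and the data layout are our own, and the Shi-Tomasi responses are assumed to be precomputed per feature.

```python
def grid_filter(points, responses, cell=32):
    """Keep, per grid cell, only the feature with the highest response.

    points: iterable of (x, y) pixel coordinates.
    responses: per-feature Shi-Tomasi scores (same order as points).
    Returns sorted indices of the surviving features.
    """
    best = {}  # (cell_row, cell_col) -> index of strongest feature seen so far
    for idx, (x, y) in enumerate(points):
        key = (int(y) // cell, int(x) // cell)
        if key not in best or responses[best[key]] < responses[idx]:
            best[key] = idx
    return sorted(best.values())
```

A single dictionary pass keeps the cost linear in the number of detections, which matches the lightweight online budget the paper targets.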

In a scenario of a circular loop motion the viewpoint changes rapidly, such that features are observed for a short time and their 2D projection appearance changes a lot. To tackle feature disappearance, new keyframes are initialized based on camera rotation obtained from gyroscope data. Empirically we found that generating a new keyframe every 3° results in the best motion estimation accuracy. From each keyframe a new set of features is extracted.

In addition, features are updated each time a new keyframe is generated. If a given feature is aligned in the new keyframe, its aligned position and patch from this keyframe are used for alignment in the next frames. This ensures that the majority of features are observed for over 10 keyframes, resulting in dense connectivity between frames while preserving negligible feature drift.

The structure estimation part receives feature tracks and keyframes with associated rotations and estimates the camera trajectory along with a 3D structure of the scene. All features which were observed in less than three keyframes are filtered out.

The main idea of the structure estimation algorithm is to use the rough rotation transform between frames calculated from gyroscope measurements and reduce the pose estimation problem (which is a 6 DOF problem) to finding a corresponding translation (which is a 3 DOF problem). The core of the algorithm is a multiple view matrix of a point [11]:

$$M_p^j = \begin{bmatrix} \hat{x}_2^j R_2 x_1^j & \hat{x}_2^j T_2 \\ \hat{x}_3^j R_3 x_1^j & \hat{x}_3^j T_3 \\ \vdots & \vdots \\ \hat{x}_m^j R_m x_1^j & \hat{x}_m^j T_m \end{bmatrix} \in \mathbb{R}^{3(m-1)\times 2}, \qquad (1)$$

where $x_i^j$ is a projection of a point $p_j$ in a frame $C_i$, $\hat{x}$ is a cross product operator, and $R_i$ and $T_i$ are the rotation and translation of the frame $C_i$ respectively. From the rank condition of $M_p^j$ it follows:

$$\hat{x}_i^j R_i x_1^j + \alpha^j \hat{x}_i^j T_i = 0, \qquad (2)$$

where the term $\alpha^j$ is the inverse depth of a point $p_j$ with respect to the first frame. These equations are stacked for each point to form a system of equations. Given an initial estimate for rotation and depth this problem takes a normal form of $Ax = b$, which can be solved efficiently using least squares with respect to $T_i$. And vice versa, inverse depth is estimated using the multiple-view constraint equation:

$$\alpha^j = -\frac{\sum_{i=2}^{m} (\hat{x}_i^j T_i)^T \hat{x}_i^j R_i x_1^j}{\sum_{i=2}^{m} \|\hat{x}_i^j T_i\|^2}, \qquad (3)$$

with $m$ denoting the number of frames where a point $p_j$ has measurements.

For the first pair of frames there is no prior information on translation and depth data. In order to initialize structure we first set inverse depth values for all points to 1, thus making an assumption of a plane in front of the camera. Then an iterative procedure is applied. At each iteration we estimate the translation $T_i$ and inverse depths $\alpha^j$ for all points except a reference one. This process stops when the reprojection error falls below a threshold. The inverse depth value of the reference point is fixed to 1 for the entire structure estimation process, thus defining a scale of the whole scene. There can be different strategies to select this reference point; in our implementation we pick a point with a projection nearest to the center of the frame.

For consecutive frames there is no need to perform a "cold start": translation can be computed based on points which already have an inverse depth estimate. Then, depth is updated using equation (3). Note that this update includes not only points without a depth value, but all the points visible in the current frame.
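The alternation between the translation solve of Eq. (2) and the inverse-depth update of Eq. (3) can be sketched in NumPy. Function names are ours; $x$ are homogeneous image points, rotations come from the gyroscope, and `hat` is the cross-product matrix of Eq. (1).

```python
import numpy as np

def hat(v):
    """Cross-product (skew-symmetric) matrix: hat(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def solve_translation(x1, xi, R, alpha):
    """Solve the stacked Eq. (2) for T_i by linear least squares.

    x1, xi: (N, 3) homogeneous points in the first and i-th frame,
    R: rotation of frame i, alpha: (N,) inverse depths w.r.t. frame 1.
    """
    A = np.concatenate([a * hat(x) for a, x in zip(alpha, xi)])       # (3N, 3)
    b = np.concatenate([-hat(x) @ (R @ y) for x, y in zip(xi, x1)])   # (3N,)
    T, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T

def update_inverse_depth(x1, xs, Rs, Ts):
    """Eq. (3): inverse depth of one point from its frames i = 2..m."""
    num = sum((hat(x) @ T) @ (hat(x) @ (R @ x1)) for x, R, T in zip(xs, Rs, Ts))
    den = sum(np.linalg.norm(hat(x) @ T) ** 2 for x, R, T in zip(xs, Rs, Ts))
    return -num / den
```

With exact correspondences the stacked system is consistent, so least squares recovers the translation exactly; with noisy tracks it returns the minimizer of the algebraic residual, which the subsequent bundle adjustment then refines.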

Next, we refine our estimates by sequentially performing motion-only and structure-only bundle adjustment optimizations, which minimize reprojection residuals using the Gauss-Newton algorithm. This is necessary in order to compensate for a possible rotation drift due to gyroscope sensor bias. On average it requires only 1-2 iterations until the system converges.

After that a loop closure step is performed. This step takes advantage of our prior knowledge that the trajectory contains a loop point due to its circular-like shape. From the first keyframe we extract BRISK [10] features computed at FAST corners and seek for loop closure points among the other keyframes. For efficiency reasons this comparison is done only for keyframes that are within 15° of angular distance to the first keyframe and have no common points with it. Once a good set of correspondences is found it is augmented into the existing scene structure and the pose transform from the first to the loop closure keyframe is computed. In order to incorporate this information a global bundle adjustment is performed using the g2o framework [8].
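The candidate gating for loop closure (within 15° of the first keyframe, no shared map points) can be sketched as below; the data layout is illustrative, and the actual descriptor matching is omitted.

```python
import numpy as np

def rotation_angle_deg(R0, Ri):
    """Angle of the relative rotation between two keyframe orientations."""
    cos_a = (np.trace(R0.T @ Ri) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def loop_candidates(rotations, shared_with_first, max_angle_deg=15.0):
    """Indices of keyframes eligible for loop-closure matching vs. keyframe 0.

    rotations: list of 3x3 keyframe rotation matrices,
    shared_with_first: number of map points each keyframe shares with keyframe 0.
    """
    out = []
    for i in range(1, len(rotations)):
        if shared_with_first[i] == 0 and \
           rotation_angle_deg(rotations[0], rotations[i]) <= max_angle_deg:
            out.append(i)
    return out
```

Gating by the gyroscope-derived angular distance keeps the expensive BRISK matching restricted to a handful of keyframes near the loop point.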

4. 3D Reconstruction Pipeline

4.1. Depth Map Estimation

The keyframes with poses are passed as an input to a depth map estimation algorithm. The result of this module is a set of depth maps (with corresponding poses).

Figure 2. Coarse-to-fine depth estimation: depth map with ambiguity filtering on level 0 (a), the same after left-right consistency filtering (b); upscale from level 0 to 1 (c), depth map with ambiguity filtering on level 1 (d), the same after left-right consistency filtering on level 1 (e); upscale from level 1 to 2 (f), depth map with ambiguity filtering on level 2 (g), the same after left-right consistency filtering on level 2 (h).

This algorithm is based on a plane sweep approach described in [3]. Computational simplicity of this method ensures fast calculation even on a mobile device. At the same time, raw depth measurements without an excessive regularization help to preserve fine details in a 3D model. This happens because our variational depth fusion method (see Section 4.2) suppresses the impulse-like noise of the simple plane sweep approach. However, it cannot deal with the large patches of wrong depth information that usually come from methods with strong regularization. For this reason we omit a regularization step, in contrast to [3].

Depth estimation is done in a coarse-to-fine pyramidal scheme with three levels of the pyramid. On the coarsest pyramid level only a small number of pixels must be processed. This allows us to use more accurate settings for depth estimation on this level without sacrificing runtime and to perform image rectification. For the upper levels of the image pyramid, rectification is omitted for efficiency reasons. In addition, for better accuracy and faster convergence the search range is calculated using the sparse point cloud created during motion estimation.

Depth filtering is applied to each pyramid level. Fusion methods can usually handle missing data well by interpolation or propagation from existing values, but are more sensitive to heavy outliers. Thus, we leave only the depth values which are accurate with high confidence.
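As an illustration of the cost-volume construction, here is a 1D disparity sweep for a rectified image pair. This is a stand-in for the general plane-sweep warps of [3]: per-pixel absolute-difference costs, no windowing, and the search range fixed by `num_disp` rather than derived from the sparse point cloud.

```python
import numpy as np

def sweep_cost_volume(ref, src, num_disp):
    """Matching cost volume for a rectified pair, shape (num_disp, H, W).

    Hypothesis d says ref pixel (y, x) matches src pixel (y, x - d);
    lower cost is better, np.inf marks out-of-range hypotheses.
    """
    h, w = ref.shape
    costs = np.full((num_disp, h, w), np.inf)
    for d in range(num_disp):
        costs[d, :, d:] = np.abs(ref[:, d:] - src[:, :w - d])
    return costs
```

A winner-take-all `costs.argmin(axis=0)` then gives the raw depth (disparity) map that the two filtering stages below prune; note that, as in the paper, nothing regularizes the volume at this point.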

We propose two stages of depth filtering.

• Photometric ambiguity. The depth outliers can be efficiently filtered out by analyzing the ratios of the costs to their minimal value for each pixel. When texture is absent or ambiguous (periodic along the epipolar line) many costs will have ratios around 1. This allows us to filter these ambiguities. An example of the resulting depth maps with the photometric ambiguity filtering applied is shown in Fig. 2 (a)(d)(g).

• Left-right consistency. The left-right check is done by analyzing the consistency of both depth maps of a stereo pair. Consistency is determined by checking re-projection errors for each pixel using depth values from both depth maps. Examples of the depth maps after the left-right consistency check are shown in Fig. 2 (b)(e)(h).
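Both filtering stages can be sketched as follows. The thresholds are illustrative, and the left-right check is shown in its common disparity formulation rather than the paper's re-projection error test.

```python
import numpy as np

def ambiguity_filter(costs, ratio=0.9):
    """Photometric-ambiguity check on a (D, H, W) cost volume.

    A pixel survives only if exactly one hypothesis has a cost whose
    ratio to the minimum exceeds `ratio` (i.e. the winner is unique);
    flat or periodic texture produces several near-minimal costs.
    """
    cmin = costs.min(axis=0)
    near_best = (costs <= cmin / ratio + 1e-12).sum(axis=0)
    return near_best == 1

def left_right_filter(disp_l, disp_r, tol=1.0):
    """Keep left pixels whose disparity agrees with the right map's."""
    h, w = disp_l.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip(xs - disp_l.astype(int), 0, w - 1)   # where the match lands
    return np.abs(disp_l - disp_r[np.arange(h)[:, None], xr]) <= tol
```

Pixels rejected by either mask are simply dropped, relying on the later fusion stage to fill the resulting holes, as the text argues.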

The depth filtering stage significantly reduces the number of pixels to be processed on the next pyramid level. This is especially important because the finest pyramid levels' processing is much slower than the coarser ones. The proposed depth estimation algorithm allows a very efficient parallel implementation on a graphics processor (GPU). Memory consumption can also be reduced because there is no need to store a full cost volume in global memory. An image can be processed in small regions, and for each of them the matching cost values can be stored in local memory.

4.2. Depth Fusion

The depth fusion module fuses (combines) all depth maps into a volumetric representation of the object, taking into account their poses. We follow the approach described in [22, 3] for variational fusion of depth maps using a truncated signed distance function (TSDF), which is implemented in a coarse-to-fine manner.

The reconstruction takes place in the volume of interest (VOI), which is automatically placed around the captured object based on the sparse point cloud available after the tracking stage. On the finer pyramid level the result of the optimization procedure is used as an initial guess. This allows us to speed up runtime by reducing the number of iterations required for convergence at the optimization stage, and to improve quality by quickly propagating measurements to unseen space on coarser levels, thus reducing artifacts which typically appear outside of a captured area.

In addition, we improve the spatial resolution of the final model by re-adjusting the VOI after analyzing results of the coarse level. Often an object of interest is placed on a table or another planar support structure (e.g. a box). First, we examine if there is a horizontal plane at the bottom of the captured scene. This is done by detecting a plane in the dense point cloud using SVD and checking that all camera rays are cast from above onto that plane. This allows alignment of the VOI with the plane and avoids wasting space for the volume below it. After processing the coarse pyramid level a rough volumetric representation of the scene is used to refine information about space that is empty or occluded by the support structure. This is done by examining whether TSDF values are close to 1, which represents a volume between a camera and the surface. VOI boundaries are automatically placed more tightly around the object, excluding processing of the useless area.

Figure 3. Vertex color interpolation. Here two faces are shown: a-b-c (textured) and b-c-d (not textured yet). First, color is assigned to vertices b and c. The color value for vertex d is obtained by averaging the colors of vertices b and c. The texture for face b-c-d is obtained by interpolating the color values of vertices b, c and d.

Similar to [3] we use a first-order primal-dual algorithm for minimization of the energy function on each level of the pyramid. The result is a TSDF volume of the captured object. Our implementation uses 3 levels of pyramid with a 128³ volume on the finest level. Measurement integration and energy minimization are done on the GPU with OpenCL kernels.
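For illustration, a single depth map can be fused into a TSDF volume with a plain weighted-averaging update. This is a common baseline, not the paper's variational primal-dual optimization of [3]; argument names and the cubic-grid layout are ours.

```python
import numpy as np

def integrate_depth(tsdf, weight, depth, K, cam_T_world, origin, voxel, trunc):
    """Fuse one depth map into an (n, n, n) TSDF volume, in place.

    Every voxel center is projected through intrinsics K and pose
    cam_T_world (4x4, world -> camera); its truncated signed distance
    along the ray is merged by a running weighted average.
    """
    n = tsdf.shape[0]
    ii, jj, kk = np.meshgrid(np.arange(n), np.arange(n), np.arange(n),
                             indexing="ij")
    pts_w = origin + voxel * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    pts_c = pts_w @ cam_T_world[:3, :3].T + cam_T_world[:3, 3]
    z = pts_c[:, 2]
    zs = np.where(np.abs(z) < 1e-9, 1e-9, z)          # avoid division by zero
    uv = pts_c @ K.T
    u = np.round(uv[:, 0] / zs).astype(int)
    v = np.round(uv[:, 1] / zs).astype(int)
    h, w = depth.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(ok, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
    sdf = d - z                                       # positive in front of surface
    upd = ok & (d > 0) & (sdf > -trunc)               # skip far-behind voxels
    f = np.clip(sdf / trunc, -1.0, 1.0)
    t, wgt = tsdf.reshape(-1), weight.reshape(-1)
    t[upd] = (t[upd] * wgt[upd] + f[upd]) / (wgt[upd] + 1.0)
    wgt[upd] += 1.0
```

The variational fusion in the paper replaces this per-voxel average with an energy minimization, which is what suppresses the impulse-like plane-sweep noise mentioned in Sec. 4.1.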

4.3. Mesh Construction and Simplification

With the computed TSDF representation a 3D polygonal mesh can be reconstructed in the two following steps.

Octree-based TSDF representation. Since the computational complexity of the texture mapping procedure is linearly dependent on the number of 3D mesh polygons, we use an octree representation of the TSDF volume. For most models, a maximum octree depth value equal to 7 limits the complexity of the 3D model to 20-30 thousand polygons.

Isosurface extraction. The 3D mesh is reconstructed using an unconstrained isosurface extraction on arbitrary octrees approach, as described in [5].

Figure 4. Sample reconstruction results.

Figure 5. Method limitations: transparent and specular object (a), planar (2D) details (b), objects with no texture (c).

4.4. Texture Mapping

The reconstructed mesh and keyframes with poses are used as an input of the texture mapping algorithm, which builds a textured mesh. First it processes visible faces, then creates texture for invisible ones.

Seamless texturing. Each visible face is textured by means of projection to one of the camera images. To achieve the best result special care is paid to avoiding seams between texture patches from different keyframes. This is done in two stages, following the algorithm described in [9].

First, for each visible face a camera which will be used for texturing is selected by means of solving a Markov random field (MRF) energy minimization problem. The objective function consists of two terms that force each face to choose the individually "best" camera while at the same time forming a seamless texture between adjacent faces. Second, textures are adjusted by adding a special leveling function to them to minimize color discontinuities on seams left after the first step.

Hole filling. There can be some faces that are invisible from any camera. Hence there are some non-textured areas (holes) in the mesh at this stage. To fill these holes we propagate color information from textured faces to adjacent untextured faces through simple vertex color interpolation.

First, all invisible vertices are added into a processing set. Then the color of each vertex from the set is computed by averaging the colors of all visible adjacent vertices. The color for a visible vertex is picked from the same camera image as used for texturing the faces which contain this vertex. If there are several faces containing this vertex that are labeled differently, either of them is chosen. All colored vertices are marked as visible and deleted from the processing set. The process stops when the set is empty. The vertex color interpolation algorithm is shown in Fig. 3.
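The hole-filling procedure above maps directly to a small routine. Mesh adjacency is assumed to be given as a vertex neighbor list; names are illustrative.

```python
import numpy as np

def fill_vertex_colors(colors, visible, neighbors):
    """Propagate colors from textured to untextured vertices.

    colors: (N, 3) per-vertex colors (invisible entries may be arbitrary),
    visible: length-N booleans, neighbors: dict vertex -> adjacent vertices.
    Each round colors every set member that touches a visible vertex by
    averaging its visible neighbors, then marks the batch visible.
    """
    colors = np.asarray(colors, dtype=float).copy()
    vis = list(visible)
    todo = {v for v in range(len(vis)) if not vis[v]}
    while todo:
        done = [v for v in todo if any(vis[u] for u in neighbors[v])]
        if not done:                  # isolated component: nothing to propagate
            break
        for v in done:                # average against the *previous* round's colors
            nb = [u for u in neighbors[v] if vis[u]]
            colors[v] = np.mean([colors[u] for u in nb], axis=0)
        for v in done:
            vis[v] = True
        todo -= set(done)
    return colors
```

Processing in rounds (color first, mark visible afterwards) matches the batch semantics of the description: colors computed in one round only feed the next one.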

4.5. Face Reconstruction

In case of capturing a human face we apply additional steps for better reconstruction quality. First of all, we use face detection on the first captured frame in order to decide if we need to take additional care during reconstruction. For face detection we utilize a 3D facial motion capture algorithm [4]. This algorithm provides information such as the 3D position of the head, face contours, precise eyes and mouth location, gaze direction and blinking information.

During structure estimation we use the eyes location information and sparse scene structure in order to get a metric estimate of scene scale. This allows us to use metric thresholds in further processing, providing better reconstruction results. In the depth map estimation algorithm the face area is used in order to eliminate depth outliers. The eyes location is also used in order to properly place and align the VOI during depth fusion. For better texture mapping we use eyes and mouth textures from the same camera view (to ensure a consistent eyes' view direction), which is chosen to be the most frontal camera view while avoiding views where blinking has been detected.

5. Experimental Results

For the evaluation we run the proposed algorithm on a Samsung Galaxy Tab S2 tablet with a Samsung Exynos 5433 SoC featuring a four-core CPU and an ARM Mali-T760 GPU. In our implementation we retrieve HD images from the camera. For motion and depth processing we downsample images to VGA resolution. For texture mapping the image resolution is critical for the quality, and HD images are used as sources for textures. Timing per module is provided in …