962 results for Pixel
Abstract:
To be in compliance with the Endangered Species Act and the Marine Mammal Protection Act, the United States Department of the Navy is required to assess the potential environmental impacts of conducting at-sea training operations on sea turtles and marine mammals. Limited recent and area-specific density data on sea turtles and dolphins exist for many of the Navy's operations areas (OPAREAs), including the Marine Corps Air Station (MCAS) Cherry Point OPAREA, which encompasses portions of Core and Pamlico Sounds, North Carolina. Aerial surveys were conducted to document the seasonal distribution, and to estimate the density, of sea turtles and dolphins within Core Sound and portions of Pamlico Sound, and in coastal waters extending one mile offshore. Sea surface temperature (SST) data for each survey were extracted from 1.4-km/pixel-resolution Advanced Very High Resolution Radiometer (AVHRR) satellite images. A total of 92 turtles and 1,625 dolphins were sighted during 41 aerial surveys conducted from July 2004 to April 2006. In the spring (March – May; 7.9°C to 21.7°C mean SST), the majority of turtles sighted were along the coast, mainly from the northern Core Banks northward to Cape Hatteras. By the summer (June – Aug.; 25.2°C to 30.8°C mean SST), turtles were fairly evenly dispersed along the entire survey range of the coast and Pamlico Sound, with only a few sightings in Core Sound. In the autumn (Sept. – Nov.; 9.6°C to 29.6°C mean SST), the majority of turtles sighted were along the coast and in eastern Pamlico Sound; however, fewer turtles were observed along the coast than in the summer. No turtles were seen during the winter surveys (Dec. – Feb.; 7.6°C to 11.2°C mean SST). The estimated mean surface density of turtles was highest along the coast in the summer of 2005 (0.615 turtles/km², SE = 0.220). In Core and Pamlico Sounds the highest mean surface density occurred during the autumn of 2005 (0.016 turtles/km², SE = 0.009). 
The mean seasonal abundance estimates were always highest in the coastal region, except in the winter, when turtles were not sighted in either region. For Pamlico Sound, surface densities were always greater in the eastern section than in the western section. The range of mean temperatures at which turtles were sighted was 9.68°C to 30.82°C, and the majority of turtles were sighted in water ≥ 11°C. Dolphins were observed within estuarine waters and along the coast year-round; however, there were some general seasonal movements. In particular, during the summer, sightings decreased along the coast and dolphins were distributed throughout Core and Pamlico Sounds, while in the winter the majority of dolphins were located along the coast and in southeastern Pamlico Sound. Although relative numbers changed seasonally between these areas, the estimated mean surface density of dolphins was highest along the coast in the spring of 2006 (9.564 dolphins/km², SE = 5.571); in Core and Pamlico Sounds the highest mean surface density occurred during the autumn of 2004 (0.192 dolphins/km², SE = 0.066). The estimated mean surface density of dolphins was lowest along the coast in the summer of 2004 (0.461 dolphins/km², SE = 0.294) and lowest in Core and Pamlico Sounds in the summer of 2005 (0.024 dolphins/km², SE = 0.011). In Pamlico Sound, estimated surface densities were greater in the eastern section except in the autumn. Dolphins were sighted throughout the entire range of mean SST (7.60°C to 30.82°C), with a tendency towards fewer dolphins sighted as water temperatures increased. Based on the findings of this study, sea turtles are most likely to be encountered within the OPAREAs when SST is ≥ 11°C. Since sea turtle distributions are generally limited by water temperature, the SST of a given area is a useful predictor of sea turtle presence. 
Since dolphins were observed within estuarine waters year-round and throughout the entire range of mean SSTs, they could likely be encountered in the OPAREAs at any time of the year. Although our findings indicated that the greatest number of dolphins were present in the winter and the least in the summer, their movements may also be related to other factors, such as the availability of prey. (PDF contains 28 pages.)
Abstract:
Habitat mapping and characterization has been defined as a high-priority management issue for the Olympic Coast National Marine Sanctuary (OCNMS), especially for poorly known deep-sea habitats that may be sensitive to anthropogenic disturbance. As a result, a team of scientists from OCNMS, National Centers for Coastal Ocean Science (NCCOS), and other partnering institutions initiated a series of surveys to assess the distribution of deep-sea coral/sponge assemblages within the sanctuary and to look for evidence of potential anthropogenic impacts in these critical habitats. Initial results indicated that remotely delineating areas of hard-bottom substrate through acoustic sensing could be a useful tool to increase the efficiency and success of subsequent ROV-based surveys of the associated deep-sea fauna. Accordingly, side scan sonar surveys were conducted in May 2004, June 2005, and April 2006 aboard the NOAA Ship McArthur II to: (1) obtain additional imagery of the seafloor for broader habitat-mapping coverage of sanctuary waters, and (2) help delineate suitable deep-sea coral/sponge habitat, in areas of both high and low commercial-fishing activity, to serve as sites to be surveyed in more detail using an ROV on subsequent cruises. Several regions of the sea floor throughout the OCNMS were surveyed and mosaicked at 1-meter pixel resolution. Imagery from the side scan sonar mapping efforts was integrated with other complementary data from a towed camera sled, ROVs, sedimentary samples, and bathymetry records to describe geological and, where possible, biological aspects of habitat. Using a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999), we created a preliminary map of habitat polygon features for use in a geographic information system (GIS). This report provides a description of the mapping and groundtruthing efforts, as well as the results of the image classification procedure for each of the areas surveyed. 
(PDF contains 60 pages.)
Abstract:
The Olympic Coast National Marine Sanctuary (OCNMS) continues to invest significant resources in seafloor mapping activities along Washington's outer coast (Intelmann and Cochrane 2006; Intelmann et al. 2006; Intelmann 2006). Results from these annual mapping efforts offer a snapshot of current ground conditions, help to guide research and management activities, and provide a baseline for assessing the impacts of various threats to important habitat. During the months of August 2004 and May and July 2005, we used side scan sonar to image several regions of the sea floor in the northern OCNMS, and the data were mosaicked at 1-meter pixel resolution. Video from a towed camera sled, bathymetry data, sedimentary samples, and side scan sonar mapping were integrated to describe geological and biological aspects of habitat. Polygon features were created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). For three small areas that were mapped with both side scan sonar and multibeam echosounder, we compared the output of the classified images and found little difference in results between the two methods. With these considerations, backscatter derived from multibeam bathymetry is currently a cost-efficient and safe method for seabed imaging in the shallow (<30 meters) rocky waters of OCNMS: the image quality is sufficient for classification purposes, the associated depths provide further descriptive value, and risks to gear are minimized. In shallow waters (<30 meters) that do not have a high incidence of dangerous rock pinnacles, a towed multibeam side scan sonar could provide a better option for obtaining seafloor imagery because of its high acquisition speed and image quality; however, the high probability of losing or damaging such a costly system when towed through the extremely rugose nearshore zones within OCNMS makes this a financially risky proposition. 
The development of newer technologies such as interferometric multibeam systems and bathymetric side scan systems could also provide great potential for mapping these nearshore rocky areas, as they allow high-speed data acquisition, produce side scan imagery precisely geo-referenced to bathymetry, and do not suffer the angular depth dependency associated with multibeam echosounders, allowing larger range scales to be used in shallower water. As such, further investigation of these systems is needed to assess their efficiency and utility in these environments compared to traditional side scan sonar and multibeam bathymetry. (PDF contains 43 pages.)
Abstract:
In September 2002, side scan sonar was used to image a portion of the sea floor in the northern OCNMS, and the data were mosaicked at 1-meter pixel resolution using 100 kHz data collected at a 300-meter range scale. Video from a remotely operated vehicle (ROV), bathymetry data, sedimentary samples, and sonar mapping have been integrated to describe geological and biological aspects of habitat, and polygon features have been created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). The data can be used with geographic information system (GIS) software for display, query, and analysis. Textural analysis of the sonar images provided a relatively automated method for delineating substrate into three broad classes representing soft, mixed-sediment, and hard bottom. Microhabitat and the presence of certain biologic attributes were also populated into the polygon features, but strictly limited to areas where video groundtruthing occurred. Further groundtruthing work in specific areas would improve confidence in the classified habitat map. (PDF contains 22 pages.)
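Texture-based substrate delineation of the kind described above can be caricatured with local standard deviation as the texture measure; the window size and thresholds below are purely hypothetical, not those used in the report:

```python
import numpy as np

def classify_substrate(backscatter, win=5, t_soft=0.05, t_hard=0.15):
    # Label each window of a backscatter mosaic as soft, mixed, or hard
    # bottom: smooth, homogeneous returns -> soft sediment; high local
    # variability -> hard/rough bottom; in between -> mixed.
    h, w = backscatter.shape
    out = np.empty((h - win + 1, w - win + 1), dtype=object)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            s = backscatter[i:i + win, j:j + win].std()
            out[i, j] = "soft" if s < t_soft else ("hard" if s > t_hard else "mixed")
    return out

# Synthetic mosaic: a flat (soft) region next to a rough (hard) one.
img = np.full((10, 10), 0.5)
img[:, 5:] = np.indices((10, 5)).sum(axis=0) % 2   # checkerboard texture
labels = classify_substrate(img)
```

In practice the thresholds would have to be tuned against video-groundtruthed areas, which is exactly the role of the ROV data in the report.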
Abstract:
Authors' e-mail addresses: amaya999@hotmail.com ; nagores@hotmail.es
Abstract:
222 p. : ill.
Abstract:
High-background applications such as climate monitoring, biology, and security demand a large dynamic range. Under such conditions ultra-high sensitivity is not required. The resonator bolometer is a novel detector well-suited to these conditions. This device takes advantage of the high-density frequency multiplexing capabilities of superconducting microresonators while allowing the use of high-Tc superconductors in fabrication, which enables a modest (1-4 K) operating temperature and a larger dynamic range than is possible with conventional microresonators. The moderate operating temperature and intrinsic multiplexability of this device reduce cost and allow for large pixel counts, making the resonator bolometer especially suitable for the aforementioned applications. A single pixel consists of a superconducting microresonator whose light-absorbing area is placed on a thermally isolated island. Here we present experimental results and theoretical calculations for a prototype resonator bolometer array. Intrinsic device noise and noise equivalent power (NEP) under both dark and illuminated conditions are presented. Under dark conditions the device sensitivity is limited by thermal fluctuation noise from the bolometer legs. Under the experimental illuminated conditions the device was photon-noise limited.
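The leg-limited dark sensitivity mentioned above corresponds to the standard thermal-fluctuation (phonon) noise expression NEP = sqrt(4 k_B T² G); a minimal sketch, with a purely hypothetical operating point not taken from the work:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def phonon_nep(T, G):
    # Thermal-fluctuation (phonon) noise limit of a bolometer whose
    # island is tied to the bath by legs of thermal conductance G,
    # ignoring the usual order-unity correction factor:
    #   NEP = sqrt(4 * k_B * T^2 * G)   [W / sqrt(Hz)]
    return math.sqrt(4.0 * K_B * T * T * G)

# Hypothetical operating point: 1 K island, 100 pW/K leg conductance.
nep = phonon_nep(1.0, 100e-12)   # ~7.4e-17 W / sqrt(Hz)
```

The quadratic dependence on T shows why even a modest 1-4 K operating temperature, rather than the usual sub-kelvin stage, still yields a useful noise floor for high-background applications.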
Abstract:
This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.
We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
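The bandwidth and resolution figures above follow from the standard FMCW relations; the functions below are illustrative sketches of those relations, not the thesis code:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def fmcw_axial_resolution(bandwidth_hz):
    # Free-space axial resolution is set by the total swept optical
    # bandwidth B: delta_z = c / (2 * B).
    return C / (2 * bandwidth_hz)

def fmcw_range(beat_hz, chirp_rate_hz_per_s):
    # Target distance from the measured beat frequency f_b and the
    # chirp rate xi: d = c * f_b / (2 * xi).
    return C * beat_hz / (2 * chirp_rate_hz_per_s)

# Stitching four sweeps to 2 THz of total optical bandwidth:
res = fmcw_axial_resolution(2e12)   # ~75 micrometers
```

The first relation shows why stitching multiple SFL sweeps pays off directly: doubling the stitched bandwidth halves the axial resolution element.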
We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for the moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle, and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform, and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which improves the volume acquisition speed by a factor of ten.
We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5*10^14 Hz/sec to increase the amplifier SBS threshold threefold, when compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.
Abstract:
Optical microscopy is an essential tool in biological science and one of the gold standards for medical examinations. Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective and portable platforms for biomedical research and healthcare. This thesis reports on implementations of bright-field and fluorescence chip-scale microscopes for a variety of biological imaging applications. The term “chip-scale microscopy” refers to lensless imaging techniques realized in the form of mass-producible semiconductor devices, which transforms the fundamental design of optical microscopes.
Our strategy for chip-scale microscopy involves the use of low-cost complementary metal-oxide-semiconductor (CMOS) image sensors, computational image processing, and micro-fabricated structural components. First, the sub-pixel resolving optofluidic microscope (SROFM) is presented, which combines microfluidics and pixel super-resolution image reconstruction to perform high-throughput imaging of fluidic samples, such as blood cells. We discuss the design parameters and construction of the device, as well as the resulting images and the resolution of the device, which was 0.66 µm at the highest acuity. The potential application of SROFM to clinical diagnosis of malaria in resource-limited settings is discussed.
Next, the implementations of ePetri, a self-imaging Petri dish platform with microscopy resolution, are presented. Here, we simply place the sample of interest on the surface of the image sensor and capture direct shadow images under illumination. By taking advantage of the inherent motion of the microorganisms, we achieve high-resolution (~1 µm) imaging and long-term culture of motile microorganisms over an ultra-large field-of-view (5.7 mm × 4.4 mm) in a specialized ePetri platform. We apply pixel super-resolution reconstruction to a set of low-resolution shadow images of the microorganisms as they move across the sensing area of an image sensor chip and render an improved-resolution image. We perform a longitudinal study of Euglena gracilis cultured in an ePetri platform and image-based analysis of the motion and morphology of the cells. An ePetri device for imaging non-motile cells is also demonstrated, using the sweeping illumination of a light-emitting diode (LED) matrix for pixel super-resolution reconstruction of sub-pixel shifted shadow images. Using this prototype device, we demonstrate the detection of waterborne parasites for the effective diagnosis of enteric parasite infection in resource-limited settings.
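The pixel super-resolution step can be illustrated with the simplest shift-and-add scheme: if each low-resolution frame is offset by a known sub-pixel shift, interlacing the frames onto a finer grid recovers the high-resolution image. The data below are synthetic, and the actual SROFM/ePetri reconstruction is more sophisticated:

```python
import numpy as np

def make_shifted_frames(hr, n):
    # Simulate n*n low-resolution captures of a scene, each offset by a
    # different 1/n-pixel shift (modelled here as decimation offsets).
    return {(dy, dx): hr[dy::n, dx::n] for dy in range(n) for dx in range(n)}

def shift_and_add(frames, n):
    # Interlace the shifted low-resolution frames back onto a single
    # high-resolution grid: the simplest pixel super-resolution scheme.
    h, w = next(iter(frames.values())).shape
    hr = np.zeros((h * n, w * n))
    for (dy, dx), lr in frames.items():
        hr[dy::n, dx::n] = lr
    return hr

rng = np.random.default_rng(0)
truth = rng.random((8, 8))                 # synthetic "specimen"
frames = make_shifted_frames(truth, n=2)   # four 4x4 shadow images
recovered = shift_and_add(frames, n=2)     # 8x8 reconstruction
```

In the microscope the shifts come for free, from either the sample's own motion or the sweeping LED illumination, and must be estimated rather than assumed known as they are here.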
Then, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope, which uses ambient illumination as its light source and does not require a dedicated light source. The method is also based on image reconstruction with the sweeping-illumination technique, where a sequence of images is captured while the user manually tilts the device around any ambient light source, such as the sun or a lamp. Image acquisition and reconstruction are performed on the device using a custom-built Android application, making a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
Finally, we report on the implementation of a fluorescence chip-scale microscope, based on a silo-filter structure fabricated on the pixel array of a CMOS image sensor. The extruded pixel design with metal walls between neighboring pixels successfully guides fluorescence emission through the thick absorptive filter to the photodiode layer of a pixel. Our silo-filter CMOS image sensor prototype achieves 13-µm resolution for fluorescence imaging over a wide field-of-view (4.8 mm × 4.4 mm). Here, we demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Abstract:
A substantial amount of important scientific information is contained within astronomical data at the submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, these wavelengths are among the least-explored fields in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable efforts have been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes.
The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet the necessary requirements in terms of the field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors that use kinetic inductance detector (KID) technology.
This dissertation presents the development of a KID-based instrument including a portion of the millimeter-wave bandpass filters and all aspects of the readout electronics, which together enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this dissertation has been implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), a new instrument for the Caltech Submillimeter Observatory (CSO).
Abstract:
A compact two-step modified-signed-digit arithmetic-logic array processor is proposed. When the reference digits are programmed, both addition and subtraction can be performed by the same binary logic operations regardless of the sign of the input digits. The optical implementation and experimental demonstration with an electron-trapping device are shown. Each digit is encoded by a single pixel, and no polarization is included. Any combinational logic can be easily performed without optoelectronic and electro-optic conversions of the intermediate results. The system is compact, general purpose, simple to align, and has a high signal-to-noise ratio. (C) 1999 Optical Society of America.
Abstract:
Negabinary is a component of the positional number system. A complete set of negabinary arithmetic operations are presented, including the basic addition/subtraction logic, the two-step carry-free addition/subtraction algorithm based on negabinary signed-digit (NSD) representation, parallel multiplication, and the fast conversion from NSD to the normal negabinary in the carry-look-ahead mode. All the arithmetic operations can be performed with binary logic. By programming the binary reference bits, addition and subtraction can be realized in parallel with the same binary logic functions. This offers a technique to perform space-variant arithmetic-logic functions with space-invariant instructions. Multiplication can be performed in the tree structure and it is simpler than the modified signed-digit (MSD) counterpart. The parallelism of the algorithms is very suitable for optical implementation. Correspondingly, a general-purpose optical logic system using an electron trapping device is suggested. Various complex logic functions can be performed by programming the illumination of the data arrays without additional temporal latency of the intermediate results. The system can be compact. These properties make the proposed negabinary arithmetic-logic system a strong candidate for future applications in digital optical computing with the development of smart pixel arrays. (C) 1999 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(99)00803-X].
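As a sketch of the underlying base −2 positional system (the NSD carry-free algorithms themselves are more involved), integer conversion to and from negabinary can be written as:

```python
def to_negabinary(n):
    # Convert a signed integer to its base -2 ("negabinary") digits.
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:            # force the remainder into {0, 1}
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    # Evaluate the digit string against powers of -2.
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

# Every integer, positive or negative, is represented with only the
# digits 0 and 1 and no explicit sign: e.g. 6 -> "11010", -6 -> "1110".
```

This sign-free property is what lets the optical system treat positive and negative operands with the same binary logic operations.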
Abstract:
The standardization of the manufacture of stainless-steel endodontic instruments contributed to the development of new geometric features. Proposals emerged for changes in the design of the helical shaft, the cross-section, the tip, the taper, and the tip diameter (D0). At the same time, the use of nickel-titanium alloys made possible the production of motor-driven instruments, which are widely used today. Every year the industry launches instruments with various modifications without, however, providing sufficient information on the clinical implications of these modifications. There is growing interest in the study of the different geometric features and their precise metrology. Traditionally, the measurement of the geometric features of endodontic instruments is performed visually by optical microscopy. However, this visual procedure is slow and subjective. This work proposes a new method for the metrology of endodontic instruments based on scanning electron microscopy (SEM) and digital image analysis. The depth of field of the SEM makes it possible to image the entire relief of the endodontic instrument at a constant working distance. Moreover, images obtained with the backscattered-electron detector have fewer artifacts and shadows, making image acquisition and analysis easier. In addition, image analysis allows more efficient forms of measurement, with greater speed and quality. A dedicated sample holder was adapted for imaging the endodontic instruments. It consists of a multiple electrical connector with twelve screw terminals, 4 mm in diameter, on an aluminum base covered with gold discs. The connector sockets (female terminals) have a diameter (2.5 mm) appropriate for mounting the endodontic instruments. 
Furthermore, the ordered positioning of the instruments in the electrical connector allows automated image acquisition in the SEM. In the backscattered-electron images, the gold targets produce better atomic-number contrast between the gold background and the instruments. In the sample holder developed, the discs that make up the gold background are in fact sputter-coater targets, commonly found in SEM laboratories. For each instrument, images of four to six adjacent fields at 100X magnification are automatically acquired to cover the full length of the instrument at the required magnification and resolution (3.12 µm/pixel). The images are processed and analyzed with the Axiovision and KS400 programs. First they are assembled into a single extended field for each instrument by a semi-automatic alignment procedure based on interaction with Axiovision. The image of each instrument then goes through an automated image-analysis routine in KS400. The routine follows a standard sequence: pre-processing, segmentation, post-processing, and measurement of the geometric features.
Abstract:
Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT uses infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform that is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of each simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, all of which are explained in detail later in the thesis.
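One ingredient of such speedups, importance sampling, can be illustrated in a far simpler setting than photon transport: estimating a rare-event probability by drawing from a shifted proposal distribution and reweighting each sample by the likelihood ratio. The numbers below are illustrative, not from the thesis:

```python
import math
import random

random.seed(0)

def tail_prob_importance(t, n):
    # Estimate P(Z > t) for a standard normal Z by sampling from a
    # proposal centred at t and reweighting by the likelihood ratio
    # phi(z) / phi(z - t) = exp(t^2/2 - t*z).
    total = 0.0
    for _ in range(n):
        z = random.gauss(t, 1.0)          # proposal sample
        if z > t:
            total += math.exp(t * t / 2.0 - t * z)
    return total / n

# Naive sampling would need roughly 30,000 draws per hit at t = 4;
# the reweighted estimator converges with far fewer samples.
est = tail_prob_importance(4.0, 100_000)  # true value ~= 3.167e-5
```

In the OCT simulator the analogous trick biases photons toward trajectories that actually contribute to the detected signal, then corrects the bias with weights, which is what makes deep, weakly scattering paths tractable.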
Next we turn to the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. Solving this problem would allow an OCT image to be interpreted completely and precisely without the help of a trained expert. It turns out that we can do this very well: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing an effectively unlimited amount of data in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
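The classify-then-regress routing can be caricatured with a toy committee of experts; the data, models, and dimensions below are entirely hypothetical stand-ins for the thesis's OCT images and layer structures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two "structures" (classes), each with its own
# linear forward model mapping a 3-feature input to a target value.
A = {0: np.array([1.0, -2.0, 0.5]), 1: np.array([0.2, 3.0, -1.0])}
X_parts, y_parts, c_parts = [], [], []
for cls, offset in [(0, 0.0), (1, 8.0)]:
    xs = rng.normal(offset, 1.0, size=(200, 3))
    X_parts.append(xs)
    y_parts.append(xs @ A[cls])
    c_parts += [cls] * 200
X = np.vstack(X_parts)
y = np.concatenate(y_parts)
c = np.array(c_parts)

# Stage 1: a nearest-centroid classifier decides which structure an
# input belongs to.
centroids = {cls: X[c == cls].mean(axis=0) for cls in (0, 1)}

def classify(x):
    return min(centroids, key=lambda cls: np.linalg.norm(x - centroids[cls]))

# Stage 2: one least-squares regressor ("expert") per structure,
# trained only on that structure's data.
experts = {cls: np.linalg.lstsq(X[c == cls], y[c == cls], rcond=None)[0]
           for cls in (0, 1)}

def predict(x):
    # Route the input to the expert chosen by the classifier.
    return x @ experts[classify(x)]

x_new = rng.normal(8.0, 1.0, size=3)   # a sample from structure 1
```

The design choice mirrors the thesis: a cheap discrete decision first, then a specialist model whose training distribution matches the chosen structure.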
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to the human eye, they still carry enough information that a well-trained machine learning model fed with them yields precisely the true structure of the object being imaged. This is another case in which artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only successful but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires a large number of fully annotated OCT images (hundreds or even thousands), which would clearly be impossible to obtain without a powerful simulation tool like the one developed in this thesis.