980 results for FOCAL-PLANE IRRADIANCE
Abstract:
An accurate and simple technique for determining the focal length of a lens is presented. It consists of measuring the period of the fringes produced by a diffraction grating in the near field when it is illuminated with a beam focused by the unknown lens. In the paraxial approximation, the period of the fringes varies linearly with the distance. After some calculations, a simple extrapolation of the data yields the locations of the principal plane and the focal plane of the lens, and the focal length is obtained as the distance between these two planes. The accuracy of the method is limited by the degree of collimation of the incident beam and by the algorithm used to obtain the period of the fringes. We have checked the technique with two commercial lenses, one convergent and one divergent, with nominal focal lengths of (+100±1) mm and (−100±1) mm, respectively. The experimentally obtained focal lengths fall within the interval given by the manufacturer, but with an uncertainty of 0.1%, one order of magnitude smaller than the manufacturer's uncertainty.
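The extrapolation step can be illustrated with a minimal Python sketch (not the authors' code), assuming the collimated-illumination geometry in which the fringe period shrinks linearly behind the lens, extrapolating to the grating period at the principal plane and to zero at the focal plane; all numerical values below are hypothetical.

```python
import numpy as np

# Illustrative sketch of the extrapolation step, not the authors' software.
d = 0.1                                             # grating period, mm (hypothetical)
z = np.array([20.0, 30.0, 40.0, 50.0, 60.0])        # measurement positions along the axis, mm
p = np.array([0.080, 0.070, 0.060, 0.050, 0.040])   # measured fringe periods, mm

a, b = np.polyfit(z, p, 1)      # linear model p(z) = a*z + b

z_principal = (d - b) / a       # plane where the period extrapolates to the grating period
z_focal = -b / a                # plane where the period extrapolates to zero
focal_length = z_focal - z_principal

print(f"principal plane at {z_principal:.2f} mm, focal plane at {z_focal:.2f} mm")
print(f"focal length = {focal_length:.2f} mm")
```

With the synthetic numbers above the fit returns a focal length of 100 mm, matching the sign convention of the convergent test lens quoted in the abstract.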
Abstract:
In this paper, Fourier-optics processing methods implemented in a digital holographic microscopy system are presented. The proposed methodology is based on the ability of digital holography to carry out the whole reconstruction of the recorded wavefront and, consequently, to determine the phase and intensity distribution in any arbitrary plane located between the object and the recording plane. In this way, in digital holographic microscopy the field produced by the objective lens can be reconstructed along its propagation, allowing the reconstruction of the back focal plane of the lens, so that the complex amplitude of the Fraunhofer diffraction, or equivalently the Fourier transform, of the light distribution across the object can be known. Manipulation of the Fourier-transform plane makes possible the design of digital methods of optical processing and image analysis. The proposed method has great practical utility and represents a powerful tool in image analysis and data processing. The theoretical aspects of the method are presented, and its validity has been demonstrated using computer-generated holograms and simulated images of microscopic objects. (c) 2007 Elsevier B.V. All rights reserved.
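As a rough illustration of this kind of numerical refocusing and Fourier-plane processing, the sketch below propagates a reconstructed complex field to an arbitrary plane with the angular spectrum method and applies a simple mask there. It is not the authors' implementation; the field, wavelength, pixel pitch and propagation distance are placeholders.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2        # squared longitudinal frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)              # propagating components only
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical usage: refocus the reconstructed field onto the back focal plane
# of the objective, then filter there (the Fourier plane of the object).
wavelength, dx = 633e-9, 3.45e-6                     # assumed values, metres
field = np.ones((512, 512), dtype=complex)           # stand-in for a reconstructed hologram field
bfp = angular_spectrum_propagate(field, wavelength, dx, z=0.02)

ny, nx = bfp.shape
yy, xx = np.indices((ny, nx))
mask = ((xx - nx // 2) ** 2 + (yy - ny // 2) ** 2) < 50 ** 2   # circular low-pass mask
filtered_bfp = bfp * mask
```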
Abstract:
We report on a simple method to obtain surface gratings using a Michelson interferometer and femtosecond laser radiation. In the optical setup used, two parallel laser beams are generated using a beam splitter and then focused using the same focusing lens. An interference pattern is created in the focal plane of the focusing lens, which can be used to pattern the surface of materials. The main advantage of this method is that the optical path difference between the interfering beams is independent of the distance between the beams. As a result, the fringe period can be varied without the need for major realignment of the optical system, and the time coincidence between the interfering beams can be easily monitored. The potential of the method was demonstrated by patterning surface gratings with different periods on titanium surfaces in air.
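The geometry implied here can be summarized with the standard two-beam interference relation (a sketch in my own notation, not taken from the paper): two parallel beams separated by D and focused by a lens of focal length f cross at the focal plane at half-angle θ, so

```latex
\Lambda = \frac{\lambda}{2\sin\theta}, \qquad
\tan\theta = \frac{D}{2f}
\;\;\Longrightarrow\;\;
\Lambda \approx \frac{\lambda f}{D} \quad (\theta \ll 1).
```

Changing the beam separation D therefore tunes the fringe period Λ while the two arm lengths stay balanced, which is the realignment-free tunability claimed in the abstract.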
Abstract:
A mathematical model that describes the behavior of low-resolution Fresnel lenses encoded in any low-resolution device (e.g., a spatial light modulator) is developed. The effects of low-resolution codification, such as the appearance of new secondary lenses, are studied for a general case. General expressions for the phase of these lenses are developed, showing that each lens behaves as if it were encoded through all pixels of the low-resolution device. Simple expressions for the light distribution in the focal plane and its dependence on the encoded focal length are developed and commented on in detail. For a given codification device an optimum focal length is found for best lens performance. An optimization method for codification of a single lens with a short focal length is proposed.
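A minimal sketch of what such a codification looks like in practice is given below: a quadratic lens phase is sampled on a pixelated device and wrapped to 2π, and the local fringe period at the device edge is compared with the pixel pitch. The parameters and the Nyquist check are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

def fresnel_lens_phase(n_pixels, pixel_pitch, wavelength, focal_length):
    """Quadratic (Fresnel) lens phase sampled on a pixelated device, wrapped to [0, 2*pi)."""
    x = (np.arange(n_pixels) - n_pixels / 2) * pixel_pitch
    X, Y = np.meshgrid(x, x)
    phase = -np.pi * (X**2 + Y**2) / (wavelength * focal_length)
    return np.mod(phase, 2 * np.pi)

# Hypothetical parameters for a low-resolution SLM
n_pixels, pixel_pitch, wavelength, f_enc = 512, 20e-6, 633e-9, 0.5
phase = fresnel_lens_phase(n_pixels, pixel_pitch, wavelength, f_enc)

# Sampling check: the local fringe period of the lens phase at radius r is
# wavelength * f_enc / r; when it drops below two pixels at the device edge,
# aliasing produces the secondary lenses discussed in the abstract.
r_max = (n_pixels / 2) * pixel_pitch
print("edge fringe period / pixel pitch =",
      wavelength * f_enc / r_max / pixel_pitch)
```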
Abstract:
Optical aberration due to the nonflatness of spatial light modulators used in holographic optical tweezers significantly deteriorates the quality of the trap and may easily prevent stable trapping of particles. We use a Shack-Hartmann sensor to measure the distorted wavefront at the modulator plane; the conjugate of this wavefront is then added to the holograms written into the display to counteract its own curvature and thus compensate the optical aberration of the system. For a Holoeye LC-R 2500 reflective device, flatness is improved from 0.8λ to λ/16 (λ=532 nm), leading to a diffraction-limited spot at the focal plane of the microscope objective, which makes stable trapping possible. This process could be fully automated in a closed-loop configuration and would eventually allow other sources of aberration in the optical setup to be corrected for.
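The correction step itself amounts to a phase subtraction on the SLM pixel grid; the sketch below is a hedged illustration of that operation (array names and sizes are placeholders, not the authors' software).

```python
import numpy as np

def compensate_hologram(trap_phase, measured_wavefront_phase):
    """Add the conjugate of the measured aberration to the trap hologram.

    Both arguments are phase maps in radians sampled on the SLM pixel grid;
    the conjugate of the aberration corresponds to its negative phase, and the
    result is wrapped to the displayable [0, 2*pi) range.
    """
    corrected = trap_phase - measured_wavefront_phase
    return np.mod(corrected, 2 * np.pi)

# Hypothetical usage with placeholder arrays standing in for real data
trap_phase = np.random.uniform(0, 2 * np.pi, (1080, 1920))   # hologram generating the traps
aberration = np.zeros((1080, 1920))                           # from the Shack-Hartmann measurement
slm_pattern = compensate_hologram(trap_phase, aberration)
```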
Abstract:
Bright-field wholemount labeling techniques applied to the mammalian central nervous system (CNS) offer advantages over conventional methods based on sections, since an immediate and three-dimensional view of the stained components is provided. It thereby becomes possible to survey and count large numbers of cells and fibers in their natural relationships. The ability of confocal laser scanning microscopy to visualize in one focal plane the fluorescence associated with multiple markers could be made most valuable by the availability of reliable wholemount fluorescence techniques. Accordingly, based on our previously published bright-field wholemount protocols [Brain Res. Prot. 2 (1998) 165-173], we have devised an effective immunofluorescence wholemount procedure. We show that reliable wholemount fluorescent staining can be obtained using isolated complete CNS up to rat embryonic day 17, with antibody penetration in the millimeter range. Examples are shown of preparations in which colocalization of cytoskeletal and calcium-binding proteins can be observed in nerve cells.
Abstract:
Laser precision drilling currently has several industrial applications, such as inkjet printers, diesel engine fuel injection nozzles, medical instruments, turbine blade cooling holes and stencils. In this work, the possibilities of laser drilling of 99.9% copper and EN 1.4301 stainless steel (equivalent to AISI 304) were studied. Three material thicknesses were available: 0.1 mm, 0.5 mm and 1.0 mm. For comparison, 1.0 mm thick EN 1.4432 acid-resistant steel (equivalent to AISI 316L) was also included in the study. Three Nd:YAG lasers of different powers, operating at a wavelength of 1.064 µm, and one CO2 laser were used in the study. The drilled holes were imaged with an electron microscope, and the diameter, circularity and taper of each hole were measured. In addition, when assessing hole quality, the amount of burr around the holes was examined. The study showed that, depending on the beam quality and wavelength of the laser, holes of very different sizes can be drilled in the different materials. The taper was found to be controllable by shifting the position of the focal point.
Abstract:
It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials means all wood-fibre-based materials, like dried pulp, copy paper, newspaper, cardboard, corrugated board, tissue paper etc. Accordingly, laser processing in this thesis means all laser treatments resulting in material removal, like cutting, partial cutting, marking, creasing, perforation etc., that can be used to process paper materials. Laser technology provides many advantages for processing of paper materials: a non-contact method, freedom of processing geometry, reliable technology for non-stop production etc. The packaging industry in particular is a very promising area for laser processing applications. However, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is a shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is colouration of the paper material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons why the topic of this thesis concerns the characterization of the interaction between laser beam and paper materials. This study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used in this study was a TRUMPF TLF 2700 carbon dioxide laser that produces a beam with a wavelength of 10.6 μm and a power range of 190-2500 W (laser power on the work piece). The study of laser beam and paper material interaction was carried out by treating dried kraft pulp (grammage of 67 g m-2) with different laser power levels, focal plane position settings and interaction times. The interaction between laser beam and dried kraft pulp was detected with different monitoring devices, i.e. a spectrometer, a pyrometer and an active illumination imaging system. In this way it was possible to create an input and output parameter diagram and to study the effects of the input and output parameters in this thesis. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling the gap in information on the interaction phenomena can pave the way for wider use of laser technology in the paper making and converting industry. It was concluded in this thesis that the interaction of laser beam and paper material has two mechanisms that depend on the focal plane position range. In the experimental set-up used, the assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm, and the assumed interaction mechanism A in the average focal plane position range of 0.4 mm to -0.6 mm; focal plane position 1.4 mm represents the midzone between these two mechanisms. Holes are formed gradually during the laser beam and paper material interaction: first a small hole is formed in the interaction area at the centre of the laser beam cross-section and after that, as a function of interaction time, the hole expands until the interaction between laser beam and dried kraft pulp ends. Image analysis shows that at the beginning of the laser beam and dried kraft pulp interaction small holes of very good quality are formed. Black colour and a heat-affected zone appear as a function of interaction time, which reveals that there are further interaction phases within interaction mechanisms A and B.
These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase into dual-phase interaction. All peak intensity values under the limit peak intensity belong to MAOM (interaction mechanism A one-phase mode) or MBOM (interaction mechanism B one-phase mode), and values over it belong to MADM (interaction mechanism A dual-phase mode) or MBDM (interaction mechanism B dual-phase mode). The decomposition process of cellulose between 380 and 500°C is the evolution of hydrocarbons, meaning that the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900°C the cellulose molecule is mainly decomposed into H2 gas, which is why this range is called the evolution of hydrogen. Interaction in this range starts (as in the range of MAOM and MBOM) when a small, good-quality hole is formed. This is due to "direct evaporation" of pulp via the decomposition process of the evolution of hydrogen, and it can be seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which refers to a temperature of ~1750°C. The pyrometer does not detect this high-intensity peak since it is not able to detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction time between laser beam and dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it will spontaneously ignite in a normal atmosphere without an external source of ignition, such as a flame or spark. Three auto-ignition processes appear in the range of MADM and MBDM, namely: 1. the auto-ignition temperature of the hydrogen molecule (H2) is 500°C, 2. the auto-ignition temperature of the carbon monoxide molecule (CO) is 609°C and 3. the auto-ignition temperature of the carbon atom (C) is 700°C. These three auto-ignition processes lead to the formation of a plasma plume which has strong emission of radiation in the visible range. Formation of this plasma plume can be seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. This plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of dried kraft pulp than the actual area of the beam cross-section. This assumed scattering also reduces the peak intensity. The results thus show that the presumably scattered light, with low peak intensity, interacts with a large area of the hole edges, and due to the low peak intensity this interaction happens at low temperature. The interaction between laser beam and dried kraft pulp therefore turns from the evolution of hydrogen to the evolution of hydrocarbons, which leads to the black colour of the hole edges.
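The MAOM/MBOM/MADM/MBDM nomenclature reduces to a simple classification rule, sketched below for illustration only; the numerical limit peak intensity is not given in the abstract, so it remains a named parameter supplied by the experiment.

```python
def interaction_mode(mechanism, peak_intensity, limit_peak_intensity):
    """Classify the laser-pulp interaction mode described in the thesis.

    mechanism: 'A' or 'B' (set by the focal plane position range).
    A peak intensity below the limit gives the one-phase mode (MAOM/MBOM);
    above it, the dual-phase mode (MADM/MBDM). The limit value itself is an
    experimental quantity not stated in the abstract.
    """
    if mechanism not in ("A", "B"):
        raise ValueError("mechanism must be 'A' or 'B'")
    phase = "OM" if peak_intensity < limit_peak_intensity else "DM"
    return f"M{mechanism}{phase}"

# Example with hypothetical numbers: mechanism A at low peak intensity -> MAOM
print(interaction_mode("A", peak_intensity=1.0e6, limit_peak_intensity=2.0e6))
```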
Abstract:
As the optical design of spectrometer and radiometer instruments evolves with advances in detector sensitivity, the use of focal plane detector arrays and innovations in adaptive optics for large high-altitude telescopes, mid-infrared astronomy and remote sensing applications have been areas of progressive research in recent years. This research has promoted a number of developments in infrared coating performance, particularly by placing increased demands on the spectral imaging requirements of filters to precisely isolate radiation between discrete wavebands and improve photometric accuracy. The spectral design and construction of multilayer filters to accommodate these developments has subsequently been an area of challenging thin-film research, aiming to achieve high spectral positioning accuracy, environmental durability and aging stability at cryogenic temperatures, whilst maximizing the far-infrared performance. In this paper we examine the design and fabrication of interference filters in instruments that utilize the mid-infrared N-band (6-15 µm) and Q-band (16-28 µm) atmospheric windows, together with a rationale for the selection of materials, deposition process, spectral measurements and assessment of environmental durability performance.
Abstract:
Fourier transform infrared (FTIR) spectroscopic imaging using a focal plane array detector has been used to study atherosclerotic arteries with a spatial resolution of 3-4 µm, i.e., at a level that is comparable with cellular dimensions. Such high spatial resolution is made possible using a micro-attenuated total reflection (ATR) germanium objective with a high refractive index and therefore high numerical aperture. This micro-ATR approach has enabled small structures within the vessel wall to be imaged for the first time by FTIR. Structures observed include the elastic lamellae of the tunica media and a heterogeneous distribution of small clusters of cholesterol esters within an atherosclerotic lesion, which may correspond to foam cells. A macro-ATR imaging method was also applied, which involves the use of a diamond macro-ATR accessory. This study of atherosclerosis is presented as an illustrative example of the wider potential of these ATR imaging approaches for cardiovascular medicine and biomedical applications. (C) 2004 Wiley Periodicals, Inc.
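The resolution gain from the high-index germanium element follows from the diffraction limit with the numerical aperture scaled by the crystal's refractive index; the relation below is a standard sketch added for orientation, not a result of the paper:

```latex
\Delta x \approx \frac{0.61\,\lambda}{\mathrm{NA}}, \qquad
\mathrm{NA} = n_{\mathrm{Ge}}\sin\theta, \qquad n_{\mathrm{Ge}} \approx 4,
```

so the germanium crystal improves the attainable resolution by roughly a factor of four over an equivalent air-coupled measurement at the same mid-infrared wavelength, which is what brings the figure down to the few-micrometre, cell-scale level quoted above.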
Abstract:
Cooled infrared filters have been used in pressure modulation and filter radiometry to measure the dynamics, temperature distribution and concentrations of atmospheric elements in various satellite radiometers. Invariably such instruments use precision infrared bandpass filters and coatings for spectral selection, often operating at cryogenic temperatures. More recent developments in the use of spectrally-selective cooled detectors in focal plane arrays have simplified the optical layout and reduced the component count of radiometers, but have placed additional demands on both the spectral and physical performance requirements of the filters. This paper describes and contrasts the more traditional radiometers using discrete detectors with those which use focal plane detector array technology, with particular emphasis on the function of the filters and coatings in the two cases. Additionally we discuss the spectral techniques and materials used to fabricate infrared coatings and filters for use in space optics, and give examples of their application in the fabrication of some demanding long-wavelength dichroics and filters. We also discuss the effects of the space environment on the stability and durability of high performance infrared filters and materials exposed to low Earth orbit for 69 months on the NASA Long Duration Exposure Facility (LDEF).
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 µm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter, cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and the antireflection coatings used on the lenses and dewar window of the instrument. This process uses a system throughput performance approach taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
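The kind of channel-throughput bookkeeping described above can be sketched as a spectral product of the warm and cold filters, the transmissive optics, and the detector response, integrated against the predicted atmospheric signal. The Python sketch below is purely illustrative; all spectral curves are placeholders, not HIRDLS data.

```python
import numpy as np

wl = np.linspace(6.0, 18.0, 2000)                    # wavelength grid, micrometres

def bandpass(wl, centre, width):
    """Idealised top-hat bandpass filter transmission."""
    return ((wl > centre - width / 2) & (wl < centre + width / 2)).astype(float)

warm_filter = bandpass(wl, centre=12.0, width=0.3)   # filter at the intermediate focal plane (301 K)
cold_filter = bandpass(wl, centre=12.0, width=0.6)   # wider blocking filter at the detector (65 K)
optics = np.full_like(wl, 0.85)                      # lenses, dewar window, AR coatings (placeholder)
detector = np.full_like(wl, 0.9)                     # relative spectral response (placeholder)
atmosphere = np.ones_like(wl)                        # predicted atmospheric radiance (placeholder)

throughput = warm_filter * cold_filter * optics * detector
in_band_signal = np.trapz(throughput * atmosphere, wl)
print(f"relative in-band signal: {in_band_signal:.3f}")
```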
Abstract:
A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc) with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AUs would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne extremely-high-precision astrometric measurements at the 0.05 µas (1 sigma) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m), a detector with a large field of view located 40 m away from the telescope and made of 8 small movable CCDs located around a fixed central CCD, and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation flying option as the reference mission, and of a deployable boom option as an alternative choice. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite, surgical astrometric precision, NEAT holds the promise to provide the first thorough census of Earth-mass planets around stars in the immediate vicinity of our Sun.
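The order of magnitude behind the 0.05 µas requirement can be checked with the standard astrometric-signature relation (the symbols and the worked number are mine, not taken from the abstract):

```latex
\alpha\;[\mathrm{arcsec}] = \frac{M_p}{M_\star}\,\frac{a\,[\mathrm{AU}]}{d\,[\mathrm{pc}]},
\qquad
\alpha_{\oplus,\,10\,\mathrm{pc}} \approx 3\times10^{-6}\times\frac{1}{10}
\;\mathrm{arcsec} = 0.3\;\mu\mathrm{as},
```

so an Earth analogue around a solar-mass star at 10 pc produces a reflex wobble several times larger than the quoted 0.05 µas single-measurement accuracy, consistent with the stated sensitivity to sub-Earth-mass planets around the nearest stars.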
Abstract:
Research in art conservation has developed since the early 1950s, making a significant contribution to the conservation-restoration of cultural heritage artefacts. In fact, only through a profound knowledge of the nature and condition of the constituent materials can suitable decisions on conservation and restoration measures be adopted and preservation practices enhanced. The study of ancient artworks is particularly challenging as they can be considered heterogeneous and multilayered systems where numerous interactions between the different components as well as degradation and ageing phenomena take place. However, difficulties in physically separating the different layers due to their thickness (1-200 µm) can result in the inaccurate attribution of the identified compounds to a specific layer. Therefore, details can only be analysed when the sample preparation method leaves the layer structure intact, as for example in the preparation of cross sections embedded in synthetic resins. Hence, spatially resolved analytical techniques are required not only to exactly characterize the nature of the compounds but also to obtain precise chemical and physical information about ongoing changes. This thesis focuses on the application of FTIR microspectroscopic techniques to cultural heritage materials. The first section introduces the use of FTIR microscopy in conservation science, with particular attention to the sampling criteria and sample preparation methods. The second section evaluates and validates the use of different FTIR microscopic analytical methods applied to the study of different art conservation issues which may be encountered when dealing with cultural heritage artefacts: the characterisation of the artistic execution technique (chapter II-1), studies on degradation phenomena (chapter II-2) and finally the evaluation of protective treatments (chapter II-3). The third and last section is divided into three chapters which underline recent developments in FTIR spectroscopy for the characterisation of paint cross sections and in particular thin organic layers: a newly developed preparation method with embedding systems in infrared-transparent salts (chapter III-1), the new opportunities offered by macro-ATR imaging spectroscopy (chapter III-2) and the possibilities achieved with the different FTIR microspectroscopic techniques available nowadays (chapter III-3). In chapter II-1, FTIR microspectroscopy, as a molecular analysis, is presented in an integrated approach with other analytical techniques. The proposed sequence is optimized as a function of the limited quantity of sample available, and this methodology permits identification of the painting materials and characterisation of the adopted execution technique and state of conservation. Chapter II-2 describes the characterisation of degradation products with FTIR microscopy, since the investigation of the ageing processes encountered in old artefacts represents one of the most important issues in conservation research. Metal carboxylates resulting from the interaction between pigments and binding media are characterized using synthesised metal palmitates, and their production is detected on copper-, zinc-, manganese- and lead- (associated with lead carbonate) based pigments dispersed either in oil or egg tempera. Moreover, significant effects seem to be obtained with iron and cobalt (acceleration of triglyceride hydrolysis).
For the first time on sienna and umber paints, manganese carboxylates are also observed. Finally, in chapter II-3, FTIR microscopy is combined with further elemental analyses to characterise and estimate the performance and stability of newly developed treatments, which should better fit conservation-restoration problems. In the third section, in chapter III-1, an innovative embedding system in potassium bromide is reported, focusing on the characterisation and localisation of organic substances in cross sections. Not only the identification but also the distribution of proteinaceous, lipidic or resinaceous materials is evidenced directly on different paint cross sections, especially in thin layers of the order of 10 µm. Chapter III-2 describes the use of a conventional diamond ATR accessory coupled with a focal plane array to obtain chemical images of multi-layered paint cross sections. A rapid and simple identification of the different compounds is achieved without the use of any infrared microscope objectives. Finally, the latest FTIR techniques available are highlighted in chapter III-3 in a comparative study for the characterisation of paint cross sections. Results in terms of spatial resolution, data quality and chemical information obtained are presented; in particular, a new FTIR microscope equipped with a linear array detector, which permits reducing the spatial resolution limit to approximately 5 µm, provides very promising results and may represent a good alternative to either mapping or imaging systems.
Abstract:
Several MCAO systems are under study to improve the angular resolution of the current and future generation of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources permits increasing the limiting magnitude of the single NGS and improving the sky coverage considerably. One of the two Wavefront Sensors for the mid-high-altitude atmosphere analysis has been integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. To compensate for this effect a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the required number of photons to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype permits simulating realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
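For orientation, the sketch below shows the plain and weighted centre-of-gravity estimators of the kind compared in Chapter 3, applied to a synthetic elongated spot. The function names, the Gaussian weight and the test spot are illustrative assumptions, not the thesis code.

```python
import numpy as np

def center_of_gravity(img):
    """Plain centre-of-gravity spot position estimate, in pixels."""
    yy, xx = np.indices(img.shape)
    total = img.sum()
    return (img * xx).sum() / total, (img * yy).sum() / total

def weighted_center_of_gravity(img, x0, y0, sigma):
    """Centre of gravity with a Gaussian weight centred on a prior position,
    which limits the noise contribution of the elongated spot's faint tails."""
    yy, xx = np.indices(img.shape)
    w = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    wimg = img * w
    total = wimg.sum()
    return (wimg * xx).sum() / total, (wimg * yy).sum() / total

# Hypothetical elongated LGS spot on a 16x16 pixel subaperture
yy, xx = np.indices((16, 16))
spot = np.exp(-((xx - 8.3) ** 2 / (2 * 1.0 ** 2) + (yy - 8.0) ** 2 / (2 * 4.0 ** 2)))
print(center_of_gravity(spot))
print(weighted_center_of_gravity(spot, x0=8.0, y0=8.0, sigma=3.0))
```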