993 results for Atmospheric Turbulence
Abstract:
Flow and turbulence above urban terrain are more complex than above rural terrain, due to the different momentum and heat transfer characteristics caused by the presence of buildings (e.g. pressure variations around buildings). The applicability of similarity theory (as developed over rural terrain) is tested using about 6500 h of observations from a sonic anemometer located at 190.3 m height in London, U.K. Turbulence statistics—dimensionless wind speed and temperature, standard deviations, and correlation coefficients for momentum and heat transfer—were analysed in three ways. First, turbulence statistics were plotted as a function only of a local stability parameter z/Λ (where Λ is the local Obukhov length and z is the height above ground); the σ_i/u_* values (i = u, v, w) for neutral conditions are 2.3, 1.85 and 1.35 respectively, similar to canonical values. Second, urban mixed-layer formulations were analysed for daytime convective conditions over London, showing that atmospheric turbulence at high altitude over large cities may behave much as it does over rural terrain. Third, correlation coefficients for heat and momentum were analysed with respect to local stability. The results give confidence in using the framework of local similarity for turbulence measured over London, and perhaps other cities. However, the following caveats for our data are worth noting: (i) the terrain is reasonably flat, (ii) building heights vary little over a large area, and (iii) the sensor height is above the mean roughness-sublayer depth.
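The neutral-limit values quoted above come from normalizing the component standard deviations by the local friction velocity. A minimal sketch of that scaling, assuming clean, already-detrended sonic series over one averaging period (this is not the authors' actual processing chain):

```python
import numpy as np

def local_similarity_stats(u, v, w):
    """Dimensionless turbulence statistics sigma_i / u_*.

    u, v, w: streamwise, lateral and vertical wind-component series (m/s)
    over one averaging period. A minimal sketch, not the paper's method.
    """
    up, vp, wp = (np.asarray(x, float) - np.mean(x) for x in (u, v, w))
    # local friction velocity from both kinematic momentum-flux components
    u_star = (np.mean(up * wp) ** 2 + np.mean(vp * wp) ** 2) ** 0.25
    # normalized standard deviations sigma_u/u_*, sigma_v/u_*, sigma_w/u_*
    return {k: float(np.std(c) / u_star) for k, c in
            (("u", up), ("v", vp), ("w", wp))}
```

Under near-neutral conditions the abstract reports these ratios approaching 2.3, 1.85 and 1.35 for u, v and w respectively.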
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was the Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour at a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but at a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled by friction velocity. Some differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing': a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. It is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passage of large-scale low-momentum regions above the canopy.
Abstract:
The urban boundary layer (UBL) is the part of the atmosphere in which most of the planet's population now lives, and is one of the most complex and least understood microclimates. Given potential climate change impacts and the requirement to develop cities sustainably, the need for sound modelling and observational tools becomes pressing. This review paper considers progress made in studies of the UBL in terms of a conceptual framework spanning the microscale to mesoscale determinants of UBL structure and evolution. Considerable progress in observing and modelling the urban surface energy balance has been made. The urban roughness sub-layer is an important region requiring attention, as assumptions about atmospheric turbulence break down in this layer, and it may dominate the coupling of the surface to the UBL due to its considerable depth. The upper 90% of the UBL (the mixed and residual layers) remains under-researched, but new remote sensing methods and high-resolution modelling tools now permit rapid progress. Surface heterogeneity dominates from the neighbourhood to the regional scale and should be considered more strongly in future studies. Specific research priorities include humidity within the UBL, high-rise urban canopies, and the development of long-term, spatially extensive measurement networks coupled strongly to model development.
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement of 8 m-class telescopes. To obtain the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Adaptive optics systems let us use the full effective potential of these instruments, drawing as much information as possible from sources in the universe. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do this, the REC requires a "common language" between these two main AO components: a mapping between the sensor space and the mirror space, called an interaction matrix (IM). Therefore, to operate correctly, an AO system has a main requirement: the measurement of an IM to calibrate the whole system. The IM measurement is a milestone for an AO system and must be done regardless of the telescope size or class. Usually, this calibration step is done by adding an auxiliary artificial light source (i.e. a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For larger telescopes (more than 8 m, such as Extremely Large Telescopes, ELTs), fiber-based IM measurement requires challenging optical setups that in some cases are impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique can be used to calibrate the AO system of a telescope of any size.
We want to test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators, and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), performing both optical alignment and tests of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to that of the telescope, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD work: measuring the IM by applying the sinusoidal modulation technique. First, we measured the IM using an auxiliary fiber source to calibrate the system, without any disturbance injected. We then used this calibration technique to measure the IM directly "on sky", i.e. with an atmospheric disturbance added to the AO system. The results obtained in this PhD work by measuring the IM directly in the Arcetri solar tower are crucial for future development: the possibility of acquiring the IM directly on sky means that we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible. Finally, we should not forget the reason why we need this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects observed, because we will be able to resolve more detailed characteristics, discovering, analyzing and understanding the behavior of the components of the universe.
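The role the interaction matrix plays between sensor space and mirror space can be illustrated with a toy numerical example. This is a hypothetical sketch: a random full-rank matrix stands in for a measured IM, and the least-squares pseudo-inverse is used as the reconstructor, which is one common choice rather than necessarily the LBT's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 sensor measurements ("slopes"), 4 mirror actuators.
n_slopes, n_act = 8, 4

# In calibration each actuator is excited in turn (by poking it, or by
# sinusoidal modulation) and the sensor response fills one IM column.
# A seeded random full-rank matrix stands in for a measured IM here.
IM = rng.standard_normal((n_slopes, n_act))

# A common reconstructor is the least-squares pseudo-inverse of the IM.
REC = np.linalg.pinv(IM)

# Closed-loop step: sensor signals are mapped back to mirror commands.
true_cmd = rng.standard_normal(n_act)
slopes = IM @ true_cmd       # what the WFS would measure
recovered = REC @ slopes     # equals true_cmd for a noiseless, full-rank IM
```

In a real system the IM columns are measured responses, noise is present, and the pseudo-inverse is typically regularized or filtered before use.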
Abstract:
Adaptive Optics is the real-time measurement and correction of the wavefront aberration of starlight caused by atmospheric turbulence, which limits the angular resolution of ground-based telescopes and thus their ability to explore faint and crowded astronomical objects in depth. The lack of natural stars bright enough to be used as reference sources for Adaptive Optics over a relevant fraction of the sky led to the introduction of artificial reference stars. The so-called Laser Guide Stars are produced by exciting the Sodium atoms in a layer lying at 90 km altitude with a powerful laser beam projected toward the sky. The possibility of turning on a reference star close to the scientific targets of interest has the drawback of increased difficulty in the wavefront measurement, mainly due to the temporal instability of the Sodium layer density. These issues grow with the telescope diameter. In view of the construction of the 42 m diameter European Extremely Large Telescope, a detailed investigation of the achievable performance of Adaptive Optics becomes mandatory to exploit its unique angular resolution. The goal of this Thesis was to present a complete description of the development of a laboratory Prototype simulating a Shack-Hartmann wavefront sensor using Laser Guide Stars as references, under the conditions expected for a 42 m telescope. From the conceptual design, through the opto-mechanical design, to the Assembly, Integration and Test, all the phases of the Prototype construction are explained. The tests carried out showed the reliability of the images produced by the Prototype, which agreed with the numerical simulations. For this reason some possible upgrades of the opto-mechanical design are presented, to extend the system functionalities and let the Prototype become a more complete test bench to simulate the performance of, and drive the design of, future Adaptive Optics modules.
Abstract:
A free-space optical (FSO) laser communication system with perfect fast-tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast-tracking or with imperfect fast-tracking, the fading probability density function (pdf) is also affected by the pointing error. In this thesis, the overall fading pdfs of FSO communication systems with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors. The calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and the log-normal (LN) fading pdf models; they fit better than both the GG and LN models under different receiver aperture sizes in all the studied cases. Further, the analytical method is extended to FSO communication systems with a beam diverging angle. It is shown that the gamma pdf model remains valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a diverging angle. Large-scale numerical wave-optics simulations prove that the analytically calculated fading pdfs fit the overall fading pdfs for both the focused and diverged beam cases. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile, and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with the heuristic fading pdf models proposed since the 1970s.
Although some of the previously proposed fading pdf models provide a close fit to the experimental and simulation data, these close fits exist only under particular conditions. Only the analytical method accurately fits the directly simulated fading pdfs under different turbulence strengths, propagation distances, receiver aperture sizes and pointing errors.
Abstract:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to evaluate reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and the Knox-Thompson phase recovery methods is also compared.
As an outcome of this work it can be concluded that speckle imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
Abstract:
Free space optical (FSO) communication links can experience extreme signal degradation due to atmospheric turbulence induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how this affects crucial performance parameters like the bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser after propagation through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulated distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam, adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime, is investigated.
The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, due to the non-reciprocal nature of the propagation path, the optimal beam to transmit is unknown. These results show that a low order of AO complexity provides a better estimate of the optimal beam to transmit than a higher order for non-reciprocal paths. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulation and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture for an increasing link distance.
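The gamma-gamma model compared against the simulations above has a standard closed form (for unit mean irradiance, with α and β the effective large- and small-scale scintillation parameters). A sketch with illustrative, unfitted parameter values:

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf for unit mean irradiance.

    alpha, beta: effective large- and small-scale scintillation
    parameters. The values used below are illustrative only, not
    fitted to any particular turbulence channel.
    """
    I = np.asarray(I, dtype=float)
    ab = alpha * beta
    norm = 2.0 * ab ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    # K_{alpha-beta} is the modified Bessel function of the second kind
    return (norm * I ** ((alpha + beta) / 2.0 - 1.0)
            * kv(alpha - beta, 2.0 * np.sqrt(ab * I)))

# sanity check: a pdf should integrate to ~1 over the irradiance axis
x = np.linspace(1e-6, 30.0, 200_000)
mass = np.trapz(gamma_gamma_pdf(x, 4.0, 2.0), x)
```

In fitting work like that described above, α and β are usually tied to the scintillation index of the channel rather than chosen freely.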
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Military and civilian surveillance, gun-sighting, and target identification applications call for terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate and severe turbulence conditions. Each set consisted of 1000 simulated, turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each, for low, moderate and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions.
The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent and 27 percent are available using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent and 33 percent respectively for the MFBD method with 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and smaller variance in the estimate under all three simulated turbulence conditions.
Abstract:
In this study, we present the wintertime surface energy balance at a polygonal tundra site in northern Siberia based on independent measurements of the net radiation, the sensible heat flux and the ground heat flux from two winter seasons. The latent heat flux is inferred from measurements of the atmospheric turbulence characteristics and a model approach. The long-wave radiation is found to be the dominant factor in the surface energy balance. The radiative losses are balanced to about 60% by the ground heat flux and almost 40% by the sensible heat flux, whereas the contribution of the latent heat flux is small. The main controlling factors of the surface energy budget are the snow cover, the cloudiness and the soil temperature gradient. Large spatial differences in the surface energy balance are observed between tundra soils and a small pond. The ground heat flux released by a freezing pond is a factor of two higher than that of the freezing soil, whereas large differences in net radiation between the pond and the soil are only observed at the end of the winter period. Differences in the surface energy balance between the two winter seasons are found to be related to differences in snow depth and cloud cover, which strongly affect the temperature evolution and the freeze-up at the investigated pond.
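The partitioning described above follows the bookkeeping of the balance Rn = H + LE + G. A toy sketch with made-up winter values loosely echoing the reported ~60% ground / ~40% sensible split (the sign convention is an assumption here, and the thesis infers the latent heat flux from a model approach, not as a pure residual):

```python
def energy_balance_residual(net_radiation, sensible_heat, ground_heat):
    """Residual of the surface energy balance Rn = H + LE + G,
    i.e. the latent heat flux LE if the balance closes exactly.
    All fluxes in W m^-2; fluxes directed away from the surface
    are counted positive (an assumed convention)."""
    return net_radiation - sensible_heat - ground_heat

# Made-up winter values: a 50 W m^-2 radiative loss balanced ~60% by
# the ground heat flux and ~40% by the sensible heat flux leaves only
# a small latent heat flux, as in the abstract.
residual = energy_balance_residual(-50.0, -19.0, -30.0)
```

With these illustrative numbers the residual is about -1 W m^-2, i.e. the latent heat contribution is small compared to the other terms.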
Abstract:
In this thesis, we introduce the innovative concept of a plenoptic sensor that can determine the phase and amplitude distortion in a coherent beam, for example a laser beam that has propagated through the turbulent atmosphere. The plenoptic sensor can be applied to situations involving strong or deep atmospheric turbulence. This can improve free space optical communications by maintaining optical links more intelligently and efficiently. Also, in directed energy applications, the plenoptic sensor and its fast reconstruction algorithm can give instantaneous instructions to an adaptive optics (AO) system to create intelligent corrections in directing a beam through atmospheric turbulence. The hardware of the plenoptic sensor uses an objective lens and a microlens array (MLA) to form a mini "Keplerian" telescope array that shares the common objective lens. In principle, the objective lens helps to detect the phase gradient of the distorted laser beam and the MLA helps to retrieve the geometry of the distorted beam in the various gradient segments. The software layer of the plenoptic sensor is developed according to the application. Intuitively, since the device maximizes the observation of the light field in front of the sensor, different algorithms can be developed, such as detecting atmospheric turbulence effects as well as retrieving undistorted images of distant objects. Efficient 3D simulations of atmospheric turbulence based on geometric optics have been established to help us optimize the system design and verify the correctness of our algorithms. A number of experimental platforms have been built to implement the plenoptic sensor in various application concepts and show its improvements when compared with traditional wavefront sensors. As a result, the plenoptic sensor brings a revolution to the study of atmospheric turbulence and generates new approaches to handle turbulence effects better.
Abstract:
Long, laminar plasma jets of pure argon and of an argon-nitrogen mixture at atmospheric pressure, with jet lengths up to 45 times the jet diameter, could be generated with a DC arc torch by restricting the movement of the arc root in the torch channel. The effects of torch structure, gas feeding, and power supply characteristics on the length of the plasma jets were experimentally examined. Plasma jets of considerable length and excellent stability could be obtained by regulating the generating parameters, including arc channel geometry, gas flow rate, and feeding methods. The influence of flow turbulence at the torch nozzle exit on the temperature distribution of the plasma jets was numerically simulated. The analysis indicated that a laminar plasma flow with very low initial turbulent kinetic energy produces a long jet with a low axial temperature gradient. This kind of long laminar plasma jet could greatly improve controllability for materials processing, compared with a short turbulent arc jet.
Abstract:
Magnetic sensors have been added to a standard weather balloon radiosonde package to detect motion in turbulent air. These measure the terrestrial magnetic field and return data over the standard UHF radio telemetry. Variability in the magnetic sensor data is caused by motion of the instrument package. A series of radiosonde ascents carrying these sensors has been made near a Doppler lidar measuring atmospheric properties. Lidar-retrieved quantities include the vertical velocity (w) profile and its standard deviation (σ_w). σ_w determined over 1 h is compared with the radiosonde motion variability at the same heights. Vertical motion of the radiosonde is found to be robustly increased when σ_w > 0.75 m s−1 and is linearly proportional to σ_w. ©2009 American Institute of Physics
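The lidar quantity compared against radiosonde motion above reduces to a standard deviation of vertical velocity over an averaging window, plus the 0.75 m s−1 threshold. A minimal sketch (windowing and quality control omitted; not the authors' retrieval chain):

```python
import numpy as np

def sigma_w(w, threshold=0.75):
    """Standard deviation of vertical velocity (m/s) over one
    averaging window, with a flag for exceeding the 0.75 m/s level
    at which radiosonde motion variability responds robustly."""
    s = float(np.std(np.asarray(w, dtype=float)))
    return s, s > threshold
```

In the study the window is 1 h of lidar data at each height; here any sequence of w samples stands in for that window.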
Abstract:
In this work, the turbulence in the atmospheric boundary layer under convective conditions is modelled. To this end, the equations that describe the atmospheric motion are expressed through Reynolds averages and therefore require closures. This work consists of modifying the TKE-l closure used in the BOLAM (Bologna Limited Area Model) forecast model. In particular, the single-column model extracted from BOLAM is used and modified to obtain three further closure schemes: a non-local term is added to the flux-gradient relations used to close the second-order moments present in the evolution equation of the turbulent kinetic energy, so that the flux-gradient relations become more suitable for simulating an unstable boundary layer. Furthermore, a comparison is made among the results obtained from the single-column model, those obtained from the three new schemes, and the observations provided by the well-known "GABLS2" case from the literature.
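A non-local term in a flux-gradient relation is typically a counter-gradient correction of the Deardorff type, w'θ' = −K_h(∂θ/∂z − γ). The abstract does not give the exact form used in the thesis, so the sketch below is generic, with an illustrative γ value:

```python
import numpy as np

def countergradient_heat_flux(theta, z, K_h, gamma=7e-4):
    """Kinematic heat flux from a flux-gradient relation with a
    non-local (counter-gradient) term:

        w'theta' = -K_h * (d(theta)/dz - gamma)

    gamma ~ 7e-4 K/m is the classic Deardorff counter-gradient value;
    it is used here for illustration, not as the thesis' actual term.
    """
    theta = np.asarray(theta, dtype=float)
    z = np.asarray(z, dtype=float)
    dtheta_dz = np.gradient(theta, z)  # vertical potential-temperature gradient
    return -K_h * (dtheta_dz - gamma)
```

With γ > 0 the scheme can carry heat upward even through a layer where the local gradient is zero or slightly stable, which is exactly what makes such relations better suited to an unstable boundary layer.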