6 results for measuring capabilities
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement for 8m-class telescopes: to reach the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. AO systems let us exploit the full potential of these instruments, extracting as much information as possible from astronomical sources. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). The REC requires a "common language" between these two main AO components, i.e. a mapping between sensor space and mirror space, called an interaction matrix (IM). To operate correctly, therefore, an AO system has one main requirement: the measurement of an IM, which provides a calibration of the whole system. The IM measurement is a milestone for any AO system and must be done regardless of telescope size or class. Usually this calibration step is performed by adding to the telescope an auxiliary artificial light source (e.g. a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For larger telescopes (more than 8m, such as Extremely Large Telescopes, ELTs) the fiber-based IM measurement requires challenging optical setups that in some cases are impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique can be used to calibrate the AO system of a telescope of any size.
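To make the role of the IM concrete, here is a minimal numerical sketch, not taken from the thesis itself: the IM maps mirror commands to sensor measurements, and a simple reconstructor is its least-squares pseudo-inverse, which maps measured slopes back to commands. All dimensions below are toy values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_slopes, n_actuators = 100, 20   # toy dimensions, far smaller than a real AO system
IM = rng.standard_normal((n_slopes, n_actuators))  # interaction matrix: commands -> slopes

# Reconstructor: least-squares pseudo-inverse of the IM (slopes -> commands)
REC = np.linalg.pinv(IM)

# A wavefront distortion expressed in mirror commands, and the slopes the WFS would see
true_commands = rng.standard_normal(n_actuators)
slopes = IM @ true_commands

# The reconstructor recovers the commands; the DM then applies their opposite
estimated = REC @ slopes
print(np.allclose(estimated, true_commands))  # True (noise-free, well-conditioned case)
```

In a real system the slopes are noisy and the pseudo-inverse is usually regularized (e.g. by truncating poorly sensed modes), but the sensor-space/mirror-space mapping is exactly what the IM calibration must measure.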
We test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and the testing of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to the telescope one, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD thesis: the measurement of the IM applying the sinusoidal modulation technique. At first we measured the IM using an auxiliary fiber source to calibrate the system, without any disturbance injected. After that, we tried to use this calibration technique to measure the IM directly "on sky", i.e. adding an atmospheric disturbance to the AO system. The results obtained in this PhD work measuring the IM directly in the Arcetri solar tower system are crucial for future development: the possibility of acquiring the IM directly on sky means that we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible. Finally, we must not forget why we need this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to deepen our knowledge of the objects we observe, resolving finer details and thereby discovering, analyzing and understanding the behavior of the components of the universe.
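The core idea behind a sinusoidal modulation calibration is that each mirror mode is driven with a sinusoid at a known frequency, and its IM column is recovered by synchronously demodulating the sensor signals; uncorrelated disturbance (such as atmospheric turbulence) averages out. The single-mode sketch below is an illustrative toy, with assumed frequencies and amplitudes, not the LBT pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1000.0          # loop frame rate [Hz] (assumed toy value)
n = 2000             # number of frames (an integer number of modulation periods)
t = np.arange(n) / fs
f_mod = 50.0         # modulation frequency of the probed mode [Hz]
amp = 0.3            # modulation amplitude (mirror units)

im_column = np.array([0.8, -0.5, 1.2, 0.0, 0.4])   # true sensor response to this mode

# Sensor signals: modulated response plus an uncorrelated "atmospheric" disturbance
command = amp * np.sin(2 * np.pi * f_mod * t)
disturbance = 0.5 * rng.standard_normal((n, im_column.size))
signals = np.outer(command, im_column) + disturbance

# Synchronous demodulation: project each sensor channel onto the known sinusoid.
# <sin^2> = 1/2 over whole periods, so dividing by amp*n/2 recovers the column.
reference = np.sin(2 * np.pi * f_mod * t)
estimated = (reference @ signals) / (amp * n / 2)

print(np.round(estimated, 2))  # close to the true column [0.8, -0.5, 1.2, 0.0, 0.4]
```

Longer integration (larger `n`) suppresses the disturbance term further, which is what makes an on-sky IM measurement feasible in the presence of turbulence.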
Abstract:
Adaptive Optics is the real-time measurement and correction of the wavefront aberration of starlight caused by atmospheric turbulence, which limits the angular resolution of ground-based telescopes and thus their ability to deeply explore faint and crowded astronomical objects. The lack, over a relevant fraction of the sky, of natural stars bright enough to be used as reference sources for Adaptive Optics led to the introduction of artificial reference stars. The so-called Laser Guide Stars are produced by exciting, with a powerful laser beam projected toward the sky, the Sodium atoms in a layer lying at 90km of altitude. The possibility of turning on a reference star close to the scientific targets of interest comes with the drawback of increased difficulty in wavefront measurement, mainly due to the time instability of the Sodium layer density. These issues grow with the telescope diameter. In view of the construction of the 42m diameter European Extremely Large Telescope, a detailed investigation of the achievable Adaptive Optics performance becomes mandatory in order to exploit its unique angular resolution. The goal of this Thesis was to present a complete description of the development of a laboratory Prototype simulating a Shack-Hartmann wavefront sensor using Laser Guide Stars as references, under the conditions expected for a 42m telescope. From the conceptual design, through the opto-mechanical design, to the Assembly, Integration and Test, all the phases of the Prototype construction are explained. The tests carried out showed the reliability of the images produced by the Prototype, which agreed with the numerical simulations. For this reason, some possible upgrades of the opto-mechanical design are presented, to extend the system's functionality and let the Prototype become a more complete test bench to simulate performance and drive the design of future Adaptive Optics modules.
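For readers unfamiliar with the sensor being prototyped: a Shack-Hartmann WFS samples the pupil with a lenslet array, and the local wavefront slope in each subaperture is inferred from the displacement of the spot centroid from its reference position. A toy center-of-gravity computation (illustrative only, not the Prototype's actual reduction pipeline):

```python
import numpy as np

def centroid(spot):
    """Center of gravity of a subaperture image, in pixels from the image center."""
    spot = np.asarray(spot, dtype=float)
    ny, nx = spot.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = spot.sum()
    cx = (x * spot).sum() / total - (nx - 1) / 2
    cy = (y * spot).sum() / total - (ny - 1) / 2
    return cx, cy  # proportional to the local wavefront slope (up to calibration)

# A symmetric 2x2 spot shifted one pixel to the right of an 8x8 subaperture center
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0   # spot center sits at (x=4.5, y=3.5); image center is (3.5, 3.5)
cx, cy = centroid(img)
print(cx, cy)  # 1.0 0.0
```

With elongated Laser Guide Star spots, as studied in this Thesis, the simple center of gravity degrades and more robust centroiding schemes are needed, which is one reason laboratory validation against simulations matters.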
Abstract:
The objective of this work is the evaluation of the potential of navigation satellite signals to retrieve basic atmospheric parameters. A detailed study has been performed on the assumptions more or less explicitly contained in the common processing steps of navigation signals. A probabilistic procedure has been designed for measuring vertical discretised profiles of pressure, temperature and water vapour, together with their associated errors. Numerical experiments on a synthetic dataset have been performed with the main objective of quantifying the information that could be gained from such an approach, using entropy and relative entropy as test parameters. To this aim, a simulator of the phase delay and bending of a GNSS signal travelling across the atmosphere has been developed.
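Relative entropy (the Kullback-Leibler divergence of the posterior from the prior) is a standard measure of the information a measurement adds to a retrieval. A minimal discrete example, with illustrative distributions that are not the thesis's data:

```python
import numpy as np

def relative_entropy(posterior, prior):
    """Kullback-Leibler divergence D(posterior || prior) in bits."""
    p = np.asarray(posterior, dtype=float)
    q = np.asarray(prior, dtype=float)
    mask = p > 0                       # terms with p=0 contribute zero
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Prior: broad uncertainty over 4 discretised bins of, say, temperature
prior = np.array([0.25, 0.25, 0.25, 0.25])
# Posterior after assimilating the observation: sharpened toward one bin
posterior = np.array([0.05, 0.80, 0.10, 0.05])

gain = relative_entropy(posterior, prior)
print(round(gain, 3))  # information gained by the measurement, in bits
```

A gain of zero would mean the observation told us nothing beyond the prior; larger values quantify how much the retrieved profile is constrained by the signal.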
Abstract:
This study focuses on the processes of change that firms undertake to overcome conditions of organizational rigidity and develop new dynamic capabilities, thanks to the contribution of external knowledge. When external contingencies highlight firms' core rigidities, external actors can intervene in change projects, providing new competences to firms' managers. Knowledge transfer and organizational learning processes can lead to the development of new dynamic capabilities. The existing literature does not completely explain how these processes develop and how external knowledge providers, such as management consultants, influence them. The dynamic capabilities literature has become very rich in recent years; however, the models that explain how dynamic capabilities evolve have not been thoroughly investigated. Adopting a qualitative approach, this research proposes four relevant case studies in which external actors introduce new knowledge within organizations, activating processes of change. Each case study consists of a management consulting project. Data were collected through in-depth interviews with consultants and managers. A large body of documents supports the evidence from the interviews. A narrative approach is adopted to account for the change processes, and a synthetic approach is proposed to compare the case studies along relevant dimensions. This study presents a model of capabilities evolution, supported by empirical evidence, to explain how external knowledge intervenes in capabilities evolution processes: first, external actors close gaps between environmental demands and firms' capabilities, changing organizational structures and routines; second, a knowledge transfer between consultants and managers leads to the creation of new ordinary capabilities; third, managers can develop new dynamic capabilities through a deliberate learning process that internalizes new tacit knowledge from consultants.
After the end of the consulting project, two elements can influence the deliberate learning process: new external contingencies and changes in the perceptions about external actors.
Abstract:
Because of its aberrant activation, the PI3K/AKT/mTOR signaling pathway represents a pharmacological target in blast cells from patients with acute myelogenous leukemia (AML). Using Reverse Phase Protein Microarrays (RPMA), we have analyzed 20 phosphorylated epitopes of the PI3K/Akt/mTOR signaling pathway in peripheral blood and bone marrow specimens from 84 patients with newly diagnosed AML. Fresh blast cells were grown for 2 h, 4 h or 20 h, either untreated or treated with a panel of phase I or phase II Akt allosteric inhibitors, alone or in combination with the mTOR kinase inhibitor Torin1 or the broad RTK inhibitor Sunitinib. By unsupervised hierarchical clustering, strong phosphorylation/activity of most of the sampled members of the PI3K/Akt/mTOR pathway was observed in 70% of the samples from AML patients. Remarkably, however, we observed that inhibition of Akt phosphorylation, as well as of its substrates, was transient, and recovered or even increased far above basal level after 20 h in 60% of samples. We demonstrated that inhibition of Akt induces FOXO-dependent insulin receptor expression and IRS-1 activation, attenuating the effect of drug treatment through reactivation of PI3K/Akt. Consistent with this model, we found that combined inhibition of Akt and RTKs is much more effective than either alone, revealing the adaptive capabilities of signaling networks in blast cells and highlighting the limitations of these drugs when used as monotherapy.
Abstract:
In recent years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and their operational application is becoming more frequent. The fact that many NWP centres have recently taken into operation convection-permitting forecast models, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is required to prevent radar errors from degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modeled only with the highest-resolution NWP models, such as the COSMO-2 model. One of the problems of modeling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of about a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.
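The essence of latent heat nudging is to scale the model's vertical latent-heating profile by the ratio of radar-observed to model-predicted precipitation rate, so that the model's convection is pushed toward the observed rain field. The sketch below is a deliberately simplified illustration of that scaling step, with assumed clipping bounds and toy numbers, not the COSMO-2 implementation:

```python
import numpy as np

def lhn_scaling(model_heating, model_rain, observed_rain,
                min_ratio=0.33, max_ratio=3.0):
    """Scale a vertical latent-heating profile by the ratio of observed to
    model precipitation rate, clipping the ratio to avoid extreme corrections
    (toy bounds, assumed for this sketch)."""
    if model_rain <= 0:
        return model_heating  # no model rain: leave the profile unchanged here
    ratio = np.clip(observed_rain / model_rain, min_ratio, max_ratio)
    return model_heating * ratio

# Toy vertical heating profile [K/h] and precipitation rates [mm/h]
heating = np.array([0.0, 0.5, 1.2, 0.8, 0.2])
scaled = lhn_scaling(heating, model_rain=2.0, observed_rain=4.0)
print(scaled)  # profile doubled: the model under-predicted rain by a factor of 2
```

This is exactly where the proposed radar quality description matters: a poor-quality observed rain rate fed into such a scaling would systematically distort the heating, and hence the forecast.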