960 results for Calibration curves


Relevance:

20.00%

Publisher:

Abstract:

We present a new approach to performing calculations with certain standard classes in the cohomology of the moduli spaces of curves. It is based on an important lemma of Ionel relating the intersection theory of the moduli space of curves to that of the space of admissible coverings. As particular results, we obtain expressions of Hurwitz numbers in terms of intersections in the tautological ring, expressions of the simplest intersection numbers in terms of Hurwitz numbers, an algorithm for calculating certain correlators that are the subject of the Witten conjecture, an improved algorithm for intersections related to the Boussinesq hierarchy, expressions for the Hodge integrals over two-pointed ramification cycles, cut-and-join type equations for a large class of intersection numbers, etc.
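As an illustration of the kind of relation between Hurwitz numbers and tautological intersection numbers involved here, the classical ELSV formula (quoted for orientation, in one common normalization of the Hurwitz numbers, and not as a result of this paper) writes simple Hurwitz numbers as Hodge integrals:

\[
h_{g;\mu} \;=\; \frac{\bigl(2g-2+n+|\mu|\bigr)!}{|\mathrm{Aut}(\mu)|}\,
\prod_{i=1}^{n}\frac{\mu_i^{\mu_i}}{\mu_i!}\,
\int_{\overline{\mathcal{M}}_{g,n}}
\frac{\Lambda_g^{\vee}(1)}{\prod_{i=1}^{n}\bigl(1-\mu_i\psi_i\bigr)},
\]

where \(\Lambda_g^{\vee}(1)=1-\lambda_1+\lambda_2-\dots+(-1)^g\lambda_g\) and \(\psi_i\), \(\lambda_j\) are the standard tautological classes on \(\overline{\mathcal{M}}_{g,n}\).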

Relevance:

20.00%

Publisher:

Abstract:

A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The main aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing the Depth-Duration Envelope Curves (DDEC), defined as the regional upper bound on all record rainfall depths observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration and large T values. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods in gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 hrs obtained for 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 hrs collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
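As a rough sketch of how a depth-duration envelope and a bound on its recurrence interval might be evaluated (illustrative only: the data are made up, full independence of station-years is assumed, and the plotting-position-type estimate below is not the estimator formulated in the cited papers, which must account for cross-correlation):

```python
import numpy as np

# Hypothetical record (maximum observed) rainfall depths [mm] per station and duration.
durations_h = np.array([0.5, 1.0, 3.0, 9.0, 24.0])   # rainfall durations [hours]
records = np.array([                                  # one row per raingauge
    [28.0, 41.0, 66.0, 95.0, 140.0],
    [35.0, 52.0, 80.0, 110.0, 150.0],
    [22.0, 30.0, 55.0, 88.0, 120.0],
])
years_per_station = np.array([40, 55, 30])            # record lengths [years]

# Depth-Duration Envelope Curve: regional upper bound of record depths per duration.
ddec = records.max(axis=0)

# Very crude recurrence-interval bound: if all station-years were independent,
# a plotting-position argument gives T ~ (n_eff + 1) years for the envelope value.
# In reality n_eff must be reduced to account for cross-correlation among stations.
n_eff = years_per_station.sum()                       # independence upper bound
T_upper = n_eff + 1

for d, depth in zip(durations_h, ddec):
    print(f"{d:5.1f} h : envelope depth {depth:6.1f} mm, T <= ~{T_upper} yr (independence bound)")
```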

Relevance:

20.00%

Publisher:

Abstract:

An amperometric glucose biosensor was developed using an anionic clay matrix (LDH) as enzyme support. The enzyme glucose oxidase (GOx) was immobilized on a layered double hydroxide Ni/Al-NO3 LDH during electrosynthesis, followed by crosslinking with glutaraldehyde (GA) vapours or with GA and bovine serum albumin (GABSA) to avoid enzyme release. The electrochemical reaction was carried out potentiostatically, at -0.9 V vs. SCE, using a rotating-disc Pt electrode to assure homogeneity of the electrodeposition suspension, which contained GOx, Ni(NO3)2 and Al(NO3)3 in 0.3 M KNO3. The mechanism responsible for the LDH electrodeposition involves precipitation of the LDH due to the increase of pH at the electrode surface, following the cathodic reduction of nitrates. The Pt surface modified with the Ni/Al-NO3 LDH shows much reduced noise, giving rise to a better signal-to-noise ratio for the currents relative to H2O2 oxidation, and a wider linear range for H2O2 determination than the one observed for bare Pt electrodes.

We characterised the performance of the biosensor in terms of sensitivity to glucose, calculated from the slope of the linear part of the calibration curve for enzymatically produced H2O2; the sensitivity depended on parameters related to the electrodeposition as well as on the working conditions. In order to optimise the glucose biosensor performance with a reduced number of experimental runs, we applied an experimental design. A first screening was performed considering the following variables: deposition time (30-120 s), enzyme concentration (0.5-3.0 mg/mL), Ni/Al molar ratio (3:1 or 2:1) of the electrodeposition solution at a total metal concentration of 0.03 M, and pH of the working buffer solution (5.5-7.0). On the basis of the results of this screening, a full factorial design was carried out, taking into account only the enzyme concentration and the Ni/Al molar ratio of the electrosynthesis solution, in order to study linear interactions between factors and their quadratic effects; the optimal setup was evaluated from the isoresponse curves. The significant factors were the enzyme concentration (linear and quadratic terms) and the interaction between enzyme concentration and Ni/Al molar ratio.

Since the major obstacle to the application of amperometric glucose biosensors is the interference signal from other electro-oxidizable species present in real matrices, such as ascorbate (AA), the use of different permselective membranes on the Pt-LDH-GOx modified electrode was discussed with the aim of improving biosensor selectivity and stability. Conventional membranes obtained using Nafion, glutaraldehyde (GA) vapours and GA-BSA were tested together with more innovative materials such as palladium hexacyanoferrate (PdHCF) and titania hydrogels. Particular attention was devoted to hydrogels, because they possess some attractive features which are generally considered to favour the biocompatibility of biosensor materials and, consequently, the functional stability of the enzyme. The Pt-LDH-GOx-PdHCF hydrogel biosensor showed sufficient anti-interferent ability to be applied to accurate glucose analysis in blood. To further improve the biosensor selectivity, protective membranes containing horseradish peroxidase (HRP) were also investigated, with the aim of oxidising the interferents before they reach the electrode surface. In this case glucose determination was also accomplished in real matrices with high AA content.
Furthermore, an LDH containing nickel in the oxidised state was employed not only as a support for the enzyme but also as an anti-interferent system. The result is very promising and could be the starting point for further applications in the field of amperometric biosensors; the study could be extended to other oxidase enzymes.
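For illustration, the sensitivity figure mentioned above is simply the slope of the linear portion of the calibration curve; a minimal sketch with made-up current/concentration values (not data from this work):

```python
import numpy as np

# Hypothetical calibration data: glucose concentration [mM] vs. steady-state current [uA].
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 16.0])
current = np.array([0.00, 0.41, 0.83, 1.65, 3.20, 4.70, 6.00, 7.60, 8.40])

# Keep only the linear part of the curve (here, up to 6 mM by inspection);
# the upper points bend over as the enzyme kinetics saturate.
linear = conc <= 6.0
slope, intercept = np.polyfit(conc[linear], current[linear], 1)

print(f"sensitivity ~ {slope:.2f} uA/mM (intercept {intercept:.2f} uA)")
```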

Relevance:

20.00%

Publisher:

Abstract:

This work presents the calibration and validation of an air quality finite element model applied to emissions from a thermal power plant located in Gran Canaria. The calibration is performed using genetic algorithms. To calibrate and validate the model, the authors use empirical measurements of pollutant concentrations from 4 stations located near the power plant; an hourly record per station over 3 days is available. Measurements from 3 stations are used for calibration, while validation uses the measurements from the remaining station…
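A minimal sketch of what a genetic-algorithm calibration loop of this kind can look like (illustrative only: the parameter vector, placeholder model, objective function and GA settings below are assumptions, not those of the actual finite element model):

```python
import numpy as np

rng = np.random.default_rng(1)

def model_concentrations(params: np.ndarray) -> np.ndarray:
    """Placeholder for the air-quality model run: returns hourly concentrations
    (72 hours) at the calibration stations for a given parameter vector."""
    hours = np.arange(72)[:, None]
    return params[0] * np.exp(-params[1] * hours / 24.0) + params[2]

observed = model_concentrations(np.array([50.0, 1.2, 5.0]))   # synthetic "measurements"

def fitness(params: np.ndarray) -> float:
    """Negative RMSE between modelled and measured concentrations (higher is better)."""
    return -float(np.sqrt(np.mean((model_concentrations(params) - observed) ** 2)))

pop_size, n_params, n_gen = 30, 3, 100
pop = rng.uniform([1, 0.1, 0], [100, 5, 20], size=(pop_size, n_params))

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_params) < 0.5, a, b)      # uniform crossover
        child += rng.normal(scale=0.05 * np.abs(child) + 1e-3)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("calibrated parameters:", best, " RMSE:", -fitness(best))
```

In the real setting the fitness evaluation would run the finite element model over the 3 calibration stations, and the held-out fourth station would be used only for validation.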

Relevance:

20.00%

Publisher:

Abstract:

This work presents the calibration and validation of an air quality finite element model applied to the surroundings of the Jinamar electric power plant on the island of Gran Canaria (Spain). The model involves the generation of an adaptive tetrahedral mesh, the computation of an ambient wind field, the inclusion of the plume rise effect in the wind field, and the simulation of the transport and reaction of pollutants. The main advantage of the model is its treatment of complex terrain, which offers an alternative to the standard implementation of current models. In addition, it improves the computational cost through the use of unstructured meshes...
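The transport-and-reaction step referred to above is typically a finite element discretization of an advection-diffusion-reaction equation of the generic form below (shown only for orientation; this is an assumed generic formulation, not the specific one used in the work):

\[
\frac{\partial c_i}{\partial t} + \mathbf{u}\cdot\nabla c_i
= \nabla\cdot\bigl(K\,\nabla c_i\bigr) + R_i(c_1,\dots,c_N) + S_i ,
\]

where \(c_i\) is the concentration of pollutant \(i\), \(\mathbf{u}\) the ambient wind field (including the plume-rise correction), \(K\) the turbulent diffusivity, \(R_i\) the chemical reaction term and \(S_i\) the emission source.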

Relevance:

20.00%

Publisher:

Abstract:

This thesis is focused on the financial model for interest rates called the LIBOR Market Model. The appendices provide the necessary mathematical theory. In the main chapters we first define the principal interest rates and financial instruments relevant to interest-rate models; we then set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR rates. This model, too, is justified by a theoretical derivation, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. We therefore derive various analytical approximation formulae for pricing swaptions in the LIBOR market model and explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets of both caps and swaptions, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
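The Black cap pricing referred to above rests on the standard caplet formula; in generic notation (forward LIBOR \(F_i(0)\) for the period \([T_{i-1},T_i]\), strike \(K\), year fraction \(\tau_i\), caplet volatility \(\sigma_i\) — symbols chosen here for illustration, not taken from the thesis):

\[
\mathrm{Cpl}(0;T_{i-1},T_i) = \tau_i\,P(0,T_i)\,\bigl[F_i(0)\,\Phi(d_1) - K\,\Phi(d_2)\bigr],
\qquad
d_{1,2} = \frac{\ln\bigl(F_i(0)/K\bigr) \pm \tfrac{1}{2}\sigma_i^2 T_{i-1}}{\sigma_i\sqrt{T_{i-1}}},
\]

where \(P(0,T_i)\) is the discount factor and \(\Phi\) the standard normal CDF. The swaption counterpart has the same form, with the forward swap rate in place of the forward LIBOR and the annuity in place of \(\tau_i P(0,T_i)\).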

Relevance:

20.00%

Publisher:

Abstract:

An Adaptive Optics (AO) system is a fundamental requirement of 8m-class telescopes. To obtain the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we can exploit the full potential of these instruments and extract as much information as possible from the sources observed. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do this, the REC requires a "common language" between the two main AO components, i.e. a mapping between sensor space and mirror space, called the interaction matrix (IM). An AO system therefore has a key requirement for correct operation: the measurement of an IM, which provides the calibration of the whole system. The IM measurement is a milestone for any AO system and must be carried out regardless of the telescope size or class. Usually this calibration step is performed by adding an auxiliary artificial light source (e.g. a fiber) to the telescope that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For very large telescopes (more than 8 m, such as the Extremely Large Telescopes, ELTs) a fiber-based IM measurement requires challenging optical setups that are in some cases impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate the possibility of a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source; such a technique can be used to calibrate an AO system on a telescope of any size. We test this new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators and a pyramid wavefront sensor.

The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), carrying out both optical alignment and tests of some optical components. Thanks to the "solar tower" facility of the Arcetri Astrophysical Observatory (Firenze), we were able to reproduce an environment very similar to that of the telescope and to test the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD thesis: the measurement of the IM using the sinusoidal modulation technique. First we measured the IM using an auxiliary fiber source to calibrate the system, without any disturbance injected. We then applied the same technique to measure the IM directly "on sky", i.e. with an atmospheric disturbance added to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower system, are crucial for future developments: being able to acquire the IM directly on sky means we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible.
Finally, we should not forget why we need all this: the ultimate aim is to observe the universe. Only by using the full capabilities of this new class of large telescopes will we be able to increase our knowledge of the objects we observe, resolving finer details and thereby discovering, analysing and understanding the behaviour of the universe's components.
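A minimal numerical sketch of the idea behind a sinusoidal-modulation IM measurement, under simplifying assumptions (linear WFS response, a distinct modulation frequency per mode, white measurement noise standing in for the disturbance); the sizes, frequencies and amplitudes are illustrative, not the LBT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_modes, n_slopes, n_steps = 5, 40, 2000
dt = 1e-3                                   # WFS frame time [s], illustrative
t = np.arange(n_steps) * dt

# "True" interaction matrix (slopes per unit mode amplitude), unknown in practice.
IM_true = rng.normal(size=(n_slopes, n_modes))

# Each mirror mode is driven by a small sinusoid at its own frequency.
freqs = 20.0 + 10.0 * np.arange(n_modes)    # Hz, distinct per mode
amp = 0.1                                   # modulation amplitude (arbitrary units)
drive = amp * np.sin(2 * np.pi * freqs[:, None] * t)          # (n_modes, n_steps)

# Simulated WFS slopes: linear response plus disturbance/noise.
slopes = IM_true @ drive + 0.5 * rng.normal(size=(n_slopes, n_steps))

# Lock-in demodulation: project the slope time series on each drive sinusoid.
# For a pure sinusoid, mean(s(t) * sin(2*pi*f_k*t)) = 0.5 * amp * IM[:, k].
IM_est = np.empty_like(IM_true)
for k in range(n_modes):
    ref = np.sin(2 * np.pi * freqs[k] * t)
    IM_est[:, k] = 2.0 / amp * (slopes @ ref) / n_steps

# Reconstructor: (regularized) pseudo-inverse of the estimated IM.
REC = np.linalg.pinv(IM_est, rcond=1e-3)

print("IM relative error:", np.linalg.norm(IM_est - IM_true) / np.linalg.norm(IM_true))
print("reconstructor shape:", REC.shape)
```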

Relevance:

20.00%

Publisher:

Abstract:

Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at estimating planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft and coherently retransmitted back to the ground. If the solar corona and interplanetary plasma noise is already removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based on a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the need to achieve an end-to-end Allan deviation of the radio link of the order of 3×10⁻¹⁵ at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. In order to satisfy the stringent MORE requirements, the short-time-scale variations of the Earth troposphere water vapour content must be calibrated at the ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded to media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration techniques needed to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables and error sources, Chapter 2 investigates the troposphere path delay in detail, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 deals with an analysis of the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of the GNSS software (S/W) developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development of the S/W a test campaign was undertaken to evaluate its performance.
A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments; the analysis was carried out considering radiometric measurements from the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the MORE requirements. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development of future instrumentation.
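As an example of the kind of surface-meteorology-based path-delay model mentioned above, the standard Saastamoinen zenith hydrostatic delay can be computed from surface pressure alone; this is shown purely as an illustration and is not claimed to be the exact ESA/ESTRACK algorithm:

```python
import math

def zenith_hydrostatic_delay(p_hpa: float, lat_rad: float, height_km: float) -> float:
    """Saastamoinen zenith hydrostatic delay [m] from surface pressure.

    p_hpa     : surface pressure in hPa
    lat_rad   : geodetic latitude in radians
    height_km : station height in km
    """
    # Gravity correction term for the station location.
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * height_km
    return 0.0022768 * p_hpa / f

# Example: 1013.25 hPa at 35 deg latitude, 0.6 km altitude -> roughly 2.3 m of delay.
print(zenith_hydrostatic_delay(1013.25, math.radians(35.0), 0.6))
```

The wet component, driven by the highly variable water vapour content, is the part that motivates the GNSS processing and microwave radiometers discussed in the thesis.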

Relevance:

20.00%

Publisher:

Abstract:

This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points for each connected component of the intersection curve are determined. The required computations are realised efficiently, by solving polynomials of degree at most four, and exactly, by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of the curve are prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the true intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of algebraic equations of high degree is avoided.
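For reference (a standard fact, not a result specific to this thesis), a torus with major radius \(R\) and minor radius \(r\), centred at the origin with the \(z\)-axis as its symmetry axis, is the zero set of a degree-four polynomial:

\[
\bigl(x^2 + y^2 + z^2 + R^2 - r^2\bigr)^2 \;=\; 4R^2\bigl(x^2 + y^2\bigr),
\]

which is the algebraic starting point for both the classification of the intersection curve and the numerical tracing.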

Relevance:

20.00%

Publisher:

Abstract:

Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations.

The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S = 0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study.

The mean CCN number concentrations (N_CCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm⁻³ at S = 0.068% to 16 000 cm⁻³ at S = 1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of the organic and inorganic aerosol mass fractions. At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of N_CCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ = 0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more.

The measurement and model results demonstrate that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for an efficient description of the CCN activity of atmospheric aerosols.
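The κ-Köhler parameterization referred to above, in the standard form of Petters and Kreidenweis (2007), relates the water-vapour saturation ratio S over a solution droplet of wet diameter D grown from a dry particle of diameter D_d to the hygroscopicity parameter κ:

\[
S(D) = \frac{D^3 - D_d^3}{D^3 - D_d^3\,(1-\kappa)}\,
\exp\!\left(\frac{4\,\sigma_{\mathrm{s/a}}\,M_w}{R\,T\,\rho_w\,D}\right),
\]

where \(\sigma_{\mathrm{s/a}}\) is the surface tension of the solution-air interface, \(M_w\) and \(\rho_w\) the molar mass and density of water, \(R\) the gas constant and \(T\) the temperature. The critical supersaturation underlying the activation curves corresponds to the maximum of \(S(D)\) over the wet diameter \(D\).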