842 results for consistency in indexing
Abstract:
This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in “common view mode”) and a Fast Waveform Digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis has been carried out with a blind method in order to guarantee the internal consistency and the goodness of each calibration measurement. The result is the most precise measurement performed with a terrestrial neutrino beam: the statistical accuracy achieved by OPERA is about 10 ns and the systematic error is about 20 ns.
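For orientation only (standard time-of-flight kinematics, not a quotation from the thesis), the quantity extracted from such a measurement is

\delta t = \mathrm{TOF}_c - \mathrm{TOF}_\nu, \qquad \frac{v - c}{c} \simeq \frac{c\,\delta t}{L},

where L ≈ 730 km is the baseline and TOF_c = L/c ≈ 2.44 ms; a combined timing uncertainty of a few tens of nanoseconds therefore corresponds to a sensitivity of a few parts in 10^6 on (v − c)/c.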
Abstract:
This thesis deals with the estimation of parameters in discrete-time ergodic Markov processes in general and in the CIR model in particular. The CIR model is a stochastic differential equation proposed by Cox, Ingersoll and Ross (1985) to describe the dynamics of interest rates. The problem addressed is the estimation of the parameters of the drift and diffusion coefficients from equidistant discrete observations of the CIR process. After a short introduction to the CIR model, we use the method of martingale estimating functions and estimating equations, investigated in particular by Bibby and Sørensen, to study the problem of parameter estimation in ergodic Markov processes in full generality. Following the work of Sørensen (1999), sufficient conditions (in the sense of regularity assumptions on the estimating function) are given for the existence, strong consistency and asymptotic normality of solutions of a martingale estimating equation. Applied to the special case of likelihood estimation, these conditions also ensure local asymptotic normality of the model. Furthermore, a simple criterion for Godambe-Heyde optimality of estimating functions is given, and it is sketched how this can be used to construct optimal estimating functions explicitly in important special cases. The general results are then applied to the discretized CIR model. We analyze several estimators for the drift and diffusion coefficients proposed by Overbeck and Rydén (1997), which are defined as solutions of quadratic martingale estimating functions, and compute the optimal element in this class. Finally, we generalize results of Overbeck and Rydén (1997) by showing the existence of a strongly consistent and asymptotically normal solution of the likelihood equation and by proving local asymptotic normality for the CIR model without restrictions on the parameter space.
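As a sketch of this setting (standard notation, not taken verbatim from the thesis), the CIR dynamics and a generic martingale estimating equation based on equidistant observations X_0, X_Δ, ..., X_{nΔ} read

dX_t = (\alpha - \beta X_t)\,dt + \sigma\sqrt{X_t}\,dW_t, \qquad
G_n(\theta) = \sum_{i=1}^{n} g\bigl(\Delta, X_{(i-1)\Delta}; \theta\bigr)\,\Bigl[X_{i\Delta} - \mathbb{E}_\theta\bigl(X_{i\Delta} \mid X_{(i-1)\Delta}\bigr)\Bigr] = 0,

where θ = (α, β, σ²) and g is a weight function; Godambe-Heyde optimality singles out the g that minimizes the asymptotic covariance of the estimator defined by G_n(θ̂_n) = 0 within the chosen class.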
Abstract:
The study carried out on production techniques for structural components in composite materials provided a precise picture of the state of the art in the sector, in particular with reference to the processes currently used for medium-to-large series industrialization. With the goal of combining the main advantages of these technologies and enabling the production of more complex shapes, a feasibility analysis was carried out, through a functional study and a preliminary design, of a production technology based on automated tape laying (nastratura) of structural components in composite material. The flexibility and consistency of the process were then demonstrated by designing a taped carbon-fibre chassis, interchangeable with the 2009 FSAE tubular steel frame (same engine mounting points, rear subframe mounting points and front suspension attachment points) and guaranteeing a substantial weight advantage at equal torsional stiffness. The characterization of this chassis was carried out by means of structural calculations, validated by experimental tests.
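For reference, the stiffness criterion used in such a comparison is the torsional stiffness of the frame,

K_t = \frac{M_t}{\Delta\theta},

i.e. the ratio between the applied torsional moment and the resulting relative twist between the front and rear suspension planes (typically quoted in Nm/deg); this is the general definition, not a value reported in the abstract.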
Abstract:
This thesis is mainly concerned with a model calculation for generalized parton distributions (GPDs). We calculate vector and axial GPDs for the N N and N Delta transitions in the framework of a light-front quark model. This requires the elaboration of a connection between transition amplitudes and GPDs. We provide the first quark model calculations for N Delta GPDs. The examination of transition amplitudes leads to various model-independent consistency relations. These relations are not exactly obeyed by our model calculation, since the use of the impulse approximation in the light-front quark model leads to a violation of Poincaré covariance. We explore the impact of this covariance breaking on the GPDs and form factors determined in our model calculation and find large effects. The reference-frame dependence of our results, which originates from the breaking of Poincaré covariance, can be eliminated by introducing spurious covariants. We extend this formalism in order to obtain frame-independent results from our transition amplitudes.
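For context (the standard definition, not the specific conventions of the thesis), the vector GPDs of the nucleon are introduced through the off-forward light-front matrix element

\int \frac{dz^-}{4\pi}\, e^{i x P^+ z^-} \Bigl\langle p' \Bigl| \bar{\psi}_q\bigl(-\tfrac{z}{2}\bigr)\, \gamma^+ \psi_q\bigl(\tfrac{z}{2}\bigr) \Bigr| p \Bigr\rangle \Big|_{z^+ = 0,\; \mathbf{z}_\perp = 0}
= \frac{1}{2P^+} \Bigl[ H^q(x,\xi,t)\, \bar{u}(p')\gamma^+ u(p) + E^q(x,\xi,t)\, \bar{u}(p')\, \frac{i\sigma^{+\nu}\Delta_\nu}{2M}\, u(p) \Bigr],

with P = (p + p')/2, Δ = p' − p, t = Δ² and ξ the skewness; an analogous decomposition with γ^+γ_5 defines the axial GPDs, and for the N Delta transition the outgoing Dirac spinor is replaced by a Rarita-Schwinger spinor, which enlarges the set of independent GPDs.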
Abstract:
Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Thus, visual depth has to be reconstructed from a number of different cues that are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross section, which were constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients. The degree of consistency among these cues was systematically varied. It turned out that the extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed for estimating perceptual weights from multiple-cue stimuli. Using the same multiple-observation task, the integration of stereoscopic disparity, shading, and texture gradients was examined in Experiment 4. It turned out that less reliable cues were downweighted in the combined percept. Moreover, a specific influence of cue consistency was revealed. Shading and disparity seemed to be processed interactively, while other cue combinations could be well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue combination model is defended in terms of the necessity for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
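For background, the traditional reliability-based scheme referred to above is the standard linear cue-combination rule

\hat{d} = \sum_i w_i\, \hat{d}_i, \qquad w_i = \frac{r_i}{\sum_j r_j}, \qquad r_i = \frac{1}{\sigma_i^2},

in which each cue's depth estimate is weighted by its relative reliability (inverse variance); this weighting is optimal only for conditionally independent cues, and it is exactly this independence assumption that the extended model relaxes by additionally taking cue consistency into account.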
Abstract:
The most significant consequence of air pollution is the increased concentration of ozone (O3) in the troposphere over the last 150 years. Ozone is a photochemical oxidant and a greenhouse gas which, as the most important precursor of the hydroxyl radical OH, strongly influences the oxidizing capacity of the atmosphere. To understand the oxidizing capacity of the atmosphere and its influence on climate, detailed knowledge of the photochemistry of ozone and of its precursors, the nitrogen oxides (NOx), in the troposphere is essential. This requires an understanding of the formation and destruction mechanisms of ozone and its precursors. The marine boundary layer (MBL), largely untouched by human activity, can be regarded as a region important for chemical ozone destruction. So far, however, hardly any trace gas measurements have been carried out in this region, and the photochemical processes taking place there are poorly characterized. Since about 70% of the Earth's surface is covered by oceans, the processes taking place in the marine boundary layer can be regarded as significant for the atmosphere as a whole, which makes a detailed investigation of this region worthwhile. In order to estimate the photochemical production and destruction of ozone and to quantify the influence of anthropogenic emissions on tropospheric ozone, up-to-date measurements of NOx in the pptv range are required for this region. The necessary measurements of NO, NO2, O3, J(NO2), J(O1D), HO2, OH, ROx and several meteorological parameters were carried out during the cruise of the French research vessel Marion-Dufresne across the southern Atlantic (28°S-57°S, 46°W-34°E) in March 2007. The NO and NO2 values recorded are the lowest measured to date. The data obtained during the campaign were examined for their agreement with the conditions of photochemical steady state (PSS). A deviation from PSS was found which, under conditions of low NOx concentrations (5 to 25 pptv), produces an unexpected trend in the Leighton ratio that depends on the NOx mixing ratio and the J(NO2) intensity. Significant deviations from the ratio occur as the J(NO2) intensity increases. These results show that the deviation from PSS does not occur at the minimum of the NOx concentrations and J(NO2) values, as suggested by previous theoretical studies, and can be interpreted as an indication of additional photochemical processes at higher J(NO2) values in a low-NOx system. The most important result of this investigation is the verification of the Leighton ratio, which is used to characterize the PSS, at very low NOx concentrations in the MBL. The findings of this thesis demonstrate that purely photochemical ozone destruction takes place under marine boundary layer conditions, with photolysis as its main cause during the day. Using the measured parameters, the critical NO level was estimated at values between 5 and 9 pptv, which is comparatively low with respect to previous studies. This may indicate that the ozone production/destruction potential of the southern Atlantic responds considerably more strongly to the availability of NO than is the case in other regions.
Within the framework of this thesis, a direct comparison of the measured species with the results of a 3-dimensional circulation model for the simulation of atmospheric chemical processes (EMAC) was also carried out along the exact ship track. To check the agreement of the measurement results with the current understanding of atmospheric radical chemistry, a steady-state point model was developed that uses the data obtained during the crossing for its calculations. A comparison between the measured and modelled ROx concentrations in a low-NOx environment shows that conventional theory is insufficient to reproduce the observations. The possible reasons for this and the resulting implications are discussed in this thesis.
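For reference, the photochemical steady state examined above is characterized by the Leighton ratio (standard definition, not quoted from the thesis):

\Phi = \frac{J(\mathrm{NO_2})\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]\,[\mathrm{O_3}]},

which equals unity when NO, NO2 and O3 are controlled solely by NO2 photolysis and the NO + O3 reaction; deviations from Φ = 1 indicate additional NO-to-NO2 conversion, for example by peroxy radicals, or other missing chemistry.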
Abstract:
The Thermodynamic Bethe Ansatz (TBA) analysis is carried out for the extended CP^N class of integrable 2-dimensional non-linear sigma models related to the low-energy limit of the AdS_4 x CP^3 type IIA superstring theory. The principal aim of this program is to obtain a further non-perturbative consistency check of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits into the known classification but shows several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities makes it possible to conjecture a perturbed-CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
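For background (standard TBA relations, not specific to this model), the UV quantities mentioned above follow from the pseudoenergies ε_a(θ) solving

\epsilon_a(\theta) = m_a R \cosh\theta - \sum_b \int \frac{d\theta'}{2\pi}\, \phi_{ab}(\theta - \theta')\, \ln\bigl(1 + e^{-\epsilon_b(\theta')}\bigr), \qquad
c_{\mathrm{eff}} = \lim_{R \to 0}\, \frac{3}{\pi^2} \sum_a \int d\theta\, m_a R \cosh\theta\, \ln\bigl(1 + e^{-\epsilon_a(\theta)}\bigr),

where R is the compactification radius and φ_{ab} = -i ∂_θ ln S_{ab}; the small-R behaviour of the ground-state energy also encodes the conformal dimension of the perturbing operator.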
Abstract:
Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). The data of two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs are outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well-Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for selecting the integration step are discussed. Absolute and relative dispersion experiments are carried out with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
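A minimal sketch (illustrative only, not the thesis code) of a Markov order-0 random-displacement step consistent with an eddy-diffusivity closure K(z); the drift term proportional to ∂K/∂z is what enforces the Well-Mixed Condition, which is why an interpolation that keeps K and its derivative analytically consistent matters. The diffusivity profile below is hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def random_displacement_step(z, K, dKdz, dt):
    """One Markov order-0 (random-displacement) step in the vertical.
    z: particle heights, K: eddy diffusivity at z, dKdz: its vertical derivative,
    dt: time step. The dKdz drift term is the correction required by the WMC."""
    return z + dKdz * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(z.shape)

# hypothetical diffusivity profile K(z) = kappa * u_star * z (surface-layer-like)
kappa, u_star, dt = 0.4, 0.3, 1.0
z = np.full(10_000, 50.0)                                  # release height [m]
for _ in range(3600):
    K = kappa * u_star * z
    dKdz = np.full_like(z, kappa * u_star)
    z = np.abs(random_displacement_step(z, K, dKdz, dt))   # reflect particles at the ground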
Abstract:
This thesis work is devoted to the conceptual and technical development of the Adaptive Resolution Scheme (AdResS), a molecular dynamics method that allows the simulation of a system with different levels of resolution simultaneously. The simulation domain is divided into high- and low-resolution zones and a transition region that links them, through which molecules can freely diffuse.
The first issue of this work regards the thermodynamic consistency of the method, which is tested and verified in a model liquid of tetrahedral molecules. The results allow the introduction of the concept of the Thermodynamic Force, an external field able to correct the spurious density fluctuations present in the transition region in usual AdResS simulations.
AdResS is also applied to a system in which two different representations with the same degree of resolution are compared. This simple test extends the method from an Adaptive Resolution Scheme to an Adaptive Representation Scheme, providing a way of coupling different force fields based on thermodynamic consistency arguments. The Thermodynamic Force is successfully applied to this example as well.
An alternative approach, deducing the Thermodynamic Force from pressure-consistency considerations, allows the interpretation of AdResS as a first step towards a molecular dynamics simulation in the Grand Canonical ensemble. Additionally, such a definition leads to a practical way of determining the Thermodynamic Force, tested in the well-studied tetrahedral liquid. The effects of AdResS and of this correction on the atomistic domain are analyzed by inspecting the local distribution of velocities, radial distribution functions, pressure and particle-number fluctuations. Their comparison with analogous results coming from purely atomistic simulations shows good agreement, which is greatly improved under the effect of the external field.
A further step in the development of AdResS, necessary for several applications in biophysics and materials science, consists of its application to multicomponent systems. To this aim, the high-resolution representation of a model binary mixture is compared with a systematically parametrized coarse-grained representation. The Thermodynamic Force, whose development requires a more delicate treatment in this case, also gives satisfactory results.
Finally, AdResS is tested in systems including two-body bonded forces, through the simulation of a model polymer allowed to adaptively change its representation. It is shown that the distribution functions that characterize the polymer structure are in practice not affected by the change of resolution.
The technical details of the implementation of AdResS in the ESPResSo package conclude this thesis work.
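For background, the force interpolation commonly written in the AdResS literature has the form

\mathbf{F}_{\alpha\beta} = w(X_\alpha)\, w(X_\beta)\, \mathbf{F}^{\mathrm{atom}}_{\alpha\beta} + \bigl[1 - w(X_\alpha)\, w(X_\beta)\bigr]\, \mathbf{F}^{\mathrm{cg}}_{\alpha\beta},

where w(x) ∈ [0, 1] is a smooth switching function equal to 1 in the high-resolution region and 0 in the low-resolution one; the Thermodynamic Force described above is an additional one-body field acting on molecules in the transition region, iterated until the spurious density gradient there disappears.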
Abstract:
Urease is a nickel-dependent enzyme that catalyzes the hydrolysis of urea in the last step of organic nitrogen mineralization. Its active site contains a dinuclear center for Ni(II) ions that must be inserted into the apo-enzyme through the action of four accessory proteins (UreD, UreE, UreF, UreG), leading to activation of urease. UreE, acting as a metallo-chaperone, delivers Ni(II) to the preformed apo-urease-UreDFG complex and is able to enhance the GTPase activity of UreG. This study, focused on the characterization of UreE from Sporosarcina pasteurii (SpUreE), provides information on the structure/mobility-function relationships that control nickel binding by SpUreE and its interaction with SpUreG. A calorimetric analysis revealed the occurrence of a binding event between these proteins, with positive cooperativity and a stoichiometry consistent with the formation of the (UreE)2-(UreG)2 hetero-oligomer complex. Chemical shift perturbations induced by the protein-protein interaction were analyzed using high-resolution NMR spectroscopy, which made it possible to characterize the molecular details of the SpUreE protein surface involved in complex formation with SpUreG. Moreover, the backbone dynamic properties of SpUreE, determined using 15N relaxation analysis, revealed a general mobility in the nanosecond time scale, with the fastest motions observed at the C-termini. The latter analysis made it possible for the first time to characterize the C-terminal portions, known to contain key residues for metal ion binding, which were not observed in the crystal structure of UreE because of disorder. The residues belonging to this portion of SpUreE feature large CSPs upon addition of SpUreG, showing that their chemical environment is directly affected by the protein-protein interaction. The metal ion selectivity and affinity of SpUreE for the cognate Ni(II) and non-cognate Zn(II) ions were determined, and the ability of the protein to select Ni(II) over Zn(II), consistent with its proposed role in Ni(II) cation transport, was established.
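For reference, chemical shift perturbations of the kind analyzed here are commonly quantified from 1H-15N correlation spectra as the weighted combined shift

\Delta\delta_{\mathrm{comb}} = \sqrt{(\Delta\delta_{\mathrm{H}})^2 + (0.14\,\Delta\delta_{\mathrm{N}})^2},

where the factor of about 0.14 rescales the nitrogen shift change to the proton scale; residues whose combined shift exceeds a chosen threshold (e.g. the mean plus one standard deviation) are then mapped onto the structure to delineate the interaction surface. The exact weighting used in the thesis may differ; this is only the standard convention.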
Abstract:
In a context dominated by population ageing, the prevalence of chronic conditions and a growing presence of multi-problem, dependent patients, it is essential to shift the centre of gravity of care from acute to chronic illness, and therefore to ensure continuity and consistency across the different care settings, both health and social care (hospital, community health services, home care, long-term care residential facilities). The literature review shows that the main obstacle to achieving this continuity is the presence, characteristic of the Italian welfare system, of multiple actors and structures with different and separate competences, objectives and functions, and recommends working towards integration simultaneously on several levels: regulatory-institutional, planning, and professional-managerial. The governance system implemented in Emilia-Romagna for health and social care integration was evaluated in the light of these recommendations, following the Realist Evaluation model for complex social interventions: identifying the "theories" underlying the intervention and analysing the different steps of its implementation. In the light of this evaluation, the governance model proved consistent with the guideline recommendations and actually capable of producing results in terms of continuity and consistency between health care and complex health and social care. A comprehensive evaluation of its impact on effectiveness, costs and patient satisfaction remains to be carried out.
Abstract:
The aim of this study, involving 170 patients suffering from non-specific low back pain, was to test the validity of the spinal function sort (SFS) in a European rehabilitation setting. The SFS, a picture-based questionnaire, assesses perceived functional ability for work tasks involving the spine. All measurements were taken by a blinded research assistant; work status was assessed with questionnaires. Our study demonstrated high internal consistency, shown by a Cronbach's alpha of 0.98, reasonable evidence for unidimensionality, Spearman correlations of >0.6 with work activities, and discriminating power for work status at 3 and 12 months by ROC curve analysis (area under the curve = 0.760 (95% CI 0.689-0.822) and 0.801 (95% CI 0.731-0.859), respectively). The standardised response mean within the two treatment groups was 0.18 and -0.31. As a result, we conclude that perceived functional ability for work tasks can be validly assessed with the SFS in a European rehabilitation setting in patients with non-specific low back pain, and is predictive of future work status.
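A minimal sketch (illustrative only, not the study's analysis code) of how an internal-consistency coefficient such as the reported Cronbach's alpha of 0.98 is computed from a respondents-by-items score matrix; the demo data below are hypothetical.

import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# hypothetical example: 170 respondents, 50 items driven by a common latent trait
rng = np.random.default_rng(0)
trait = rng.normal(size=(170, 1))
demo = trait + rng.normal(scale=0.5, size=(170, 50))
print(round(cronbach_alpha(demo), 2))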
Abstract:
The original 'Örebro Musculoskeletal Pain Questionnaire' (original-ÖMPQ) has been shown to have limitations in practicality, factor structure, and face and content validity. This study addressed these concerns by modifying its content, producing the 'Örebro Musculoskeletal Screening Questionnaire' (ÖMSQ). The ÖMSQ and original-ÖMPQ were tested concurrently in acute/subacute low back pain working populations (pilot n = 44, main n = 106). The ÖMSQ showed improved face and content validity, which broadened potential application, and improved practicality with two-thirds fewer missing responses. High reliability (ICC(2,1) = 0.975, p < 0.05), criterion validity (Spearman's r = 0.97) and internal consistency (α = 0.84) were achieved, as were predictive-ability cut-off scores from ROC curves (112-120 ÖMSQ points), statistically different ÖMSQ scores (p < 0.001) for each outcome trait, and a strong correlation with recovery time (Spearman's r = 0.71). The six-component factor structure reflected the constructs originally proposed. The ÖMSQ can be substituted for the original-ÖMPQ in this population. Further research will assess its applicability in broader populations.
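A minimal sketch (hypothetical data, not the study's code) of how a predictive cut-off score such as the 112-120 ÖMSQ-point range can be derived from an ROC curve via the Youden index:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# hypothetical data: questionnaire total scores and 12-month outcome (1 = poor recovery)
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(95, 20, 70), rng.normal(130, 20, 36)])
outcome = np.concatenate([np.zeros(70), np.ones(36)])

fpr, tpr, thresholds = roc_curve(outcome, scores)
best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
print("AUC:", round(roc_auc_score(outcome, scores), 3),
      "cut-off:", round(thresholds[best], 1))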
Abstract:
Two experiments plus a pilot investigated the role of melodic structure in short-term memory for musical notation by musicians and nonmusicians. In the pilot experiment, visually similar melodies that had been rated as either "good" or "bad" were presented briefly, followed by a 15-sec retention interval and then recall. Musicians remembered good melodies better than they remembered bad ones; nonmusicians did not distinguish between them. In the second experiment, good, bad, and random melodies were briefly presented, followed by immediate recall. The advantage of musicians over nonmusicians decreased as the melody type progressed from good to bad to random. In the third experiment, musicians and nonmusicians divided the stimulus melodies into groups. For each melody, the consistency of grouping was correlated with memory performance in the first two experiments. Evidence was found for use of musical groupings by musicians and for use of a simple visual strategy by nonmusicians. The nature of these musical groupings and how they may be learned are considered. The relation of this work to other studies of comprehension of symbolic diagrams is also discussed.
Abstract:
This article contributes to the research on demographics and public health of urban populations of preindustrial Europe. The key source is a burial register that contains information on the deceased, such as age and sex, residence and cause of death. This register is one of the earliest compilations of data sets of individuals with this high degree of completeness and consistency. Critical assessment of the register's origin, formation and upkeep promises high validity and reliability. Between 1805 and 1815, 4,390 deceased inhabitants were registered. Information concerning these individuals provides the basis for this study. Life tables of Bern's population were created using different models. The causes of death were classified and their frequency calculated. Furthermore, the susceptibility of age groups to certain causes of death was established. Special attention was given to causes of death and mortality of newborns, infants and birth-giving women. In comparison to other cities and regions in Central Europe, Bern's mortality structure shows low rates for infants (q0=0.144) and children (q1-4=0.068). This could have simply indicated better living conditions. Life expectancy at birth was 43 years. Mortality was high in winter and spring, and decreased in summer to a low level with a short rise in August. The study of the causes of death was inhibited by difficulties in translating early 19th century nomenclature into the modern medical system. Nonetheless, death from metabolic disorders, illnesses of the respiratory system, and debilitation were the most prominent causes in Bern. Apparently, the worst killer of infants up to 12 months was the "gichteren", an obsolete German term for lethal spasmodic convulsions. The exact modern identification of this disease remains unclear. Possibilities such as infant tetanus or infant epilepsy are discussed. The maternal death rate of 0.72% is comparable with values calculated from contemporaneous sources. Relevance of childbed fever in the early 1800s was low. Bern's data indicate that the extent of deaths related to childbirth in this period is overrated. This research has an explicit interdisciplinary value for various fields including both the humanities and natural sciences, since information reported here represents the complete age and sex structure of a deceased population. Physical anthropologists can use these data as a true reference group for their palaeodemographic studies of preindustrial Central Europe of the late 18th and early 19th century. It is a call to both historians and anthropologists to use our resources to a better effect through combination of methods and exchange of knowledge.
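A minimal sketch (with hypothetical age groupings, not the Bern register data) of how a summary quantity such as the life expectancy at birth can be approximated from deaths tabulated by age, under the stationary-population assumption implicit in this kind of period life table:

import numpy as np

def life_expectancy(ages, deaths):
    """Approximate e0 from deaths-by-age counts.
    ages   : lower bounds of the age intervals
    deaths : deaths registered in each interval
    Assumes the registered deaths describe a stationary population, so that
    the mean age at death approximates life expectancy at birth."""
    deaths = np.asarray(deaths, dtype=float)
    ages = np.asarray(ages, dtype=float)
    widths = np.diff(np.append(ages, ages[-1] + 10))   # width of each age interval
    midpoints = ages + widths / 2.0                    # assumed mean age at death per interval
    return (deaths * midpoints).sum() / deaths.sum()

# hypothetical grouping of 4,390 registered deaths (illustration only)
ages   = [0, 1, 5, 15, 30, 50, 70]
deaths = [630, 300, 180, 330, 700, 1250, 1000]
print(round(life_expectancy(ages, deaths), 1))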