904 results for two-stage sampling


Relevance:

80.00%

Publisher:

Abstract:

This thesis concerns the analysis of gear transmissions and of gears in general, with a view to minimizing energy losses. A model was developed to compute the energy and heat dissipated in a gearbox, both parallel-axis and planetary. This model estimates the equilibrium temperature of the oil as the operating conditions vary. Thermal calculations are still uncommon in gearbox design, but they proved to be important especially for compact units, such as planetary gearboxes, whose maximum transmissible power is usually determined precisely by thermal considerations. The model was implemented in an automated calculation system that can be adapted to various gearbox types. This system also estimates the energy dissipated under various lubrication regimes and was used to evaluate the differences between traditional oil-bath lubrication and "dry sump" or "wet sump" lubrication. The model was applied to the particular case of a two-stage gearbox: a parallel-axis first stage followed by a planetary second stage. Within a research contract between DIEM and Brevini S.p.A. of Reggio Emilia, experimental tests were carried out on a prototype of this gearbox, which allowed the proposed model to be calibrated [1]. A further line of investigation was the energy dissipated by the meshing of two gears, using models in which the friction coefficient varies along the path of contact. The most common models, by contrast, rely on an average friction coefficient, whereas the coefficient can be seen to vary considerably during meshing. In particular, since the literature does not report how efficiency varies for profile-shifted gears, the work focused on the energy dissipated in gear pairs as a function of the profile shift. This study is reported in [2]. Research was also conducted on the operation of screw-nut linear actuators. The mechanisms that determine wear of the screw-nut coupling in linear actuators were studied, with particular reference to the thermal aspects of the phenomenon: the contact temperature between screw and nut proved to be the most critical parameter in the operation of these actuators. Through experimental testing, a law was found that estimates the operating temperature given the pressure, speed and duty factor. This empirical law was then interpreted in the light of known theoretical models. The study was carried out within a research contract between DIEM and Ognibene Meccanica S.r.l. of Bologna and is published in [3].
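A minimal sketch of the heat-balance idea behind such a thermal rating: the power dissipated in the gearbox is balanced against convective heat rejection from the housing. The single-coefficient convection model and all parameter values below are illustrative assumptions, not the thesis's model.

```python
# Thermal-equilibrium sketch: solve P_in * (1 - eta) = h * A * (T_oil - T_amb)
# for the equilibrium oil temperature T_oil. Illustrative values only.

def equilibrium_oil_temperature(p_in_kw, efficiency, h_w_m2k, area_m2, t_amb_c):
    """Equilibrium oil temperature [deg C] from a simple housing heat balance."""
    p_loss_w = p_in_kw * 1e3 * (1.0 - efficiency)    # dissipated power [W]
    return t_amb_c + p_loss_w / (h_w_m2k * area_m2)  # temperature rise over ambient

# Example: 50 kW through a stage at 97% efficiency, housing area 0.8 m^2,
# combined convection coefficient 15 W/(m^2 K), 25 degC ambient:
print(equilibrium_oil_temperature(50.0, 0.97, 15.0, 0.8, 25.0))  # ~150 degC
```

The example illustrates why compact planetary units tend to be thermally limited: even a 3% loss over a small housing area drives the oil far above acceptable temperatures.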

Relevance:

80.00%

Publisher:

Abstract:

This thesis deals with the evolution of the 570 Ma old, Neoproterozoic Agardagh - Tes-Chem Ophiolite (ATCO) in Central Asia. The ophiolite is located southwest of Lake Baikal (50.5° N, 95° E) and was thrust onto the northwestern margin of the Tuvinian-Mongolian microcontinent during an early stage of the accretion of the Central Asian Orogenic Belt, a huge accretion-subduction complex that today constitutes the largest contiguous orogen on Earth. In this study, a series of plutonic and volcanic rocks, as well as various mantle rocks of the ATCO, were investigated by microanalytical and geochemical methods (electron microprobe, ion microprobe, trace element and isotope geochemistry). The evaluation of these data allowed a geodynamic-petrological model for the formation of the ATCO to be developed. Based on their trace element and isotopic compositions, the volcanic rocks can be divided into island arc-related and back-arc basin-related rocks (IA rocks and BAB rocks); in addition, there is a further group, mainly comprising mafic dikes, that cannot be assigned unambiguously. Most of the volcanic rocks studied belong to the IA group. They are Al-rich basalts and basaltic andesites that formed from an evolved parental magma with Mg# 0.60, Cr ~ 180 µg/g and Ni ~ 95 µg/g, mainly by clinopyroxene fractionation. The parental magma itself formed by fractionation of about 12% olivine and minor Cr-spinel from a primary, mantle-derived melt. The IA rocks have high concentrations of incompatible trace elements (light rare earth element (LREE) concentrations about 100 times chondritic, chondrite-normalized (La/Yb)c of 14.6 - 5.1), negative Nb anomalies (Nb/La = 0.37 - 0.62) and low Zr/Nb ratios (7 - 14) relative to the BAB rocks. Initial εNd values are around +5.5, and initial lead isotope ratios are 206Pb/204Pb = 17.39 - 18.45, 207Pb/204Pb = 15.49 - 15.61, and 208Pb/204Pb = 37.06 - 38.05. The enrichment of large ion lithophile elements (LILE) in this group is significant (Ba/La = 11 - 130) and indicates the influence of subducted components. The BAB rocks represent melts that most likely came from the same mantle source as the IA rocks but formed by higher degrees of melting (8 - 15%) and without the influence of subducted components. They have lower concentrations of incompatible trace elements, flat REE patterns ((La/Yb)c = 0.6 - 2.4) and higher initial εNd values between +7.8 and +8.5. Nb anomalies are absent and Zr/Nb ratios are high (21 - 48). At least three components are required to explain the geochemical evolution of the ATCO volcanic rocks: (1) an enriched, ocean island basalt-like component with a high Nb concentration above ~ 30 µg/g, a low Zr/Nb ratio (about 6.5) and a low initial εNd value (around 0), but with radiogenic 206Pb/204Pb, 207Pb/204Pb and 208Pb/204Pb ratios; (2) an N-MORB-like back-arc basin component with a flat REE pattern and a high initial εNd value of at least +8.5; and (3) an island arc component from a depleted mantle source that was geochemically modified by the subducting slab.
The geochemical evolution of the ATCO volcanic rocks is then best explained by a combination of source contamination, fractional crystallization and magma mixing. Geodynamically, the ATCO most likely formed in an intra-oceanic island arc - back-arc system. The plutonic rocks studied are ultramafic cumulates (wehrlites and pyroxenites) and gabbroic rocks (olivine gabbros to diorites). The geochemical characteristics of the mafic plutonic rocks differ markedly from those of the volcanic rocks, so they most likely represent a later stage in the evolution of the ATCO. Trace element concentrations in the clinopyroxenes of the ultramafic cumulates are extremely low, with REE concentrations of about 0.1 to 1 times chondritic and clearly LREE-depleted patterns ((La/Yb)c = 0.27 - 0.52). Calculated equilibrium melts of the ultramafic cumulates closely resemble primary boninitic melts. The primary magmas were therefore boninitic in composition and formed in a mantle wedge above a subduction zone that had been strongly depleted by previous melting events. Low trace element concentrations indicate a minor influence of the subducting slab. Trace element concentrations of the gabbros are also low, with REE concentrations of about 0.5 to 10 times chondritic and variable REE patterns ((La/Yb)c = 0.25 - 2.6). Analogous to the volcanic rocks of the IA group, all gabbros have a negative Nb anomaly with Nb/La = 0.01 - 0.31. Initial εNd values of the gabbros vary between +4.8 and +7.1, with a mean of +5.9, and are thus identical to those of the IA volcanic rocks. The mantle rocks studied are partially serpentinized dunites and harzburgites, all characterized by high Mg/Si and low Al/Si ratios. This indicates a refractory character and agrees well with the high Cr numbers (Cr#) of the spinels (up to Cr# = 0.83), from which the degree of melting of the residual mantle rocks was calculated to be about 25%. The geochemical composition and the petrological data of the ultramafic rocks and gabbros are best explained by assuming a two-stage process for the formation of these rocks. In a first stage, the ultramafic cumulates formed at high pressure in a magma chamber at the base of the crust, mainly by clinopyroxene fractionation. This magma chamber was an open system, continuously replenished with new melt from below, while more evolved melts of lower density escaped from its upper part. These evolved melts ascended to shallower crustal levels, where they formed mostly isolated intrusive bodies. These bodies solidified without replenishment of magma, which is why petrographically very different rocks could form. The cooling melts were, however, geochemically modified by assimilation of country rock. Since there is no significant variation of the initial εNd values within the gabbros, the assimilated material consisted mainly of volcanic rocks of the ATCO and not of older, possibly continental, crust.
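For readers unfamiliar with the index numbers quoted above, the small sketch below computes them from their standard petrological definitions (not code from the thesis). The chondrite-normalizing concentrations are assumed CI-chondrite values (McDonough & Sun, 1995).

```python
# Standard petrological index numbers used in the abstract (illustrative).

LA_CI, YB_CI = 0.237, 0.170   # µg/g, assumed CI-chondrite reference values

def mg_number(mg_mol, fe_mol):
    """Mg# = molar Mg / (Mg + Fe2+); e.g. ~0.60 for the IA parental magma."""
    return mg_mol / (mg_mol + fe_mol)

def cr_number(cr_mol, al_mol):
    """Spinel Cr# = molar Cr / (Cr + Al); up to ~0.83 in the ATCO peridotites."""
    return cr_mol / (cr_mol + al_mol)

def la_yb_cn(la_ppm, yb_ppm):
    """Chondrite-normalized (La/Yb)c, as used for the REE patterns above."""
    return (la_ppm / LA_CI) / (yb_ppm / YB_CI)

print(la_yb_cn(23.7, 1.16))   # ~14.6, the upper end of the IA-rock range
```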

Relevance:

80.00%

Publisher:

Abstract:

The Gulf of Aqaba represents a small-scale, easily accessible regional analogue of larger oligotrophic oceanic systems. In this Gulf, the seasonal cycle of stratification and mixing drives the seasonal phytoplankton dynamics. In summer and fall, when nutrient concentrations are very low, Prochlorococcus and Synechococcus are most abundant in the surface water; these two populations are exposed to phosphate limitation. During winter mixing, when nutrient concentrations are high, Chlorophyceae and Cryptophyceae are dominant, although they are scarce or absent during summer. This study attempted to develop a simulation model, based on historical data, to predict the phytoplankton dynamics in the northern Gulf of Aqaba. The purpose is to understand which forces operate, and how, to determine the phytoplankton dynamics in this Gulf. To build the models, data sampled at two different stations (Fish Farm Station and Station A) were used. Data on chemical, biological and physical factors are available from 14th January 2007 to 28th December 2009. The Fish Farm Station was located near a fish farm that was operational until 17th June 2008, the date of its complete closure, about halfway through the total sampling period. The Station A sampling point is about 13 km away from the Fish Farm Station. The models were built in MATLAB (version 7.6.0.324, R2008a), in particular with the Simulink tool. The Fish Farm Station models show that the fish farm activity altered the nutrient concentrations and, as a consequence, the normal phytoplankton dynamics. Despite the distance between the two sampling stations, the fish farm activities may also have influenced the Station A ecosystem. The models for this station show that the fish farm impact appears to be much lower than at the Fish Farm Station, because the phytoplankton dynamics appear to be driven mainly by the seasonal mixing cycle.
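As a rough illustration of the kind of forcing such a simulation encodes, the sketch below integrates a minimal nutrient-phytoplankton box model driven by a seasonal mixing cycle. The functional forms and every parameter value are assumptions for illustration; they are not the thesis's Simulink model.

```python
# Minimal seasonal nutrient-phytoplankton box model (illustrative only).
import math

DT, DAYS = 0.1, 3 * 365           # time step and horizon [days]
n, p = 1.0, 0.1                   # surface nutrient, phytoplankton [mmol m^-3]

for step in range(int(DAYS / DT)):
    t = step * DT
    # deep winter mixing injects nutrients; summer stratification cuts it off
    mixing = 0.05 * (1.0 + math.cos(2.0 * math.pi * t / 365.0))  # [1/day]
    uptake = 1.0 * n / (n + 0.5) * p          # Michaelis-Menten growth
    loss = 0.1 * p                            # grazing + sinking, lumped
    n += DT * (mixing * (2.0 - n) - uptake)   # relaxation toward deep pool
    p += DT * (uptake - loss - mixing * p)    # mixing dilutes surface biomass

print(f"end of run: N = {n:.2f}, P = {p:.2f}")
```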

Relevance:

80.00%

Publisher:

Abstract:

The modern stratigraphy of clastic continental margins is the result of the interaction of several geological processes acting on different time scales, among which sea level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea level curves obtained from direct sea level indicators record a composite signal, formed by the contributions of global eustatic change and regional factors, such as tectonic processes or glacial-isostatic rebound effects: the eustatic signal has to be obtained by removing the contributions of these other mechanisms. To obtain the most realistic sea level reconstructions it is therefore important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and the backstripping of borehole PRAD1.2 (a 71 m continuous borehole drilled in 185 m of water depth, south of the Mid Adriatic Deep (MAD), during the European Project PROMESS 1, Profile Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole. This work showed an evolution from an inner-shelf environment during Marine Isotope Stage (MIS) 10 to upper-slope conditions during MIS 2. Once the tectonic regime of the central Adriatic margin had been constrained, it was possible to investigate the impact of sea level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST, Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM); in particular, along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of late Pleistocene climatic and eustatic fluctuations, among which Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely records the Younger Dryas cold interval.
Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System, CSDMS) and integrated with the analysis of high-resolution seismic profiles and core samples, indicate that: 1) the prograding middle TST unit, deposited during the Younger Dryas, formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of sub-glacial arid conditions; 2) the two-stage prograding geometry was the consequence of a sea level still-stand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide, plunging at a low angle of 0.02° to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which record the step-wise nature of the last sea level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.
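The backstripping mentioned above isolates the tectonic (water-loaded) component of subsidence from the sediment load. A textbook one-dimensional Airy sketch is shown below; the densities and the example numbers are illustrative, and this is not the processing actually applied to PRAD1.2.

```python
# One-dimensional Airy backstripping, as commonly used in geohistory analysis.

RHO_M, RHO_W = 3300.0, 1030.0      # assumed mantle and water densities [kg/m^3]

def tectonic_subsidence(s, rho_s, wd, dsl=0.0):
    """Water-loaded (tectonic) subsidence Y from decompacted sediment
    thickness s, mean sediment density rho_s, paleo water depth wd and
    eustatic sea-level offset dsl (all in metres)."""
    return (s * (RHO_M - rho_s) / (RHO_M - RHO_W)
            + wd
            - dsl * RHO_M / (RHO_M - RHO_W))

# Example: 71 m of sediment (mean density 1900 kg/m^3) in 185 m of water,
# with sea level 50 m lower than present:
print(tectonic_subsidence(71.0, 1900.0, 185.0, -50.0))   # ~300 m
```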

Relevance:

80.00%

Publisher:

Abstract:

The growing interest in environmental protection has led to the development of emerging biotechnologies for environmental remediation, also introducing the biorefinery concept. This work mainly aimed to evaluate the applicability of innovative biotechnologies for environmental remediation and bioenergy production through fermentative processes. The investigated biotechnologies for waste and wastewater treatment and for the valorisation of specific feedstocks with energy recovery were mainly focused on four research lines. 1. Biotechnology for textile wastewater treatment and water reuse, involving anaerobic and aerobic processes in combination with membrane technologies; combinations of different treatments were also implemented for water reuse in a textile company. 2. Biotechnology for the treatment of solid waste and leachate in landfills and for biogas production: a landfill operated as a bioreactor, with recirculation of the generated leachate, was proposed for organic matter biostabilisation and for ammonia removal from leachate by favouring the Anammox process. 3. An innovative two-stage anaerobic process for the effective co-digestion of waste from the dairy industry, such as cheese whey and dairy manure, studied by combining conventional fermentative processes with a simplified system design to enhance biomethanisation. 4. The valorisation of waste glycerol, a surplus by-product of the biodiesel industry, via microbial conversion to value-added chemicals such as 1,3-propanediol. The investigated fermentative processes were successfully implemented and reached high yields of the produced biochemicals. The studied biotechnological systems proved to be feasible for environmental remediation and for the production of bioenergy and chemicals.

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, a framework was developed for the combined use of two impact assessment methodologies, LCA (Life Cycle Assessment) and RA (Risk Assessment), for emerging technologies. The originality of the study lies in having both proposed the framework and applied it to a case study, namely an innovative cooling technology based on nanofluids (NF) developed by partners of the European project Nanohex, who contributed to the studies especially as regards the inventory of the necessary data. The complexity of the study lies both in the difficult integration of two methodologies conceived for different purposes, and structured to serve those purposes, and in the field of application, which, although rapidly expanding, suffers from severe gaps in information about production processes and the behavior of the substances involved. The framework was applied to the production of alumina nanofluid via two production routes (single-stage and two-stage) to assess and compare the impacts on human health and the environment. It should be noted that the LCA was quantitative but did not consider the impacts of nanomaterials (NM) in the toxicity categories. As for the RA, a qualitative study was carried out because of the above-mentioned lack of toxicological and exposure parameters, with workers as its focus; it was therefore assumed that releases to the environment during the production phase are negligible. For the qualitative RA, a dedicated software tool, Stoffenmanager Nano, was used, which makes it possible to prioritize the risks associated with inhalation in the workplace. The framework consists of a procedure articulated in four phases: DEFINITION OF THE TECHNOLOGICAL SYSTEM, DATA COLLECTION, RISK ASSESSMENT AND IMPACT QUANTIFICATION, and INTERPRETATION.

Relevance:

80.00%

Publisher:

Abstract:

Waste management represents an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. In the present study, therefore, the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate into reactors followed by fabric filters. HCl and SO2 conversions were expressed as functions of the reactant flow rates, with model parameters calculated from literature and plant data. Implementation in process-simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to assess the required model parameters.
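To make the "conversion as a function of reactant flow rate" idea concrete, here is a toy sketch. The saturation-type conversion law, the rate constant and the feed values are assumptions for illustration only; the abstract does not disclose the actual functional form of the thesis's operational model.

```python
# Toy operational model: acid-gas conversion vs. sorbent stoichiometric ratio.
import math

def conversion(sorbent_mol_h, acid_mol_h, k):
    """Conversion X as an assumed saturation-type function of the
    stoichiometric ratio SR = sorbent fed per mole of acid gas."""
    sr = sorbent_mol_h / acid_mol_h
    return 1.0 - math.exp(-k * sr)

def required_feed(acid_mol_h, target_x, k):
    """Invert the assumed law to get the sorbent feed meeting a removal target."""
    return -acid_mol_h * math.log(1.0 - target_x) / k

# e.g. 90% HCl removal at 100 mol/h of HCl with an assumed k = 1.2:
print(required_feed(100.0, 0.90, 1.2))   # ~192 mol/h of sorbent equivalent
```

Once such a law is calibrated on plant data, optimizing feed rates against sorbent cost and residue disposal is a straightforward trade-off calculation, which is what the process-simulation implementation described above automates.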

Relevance:

80.00%

Publisher:

Abstract:

The etiology of complex diseases is heterogeneous. The presence of risk alleles in one or more genetic loci affects the function of a variety of intermediate biological pathways, resulting in the overt expression of disease. Hence, there is an increasing focus on identifying the genetic basis of disease by systematically studying phenotypic traits pertaining to the underlying biological functions. In this paper we focus on identifying genetic loci linked to quantitative phenotypic traits in experimental crosses. Such genetic mapping methods often use a one-stage design, genotyping all the markers of interest on the available subjects. A genome scan based on single-locus or multi-locus models is used to identify the putative loci. Since the number of quantitative trait loci (QTLs) is very likely to be small relative to the number of markers genotyped, a one-stage selective genotyping approach is commonly used to reduce the genotyping burden, whereby markers are genotyped solely on individuals with extreme trait values. This approach is powerful in the presence of a single quantitative trait locus (QTL) but may result in a substantial loss of information in the presence of multiple QTLs. Here we investigate the efficiency of sequential two-stage designs to identify QTLs in experimental populations. Our investigations for backcross and F2 crosses suggest that genotyping all the markers on 60% of the subjects in Stage 1, genotyping the chromosomes significant at the 20% level on additional subjects in Stage 2, and testing using all the subjects provides an efficient approach to identify the QTLs, requiring only 70% of the genotyping burden of a one-stage design, regardless of the heritability and genotyping density. Complex traits are a consequence of multiple QTLs conferring main effects as well as epistatic interactions. We propose a two-stage analytic approach where a single-locus genome scan is conducted in Stage 1 to identify promising chromosomes, and interactions are examined using the loci on these chromosomes in Stage 2. We examine settings under which the two-stage analytic approach provides sufficient power to detect the putative QTLs.
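The 70% figure follows from simple accounting, sketched below. The 60% Stage-1 fraction is from the abstract; the fraction of chromosomes passing the 20%-level screen is an assumed illustrative value.

```python
# Back-of-the-envelope genotyping burden for the sequential two-stage design.

def relative_burden(stage1_frac, selected_chrom_frac):
    """Burden relative to one stage: Stage 1 types all markers on a fraction
    of subjects; Stage 2 types only the selected chromosomes' markers on the
    remaining subjects."""
    stage2_frac = 1.0 - stage1_frac
    return stage1_frac + stage2_frac * selected_chrom_frac

# With 60% of subjects in Stage 1 and roughly a quarter of the chromosomes
# passing the 20%-level screen, the burden is ~70% of a one-stage design:
print(relative_burden(0.60, 0.25))   # 0.70
```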

Relevance:

80.00%

Publisher:

Abstract:

The advances in computational biology have made the simultaneous monitoring of thousands of features possible. High-throughput technologies not only bring about a much richer information context in which to study various aspects of gene function, but they also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension reduction tool in chemometrics, to the context of generalized linear regression, based on a previous approach, Iteratively ReWeighted Partial Least Squares (IRWPLS; Marx, 1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002a; 2002b) and other classifiers. We show that by phrasing the problem in a generalized linear model setting and by applying bias correction to the likelihood to avoid (quasi)separation, we often obtain lower classification error rates.
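For orientation, the sketch below shows the two-stage PLS comparator on synthetic wide data: reduce the feature matrix to a few PLS components, then fit an ordinary logistic model on the component scores. The data, component count and preprocessing are illustrative assumptions; IRWPLS differs in that the PLS step is embedded inside the iteratively reweighted fitting loop rather than run once up front.

```python
# Minimal two-stage PLS classifier (dimension reduction, then a GLM).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))                 # 40 samples, 500 features
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=40) > 0).astype(int)

pls = PLSRegression(n_components=3).fit(X, y.astype(float))
scores = pls.transform(X)                      # stage 1: PLS component scores
clf = LogisticRegression().fit(scores, y)      # stage 2: logistic model
print(clf.score(scores, y))                    # training accuracy
```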

Relevance:

80.00%

Publisher:

Abstract:

Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating the acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and to estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of the time series data on air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. Through mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information from site-specific results, and they can easily be extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
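A simulation in the spirit of the paper is easy to reproduce: underestimate the first-stage variances by a known factor and compare the inverse-variance-weighted pooled estimates. The settings below are illustrative, and the heterogeneity variance is treated as known for simplicity.

```python
# Sensitivity of the two-stage normal-normal pooled estimate to
# underestimation of the first-stage variances (illustrative settings).
import numpy as np

rng = np.random.default_rng(1)
n_sites, mu, tau2 = 90, 1.0, 0.04           # sites, true pooled effect, heterogeneity
v = rng.uniform(0.01, 0.1, n_sites)         # true within-site variances
theta = rng.normal(mu, np.sqrt(tau2), n_sites)   # true site-specific effects
y = rng.normal(theta, np.sqrt(v))           # site-specific estimates

def pooled(y, v_reported, tau2):
    w = 1.0 / (v_reported + tau2)           # two-stage normal-normal weights
    return np.sum(w * y) / np.sum(w)

print(pooled(y, v, tau2))                   # correctly reported variances
print(pooled(y, 0.6 * v, tau2))             # 40% underestimation: tiny shift
```

Because underestimating every first-stage variance mostly rescales the weights rather than reordering them, the weighted mean barely moves, which is the paper's central finding.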

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP mortality associated with short-term exposure to summer ozone. At the second stage, we specify a class of distributions for the true city-specific relative rates to estimate an overall effect, taking into account the variability within and across cities. We perform the calculations with respect to several random effects distributions (normal, Student-t, and mixture of normals), thus relaxing the common assumption of a two-stage normal-normal hierarchical model. We assess the sensitivity of the results to: 1) the lag structure for ozone exposure; 2) the degree of adjustment for long-term trends; 3) the inclusion of other pollutants in the model; 4) heat waves; 5) the random effects distributions; and 6) the prior hyperparameters. On average across cities, we found that a 10 ppb increase in summer ozone level on every day in the previous week is associated with a 1.25 percent increase in CVDRESP mortality (95% posterior region: 0.47, 2.03). The relative rate estimates are also positive and statistically significant at lags 0, 1, and 2. We found that associations between summer ozone and CVDRESP mortality are sensitive to the confounding adjustment for PM_10, but are robust to: 1) the adjustment for long-term trends and other gaseous pollutants (NO_2, SO_2, and CO); 2) the distributional assumptions at the second stage of the hierarchical model; and 3) the prior distributions on all unknown parameters. Bayesian hierarchical distributed lag models and their application to the NMMAPS data allow us to estimate the acute health effects associated with exposure to ambient air pollution over the last few days, on average across several locations. The application of these methods and the systematic assessment of the sensitivity of the findings to model assumptions provide important epidemiological evidence for future air quality regulations.
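Schematically, the two-stage structure described above can be written as follows. This is a simplified sketch: the notation and the lumping of confounders into a single term are mine, not the paper's.

```latex
% Stage 1: city-specific semi-parametric distributed lag Poisson regression
% Stage 2: a flexible distribution for the true city-specific overall effects
\begin{align*}
Y_t^{c} &\sim \mathrm{Poisson}(\mu_t^{c}), &
\log \mu_t^{c} &= \mathrm{confounders}_t^{c}
  + \sum_{\ell=0}^{L} \beta_\ell^{c}\, x_{t-\ell}^{c}, \\
\theta^{c} &= \sum_{\ell=0}^{L} \beta_\ell^{c}, &
\theta^{c} &\sim F(\theta^{\ast}, \tau^{2}),
\end{align*}
```

where Y_t^c are the daily CVDRESP counts in city c, x_{t-l}^c the lagged summer ozone series, θ^c the city-specific overall (weekly) effect, and F is taken to be normal, Student-t, or a normal mixture, as in the sensitivity analyses above.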

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVES: The aim of this prospective study was to evaluate the 5-year performance and success rate of titanium screw-type implants with the titanium plasma spray (TPS) or the sand-blasted, large grit, acid-etched (SLA) surface inserted in a two-stage sinus floor elevation (SFE) procedure in the posterior maxilla. MATERIAL AND METHODS: A total of 59 delayed SFEs were performed in 56 patients between January 1997 and December 2001, using a composite graft with autogenous bone chips combined with deproteinized bovine bone mineral (DBBM) or synthetic porous beta-tricalcium phosphate (beta-TCP). After a healing period averaging 7.75 months, 111 dental implants were inserted. After an additional 8-14-week healing period, all implants were functionally loaded with cemented crowns or fixed partial dentures. The patients were recalled at 12 and 60 months for clinical and radiographic examination. RESULTS: One patient developed an acute infection in the right maxillary sinus after SFE and did not undergo implant therapy. Two of the 111 inserted implants had to be removed because of developing atypical facial pain, and 11 implants were lost to follow-up and were considered drop-outs. The remaining 98 implants showed favorable clinical and radiographic findings at the 5-year examination. The peri-implant soft tissues were stable over time; the mean probing depths and mean attachment levels did not change during the follow-up period. The measurement of the bone crest levels (DIB values) indicated stability as well. Based on strict success criteria, all 98 implants were considered successfully integrated, resulting in a 5-year success rate of 98% (89% for TPS implants, 100% for SLA implants). CONCLUSION: This prospective study assessing the performance of dental implants inserted after SFE demonstrated that titanium implants can achieve and maintain successful tissue integration with high predictability for at least 5 years of follow-up in carefully selected patients.

Relevance:

80.00%

Publisher:

Abstract:

Metals price risk management is a key issue related to financial risk in metal markets, because of the uncertainty of commodity price fluctuations, exchange rate and interest rate changes, and the huge price risk borne by both metals producers and consumers. It is therefore taken into account by all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years, and nowadays they are widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare the theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and essential to manage market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. The following simultaneous econometric model for the copper industry is built:

$$
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathit{GDP}_t^{1.7151}\, e^{0.0158\,\mathit{IP}_t} \\[2pt]
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathit{OIL}(t)}^{-0.1559}\, \mathit{USDI}_t^{1.2432}\, \mathit{LIBOR}_{t-6}^{-0.0561} \\[2pt]
Q_t^D = Q_t^S
\end{cases}
$$

Solving for the price gives the reduced form

$$
P_{t-1}^{CU} = e^{-2.5165}\, \mathit{GDP}_t^{2.1910}\, e^{0.0202\,\mathit{IP}_t}\, T_t^{-0.1799}\, P_{\mathit{OIL}(t)}^{0.1991}\, \mathit{USDI}_t^{-1.5881}\, \mathit{LIBOR}_{t-6}^{0.0717}
$$

where Q_t^D and Q_t^S are the world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity; in addition, industrial production is considered, so global industrial production growth, denoted IP_t, is included in the model.
T_t is a time variable, a useful proxy for technological change. The price of oil at time t, denoted P_{OIL(t)}, is a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 1-year London Interbank Offered Rate, lagged six months. Although the model can be applied to other base metal industries, omitted exogenous variables, such as the price of substitutes or a combined variable related to substitute prices, have not been considered in this study. Based on this econometric model and using a Monte-Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific option strike price are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. Basic options strategies such as bull spreads, bear spreads and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed on the basis of the daily data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
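A compact sketch of the Monte-Carlo step described above is given below. The reduced-form coefficients are taken from the equation in the text; the distributions of the exogenous drivers, their scalings and the strike are placeholders, not values from the project.

```python
# Monte-Carlo probability that the monthly average copper price exceeds a
# strike, built on the reduced-form price equation (driver values assumed).
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
gdp   = rng.normal(1.05, 0.02, N)     # illustrative index level
ip    = rng.normal(3.0, 1.0, N)       # industrial production growth, %
t     = np.full(N, 300.0)             # time/technology proxy
p_oil = rng.normal(60.0, 8.0, N)      # oil price
usdi  = rng.normal(0.85, 0.03, N)     # U.S. dollar index (scaled)
libor = rng.normal(5.0, 0.5, N)       # 1-year LIBOR, lagged 6 months, %

price = (np.exp(-2.5165) * gdp**2.1910 * np.exp(0.0202 * ip)
         * t**-0.1799 * p_oil**0.1991 * usdi**-1.5881 * libor**0.0717)

strike = np.median(price) * 1.10      # placeholder strike, 10% above median
print((price > strike).mean())        # P(monthly average price > strike)
```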

Relevance:

80.00%

Publisher:

Abstract:

Heroin prices are a reflection of supply and demand and, as in any other market, profits motivate participation. The intent of this research is to examine how changes in Afghan opium production due to political conflict affect Europe's heroin market and government policies. Whether the Taliban remain in power or a new Afghan government is formed, the changes will affect the heroin market in Europe to some degree. In the heroin market, the degree of change depends on many socioeconomic forces, such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the degree of these socioeconomic effects has not previously been applied to the heroin trade in Afghanistan. This research uses a two-stage least squares econometric model to estimate the supply and demand of heroin in 36 different countries, from the Middle East to Western Europe, in 2008. The application of the two-stage least squares model to the heroin market in Europe attempts to predict the socioeconomic consequences of Afghan opium production.
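For readers unfamiliar with the estimator, here is a minimal two-stage least squares sketch on synthetic data: price is endogenous (it shares a shock with demand), so Stage 1 projects it onto exogenous instruments and Stage 2 regresses demand on the fitted values. The variable names and data-generating values are illustrative, not the study's.

```python
# Minimal 2SLS on synthetic data (illustrative variable names).
import numpy as np

rng = np.random.default_rng(7)
n = 36                                          # one row per country
enforcement = rng.normal(size=n)                # exogenous instruments
proximity = rng.normal(size=n)
u = rng.normal(size=n)                          # confounding shock
price = 1.0 + 0.8 * enforcement - 0.5 * proximity + u          # endogenous
demand = 2.0 - 1.2 * price + 0.5 * u + rng.normal(scale=0.3, size=n)

Z = np.column_stack([np.ones(n), enforcement, proximity])      # stage 1
price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

X = np.column_stack([np.ones(n), price_hat])                   # stage 2
beta = np.linalg.lstsq(X, demand, rcond=None)[0]
print(beta)   # intercept and price coefficient (close to the true -1.2)
```

Ordinary least squares on the same data would be biased because the shock u enters both price and demand; the instrumenting step is what removes that bias.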

Relevance:

80.00%

Publisher:

Abstract:

Accurate seasonal to interannual streamflow forecasts based on climate information are critical for the optimal management and operation of water resources systems. Considering that most water supply systems are multipurpose, operating these systems to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns, such as the El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the Atlantic Multidecadal Oscillation (AMO), the Pacific-North American pattern (PNA), and customized sea surface temperature (SST) indices, were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure, using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts that are statistically better than climatology. Second, a class of data mining techniques, known as tree-structured models, is investigated to address the nonlinear dynamics of climate teleconnections and to screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that the tree-structured models can effectively capture the nonlinear features hidden in the data. Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvements in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model can provide a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability. The results also illustrate the trade-off between the expected contract amount and reliability: larger contracts can be signed at greater risk.
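A toy version of the two-stage stochastic contract decision is sketched below: choose a contract amount now, pay for deficits after the seasonal inflow is revealed. The scenario inflows, forecast probabilities and prices are illustrative placeholders, not values from the study.

```python
# Toy two-stage stochastic program for the seasonal water-contract decision,
# solved as a linear program with scipy (illustrative numbers only).
import numpy as np
from scipy.optimize import linprog

inflow = np.array([40.0, 70.0, 110.0])     # seasonal inflow scenarios
prob = np.array([0.3, 0.5, 0.2])           # forecast probabilities
benefit, penalty = 1.0, 2.5                # per-unit contract value, deficit cost

# variables: [Q, d1, d2, d3]; maximize B*Q - penalty*E[d]  ==  minimize -obj
c = np.concatenate([[-benefit], penalty * prob])
# deficits cover any shortfall: d_s >= Q - inflow_s  ->  Q - d_s <= inflow_s
A_ub = np.hstack([np.ones((3, 1)), -np.eye(3)])
res = linprog(c, A_ub=A_ub, b_ub=inflow, bounds=[(0, None)] * 4)
q, d = res.x[0], res.x[1:]
print(f"contract {q:.0f}, expected deficit {prob @ d:.1f}")
```

With these numbers the optimum signs a 70-unit contract and accepts a deficit only in the driest scenario, which illustrates the contract-size versus reliability trade-off described in the abstract: pushing the contract toward 110 units raises benefits but makes two of the three scenarios run short.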