934 results for "1 sigma standard deviation for the average"


Abstract:

Nontronite, the main metalliferous phase of the Galapagos mounds, occurs at a subsurface depth of ~2-20 m; Mn-oxide material is limited to the upper 2 m of these mounds. The nontronite forms intervals of up to a few metres thickness, consisting essentially of 100% nontronite granules, which alternate with intervals of normal pelagic sediment. The metalliferous phases represent essentially authigenic precipitates, apparently formed in the presence of upwelling basement-derived hydrothermal solutions which dissolved pre-existent pelagic sediment. Electron microprobe analyses of nontronite granules from different core samples indicate that: (1) there is little difference in major-element composition between nontronitic material from varying locations within the mounds; and (2) adjacent granules from a given sample have very similar compositions and are internally homogeneous. This indicates that the granules are composed of a single mineral of essentially constant composition, consistent with relatively uniform conditions of solution Eh and composition during nontronite formation. The Pb-isotopic composition of the nontronite and Mn-oxide sediments indicates that they were formed from solutions which contained variable proportions of basaltic Pb, introduced into pore waters by basement-derived solutions, and of normal-seawater Pb. However, the Sr-isotopic composition of these sediments is essentially indistinguishable from the value for modern seawater. On the basis of 18O/16O ratios, formation temperatures of ~20-30°C have been estimated for the nontronites. By comparison, temperatures of up to 11.5°C at 9 m depth have been directly measured within the mounds and heat flow data suggest present basement-sediment interface temperatures of 15-25°C.

Abstract:

Knowledge of the uncertainty of measurement of testing results is important when results have to be compared with limits and specifications. In sound insulation measurements following standard UNE-EN ISO 140-4, the uncertainty of the final magnitude is mainly associated with the measured average sound pressure levels L1 and L2. A parameter that allows us to quantify the spatial variation of the sound pressure level is the standard deviation of the levels measured at different points of the room. In this work, for a large number of measurements following UNE-EN ISO 140-4, we qualitatively analyzed the behaviour of the standard deviation of L1 and L2. The study of sound fields in enclosed spaces is very difficult: there is a wide variety of rooms with different sound fields depending on factors such as volume, geometry and materials. In general, we observe that the L1 and L2 standard deviations contain peaks and dips at single frequencies, independent of the characteristics of the rooms, which could correspond to critical frequencies of walls, floors and windows or even to temporal alterations of the sound field. Also, in most measurements according to UNE-EN ISO 140-4, a strong similarity between the L1 and L2 standard deviations is found. We believe that this result points to a coupled system between the source and receiving rooms; mainly at low frequencies, the shape of the L1 and L2 standard deviations is comparable to the standard deviation of the velocity level on a wall.
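For reference, the sketch below (a minimal illustration only; the microphone-position levels are invented and the function names are not taken from the standard) shows how an energy-based average level such as L1 or L2 and the standard deviation across measurement positions can be computed:

```python
import numpy as np

def average_spl(levels_db):
    """Energy-based average of sound pressure levels measured at
    several microphone positions, as used for L1 and L2 in ISO 140-4."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

def spatial_std(levels_db):
    """Standard deviation of the levels across microphone positions,
    a simple measure of the spatial variation of the sound field."""
    return np.std(np.asarray(levels_db, dtype=float), ddof=1)

# Illustrative levels (dB) at five positions in the source room, one band
l1_positions = [92.1, 93.4, 91.8, 94.0, 92.7]
print(f"L1 (energy average): {average_spl(l1_positions):.1f} dB")
print(f"spatial std dev:     {spatial_std(l1_positions):.1f} dB")
```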

Abstract:

This dissertation, which covers most of my Ph.D. research work during 2001-2002, addresses the large-scale distribution of continental earthquakes in mainland China, the mechanism and statistical features of grouped strong earthquakes related to tidal triggering, some results in earthquake prediction obtained with correlation-analysis methods, and the implications of the two strong continental earthquakes that struck South Asia in 2001. Mainland China is the only continental sub-plate that is compressed by collision boundaries on two sides; within it, earthquakes are dispersed and distributed along seismic belts of different widths. The capability of the continental block boundaries to control strong earthquakes and seismic hazards is calculated and analyzed in this dissertation. By mapping the distribution of 31,282 ML earthquakes, I found that the depth of continental earthquakes depends on the tectonic setting: events on the boundaries of relatively intact blocks are deep, while those on newly developed ruptures are shallow. The average depth of earthquakes in the west of China is about 5 km greater than in the east. The western and southwestern rim of the Tarim Basin generated the deepest earthquakes in mainland China. The statistical results on the correlation between grouped M7 earthquakes and tidal stress show that strong events were modulated by tidal stress during active periods. Taking the Taiwan area as an example, the dependence of moderate events on the moon phase angle (D) is analyzed, which shows that the number of earthquakes in Taiwan when D is 50°, 50°+90° and 50°+180° is more than two standard deviations above the average frequency per degree, corresponding to the 4th, 12th and 19th solar days after the new moon. The probability of an earthquake striking the densely populated island of Taiwan on the 4th solar day is about four times that on other solar days. In the practice of earthquake prediction, I calculated and analyzed the temporal correlation between earthquakes in the Xinjiang, Qinghai-Tibet, west Yunnan and North China areas and those in their adjacent areas, and predicted at the end of 2000 that 2001-2003 would be a special time interval within which moderate to strong earthquakes would be more active in the west of China. What happened in 2001 partly validated the prediction. Within 10 months there were two great continental earthquakes in South Asia, i.e., the M7.8 event in India on Jan. 26 and the M8.1 event in China on Nov. 14, 2001, the largest earthquakes of the past 50 years for India and China respectively; there is no record of two such great earthquakes occurring in Asia within so short an interval. Several aspects should be considered in light of these two events: the influence of inadequate deployment of seismic stations on the precise location and focal-mechanism determination of strong earthquakes must be confronted; it is very important to introduce comparative seismology into seismic hazard analysis and earthquake prediction research; improvements or changes in real-time prediction of strong earthquakes using precursors are urgently needed; and methods need to be updated to protect the environment and historical relics in earthquake-prone areas.
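A minimal sketch of the kind of test described for the Taiwan data, assuming a simple count of events per moon-phase-angle bin compared against the mean plus two standard deviations; the bin width, synthetic angles and function name are illustrative, not the dissertation's actual code:

```python
import numpy as np

def phase_angle_anomalies(phase_angles_deg, bin_width=1.0, n_sigma=2.0):
    """Count earthquakes per moon-phase-angle bin and flag bins whose
    count exceeds the average frequency by more than n_sigma standard
    deviations (the criterion described in the abstract)."""
    bins = np.arange(0.0, 360.0 + bin_width, bin_width)
    counts, _ = np.histogram(np.asarray(phase_angles_deg) % 360.0, bins=bins)
    mean, sigma = counts.mean(), counts.std(ddof=1)
    flagged = np.where(counts > mean + n_sigma * sigma)[0]
    return counts, mean, sigma, bins[flagged]  # left edges of flagged bins

# Illustrative use with synthetic phase angles (degrees)
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 360.0, size=5000)
counts, mean, sigma, flagged_deg = phase_angle_anomalies(angles)
print(f"mean count per bin: {mean:.1f}, sigma: {sigma:.1f}")
print("bins exceeding mean + 2 sigma:", flagged_deg)
```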

Abstract:

Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimentally derived loss rates. The study focuses on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulations could generally reproduce the observations at the 2-sigma error-bar level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, excellent agreement within the 1-sigma standard deviation was obtained. An overestimation was also found in the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which has to be taken into account in the comparison. Sensitivity tests were performed with the box model in order to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally under high solar zenith angle conditions. In some cases, better agreement was achieved with faster photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.
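As a hedged illustration of the comparison criterion used above (agreement at the 1-sigma or 2-sigma level), the short sketch below checks whether model-observation differences fall within k times the observational uncertainty; the loss-rate and uncertainty values are invented:

```python
import numpy as np

def within_k_sigma(observed, modelled, obs_sigma, k=2.0):
    """Return a boolean array indicating where the model-observation
    difference lies within k times the observational uncertainty."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    obs_sigma = np.asarray(obs_sigma, dtype=float)
    return np.abs(modelled - observed) <= k * obs_sigma

# Illustrative loss rates in ppbv per sunlit hour (invented numbers)
match_obs = np.array([2.0, 4.5, 6.0, 3.2])
model     = np.array([2.4, 4.0, 5.1, 3.0])
sigma_obs = np.array([0.5, 0.6, 0.8, 0.4])
print("within 1 sigma:", within_k_sigma(match_obs, model, sigma_obs, k=1.0))
print("within 2 sigma:", within_k_sigma(match_obs, model, sigma_obs, k=2.0))
```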

Abstract:

We present results from an intercomparison program of CO2, δ(O2/N2) and δ13CO2 measurements from atmospheric flask samples. Flask samples are collected on a bi-weekly basis at the High Altitude Research Station Jungfraujoch in Switzerland for three European laboratories: the University of Bern, Switzerland, the University of Groningen, the Netherlands and the Max Planck Institute for Biogeochemistry in Jena, Germany. Almost 4 years of measurements of CO2, δ(O2/N2) and δ13CO2 are compared in this paper to assess the measurement compatibility of the three laboratories. While the average difference for the CO2 measurements between the laboratories in Bern and Jena meets the required compatibility goal as defined by the World Meteorological Organization, the standard deviation of the average differences between all laboratories is not within the required goal. However, the obtained annual trend and seasonalities are the same within their estimated uncertainties. For δ(O2/N2) significant differences are observed between the three laboratories. The comparison for δ13CO2 yields the least compatible results and the required goals are not met between the three laboratories. Our study shows the importance of regular intercomparison exercises to identify potential biases between laboratories and the need to improve the quality of atmospheric measurements.
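The compatibility assessment described above amounts to comparing the mean and standard deviation of inter-laboratory differences with a goal value; the sketch below is a minimal illustration of that calculation with invented CO2 values and a placeholder goal of 0.1 ppm (not necessarily the exact WMO figure for every species):

```python
import numpy as np

def pair_compatibility(lab_a, lab_b, goal):
    """Mean difference and standard deviation of the differences between
    two laboratories measuring the same flasks, compared with a
    compatibility goal (e.g. the WMO goal for the species in question)."""
    diff = np.asarray(lab_a, dtype=float) - np.asarray(lab_b, dtype=float)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    return mean_diff, sd_diff, abs(mean_diff) <= goal

# Illustrative co-located CO2 values in ppm (invented) and a placeholder goal
bern = [398.12, 399.05, 400.21, 401.33]
jena = [398.05, 399.18, 400.10, 401.40]
mean_diff, sd_diff, ok = pair_compatibility(bern, jena, goal=0.1)
print(f"mean difference: {mean_diff:+.3f} ppm, sd of differences: {sd_diff:.3f} ppm, "
      f"meets goal: {ok}")
```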

Abstract:

Precise measurements of ice-flow velocities are necessary for a proper understanding of the dynamics of glaciers and their response to climate change. We use stand-alone single-frequency GPS receivers for this purpose. They are designed to operate unattended for 1-3 years, allowing uninterrupted measurements for long periods with hourly temporal resolution. We present the system and illustrate its functioning using data from 9 GPS receivers deployed on Nordenskiöldbreen, Svalbard, for the period 2006-2009. The accuracy of the receivers is 1.62 m, based on the standard deviation of the estimated locations of a stationary reference station (NBRef) about their average. Both the location of NBRef and the observed flow velocities agree within one standard deviation with DGPS measurements. Periodicity (6, 8, 12, 24 h) in the NBRef data is largely explained by the atmospheric, mainly ionospheric, influence on the GPS signal. A (weighted) running average applied to the observed locations significantly reduces the standard deviation and removes high-frequency periodicities, but also reduces the temporal resolution. Results show annual average velocities varying between 40 and 55 m/yr at stations on the central flow-line. On weekly to monthly time-scales we observe a peak in the flow velocities (from 60 to 90 m/yr) at the beginning of July, related to increased melt-rates. No significant lag is observed in the timing of the maximum speed between different stations. This is likely due to the limited temporal resolution after averaging, in combination with the relatively small distance (max. ±13 km) between the stations.
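A minimal sketch (invented hourly positions, simplified definitions) of the two operations described above: estimating the position scatter of a stationary station from the standard deviation about its mean location, and applying an unweighted running average that suppresses sub-daily periodicities at the cost of temporal resolution:

```python
import numpy as np

def position_std(east_m, north_m):
    """Combined standard deviation of positions of a stationary station
    about their mean, as a simple measure of single-receiver accuracy."""
    e = np.asarray(east_m, dtype=float)
    n = np.asarray(north_m, dtype=float)
    return np.sqrt(np.var(e, ddof=1) + np.var(n, ddof=1))

def running_mean(x, window=24):
    """Simple (unweighted) running average; smooths sub-daily periodic
    noise at the cost of temporal resolution."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Illustrative hourly positions (metres, invented) with 12 h and 24 h noise
t = np.arange(24 * 30)
east = np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(0, 0.8, t.size)
north = np.sin(2 * np.pi * t / 24) + np.random.default_rng(2).normal(0, 0.8, t.size)
print(f"raw position std:      {position_std(east, north):.2f} m")
print(f"smoothed position std: {position_std(running_mean(east), running_mean(north)):.2f} m")
```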

Abstract:

Using the concept of 'orbital tuning', a continuous, high-resolution deep-sea chronostratigraphy has been developed spanning the last 300,000 yr. The chronology is developed using a stacked oxygen-isotope stratigraphy and four different orbital tuning approaches, each of which is based upon a different assumption concerning the response of the orbital signal recorded in the data. Each approach yields a separate chronology. The error measured by the standard deviation about the average of these four results (which represents the 'best' chronology) has an average magnitude of only 2500 yr. This small value indicates that the chronology produced is insensitive to the specific orbital tuning technique used. Excellent convergence between chronologies developed using each of five different paleoclimatological indicators (from a single core) is also obtained. The resultant chronology is also insensitive to the specific indicator used. The error associated with each tuning approach is estimated independently and propagated through to the average result. The resulting error estimate is independent of that associated with the degree of convergence and has an average magnitude of 3500 yr, in excellent agreement with the 2500-yr estimate. Transfer of the final chronology to the stacked record leads to an estimated error of +/-1500 yr. Thus the final chronology has an average error of +/-5000 yr.
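The error estimates above combine two ingredients: the scatter of the four tuned chronologies about their average, and the propagation of independent per-method errors to that average. The sketch below illustrates both with invented ages; it is not the authors' procedure:

```python
import numpy as np

# Ages (kyr) assigned to the same stratigraphic levels by four different
# orbital-tuning approaches (invented values for illustration).
ages_kyr = np.array([
    [11.0, 11.4, 10.8, 11.2],
    [71.5, 72.3, 70.9, 71.8],
    [128.0, 126.5, 129.1, 127.4],
])

best_chronology = ages_kyr.mean(axis=1)   # average of the four results
spread = ages_kyr.std(axis=1, ddof=1)     # scatter about that average
print("best ages (kyr):", best_chronology)
print("scatter (kyr):  ", spread)
print(f"average scatter: {spread.mean():.2f} kyr")

# Independent per-method error estimates can be propagated to the mean
# in quadrature: sigma_mean = sqrt(sum(sigma_i**2)) / n for an unweighted mean.
method_errors = np.array([3.0, 4.0, 3.5, 3.5])  # kyr, illustrative
propagated = np.sqrt(np.sum(method_errors**2)) / method_errors.size
print(f"propagated error of the mean: {propagated:.2f} kyr")
```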

Abstract:

Introduction: Statins have proven efficacy in the treatment of dyslipidemias. However, these molecules are associated with muscle-related side effects. Since these effects can have serious consequences for patients' lives, and may also underlie the non-adherence of a substantial proportion of patients receiving a statin, a pharmacogenomic tool that could identify a priori the patients likely to develop statin-induced muscular side effects (ESMIS) would be very useful. The objective of the present study was therefore to determine the monetary value of such a tool, since this aspect would be an important component of its commercialization and its adoption in routine medical practice. Methods: A first simulation was carried out using a Markov model, but it could not account for all the desired elements. Discrete-event simulation was therefore used to study a population of 100,000 hypothetical patients newly initiated on a statin. This virtual population was duplicated to obtain two identical patient cohorts. One cohort received the test and an appropriate treatment, while the other cohort received the current standard treatment, i.e., a statin. The simulation model followed the two cohorts over a 15-year period, accounting for the risk of fatal or non-fatal cardiovascular disease (CVD), of ESMIS, and of death from causes other than CVD. The outcomes incurred (CVD, ESMIS, mortality) by these two populations and the associated costs were then compared. Finally, the experiment was repeated 25 times to assess the stability of the results, and various sensitivity analyses were performed. Results: The mean difference in the costs of CVD and ESMIS treatment, loss of human capital and medication was $28.89 between the two cohorts over the full duration of the experiment (15 years), with higher costs in the cohort that did not receive the test. However, the standard deviation of the mean was considerable ($416.22), calling into question the validity of the monetary estimate of the pharmacogenomic test. Moreover, this value was strongly influenced by the proportion of patients predisposed to ESMIS, by the efficacy and cost of alternative lipid-lowering agents, and by the costs of treating ESMIS and the value assigned to one additional month of life. Conclusion: These results suggest that a genetic test for predisposition to ESMIS would be worth approximately $30 for patients about to start statin therapy. However, the uncertainty surrounding this value is very large, and several variables for which real-world data are not available in the literature have a strong influence on it. The true value of this genetic tool can therefore only be determined once the model is updated with more precise data on the prevalence of ESMIS and their impact on treatment adherence, and then analysed with a larger number of patients.
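As a hedged illustration of the kind of cohort comparison described above, the sketch below runs a deliberately simplified Monte Carlo version (not the authors' discrete-event model): each replication compares per-patient costs with and without the test, and the 25 replications yield a mean difference and its standard deviation. All parameters are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

def replicate_cost_difference(n_patients=100_000, p_predisposed=0.10,
                              cost_esmis=500.0, cost_alt_drug_extra=50.0):
    """One replication of a simplified cohort comparison: every patient
    appears in both arms; in the 'test' arm, predisposed patients avoid
    ESMIS costs but pay extra for an alternative drug.  All parameters
    are illustrative placeholders, not the study's inputs."""
    predisposed = rng.random(n_patients) < p_predisposed
    develops_esmis = predisposed & (rng.random(n_patients) < 0.5)
    cost_no_test = np.where(develops_esmis, cost_esmis, 0.0)
    cost_with_test = np.where(predisposed, cost_alt_drug_extra, 0.0)
    return (cost_no_test - cost_with_test).mean()  # saving per patient

diffs = np.array([replicate_cost_difference() for _ in range(25)])
print(f"mean cost difference per patient: ${diffs.mean():.2f}")
print(f"std dev across replications:      ${diffs.std(ddof=1):.2f}")
```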

Abstract:

The clinical syndrome of heart failure is one of the leading causes of hospitalisation and mortality in older adults. Due to ageing of the general population and improved survival from cardiac disease, the prevalence of heart failure is rising. Despite the fact that the majority of patients with heart failure are aged over 65 years, many with multiple co-morbidities, the association between cognitive impairment and heart failure has received relatively little research interest compared to other aspects of cardiac disease. The presence of concomitant cognitive impairment has implications for the management of patients with heart failure in the community. There are many evidence-based pharmacological therapies used in heart failure management which obviously rely on patient education regarding compliance. Also central to the treatment of heart failure is patient self-monitoring for signs indicative of clinical deterioration, which may prompt them to seek medical assistance or to initiate a therapeutic intervention, e.g. taking an additional diuretic. Adherence and self-management may be jeopardised by cognitive impairment. Formal diagnosis of cognitive impairment requires evidence of abnormalities on neuropsychological testing (typically a result ≥1.5 standard deviations below the age-standardised mean) in at least one cognitive domain. Cognitive impairment is associated with an increased risk of dementia, and people with mild cognitive impairment develop dementia at a rate of 10-15% per year, compared with a rate of 1-2% per year in healthy controls.1 Cognitive impairment has been reported in a variety of cardiovascular disorders. It is well documented among patients with hypertension, atrial fibrillation and coronary artery disease, especially after coronary artery bypass grafting. This background is relevant to the study of patients with heart failure as many, if not most, have a history of one or more of these co-morbidities. A systematic review of the literature to date has shown wide variation in the reported prevalence of cognitive impairment in heart failure. This variation probably reflects small study sample sizes, differences in the heart failure populations studied (inpatients versus outpatients), the neuropsychological tests employed and the threshold values used to define cognitive impairment. The main aim of this study was to identify the prevalence of cognitive impairment in a representative sample of heart failure patients and to examine whether this association was due to heart failure per se rather than to the common cardiovascular co-morbidities that often accompany it, such as atherosclerosis and atrial fibrillation. Of the 817 potential participants screened, 344 were included in this study. The study cohort included 196 patients with HF, 61 patients with ischaemic heart disease and no HF, and 87 healthy control participants. The HF cohort consisted of 70 patients with HF and coronary artery disease in sinus rhythm, 51 patients with HF and no coronary artery disease in sinus rhythm, and 75 patients with HF and atrial fibrillation. All patients with HF had evidence of HF-REF with an LVEF <45% on transthoracic echocardiography. The majority of the cohort was male and elderly. HF patients with AF were more likely to have multiple co-morbidities. Patients recruited from cardiac rehabilitation clinics had proven coronary artery disease, no clinical HF and an LVEF >55%.
The ischaemic heart disease group were relatively well matched to the healthy controls, who had no previous diagnosis of any chronic illness, were prescribed no regular medication and also had an LVEF >55%. All participants underwent the same baseline investigations and there were no obvious differences in baseline demographics between the cohorts. All 344 participants attended two study visits. Baseline investigations, including physiological measurements, electrocardiography, echocardiography and laboratory testing, were completed at the initial screening visit. Participants were then invited to attend their second study visit within 10 days of the screening visit. 342 participants completed all neuropsychological assessments (2 participants failed to complete 1 questionnaire). A comprehensive battery of neuropsychological assessment tools was administered in the 90-minute study visit. These included three global cognitive screening tools (the Mini-Mental State Examination, the Montreal Cognitive Assessment and the Repeatable Battery for the Assessment of Neuropsychological Status) and additional measures of executive function (an area we believe has been understudied to date). In total, 9 cognitive tests were performed. These were generally well tolerated. Data were also collected using quality of life questionnaires and health status measures. In addition, carers of the study participants were asked to complete a measure of caregiver strain and an informant questionnaire on cognitive decline. The prevalence of cognitive impairment varied significantly depending on the neuropsychological assessment tool used and the cut-off value used to define cognitive impairment. Despite this, all assessment tools showed the same pattern of results, with patients with heart failure and atrial fibrillation having poorer cognitive performance than those with heart failure in sinus rhythm. Cognitive impairment was also more common in patients with cardiac disease (either coronary artery disease or heart failure) than in age-, sex- and education-matched healthy controls, even after adjustment for common vascular risk factors.
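A minimal sketch of the diagnostic criterion quoted above (a result at least 1.5 standard deviations below the age-standardised mean in at least one domain); the normative mean and SD used here are illustrative:

```python
def impaired(score, norm_mean, norm_sd, threshold_sd=1.5):
    """Flag a test result that lies at or below the age-standardised
    normative mean minus `threshold_sd` standard deviations."""
    z = (score - norm_mean) / norm_sd
    return z <= -threshold_sd, z

# Illustrative values: a raw score of 82 against norms with mean 100, SD 10
flag, z = impaired(82, norm_mean=100.0, norm_sd=10.0)
print(f"z-score: {z:.2f}, meets the 1.5-SD-below-mean criterion: {flag}")
```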

Abstract:

This study uses the global Ocean Topography Experiment (TOPEX)/Jason-1 altimeters' time series to estimate the 13-yr trend in sea surface height anomaly. These trends are estimated at each grid point by two methods: one fits a straight line to the time series and the other is based on the difference in the average height between the two halves of the time series. In both cases the trend shows large regional variability, mostly where the intense western boundary currents turn. The authors hypothesize that the regional variability of the sea surface height trends leads to changes in the local geostrophic transport. This in turn affects the instability-related processes that generate mesoscale eddies and enhances the Rossby wave signals. This hypothesis is verified by estimates of the trend of the amplitude of the filtered sea surface height anomaly that contains the spectral bands associated with Rossby waves and mesoscale eddies. The authors found a predominantly positive tendency in the amplitude of Rossby waves and eddies, which suggests that, on average, these events are becoming more energetic. In some regions, the variation in amplitude over 13 yr is comparable to the standard deviation of the data and is statistically significant according to both methods employed in this study. It is plausible that in this case the energy is transferred from the mean currents to the waves and eddies through barotropic and baroclinic instability processes that are more pronounced in the western boundary current extension regions. If these heat storage patterns and trends are confirmed in longer time series, then it will be justified to argue that the warming trend of the last century provides the energy that amplifies both Rossby waves and mesoscale eddies.
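The two trend estimators described above are straightforward to reproduce; the sketch below applies both (a least-squares straight-line fit, and the difference between the average heights of the two halves of the record) to an invented sea surface height anomaly series:

```python
import numpy as np

def trend_linear_fit(t_years, ssh):
    """Trend from a straight-line (least-squares) fit to the time series."""
    slope, _ = np.polyfit(t_years, ssh, 1)
    return slope  # same units as ssh, per year

def trend_half_difference(t_years, ssh):
    """Trend from the difference of the mean height of the second and
    first halves of the record, divided by the time between the
    midpoints of the two halves."""
    n = len(ssh) // 2
    dt = np.mean(t_years[n:]) - np.mean(t_years[:n])
    return (np.mean(ssh[n:]) - np.mean(ssh[:n])) / dt

# Illustrative 13-year monthly series (invented): 3 mm/yr trend plus noise
t = np.arange(13 * 12) / 12.0
ssh = 0.003 * t + np.random.default_rng(3).normal(0.0, 0.02, t.size)  # metres
print(f"linear-fit trend:      {trend_linear_fit(t, ssh) * 1000:.2f} mm/yr")
print(f"half-difference trend: {trend_half_difference(t, ssh) * 1000:.2f} mm/yr")
```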