Abstract:
The isotopic fractionation of hydrogen during the biosynthesis of alkenones produced by marine haptophyte algae has been shown to depend on salinity and, as such, the hydrogen isotopic composition of alkenones is emerging as a palaeosalinity proxy. The relationship between fractionation and salinity has previously only been determined during exponential growth, whilst it is not yet known in which growth phases natural haptophyte populations predominantly exist. We have therefore determined the relationship between the fractionation factor, αalkenones–water, and salinity for C₃₇ alkenones produced in different growth phases of batch cultures of the major alkenone-producing coastal haptophytes Isochrysis galbana (strain CCMP 1323) and Chrysotila lamellosa (strain CCMP 1307) over a salinity range from ca. 10 to ca. 35. αalkenones–water was similar in both species, ranging over 0.841–0.900 for I. galbana and 0.838–0.865 for C. lamellosa. A strong (0.85 ≤ R² ≤ 0.97; p < 0.0001) relationship between salinity and fractionation factor was observed in both species at all growth phases investigated. This suggests that alkenone δD has the potential to be used as a salinity proxy in coastal areas where haptophyte communities are dominated by these coastal species. However, there was a marked difference in the sensitivity of αalkenones–water to salinity between different growth phases: in the exponential growth phase of I. galbana, αalkenones–water increased by 0.0019 per salinity unit (S⁻¹), but was less sensitive at 0.0010 S⁻¹ and 0.0008 S⁻¹ during the stationary and decline phases, respectively. Similarly, in C. lamellosa αalkenones–water increased by 0.0010 S⁻¹ in the early stationary phase and by 0.0008 S⁻¹ during the late stationary phase. Assuming the shift in sensitivity of αalkenones–water to salinity observed at the end of exponential growth in I. galbana is similar in other alkenone-producing species, the predominant growth phase of natural populations of haptophytes will affect the sensitivity of the alkenone salinity proxy. The proxy is likely to be most sensitive to salinity when alkenones are produced in a state similar to exponential growth.
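The growth-phase dependence described above can be illustrated as a minimal sketch: inverting a linear calibration α = slope·S + intercept to estimate salinity. The slopes are the per-salinity-unit sensitivities reported for I. galbana; the intercept value is a hypothetical placeholder, since the abstract reports slopes, not intercepts.

```python
# Minimal sketch of growth-phase-dependent alkenone salinity calibrations.
# Slopes are the sensitivities reported for I. galbana; the intercept used
# below is a hypothetical placeholder for illustration only.
SLOPES = {"exponential": 0.0019, "stationary": 0.0010, "decline": 0.0008}

def salinity_from_alpha(alpha, slope, intercept):
    """Invert the linear calibration alpha = slope * S + intercept."""
    return (alpha - intercept) / slope

# The same measured alpha implies very different salinities depending on
# which growth-phase calibration is assumed (same hypothetical intercept).
estimates = {phase: salinity_from_alpha(0.88, m, 0.85) for phase, m in SLOPES.items()}
```

A flatter slope (stationary or decline phase) also amplifies the salinity error caused by a given analytical error in α, which is why the proxy is most sensitive under exponential-growth-like conditions.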
Abstract:
In the framework of the OECD/NEA project on Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation, and Safety Analysis of LWRs, several approaches and codes are being used to deal with the exercises proposed in Phase I, “Specifications and Support Data for Neutronics Cases.” At UPM, our research group treats these exercises with sensitivity calculations and the “sandwich formula” to propagate cross-section uncertainties. Two different codes are employed to calculate the sensitivity coefficients of k_eff to cross sections in criticality calculations: MCNPX-2.7e and SCALE-6.1. The former uses the Differential Operator Technique and the latter the Adjoint-Weighted Technique. In this paper, the main results for exercise I-2, “Lattice Physics,” are presented for PWR criticality calculations. These calculations are done for a TMI fuel assembly at four different states: HZP-Unrodded, HZP-Rodded, HFP-Unrodded, and HFP-Rodded. The results of the two codes are presented and compared. The comparison shows good agreement between SCALE-6.1 and MCNPX-2.7e in the uncertainty obtained from the sensitivity coefficients calculated by both codes. Differences are found when the sensitivity profiles are analysed, but they do not lead to differences in the uncertainty.
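The "sandwich formula" mentioned above propagates covariance data through sensitivity coefficients: var(R) ≈ S·C·Sᵀ. A minimal sketch follows; the sensitivity and covariance values are illustrative placeholders, not benchmark data.

```python
import math

# Sketch of the "sandwich rule" for cross-section uncertainty propagation:
# var(k_eff) ~ S . C . S^T, with S the vector of sensitivity coefficients and
# C the cross-section covariance matrix. All numbers are illustrative.
def sandwich_uncertainty(S, C):
    """Standard uncertainty of a response from sensitivities S and covariance C."""
    n = len(S)
    variance = sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
    return math.sqrt(variance)

S = [0.3, -0.1, 0.05]                            # hypothetical sensitivities
C = [[1e-4, 0, 0], [0, 4e-4, 0], [0, 0, 9e-4]]   # hypothetical (diagonal) covariance
u = sandwich_uncertainty(S, C)
```

In practice C comes from evaluated covariance libraries and S from the Differential Operator or Adjoint-Weighted techniques named above.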
Abstract:
In this paper we present a global overview of the recent study carried out in Spain for the new hazard map, whose final goal is the revision of the Building Code in our country (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geografico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from the public institutions involved in seismic hazard. The PSHA method (Probabilistic Seismic Hazard Assessment) has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase, the inputs were prepared, essentially: 1) an updated project catalogue, homogenized to Mw; 2) proposed zoning models and source characterization; 3) Ground Motion Prediction Equations (GMPEs) calibrated with actual data, and a local model developed with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results was carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, including nodes for quantifying uncertainties corresponding to: 1) the method for estimating hazard (zoning and zoneless); 2) zoning models; 3) the GMPE combinations used; and 4) the regression method for estimating source parameters. 
In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters and the maximum magnitude for each zone have also been considered through probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and the zones where the results are robust.
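The Monte Carlo treatment of aleatory uncertainty described above can be sketched with a toy example: sampling an uncertain Gutenberg-Richter b-value to obtain a distribution of annual exceedance rates. All parameter values here are illustrative, not taken from the study.

```python
import random

# Toy sketch of Monte Carlo propagation of an aleatory recurrence parameter:
# the Gutenberg-Richter relation log10 N(>=M) = a - b*M with an uncertain b.
# All values are illustrative placeholders.
def sample_rate(a_value, b_mean, b_sd, magnitude, rng):
    """One sampled annual rate of events >= magnitude, with b ~ N(b_mean, b_sd)."""
    b = rng.gauss(b_mean, b_sd)
    return 10 ** (a_value - b * magnitude)

rng = random.Random(0)
rates = sorted(sample_rate(4.0, 1.0, 0.1, 6.0, rng) for _ in range(10_000))
median_rate = rates[len(rates) // 2]   # central estimate of the rate of M >= 6
```

The spread of `rates` around the median is what the COV maps mentioned above summarize spatially.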
Abstract:
Both in industry and research, the quality control of micrometric manufactured parts is based on the measurement of parameters whose traceability is sometimes difficult to guarantee. For some of these parts, confocal microscopy shows great aptitude for characterizing a measurand qualitatively and quantitatively, allowing the acquisition of 2D and 3D images that are easily manipulated. Nowadays, this equipment is manufactured by many different brands, each of them claiming a resolution probably not in accordance with its real performance. The Laser Center (Technical University of Madrid) has a confocal microscope to verify the dimensions of the micro-machined parts in its own research projects. The present study aims to confirm that the magnitudes obtained are true and reliable. To achieve this, a methodology for confocal microscope calibration is proposed, together with an experimental phase for dimensionally evaluating the equipment at four different standard positions, with its seven magnifications and the six objective lenses that the equipment currently has, in the x–y and z axes. From the results, the uncertainty will be estimated, along with an analysis of the effect of the different magnifications with each of the objective lenses.
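A calibration of this kind typically combines a repeatability (type A) term with the standard's certificate term in quadrature, GUM-style. The sketch below uses illustrative numbers, not the study's data.

```python
import math
import statistics

# Sketch: combined standard uncertainty of a confocal step-height measurement,
# GUM-style. The readings and the certificate term are illustrative values.
readings_um = [10.02, 9.98, 10.01, 10.00, 9.99]        # repeated measurements (um)
u_repeat = statistics.stdev(readings_um) / math.sqrt(len(readings_um))  # type A
u_standard = 0.005                                      # hypothetical certificate term
u_combined = math.sqrt(u_repeat**2 + u_standard**2)     # combine in quadrature
```

Repeating this over each magnification/objective combination is what allows the effect analysis described in the abstract.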
Abstract:
This thesis presents a comprehensive quality control procedure for photovoltaic plants, covering from the initial phase of estimating energy production expectations to the surveillance of the installation's performance once it is in operation. This procedure reduces the uncertainty associated with the plants' behaviour and increases their long-term reliability, thereby optimizing their performance. Photovoltaic technology has evolved enormously in recent years, making photovoltaic plants capable of producing energy at fully competitive prices in relation to other energy sources. This fact increases the requirements on the performance and reliability of these facilities. To meet this demand, it is necessary to adapt the quality control procedures applied and to develop new methods that provide a more complete knowledge of the state of health of the plants and make it possible to maintain surveillance over them throughout their lifetime. In addition, the current narrow operating margins require methods capable of estimating energy production with the lowest possible uncertainty during the design phase. 
The quality control procedure presented in this work starts from previous protocols oriented to the commissioning phase of a photovoltaic installation and complements them with methods applicable to the operation phase, paying particular attention to the main problems that arise in photovoltaic plants over their lifetime (hot spots, dust impact, ageing, etc.). It also incorporates a protocol for the surveillance and analysis of installation performance based on its monitoring data, ranging from checking the validity of the recorded data itself to the detection and diagnosis of failures, which allows an automated and detailed knowledge of the plants. This procedure is oriented to facilitating operation and maintenance tasks, so as to guarantee a high operational availability of the installation. Returning to the initial phase of calculating production expectations, the data recorded in the plants are used to improve the methods for estimating the incident irradiation, which is the component that adds the most uncertainty to the modelling process. The development and application of this quality control procedure have been carried out in 39 large photovoltaic plants, totalling 250 MW, distributed across several countries in Europe and Latin America.
Abstract:
Owls and other animals, including humans, use the difference in arrival time of sounds between the ears to determine the direction of a sound source in the horizontal plane. When an interaural time difference (ITD) is conveyed by a narrowband signal such as a tone, human beings may fail to derive the direction represented by that ITD. This is because they cannot distinguish the true ITD contained in the signal from its phase equivalents, ITD ± nT, where T is the period of the stimulus tone and n is an integer. This uncertainty is called phase-ambiguity. All ITD-sensitive neurons in birds and mammals respond to an ITD and its phase equivalents when the ITD is contained in narrowband signals. It is not known, however, whether these animals show phase-ambiguity in the localization of narrowband signals. The present work shows that barn owls (Tyto alba) experience phase-ambiguity in the localization of tones delivered by earphones. We used sound-induced head-turning responses to measure the sound-source directions perceived by two owls. In both owls, head-turning angles varied as a sinusoidal function of ITD. One owl always pointed to the direction represented by the smaller of the two ITDs, whereas the second owl always chose the direction represented by the larger ITD (i.e., ITD − T).
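The family of phase-equivalent ITDs described above is easy to enumerate; this is a small sketch with an illustrative tone frequency and ITD, not values from the experiment.

```python
# Sketch of phase-ambiguity: for a pure tone of period T, a measured
# interaural time difference (ITD) is indistinguishable from ITD + n*T.
def phase_equivalents(itd_us, freq_hz, n_range=(-2, 2)):
    """ITDs (microseconds) perceptually equivalent to itd_us for a tone of freq_hz."""
    period_us = 1e6 / freq_hz
    return [itd_us + n * period_us for n in range(n_range[0], n_range[1] + 1)]

# e.g. a 50 microsecond ITD carried by a 5 kHz tone (T = 200 microseconds)
equivalents = phase_equivalents(50, 5000)
```

The second owl's choice of the direction represented by ITD − T corresponds to selecting the n = −1 member of this family.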
Abstract:
Orbital tuning is central for ice core chronologies beyond annual layer counting, which is available back to 60 ka (i.e. thousands of years before 1950) for Greenland ice cores. While several complementary orbital tuning tools have recently been developed using δ¹⁸Oatm, δO₂/N₂ and air content with different orbital targets, quantifying their uncertainties remains a challenge. Indeed, the exact processes linking variations of these parameters, measured in the air trapped in ice, to their orbital targets are not yet fully understood. Here, we provide new series of δO₂/N₂ and δ¹⁸Oatm data encompassing Marine Isotopic Stage (MIS) 5 (between 100 and 160 ka) and the oldest part (340–800 ka) of the East Antarctic EPICA Dome C (EDC) ice core. For the first time, the measurements over MIS 5 allow an inter-comparison of δO₂/N₂ and δ¹⁸Oatm records from three East Antarctic ice core sites (EDC, Vostok and Dome F). This comparison highlights some site-specific δO₂/N₂ variations. This observation, the evidence of a 100 ka periodicity in the δO₂/N₂ signal and the difficulty of identifying extrema and mid-slopes in δO₂/N₂ increase the uncertainty associated with the use of δO₂/N₂ as an orbital tuning tool, now calculated to be 3–4 ka. When combining records of δ¹⁸Oatm and δO₂/N₂ from Vostok and EDC, we find a loss of orbital signature for these two parameters during periods of minimum eccentricity (∼ 400 ka, ∼ 720–800 ka). Our data set reveals a time-varying offset between δO₂/N₂ and δ¹⁸Oatm records over the last 800 ka that we interpret as variations in the lagged response of δ¹⁸Oatm to precession. The largest offsets are identified during Termination II, MIS 8 and MIS 16, corresponding to periods of destabilization of the Northern polar ice sheets. We therefore suggest that the occurrence of Heinrich-like events influences the response of δ¹⁸Oatm to precession.
Abstract:
This study developed and tested a model of job uncertainty for survivors and victims of downsizing. Data were collected from three samples of employees in a public hospital, each representing one of three phases of the downsizing process: immediately before the announcement of the redeployment of staff, during the implementation of the downsizing, and towards the end of the official change programme. As predicted, levels of job uncertainty and personal control had a direct relationship with emotional exhaustion and job satisfaction. In addition, there was evidence to suggest that personal control mediated the relationship between job uncertainty and employee adjustment, a pattern of results that varied across each of the three phases of the change event. From the perspective of the organization's overall climate, it was found that levels of job uncertainty, personal control and job satisfaction improved and/or stabilized over the downsizing process. During the implementation phase, survivors experienced higher levels of personal control than victims, but both groups of employees reported similar levels of job uncertainty. We discuss the implications of our results for strategically managing uncertainty during and after organizational change.
Abstract:
With luminance gratings, psychophysical thresholds for detecting a small increase in the contrast of a weak ‘pedestal’ grating are 2–3 times lower than for detection of a grating when the pedestal is absent. This is the ‘dipper effect’ – a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a ‘disparity dipper’: are thresholds for disparity modulation (corrugated surfaces) facilitated by the presence of a weak disparity-modulated pedestal? We used a 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.3 or 0.6 c/deg) of a random texture at various pedestal levels. In the first experiment, a clear dipper was found: thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0 or 180 deg) was varied from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, which thus improves performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect was weak or absent. Monte Carlo simulations showed that the influence of uncertainty could account well for the results of both experiments. A corollary is that the visual depth response to small disparities is probably linear, with no threshold-like nonlinearity.
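The uncertainty account can be illustrated with a toy Monte Carlo sketch (not the authors' exact model): a 2AFC observer who takes the max over two candidate phase channels performs better when a pedestal tags the relevant channel, because the pedestal removes phase uncertainty.

```python
import random

# Toy signal-detection sketch of the uncertainty account of the dipper:
# the observer maxes over two candidate phase channels (unit-variance noise);
# a pedestal marks the relevant channel, reducing uncertainty. Illustrative
# only; parameter values are not taken from the study.
def percent_correct(signal, pedestal, n_trials=50_000, seed=1):
    rng = random.Random(seed)
    g = rng.gauss
    correct = 0
    for _ in range(n_trials):
        sig_interval = max(pedestal + signal + g(0, 1), g(0, 1))
        null_interval = max(pedestal + g(0, 1), g(0, 1))
        correct += sig_interval > null_interval
    return correct / n_trials

pc_uncertain = percent_correct(signal=0.5, pedestal=0.0)  # relevant channel unknown
pc_pedestal  = percent_correct(signal=0.5, pedestal=3.0)  # pedestal tags the channel
```

The same weak signal yields a higher percent correct with the pedestal, i.e. a performance facilitation without any threshold-like nonlinearity in the transducer.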
Abstract:
Based on theoretical considerations, an explanation for the temperature dependence of the thermal expansion and the bulk modulus is proposed, and a new equation of state (EoS) is derived. Additionally, a physical explanation for the latent heat of fusion is presented. These theoretical predictions are tested against experiments on highly symmetrical monatomic structures.

The volume is not an independent variable and must be broken down into its fundamental components when the relationships to pressure and temperature are defined. Using a zero-pressure, zero-temperature reference frame, the initial parameters are defined: the volume at zero pressure and temperature [V°], the bulk modulus at zero temperature [K°] and the volume coefficient of thermal expansion at zero pressure [α°].

The newly derived EoS is tested against experiments on perovskite and epsilon iron. The root-mean-square deviations (RMSD) of the residuals of the molar volume, pressure and temperature are within the range of the experimental uncertainty. Separating the experiments into 200 K ranges, the new EoS was compared to the most widely used finite-strain, interatomic-potential and empirical isothermal EoSs, namely the Birch-Murnaghan, the Vinet and the Roy-Roy, respectively. Correlation coefficients, RMSDs of the residuals and the Akaike Information Criterion were used to evaluate the fits. Based on these fitting parameters, the new p-V-T EoS is superior in every temperature range to the investigated conventional isothermal EoSs.

The new EoS for epsilon iron reproduces the Preliminary Reference Earth Model (PREM) densities at 6100-7400 K, indicating that the presence of light elements might not be necessary to explain the Earth's inner core densities. It is suggested that the latent heat of fusion supplies the energy required to overcome the viscous drag resistance of the atoms. The calculated energies for melts formed from highly symmetrical packing arrangements correlate very well with experimentally determined latent heat values.

The optical investigation of carbonado diamond is also part of the dissertation. The first complete infrared (FTIR) absorption spectra collected for carbonado diamond confirm an interstellar origin for these most enigmatic diamonds, known as carbonado.
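The model-comparison metrics named above (RMSD and the Akaike Information Criterion for least-squares fits) can be sketched briefly; the residuals below are illustrative, not the study's data.

```python
import math

# Sketch: comparing equation-of-state fits by RMSD and the Akaike Information
# Criterion (least-squares form: n*ln(RSS/n) + 2k). Residuals are illustrative.
def rmsd(residuals):
    """Root-mean-square deviation of fit residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def aic(residuals, n_params):
    """AIC for a least-squares fit with n_params free parameters."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params
```

AIC penalizes extra free parameters, so an EoS can win on RMSD alone yet lose on AIC if the improvement does not justify its added parameters.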
Abstract:
Understanding who evacuates and who does not has been one of the cornerstones of research on the pre-impact phase of both natural and technological hazards. Its history is rich in descriptive illustrations focusing on lists of characteristics of those who flee to safety. Early models of evacuation focused almost exclusively on the relationship between whether warnings were heard, and ultimately believed, and evacuation behavior. How people came to believe these warnings, and even how they interpreted them, were not incorporated; in fact, the individual seemed almost removed from the picture, with analysis focusing exclusively on external measures.

This study built and tested a more comprehensive model of evacuation that centers on the decision-making process rather than decision outcomes. The model focused on three important factors that alter and shape the evacuation decision-making landscape: individual-level indicators, which exist independently of the hazard itself and act as cultural lenses through which information is heard, processed and interpreted; hazard-specific variables that relate directly to the specific hazard threat; and risk perception. The ultimate goal is to determine what factors influence the evacuation decision-making process. Using data collected for 1998's Hurricane Georges, logistic regression models were used to evaluate how well the three main factors help our understanding of how individuals come to their decisions either to flee to safety during a hurricane or to remain in their homes.

The results of the logistic regression were significant, emphasizing that the three broad types of factors tested in the model influence the decision-making process. Conclusions drawn from the data analysis focus on how decision-making frames are different for those who can be designated “evacuators” and for those in evacuation zones.
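A logistic model of the kind used above maps the three factor groups onto an evacuation probability via the logistic function. The sketch below is illustrative: the coefficients are hypothetical placeholders, not estimates from the Hurricane Georges data.

```python
import math

# Sketch: a logistic model of evacuation probability from the three factor
# groups described above (individual-level indicators, hazard-specific
# variables, risk perception). All coefficients are hypothetical placeholders.
def evacuation_probability(individual, hazard, risk_perception,
                           b0=-1.0, b1=0.8, b2=0.6, b3=1.2):
    """P(evacuate) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3)))."""
    z = b0 + b1 * individual + b2 * hazard + b3 * risk_perception
    return 1.0 / (1.0 + math.exp(-z))
```

Fitted coefficients of this form are what tell the analyst how much each factor group shifts the odds of fleeing versus staying.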
Abstract:
Abstract: Intensive care physicians use both medical and surgical interventions in the management of traumatic shock. The role of vasopressors in this management is controversial. While American guidelines consider vasopressors to be contraindicated, some European experts encourage their use to reduce reliance on intravenous fluids. Before designing a clinical trial, it is important to understand the current practice against which an experimental intervention would be compared, and to know the level of uncertainty in the literature surrounding the research question. Chapter 2 of this work presents an observational study conducted in a regional trauma centre in Quebec. This study documents the resuscitation practices adopted by trauma teams in 2013, particularly the use of intravenous fluids and vasopressors. The results show that vasopressors were used in more than 40% of patients, particularly victims of traumatic brain injury (OR 10.2, 95% CI 2.7-38.5). Moreover, vasopressors were administered in the early phases of resuscitation, i.e. before a large volume of fluids had been given. Chapter 3 presents a systematic review of the early use of vasopressors in trauma care. The MEDLINE, EMBASE, CENTRAL and ClinicalTrials.gov databases were searched, as well as abstracts presented at major trauma conferences since 2005. Study selection and data extraction were performed in duplicate. No interpretable data could be extracted from the observational studies, and the only clinical trial identified was underpowered (RR of mortality with vasopressors 1.24, 95% CI 0.64-2.43). This synthesis highlights the scientific uncertainty about the role of vasopressors in trauma care. 
Vasopressors have potentially important benefits, since among other things they allow close haemodynamic support of patients. On the other hand, they also carry a strong potential for harm. They are used frequently, despite the absence of data on their risks and benefits. These findings clearly establish the clinical relevance and the ethical soundness of a clinical trial on the role of vasopressors in the early management of trauma victims.
Abstract:
The reinforcer devaluation paradigm has been regarded as a canonical paradigm for detecting habit-like behavior in animal and human instrumental learning. Though less studied, avoidance situations set a scenario where habit-like behavior may be of great experimental and clinical interest. On the other hand, proactive intolerance of uncertainty has been shown to be a factor facilitating responses in uncertain situations. Thus, avoidance situations in which uncertainty is favoured may be taken as a relevant paradigm for examining the role of intolerance of uncertainty as a facilitatory factor for habit-like behavior. In our experiment we used a free-operant discriminative avoidance procedure to implement a devaluation paradigm. Participants learned to avoid an aversive noise presented either to the right or to the left ear by pressing two different keys. After a devaluation phase in which the volume of one of the noises was reduced, they went through a test phase identical to the avoidance phase except that the noise was never administered. Sensitivity to reinforcer devaluation was examined by comparing the response rate to the cue associated with the devalued reinforcer with that to the cue associated with the still-aversive reinforcer. The results showed that intolerance of uncertainty was positively associated with insensitivity to reinforcer devaluation. Finally, we discuss the theoretical and clinical implications of the habit-like behavior obtained in our avoidance procedure.
Abstract:
Many mental disorders are characterised by the presence of compulsions and uncontrollable habits. Most studies on habit learning, both in animals and in humans, are based on positive reinforcement paradigms. However, the compulsions and habits involved in some mental disorders may be better understood as avoidance behaviours, which involve some peculiarities, such as anxiety states, that have been shown to promote habitual responses. Consequently, we studied habit acquisition using a free-operant discriminated avoidance procedure. Furthermore, we checked whether intolerance of uncertainty could predispose to avoidance habit acquisition. Participants learned to avoid an aversive noise presented either to the right or to the left ear by pressing two different keys. After a devaluation phase in which the volume of the noise presented to one of the ears was reduced, participants went through a test phase identical to the avoidance learning phase except that the noise was never administered. Habit acquisition was inferred by comparing the rate of responses to the stimulus signalling the devalued reinforcer with that to the stimulus signalling the non-devalued reinforcer. The results showed that intolerance of uncertainty was related to the absence of a difference between these conditions, which indicates avoidance habit acquisition.