969 results for Probability distribution functions


Relevance: 80.00%

Abstract:

Many datasets used by economists and other social scientists are collected by stratified sampling. The sampling scheme used to collect the data induces a probability distribution on the observed sample that differs from the target or underlying distribution for which inference is to be made. If this effect is not taken into account, subsequent statistical inference can be seriously biased. This paper shows how to carry out efficient semiparametric inference in moment restriction models when data on the target population are collected by one of three widely used sampling schemes: variable probability sampling, multinomial sampling, and standard stratified sampling.
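
As a hedged illustration of the bias discussed above (not the paper's semiparametric estimator), the Python sketch below simulates variable probability sampling and shows how inverse-probability weighting recovers a target-population mean; the population, retention probabilities, and all names are invented for the example.

```python
# Minimal sketch: variable probability sampling distorts naive moments;
# inverse-probability weighting recovers them. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target population: income-like variable y.
y = rng.lognormal(mean=10.0, sigma=0.5, size=200_000)

# Variable probability sampling: retention probability depends on y
# (high-y units are over-sampled), inducing a different observed distribution.
p_keep = 0.02 + 0.10 * (y > np.median(y))
kept = rng.random(y.size) < p_keep
y_obs, p_obs = y[kept], p_keep[kept]

naive_mean = y_obs.mean()                              # biased for the target mean
weighted_mean = np.sum(y_obs / p_obs) / np.sum(1 / p_obs)  # inverse-probability weighted

print(f"target mean {y.mean():.1f}  naive {naive_mean:.1f}  weighted {weighted_mean:.1f}")
```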

Relevance: 80.00%

Abstract:

This paper explores the dynamic linkages that portray different facets of the joint probability distribution of stock market returns in NAFTA (i.e., Canada, Mexico, and the US). Our examination of the interactions of the NAFTA stock markets considers three issues. First, we examine the long-run relationship between the three markets, using cointegration techniques. Second, we evaluate the dynamic relationships between the three markets, using impulse-response analysis. Finally, we explore the volatility transmission process between the three markets, using a variety of multivariate GARCH models. Our results exhibit significant volatility transmission between the second moments of the NAFTA stock markets, albeit not homogeneous. The magnitude and trend of the conditional correlations indicate that in the last few years the Mexican stock market exhibited a tendency toward increased integration with the US market. Finally, we note evidence that the Peso and Asian financial crises, as well as the US stock-market crash, affected the return and volatility relationships among the three markets.
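
The sketch below illustrates, on synthetic data rather than the NAFTA indices, the first two steps named in the abstract: a Johansen cointegration test on price levels and orthogonalized impulse responses from a VAR on returns (statsmodels). It is not the paper's specification, and the multivariate GARCH step is omitted.

```python
# Hedged sketch: cointegration test and impulse-response analysis on synthetic
# index data standing in for the Canadian, Mexican and US markets.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
n = 1000

# Three synthetic (log) price series sharing a common stochastic trend.
trend = np.cumsum(rng.normal(0, 1, n))
prices = np.column_stack([trend + rng.normal(0, 2, n) for _ in range(3)])

# Johansen cointegration test on the levels.
jres = coint_johansen(prices, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)      # compare with jres.cvt critical values

# VAR on returns and orthogonalized impulse responses (10 steps ahead).
returns = np.diff(prices, axis=0)
var_res = VAR(returns).fit(maxlags=5, ic="aic")
irf = var_res.irf(10)
print("IRF of series 0 to a shock in series 2:", irf.orth_irfs[:, 0, 2])
```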

Relevance: 80.00%

Abstract:

In Part One, the foundations of Bayesian inference are reviewed and the technicalities of the Bayesian method are illustrated. Part Two applies the Bayesian meta-analysis program, the Confidence Profile Method (CPM), to clinical trial data and evaluates the merits of using Bayesian meta-analysis for overviews of clinical trials. The Bayesian meta-analysis produced results similar to the classical ones because of the large sample size, together with the use of a non-preferential prior probability distribution. These results were anticipated by the explanation of the mechanics of the Bayesian approach given in Part One.
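
A minimal sketch of why the two approaches agree, assuming a normal fixed-effect model with invented trial estimates (this is not the CPM software): under a diffuse, non-preferential prior, the Bayesian posterior mean reduces to the classical inverse-variance pooled estimate.

```python
# Illustrative fixed-effect Bayesian pooling of trial log-odds-ratios under a
# normal likelihood and an essentially flat prior. Trial numbers are made up.
import numpy as np

# Hypothetical per-trial effect estimates (log odds ratios) and standard errors.
theta_hat = np.array([-0.25, -0.10, -0.35, -0.05])
se = np.array([0.12, 0.20, 0.15, 0.18])

# Flat prior approximated by a very diffuse normal (mean 0, sd 100).
prior_mean, prior_sd = 0.0, 100.0

post_prec = 1.0 / prior_sd**2 + np.sum(1.0 / se**2)
post_mean = (prior_mean / prior_sd**2 + np.sum(theta_hat / se**2)) / post_prec
post_sd = post_prec**-0.5

# With a diffuse prior this reproduces the classical inverse-variance estimate,
# which is why the Bayesian and classical overviews agree for large samples.
classical = np.sum(theta_hat / se**2) / np.sum(1.0 / se**2)
print(f"posterior {post_mean:.3f} +/- {post_sd:.3f}   classical {classical:.3f}")
```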

Relevance: 80.00%

Abstract:

We present a new record of eolian dust flux to the western Subarctic North Pacific (SNP) covering the past 27,000 years, based on a core from the Detroit Seamount. Comparing the SNP dust record to the NGRIP ice core record shows significant differences in the amplitude of dust changes to the two regions during the last deglaciation, while the timing of abrupt changes is synchronous. If dust deposition in the SNP faithfully records its mobilization in East Asian source regions, then the difference in relative amplitude must reflect climate-related changes in atmospheric dust transport to Greenland. Based on the synchronous timing of dust changes in the SNP and Greenland, we tie abrupt deglacial transitions in the 230Th-normalized 4He flux record to corresponding transitions in the well-dated NGRIP dust flux record, providing a new chronostratigraphic technique for marine sediments from the SNP. Results from this technique are complemented by radiocarbon dating, which allows us to independently constrain radiocarbon paleoreservoir ages. We find paleoreservoir ages of 745 ± 140 yr at 11,653 yr BP, 680 ± 228 yr at 14,630 yr BP and 790 ± 498 yr at 23,290 yr BP. Our reconstructed paleoreservoir ages are consistent with modern surface-water reservoir ages in the western SNP. Good temporal synchrony between eolian dust records from the Subantarctic Atlantic and the equatorial Pacific and the Antarctic ice core record supports the wider application of the proposed dust-tuning method in other ocean regions.
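
A hedged sketch of the reservoir-age arithmetic implied above: once dust tuning fixes a calendar age, the paleoreservoir age is the measured planktic 14C age minus the contemporaneous atmospheric 14C age taken from a calibration curve. The calibration-curve values and the measured 14C age below are placeholders, not the study's data.

```python
# Reservoir age = measured surface-water 14C age - atmospheric 14C age at the
# dust-tuned calendar age. Calibration values are placeholders, not IntCal data.
import numpy as np

# Placeholder atmospheric calibration curve: calendar age (yr BP) -> 14C age (yr).
cal_age_grid = np.array([10_000, 12_000, 14_000, 16_000, 22_000, 24_000])
atm_c14_grid = np.array([8_900, 10_250, 12_200, 13_400, 18_300, 19_900])

def reservoir_age(calendar_age_bp, measured_c14_age):
    """Reservoir age = measured surface-water 14C age - atmospheric 14C age."""
    atm_c14 = np.interp(calendar_age_bp, cal_age_grid, atm_c14_grid)
    return measured_c14_age - atm_c14

# Example with invented numbers: a foraminiferal 14C age of 10,900 yr at a
# dust-tuned calendar age of 11,653 yr BP.
print(f"reservoir age ~ {reservoir_age(11_653, 10_900):.0f} yr")
```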

Relevance: 80.00%

Abstract:

The modern subarctic Pacific is characterized by a steep salinity-driven surface water stratification, which hampers the supply of saline and nutrient-rich deeper waters into the euphotic zone, limiting productivity. However, the strength of the halocline might have varied in the past. Here, we present diatom oxygen (δ18Odiat) and silicon (δ30Sidiat) stable isotope data from the open subarctic North-East (NE) Pacific (SO202-27-6; Gulf of Alaska), in combination with other proxy data (Neogloboquadrina pachyderma (sin.) δ18O, biogenic opal, Ca and Fe intensities, IRD), to evaluate changes in surface water hydrography and productivity during Marine Isotope Stage (MIS) 3, characterized by millennial-scale temperature changes (Dansgaard-Oeschger (D-O) cycles) documented in Greenland ice cores.

Relevance: 80.00%

Abstract:

The glacial-to-Holocene evolution of subarctic Pacific surface water stratification and silicic acid (Si) dynamics is investigated based on new combined diatom oxygen (δ18Odiat) and silicon (δ30Sidiat) isotope records, along with new biogenic opal, subsurface foraminiferal δ18O, alkenone-based sea surface temperature, sea ice, diatom, and core logging data from the NE Pacific. Our results suggest that δ18Odiat values are primarily influenced by changes in freshwater discharge from the Cordilleran Ice Sheet (CIS), while corresponding δ30Sidiat values are primarily influenced by changes in Si supply to surface waters. Our data indicate enhanced NE Pacific surface water stratification from the glacial through mid Heinrich Stadial 1 (HS1), generally limiting the Si supply to surface waters. However, we suggest that an increase in Si supply during early HS1, when surface waters were still stratified, is linked to increased North Pacific Intermediate Water formation. The coincidence between fresh surface waters during HS1 and enhanced ice-rafted debris sedimentation in the North Atlantic indicates a close link between CIS and Laurentide Ice Sheet dynamics and a dominant atmospheric control on CIS deglaciation. The Bølling/Allerød (B/A) is characterized by destratification in the subarctic Pacific and an increased supply of saline, Si-rich water to the surface. This change toward increased convection occurred prior to the Bølling warming and was likely triggered by a switch to sea-ice-free conditions during late HS1. Our results furthermore indicate a decreased efficiency of the biological pump during late HS1 and the B/A (possibly also the Younger Dryas), suggesting that the subarctic Pacific was then a source region of atmospheric CO2.

Relevance: 80.00%

Abstract:

A 6200-year-old peat sequence, cored in a volcanic crater on the sub-Antarctic Ile de la Possession (Iles Crozet), has been investigated using a multi-proxy approach. The methods applied are macrobotanical (mosses, seeds and fruits) and diatom analyses, complemented by geochemical (Rock-Eval6) and rock magnetic measurements. The chronology of the core is based on 5 radiocarbon dates. Combining all the proxy data, the following changes could be inferred. From the onset of peat formation (6200 cal yr BP) until ca. 5550 cal yr BP, biological production was high and climatic conditions must have been relatively warm. At ca. 5550 cal yr BP a shift to low biological production occurred, lasting until ca. 4600 cal yr BP. During this period the organic matter is well preserved, pointing to a cold and/or wet environment. At ca. 4600 cal yr BP, biological production increased again. From ca. 4600 cal yr BP until ca. 4100 cal yr BP a 'hollow and hummock' micro-topography developed at the peat surface, resulting in a mixture of wetter and drier species in the macrobotanical record. After ca. 4100 cal yr BP, the wet species disappear and a generally drier, acidic bog came into existence. A major shift in all the proxy data is observed at ca. 2800 cal yr BP, pointing to wetter and especially windier climatic conditions on the island, probably caused by an intensification and/or latitudinal shift of the southern westerly belt. Driven by this stronger wind regime, erosion of the peat surface occurred at that time and a lake formed in the peat deposits of the crater, which is still present today.

Relevance: 80.00%

Abstract:

The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
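
The sketch below shows, on synthetic volumes rather than the article's catalog, the type of comparison described: maximum-likelihood power-law exponents above a lower cutoff and a two-sample comparison of the empirical distributions; the cutoff and sample sizes are arbitrary assumptions.

```python
# Illustrative comparison of two debris-flow volume catalogs via their empirical
# CDFs and maximum-likelihood power-law exponents. Volumes are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
xmin = 100.0  # m^3, assumed lower cutoff

# Draw Pareto-distributed volumes; the post-fire set has a steeper exponent,
# i.e. a higher proportion of small events.
vol_nonfire = xmin * (1 + rng.pareto(a=1.2, size=600))
vol_postfire = xmin * (1 + rng.pareto(a=1.7, size=600))

def mle_exponent(x, xmin):
    """Maximum-likelihood estimate of the power-law exponent above xmin."""
    x = x[x >= xmin]
    return 1.0 + x.size / np.sum(np.log(x / xmin))

ks = stats.ks_2samp(vol_postfire, vol_nonfire)
print("alpha non-fire :", round(mle_exponent(vol_nonfire, xmin), 2))
print("alpha post-fire:", round(mle_exponent(vol_postfire, xmin), 2))
print("KS statistic   :", round(ks.statistic, 3), "p =", f"{ks.pvalue:.2g}")
```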

Relevance: 80.00%

Abstract:

Botanical data are widely used as terrestrial proxy data for climate reconstructions. Using a newly established method based on probability density functions (pdf-method), the temperature development throughout the last interglacial, the Eemian, is reconstructed for the two German sites Bispingen and Gröbern and the French site La Grande Pile. The results are compared with previous reconstructions using other methods. After a steep increase in January as well as July temperatures in the early phase of the interglacial, the reconstructed most probable climate appears to be slightly warmer than today. While the temperature is reconstructed as relatively stable throughout the Eemian, a certain tendency towards cooler January temperatures is evident. January temperatures decreased from approx. 2-3 °C in the early part to approx. -3 °C in the later part at Bispingen, and from approx. 2 °C to approx. -1 °C at Gröbern and La Grande Pile. A major drop to about -8 °C marks the very end of the interglacial at all three sites. While these results agree well with other proxy data and former reconstructions based on the indicator species method, the results differ significantly from reconstructions based on the modern pollen analogue technique ("pollen transfer functions"). The lack of modern analogues is assumed to be the main reason for the discrepancies. It is concluded that any reconstruction method needs to be evaluated carefully in this respect if used for periods lacking modern analogous plant communities.
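
A hedged toy version of the pdf-method idea, with invented taxa and climate tolerances (not the authors' calibration data): each taxon contributes a probability density of mean January temperature, and the reconstruction is the temperature that maximizes the joint density of the taxa present in a sample.

```python
# Toy pdf-method: combine per-taxon climate probability densities and take the
# most probable temperature. Taxa names and tolerances are invented.
import numpy as np
from scipy import stats

# Hypothetical modern climate responses (mean, sd of January temperature, degC).
taxa_pdfs = {
    "taxon_A": stats.norm(1.0, 3.0),
    "taxon_B": stats.norm(-2.0, 4.0),
    "taxon_C": stats.norm(3.5, 2.5),
}

def reconstruct_january_T(present_taxa, grid=np.linspace(-15, 15, 601)):
    """Return the temperature maximizing the joint (log) density of all taxa."""
    log_density = sum(taxa_pdfs[t].logpdf(grid) for t in present_taxa)
    return grid[np.argmax(log_density)]

print("sample 1:", reconstruct_january_T(["taxon_A", "taxon_C"]), "degC")
print("sample 2:", reconstruct_january_T(["taxon_A", "taxon_B"]), "degC")
```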

Relevance: 80.00%

Abstract:

This paper presents a theoretical analysis and an optimization method for envelope amplifiers. Highly efficient envelope amplifiers based on a switching converter in parallel or in series with a linear regulator are analyzed and optimized. The results of the optimization process are shown, and the two architectures are compared regarding their complexity and efficiency. The proposed optimization method is based on prior knowledge of the transmitted signal type (OFDM, WCDMA, ...) and can be applied to any signal type as long as the envelope probability distribution is known. Finally, it is shown that the analyzed architectures have an inherent efficiency limit.
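
A hedged numeric sketch of how the envelope probability distribution fixes the efficiency of the linear part of such an amplifier: for a series linear regulator feeding a resistive load from a fixed rail Vs, the average efficiency is E[v²]/(Vs·E[v]) under the envelope distribution. The Rayleigh envelope model and the numbers are assumptions, not the paper's signals.

```python
# Average efficiency of a series linear envelope stage from the envelope pdf:
# the drop Vs - v(t) is dissipated, so eta = E[v^2] / (Vs * E[v]).
# Envelope model and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

Vs = 28.0                      # assumed supply rail, volts
sigma = 6.0                    # Rayleigh scale of an OFDM-like envelope
v = rng.rayleigh(sigma, 1_000_000)
v = np.clip(v, 0.0, Vs)        # clip the envelope to the available rail

avg_out = np.mean(v**2)        # proportional to average load power (R = 1)
avg_in = Vs * np.mean(v)       # proportional to average power drawn from Vs
print(f"average efficiency of the linear stage ~ {avg_out / avg_in:.2f}")
```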

Relevance: 80.00%

Abstract:

Electric propulsion is today a highly competitive technology with great future prospects. Among the various existing plasma thrusters, the Hall effect thruster has reached considerable maturity and constitutes an ideal means of propulsion for a wide range of missions. This Thesis studies Hall thrusters with conventional geometry and dielectric walls. The complex interaction between the many physical phenomena involved makes plasma simulation in these thrusters difficult. Hybrid models offer the best compromise between accuracy and computational cost. They use a fluid model for the electrons and Particle-In-Cell (PIC) algorithms for the ions and neutrals, which allows the plasma quasineutrality hypothesis to be used, at the cost of solving separately the boundary layers (or sheaths) that form around the chamber walls. Starting from an existing hybrid code, called HPHall-2, the objective of this doctoral Thesis has been the development of an advanced hybrid code that improves the simulation of the plasma discharge in a Hall effect thruster. The updates and improvements made to the different parts of the code cover both theoretical and numerical aspects. As a result of an extensive revision of the HPHall-2 algorithms, the accuracy errors have been reduced by an order of magnitude, and the consistency and robustness of the code have been notably increased, allowing the thruster to be simulated over a wide range of conditions. Relevant aspects of the particle subcode include: the implementation of a new weighting algorithm that determines the fluxes of the plasma magnitudes more accurately; the implementation of a new population-control algorithm, which maintains a sufficient number of particles near the chamber walls, where the gradients are largest and the computational conditions most critical; improvements in the mass and energy balances; and a better computation of the electric field on a non-uniform mesh. The fulfilment of the Bohm condition at the sheath edge deserves special attention: in hybrid codes it is a boundary condition required to obtain a solution consistent with the plasma-wall interaction model, and it had not yet been solved satisfactorily in HPHall-2. In this Thesis, the kinetic Bohm criterion has been implemented for an ion population with different electric charges and a large velocity dispersion. In the code, the fulfilment of the kinetic Bohm condition is achieved by means of an algorithm that introduces a thin collisionless acceleration layer adjacent to the sheath and properly measures the particle fluxes in space and time. The improvements made to the electron subcode increase the simulation capability of the code, especially in the region downstream of the thruster, where the neutralization of the plasma jet is simulated by means of a volumetric cathode model. Without addressing a detailed study of plasma turbulence, simple models for adjusting the anomalous Bohm diffusion are implemented, which allow the experimental values of the plasma potential and temperature, as well as the thruster discharge current, to be reproduced.

Regarding the theoretical aspects, special emphasis is placed on the plasma-wall interaction and on the dynamics of free secondary electrons inside the plasma, questions that remain open problems in the simulation of Hall thrusters. The new models developed seek a picture closer to reality. Thus, a partial-thermalization sheath model is implemented, which considers a non-Maxwellian distribution function for the primary electrons and accounts for energy losses closer to reality. Regarding the secondary electrons, a simplified kinetic study is carried out to evaluate their degree of confinement in the plasma, and, by means of a fluid model in the collisionless limit, the densities and energies of the free secondary electrons are determined, as well as their possible effect on ionization. The results show that the secondary electrons are quickly lost at the walls, so their effect in the bulk plasma is negligible; not so in the sheaths, where they determine the potential fall. Finally, the theoretical and numerical simulation work is complemented by experimental work carried out at the Princeton Plasma Physics Laboratory, in which the interesting initial transient experienced by the thruster during the startup process is analyzed. The study shows that residual gases adhered to the walls play a relevant role, and a complete purge of the thruster before normal operation is generally recommended. The final result of the research shows that the hybrid code developed is a good simulation tool for a Hall thruster: it adequately reproduces the physics of the thruster, providing results similar to the experimental ones, and proves to be a good numerical laboratory for studying the plasma inside the thruster.

Abstract:

Electric propulsion is today a very competitive technology with great projection into the future. Among the various existing plasma thrusters, the Hall effect thruster has acquired considerable maturity and constitutes an ideal means of propulsion for a wide range of missions. In the present Thesis only Hall thrusters with conventional geometry and dielectric walls are studied. The complex interaction between multiple physical phenomena makes plasma simulation in these engines difficult. Hybrid models represent the best compromise between precision and computational cost. They use a fluid model for electrons and Particle-In-Cell (PIC) algorithms for ions and neutrals. The hypothesis of plasma quasineutrality is invoked, which requires the sheaths formed around the chamber walls to be solved separately. On the basis of an existing hybrid code, called HPHall-2, the aim of this doctoral Thesis is to develop an advanced hybrid code that better simulates the plasma discharge in a Hall effect thruster. Updates and improvements of the code include both theoretical and numerical issues. The extensive revision of the algorithms has succeeded in reducing the accuracy errors by one order of magnitude, and the consistency and robustness of the code have been notably increased, allowing the simulation of the thruster over a wide range of conditions.

The most relevant achievements in the particle subcode are: the implementation of a new weighting algorithm that determines the plasma flux magnitudes more accurately; the implementation of a new algorithm to control the particle population, assuring enough particles near the chamber walls, where the gradients are strong and the computational conditions are most critical; improvements in the mass and energy balances; and a new algorithm to compute the electric field on a non-uniform mesh. The fulfilment of the Bohm condition at the edge of the sheath deserves special attention: it is a boundary condition necessary to match the hybrid code solution consistently with the plasma-wall interaction, and it remained unsatisfactorily solved in the HPHall-2 code. In this Thesis, the kinetic Bohm criterion has been implemented for an ion particle population with different electric charges and a large dispersion in their velocities. In the code, the fulfilment of the kinetic Bohm condition is accomplished by an algorithm that introduces a thin collisionless layer next to the sheaths, producing the ion acceleration, and properly measures the flux of particles in time and space. The improvements made in the electron subcode increase the code simulation capabilities, especially in the region downstream of the thruster, where the neutralization of the plasma jet is simulated using a volumetric cathode model. Without addressing a detailed study of the plasma turbulence, simple models for a parametric adjustment of the anomalous Bohm diffusion are implemented in the code. They allow the experimental values of the plasma potential and the electron temperature, as well as the discharge current of the thruster, to be reproduced. Regarding the theoretical issues, special emphasis has been placed on the plasma-wall interaction and on the dynamics of free secondary electrons within the plasma, questions that still remain unsolved in the simulation of Hall thrusters. The new models developed look for results closer to reality, such as the partial-thermalization sheath model, which assumes a non-Maxwellian distribution function for primary electrons and better computes the energy losses at the walls. The confinement of secondary electrons within the chamber is evaluated by a simplified kinetic study; and using a collisionless fluid model, the densities and energies of free secondary electrons are computed, as well as their effect on the plasma ionization. Simulations show that secondary electrons are quickly lost at the walls, with a negligible effect in the bulk of the plasma, but they determine the potential fall at the sheaths. Finally, the numerical simulation and theoretical work is complemented by the experimental work carried out at the Princeton Plasma Physics Laboratory, devoted to analyzing the interesting transitional regime experienced by the thruster during the startup process. It is concluded that gas impurities adhered to the thruster walls play a relevant role in the transitional regime and, as a general recommendation, a complete purge of the thruster before starting its normal mode of operation is suggested. The final result of the research conducted in this Thesis shows that the developed code represents a good tool for the simulation of Hall thrusters. The code properly reproduces the physics of the thruster, with results similar to the experimental ones, and represents a good numerical laboratory for studying the plasma inside the thruster.
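
As a hedged illustration of the sheath-edge condition discussed in this thesis (a toy check, not HPHall-2 code), the sketch below evaluates one common marginal form of the kinetic Bohm criterion for a multi-charge ion population with Boltzmann electrons, Σ_s (q_s²/m_s)·Σ_p w_p/v_p² ≤ e²·n_e/(k_B·T_e); all particle data are synthetic.

```python
# Toy check of a marginal kinetic Bohm criterion for a multi-charge ion
# population at the sheath edge, assuming Boltzmann electrons. Synthetic data.
import numpy as np

E = 1.602e-19     # elementary charge, C
M_XE = 2.18e-25   # xenon ion mass, kg

def bohm_margin(w, Z, v_normal, n_e, T_e_eV, m_ion=M_XE):
    """Ratio lhs/rhs of the marginal kinetic Bohm criterion (<= 1: satisfied).

    w        : macroparticle densities (m^-3)
    Z        : charge number of each macroparticle
    v_normal : velocity component toward the wall at the sheath edge (m/s)
    """
    lhs = np.sum(((Z * E) ** 2 / m_ion) * w / v_normal**2)
    rhs = E**2 * n_e / (T_e_eV * E)        # k_B*T_e expressed in joules
    return lhs / rhs

# Synthetic sheath-edge population: mostly Xe+ with some Xe2+, drifting near
# the singly-charged Bohm speed for T_e = 20 eV.
rng = np.random.default_rng(4)
n_p = 5000
Z = rng.choice([1, 2], size=n_p, p=[0.9, 0.1])
u_b = np.sqrt(20 * E / M_XE)
v = rng.normal(1.2 * u_b, 0.2 * u_b, n_p).clip(min=0.2 * u_b)
w = np.full(n_p, 1e17 / n_p)               # total ion density ~1e17 m^-3
n_e = np.sum(w * Z)                        # quasineutral electron density
print("Bohm margin (<= 1 satisfied):", round(bohm_margin(w, Z, v, n_e, 20.0), 2))
```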

Relevance: 80.00%

Abstract:

The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the source. This paper deals with the smoothing of power fluctuations that results from the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered for a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the fluctuation attenuation due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of the single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and to determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by distances between 6 and 360 km.
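
The sketch below illustrates the convolution idea on synthetic fluctuation data (not the seven-plant dataset): the distribution of the ensemble fluctuation is obtained by convolving the single-plant distributions, assuming independence between plants at the time scale considered.

```python
# Convolution of single-plant fluctuation distributions to describe an
# ensemble. The per-plant fluctuations below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 1-s power fluctuations (as % of rated power) for two plants.
fluct_a = rng.laplace(0, 3, 100_000)
fluct_b = rng.laplace(0, 3, 100_000)

bins = np.linspace(-40, 40, 401)            # common grid, 0.2% resolution
pmf_a, _ = np.histogram(fluct_a, bins=bins)
pmf_b, _ = np.histogram(fluct_b, bins=bins)
pmf_a = pmf_a / pmf_a.sum()
pmf_b = pmf_b / pmf_b.sum()

# Distribution of the summed fluctuation of the two plants (in % of one plant's
# rated power; divide by 2 to express it relative to the combined rating).
pmf_sum = np.convolve(pmf_a, pmf_b)
support = np.linspace(2 * bins[0], 2 * bins[-1], pmf_sum.size)

# 99.9th percentile as a proxy for the "maximum" ensemble fluctuation.
cdf = np.cumsum(pmf_sum)
print("99.9% ensemble fluctuation ~",
      round(support[np.searchsorted(cdf, 0.999)], 1), "%")
```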

Relevance: 80.00%

Abstract:

The purpose of this paper is to present a program written in Matlab-Octave for the simulation of the time evolution of student curricula, i.e., how students pass their subjects over time until graduation. From the simulations, the program computes the academic performance rates for the subjects of the study plan for each semester as well as the overall rates, which are a) the efficiency rate, defined as the ratio of the number of students passing the exam to the number of students who registered for it, and b) the success rate, defined as the ratio of the number of students passing the exam to the number of students who not only registered for it but actually took it. Additionally, we compute the rates for the bachelor degree established for Spain by the National Quality Evaluation and Accreditation Agency (ANECA), namely the graduation rate (measured as the percentage of students who finish as scheduled in the plan or with one extra year) and the efficiency rate (measured as the percentage of credits that a student who graduated has actually taken). The simulation is carried out in terms of the probabilities of passing each of the subjects in the study plan. The application of the simulator to students of the Universidad Politécnica de Madrid, where requirements for passing first- and second-year subjects are especially stiff, is particularly relevant for analyzing student cohorts and the probabilities of students finishing in the minimum of four years, or taking one or two extra years, and so forth. It is a very useful tool when designing new study plans. The probability distribution of the random variable "number of semesters a student takes to complete the curriculum and graduate" is difficult or even unfeasible to obtain analytically, all the more so when uncertainty in parameter estimation is incorporated. This is why we apply Monte Carlo simulation, which not only provides an illustration of the stochastic process but also a method of computation. The stochastic simulator is proving to be a useful tool for identifying the subjects most critical to the distribution of the number of semesters to complete the curriculum, and subsequently for decision making in terms of curriculum planning and passing standards at the University. Simulations are performed through a graphical interface, where the results are also presented in appropriate figures. The project has been funded by the Call for Innovation in Education Projects of Universidad Politécnica de Madrid (UPM) through a project of its school Escuela Técnica Superior de Ingenieros Industriales (ETSII) during the period September 2010-September 2011.
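
The original program is written in Matlab-Octave; the compact Python re-sketch below shows the kind of Monte Carlo simulation described, with an invented study plan and per-attempt pass probabilities rather than the UPM data.

```python
# Monte Carlo simulation of semesters to graduation from per-subject pass
# probabilities. The plan layout and probabilities are invented.
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical plan: per-semester lists of per-attempt pass probabilities.
plan = [
    [0.55, 0.60, 0.70],   # semester 1 subjects
    [0.50, 0.65, 0.75],   # semester 2 subjects
    [0.70, 0.80],
    [0.75, 0.85],
]

def semesters_to_graduate(max_semesters=16):
    """One student: subjects become available in their scheduled semester and
    are retried every semester until passed."""
    pending = []
    for s in range(1, max_semesters + 1):
        if s <= len(plan):
            pending.extend(plan[s - 1])                       # newly available subjects
        pending = [p for p in pending if rng.random() >= p]   # keep the failed ones
        if s >= len(plan) and not pending:
            return s
    return max_semesters

sems = np.array([semesters_to_graduate() for _ in range(20_000)])
scheduled = len(plan)
print("P(graduate on schedule)   :", np.mean(sems <= scheduled))
print("P(schedule + 1 extra year):", np.mean(sems <= scheduled + 2))
print("mean semesters to graduate:", sems.mean().round(2))
```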

Relevance: 80.00%

Abstract:

Lately, several researchers have pointed out that climate change is expected to increase temperatures and lower rainfall in Mediterranean regions, while simultaneously increasing the intensity of extreme rainfall events. These changes could have consequences regarding rainfall regime, erosion, sediment transport and water quality, soil management, and new designs for diversion ditches. Climate change is expected to result in increasingly unpredictable and variable rainfall, in amount and timing, changing seasonal patterns and increasing the frequency of extreme weather events. Consequently, the evolution of the frequency and intensity of drought periods is most important, as many processes in agro-ecosystems will be affected by them. Recognizing the complex and important consequences of an increasing frequency of extreme droughts in the Ebro River basin, our aim is to study the evolution of drought events at this site statistically, with emphasis on their occurrence and intensity. For this purpose, fourteen meteorological stations were selected, based on the length of their rainfall series and on the climatic classification, to obtain a representative untreated dataset for the river basin. Daily rainfall series from 1957 to 2002 were obtained from each meteorological station, and the frequency of no-rain periods, measured as numbers of consecutive dry days, was extracted. Based on these data, we study changes in the probability distribution over several sub-periods. Moreover, we use the Standardized Precipitation Index (SPI) to identify drought events at a yearly scale, and we then fit log-linear models to the contingency tables between the SPI classes and the sub-periods; this fit is carried out with the help of ANOVA inference.
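
As a hedged illustration of the dry-spell extraction step (not the Ebro dataset or the SPI/log-linear analysis), the sketch below pulls lengths of consecutive no-rain days from a daily series and compares their distribution across two sub-periods; the rainfall series and thresholds are synthetic.

```python
# Extract lengths of consecutive dry days from daily rainfall and compare two
# sub-periods. The daily series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)

def dry_spell_lengths(daily_rain_mm, wet_threshold=0.1):
    """Lengths (in days) of maximal runs of days with rain below the threshold."""
    dry = daily_rain_mm < wet_threshold
    edges = np.diff(np.concatenate(([0], dry.astype(int), [0])))  # run boundaries
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    return ends - starts

# Two synthetic 23-year daily series standing in for 1957-1979 and 1980-2002.
rain_a = rng.exponential(2.0, 23 * 365) * (rng.random(23 * 365) < 0.30)
rain_b = rng.exponential(2.0, 23 * 365) * (rng.random(23 * 365) < 0.25)

spells_a, spells_b = dry_spell_lengths(rain_a), dry_spell_lengths(rain_b)
print("mean dry spell, sub-period 1:", spells_a.mean().round(1), "days")
print("mean dry spell, sub-period 2:", spells_b.mean().round(1), "days")
print("95th percentile spells      :", np.percentile(spells_a, 95),
      np.percentile(spells_b, 95))
```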

Relevance: 80.00%

Abstract:

We propose distributed algorithms for sampling networks based on a new class of random walks that we call Centrifugal Random Walks (CRW). A CRW is a random walk that starts at a source and always moves away from it. We propose CRW algorithms for connected networks with arbitrary probability distributions, and for grids and networks with regular concentric connectivity with distance-based distributions. All CRW sampling algorithms select a node with the exact probability distribution, do not need warm-up, and end in a number of hops bounded by the network diameter.
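
A toy illustration of the defining property of a CRW on a grid, where every hop strictly increases the distance from the source; it is not the paper's exact-distribution sampling algorithm, and the grid size and stopping rule are arbitrary choices.

```python
# Toy centrifugal walk on a grid: each hop moves to a neighbor strictly farther
# (in Manhattan distance) from the source. Grid size and stopping are arbitrary.
import random

def crw_on_grid(source=(0, 0), size=11, max_hops=None):
    """Walk until no outward neighbor exists or max_hops hops are taken."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    node = source
    hops = 0
    limit = max_hops if max_hops is not None else 2 * (size - 1)  # grid diameter
    while hops < limit:
        x, y = node
        neighbors = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < size and 0 <= y + dy < size]
        outward = [n for n in neighbors if dist(n, source) > dist(node, source)]
        if not outward:
            break
        node = random.choice(outward)
        hops += 1
    return node, hops

random.seed(8)
print(crw_on_grid())   # the hop count is bounded by the grid diameter
```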