881 results for Minimum return guarantee
Abstract:
The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols remains highly uncertain. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor and are then impacted into and dissolved in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, the PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas-phase compounds before the PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth.
Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed phase anion exchange (MAX) were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-size clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on Teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA for alternating 15-minute periods. Gas-phase compounds were adsorbed equally well on both types of filter and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and always needs to be taken into account. A further aim of this study was to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forest) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid.
All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. The methodological and instrumental developments presented in this work enable a fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.
Abstract:
Meridional circulation is an important ingredient in flux transport dynamo models. We have studied its influence on the period and amplitude of the solar cycle, and its role in producing Maunder-like grand minima in these models. First, we model the periods of the last 23 sunspot cycles by varying the meridional circulation speed. If the dynamo is in a diffusion-dominated regime, then we find that most of the cycle amplitudes are also reproduced to some extent when we model the periods. Next, we propose that at the beginning of the Maunder minimum the amplitude of the meridional circulation dropped to a low value and then, after a few years, increased again. Several independent studies also favor this assumption. With this assumption, a diffusion-dominated dynamo is able to reproduce many important features of the Maunder minimum remarkably well. In a diffusion-dominated regime, a slower meridional circulation means that the poloidal field gets more time to diffuse during its transport through the convection zone, making the dynamo weaker. This consequence helps to model both the cycle amplitudes and the Maunder-like minima. We, however, fail to reproduce these results if the dynamo is in an advection-dominated regime.
Abstract:
In this paper we study the representation of KL-divergence minimization, in cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method for computing an implicit representation of minimum KL-divergence models.
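As a small illustration of the kind of computation this abstract describes (a sketch of our own, not the authors' algorithm), consider the log-linear model p_x ∝ z^x on {0, 1, 2}, whose integer sufficient statistic is T(x) = x. Implicitization eliminates the parameter z and yields the invariant p1² = p0·p2; minimum-KL fitting to data with hypothetical observed mean μ = 3/4 then reduces to a polynomial system that SymPy can triangularize via a Gröbner basis:

```python
import sympy as sp

# Model: p_x proportional to z**x on {0, 1, 2}; eliminating z gives the
# implicit invariant p1**2 = p0*p2. Moment matching E_p[T] = mu (with a
# hypothetical observed mean mu = 3/4) completes the polynomial system.
p0, p1, p2 = sp.symbols('p0 p1 p2', positive=True)
mu = sp.Rational(3, 4)

eqs = [
    p0 + p1 + p2 - 1,    # normalization
    p1 + 2 * p2 - mu,    # moment-matching condition E_p[T] = mu
    p1**2 - p0 * p2,     # implicit description of the model (z eliminated)
]
gb = sp.groebner(eqs, p0, p1, p2, order='lex')   # triangularized system
solutions = sp.solve(eqs, [p0, p1, p2], dict=True)
```

A lexicographic Gröbner basis puts the system in triangular form, so the fitted distribution can be recovered by back-substitution, which is the practical payoff of an implicit representation.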
Relationship between Return, Volume and Volatility in the Ghana Stock Market
Abstract:
Numerous reports from several parts of the world have confirmed that on calm clear nights a minimum in air temperature can occur just above ground, at heights of the order of $\frac{1}{2}$ m or less. This phenomenon, first observed by Ramdas & Atmanathan (1932), carries the associated paradox of an apparently unstable layer that sustains itself for several hours, and has not so far been satisfactorily explained. We formulate here a theory that considers energy balance between radiation, conduction and free or forced convection in humid air, with surface temperature, humidity and wind incorporated into an appropriate mathematical model as parameters. A complete numerical solution of the coupled air-soil problem is used to validate an approach that specifies the surface temperature boundary condition through a cooling rate parameter. Utilizing a flux-emissivity scheme for computing radiative transfer, the model is numerically solved for various values of turbulent friction velocity. It is shown that a lifted minimum is predicted by the model for values of ground emissivity not too close to unity, and for sufficiently low surface cooling rates and eddy transport. Agreement with observation for reasonable values of the parameters is demonstrated. A heuristic argument is offered to show that radiation substantially increases the critical Rayleigh number for convection, thus circumventing or weakening Rayleigh-Bénard instability. The model highlights the key role played by two parameters generally ignored in explanations of the phenomenon, namely surface emissivity and soil thermal conductivity, and shows that it is unnecessary to invoke the presence of such particulate constituents as haze to produce a lifted minimum.
Abstract:
In this article, a minimum-weight design of carbon/epoxy laminates is carried out using genetic algorithms. New failure envelopes have been developed by combining two commonly used phenomenological failure criteria, namely Maximum Stress (MS) and Tsai-Wu (TW), and these are used to obtain the minimum weight of the laminate. These failure envelopes are the most conservative failure envelope (MCFE) and the least conservative failure envelope (LCFE). Uniaxial and biaxial loading conditions are considered for the study, and the differences in the optimal weight of the laminate are compared for the MCFE and LCFE. The MCFE can be used for the design of critical load-carrying composites, while the LCFE could be used for the design of composite structures where weight reduction is more important than safety, such as unmanned air vehicles.
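The genetic-algorithm approach can be sketched in miniature. The toy below is a hedged illustration, not the authors' formulation: the `is_safe` check, the ply-angle set, and all parameters are invented stand-ins for the real MCFE/LCFE strength envelopes. It evolves a stacking sequence with as few plies as possible while keeping enough load-aligned plies:

```python
import random

ANGLES = [0, 45, -45, 90]          # candidate ply orientations (degrees)

def is_safe(layup, need=3):
    """Hypothetical strength check standing in for the MCFE/LCFE criteria:
    require at least `need` plies within 45 degrees of the load axis."""
    return sum(1 for a in layup if abs(a) <= 45) >= need

def fitness(layup):
    # weight grows with ply count; infeasible designs get a large penalty
    return len(layup) + (0 if is_safe(layup) else 100)

def ga(pop_size=40, gens=60, seed=0):
    rng = random.Random(seed)
    # seed the population with one known-feasible design plus random layups
    pop = [[0, 45, -45]] + [
        [rng.choice(ANGLES) for _ in range(rng.randint(3, 10))]
        for _ in range(pop_size - 1)
    ]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randint(1, min(len(a), len(b)))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:             # mutation: change one ply angle
                child[rng.randrange(len(child))] = rng.choice(ANGLES)
            if rng.random() < 0.2 and len(child) > 1:
                child.pop()                    # mutation: drop a ply
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

The penalty-plus-elitism pattern is the standard way to handle failure-criterion constraints in a GA: infeasible designs are never selected over feasible ones, so the population is steadily driven toward lighter laminates that still pass the check.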
Abstract:
The ~2500 km long Himalayan arc has experienced three large to great earthquakes of Mw 7.8 to 8.4 during the past century, but none produced surface rupture. Paleoseismic studies have been conducted during the last decade to begin understanding the timing, size, rupture extent, return period, and mechanics of the faulting associated with the occurrence of large surface-rupturing earthquakes along the ~2500 km long Himalayan Frontal Thrust (HFT) system of India and Nepal. Previous studies have been limited to about nine sites along the western two-thirds of the HFT, extending through northwest India and along the southern border of Nepal. We present here the results of paleoseismic investigations at three additional sites further to the northeast along the HFT, within the Indian states of West Bengal and Assam. The three sites lie between the meizoseismal areas of the 1934 Bihar-Nepal and 1950 Assam earthquakes. The two westernmost sites, near the village of Chalsa and near the Nameri Tiger Preserve, show that offsets during the last surface-rupture event were at minimum about 14 m and 12 m, respectively. Limits on the ages of surface rupture at Chalsa (site A) and Nameri (site B), though broad, allow the possibility that the two sites record the same great historical rupture reported in Nepal around A.D. 1100. The correlation between the two sites is supported by the observation that the large displacements recorded at Chalsa and Nameri would most likely be associated with rupture lengths of hundreds of kilometers or more, on the same order as reported for a surface-rupture earthquake in Nepal around A.D. 1100. Assuming the offsets observed at Chalsa and Nameri occurred synchronously with the reported offsets in Nepal, the rupture length of the event would approach 700 to 800 km. The easternmost site is located within the Harmutty Tea Estate (site C), at the edge of the 1950 Assam earthquake meizoseismal area.
Here the most recent event offset is much smaller (<2.5 m), and radiocarbon dating shows it to have occurred after A.D. 1100 (after about A.D. 1270). The location of the site near the edge of the meizoseismal region of the 1950 Assam earthquake and the relatively small offset allow speculation that the displacement records the 1950 Mw 8.4 Assam earthquake. Scatter in radiocarbon ages on detrital charcoal has not resulted in a firm bracket on the timing of events observed in the trenches. Nonetheless, the observations collected here, when taken together, suggest that the largest thrust earthquakes along the Himalayan arc have rupture lengths and displacements of similar scale to the largest that have occurred historically along the world's subduction zones.
Abstract:
An exact numerical calculation of the ensemble-averaged, length-scale-dependent conductance for the one-dimensional Anderson model is shown to support an earlier conjecture of a conductance minimum. The numerical results can be understood in terms of the Thouless expression for the conductance and Wigner level-spacing statistics.
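A hedged numerical sketch of the underlying quantity (our own illustration, not the paper's exact calculation): the transfer-matrix Lyapunov exponent γ of the 1D Anderson model is the inverse localization length, and it sets the scale on which the ensemble-averaged conductance decays with system length L, roughly as exp(-2γL):

```python
import numpy as np

def lyapunov_exponent(W, E=0.0, N=20000, seed=1):
    """Inverse localization length gamma of the 1D Anderson model
    (nearest-neighbor hopping = 1, on-site energies eps_n ~ U(-W/2, W/2)),
    estimated from the growth rate of the transfer-matrix product
    psi_{n+1} = (E - eps_n) * psi_n - psi_{n-1}."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])           # (psi_{n}, psi_{n-1})
    log_growth = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=N):
        v = np.array([(E - eps) * v[0] - v[1], v[0]])  # one transfer step
        norm = np.hypot(v[0], v[1])
        log_growth += np.log(norm)
        v /= norm                      # renormalize to avoid overflow
    return log_growth / N

gamma_weak = lyapunov_exponent(W=1.0)
gamma_strong = lyapunov_exponent(W=3.0)
```

Stronger disorder shortens the localization length (larger γ), so at fixed length the averaged conductance drops; the renormalize-and-accumulate loop is the standard trick for extracting γ without numerical overflow.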
Abstract:
The minimum distance of a linear block code is one of the important parameters that indicate the error performance of the code. When the code rate is less than 1/2, efficient algorithms are available for finding the minimum distance using the concept of information sets. When the code rate is greater than 1/2, only one information set is available and efficiency suffers. In this paper, we investigate and propose a novel algorithm to find the minimum distance of linear block codes with code rate greater than 1/2. We propose to reverse the roles of the information set and the parity set to obtain, in effect, another information set and improve efficiency. This method is 67.7 times faster than the minimum distance algorithm implemented in the MAGMA Computational Algebra System for an (80, 45) linear block code.
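For intuition about the quantity being computed, here is a brute-force baseline (not the authors' accelerated algorithm): the minimum distance of a binary linear code equals the smallest Hamming weight of a nonzero codeword, found by enumerating all 2^k messages:

```python
import numpy as np
from itertools import product

def min_distance(G):
    """Brute-force minimum distance of a binary linear code with k x n
    generator matrix G: smallest Hamming weight over nonzero codewords."""
    k, n = G.shape
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue                   # skip the all-zero codeword
        cw = np.mod(np.array(msg) @ G, 2)
        best = min(best, int(cw.sum()))
    return best

# [7,4] Hamming code: rate 4/7 > 1/2, the regime targeted by the paper
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
```

Information-set methods exist precisely to avoid this 2^k enumeration; the paper's contribution is that swapping the roles of the information and parity sets effectively supplies a second information set when the rate exceeds 1/2.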
Explicit and Optimal Exact-Regenerating Codes for the Minimum-Bandwidth Point in Distributed Storage
Abstract:
In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading β units of data from each. Dimakis et al. show that the repair bandwidth dβ can be considerably reduced if each node stores slightly more than the minimum required, and they characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact-regeneration variation, unlike functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes for all values of the system parameters at one of the two most important and extreme points of the tradeoff, the Minimum Bandwidth Regenerating point, which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that, with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
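The storage-bandwidth tradeoff referred to above comes from the cut-set bound of Dimakis et al.; the following sketch (with illustrative parameter values of our own, not from the paper) evaluates that bound and checks the Minimum Bandwidth Regenerating (MBR) point, where the per-node storage α equals the repair download dβ:

```python
def file_capacity(k, d, alpha, beta):
    """Cut-set bound of Dimakis et al.: the maximum file size B that a
    regenerating code can store when any k nodes suffice for recovery,
    repair contacts d helpers, each node stores alpha, and beta units
    are downloaded per helper during a repair."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

# Hypothetical parameters: recover from any k = 3 nodes, repair from d = 4
k, d, beta = 3, 4, 1
alpha_mbr = d * beta                  # MBR point: alpha = d * beta
B_mbr = file_capacity(k, d, alpha_mbr, beta)
```

At the MBR point every term of the sum is (d - i)β, giving B = (kd - k(k-1)/2)β, the minimum-repair-bandwidth extreme of the tradeoff; the opposite extreme (minimum storage) takes α = (d - k + 1)β.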
Abstract:
In the direction-of-arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite-data performance of the multiple signal classification (MUSIC) and minimum norm (min. norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further simplified for the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
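As a concrete reference point for the MUSIC method analyzed above, here is a minimal error-free-array simulation of our own (not the paper's perturbation analysis): one source impinging on a uniform linear array with half-wavelength spacing, located via the peak of the MUSIC pseudo-spectrum:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=1801):
    """MUSIC pseudo-spectrum for a uniform linear array with half-wavelength
    spacing. X: (n_sensors, n_snapshots) complex data matrix."""
    m, n = X.shape
    R = X @ X.conj().T / n                       # sample covariance
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, : m - n_sources]                   # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, n_grid)      # angle grid in degrees
    k = np.pi * np.sin(np.deg2rad(grid))         # phase per sensor (d = lambda/2)
    A = np.exp(1j * np.outer(np.arange(m), k))   # steering matrix (m, n_grid)
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid, 1.0 / denom                     # peaks where a(theta) ⟂ En

# Simulate one source at 20 degrees on an 8-element array, 200 snapshots
rng = np.random.default_rng(0)
m, snapshots, theta_true = 8, 200, 20.0
a = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(theta_true)))
s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = np.outer(a, s) + noise
grid, spec = music_doa(X, n_sources=1)
theta_hat = grid[np.argmax(spec)]
```

The paper's contribution is quantifying how `theta_hat` degrades when the steering vectors themselves carry unknown gain and phase errors, a situation this idealized simulation omits.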
Abstract:
Model Reference Adaptive Control (MRAC) of a wide repertoire of stable Linear Time Invariant (LTI) systems is addressed here. Not even an upper bound on the order of the finite-dimensional system is assumed to be available. Further, the unknown plant is permitted to have both minimum-phase and nonminimum-phase zeros. The goal is model following with respect to a completely specified reference model excited by a class of piecewise continuous bounded signals. The problem is approached through the time-moments representation of an LTI system. The treatment here is confined to Single-Input Single-Output (SISO) systems. The adaptive controller is built upon an on-line scheme for estimating the time moments of a system given no more than its input and output. As a first step, a cascade compensator is devised. The primary contribution lies in developing a unified framework to eventually address, with more finesse, the problem of adaptive control of a large family of plants allowed to be minimum or nonminimum phase. The scheme presented in this paper is thus confined to laying the basis for more refined compensators (cascade, feedback, and both), initially for SISO systems and progressively for Multi-Input Multi-Output (MIMO) systems. Simulations are presented.
Abstract:
‘Best’ solutions for the shock-structure problem are obtained by solving the Boltzmann equation for a rigid-sphere gas by applying minimum-error criteria to the Mott-Smith ansatz. The use of two such criteria, minimizing respectively the local and the total error, as well as independent computations of the remaining error, establishes the high accuracy of the solutions, although it is shown that the Mott-Smith distribution is not an exact solution of the Boltzmann equation even at infinite Mach number. The minimum local error method is found to be particularly simple and efficient. Adopting the present solutions as the standard of comparison, it is found that the widely used v_x^2-moment solutions can be as much as a third in error, but that results based on Rosen's method provide good approximations. Finally, it is shown that if the Maxwell mean free path on the hot side of the shock is chosen as the scaling length, the value of the density-slope shock thickness is relatively insensitive to the intermolecular potential. A comparison is made on this basis of the present results with experiment, and very satisfactory quantitative agreement is obtained.