991 results for variable parameters
Abstract:
An interspecific hybrid resulting from the crossing of elephant grass (Pennisetum purpureum Schumach) x pearl millet (Pennisetum glaucum (L.) R. Brown) has been developed. This hybrid, however, showed low phenotypic uniformity and low production of pure seeds. Through recurrent selection, two improved populations were obtained (genotypes Corte and Pastoreio). The aim of this study was to assess the seed quality of the three hybrids (genotypes Corte, Pastoreio and Paraiso) using tests of seed purity; seed germination; accelerated aging at 42 ºC; 1,000-seed weight; drying curves; and sorption and desorption isotherms. Recurrent selection altered seed size and increased the initial quality of the population for genotype Pastoreio. Drying curves for the three hybrids showed similar behavior, reaching moisture contents of 2.1%, 1.9%, and 1.8%, respectively, after 63 days. The accelerated aging test showed that hybrid Pastoreio was the most vigorous.
Abstract:
A new approach to the determination of the thermal parameters of high-power batteries is introduced here. Application of local heat flux measurement with a gradient heat flux sensor (GHFS) allows determination of the cell thermal parameters at different points on the cell surface. The suggested methodology is not destructive to the cell, as it does not require deep discharge of the cell or application of any charge/discharge cycles during measurement of the thermal parameters. The complete procedure is demonstrated on a high-power Li-ion pouch cell, and it is verified on a sample with well-known thermal parameters. A comparison of the experimental results with conventional thermal characterization methods shows an acceptably low error. The dependence of the cell thermal parameters on the state of charge (SoC) and on the measurement point on the surface was studied with the proposed measurement approach.
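The abstract does not spell out the estimation equations; the sketch below only illustrates the kind of surface-point evaluation it describes, estimating a through-plane thermal conductivity from Fourier's law and a lumped heat capacity from an integrated heat pulse. All numerical values and variable names are illustrative assumptions, not data or code from the paper.

```python
# Illustrative calculation only: hypothetical numbers for one surface point
# of a pouch cell instrumented with a gradient heat flux sensor (GHFS) and
# face thermocouples. Not the paper's procedure, just the underlying physics.
q_flux = 85.0          # measured local heat flux through the cell [W/m^2]
t_face_hot = 31.2      # temperature of the heated face [degC]
t_face_cold = 27.4     # temperature of the opposite face [degC]
thickness = 0.011      # cell thickness [m]

# Fourier's law across the cell stack (steady state assumed):
k_through_plane = q_flux * thickness / (t_face_hot - t_face_cold)
print(f"through-plane thermal conductivity ~ {k_through_plane:.2f} W/(m K)")

# Lumped heat capacity from an integrated heat pulse (again, made-up values):
area = 0.20 * 0.15     # active surface area [m^2]
pulse_flux = 120.0     # constant heat flux during the pulse [W/m^2]
pulse_time = 600.0     # pulse duration [s]
delta_t_cell = 3.0     # resulting mean temperature rise of the cell [K]
mass = 0.9             # cell mass [kg]
cp = pulse_flux * area * pulse_time / (mass * delta_t_cell)
print(f"specific heat capacity ~ {cp:.0f} J/(kg K)")
```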
Abstract:
Fluid handling systems account for a significant share of the global consumption of electrical energy. They also suffer from problems that reduce their energy efficiency and increase life-cycle costs. Detecting or predicting these problems in time can make fluid handling systems more environmentally and economically sustainable to operate. In this Master's Thesis, significant problems in fluid systems were studied and possibilities to develop variable-speed-drive-based detection methods for them were discussed. A literature review was conducted to find significant problems occurring in fluid handling systems containing pumps, fans and compressors. To find case examples for evaluating the feasibility of variable-speed-drive-based methods, queries were sent to industrial companies. As a result, the possibility of detecting heat exchanger fouling with a variable-speed drive was analysed with data from three industrial cases. It was found that a mass flow rate estimate, which can be generated with a variable-speed drive, can be used together with temperature measurements to monitor a heat exchanger's thermal performance. Secondly, it was found that the fouling-related increase in the pressure drop of a heat exchanger can be monitored with a variable-speed drive. Lastly, for systems where the flow device is speed-controlled based on a pressure measurement, it was concluded that an increasing rotational speed can be interpreted as a sign of progressing fouling in the heat exchanger.
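As a rough illustration of the thermal-performance monitoring principle mentioned above (not the thesis's actual implementation), the sketch below estimates a heat exchanger's UA value from a drive-based mass flow estimate and inlet/outlet temperatures, and flags a sustained drop as possible fouling. The data, threshold, and function names are hypothetical.

```python
import numpy as np

def ua_estimate(m_dot, cp, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Overall heat transfer coefficient-area product UA [W/K] of a
    counter-flow heat exchanger, from the hot-side heat duty and the
    log-mean temperature difference (assumes the two terminal
    temperature differences are not equal)."""
    q = m_dot * cp * (t_hot_in - t_hot_out)      # hot-side heat duty [W]
    d_t1 = t_hot_in - t_cold_out                 # terminal difference 1 [K]
    d_t2 = t_hot_out - t_cold_in                 # terminal difference 2 [K]
    lmtd = (d_t1 - d_t2) / np.log(d_t1 / d_t2)
    return q / lmtd

# Hypothetical daily averages: drive-estimated mass flow [kg/s], temperatures [degC]
m_dot      = np.array([12.1, 12.0, 12.2, 11.9, 12.0])
t_hot_in   = np.full(5, 80.0)
t_hot_out  = np.array([55.0, 56.0, 57.0, 58.0, 59.0])   # hot side cooled less and less
t_cold_in  = np.full(5, 20.0)
t_cold_out = np.array([50.0, 49.0, 48.0, 47.0, 46.0])   # cold side heated less and less

ua = ua_estimate(m_dot, 4180.0, t_hot_in, t_hot_out, t_cold_in, t_cold_out)
for day, value in enumerate(ua):
    flag = "  <- possible fouling" if value < 0.8 * ua[0] else ""
    print(f"day {day}: UA = {value / 1000:.1f} kW/K{flag}")
```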
Abstract:
Breeding parameters of Great Cormorants (Phalacrocorax carbo carbo) and Double-crested Cormorants (P. auritus) were examined at two mixed-species colonies at Cape Tryon and Durell Point, Prince Edward Island, from 1976 to 1978. Differential access to nests at the two colony sites resulted in more complete demographic data for P. carbo than for P. auritus. In 1977, P. carbo was present at both colonies by 21 March, whereas P. auritus did not return until 1 April and 16 April at Cape Tryon and Durell Point, respectively. Differences in the arrival chronology of individuals of each species, and differences in the time of nest site occupation according to age, are suggested as factors influencing the nest site distribution of P. carbo and P. auritus at Cape Tryon. Forty-eight P. carbo chicks banded at the Durell Point colony between 1974 and 1976 returned there to nest as two- to four-year-olds in 1977 and 1978. Unmarked individuals with clutch-starts in April were likely greater than four years old, as all marked two- to four-year-olds (with one possible exception) in 1977 and 1978 had clutch-starts in May and June. Seasonal variation in the breeding success of P. carbo individuals was examined at Durell Point in 1977. Mean clutch size, hatching success and fledging success exhibited a seasonal decline. Four- and five-egg clutches represented the majority (75%) of all P. carbo clutches at Durell Point in 1977 and had the highest reproductive success (0.48 and 0.43 chicks fledged per egg laid, respectively). Smaller clutches produced small broods with significantly higher chick mortality, while larger clutches suffered high egg loss prior to clutch completion.
Abstract:
Because low-Mg calcite fossil shells are so important in paleoceanographic research, 249 brachiopod, cement and matrix specimens from two neighboring localities (Jemez Springs and Battleship Rock) of the Upper Pennsylvanian Madera Formation were analyzed. About 86% of the Madera brachiopods are preserved in their pristine mineralogy, microstructure and geochemistry. Cement and matrix samples, in contrast, have been subjected to complete but variable post-depositional alteration. It is confirmed that the stable isotope data of brachiopods are much better than those of matrix material in defining depositional parameters. Because there is no uniform or constant relationship between the two data sets (e.g., differences from 0.1 to 3.0‰ for δ18O and from 0.2 to 6.7‰ for δ13C in this study), it is not possible to make corrections for the matrix data. Regarding the two stratigraphic sections, elemental and petrographic analyses suggest that Jemez Springs is closer to the Penasco Uplift than Battleship Rock. Seawater at Jemez Springs was more aerobic, and the water chemistry was more influenced by continental sources than that at Battleship Rock. In addition, there is relatively stronger dolomitization in the mid-section of the Battleship Rock locality. Results further suggest that no significant biogenic fractionation or vital effects occurred during shell secretion, indicating that the Madera brachiopods incorporated oxygen and carbon isotopes in equilibrium with the ambient seawater. This conclusion is not only drawn from the temporal and spatial analyses, but also supported by inter-generic comparison of brachiopods (Composita and Neospirifer) and statistical analysis (t-test).
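For reference, the per-mil values quoted above follow the standard delta notation for stable isotope ratios relative to a reference standard (conventionally VPDB for carbonates); the definition below is standard usage rather than a formula taken from the study itself.

```latex
\delta^{18}\mathrm{O} =
\left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
            {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right) \times 10^{3}\ \text{‰},
\qquad
\delta^{13}\mathrm{C} =
\left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}
            {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 10^{3}\ \text{‰}
```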
Abstract:
Order parameter profiles extracted from the NMR spectra of model membranes are a valuable source of information about their structure and molecular motions. To analyze powder spectra, the de-Pake-ing (numerical deconvolution) technique can be used, but it assumes a random (spherical) distribution of orientations in the sample. Multilamellar vesicles are known to deform and orient in the strong magnetic fields of NMR magnets, producing non-spherical orientation distributions. A recently developed technique for simultaneously extracting the anisotropies of the system as well as the orientation distributions is applied to the analysis of partially magnetically oriented 31P NMR spectra of phospholipids. A mixture of synthetic lipids, POPE and POPG, is analyzed to measure distortion of multilamellar vesicles in a magnetic field. In the analysis, three models describing the shape of the distorted vesicles are examined. Ellipsoids of rotation with a semiaxis ratio of about 1.14 are found to provide a good approximation of the shape of the distorted vesicles. This is in reasonable agreement with published experimental work. All three models yield clearly non-spherical orientational distributions, as well as a precise measure of the anisotropy of the chemical shift. Noise in the experimental data prevented the analysis from concluding which of the three models is the best approximation. A discretization scheme for finding stability in the algorithm is outlined.
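The orientation dependence underlying such an analysis is not written out in the abstract; for an axially symmetric 31P chemical shift tensor it takes the standard form below, where θ is the angle between the bilayer normal and the magnetic field and Δδ is the effective chemical shift anisotropy that the analysis extracts (standard convention, given here as background rather than as the author's notation).

```latex
\delta(\theta) = \delta_{\mathrm{iso}} + \frac{\Delta\delta}{3}\,\bigl(3\cos^{2}\theta - 1\bigr),
\qquad
\Delta\delta = \delta_{\parallel} - \delta_{\perp},
\quad
\delta_{\mathrm{iso}} = \tfrac{1}{3}\bigl(\delta_{\parallel} + 2\,\delta_{\perp}\bigr)
```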
Abstract:
The streams flowing through the Niagara Escarpment are paved by coarse carbonate and sandstone sediments which have originated from the escarpment units and can be traced downstream from their source. Fifty-nine sediment samples were taken from five streams, over distances of 3,000 to 10,000 feet (915 to 3,050 m), to determine downstream changes in sediment composition, textural characteristics and sorting. In addition, fluorometric velocity measurements were used in conjunction with measured discharge and flow records to estimate the frequency of sediment movement. The frequency of sediments of a given lithology changes downstream in direct response to the outcrop position of the formations in the channels. Clasts derived from a single stratigraphic unit usually reach a maximum frequency within the first 1,000 feet (305 m) of transport. Sediments derived from formations at the top of waterfalls reach a modal frequency farther downstream than material originating at the base of waterfalls. Downstream variations in sediment size over the lengths of the study reaches reflect the changes in channel morphology and lithologic composition of the sediment samples. Linear regression analyses indicate that there is a decrease in the axial lengths between the initial and final samples and that the long axis decreases in length more rapidly than the intermediate, while the short axis remains almost constant. Carbonate sediments from coarse-grained, fossiliferous units are more variable in size than fine-grained dolostones and sandstones. The average sphericity for carbonates and sandstones increases from 0.65 to 0.67, while maximum projection sphericity remains nearly constant with an average value of 0.52. Pebble roundness increases more rapidly than either of the sphericity parameters, and the sediments change from subrounded to rounded. The Hjulstrom diagram indicates that the velocities required to initiate transport of sediments with an average intermediate diameter of 10 cm range from 200 cm/s to 300 cm/s (6.6 ft/s to 9.8 ft/s). From the modal velocity-discharge relations, the flows corresponding to these velocities are greater than 3,500 cfs (99 m³/s). These discharges occur less than 0.01 per cent (0.4 days) of the time and correspond to a discharge occurring during the spring flood.
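The shape indices referred to above follow standard sedimentological definitions; the short sketch below (with made-up axis lengths, not the thesis data) shows how Krumbein's intercept sphericity and Sneed and Folk's maximum projection sphericity would be computed from the long (a), intermediate (b) and short (c) axes, together with the cfs-to-m³/s conversion behind the quoted threshold discharge. Which of the two indices the thesis calls "average sphericity" is an assumption on my part.

```python
# Standard pebble-shape indices from the long (a), intermediate (b) and
# short (c) axis lengths. The axis values below are hypothetical.
def krumbein_sphericity(a, b, c):
    """Krumbein intercept sphericity."""
    return (b * c / a ** 2) ** (1 / 3)

def max_projection_sphericity(a, b, c):
    """Sneed and Folk (1958) maximum projection sphericity."""
    return (c ** 2 / (a * b)) ** (1 / 3)

a, b, c = 14.0, 10.0, 4.5   # axis lengths in cm for an illustrative clast
print(f"Krumbein sphericity:           {krumbein_sphericity(a, b, c):.2f}")
print(f"Maximum projection sphericity: {max_projection_sphericity(a, b, c):.2f}")

# Threshold discharge quoted above, expressed in SI units:
CFS_TO_M3S = 0.0283168
print(f"3,500 cfs = {3500 * CFS_TO_M3S:.0f} m^3/s")
```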
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses. 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that their learning would resemble that of normals on the same task (Brown, 1974). In the first experiment, 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment, 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance. It was found in the first experiment that: 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted; 2) the previous practice variable was significant over all segments of the experiment, and those who received previous practice were able to score significantly higher than those without it; 3) increasing the size of the load on STM severely restricts performance; 4) the effect of repeated trials proved to be beneficial, and gains were generally made on each successive trial within each group; and 5) in the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. There are environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters which influence learning, and these have serious implications for educational programs.
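The analyses of variance described above are reported only in summary form; a minimal modern re-creation of the first experiment's four-way factorial layout (verbal ability x practice x STM load x trial), run on entirely simulated scores, might look like the sketch below. It uses a plain between-subjects ANOVA as a stand-in and does not model the repeated-measures structure properly, so it illustrates the design rather than the original analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Simulated layout: 2 verbal levels x 2 practice conditions x 3 digit-code
# loads x 3 repeated trials, 5 subjects per cell. Scores are fake.
rows = []
for verbal in ("high", "low"):
    for practice in ("practice", "none"):
        for load in (5, 7, 9):
            for subject in range(5):
                for trial in (1, 2, 3):
                    score = (40 - 2 * load + 5 * (practice == "practice")
                             + 3 * trial + rng.normal(0, 4))
                    rows.append((verbal, practice, load, trial, score))

df = pd.DataFrame(rows, columns=["verbal", "practice", "load", "trial", "score"])
model = smf.ols("score ~ C(verbal) * C(practice) * C(load) * C(trial)", data=df).fit()
print(anova_lm(model, typ=2))
```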
Abstract:
Thesis (Master's in Sciences for Human Settlement Planning), U.A.N.L.
Abstract:
Thesis (Master's in Public Health with a Specialty in Social Dentistry), U.A.N.L.
Abstract:
Thesis (Master of Science), U.A.N.L.
Abstract:
Thesis (Master's in Administration Sciences with a Specialty in Industrial Relations), U.A.N.L.
Abstract:
Thesis (Master's in Business Administration with a Specialty in Production and Quality), UANL, 2009.
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
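The two representations that the paper unifies can be stated compactly in standard notation (not quoted from the paper): under a stochastic discount factor m_{t+1}, every gross return R_{i,t+1} satisfies the conditional pricing equation, and when the SDF is spanned by a set of factors F_{t+1}, the familiar conditional beta pricing relation follows.

```latex
E\left[\, m_{t+1}\, R_{i,t+1} \,\middle|\, \mathcal{I}_t \right] = 1,
\qquad
m_{t+1} = a_t + b_t' F_{t+1}
\;\Longrightarrow\;
E\left[\, R_{i,t+1} \,\middle|\, \mathcal{I}_t \right] - r_{f,t} = \beta_{i,t}'\, \lambda_t
```

Here β_{i,t} denotes the conditional regression coefficients of R_{i,t+1} on the factors and λ_t the corresponding factor risk premia.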
Abstract:
In this paper, we characterize the asymmetries of the smile through multiple leverage effects in a stochastic dynamic asset pricing framework. The dependence between price movements and future volatility is introduced through a set of latent state variables. These latent variables can capture not only the volatility risk and the interest rate risk which potentially affect option prices, but also any kind of correlation risk and jump risk. The standard financial leverage effect is produced by a cross-correlation effect between the state variables which enter into the stochastic volatility process of the stock price and the stock price process itself. However, we provide a more general framework where asymmetric implied volatility curves result from any source of instantaneous correlation between the state variables and either the return on the stock or the stochastic discount factor. In order to draw the shapes of the implied volatility curves generated by a model with latent variables, we specify an equilibrium-based stochastic discount factor with time non-separable preferences. When we calibrate this model to empirically reasonable values of the parameters, we are able to reproduce the various types of implied volatility curves inferred from option market data.
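As a purely illustrative counterpart to the curves described above (and not the paper's equilibrium model), the sketch below prices calls under a two-state lognormal mixture, a crude stand-in for a latent volatility state negatively correlated with returns, and inverts the Black-Scholes formula to display the resulting asymmetric implied-volatility curve. Every parameter value is an assumption.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(forward, strike, vol, maturity, discount):
    """Black-Scholes call price written on the forward."""
    std = vol * np.sqrt(maturity)
    d1 = (np.log(forward / strike) + 0.5 * std ** 2) / std
    d2 = d1 - std
    return discount * (forward * norm.cdf(d1) - strike * norm.cdf(d2))

def implied_vol(price, forward, strike, maturity, discount):
    """Invert Black-Scholes for the volatility that reproduces `price`."""
    return brentq(lambda v: bs_call(forward, strike, v, maturity, discount) - price,
                  1e-4, 3.0)

# Two latent volatility states; the low-volatility state comes with a higher
# forward and the high-volatility state with a lower one, mimicking a
# negative return-volatility correlation (leverage-type effect). The
# state-forwards average back to the true forward, so the mixture is a
# legitimate risk-neutral density.
spot, rate, maturity = 100.0, 0.02, 0.5
discount = np.exp(-rate * maturity)
forward = spot / discount
states = [(0.5, 1.03 * forward, 0.15),   # (probability, state forward, state vol)
          (0.5, 0.97 * forward, 0.35)]

for strike in np.linspace(80, 120, 9):
    price = sum(p * bs_call(f, strike, v, maturity, discount) for p, f, v in states)
    iv = implied_vol(price, forward, strike, maturity, discount)
    print(f"K = {strike:6.1f}   implied vol = {iv:.3f}")
```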