Abstract:
In this paper we present the composite Euler method for the strong solution of stochastic differential equations driven by d-dimensional Wiener processes. This method is a combination of the semi-implicit Euler method and the implicit Euler method. At each step either the semi-implicit Euler method or the implicit Euler method is used in order to obtain better stability properties. We give criteria for selecting the semi-implicit Euler method or the implicit Euler method. For the linear test equation, the convergence properties of the composite Euler method depend on the criteria for selecting the methods. Numerical results suggest that the convergence properties of the composite Euler method applied to nonlinear SDEs are the same as those for linear equations. The stability properties of the composite Euler method are shown to be far superior to those of the Euler methods, and numerical results show that the composite Euler method is a very promising method. (C) 2001 Elsevier Science B.V. All rights reserved.
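The idea of switching per step between the two Euler variants can be sketched for the scalar linear test equation dX = λX dt + μX dW; the switching rule used below (a threshold on the noise increment) is a hypothetical placeholder, not the paper's selection criteria:

```python
import math
import random

def composite_euler_step(x, lam, mu, h, dW, switch=0.5):
    """One step of a composite Euler scheme for dX = lam*X dt + mu*X dW.
    The switching rule here is an illustrative stand-in, not the paper's."""
    if abs(mu * dW) > switch:
        # fully implicit Euler: X1 = X0 + lam*X1*h + mu*X1*dW
        return x / (1.0 - lam * h - mu * dW)
    # semi-implicit (drift-implicit) Euler: X1 = X0 + lam*X1*h + mu*X0*dW
    return (x + mu * x * dW) / (1.0 - lam * h)

def simulate(x0=1.0, lam=-2.0, mu=0.5, h=0.01, n=1000, seed=1):
    """Strong path simulation of the linear test equation."""
    random.seed(seed)
    x = x0
    for _ in range(n):
        dW = random.gauss(0.0, math.sqrt(h))
        x = composite_euler_step(x, lam, mu, h, dW)
    return x
```

For a stable drift (λ < 0) the simulated path decays, mirroring the stability behaviour the abstract describes.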
Abstract:
For the improvement of genetic material suitable for on-farm use under low-input conditions, participatory and formal plant breeding strategies are frequently presented as competing options. A common frame of reference to phrase mechanisms and purposes related to breeding strategies will facilitate clearer descriptions of similarities and differences between participatory plant breeding and formal plant breeding. In this paper an attempt is made to develop such a common framework by means of a statistically inspired language that acknowledges the importance of both on-farm trials and research centre trials as sources of information for on-farm genetic improvement. Key concepts are the genetic correlation between environments, and the heterogeneity of phenotypic and genetic variance over environments. Classic selection response theory is taken as the starting point for the comparison of selection trials (on farm and research centre) with respect to the expected genetic improvement in a target environment (low-input farms). The variance-covariance parameters that form the input for selection response comparisons traditionally come from a mixed model fit to multi-environment trial data. In this paper we propose a recently developed class of mixed models, namely multiplicative mixed models, also called factor-analytic models, for modelling genetic variances and covariances (correlations). Multiplicative mixed models allow genetic variances and covariances to depend on quantitative descriptors of the environment, and confer high flexibility in the choice of variance-covariance structure, without requiring the estimation of a prohibitively high number of parameters. As a result, detailed selection response comparisons are facilitated. The statistical machinery involved is illustrated on an example data set consisting of barley trials from the International Center for Agricultural Research in the Dry Areas (ICARDA).
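The classic selection response comparison underlying such analyses can be sketched with Falconer's correlated-response formula; the heritabilities and genetic correlation below are hypothetical illustration values, not estimates from the ICARDA data:

```python
import math

def correlated_response(i, h2_sel, r_g, sigma_g_target):
    """Falconer's correlated response CR_T = i * h_S * r_G * sigma_G(T):
    expected gain in the target environment T when selecting in an
    environment S with heritability h2_sel."""
    return i * math.sqrt(h2_sel) * r_g * sigma_g_target

# Hypothetical scenario: research-centre trials have higher heritability
# but a lower genetic correlation with the low-input target farms.
i = 1.755  # selection intensity when keeping the top 10%
on_farm = correlated_response(i, h2_sel=0.2, r_g=1.0, sigma_g_target=1.0)
station = correlated_response(i, h2_sel=0.6, r_g=0.5, sigma_g_target=1.0)
```

With these illustrative numbers, on-farm selection yields the larger expected gain despite its lower heritability, which is the kind of trade-off the framework is designed to make explicit.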
Analysis of the example data showed that participatory plant breeding and formal plant breeding are better interpreted as providing complementary rather than competing information.
Abstract:
Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to children aged 3-8 years. Complexity analyses indicated that the domains entailed ternary relations (three variables). Simpler binary-relation (two variables) items were included for each domain. Thus RC was manipulated with other factors tightly controlled. Results indicated that (i) ternary-relation items were more difficult than comparable binary-relation items, (ii) the RC manipulation was sensitive to age-related changes, (iii) ternary relations were processed at a median age of 5 years, (iv) cross-task correlations were positive, with all tasks loading on a single factor (RC), (v) RC factor scores accounted for 80% (88%) of age-related variance in fluid intelligence (compositionality of sets), (vi) binary- and ternary-relation items formed separate complexity classes, and (vii) the RC approach to defining cognitive complexity is applicable to different content domains. (C) 2002 Elsevier Science (USA). All rights reserved.
Abstract:
This study investigated the haemodynamic response to the 90-minute application of 85 Hz transcutaneous electrical nerve stimulation (TENS) to the T1 and T5 nerve roots. Comparison was made between 20 healthy subjects who received TENS stimulation and a separate group of 20 healthy subjects who rested for 90 minutes. Pulse and blood pressure were measured just prior to the start of TENS stimulation, after 30 minutes of stimulation, and after 90 minutes of stimulation (immediately after stopping TENS) or at completion of the rest time, depending on group allocation. The rate pressure product was calculated from the pulse and systolic blood pressure data. Multivariate repeated measures analysis showed a significant group effect for TENS (p = 0.048). Univariate repeated measures analyses showed a significant group by time effect due to TENS on systolic blood pressure over the 90-minute time period (p = 0.028). Separate group repeated measures ANOVA showed a significant decline in heart rate (p < 0.001), systolic blood pressure (p = 0.013) and rate pressure product (p < 0.001) for the TENS group, while the control resting group showed a significant decline in heart rate only (p = 0.04). The application of 85 Hz TENS to the upper thoracic nerve roots causes no adverse haemodynamic effects in healthy subjects.
Abstract:
Plasma levels of lipoprotein(a) [Lp(a)] are associated with cardiovascular risk (Danesh et al., 2000) and were long believed to be influenced only by the LPA locus on chromosome 6q27. However, a recent report of Broeckel et al. (2002) suggested the presence of a second quantitative trait locus on chromosome 1 influencing Lp(a) levels. Using a two-locus model, we found no evidence for an additional Lp(a) locus on chromosome 1 in a linkage study among 483 dizygotic twin pairs.
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper seeks to demonstrate why Crofton's theorem need not be used to link moments of the trace length distribution captured by scan line or areal mapping to the moments of the diametral distribution of joints represented as disks and that it is incorrect to do so. The valid relationships for areal or scan line mapping between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that might be applied were Crofton's theorem assumed to apply. For areal mapping, the relationship is fortuitously correct but incorrect for scan line mapping.
Abstract:
Large values of the mass-to-light ratio (ϒ) in self-gravitating systems are among the most important pieces of evidence for dark matter. We propose an expression for the mass-to-light ratio in spherical systems using MOND. Results for the Coma cluster reveal that a modification of gravity, as proposed by MOND, can significantly reduce this value.
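The mechanism can be sketched with the standard deep-MOND relations (these are textbook MOND formulas, not the paper's specific expression for ϒ):

```latex
% Deep-MOND regime (a \ll a_0): \mu(x) \to x, so
% \mu(a/a_0)\, a = a_N \;\Rightarrow\; a = \sqrt{a_N a_0}.
% For a spherical system of mass M at radius r, with a_N = GM/r^2:
\frac{v^2}{r} = \sqrt{\frac{G M a_0}{r^2}}
\quad\Longrightarrow\quad
M_{\mathrm{MOND}} = \frac{v^4}{G a_0}
```

Since M_MOND can be much smaller than the Newtonian dynamical mass M_N = v²r/G at radii where a ≪ a₀, the inferred ϒ = M/L drops accordingly, which is the effect the abstract reports for Coma.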
Abstract:
Recent observations from type Ia Supernovae and from cosmic microwave background (CMB) anisotropies have revealed that most of the matter of the Universe interacts in a repulsive manner, composing the so-called dark energy constituent of the Universe. Determining the properties of dark energy is one of the most important tasks of modern cosmology and this is the main motivation for this work. The analysis of cosmic gravitational waves (GW) represents, besides the CMB temperature and polarization anisotropies, an additional approach in the determination of parameters that may constrain the dark energy models and their consistence. In recent work, a generalized Chaplygin gas model was considered in a flat universe and the corresponding spectrum of gravitational waves was obtained. In the present work we have added a massless gas component to that model and the new spectrum has been compared to the previous one. The Chaplygin gas is also used to simulate a L-CDM model by means of a particular combination of parameters so that the Chaplygin gas and the L-CDM models can be easily distinguished in the theoretical scenarios here established. We find that the models are strongly degenerated in the range of frequencies studied. This degeneracy is in part expected since the models must converge to each other when some particular combinations of parameters are considered.
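The generalized Chaplygin gas background evolution referred to here has a standard closed form (these are the usual textbook relations, not equations quoted from the paper):

```latex
p = -\frac{A}{\rho^{\alpha}}, \qquad
\rho(a) = \rho_0 \left[ A_s + (1 - A_s)\, a^{-3(1+\alpha)} \right]^{\frac{1}{1+\alpha}},
\qquad A_s \equiv \frac{A}{\rho_0^{\,1+\alpha}}
```

For α = 0 this reduces to ρ = ρ₀[A_s + (1 − A_s)a⁻³], a cosmological constant plus pressureless matter, which is the parameter combination that mimics ΛCDM.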
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become determinant, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns (DJI, FTSE 100 and NIKKEI 225) representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distribution innovations when doing out-of-sample estimation (within in-sample estimation, this is so for the right tail of the distribution of returns).
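The second stage of the McNeil-Frey approach (the GPD tail fit on standardized residuals, and the conditional quantile it yields) can be sketched as follows; the GPD is fitted here by the method of moments rather than maximum likelihood, purely to keep the illustration self-contained:

```python
def gpd_tail_quantile(z, q, u):
    """Estimate the q-quantile of standardized residuals z from a
    generalized Pareto distribution fitted to exceedances over the
    threshold u (method-of-moments fit, a simplification of MLE)."""
    exc = [x - u for x in z if x > u]
    n, nu = len(z), len(exc)
    m = sum(exc) / nu
    v = sum((e - m) ** 2 for e in exc) / (nu - 1)
    # method-of-moments GPD estimates of shape xi and scale beta
    xi = 0.5 * (1.0 - m * m / v)
    beta = 0.5 * m * (m * m / v + 1.0)
    # EVT quantile estimator: z_q = u + (beta/xi) * ((n/nu * (1-q))^(-xi) - 1)
    return u + (beta / xi) * (((1.0 - q) * n / nu) ** (-xi) - 1.0)

def conditional_var(mu_next, sigma_next, z, q=0.99, u=1.5):
    """McNeil-Frey conditional quantile: x_q = mu_{t+1} + sigma_{t+1} * z_q,
    where mu_{t+1} and sigma_{t+1} come from the fitted AR-GARCH model."""
    return mu_next + sigma_next * gpd_tail_quantile(z, q, u)
```

In the full procedure, the residuals z and the one-step-ahead mean and volatility would come from a quasi-maximum-likelihood AR-GARCH fit, which is omitted here.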
Abstract:
Facing the lateral vibration problem of a machine rotor as a beam on elastic supports in bending, the authors deal with the free vibration of elastically restrained Bernoulli-Euler beams carrying a finite number of concentrated elements along their length. Based on Rayleigh's quotient, an iterative strategy is developed to find the approximate torsional stiffness coefficients, which allows reconciliation between the theoretical model results and the experimental ones obtained through impact tests. The algorithm treats the vibration of continuous beams under a determined set of boundary and continuity conditions, including different torsional stiffness coefficients and the effect of attached concentrated masses and rotational inertias, not only in the energetic terms of Rayleigh's quotient but also in the mode shapes, considering shape functions defined in branches. Several loading cases are examined and examples are given to illustrate the validity of the model and the accuracy of the obtained natural frequencies.
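The Rayleigh quotient at the core of the strategy can be illustrated numerically for a simply supported beam with an optional attached point mass; this is a deliberately simplified setting (torsional springs and rotational inertias, which the paper also includes, are omitted here):

```python
import math

def rayleigh_quotient(EI, rhoA, L, masses=(), n=2000):
    """Rayleigh quotient omega^2 ~ strain energy / kinetic-energy term,
    using the trial shape w = sin(pi x / L), which is the exact first
    mode of a bare simply supported beam. Attached point masses (m, x)
    enter the denominator via m * w(x)^2."""
    dx = L / n
    num = den = 0.0
    for k in range(n + 1):
        x = k * dx
        wt = 1.0 if 0 < k < n else 0.5            # trapezoid weights
        w = math.sin(math.pi * x / L)
        w2 = -(math.pi / L) ** 2 * w              # second derivative of w
        num += wt * EI * w2 * w2 * dx             # bending strain energy
        den += wt * rhoA * w * w * dx             # distributed inertia
    for m, xm in masses:
        den += m * math.sin(math.pi * xm / L) ** 2
    return num / den                              # estimate of omega^2
```

For the bare beam this reproduces the exact result ω² = EI(π/L)⁴/(ρA), and adding a concentrated mass lowers the estimated natural frequency, as expected.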
Abstract:
Long-term contractual decisions are the basis of efficient risk management. However, those types of decisions have to be supported by a robust price forecast methodology. This paper reports a different approach to long-term price forecasting which tries to answer that need. Making use of regression models, the proposed methodology has as its main objective finding the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the problem complexity, the meta-heuristic Particle Swarm Optimization (PSO) was used to find the best regression parameters, and the results are compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
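Using PSO to fit regression parameters can be sketched with a minimal swarm minimizing the sum of squared errors of a simple linear model; the inertia and acceleration coefficients are common textbook defaults, not values from the paper:

```python
import random

def sse(params, data):
    """Sum of squared errors of the linear model y = a*x + b."""
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

def pso(data, n_particles=30, iters=200, seed=0):
    """Minimal PSO over the 2-D parameter space (a, b)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                     # common defaults
    pos = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sse(p, data) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    gbest_f = min(pbest_f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sse(pos[i], data)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest
```

In the paper's setting the fitness would instead measure how well the regression bounds the MCP at the desired confidence level α; the swarm mechanics are the same.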
Abstract:
In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has significantly increased. Power systems aim at lowering operational costs, requiring an adequate energy resources management. In this context, load consumption management plays an important role, and optimization strategies are necessary to adjust the consumption to the supply profile. These optimization strategies can be integrated in demand response programs. Controlling the energy consumption of an intelligent house has the objective of optimizing the load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer strategy and takes into account the renewable based micro generation, energy price, supplier solicitations, and consumers' preferences. The proposed approach is compared with a mixed integer non-linear approach.
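A toy version of the genetic-algorithm load-curtailment idea looks as follows; the load powers, priorities and penalty weight are illustrative values, not data from the paper:

```python
import random

def fitness(bits, loads, limit):
    """Total priority of the loads kept on, with a heavy penalty when
    the kept power exceeds the consumption limit. Loads are
    (power_kW, priority) pairs -- illustrative values only."""
    power = sum(p for b, (p, _) in zip(bits, loads) if b)
    prio = sum(pr for b, (_, pr) in zip(bits, loads) if b)
    if power > limit:
        return prio - 1000.0 * (power - limit)
    return prio

def ga_schedule(loads, limit, pop=40, gens=100, seed=3):
    """Elitist GA with one-point crossover and bit-flip mutation."""
    rng = random.Random(seed)
    n = len(loads)
    popu = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda b: fitness(b, loads, limit), reverse=True)
        elite = popu[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n)] ^= 1      # bit-flip mutation
            children.append(child)
        popu = elite + children
    return max(popu, key=lambda b: fitness(b, loads, limit))
```

The returned bit string indicates which loads stay on; everything else is curtailed so the consumption limit is respected.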
Abstract:
The concept of demand response has a growing importance in the context of future power systems. Demand response can be seen as a resource like distributed generation, storage, electric vehicles, etc. All these resources require an infrastructure able to give players the means to operate and use them in an efficient way. This infrastructure implements in practice the smart grid concept, and should accommodate a large number of diverse types of players in the context of a competitive business environment. In this paper, demand response is optimally scheduled jointly with other resources such as distributed generation units and the energy provided by the electricity market, minimizing the operation costs from the point of view of a virtual power player who manages these resources and supplies the aggregated consumers. The optimal schedule is obtained using two approaches based on particle swarm optimization (with and without mutation), which are compared with a deterministic approach that is used as a reference methodology. A case study with two scenarios implemented in DemSi, a demand response simulator developed by the authors, demonstrates the advantages of the proposed particle swarm approaches.
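The operator distinguishing the two PSO variants is a mutation step applied to particle positions; the rate and step size below are illustrative assumptions, not the paper's settings:

```python
import random

def mutate(position, rate=0.05, sigma=0.5, rng=random):
    """Gaussian mutation sometimes added to PSO to escape local optima:
    each coordinate is perturbed with probability `rate` by a
    zero-mean Gaussian step of standard deviation `sigma`."""
    return [x + rng.gauss(0.0, sigma) if rng.random() < rate else x
            for x in position]
```

In the mutated variant this is applied to each particle after the usual velocity and position update; the plain variant simply omits it.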
Abstract:
This paper proposes a swarm intelligence long-term hedging tool to support electricity producers in competitive electricity markets. This tool investigates the long-term hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward) and financial (options) settlement. To find the optimal portfolio, the producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. Variance estimation and the expected return are based on a forecasted scenario interval determined by a long-term price range forecast model, developed by the authors, whose explanation is outside the scope of this paper. The proposed tool makes use of Particle Swarm Optimization (PSO) and its performance has been evaluated by comparing it with a Genetic Algorithm (GA) based approach. To validate the risk management tool, a case study using real historical price data for the mainland Spanish market is presented to demonstrate the effectiveness of the proposed methodology.
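One common form of such a utility function is the mean-variance trade-off U = E[R] − A·Var[R]; whether the paper uses exactly this form is not stated, so the sketch below is an assumption:

```python
def utility(returns_scenarios, risk_aversion=0.5):
    """Mean-variance utility U = E[R] - A * Var[R] over a set of return
    scenarios; the exact form of the paper's utility function U is
    assumed here, not quoted."""
    n = len(returns_scenarios)
    mean = sum(returns_scenarios) / n
    var = sum((r - mean) ** 2 for r in returns_scenarios) / n
    return mean - risk_aversion * var
```

In the hedging tool, each candidate portfolio of spot, forward and option positions would be mapped to return scenarios from the price-range forecast, and the swarm would maximize this utility.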