789 results for Predictive-validity
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.
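The core idea of iterating on modified model-based problems until the model-based optimum coincides with the real optimum can be illustrated with a toy static analogue. This is a hedged sketch, not the DISOPE algorithm itself: the objective functions, the modifier, and the relaxed update below are invented for illustration.

```python
# Toy static analogue of the model-reality-difference iteration
# (illustrative only): the optimizer never minimizes the real
# objective directly; it repeatedly minimizes the model objective
# plus a linear modifier lam*u computed from the observed
# model-reality gradient mismatch at the current iterate.

def f_real_grad(u):      # gradient of a hypothetical real objective (u - 3)^2
    return 2.0 * (u - 3.0)

def f_model_grad(u):     # gradient of a hypothetical model objective (u - 1)^2
    return 2.0 * (u - 1.0)

def disope_iterate(u0, alpha=0.5, iters=50):
    """Iterate on modified model-based problems; despite the model
    mismatch, the iterates converge to the real optimum u = 3."""
    u = u0
    for _ in range(iters):
        lam = f_real_grad(u) - f_model_grad(u)   # model-reality modifier
        # argmin_u f_model(u) + lam*u  =>  2(u - 1) + lam = 0
        u_star = 1.0 - lam / 2.0
        u = u + alpha * (u_star - u)             # relaxed update
    return u

print(disope_iterate(0.0))   # converges to the real optimum u = 3
```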
Abstract:
The authors compare the performance of two types of controllers: one based on a multi-layered network and the other based on a single-layered CMAC (cerebellar model articulation controller) network. The neurons (information processing units) in the multi-layered network use Gaussian activation functions. The control scheme considered is a predictive control algorithm, along the lines used by Willis et al. (1991) and Kambhampati and Warwick (1991). The process selected as a test bed is a continuous stirred tank reactor. The reaction taking place is an irreversible exothermic reaction in a constant-volume reactor cooled by a single coolant stream. This reactor is a simplified version of the first tank in the two-tank system given by Henson and Seborg (1989).
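A minimal sketch of the kind of Gaussian-activation hidden layer described above; the centres, widths and weights here are hypothetical, not taken from the paper.

```python
import numpy as np

def rbf_layer(x, centers, widths, weights):
    """Hidden layer of Gaussian (radial basis) units followed by a
    linear output: activation_i = exp(-||x - c_i||^2 / (2 w_i^2))."""
    act = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(act @ weights)

# Hypothetical two-input network with a single hidden unit at the origin.
centers = np.array([[0.0, 0.0]])
widths = np.array([1.0])
weights = np.array([2.0])
print(rbf_layer(np.array([0.0, 0.0]), centers, widths, weights))  # 2.0 at the centre
```

The response decays smoothly with distance from each centre, which is what makes such units attractive for local function approximation in predictive control.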
Abstract:
We introduce the notion that the energy of individuals can manifest as a higher-level, collective construct. To this end, we conducted four independent studies to investigate the viability and importance of the collective energy construct as assessed by a new survey instrument—the productive energy measure (PEM). Study 1 (n = 2208) included exploratory and confirmatory factor analyses to explore the underlying factor structure of the PEM. Study 2 (n = 660) cross-validated the same factor structure in an independent sample. In study 3, we administered the PEM to more than 5000 employees from 145 departments located in five countries. Results from measurement invariance, statistical aggregation, convergent-validity, and discriminant-validity assessments offered additional support for the construct validity of the PEM. In terms of predictive and incremental validity, the PEM was positively associated with three collective attitudes—units' commitment to goals, the organization, and overall satisfaction. In study 4, we explored the relationship between the productive energy of firms and their overall performance. Using data from 92 firms (n = 5939 employees), we found a positive relationship between the PEM (aggregated to the firm level) and the performance of those firms. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also verifies experimentally a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
Abstract:
This paper describes the integration of constrained predictive control and computed-torque control, and its application on a six-degree-of-freedom PUMA 560 manipulator arm. The real-time implementation was based on SIMULINK, with the predictive controller and the computed-torque control law implemented in the C programming language. The constrained predictive controller solved a quadratic programming problem at every sampling interval, which was as short as 10 ms, using a prediction horizon of 150 steps and an 18th order state space model.
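The receding-horizon computation described above can be sketched in miniature: at each sampling instant a bound-constrained quadratic cost over the prediction horizon is minimized, and only the first input of the optimal sequence is applied. The scalar plant, weights, and horizon below are invented for illustration and are far simpler than the PUMA 560 model; a general-purpose bounded minimizer stands in for the quadratic programming step.

```python
import numpy as np
from scipy.optimize import minimize

A, B = 1.2, 0.5          # hypothetical unstable scalar plant x[k+1] = A x + B u
Q, R = 1.0, 0.1          # state and input weights
N = 15                   # prediction horizon
U_MAX = 1.0              # input bound |u| <= U_MAX

def horizon_cost(u_seq, x0):
    """Quadratic cost of an input sequence over the prediction horizon."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += Q * x**2 + R * u**2
        x = A * x + B * u
    return cost + Q * x**2            # terminal weighting

def mpc_step(x0):
    """Solve the bound-constrained problem; apply only the first input."""
    res = minimize(horizon_cost, np.zeros(N), args=(x0,),
                   bounds=[(-U_MAX, U_MAX)] * N, method="L-BFGS-B")
    return res.x[0]

x = 2.0
for _ in range(30):                   # closed loop: state driven toward 0
    x = A * x + B * mpc_step(x)
print(abs(x))                         # small residual near the origin
```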
Abstract:
User interaction within a virtual environment may take various forms: a teleconferencing application will require users to speak to each other (Geak, 1993); with computer-supported co-operative working, an engineer may wish to pass an object to another user for examination; in a battlefield simulation (McDonough, 1992), users might exchange fire. In all cases it is necessary for the actions of one user to be presented to the others sufficiently quickly to allow realistic interaction. In this paper we take a fresh look at the approach of virtual reality operating systems by tackling the underlying issues of creating real-time multi-user environments.
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n = 41) of two different proficiency levels and one group of native speakers of French (n = 23). The importance of careful lemmatization in studies of lexical diversity involving highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability, in that they explain up to 62 percent of the variance in French C-test scores and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed here. The paper concludes that limiting the range of text lengths, or even keeping text length constant, is the safest option in analysing lexical diversity.
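As a rough illustration of one of the measures under study, a single forward pass of MTLD counts "factors": stretches of text whose running type-token ratio stays above a threshold (conventionally 0.72). The full measure also averages a reversed pass and presupposes the careful lemmatization the paper stresses; this sketch omits both.

```python
def mtld_forward(tokens, threshold=0.72):
    """One directional pass of MTLD: count the factors (stretches whose
    type-token ratio stays above the threshold, plus a proportional
    partial factor at the end) and return total tokens / factor count."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        if len(types) / count <= threshold:
            factors += 1              # a full factor is complete; reset
            types, count = set(), 0
    if count > 0:                     # proportional partial factor
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(tokens) / factors if factors > 0 else float("inf")

tokens = "the cat sat on the mat the dog sat on the log".split()
print(mtld_forward(tokens))   # 12.0: one factor over twelve tokens
```

More repetitive text completes factors sooner and therefore scores lower, which is the sense in which MTLD indexes lexical diversity.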
Abstract:
There is controversy about whether traditional medicine can guide drug discovery, and investment in ethnobotanically led research has fluctuated. One view is that traditionally used plants are not necessarily efficacious and there are no robust methods for distinguishing the ones that are most likely to be bioactive when selecting species for further testing. Here, we reconstruct a genus-level molecular phylogeny representing the 20,000 species found in the floras of three disparate biodiversity hotspots: Nepal, New Zealand and the Cape of South Africa. Borrowing phylogenetic methods from community ecology, we reveal significant clustering of the 1,500 traditionally used species, and provide a direct measure of the relatedness of the three medicinal floras. We demonstrate shared phylogenetic patterns across the floras: related plants from these regions are used to treat medical conditions in the same therapeutic areas. This strongly suggests independent discovery of plant efficacy, an interpretation corroborated by the presence of a significantly greater proportion of known bioactive species in these plant groups than in a random sample. Phylogenetic cross-cultural comparison can focus screening efforts on a subset of traditionally used plants that are richer in bioactive compounds, and could revitalise the use of traditional knowledge in bioprospecting.
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real world ensembles. An alternative is to consider an ensemble as merely a source of information rather than the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically acting not on individual ensemble members but on the entire ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
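Standard kernel dressing, the baseline that AKD amends, can be sketched as follows: the ensemble is read as an equally weighted mixture of Gaussian kernels, one centred on each member, sharing a common bandwidth. The affine map and climatology weight that AKD adds are omitted, and the ensemble values below are hypothetical.

```python
import math

def kernel_dressed_pdf(ensemble, sigma):
    """Turn an M-member ensemble into a probability density: a mixture
    of Gaussian kernels, one centred on each member, bandwidth sigma."""
    m = len(ensemble)
    norm = m * sigma * math.sqrt(2.0 * math.pi)
    def pdf(y):
        return sum(math.exp(-0.5 * ((y - x) / sigma) ** 2)
                   for x in ensemble) / norm
    return pdf

# Hypothetical 3-member ensemble; the resulting density integrates to one.
p = kernel_dressed_pdf([0.0, 1.0, 2.0], sigma=0.5)
```

Because each member simply contributes its own kernel, a too-wide ensemble yields a too-wide density, which is the overdispersion-exacerbating behaviour the abstract attributes to the standard method.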
Abstract:
A prediction mechanism is necessary in human visual motion processing to compensate for the delay of the sensory-motor system. In a previous study, “proactive control”, in which the motion of the hand precedes the virtual moving target in visual tracking experiments, was discussed as one example of the predictive function of human beings. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.