35 results for Explicit recasts


Relevance:

10.00%

Publisher:

Abstract:

It has been suggested that the temporal control of rhythmic unimanual movements differs between tasks requiring continuous movements (e.g., circle drawing) and discontinuous movements (e.g., finger tapping). Specifically, for continuous movements temporal regularity is an emergent property, whereas for tasks that involve discontinuities timing is an explicit part of the action goal. The present experiment further investigated the control of continuous and discontinuous movements by comparing the coordination dynamics and attentional demands of bimanual continuous circle drawing with bimanual intermittent circle drawing. The intermittent task required participants to insert a 400 ms pause between each cycle while circling. Using dual-task methodology, 15 right-handed participants performed the two circle drawing tasks while vocally responding to randomly presented auditory probes. The circle drawing tasks were performed in symmetrical and asymmetrical coordination modes and at movement frequencies of 1 Hz and 1.7 Hz. Intermittent circle drawing exhibited greater spatial and temporal accuracy and stability than continuous circle drawing, supporting the hypothesis that the two tasks have different underlying control processes. In terms of attentional cost, probe reaction time (RT) was significantly slower during the intermittent circle drawing task than during the continuous circle drawing task, across both coordination modes and movement frequencies. Of interest was the finding that, in the intermittent circling task, RT to probes presented during the pause between cycles did not differ from RT to probes occurring during the circling movement. The differences in attentional demands between the intermittent and continuous circle drawing tasks may reflect the operation of explicit event timing and implicit emergent timing processes, respectively. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Taking as a starting point the acknowledgment that the principles and methods used to build and manage documentary systems are dispersed and lack systematization, this study hypothesizes that the notion of structure, by assuming mutual relationships among its elements, promotes more organic systems and assures better quality and consistency in the retrieval of information relevant to users' needs. Accordingly, it aims to explore the fundamentals of information records and documentary systems, starting from the notion of structure. To that end, it presents basic concepts and issues related to documentary systems and information records. It then surveys the theoretical contributions on the notion of structure studied by Benveniste, Ferrater Mora, Levi-Strauss, Lopes, Penalver Simo and Saussure, as well as by Ducrot, Favero and Koch. The appropriations already made in Documentation by Paul Otlet, Garcia Gutierrez and Moreiro Gonzalez come as a further topic. It concludes that the adopted notion of structure, by making explicit a hypothesis of real systematization, yields more organic systems and provides a pedagogical reference for documentary tasks.

Relevance:

10.00%

Publisher:

Abstract:

The time-varying intensity of a load applied to a structure poses many difficulties in analysis. A remedy to this situation is to substitute a complex pulse shape with an equivalent rectangular one. It has been shown by others that this procedure works well for perfectly plastic elementary structures. This paper applies the concept of the equivalent pulse to more complex structures. Special attention is given to the material behavior, which is allowed to be strain-rate and strain-hardening sensitive. Thanks to the explicit finite element solution, it is shown in this article that blast loads applied to complex structures made of real materials can be substituted by equivalent rectangular loads, with both responses being practically the same. (c) 2007 Elsevier Ltd. All rights reserved.
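
As a loose numerical illustration of replacing a complex pulse with a rectangular equivalent (not the matching rule used in the paper, which the abstract does not spell out), the sketch below builds a hypothetical triangular blast pulse and derives a rectangular pulse with the same peak pressure and total impulse:

```python
import numpy as np

# Hypothetical triangular blast pulse: peak 2.0 MPa decaying linearly to zero
# over 5 ms (shape, peak and duration are illustrative assumptions).
t = np.linspace(0.0, 5.0e-3, 501)                 # time [s]
p = 2.0e6 * (1.0 - t / 5.0e-3)                    # pressure [Pa]

impulse = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal impulse [Pa*s]
p_peak = p.max()

# One simple matching rule: keep the peak pressure and choose the duration of
# the rectangular pulse so that it delivers the same total impulse.
t_eq = impulse / p_peak
print(f"impulse = {impulse:.1f} Pa*s, equivalent rectangular duration = {t_eq * 1e3:.2f} ms")
```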

Relevance:

10.00%

Publisher:

Abstract:

A model predictive controller (MPC) is proposed that is robustly stable for some classes of model uncertainty and for unknown disturbances. The case of open-loop stable systems is considered, where only the inputs and controlled outputs are measured. It is assumed that the controller will work in a scenario where target tracking is also required. Here, the nominal infinite-horizon MPC is extended to the output-feedback case. The method considers an extended cost function that can be made globally convergent for any finite input horizon considered for the uncertain system, and it is based on the explicit inclusion of cost-contracting constraints in the control problem. The controller handles the output-feedback case through a non-minimal state-space model built from past output measurements and past input increments. The application of the robust output-feedback MPC is illustrated through the simulation of a low-order multivariable system.
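
The abstract does not give the non-minimal realization explicitly; as an illustrative sketch (not the authors' construction), a state of this kind can be assembled from measured quantities alone, with the orders n_y and n_u as placeholders:

```latex
% Sketch of a non-minimal state built from past outputs and input increments
% (the orders n_y and n_u are placeholders, not taken from the paper):
x_{nm}(k) =
\begin{bmatrix}
  y(k)^{\mathsf T} & y(k-1)^{\mathsf T} & \cdots & y(k-n_y)^{\mathsf T} &
  \Delta u(k-1)^{\mathsf T} & \cdots & \Delta u(k-n_u)^{\mathsf T}
\end{bmatrix}^{\mathsf T},
\qquad \Delta u(k) = u(k) - u(k-1).
```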

Relevance:

10.00%

Publisher:

Abstract:

This work considers a semi-implicit system Delta, that is, a pair (S, y), where S is an explicit system described by a state representation ẋ(t) = f(t, x(t), u(t)), where x(t) ∈ R^n and u(t) ∈ R^m, which is subject to a set of algebraic constraints y(t) = h(t, x(t), u(t)) = 0, where y(t) ∈ R^l. An input candidate is a set of functions v = (v_1, ..., v_s), which may depend on time t, on x, and on u and its derivatives up to a finite order. The problem of finding a (local) proper state representation ż = g(t, z, v) with input v for the implicit system Delta is studied in this article. The main result shows necessary and sufficient conditions for the solution of this problem, under mild assumptions on the class of admissible state representations of Delta. These solvability conditions rely on an integrability test that is computed from the explicit system S. The approach of this article is the infinite-dimensional differential-geometric setting of Fliess, Lévine, Martin, and Rouchon (1999) ('A Lie-Bäcklund Approach to Equivalence and Flatness of Nonlinear Systems', IEEE Transactions on Automatic Control, 44(5), 922-937).
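
Restating the abstract's notation in display form, the pair (S, y) defining the semi-implicit system Delta and the representation being sought are:

```latex
\begin{aligned}
 &S:\; \dot{x}(t) = f\bigl(t, x(t), u(t)\bigr), \qquad x(t)\in\mathbb{R}^{n},\; u(t)\in\mathbb{R}^{m},\\
 &\phantom{S:\;} y(t) = h\bigl(t, x(t), u(t)\bigr) = 0, \qquad y(t)\in\mathbb{R}^{l},\\
 &\text{sought local proper representation: } \dot{z} = g(t, z, v), \qquad v = (v_{1},\dots,v_{s}).
\end{aligned}
```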

Relevance:

10.00%

Publisher:

Abstract:

We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available and therefore the values of the jump parameter are not accessible. It is a well-known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed-loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters has been considered in the literature. Finally, we illustrate the results with an example.
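
The mode-coupled Riccati equation for the jump-parameter case is not reproduced here; purely as a reminder of how a stationary linear filter is obtained from a Riccati equation in the familiar no-jump setting, the sketch below computes a stationary filter gain for a hypothetical single-mode system (all matrices are made up for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-mode system (no Markov jumps), used only to illustrate
# the Riccati-based construction of a stationary linear filter:
#   dx = A x dt + G dw,   dy = C x dt + H dv
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
Q = np.diag([0.05, 0.10])        # process-noise intensity G G'
R = np.array([[0.01]])           # measurement-noise intensity H H'

# The filtering ARE is the dual of the control ARE: pass (A', C') instead of (A, B).
P = solve_continuous_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(R)   # stationary filter gain

# Stationary filter dynamics: d x_hat = A x_hat dt + L (dy - C x_hat dt)
print("stationary gain L =\n", L)
```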

Relevance:

10.00%

Publisher:

Abstract:

In this article, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under three kinds of performance criteria related to the final value of the expectation and variance of the output. In the first problem the aim is to minimise the final variance of the output subject to a restriction on its final expectation; in the second, to maximise the final expectation of the output subject to a restriction on its final variance; and in the third, a performance criterion composed of a linear combination of the final variance and expectation of the output of the system is considered. We present explicit sufficient conditions for the existence of an optimal control strategy for these problems, generalising previous results in the literature. We conclude this article by presenting a numerical example of an asset liabilities management model for pension funds with regime switching.
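
Written out, the three criteria take the following form (gamma, sigma squared and xi are placeholder constants introduced here for illustration; y(T) denotes the output at the final time T):

```latex
\begin{aligned}
\text{(i)}\;\;  &\min_{u}\ \operatorname{Var}\!\bigl(y(T)\bigr) \quad\text{s.t.}\quad \mathbb{E}\bigl[y(T)\bigr] \ge \gamma,\\
\text{(ii)}\;\; &\max_{u}\ \mathbb{E}\bigl[y(T)\bigr] \quad\text{s.t.}\quad \operatorname{Var}\!\bigl(y(T)\bigr) \le \sigma^{2},\\
\text{(iii)}\;  &\min_{u}\ \operatorname{Var}\!\bigl(y(T)\bigr) - \xi\,\mathbb{E}\bigl[y(T)\bigr].
\end{aligned}
```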

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
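
The abstract does not name the distribution's functional form, so only the generic identities tying the reliability and failure rate functions to the density are recalled here:

```latex
% Standard identities linking the functions listed in the abstract
% (F: cdf, f: pdf, R: reliability, h: failure rate):
R(t) = 1 - F(t), \qquad
h(t) = \frac{f(t)}{R(t)}, \qquad
R(t) = \exp\!\Bigl(-\int_{0}^{t} h(s)\,\mathrm{d}s\Bigr).
```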

Relevance:

10.00%

Publisher:

Abstract:

Objectives - A highly adaptive aspect of human memory is the enhancement of explicit, consciously accessible memory by emotional stimuli. We studied the performance of Alzheimer's disease (AD) patients and elderly controls on a memory battery with emotional content, and we correlated these results with amygdala and hippocampus volumes. Methods - Twenty controls and 20 early AD patients were subjected to the International Affective Picture System (IAPS) and to magnetic resonance imaging-based volumetric measurements of the medial temporal lobe structures. Results - The results show that, excluding control group subjects with 5 or more years of schooling, both groups showed improvement with pleasant or unpleasant figures from the IAPS in an immediate free recall test. Likewise, in a delayed free recall test, both the controls and the AD group showed improvement for pleasant pictures when the education factor was not controlled. The AD group showed improvement in the immediate and delayed free recall tests proportional to the volumes of the medial temporal lobe structures, with no significant clinical correlation between affective valence and amygdala volume. Conclusion - AD patients can correctly identify emotions, at least at this early stage, but this does not improve their memory performance.

Relevance:

10.00%

Publisher:

Abstract:

For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, which is a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential and Weibull distributions, and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and the moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood. We obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
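
A commonly used way of writing the Kumaraswamy-G construction with a Weibull baseline is sketched below; the particular parameterization of the baseline is an assumption for illustration, not quoted from the paper:

```latex
\begin{aligned}
G(x) &= 1 - e^{-(\beta x)^{c}} \quad\text{(Weibull baseline, assumed parameterization)},\\
F(x) &= 1 - \bigl[1 - G(x)^{a}\bigr]^{b},\\
f(x) &= a\,b\,g(x)\,G(x)^{a-1}\bigl[1 - G(x)^{a}\bigr]^{b-1}, \qquad x>0,\; a,b,\beta,c>0,
\end{aligned}
```

where g denotes the Weibull density. Under this construction, setting b = 1 gives an exponentiated Weibull sub-model and a = b = 1 recovers the Weibull itself, consistent with the special cases listed in the abstract.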

Relevance:

10.00%

Publisher:

Abstract:

We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that the generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performances of the estimates from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
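
For concreteness, a small sketch of the generic beta-G construction with a modified-Weibull baseline follows; the baseline form, the function names and the parameter values are assumptions for illustration only and are not taken from the paper:

```python
import numpy as np
from scipy.special import betainc

def modified_weibull_cdf(x, alpha, gamma, lam):
    # Assumed baseline: G(x) = 1 - exp(-alpha * x**gamma * exp(lam * x))
    return 1.0 - np.exp(-alpha * x**gamma * np.exp(lam * x))

def beta_modified_weibull_cdf(x, a, b, alpha, gamma, lam):
    # Beta-G family: F(x) = I_{G(x)}(a, b), the regularized incomplete beta ratio.
    return betainc(a, b, modified_weibull_cdf(x, alpha, gamma, lam))

x = np.linspace(0.01, 3.0, 5)
print(beta_modified_weibull_cdf(x, a=2.0, b=1.5, alpha=1.0, gamma=1.2, lam=0.3))
```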

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the results of an experimental study on the hydraulic friction loss for small-diameter polyethylene pipes are reported. The experiment was carried out over a range of Reynolds numbers from 6000 to 72000, obtained by varying the discharge at a water temperature of 20 degrees C, with internal pipe diameters of 10.0 mm, 12.9 mm, 16.1 mm, 17.4 mm and 19.7 mm. According to the analysis results and experimental conditions, the friction factor f of the Darcy-Weisbach equation can be estimated with c = 0.300 and m = 0.25. The Blasius equation (c = 0.316 and m = 0.25) overestimates the friction loss, although this is non-restrictive for micro-irrigation system designs. The analysis shows that both the Blasius and the adjusted equation parameters allow accurate friction factor estimates, characterized by a low mean error (5.1%).
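
Reading c and m as the coefficients of a Blasius-type power law f = c Re^(-m), the sketch below compares the classical Blasius value (c = 0.316) with the adjusted one from the abstract (c = 0.300) in a Darcy-Weisbach head-loss calculation; the pipe and flow data are hypothetical, chosen inside the experimental ranges mentioned above:

```python
import math

def friction_factor(re, c, m=0.25):
    # Blasius-type power law for the Darcy-Weisbach friction factor.
    return c * re ** (-m)

def head_loss(q, d, length, c, nu=1.0e-6, g=9.81, m=0.25):
    """Darcy-Weisbach head loss h_f = f * (L/D) * V^2 / (2 g) for flow rate q [m^3/s]."""
    area = math.pi * d**2 / 4.0
    v = q / area                       # mean velocity [m/s]
    re = v * d / nu                    # Reynolds number (nu: water at ~20 C)
    f = friction_factor(re, c, m)
    return f * (length / d) * v**2 / (2.0 * g)

q, d, length = 2.0e-4, 0.0161, 50.0    # 0.2 L/s in a 16.1 mm pipe, 50 m long
print("Blasius  (c=0.316):", round(head_loss(q, d, length, 0.316), 3), "m")
print("Adjusted (c=0.300):", round(head_loss(q, d, length, 0.300), 3), "m")
```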

Relevance:

10.00%

Publisher:

Abstract:

This paper uses a fully operational inter-regional computable general equilibrium (CGE) model implemented for the Brazilian economy, based on previous work by Haddad and Hewings, in order to assess the likely economic effects of road transportation policy changes in Brazil. Among the features embedded in this framework, the modelling of external scale economies and transportation costs provides an innovative way of dealing explicitly with theoretical issues related to integrated regional systems. The model is calibrated for 109 regions. The explicit modelling of transportation costs built into the inter-regional CGE model, based on origin-destination flows and taking into account the spatial structure of the Brazilian economy, makes it possible to integrate the inter-regional CGE model with a geo-coded transportation network model, enhancing the potential of the framework for understanding the role of infrastructure in regional development. The transportation model used is the so-called Highway Development and Management model, developed by the World Bank and implemented using the TransCAD software. Further extensions of the current model specification for integrating other features of transport planning in a continental industrialising country like Brazil are discussed, with the goal of building a bridge between conventional transport planning practices and the innovative use of CGE models. To illustrate the analytical power of the integrated system, the authors present a set of simulations that evaluate the ex ante economic impacts of physical or qualitative changes in the Brazilian road network (for example, a highway improvement), in accordance with recent policy developments in Brazil. Rather than providing a critical evaluation of this debate, they intend to emphasise the likely structural impacts of such policies. They expect that the results will reinforce the need to better specify spatial interactions in inter-regional CGE models.

Relevance:

10.00%

Publisher:

Abstract:

This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make the inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO2 price behavior are similar in both versions (in the recursive version of the model we force the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO2 price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and allowing fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
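
The efficiency rule quoted in parentheses ("the CO2 price rises at the interest rate") corresponds to a Hotelling-type price path; with interest rate r and initial allowance price p_0 (symbols introduced here only for illustration):

```latex
p_t = p_0\,(1+r)^{t} \quad\text{(discrete periods)},
\qquad
p(t) = p_0\,e^{r t} \quad\text{(continuous time)}.
```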

Relevance:

10.00%

Publisher:

Abstract:

Independent brain circuits appear to underlie different forms of conditioned fear, depending on the type of conditioning used, such as a context or explicit cue paired with footshocks. Several clinical reports have associated damage to the medial temporal lobe (MTL) with retrograde amnesia. Although a number of studies have elucidated the neural circuits underlying conditioned fear, the involvement of MTL components in the aversive conditioning paradigm is still unclear. To address this issue, we assessed freezing responses and Fos protein expression in subregions of the rhinal cortex and ventral hippocampus of rats following exposure to a context, light or tone previously paired with footshock (Experiment 1). A comparable degree of freezing was observed in the three types of conditioned fear, but with distinct patterns of Fos distribution. The groups exposed to cued fear conditioning did not show changes in Fos expression, whereas the group subjected to contextual fear conditioning showed selective activation of the ectorhinal (Ect), perirhinal (Per), and entorhinal (Ent) cortices, with no changes in the ventral hippocampus. We then examined the effects of the benzodiazepine midazolam injected bilaterally into these three rhinal subregions in the expression of contextual fear conditioning (Experiment 2). Midazolam administration into the Ect, Per, and Ent reduced freezing responses. These findings suggest that contextual and explicit stimuli endowed with aversive properties through conditioning recruit distinct brain areas, and the rhinal cortex appears to be critical for storing context-, but not explicit cue-footshock, associations. (C) 2010 Elsevier B.V. All rights reserved.