Abstract:
After outlining some comparative features of poverty in India, this article critically reviews recent literature on the dynamics of poverty. On economic-efficiency grounds, it rejects the view that the chronically poor are more deserving of poverty assistance than the non-chronic poor. Household and community mechanisms for coping with poverty are discussed. The possibility is raised that where poverty has been persistent, rational methods for coping with it are likely to be well established, and less suffering may occur than in households and communities thrown temporarily into poverty. However, situations can also be envisaged in which such rational behaviours deepen the poverty trap and create unfavourable externalities for poverty alleviation. Conflict can arise between programmes to alleviate poverty in poor communities and the sustainability of these communities and their local cultures; the problems this poses are discussed. Furthermore, the impact of market extension on poor landholders is considered. In contrast to the prevailing view that increased market extension and liberalisation are favourable to poor farmers, it is argued that inescapable market transaction costs make it difficult for the poor to survive as landholders in a fluid and changing market system. The likelihood of poor landholders joining the landless poor rises, and if they migrate from the countryside to the city they face further adjustment hurdles. Consequently, poor landholders may be poorer after the extension of the market system, and only their offspring may reap benefits from market reforms.
Abstract:
Experimental results are reported for 204 members of the public who were asked their willingness to pay for the conservation of the mahogany glider Petaurus gracilis on three occasions: before any information about the glider and other wildlife species was provided to them; after such information was provided; and after participants had an opportunity to see live specimens of this endangered species. Variations in the mean willingness to pay are analysed. Concerns arise about whether information provision and experience reveal ‘true’ contingent valuations of public goods, and about the choice of the relevant contingent valuation measure.
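The analysis of variation in mean willingness to pay (WTP) across the three elicitation occasions lends itself to a repeated-measures comparison. A minimal sketch follows, assuming paired observations per respondent; the data, effect sizes, and the paired t-test are illustrative stand-ins, not the study's actual estimator.

```python
# Hedged sketch of a repeated-measures comparison of mean WTP across the three
# elicitation occasions described above. All data here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 204                                    # respondents, as in the study
wtp_before = rng.gamma(2.0, 10.0, n)       # hypothetical WTP before information
wtp_info = wtp_before * rng.normal(1.2, 0.2, n)   # hypothetical lift after information
wtp_seen = wtp_info * rng.normal(1.1, 0.2, n)     # hypothetical lift after viewing gliders

for label, w in [("before info", wtp_before),
                 ("after info", wtp_info),
                 ("after viewing", wtp_seen)]:
    print(f"mean WTP {label}: {w.mean():.2f} "
          f"(s.e. {w.std(ddof=1) / np.sqrt(n):.2f})")

# Paired test for a shift in mean WTP between successive elicitation occasions.
t, p = stats.ttest_rel(wtp_info, wtp_before)
print(f"after info vs. baseline: t = {t:.2f}, p = {p:.3g}")
```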
Abstract:
This study evaluated the stress levels in the core and veneer layers of zirconia crowns (an alternative core design vs. a standard core design) under mechanical and thermal simulation, and subjected the simulated designs to laboratory mouth-motion fatigue. The dimensions of a mandibular first molar were imported into computer-aided design (CAD) software and a tooth preparation was modeled. A crown was designed using the space between the original tooth and the prepared tooth. The alternative core presented an additional lingual shoulder that reduced the veneer bulk of the cusps. Finite element analyses evaluated the residual maximum principal stress fields in the core and veneer of both designs under loading and when cooled from 900 degrees C to 25 degrees C. Crowns were fabricated and mouth-motion fatigued, generating master Weibull curves and reliability data. Thermal modeling showed low residual stress fields throughout the bulk of the cusps for both groups. Mechanical simulation showed a shift in stress levels towards the core in the alternative design compared with the standard design. Significantly higher reliability was found for the alternative core. For the alternative configuration, the thermal and mechanical computer simulations showed stresses comparable to and higher than those of the standard configuration, respectively. This mechanical scenario probably led to the higher reliability of the alternative design under fatigue.
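The reliability figures quoted above come from Weibull analysis of fatigue data. A minimal sketch of that kind of calculation, with invented cycles-to-failure numbers and a hypothetical mission life, might look like this:

```python
# Hedged sketch: fit a two-parameter Weibull distribution to hypothetical
# cycles-to-failure data and read off reliability at a chosen mission life.
import numpy as np
from scipy.stats import weibull_min

failures = np.array([61e3, 84e3, 95e3, 110e3, 122e3, 140e3, 151e3, 178e3])

# Fit shape (beta) and scale (eta), with the location fixed at zero.
beta, loc, eta = weibull_min.fit(failures, floc=0)
print(f"Weibull shape = {beta:.2f}, scale = {eta:.3g} cycles")

mission = 50e3  # hypothetical mission life in load cycles
reliability = weibull_min.sf(mission, beta, loc=0, scale=eta)
print(f"reliability at {mission:.0f} cycles: {reliability:.3f}")
```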
Abstract:
The problem of estimating the norms of perturbations of a stable linear dynamic system under which the perturbed system remains stable is examined, in the situation where the perturbation has a fixed structure.
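For the unstructured case, the largest such norm bound is the complex stability radius, min over real w of sigma_min(jwI - A); restricting the perturbation to a fixed structure can only enlarge the admissible radius. A minimal numerical sketch on a toy matrix, assuming the spectral norm and a frequency-scan approximation:

```python
# Hedged sketch: for a Hurwitz-stable A, the unstructured complex stability
# radius -- the smallest spectral-norm perturbation E making A + E unstable --
# equals min over real w of sigma_min(jwI - A). Computed here by a frequency
# scan on a toy matrix; a structured radius would be at least this large.
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])          # toy stable matrix (eigenvalues -1, -3)
assert np.all(np.linalg.eigvals(A).real < 0)

omegas = np.linspace(-50.0, 50.0, 20001)
I = np.eye(A.shape[0])
radius = min(np.linalg.svd(1j * w * I - A, compute_uv=False)[-1] for w in omegas)
print(f"unstructured complex stability radius ~ {radius:.4f}")

# Any E with spectral norm below this radius leaves A + E stable.
```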
Abstract:
The electromechanical transfer characteristics of adhesively bonded piezoelectric sensors are investigated. Using dynamic piezoelectricity theory, Mindlin plate theory for flexural wave propagation, and a multiple integral transform method, the frequency-response functions of piezoelectric sensors with and without backing materials are developed and the pressure-voltage transduction functions of the sensors are calculated. The corresponding simulation results show that the sensitivity of the sensors depends not only on the sensors' inherent features, such as piezoelectric properties and geometry, but also on local characteristics of the tested structures and on the admittance and impedance of the attached electrical circuit. It is also demonstrated that the simplified rigid-mass sensor model can successfully describe the sensitivity of the sensor at low frequencies, but that the dynamic piezoelectric continuum model must be used at higher frequencies, especially around the resonance frequency of the coupled sensor-structure vibration system.
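The simplified rigid-mass model referred to above behaves like a base-excited single-degree-of-freedom oscillator, whose response is flat well below resonance and peaks near it. A minimal sketch with illustrative parameters (not taken from the paper):

```python
# Hedged sketch of a rigid-mass sensor model: a seismic mass on a stiffness
# and damper, whose relative displacement (hence charge output) follows the
# classic base-excited oscillator transfer function. Parameters are invented.
import numpy as np

m, k, c = 1e-3, 1e5, 0.5            # mass [kg], stiffness [N/m], damping [N s/m]
wn = np.sqrt(k / m)                  # natural frequency [rad/s]
zeta = c / (2.0 * np.sqrt(k * m))    # damping ratio

# Relative displacement per unit base acceleration:
#   H(w) = 1 / (wn^2 - w^2 + 2j*zeta*wn*w)
H = lambda w: 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

# Flat response well below resonance, sharp peak near wn -- consistent with
# the abstract's point that the rigid-mass model works at low frequencies only.
for w in (0.01 * wn, 0.1 * wn, wn):
    print(f"w/wn = {w/wn:4.2f}: |H| relative to static = {abs(H(w)) * wn**2:.2f}")
```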
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are computed from a pre-calculated table, the step size cannot be controlled by the usual integration-formula-based methods. A step-size control scheme suited to the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable-time-step method automatically chooses a suitable step size for each particle at each step according to the conditions, as sketched below. Simulations using a fixed time step are compared with those using the variable time step. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
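A minimal sketch of the step-doubling control described above, using plain explicit Euler on a one-dimensional particle in place of the table-driven update; the tolerance and growth factors are illustrative choices:

```python
# Hedged sketch: advance one full step and, separately, two half steps; the
# difference estimates the local error and drives the step-size choice.

def euler_step(x, v, acc, dt):
    """One explicit-Euler update of position and velocity."""
    return x + v * dt, v + acc(x) * dt

def adaptive_step(x, v, acc, dt, tol=1e-6):
    """Halve dt until the one-big-step vs. two-half-step difference is below
    tol; return the accepted state, the step taken, and the next step size."""
    while True:
        x_big, v_big = euler_step(x, v, acc, dt)
        x_h, v_h = euler_step(x, v, acc, dt / 2)
        x_two, v_two = euler_step(x_h, v_h, acc, dt / 2)
        err = max(abs(x_big - x_two), abs(v_big - v_two))
        if err <= tol:
            dt_next = 2 * dt if err < tol / 4 else dt  # grow when comfortably accurate
            return x_two, v_two, dt, dt_next
        dt *= 0.5                                      # reject; retry with smaller step

acc = lambda x: -100.0 * x       # linear spring as a stand-in for contact forces
x, v, dt, t = 1.0, 0.0, 1e-2, 0.0
while t < 0.1:
    x, v, used, dt = adaptive_step(x, v, acc, dt)
    t += used
print(f"t = {t:.4f}: x = {x:.5f}, v = {v:.5f}")
```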
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult: although benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors; secondly, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals; finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
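Once residuals exist, the isolation step amounts to asking which class's feature matrix best spans the residual. A minimal sketch of that projection test, with invented feature matrices and error-class names:

```python
# Hedged sketch of the isolation step: score each candidate error class by how
# much of the residual lies outside the subspace spanned by that class's
# feature matrix; the best-fitting subspace names the likely coding error.
import numpy as np

def subspace_misfit(residual, F):
    """Norm of the residual component orthogonal to span(F)."""
    coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
    return np.linalg.norm(residual - F @ coeffs)

rng = np.random.default_rng(1)
feature_matrices = {                  # one matrix per error class (hypothetical)
    "wrong growth-rate sign": rng.normal(size=(6, 2)),
    "missing decay term": rng.normal(size=(6, 2)),
    "swapped stoichiometry": rng.normal(size=(6, 2)),
}

# Fake a residual generated by the second error class, plus a little noise.
true_F = feature_matrices["missing decay term"]
residual = true_F @ np.array([0.7, -1.3]) + 1e-3 * rng.normal(size=6)

scores = {name: subspace_misfit(residual, F) for name, F in feature_matrices.items()}
print(min(scores, key=scores.get))    # -> "missing decay term"
```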
Abstract:
In this paper a methodology for integrated multivariate monitoring and control of biological wastewater treatment plants during extreme events is presented. To monitor the process, on-line dynamic principal component analysis (PCA) is performed on the process data to extract the principal components that represent the underlying mechanisms of the process. Fuzzy c-means (FCM) clustering is then used to classify the operational state. Performing the clustering on the PCA scores reduces the computational burden and increases robustness through noise attenuation. The class-membership information from FCM is used to derive adequate control set points for the local control loops. The methodology is illustrated by a simulation study of a biological wastewater treatment plant on which disturbances of various types are imposed. The results show that the methodology can be used to determine and co-ordinate control actions in order to shift the control objective and improve the effluent quality.
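A minimal sketch of the PCA-then-FCM chain on synthetic data follows; the FCM update equations are the textbook ones, and the cluster count, disturbance model, and all numbers are illustrative assumptions:

```python
# Hedged sketch: project process data onto a few principal components, then
# soft-classify the operating state with fuzzy c-means; the memberships would
# feed the set-point logic. Synthetic data, not the authors' implementation.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means: returns cluster centers and the (n x c)
    membership matrix, rows summing to one."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, (200, 10))    # synthetic routine operation, 10 variables
storm = rng.normal(4.0, 1.0, (50, 10))      # synthetic extreme event
X = np.vstack([normal, storm])

# PCA via SVD of the centered data; monitor only the first two score variables.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

centers, memberships = fuzzy_cmeans(scores, c=2)
print("state membership of the final sample:", memberships[-1].round(2))
```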
Abstract:
The overlapping expression profiles of MEF2 and the class-II histone deacetylase HDAC7 led us to investigate the functional interaction and relationship between these regulatory proteins. HDAC7 expression inhibits the activity of MEF2 (-A, -C, and -D), whereas MyoD and myogenin activities are not affected. Glutathione S-transferase pulldown and immunoprecipitation demonstrate that the repression mechanism involves direct interactions between MEF2 proteins and HDAC7 and is associated with the ability of MEF2 to interact with the N-terminal 121 amino acids of HDAC7, which encode repression domain 1. The MADS domain of MEF2 mediates the direct interaction of MEF2 with HDAC7. MEF2 inhibition by HDAC7 depends on the N-terminal repression domain and, surprisingly, does not involve the C-terminal deacetylase domain. HDAC7 interacts with CtBP and other class-I and -II HDACs, suggesting that silencing of MEF2 activity involves corepressor recruitment. Furthermore, we show that induction of muscle differentiation by serum withdrawal leads to the translocation of HDAC7 from the nucleus into the cytoplasm. This work demonstrates that HDAC7 regulates the function of MEF2 proteins and suggests that this class-II HDAC regulates this important transcriptional (and pathophysiological) target in heart and muscle tissue. The nucleocytoplasmic trafficking of HDAC7 and other class-II HDACs during myogenesis provides an ideal mechanism for the regulation of HDAC targets during mammalian development and differentiation.
The acquisition of movement skills: Practice enhances the dynamic stability of bimanual coordination
Abstract:
During bimanual movements, two relatively stable inherent patterns of coordination (in-phase and anti-phase) are displayed (e.g., Kelso, Am. J. Physiol. 246 (1984) R1000). Recent research has shown that new patterns of coordination can be learned; for example, following practice a 90 degrees out-of-phase pattern can emerge as an additional, relatively stable state (e.g., Zanone & Kelso, J. Exp. Psychol.: Human Perception and Performance 18 (1992) 403). On this basis, it has been concluded that practice leads to the evolution and stabilisation of the newly learned pattern and that this process of learning changes the entire attractor layout of the dynamic system. A general feature of such research has been to observe changes in the targeted pattern's stability characteristics during training at a single movement frequency. The present study was designed to examine how practice affects the maintenance of a coordinated pattern as the movement frequency is scaled. Eleven volunteers were asked to perform a bimanual forearm pronation-supination task. Time to transition onset was used as an index of the subjects' ability to maintain two symmetrically opposite coordinated patterns (target task: 90 degrees out-of-phase; transfer task: 270 degrees out-of-phase). The ability to maintain the target and transfer tasks was examined again after five practice sessions, each consisting of 15 trials of only the 90 degrees out-of-phase pattern. Concurrent performance feedback (a Lissajous figure, illustrated below) was available to the participants during each practice trial. A comparison of the time to transition onset showed that the target task was more stable after practice (p = 0.025). These changes were still observed one week (p = 0.05) and two months (p = 0.075) after the practice period. Changes in the stability of the transfer task were not observed until two months after practice (p = 0.025). Notably, following practice, transitions from the 90 degrees pattern were generally to the anti-phase (180 degrees) pattern, whereas transitions from the 270 degrees pattern were to the 90 degrees pattern. These results suggest that practice does improve the stability of a 90 degrees pattern, and that such improvements transfer to the performance of the unpractised 270 degrees pattern. In addition, the anti-phase pattern remained more stable than the practised 90 degrees pattern throughout. (C) 2001 Elsevier Science B.V. All rights reserved.
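For reference, the Lissajous feedback works because plotting one limb's angle against the other's produces a shape that encodes relative phase directly. A minimal illustrative sketch, assuming sinusoidal limb trajectories:

```python
# Hedged sketch of Lissajous-style feedback: a diagonal line at 0 degrees, a
# circle at 90/270 degrees (traversed in opposite directions), and a
# counter-diagonal at 180 degrees. Purely illustrative.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
fig, axes = plt.subplots(1, 4, figsize=(10, 2.5))
for ax, phase in zip(axes, [0, 90, 180, 270]):
    left = np.sin(t)                            # left forearm angle
    right = np.sin(t - np.deg2rad(phase))       # right forearm, phase-shifted
    ax.plot(left, right)
    ax.set_title(f"{phase} degrees")
    ax.set_xticks([])
    ax.set_yticks([])
plt.tight_layout()
plt.show()
```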
Abstract:
Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalized intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalized intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high precision environment which makes hitherto unachievable low-temperature modelling a reality.
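A minimal sketch of spectral propagation on a toy three-state master equation follows; it assumes detailed balance so the rate matrix can be symmetrised with the equilibrium populations, whereas the paper's energy-grained matrix and high-precision environment are far richer:

```python
# Hedged sketch: a master equation dp/dt = K p obeying detailed balance is
# symmetrised with the equilibrium distribution, diagonalised once, and then
# propagated to any time by scaling the eigenmodes. Toy rates, invented here.
import numpy as np

p_eq = np.array([0.5, 0.3, 0.2])              # toy equilibrium populations
K = np.zeros((3, 3))
for (i, j), k in {(0, 1): 1.0, (1, 2): 0.4}.items():
    K[i, j] = k                               # forward rate j -> i
    K[j, i] = k * p_eq[j] / p_eq[i]           # reverse rate from detailed balance
K -= np.diag(K.sum(axis=0))                   # columns sum to zero

D_sqrt = np.sqrt(p_eq)
S = K * np.outer(1.0 / D_sqrt, D_sqrt)        # symmetrised matrix, S == S.T
lam, V = np.linalg.eigh(S)                    # one diagonalisation serves all times

def propagate(p0, t):
    """p(t) = D^(1/2) V exp(lam*t) V^T D^(-1/2) p0."""
    return D_sqrt * (V @ (np.exp(lam * t) * (V.T @ (p0 / D_sqrt))))

p0 = np.array([1.0, 0.0, 0.0])                # all population in state 0
for t in (0.1, 1.0, 10.0):
    print(t, propagate(p0, t).round(4))       # relaxes toward p_eq
```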
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution for automating and speeding up large QU-GENE (QUantitative GENEtics) simulation experiments designed to examine the properties of genetic models, particularly those involving factorial combinations of treatment levels. QCC automates the distribution of the components of the simulation experiments among networked single-processor computers to achieve this speedup, along the lines sketched below.
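A minimal sketch of the workload pattern QCC automates, substituting local processes for networked machines; run_experiment and all treatment levels are hypothetical placeholders, not part of QU-GENE:

```python
# Hedged sketch: enumerate factorial combinations of treatment levels and farm
# the runs out to a pool of workers. QCC distributes across networked
# single-processor machines; this stand-in uses local processes.
import itertools
from multiprocessing import Pool

def run_experiment(params):
    heritability, pop_size, cycles = params
    # Placeholder for a genetic-model simulation; returns a fake response.
    return params, heritability * pop_size / cycles

if __name__ == "__main__":
    # Factorial combinations of treatment levels, as in the experiments above.
    grid = itertools.product([0.1, 0.3, 0.5],      # heritability levels (hypothetical)
                             [100, 200, 500],      # population sizes (hypothetical)
                             [5, 10])              # selection cycles (hypothetical)
    with Pool(processes=4) as pool:
        for params, response in pool.map(run_experiment, grid):
            print(params, round(response, 2))
```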