Abstract:
The Dynamic Data eXchange (DDX) is our third generation platform for building distributed robot controllers. DDX allows a coalition of programs to share data at run-time through an efficient shared memory mechanism managed by a store. Further, stores on multiple machines can be linked by means of a global catalog, and data is moved between the stores on an as-needed basis by multicasting. Heterogeneous computer systems are handled. We describe the architecture of DDX and the standard clients we have developed that let us rapidly build complex control systems with minimal coding.
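The store mechanism can be illustrated with a toy sketch. This is not the DDX API: the `Store` class, its method names and the variable name are invented for illustration, and a real store would use OS shared memory and multicast rather than an in-process dict.

```python
import time

class Store:
    """Toy in-process stand-in for a DDX-style store: clients register
    named variables and exchange the latest timestamped value."""
    def __init__(self):
        self._vars = {}

    def register(self, name, initial=None):
        # create the variable slot if no client has registered it yet
        self._vars.setdefault(name, {"value": initial, "stamp": 0.0})

    def write(self, name, value):
        # a writing client publishes a new value with a timestamp
        self._vars[name] = {"value": value, "stamp": time.time()}

    def read(self, name):
        # a reading client fetches the most recent value and its timestamp
        entry = self._vars[name]
        return entry["value"], entry["stamp"]

store = Store()
store.register("laser_scan")
store.write("laser_scan", [1.2, 1.3, 1.1])
value, stamp = store.read("laser_scan")
```

In the real system, the store would also propagate `laser_scan` to stores on other machines whose clients have registered an interest in it.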
Abstract:
Introduction - Planning for healthy cities faces significant challenges due to a lack of effective information, systems and a framework to organise that information. Such a framework is critical for making accessible and informed decisions when planning healthy cities. These challenges have been magnified by the rise of the healthy cities movement, which has brought more frequent calls for localised, collaborative and knowledge-based decisions. Some studies have suggested that a ‘knowledge-based’ approach to planning will enhance the accuracy and quality of decision-making by improving the availability of data and information for health service planners, and may also lead to increased collaboration between stakeholders and the community. A knowledge-based or evidence-based approach to decision-making can provide ‘out-of-the-box’ thinking through the use of technology during decision-making processes. Minimal research has been conducted in this area to date, especially in terms of evaluating the impact of adopting a knowledge-based approach on stakeholders, policy-makers and decision-makers within health planning initiatives. Purpose – The purpose of the paper is to present an integrated method developed to facilitate a knowledge-based decision-making process to assist health planning. Methodology – Specifically, the paper describes the participatory process adopted to develop an online Geographic Information System (GIS)-based Decision Support System (DSS) for health planners. Value – Conceptually, it is an application of the Healthy Cities and Knowledge Cities approaches, linked together. Specifically, it is a unique settings-based initiative designed to plan for and improve the health capacity of the Logan-Beaudesert area, Australia. This settings-based initiative is named the Logan-Beaudesert Health Coalition (LBHC).
Practical implications - The paper outlines the application of a knowledge-based approach to the development of a healthy city. It also focuses on the need for widespread use of this approach as a tool for enhancing community-based health coalition decision-making processes.
Abstract:
The Intention to Notice: the collection, the tour and ordinary landscapes is concerned with how ordinary landscapes and places are enabled and conserved through making itineraries that are framed around the ephemera encountered by chance, and the practices that make possible the endurance of these material traces. Through observing and then examining the material and temporal aspects of a variety of sites/places, the museum and the expanded garden are identified as spaces where the expression of contemporary political, ecological and social attitudes to cultural landscapes can be realised through a curatorial approach to design, to effect minimal intervention. Three notions are proposed to encourage investigation into contemporary cultural landscapes: to traverse slowly, to allow space for speculations framed by the topographies and artefacts encountered; to [re]make/[re]write cultural landscapes as discursive landscapes that provoke the intention to notice; and to reveal and conserve the fabric of everyday places. A series of walking, recording and making projects undertaken across a variety of cultural landscapes in remote South Australia, Melbourne, Sydney, London, Los Angeles, Chandigarh, Padova and Istanbul investigates how communities of practice are facilitated through the invitation to notice and intervene in ordinary landscapes, informed by the theory and practice of postproduction and the reticent auteur. This community of practice approach draws upon chance encounters and seeks to encourage creative investigation into places. The Intention to Notice is a practice of facilitating that also leads to recording traces and events, large and small, material and immaterial, and that encourages both conjecture and archive. Most importantly, there is an open-ended invitation to commit and exchange through design interaction.
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate, by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
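A minimal sketch of the counting idea follows, with the per-blob size model reduced to a single invented area coefficient. The actual algorithm uses richer local features and tracking through splitting and merging events; the coefficient and numbers here are purely illustrative.

```python
def estimate_group_size(blob_area, persons_per_pixel=0.002):
    """Map one foreground blob to a group size; every blob holds at
    least one person. The linear coefficient is an invented stand-in
    for the paper's local-feature regression."""
    return max(1, round(blob_area * persons_per_pixel))

def crowd_count(blob_areas):
    # total crowd estimate = sum of the per-group estimates
    return sum(estimate_group_size(a) for a in blob_areas)

total = crowd_count([500, 1500, 3000])  # 1 + 3 + 6 people across the three blobs
```

Tracking would then smooth `total` over time by reconciling each blob's history across frames.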
Abstract:
Over the past decade, there has been growth in the delivery of vocational rehabilitation services globally, as countries seek to control disability-related expenditure, yet there has been minimal research outside the United States on competencies required to work in this area. This study reports on research conducted in Australia to determine current job function and knowledge areas in terms of their importance and frequency of use in the provision of vocational rehabilitation. A survey comprising items from the Rehabilitation Skills Inventory-Amended and International Survey of Disability Management was completed by 149 rehabilitation counselors and items submitted to factor analysis. T-tests and analyses of variance were used to determine differences between scores of importance and frequency and differences in scores based on work setting and professional training. Six factors were identified as important and frequently used: (i) vocational counseling, (ii) professional practice, (iii) personal counseling, (iv) rehabilitation case management, (v) workplace disability case management, and (vi) workplace intervention and program management. Vocational counseling, professional practice and personal counseling were significantly more important and performed more frequently by respondents in vocational rehabilitation settings than those in compensation settings. These same three factors were rated significantly higher in importance and frequency by those with rehabilitation counselor training when compared with those with other training. In conclusion, although ‘traditional’ knowledge and skill areas such as vocational counseling, professional practice, and personal counseling were identified as central to vocational rehabilitation practice in Australian rehabilitation agencies, mean ratings suggest a growing emphasis on knowledge and skills associated with disability management practice.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001, Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001, Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m).
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will however require further investigation to improve the effectiveness of the suggested strategy.
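The Δ GPS position/time speed method evaluated in the first study can be sketched as follows, assuming successive (time, latitude, longitude) fixes and great-circle distance between them. The function names and sample coordinates are illustrative; the receiver's Doppler-shift speed, by contrast, is reported directly and needs no such differencing.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84-style fixes."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_sec, lat_deg, lon_deg); returns per-interval
    speeds in m/s by dividing displacement by elapsed time."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out
```

A 0.0001° step in latitude over one second, for example, corresponds to roughly 11 m/s; on a curved path this chord-based estimate slightly under-reads true path speed, consistent with the underestimation reported above.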
Abstract:
Generating accurate population-specific public health messages regarding sun protection requires knowledge about seasonal variation in sun exposure in different environments. To address this issue for a subtropical area of Australia, we used polysulphone badges to measure UVR for the township of Nambour (26° latitude) and personal UVR exposure among Nambour residents who were taking part in a skin cancer prevention trial. Badges were worn by participants for two winter and two summer days. The ambient UVR was approximately three times as high in summer as in winter. However, participants received more than twice the proportion of available UVR in winter as in summer (6.5% vs 2.7%, P < 0.05), resulting in an average ratio of summer to winter personal UVR exposure of 1.35. The average absolute difference in daily dose between summer and winter was only one-seventh of a minimal erythemal dose. Extrapolating from our data, we estimate that ca. 42% of the total exposure received across the six months of winter (June–August) and summer (December–February) is received during the three winter months. Our data show that in Queensland a substantial proportion of people's annual UVR dose is obtained in winter, underscoring the need for dissemination of sun protection messages throughout the year in subtropical and tropical climates.
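The reported ratio can be sanity-checked with the abstract's own figures. Since the ambient factor of "approximately three" is rounded, the product comes out near, rather than exactly at, the stated 1.35:

```python
ambient_summer_to_winter = 3.0  # ambient UVR roughly 3x higher in summer
fraction_winter = 0.065         # participants received 6.5% of available UVR in winter
fraction_summer = 0.027         # ...but only 2.7% in summer

# personal dose is proportional to (ambient UVR) x (fraction received)
personal_summer_to_winter = ambient_summer_to_winter * fraction_summer / fraction_winter
# comes out near 1.25, in line with the reported summer:winter ratio of 1.35
```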
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolutionary configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over time trajectories of spectral parameters using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, hence it allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process.
Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour in a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of statistical properties and spectral sensitivity of spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
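The core interpolation step of temporal decomposition can be sketched with a toy linear version. The thesis uses a purpose-designed basis function and event-approximation machinery; plain linear interpolation between event points and the function name here are simplifications for illustration only.

```python
def reconstruct_track(event_times, event_targets, n_frames):
    """Toy TD-style reconstruction: rebuild one spectral-parameter
    trajectory from its values at steady event points. Only the event
    (time, value) pairs would be stored/transmitted; the full frame-rate
    track is recovered by interpolation at the decoder."""
    out = []
    for t in range(n_frames):
        if t <= event_times[0]:
            out.append(event_targets[0])
            continue
        if t >= event_times[-1]:
            out.append(event_targets[-1])
            continue
        # locate the two surrounding events and interpolate between them
        for (t0, y0), (t1, y1) in zip(zip(event_times, event_targets),
                                      zip(event_times[1:], event_targets[1:])):
            if t0 <= t <= t1:
                out.append(y0 + (y1 - y0) * (t - t0) / (t1 - t0))
                break
    return out

track = reconstruct_track([0, 4], [0.0, 1.0], 5)
```

Because only the events are coded, the spectral data rate depends on the event rate, not on the analysis frame rate, which is the independence property claimed above.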
Abstract:
Science and technology are promoted as major contributors to national development. Consequently, improved science education has been placed high on the agenda of tasks to be tackled in many developing countries, although progress has often been limited. In fact there have been claims that the enormous investment in teaching science in developing countries has basically failed, with many reports of how efforts to teach science in developing countries often result in rote learning of strange concepts, mere copying of factual information, and a general lack of understanding on the part of local students. These generalisations can be applied to science education in Fiji. Muralidhar (1989) has described a situation in which upper primary and middle school students in Fiji were given little opportunity to engage in practical work; an extremely didactic form of teacher exposition was the predominant method of instruction during science lessons. He concluded that, amongst other things, teachers' limited understanding, particularly of aspects of physical science, resulted in their rigid adherence to the textbook or the omission of certain activities or topics. Although many of the problems associated with science education in developing countries have been documented, few attempts have been made to understand how non-Western students might better learn science. This study addresses the issue of Fiji pre-service primary teachers' understanding of a key aspect of physical science, namely, matter and how it changes, and their responses to learning experiences based on a constructivist epistemology. Initial interviews were used to probe pre-service primary teachers' understanding of this domain of science. The data were analysed to identify students' alternative and scientific conceptions. These conceptions were then used to construct Concept Profile Inventories (CPIs) which allowed for qualitative comparison of the concepts of the two ethnic groups who took part in the study.
This phase of the study also provided some insight into the interaction of scientific information and traditional beliefs in non-Western societies. A quantitative comparison of the groups' conceptions was conducted using a Science Concept Survey instrument developed from the CPIs. These data provided considerable insight into the aspects of matter where the pre-service teachers' understanding was particularly weak. On the basis of these preliminary findings, a six-week teaching program aimed at improving the students' understanding of matter was implemented in an experimental design with a group of students. The intervention involved elements of pedagogy such as the use of analogies and concept maps which were novel to most of those who took part. At the conclusion of the teaching programme, the learning outcomes of the experimental group were compared with those of a control group taught in a more traditional manner. These outcomes were assessed quantitatively by means of pre- and post-tests and a delayed post-test, and qualitatively using an interview protocol. The students' views on the various teaching strategies used with the experimental group were also sought. The findings indicate that in the domain of matter little variation exists in the alternative conceptions held by Fijian and Indian students, suggesting that cultural influences may be minimal in their construction. Furthermore, the teaching strategies implemented with the experimental group of students, although largely derived from Western research, showed considerable promise in the context of Fiji, where they appeared to be effective in improving the understanding of students from different cultural backgrounds. These outcomes may be of significance to those involved in teacher education and curriculum development in other developing countries.
Abstract:
Professional coaching is a rapidly expanding field with interdisciplinary roots and broad application. However, despite abundant prescriptive literature, research into the process of coaching, and especially life coaching, is minimal. Similarly, although learning is inherently recognised in the process of coaching, and coaching is increasingly being recognised as a means of enhancing teaching and learning, the process of learning in coaching is little understood, and learning theory makes up only a small part of the evidence-based coaching literature. In this grounded theory study of life coaches and their clients, the process of learning in life coaching across a range of coaching models is examined and explained. The findings demonstrate how learning in life coaching emerged as a process of discovering, applying and integrating self-knowledge, which culminated in the development of self. This process occurred through eight key coaching processes shared between coaches and clients and combined a multitude of learning theories.
Abstract:
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Omega be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length <= m:
• is identifiable in the limit from positive data with an ordinal mind change bound of Omega^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Omega × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly covering programs, and Krishna Rao's depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
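The two bounds can be stated compactly. The identification-criteria symbols below follow common inductive-inference usage and are an assumption of this sketch, not necessarily the paper's own notation:

```latex
\forall m > 0:\quad
\mathcal{L}_m \in \mathbf{TxtEx}_{\Omega^{m}}
\qquad\text{and}\qquad
\mathcal{L}_m \in \mathbf{InfEx}_{\Omega \times m},
```

where $\mathcal{L}_m$ is the class of languages defined by formal systems of length at most $m$ in a uniformly decidable framework with effective bounded finite thickness, $\mathbf{TxtEx}_{\alpha}$ denotes identification in the limit from positive data with ordinal mind change bound $\alpha$, and $\mathbf{InfEx}_{\alpha}$ is the analogue for complete (positive and negative) data.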
Abstract:
Assessment of the condition of connectors in the overhead electricity network has traditionally relied on the heat dissipation or voltage drop from the existing load current (50 Hz) as a measurable parameter to differentiate between satisfactory and failing connectors. This research has developed a technique which does not rely on the 50 Hz current, and a prototype connector tester has been built. In this system a high frequency signal is injected into the section of line under test, and the resistive voltage drop and the current at the test frequency are measured to yield the resistance in micro-ohms. From the value of resistance, a decision as to whether a connector is satisfactory or approaching failure can be made. Determining the resistive voltage drop in the presence of a large induced voltage was achieved by the innovative approach of using a representative sample of the magnetic flux producing the induced voltage as the phase angle reference for the signal processing, rather than the phase angle of the current, which can be affected by the presence of nearby metal objects. Laboratory evaluation of the connector tester has validated the measurement technique. The magnitude of the load current (50 Hz) has minimal effect on the measurement accuracy. Addition of a suitable battery-based power supply system and isolated communications (probably radio), together with refinement of the printed circuit board design and software, are the remaining development steps to a production instrument.
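The synchronous-detection idea behind the tester can be sketched as follows. The sample rate, test frequency, signal amplitudes and injected current are invented for the simulation, and in the real instrument the reference phase comes from the flux sample rather than being assumed zero:

```python
import math

def inphase_amplitude(samples, ref_phase, freq, fs):
    """Lock-in style detection: correlate the measured voltage with a
    cosine at the test frequency and reference phase. Quadrature
    (inductive) components average out over whole cycles, leaving the
    amplitude of the in-phase (resistive) component."""
    n = len(samples)
    acc = sum(v * math.cos(2 * math.pi * freq * k / fs + ref_phase)
              for k, v in enumerate(samples))
    return 2 * acc / n

fs, freq = 10000, 1000.0  # 10 kHz sampling, 1 kHz injected test signal
# simulated line voltage: 50 uV resistive drop in phase with the reference,
# swamped by a 5 mV induced (90-degree out of phase) component
samples = [50e-6 * math.cos(2 * math.pi * freq * k / fs)
           + 5e-3 * math.sin(2 * math.pi * freq * k / fs)
           for k in range(10000)]

v_resistive = inphase_amplitude(samples, 0.0, freq, fs)
resistance_uohm = v_resistive / 2.0 * 1e6  # R = V/I with an assumed 2 A test current
```

Despite the induced component being 100 times larger, the detector recovers the 50 µV resistive drop, giving 25 µΩ for the assumed 2 A injection.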
Abstract:
Cell sheets can be used to produce neo-tissue with mature extracellular matrix. However, extensive contraction of cell sheets remains a problem. We devised a technique to overcome this problem and applied it to tissue engineer a dermal construct. Human dermal fibroblasts were cultured with poly(lactic-co-glycolic acid)-collagen meshes and collagen-hyaluronic acid foams. The resulting cell sheets were folded over the scaffolds to form dermal constructs. Human keratinocytes were cultured on these dermal constructs to assess their ability to support bilayered skin regeneration. Dermal constructs produced with collagen-hyaluronic acid foams showed minimal contraction, while those with poly(lactic-co-glycolic acid)-collagen meshes curled up. Cell proliferation and metabolic activity profiles were characterized with PicoGreen and AlamarBlue assays, respectively. Fluorescent labeling showed high cell viability and F-actin expression within the constructs. Collagen deposition was detected by immunocytochemistry and electron microscopy. Transforming Growth Factor-alpha and beta1, Keratinocyte Growth Factor and Vascular Endothelial Growth Factor were produced at various stages of culture, as measured by RT-PCR and ELISA. These results indicated that assimilating cell sheets with mechanically stable scaffolds could produce viable dermal-like constructs that do not contract. Repeated enzymatic treatment cycles for cell expansion are unnecessary, while the issue of poor cell seeding efficiency in scaffolds is eliminated.
Abstract:
Nitrous oxide (N2O) is primarily produced by the microbially-mediated nitrification and denitrification processes in soils. N2O production is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), as well as land management practices. It is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components as well as performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of N2O emissions models published in the literature. The three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.
Abstract:
With the increase in the level of global warming, renewable energy based distributed generators (DGs) will play an increasingly dominant role in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain considerable momentum in the near future. A microgrid consists of clusters of loads and distributed generators that operate as a single controllable system. The interconnection of DGs to the utility/grid through power electronic converters has raised concern about safe operation and protection of the equipment. Many innovative control techniques have been used for enhancing the stability of the microgrid as well as for proper load sharing. The most common method is the use of droop characteristics for decentralized load sharing. Parallel converters have been controlled to deliver desired real power (and reactive power) to the system. Local signals are used as feedback to control the converters, since in a real system the distance between the converters may make inter-communication impractical. Real and reactive power sharing can be achieved by controlling two independent quantities: the frequency and the fundamental voltage magnitude. In this thesis, an angle droop controller is proposed to share power amongst converter-interfaced DGs in a microgrid. As the angle of the output voltage can be changed instantaneously in a voltage source converter (VSC), controlling the angle to control the real power is always beneficial for quick attainment of steady state. Thus, in converter-based DGs, load sharing can be performed by drooping the converter output voltage magnitude and its angle instead of the frequency. The angle control results in much smaller frequency variation compared with frequency droop. An enhanced frequency droop controller is proposed for better dynamic response and smooth transition between grid-connected and islanded modes of operation.
A modular controller structure with a modified control loop is proposed for better load sharing between the parallel connected converters in a distributed generation system. Moreover, a method for smooth transition between grid-connected and islanded modes is proposed. Power quality enhanced operation of a microgrid in the presence of unbalanced and non-linear loads is also addressed, in which the DGs act as compensators. The compensator can perform load balancing, harmonic compensation and reactive power control while supplying real power to the grid. A frequency and voltage isolation technique between the microgrid and the utility is proposed by using a back-to-back converter. As the utility and microgrid are totally isolated, voltage or frequency fluctuations on the utility side do not affect the microgrid loads and vice versa. Another advantage of this scheme is that a bidirectional regulated power flow can be achieved by the back-to-back converter structure. For accurate load sharing, the droop gains have to be high, which has the potential of making the system unstable. Therefore the choice of droop gains is often a tradeoff between power sharing and stability. To improve this situation, a supplementary droop controller is proposed. A small signal model of the system is developed, based on which the parameters of the supplementary controller are designed. Two methods are proposed for load sharing in an autonomous microgrid in a rural network with high R/X ratio lines. The first method proposes power sharing without any communication between the DGs. The feedback quantities and the gain matrices are transformed with a transformation matrix based on the line R/X ratio. The second method involves minimal communication among the DGs. The converter output voltage angle reference is modified based on the active and reactive power flow in the line connected at the point of common coupling (PCC).
It is shown that a more economical and proper power sharing solution is possible with web-based communication of the power flow quantities. All the proposed methods are verified through PSCAD simulations. The converters are modeled with IGBT switches and anti-parallel diodes with associated snubber circuits. All the rotating machines are modeled in detail, including their dynamics.
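The angle droop law at the heart of the thesis can be sketched as follows. All quantities are per-unit, and the gains and set-points are illustrative defaults, not the thesis's tuned values:

```python
def angle_droop(p_meas, q_meas,
                p_rated=1.0, q_rated=0.5,
                delta_rated=0.0, v_rated=1.0,
                m_p=0.01, n_q=0.05):
    """Angle droop for a converter-interfaced DG: the output-voltage
    angle is drooped against real power and the magnitude against
    reactive power, instead of drooping frequency. Higher gains give
    tighter sharing but reduce the stability margin (hence the thesis's
    supplementary controller)."""
    delta = delta_rated - m_p * (p_meas - p_rated)  # angle falls as P rises
    v = v_rated - n_q * (q_meas - q_rated)          # magnitude falls as Q rises
    return delta, v

# a converter exporting 0.5 pu above its rated P pulls its angle back
delta, v = angle_droop(p_meas=1.5, q_meas=0.7)
```

Because a VSC can step its output-voltage angle instantaneously, this loop reaches steady state without the sustained frequency offset a conventional frequency droop would introduce.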