989 results for Pattern oriented modelling


Relevance:

30.00%

Publisher:

Abstract:

Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in the rolling mills of steel factories, because of their high efficiency, high overload capacity and good performance in the field-weakening range. This, however, requires an extremely good torque control system: a fast torque response and high torque accuracy are basic requirements for such a drive. For high-power, high-dynamic-performance drives, the well-known principle of field-oriented vector control has hitherto been used exclusively, but it is no longer the only way to implement such a drive; a new control method, Direct Torque Control (DTC), has also emerged. The performance of a high-quality torque control such as DTC in dynamically demanding industrial applications is based mainly on an accurate estimate of the space vectors of the various flux linkages. Industrial motor control systems are real-time applications with restricted calculation capacity, so the control system requires a simple, quickly computable and reasonably accurate motor model. In this work, a method to handle these problems in a Direct Torque Controlled (DTC) salient-pole synchronous motor drive is proposed. A motor model is presented which combines an induction-law-based "voltage model" with a "current model" based on the motor inductance parameters. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage, calculated by integrating the stator voltages, is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor that merely prevents the motor stator flux linkage estimate from drifting over longer time intervals. At very low speeds the role of the current model is emphasised, but the voltage model always remains the main model; at higher speeds the function of the current model correction is to stabilise the control system. The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident: it depends on the accuracy of the saturated values of the inductances. Parameter measurement of the motor model, in which the supply inverter is used as a measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method is proposed for the inductance models used to represent the saturation effects. The performance of the electrically excited synchronous motor supplied by the DTC inverter is demonstrated with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast, a new operating point is reached without oscillation, and operation is stable throughout the speed range. Modelling the saturation of the magnetising inductance is essential, and cross-saturation has to be considered as well; the effect of cross-saturation is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are measured under similar magnetic operating conditions, so no inconsistency between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
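A minimal Python sketch of the combined flux-linkage estimator described above: the fast voltage model integrates the induction law, and a small correction toward the current-model flux keeps the open integrator from drifting. The correction gain, the constant inductances and all numeric values are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

R_S = 0.02        # stator resistance (placeholder, per-unit)
DT = 1.0 / 40e3   # fast sampling period; the abstract mentions e.g. 40 kHz
K_CORR = 0.01     # per-step weight of the current-model correction (assumed)

def current_model_flux(i_d, i_q, i_f, L_d=1.1, L_q=0.7, L_md=0.9):
    """Inductance-parameter-based 'current model' in rotor (d, q) coordinates.
    Constant inductances are used here for brevity; the thesis derives
    saturation-dependent inductance models, including cross-saturation."""
    psi_d = L_d * i_d + L_md * i_f   # d-axis flux including field-winding current
    psi_q = L_q * i_q
    return np.array([psi_d, psi_q])

def estimator_step(psi, u_dq, i_dq, i_f):
    """One fast sample: integrate the induction law (voltage model), then
    nudge the estimate toward the current-model flux so the integrator
    cannot drift over long time intervals."""
    psi = psi + (u_dq - R_S * i_dq) * DT          # voltage model (main model)
    psi_cm = current_model_flux(i_dq[0], i_dq[1], i_f)
    return psi + K_CORR * (psi_cm - psi)          # supervisory correction
```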

Relevance:

30.00%

Publisher:

Abstract:

The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfil their safety function. The phenomena associated with such a high direct contact condensation (DCC) rate are very challenging to analyse, whether by experiments or by numerical simulation. In this thesis, suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were those typically used for separated flows in channels; their applicability to rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was simulated first. The POOLEX experiment STB-31 was operated near the boundary between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase-change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed; the bubble size distributions and the chugging frequencies were calculated with this procedure. With the statistical data on bubble sizes and the temporal data on bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved convergence and decreased the required cell count. The compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behaviour. The initial level of the steam/water interface was an important factor in determining the initiation of chugging: if the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. A comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
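The comparative FFT analysis described above could be sketched roughly as follows; the sampling rate and signal names are assumptions for illustration, not POOLEX values.

```python
import numpy as np

FS = 250.0  # assumed frame rate of the high-speed video [Hz]

def dominant_frequencies(signal, fs=FS, n_peaks=3):
    """Return the strongest spectral peaks of a zero-mean time series,
    e.g. bubble size from pattern recognition or pool-bottom pressure."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    order = np.argsort(spectrum[1:])[::-1] + 1  # skip the zero-frequency bin
    return freqs[order[:n_peaks]]

# Comparing the peaks of the two series helps distinguish the eigenmodes of
# chugging, bubbling and pool-structure oscillations:
# f_bubbles = dominant_frequencies(bubble_size_series)
# f_pressure = dominant_frequencies(bottom_pressure_series)
```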

Relevance:

30.00%

Publisher:

Abstract:

We analyzed trends in the scientific output of the University Hospital, Federal University of Rio de Janeiro. A total of 1420 publications were classified according to pattern and visibility. Most were non-research publications with domestic visibility. Over time, there was a tendency to shift from non-research (or education-oriented) publications with domestic visibility to research publications with international visibility. This change may reflect new academic attitudes within the institution concerning the objectives of the hospital and the establishment of scientific research activities. The emphasis of this University Hospital had been on the training of new physicians; more recently, however, the production of new knowledge has been incorporated as a new objective. The analysis of the scientific production of the most productive sectors of the hospital also showed that most produce non-research studies with domestic visibility, while a few sectors carry out research studies published in journals of international status. The dilemma of quality versus quantity and of education-oriented versus research-oriented publication seems, however, to persist within the specialized sectors. The methodology described here for analyzing the scientific production of a university hospital can be used as a tool to better understand the evolution of medical research in Brazil and also to help formulate public policies and new strategies to include research among the major objectives of University Hospitals.
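The pattern-by-visibility classification could be tallied along the lines of the following sketch; the category labels, field names and records are illustrative assumptions, not the study's data.

```python
from collections import Counter

# Toy records standing in for the 1420 classified publications.
publications = [
    {"year": 1990, "pattern": "non-research", "visibility": "domestic"},
    {"year": 1998, "pattern": "research", "visibility": "international"},
    # ... one record per publication ...
]

def trend_by_period(pubs, cutoff_year):
    """Tally pattern/visibility classes before and after a cutoff year to
    expose a shift from education-oriented domestic output toward
    international research publications."""
    early = Counter((p["pattern"], p["visibility"])
                    for p in pubs if p["year"] < cutoff_year)
    late = Counter((p["pattern"], p["visibility"])
                   for p in pubs if p["year"] >= cutoff_year)
    return early, late
```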

Relevance:

30.00%

Publisher:

Abstract:

The objective of the present study was to evaluate breathing pattern, thoracoabdominal motion and muscular activity during three breathing exercises: diaphragmatic breathing (DB), flow-oriented incentive spirometry (Triflo II) and volume-oriented incentive spirometry (Voldyne). Seventeen healthy subjects (12 females, 5 males) aged 23 ± 5 years (mean ± SD) were studied. Calibrated respiratory inductive plethysmography was used to measure the following variables at rest (baseline) and during the breathing exercises: tidal volume (Vt), respiratory frequency (f), rib cage contribution to Vt (RC/Vt), inspiratory duty cycle (Ti/Ttot), and phase angle (PhAng). Sternocleidomastoid muscle activity was assessed by surface electromyography. Statistical analysis was performed by ANOVA and Tukey tests or Friedman and Wilcoxon tests, with the level of significance set at P < 0.05. Comparisons between baseline and breathing exercise periods showed a significant increase of Vt and PhAng during all exercises, a significant decrease of f during DB and Voldyne, a significant increase of Ti/Ttot during Voldyne, and no significant difference in RC/Vt. Comparisons among exercises revealed higher f and sternocleidomastoid activity during Triflo II (P < 0.05) than during DB and Voldyne, without significant differences in Vt, Ti/Ttot, PhAng, or RC/Vt. The exercises thus changed the breathing pattern and increased PhAng, a measure of thoracoabdominal asynchrony, compared to baseline. The only difference between DB and Voldyne was that Voldyne significantly increased Ti/Ttot relative to baseline. Triflo II was associated with higher f values and greater electromyographic activity of the sternocleidomastoid. In conclusion, DB and Voldyne showed similar results, while Triflo II showed disadvantages compared with the other breathing exercises.
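A simplified sketch of the group comparison: an independent-groups one-way ANOVA over placeholder data. The study itself used ANOVA with Tukey tests or Friedman and Wilcoxon tests on 17 subjects; the values below are random stand-ins, not measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
vt_db = rng.normal(0.9, 0.2, 17)       # diaphragmatic breathing (placeholder)
vt_triflo = rng.normal(0.8, 0.2, 17)   # flow-oriented incentive spirometry
vt_voldyne = rng.normal(1.0, 0.2, 17)  # volume-oriented incentive spirometry

# One-way ANOVA across the three exercises, significance at P < 0.05.
f_stat, p_value = stats.f_oneway(vt_db, vt_triflo, vt_voldyne)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")

# If P < 0.05, a post hoc test (Tukey in the study) locates the differences;
# recent SciPy versions provide scipy.stats.tukey_hsd for this:
# result = stats.tukey_hsd(vt_db, vt_triflo, vt_voldyne)
```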

Relevance:

30.00%

Publisher:

Abstract:

Modern societies depend more and more on computer systems, and there is thus increasing pressure on development teams to produce high-quality software. Many companies use quality models, suites of programs that analyse and evaluate the quality of other programs, but building quality models is difficult because several questions remain unanswered in the literature. We studied quality-modelling practices at a large company and identified three dimensions where additional research is desirable: support for the subjectivity of quality, techniques for tracking quality as software evolves, and the composition of quality across different levels of abstraction. Concerning subjectivity, we proposed the use of Bayesian models because they can handle ambiguous data. We applied our models to the problem of design-defect detection. In a study of two open-source systems, we found that our approach outperforms the rule-based techniques described in the state of the art. To support software evolution, we treated the scores produced by a quality model as signals that can be analysed with data-mining techniques to identify patterns of quality evolution; we studied how design defects appear in and disappear from software. Software is typically designed as a hierarchy of components, but quality models do not take this organisation into account. In the last part of the dissertation, we present a two-level quality model. Such models have three parts: a model at the component level, a model that evaluates the importance of each component, and a model that evaluates the quality of a composite by combining the quality of its components. The approach was tested on predicting change-prone classes from the quality of their methods, and we found that our two-level models identify change-prone classes better. Finally, we applied our two-level models to evaluating the navigability of web sites from the quality of their pages; our models were able to distinguish sites of very high quality from randomly chosen sites. Throughout the dissertation, we present not only theoretical problems and their solutions but also experiments demonstrating the advantages and limitations of our solutions. Our results indicate that the state of the art can be improved along all three dimensions. In particular, our work on quality composition and importance modelling is the first to target this problem. We believe that our two-level models are an interesting starting point for more in-depth research.
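A minimal sketch of the two-level composition idea described above, with hand-picked scores and weights; the dissertation's actual component, importance and composition models are learned (Bayesian), not hard-coded like this.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    quality: float     # component-level quality score in [0, 1]
    importance: float  # output of the importance model (e.g. size, centrality)

def composite_quality(components: List[Component]) -> float:
    """Combine component qualities into a composite score, weighting each
    component by its modelled importance."""
    total = sum(c.importance for c in components)
    if total == 0:
        return 0.0
    return sum(c.quality * c.importance for c in components) / total

# Example: predicting a class's quality from its methods' scores.
methods = [Component(0.8, 10.0), Component(0.4, 2.0), Component(0.9, 5.0)]
print(composite_quality(methods))  # importance-weighted average
```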

Relevance:

30.00%

Publisher:

Abstract:

The thesis gives a general introduction to the topic, including India, and deals in detail with the spatial and temporal variation of the surface meteorological parameters. The general pattern of the winds over the region in different seasons, and the generation and movement of the thermally and dynamically driven local wind systems of the Western Ghats region, have been studied. The modification of the prevailing winds over the region by the Palghat Gap, and its effect on the mouth regions of the gap, is analysed in great depth. The thesis presents the climatic elements of the mountain region, such as energy budgets, rainfall, evaporation and condensation, and the variation in the heat fluxes over the region. The impact of orography is studied through a different, hypothetical approach, which gives more insight into the mountain's control on the distribution of meteorological parameters over the study region and helps to quantify the mountain's impact on the weather and climate of the region. A detailed study of the hydro-meteorological aspects of the region's main river basins should also be included in climatic studies for a complete understanding of the weather and climate over the region.

Relevance:

30.00%

Publisher:

Abstract:

The current study is aimed at the development of a theoretical simulation tool based on the Discrete Element Method (DEM) to interpret the granular dynamics of the solid bed in the cross-section of a horizontal rotating cylinder at the microscopic level, and subsequently to apply this model to establish the transition behaviour, mixing and segregation. The simulation of granular motion developed in this work is based on solving Newton's equation of motion for each particle in the granular bed subjected to collisional forces, external forces and boundary forces. At every instant of time the forces are tracked, and the positions, velocities and accelerations of each particle are computed. The software code for this simulation is written in Visual Fortran 90. After checking the validity of the code with special tests, it is used to investigate the transition behaviour of granular solids motion in the cross-section of a rotating cylinder for various rotational speeds and fill fractions. This work is hence directed towards a theoretical investigation, based on the Discrete Element Method (DEM), of the motion of granular solids in the radial direction of a horizontal cylinder, to elucidate the relationship between the operating parameters of the rotating cylinder geometry and the physical properties of the granular solid. The operating parameters of the rotating cylinder include the various rotational velocities of the cylinder and the volumetric fill. The physical properties of the granular solids include particle sizes, densities, stiffness coefficients and coefficients of friction. Further, the work highlights the fundamental basis for the important phenomena of the system, namely: (i) the different modes of solids motion observed in a transverse cross-section of the rotating cylinder for various rotational speeds, (ii) the radial mixing of the granular solid in terms of active layer depth, (iii) the rate coefficient of mixing as well as the transition behaviour in terms of the bed turnover time and rotational speed, and (iv) the segregation mechanisms resulting from differences in the size and density of particles. The transition behaviour, involving six different modes of motion of the granular solid bed, is quantified in terms of the Froude number, and the results obtained are validated against experimental and theoretical results reported in the literature. The transition from slumping to rolling mode is quantified using the bed turnover time, and a linear relationship is established between the bed turnover time and the inverse of the rotational speed of the cylinder, as predicted by Davidson et al. [2000]. The effects of rotational speed, fill fraction and coefficient of friction on the dynamic angle of repose are presented and discussed. The variation of active layer depth with respect to fill fraction and rotational speed has been investigated. The results obtained through simulation are compared with the experimental results reported by Van Puyvelde et al. [2000] and Ding et al. [2002]. The theoretical model has been further extended to study the mixing and segregation in the transverse direction for different particle sizes and size ratios. The effects of fill fraction and rotational speed on the transverse mixing behaviour are presented in the form of a mixing index and mixing kinetics curves. The segregation pattern obtained by the simulation of the granular solid bed with respect to the rotational speed of the cylinder is presented in both graphical and numerical forms.
The segregation behaviour of the granular solid bed with respect to particle size, density and volume fraction of particle sizes has been investigated. Several important macro-parameters characterising segregation, such as the mixing index, percolation index and segregation index, have been derived from the simulation tool based on first principles developed in this work.
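A minimal soft-sphere DEM sketch of the particle update described above: a linear spring-dashpot collision force plus gravity, integrated explicitly, with the Froude number used to characterise transition behaviour. The stiffness, damping and all other values are illustrative assumptions (the thesis's own code is written in Visual Fortran 90).

```python
import numpy as np

K_N = 1e4                      # normal stiffness coefficient (assumed)
ETA = 5.0                      # dashpot damping coefficient (assumed)
G = np.array([0.0, -9.81])     # gravity in the transverse cross-section

def contact_force(xi, xj, vi, vj, radius):
    """Linear spring-dashpot normal force on particle i due to particle j."""
    rij = xi - xj
    dist = np.linalg.norm(rij)
    overlap = 2 * radius - dist
    if overlap <= 0 or dist == 0:
        return np.zeros(2)
    n = rij / dist                 # unit normal pointing from j towards i
    vn = np.dot(vi - vj, n)        # normal relative velocity
    return (K_N * overlap - ETA * vn) * n

def step(x, v, m, radius, dt):
    """One explicit integration of Newton's equations for all particles
    (naive O(N^2) pair search; real DEM codes use neighbour lists)."""
    f = m[:, None] * G
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            fc = contact_force(x[i], x[j], v[i], v[j], radius)
            f[i] += fc
            f[j] -= fc
    v = v + (f / m[:, None]) * dt
    return x + v * dt, v

def froude(omega, cylinder_radius):
    """Froude number Fr = w^2 R / g, used to quantify the mode transitions."""
    return omega ** 2 * cylinder_radius / 9.81
```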

Relevance:

30.00%

Publisher:

Abstract:

Department of Marine Geology and Geophysics, Cochin University of Science and Technology

Relevance:

30.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals; it can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect machine-code sequence is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features in developing embedded systems.
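A toy sketch of the two analyses described above, on a symbolic instruction stream: sequence rules expressed as forbidden adjacent pairs (a simplified stand-in for the propositional-logic rules checked along each execution path), and redundant bank-select detection via active-bank state tracking. The mnemonics and rules are illustrative, PIC-like assumptions, not the dissertation's rule set.

```python
def check_sequence_rules(instructions, forbidden_pairs):
    """Flag illegitimate/out-of-place instruction sequences, encoded here as
    forbidden adjacent opcode pairs."""
    violations = []
    for i in range(len(instructions) - 1):
        pair = (instructions[i][0], instructions[i + 1][0])
        if pair in forbidden_pairs:
            violations.append((i, pair))
    return violations

def redundant_bank_switches(instructions):
    """Track the active memory-bank state along a straight-line path; a
    bank-select instruction that re-selects the current bank is redundant
    and can be eliminated."""
    bank, redundant = None, []
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == bank:
                redundant.append(i)
            bank = arg
    return redundant

program = [("BANKSEL", 0), ("MOVWF", "PORTA"), ("BANKSEL", 0), ("MOVWF", "X")]
print(redundant_bank_switches(program))                         # -> [2]
print(check_sequence_rules(program, {("BANKSEL", "BANKSEL")}))  # -> []
```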

Relevance:

30.00%

Publisher:

Abstract:

In connection with the (revived) demand for considering applications in the teaching of mathematics, various schemata or lists of criteria have been developed since the end of the sixties, which set up requirements about closeness to the real world or about the type of mathematics being used, and which have made it possible to analyze the available applications in their light. After having stated the problem (in section 1), we present (in section 2) a sketch of some of the best known of these and of some earlier schemata, although we are not aiming for a complete picture. Then (in section 3) we distinguish among different dimensions in the analysis of applications. With this as a basis, we develop (in section 4) our own suggestion for categorizing types of applications and conceptions for application-oriented mathematics instruction. Then (in section 5) we illustrate our schemata by some examples of performed evaluations. Finally (in section 6), we present some preliminary first results of the analysis of teaching conceptions.

Relevance:

30.00%

Publisher:

Abstract:

Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
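A minimal sketch of the friction-parameter map described above: fixed Manning's n for mapped urban objects and a vegetation-height-dependent value for vegetated cells (including short vegetation under ~1 m). The class codes and n values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

BUILDING, ROAD, VEGETATION = 1, 2, 3  # assumed class codes from map fusion

def friction_map(object_class, veg_height):
    """Return a Manning's n grid from a per-cell object classification and
    LiDAR-derived vegetation height."""
    n = np.full(object_class.shape, 0.035)   # default floodplain value
    n[object_class == ROAD] = 0.020          # smooth paved surface
    n[object_class == BUILDING] = np.nan     # buildings handled by the mesh,
                                             # not by a friction value here
    veg = object_class == VEGETATION
    # Friction grows with vegetation height, capped for tall features.
    n[veg] = 0.03 + 0.05 * np.minimum(veg_height[veg], 2.0)
    return n
```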

Relevance:

30.00%

Publisher:

Abstract:

In all biological processes, protein molecules and other small molecules interact to function, forming transient macromolecular complexes. This interaction of two or more molecules can be described as a docking event. Docking is an important phase in structure-based drug design strategies, as it can be used as a method to simulate protein-ligand interactions. Various docking programs exist that allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could, in an immersive environment, visualize the molecules participating in the docking process, manipulate their structures and manually dock them before submitting the new conformations to an automated docking process; this would help stimulate the design/docking process and could greatly reduce docking time and resources. To achieve this, we propose a new virtual modelling/docking program that merges the advantages of virtual modelling programs with the efficiency of the algorithms in existing docking programs.

Relevance:

30.00%

Publisher:

Abstract:

The nature and magnitude of climatic variability during the period of middle Pliocene warmth (ca 3.29–2.97 Ma) is poorly understood. We present a suite of palaeoclimate modelling experiments incorporating an advanced atmospheric general circulation model (GCM), coupled to a Q-flux ocean model, for 3.29, 3.12 and 2.97 Ma BP. Astronomical solutions for the periods in question were derived from the Berger and Loutre BL2 astronomical solution. Boundary conditions, excluding sea surface temperatures (SSTs), which were predicted by the slab-ocean model, were provided from the USGS PRISM2 2°×2° digital data set. The model results indicate that little annual variation (0.5°C) in SSTs, relative to a 'control' experiment, occurred during the middle Pliocene in response to the altered orbital configurations. Annual surface air temperatures also displayed little variation. Seasonally, surface air temperatures displayed a trend of cooler temperatures during December, January and February, and warmer temperatures during June, July and August. This pattern is consistent with altered seasonality resulting from the prescribed orbital configurations. Precipitation changes follow the seasonal trend observed for surface air temperature. Compared to present day, surface wind strength and wind stress over the North Atlantic, North Pacific and Southern Ocean remained greater in each of the Pliocene experiments. This suggests that wind-driven gyral circulation may have been consistently greater during the middle Pliocene. The trend of climatic variability predicted by the GCM for the middle Pliocene accords with geological data. However, it is unclear whether the model correctly simulates the magnitude of the variation. This uncertainty derives from: (a) the relative insensitivity of the GCM to perturbation of the imposed boundary conditions, (b) a lack of detailed time-series data concerning changes to terrestrial ice cover and greenhouse gas concentrations for the middle Pliocene, and (c) difficulties in representing the effects of 'climatic history' in snap-shot GCM experiments.

Relevance:

30.00%

Publisher:

Abstract:

Previous work has established the value of goal-oriented approaches to requirements engineering. Achieving clarity and agreement about stakeholders’ goals and assumptions is critical for building successful software systems and managing their subsequent evolution. In general, this decision-making process requires stakeholders to understand the implications of decisions outside the domains of their own expertise. Hence it is important to support goal negotiation and decision making with description languages that are both precise and expressive, yet easy to grasp. This paper presents work in progress to develop a pattern language for describing goal refinement graphs. The language has a simple graphical notation, which is supported by a prototype editor tool, and a symbolic notation based on modal logic.
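A minimal sketch of a goal refinement graph: goals refined into subgoals combined by AND/OR, with a recursive satisfaction check. The structure and evaluation rule are illustrative assumptions, not the paper's pattern language or its modal-logic notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    refinement: str = "AND"                # how subgoals combine: AND / OR
    subgoals: List["Goal"] = field(default_factory=list)
    achieved: bool = False                 # status of a leaf goal

def satisfied(goal: Goal) -> bool:
    """A leaf is satisfied if achieved; an AND node needs all subgoals,
    an OR node needs at least one."""
    if not goal.subgoals:
        return goal.achieved
    results = [satisfied(g) for g in goal.subgoals]
    return all(results) if goal.refinement == "AND" else any(results)

root = Goal("Maintain data privacy", "AND", [
    Goal("Encrypt data at rest", achieved=True),
    Goal("Control access", "OR", [
        Goal("Role-based access", achieved=True),
        Goal("Mandatory access control"),
    ]),
])
print(satisfied(root))  # True: AND of a satisfied leaf and a satisfied OR
```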

Relevance:

30.00%

Publisher:

Abstract:

Biosecurity is a great challenge to policy-makers globally. Biosecurity policies aim either to prevent invasions before they occur or to eradicate and/or effectively manage the invasive species and diseases once an invasion has occurred. Such policies have traditionally been directed towards professional producers in natural-resource-based sectors, including agriculture. Given the wide scope of issues threatened by invasive species and diseases, it is important to account for the several types of stakeholders involved. We investigate the problem of an invasive insect pest feeding on an agricultural crop with heterogeneous producers: profit-oriented professional farmers and utility-oriented hobby farmers. We start from an ecological-economic model conceptually similar to the one developed by Eiswerth and Johnson [Eiswerth, M.E. and Johnson, W.S., 2002. Managing nonindigenous invasive species: insights from dynamic analysis. Environmental and Resource Economics 23, 319-342] and extend it in three ways. First, we make explicit the relationship between the invaded-state carrying capacity and farmers' planting decisions. Second, we add another producer type into the framework and hence account for the existence of both professional and hobby farmers. Third, we provide a theoretical contribution by discussing two alternative types of equilibria. We also apply the model to an empirical case to extract a number of stylised facts and, in particular, to assess: (a) under which circumstances the invasion is likely to be uncontrollable; and (b) how extending control policies to hobby farmers could affect both types of producers.
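A minimal sketch of the model structure described in the abstract: pest dynamics whose carrying capacity is made explicit in the planting decisions of both producer types, with control reducing the pest stock. The functional forms and parameter values are illustrative assumptions, not the paper's model.

```python
R = 0.5          # intrinsic pest growth rate (assumed)
K_PER_HA = 100.0 # pest carrying capacity per hectare of host crop (assumed)

def pest_step(pest, area_professional, area_hobby, control_effort, dt=1.0):
    """One time step of logistic pest dynamics. The carrying capacity K
    depends on how much host crop professional and hobby farmers plant,
    as in the first extension described above."""
    K = K_PER_HA * (area_professional + area_hobby)
    if K <= 0:
        return 0.0
    growth = R * pest * (1 - pest / K)
    removal = control_effort * pest
    return max(pest + (growth - removal) * dt, 0.0)

# Extending control policies to hobby farmers raises total control_effort;
# comparing trajectories with and without their participation indicates
# under which circumstances the invasion remains controllable.
```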