894 results for Model-Based Design


Relevance: 90.00%

Publisher:

Abstract:

Among unidentified gamma-ray sources in the galactic plane, there are some that present significant variability and have been proposed to be high-mass microquasars. To deepen the study of the possible association between variable low galactic latitude gamma-ray sources and microquasars, we have applied a leptonic jet model based on the microquasar scenario that reproduces the gamma-ray spectrum of three unidentified gamma-ray sources, 3EG J1735-1500, 3EG J1828+0142 and GRO J1411-64, and is consistent with the observational constraints at lower energies. We conclude that if these sources were generated by microquasars, the particle acceleration processes could not be as efficient as in other objects of this type that present harder gamma-ray spectra. Moreover, the dominant mechanism of high-energy emission should be synchrotron self-Compton (SSC) scattering, and the radio jets may only be observed at low frequencies. For each particular case, further predictions of jet physical conditions and variability generation mechanisms have been made in the context of the model. Although there might be other candidates able to explain the emission coming from these sources, microquasars cannot be excluded as counterparts. Observations performed by the next generation of gamma-ray instruments, like GLAST, are required to test the proposed model.
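
The abstract does not reproduce the model's equations, but the SSC mechanism it invokes follows textbook scalings: electrons of Lorentz factor γe first radiate synchrotron photons and then upscatter them by a further factor of roughly γe² (Thomson regime). As a reminder of these standard relations (Gaussian units; this is not the paper's specific jet model):

```latex
% Standard synchrotron / SSC characteristic photon energies (Thomson regime);
% textbook scalings, not the specific jet model of the paper.
\epsilon_{\rm syn} \simeq \frac{3}{2}\,\gamma_e^{2}\,\frac{\hbar e B}{m_e c},
\qquad
\epsilon_{\rm SSC} \sim \gamma_e^{2}\,\epsilon_{\rm syn}
```

In this picture, a softer gamma-ray spectrum corresponds to a lower maximum γe, which is what the abstract means by less efficient particle acceleration.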

Relevance: 90.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it came into use in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major risks remains the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation because of their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in the evolution of CT, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was carried out in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculo-skeletal examinations. We focused, in particular, on image noise and spatial resolution behaviour when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: standard metrics remain important for assessing unit compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
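
To make the model-observer idea concrete, the sketch below implements a minimal channelized Hotelling observer (CHO), one member of the anthropomorphic family discussed above, on purely synthetic images. The radial band channels, the Gaussian signal and the white noise are illustrative assumptions, not the thesis data or its exact observer; practical CT studies typically use Gabor or dense difference-of-Gaussian channels on reconstructed slices.

```python
# Minimal channelized Hotelling observer (CHO) sketch on synthetic images.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                          # image side length (pixels)

# A small bank of radial frequency-band channels (binary masks, illustrative).
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
bands = [(1, 4), (4, 8), (8, 16), (16, 32)]
U = np.stack([((r >= lo) & (r < hi)).astype(float).ravel()
              for lo, hi in bands], axis=1)     # (N*N, n_channels)

# Low-contrast detection task: faint Gaussian signal in white noise.
signal = 0.5 * np.exp(-(x**2 + y**2) / (2 * 3.0**2)).ravel()
noise = rng.normal(0.0, 1.0, (200, N * N))
g_absent = noise @ U                            # channel outputs, signal absent
g_present = (noise + signal) @ U                # channel outputs, signal present

# Hotelling template in channel space: w = S^-1 (mean difference).
S = 0.5 * (np.cov(g_absent.T) + np.cov(g_present.T))
w = np.linalg.solve(S, g_present.mean(0) - g_absent.mean(0))

# Detectability index d' from the scalar template responses.
t_a, t_p = g_absent @ w, g_present @ w
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```

Repeating the computation on images reconstructed at several dose levels, and comparing d' against human-observer scores, is the kind of task-based assessment the thesis refers to.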

Relevance: 90.00%

Publisher:

Abstract:

From an analysis of a learning model based on the theory of information processing, four hypotheses were developed for improving the design of laboratory courses. Three of these hypotheses concerned specific procedures to minimise the load on students' working memories (or working spaces), and the fourth was concerned with the value of mini-projects in enhancing meaningful learning of the knowledge and skills underpinning the set experiments. A three-year study of a first-year undergraduate chemistry laboratory course at a Scottish university was carried out to test these four hypotheses. This paper reports the results of the study relevant to the three hypotheses about the burden on students' working spaces. It was predicted from the learning model that the load on students' working space should be reduced by appropriate changes to the written instructions and the laboratory organisation and by the introduction of prelab work and prelab training in laboratory techniques. It was concluded from the research conducted over the three-year period that all these hypothesised changes were effective both in reducing the load on students' working spaces and in improving their attitudes to the laboratory course.

Relevance: 90.00%

Publisher:

Abstract:

In modern-day organizations there is an increasing number of IT devices such as computers, mobile phones and printers. These devices can be located and maintained by using specialized IT management applications. Costs related to a single device accumulate from various sources and are normally categorized as direct costs, like hardware costs, and indirect costs, such as labor costs. These costs can be saved in a configuration management database and presented to users using web-based development tools such as ASP.NET. The overall cost of an IT device during its lifecycle can be ten times higher than the actual purchase price of the product, and the ability to define and reduce these costs can save organizations a noticeable amount of money. This Master's Thesis introduces the research field of IT management and defines a custom framework model based on Information Technology Infrastructure Library (ITIL) best practices, which is designed to be implemented as part of an existing IT management application for defining and presenting IT costs.
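
As a rough sketch of the cost model (not the thesis framework itself), the snippet below aggregates direct and indirect costs over a device's lifecycle. Every field name and rate here is invented for illustration; a real implementation would read them from the configuration management database and present the result through the web front end.

```python
# Hypothetical lifecycle (total cost of ownership) calculation for one device:
# direct costs (hardware, licenses) plus indirect costs (support labor).
from dataclasses import dataclass

HOURLY_LABOR_RATE = 55.0            # EUR/h, assumed

@dataclass
class Device:
    name: str
    purchase_price: float           # direct cost (EUR)
    license_per_year: float         # direct cost (EUR/year)
    support_hours_per_year: float   # drives the indirect labor cost
    lifetime_years: int

def lifecycle_cost(d: Device) -> float:
    direct = d.purchase_price + d.license_per_year * d.lifetime_years
    indirect = d.support_hours_per_year * HOURLY_LABOR_RATE * d.lifetime_years
    return direct + indirect

laptop = Device("laptop-042", purchase_price=900.0, license_per_year=120.0,
                support_hours_per_year=25.0, lifetime_years=4)
total = lifecycle_cost(laptop)
print(f"{laptop.name}: {total:.0f} EUR total, "
      f"{total / laptop.purchase_price:.1f}x the purchase price")
```

Even with these made-up numbers the lifecycle cost lands at several times the purchase price, which is the effect the thesis sets out to make visible.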

Relevance: 90.00%

Publisher:

Abstract:

The article describes some concrete problems that were encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600 000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage K. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. The Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are only superficially discussed. Another example of phenomena that are poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. The help of informants and searches in electronic corpora were used to overcome such difficulties in the development of the two-level model of Mari. The variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters. In the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalize Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix. In some cases, letters for voiced consonants are also generalized to represent voiceless consonants. Such orthographical conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
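
As a toy illustration of limiting recursive morphotactics with a filter (the suffix classes and the constraints below are invented placeholders, not actual Mari morphology):

```python
# Free ordering of plural (PL), case (CASE) and possessive (POSS) markers
# generates suffix sequences recursively; filters cap the productivity.
from itertools import product

CLASSES = ("PL", "CASE", "POSS")
MAX_SLOTS = 3                      # filter: at most three suffix slots

def suffix_sequences():
    for n in range(1, MAX_SLOTS + 1):
        for combo in product(CLASSES, repeat=n):
            # second filter: each class occurs at most once per word form
            if len(set(combo)) == len(combo):
                yield combo

for seq in suffix_sequences():
    print("-".join(seq))           # e.g. PL-POSS-CASE, POSS-PL-CASE, ...
```

An unfiltered recursive grammar would license unboundedly long suffix strings; the filters play the role of the "technical device" mentioned above.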

Relevance: 90.00%

Publisher:

Abstract:

Membrane bioreactors (MBRs) are a combination of activated sludge bioreactors and membrane filtration, enabling high-quality effluent with a small footprint. However, they can be beset by fouling, which causes an increase in transmembrane pressure (TMP). Modelling and simulation of changes in TMP could be useful to describe fouling through the identification of the most relevant operating conditions. Using experimental data from an MBR pilot plant operated for 462 days, two different models were developed: a deterministic model using activated sludge model No. 2d (ASM2d) for the biological component and a resistance-in-series model for the filtration component, and a data-driven model based on multivariable regressions. Once validated, these models were used to describe membrane fouling (as changes in TMP over time) under different operating conditions. The deterministic model performed better at higher temperatures (>20°C), constant operating conditions (DO set-point, membrane air-flow, pH and ORP), high mixed liquor suspended solids (>6.9 g L⁻¹) and flux changes. At low pH (<7) or in periods with larger pH changes, the data-driven model was more accurate. Changes in the DO set-point of the aerobic reactor that affected the TMP were also better described by the data-driven model. By combining the use of both models, a better description of fouling can be achieved under different operating conditions.
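
The resistance-in-series filtration component has a simple closed form: by Darcy's law, TMP = μ·J·(Rm + Rc + Rf), so the pressure rises as the cake (Rc) and irreversible fouling (Rf) resistances build up on top of the clean-membrane resistance Rm. The sketch below shows that structure; the growth laws and all numbers are assumptions for illustration, not the calibrated pilot-plant model.

```python
# Resistance-in-series TMP sketch: TMP = mu * J * (Rm + Rc + Rf).
import numpy as np

mu = 1.0e-3           # permeate viscosity (Pa s), roughly water at 20 degC
J = 20.0 / 3.6e6      # flux: 20 L m^-2 h^-1 converted to m s^-1
Rm = 1.0e12           # clean membrane resistance (1/m)

t = np.linspace(0.0, 462.0, 463)            # operating time (days)
Rc = 1.0e10 * t                             # assumed linear cake build-up
Rf = 2.0e12 * (1.0 - np.exp(-t / 150.0))    # assumed saturating fouling

TMP_kPa = mu * J * (Rm + Rc + Rf) / 1e3     # Pa -> kPa
print(f"TMP at start-up: {TMP_kPa[0]:5.1f} kPa")
print(f"TMP at day 462:  {TMP_kPa[-1]:5.1f} kPa")
```

The data-driven alternative regresses TMP directly on the operating variables (DO set-point, pH, flux and so on), which plausibly explains why it was more accurate in the periods when those variables changed strongly.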

Relevance: 90.00%

Publisher:

Abstract:

The economic maintenance model for electricity transmission companies, SKUTMA, is a reliability-based maintenance model designed for electricity network companies that prioritises and schedules the maintenance and investment times of distribution network components. The model uses a dynamic optimisation algorithm to find the cost minima over the review period and simulates component degradation by means of a degradation model. In this Master's thesis, a maintenance program was developed on the basis of the SKUTMA model; it is used to study how the model performs on real feeders and how it can be utilised in the maintenance planning of electricity networks. The thesis also reviews the calculation methodology of the maintenance program and its features. The end result of this work is a clear picture of the model's operation, usability and potential for further development.
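
A minimal sketch of the dynamic-optimisation core (the costs, the failure-probability curve and the "maintenance rejuvenates by five years" rule are all invented for illustration; SKUTMA itself uses calibrated degradation models):

```python
# Dynamic programming over (year, component age): each year choose between
# doing nothing, maintaining or replacing so that the expected total cost
# over the review period is minimised.
from functools import lru_cache

HORIZON = 20                                   # review period (years)
C_MAINT, C_REPLACE, C_FAILURE = 2_000, 20_000, 60_000

def p_fail(age: int) -> float:
    """Assumed failure probability as a function of component age."""
    return min(0.5, 0.005 * age ** 1.5)

@lru_cache(maxsize=None)
def cost_to_go(year: int, age: int) -> float:
    if year == HORIZON:
        return 0.0
    do_nothing = p_fail(age) * C_FAILURE + cost_to_go(year + 1, age + 1)
    rejuv = max(age - 5, 0)                    # maintenance "rejuvenates"
    maintain = (C_MAINT + p_fail(rejuv) * C_FAILURE
                + cost_to_go(year + 1, rejuv + 1))
    replace = C_REPLACE + cost_to_go(year + 1, 1)
    return min(do_nothing, maintain, replace)

print(f"Minimum expected cost for a 10-year-old component over "
      f"{HORIZON} years: {cost_to_go(0, 10):,.0f}")
```

Backtracking through the same recursion yields the schedule itself, i.e. in which years to maintain and when the replacement investment is optimally timed.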

Relevance: 90.00%

Publisher:

Abstract:

Rising population, rapid urbanisation and growing industrialisation have severely stressed water quality and its availability in Malawi. In addition, financial and institutional problems and the expanding agro-industry have aggravated this problem. The situation is worsened by depleting water resources and pollution from untreated sewage and industrial effluent. The increasing scarcity of clean water calls for appropriate management of available water resources. There is also demand for a training system for the conceptual design and evaluation of wastewater treatment in order to build the capacity of technical service providers and environmental practitioners in the country. It is predicted that Malawi will face a water stress situation by 2025. In the city of Blantyre, this situation is aggravated by the serious pollution threat from the grossly inadequate sewage treatment capacity, which covers only 23.5% of the wastewater presently generated. In addition, limited or non-existent industrial effluent treatment has contributed to severe water quality degradation. This situation poses a threat to the ecologically fragile and sensitive receiving watercourses within the city, whose water is used for domestic purposes further downstream. This manuscript outlines the legal and policy framework for wastewater treatment in Malawi and evaluates the existing wastewater treatment systems in Blantyre. This evaluation aims at determining whether the effluent levels at the municipal plants conform to existing standards and guidelines and other associated policy and regulatory frameworks. The raw material at all three municipal plants is sewage. The typical wastewater parameters are Biochemical Oxygen Demand (BOD5), Chemical Oxygen Demand (COD) and Total Suspended Solids (TSS), and the treatment target is the reduction of BOD5, COD and TSS. The typical wastewater parameters at the treatment plant of the Mapeto David Whitehead & Sons (MDW&S) textile and garments factory are BOD5 and COD, and the treatment target is to reduce them. The manuscript further evaluates a design approach for the three municipal wastewater treatment plants in the city and the wastewater treatment plant at the MDW&S factory. This evaluation utilises case-based design and case-based reasoning principles in the ED-WAVE tool to determine whether there is potential for the tool in Blantyre. The manuscript finally evaluates the technology selection process for appropriate wastewater treatment systems for the city of Blantyre. The criteria for the selection of appropriate wastewater treatment systems are discussed, as are decision support tools and the decision-tree process for technology selection. Based on the treatment targets and design criteria in the eight cases evaluated in this manuscript, with reference to similar cases in the ED-WAVE tool, this work confirms the practical use of case-based design and case-based reasoning principles in the ED-WAVE tool in the design and evaluation of wastewater treatment systems in sub-Saharan Africa, using Blantyre, Malawi, as the case study area. When a new situation is encountered, previously collected decision scenarios (cases) are invoked and modified in order to arrive at a particular design alternative. What is necessary, however, is to appropriately modify the case arrived at through the Case Study Manager in order to come up with a design appropriate to the local situation, taking into account technical, socio-economic and environmental aspects. This work provides a training system for the conceptual design and evaluation of wastewater treatment.
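
The retrieval step of case-based reasoning can be sketched in a few lines: describe the new treatment problem by its influent parameters and fetch the most similar stored case for adaptation. The case base, weights and treatment trains below are invented; ED-WAVE's actual case library and similarity measure are not reproduced here.

```python
# Hypothetical case retrieval: weighted similarity over (BOD5, COD, TSS).
import math

CASE_BASE = {   # (BOD5, COD, TSS) in mg/L -> treatment train chosen earlier
    (250, 500, 220): "waste stabilisation ponds",
    (400, 900, 350): "UASB reactor + trickling filter",
    (300, 650, 280): "conventional activated sludge",
}
WEIGHTS = (1.0, 0.7, 0.5)            # assumed relative importance

def similarity(a, b):
    d = math.sqrt(sum(w * ((x - y) / 1000.0) ** 2
                      for w, x, y in zip(WEIGHTS, a, b)))
    return 1.0 / (1.0 + d)           # higher value = more similar

new_case = (320, 700, 300)
best = max(CASE_BASE, key=lambda c: similarity(new_case, c))
print(f"closest case {best} -> adapt design: {CASE_BASE[best]}")
```

The adaptation step that follows retrieval is exactly the manual modification described above: the retrieved design is adjusted to local technical, socio-economic and environmental conditions.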

Relevance: 90.00%

Publisher:

Abstract:

One of the targets of the climate and energy package of the European Union is to increase energy efficiency in order to achieve a 20 percent reduction in primary energy use compared with the projected level by 2020. Energy efficiency can be improved, for example, by increasing the rotational speed of large electrical drives, because this enables the elimination of gearboxes, leading to a compact design with lower losses. The rotational speeds of traditional bearings, such as roller bearings, are limited by mechanical friction. Active magnetic bearings (AMBs), on the other hand, allow very high rotational speeds. Consequently, their use in large medium- and high-speed machines has rapidly increased. An active magnetic bearing rotor system is an inherently unstable, nonlinear multiple-input, multiple-output system. Model-based controller design of AMBs requires an accurate system model. Finite element modeling (FEM) together with experimental modal analysis provides a very accurate model for the rotor, and a linearized model of the magnetic actuators has proven to work well in normal conditions. However, the overall system may suffer from unmodeled dynamics, such as the dynamics of the foundation or shrink fits. These dynamics can be modeled by system identification. System identification can also be used for on-line diagnostics. In this study, broadband excitation signals are adopted for the identification of an active magnetic bearing rotor system. Broadband excitation enables faster frequency response function measurements than the widely used stepped sine and swept sine excitations. Different broadband excitations are reviewed, and the random phase multisine excitation is chosen for further study. The measurement times using the multisine excitation and the stepped sine excitation are compared. An excitation signal design with an analysis of the harmonics produced by the nonlinear system is presented. The suitability of different frequency response function estimators for an AMB rotor system is also compared. Additionally, analytical modeling of an AMB rotor system, obtaining a parametric model from the nonparametric frequency response functions, and model updating are discussed in brief, as they are key elements in modeling for control design. The theoretical methods are tested with a laboratory test rig. The results show that an appropriately designed random phase multisine excitation is suitable for the identification of AMB rotor systems.
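
A random phase multisine is easy to state in code: a periodic sum of cosines on chosen frequency lines with independent, uniformly random phases. The line grid, amplitudes and scaling below are illustrative rather than the thesis's actual excitation design; exciting only odd lines, as here, is a common trick so that even-order nonlinear distortions fall on unexcited lines.

```python
# Random phase multisine generator (one period).
import numpy as np

rng = np.random.default_rng(42)
fs = 5000.0                    # sampling frequency (Hz)
N = 5000                       # samples per period -> 1 Hz line spacing
k = np.arange(1, 400, 2)       # excited odd harmonics: 1, 3, ..., 399 Hz
phases = rng.uniform(0.0, 2.0 * np.pi, k.size)

t = np.arange(N) / fs
u = np.cos(2.0 * np.pi * k[:, None] * t[None, :] + phases[:, None]).sum(axis=0)
u /= np.max(np.abs(u))         # normalise to unit peak amplitude

# One period excites all chosen lines simultaneously, so the frequency
# response can be estimated at every line from a single record; this is
# the speed advantage over stepped sine measurements.
crest = np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2))
print(f"crest factor of this realisation: {crest:.2f}")
```

In practice several random-phase realisations and several periods are averaged, which also separates the noise and nonlinear-distortion contributions in the frequency response function estimate.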

Relevance: 90.00%

Publisher:

Abstract:

The objective of the pilotage effectiveness study was to come up with a process description of the pilotage procedure, to design performance indicators based on this process description, to be used by Finnpilot, and to work out a preliminary plan for the implementation of the indicators within the Finnpilot organisation. The theoretical aspects of pilotage as well as the guidelines and standards used were determined through a literature review. Based on the literature review, a process flow model with the following phases was created: the planning of pilotage, the start of pilotage, the act of pilotage, the end of pilotage and the closing of pilotage. The model based on the literature review was tested through interviews and observation of pilotage. At the same time an e-mail survey directed at foreign pilotage organisations, which included a questionnaire concerning their standards and management systems, operations procedures, measurement tools and their attitude to passage planning, was conducted. The main issues in the observations and interviews were the passage plan and bridge team co-operation. The phases of the pilotage process model emerged in both the pilotage activities and the interviews, whereas bridge team co-operation was relatively marginal. Most of the pilotage organisations that responded to the query also use some standard-based management system. All organisations that answered the survey use some sort of pilotage process model. According to the query, the main measuring tools for pilotage are statistical information concerning pilotage and the organisations, customer feedback surveys, and financial results. Attitudes towards passage planning were mostly positive among the organisations. A workshop with pilotage experts was arranged where the process model constructed on the basis of the literature review was tuned to match practical pilotage. In the workshop it was determined that certain phases and the corresponding tasks, through which pilotage can be described as a process, were identifiable in all pilotage. The result of the workshop was a complemented process model, which separates incoming and outgoing traffic, as well as fairway pilotage and harbour pilotage, from each other. Additionally, indicators divided according to the data gathering method were defined. Data concerning safety and traffic flow is gathered in the form of customer feedback. The pilot's own perceptions of the pilotage process are gathered through self-assessment. The measurement data connected to the phases of the pilotage process is generated e.g. by gathering statistics on the success of pilot dispatches, the accuracy of the pilotage and the incidents that occurred during the pilotage: near misses, deviations and accidents. The measurement data is collected via PilotWeb at the closing of the pilotage. A separate project, with a project group in which pilots also participate, will be established for the deployment of the performance indicators. The phases of the project are the definition phase, the implementation phase and the deployment phase. The purpose of the definition phase is to prepare questions for ship commanders concerning the customer feedback questionnaire and also to work out the self-assessment queries and the queries concerning the process indicators.

Relevance: 90.00%

Publisher:

Abstract:

To assess the impact of tillage practices on soil carbon losses, it is necessary to describe the temporal variability of soil CO2 emission after tillage. It has been argued that the large amounts of CO2 emitted after tillage may serve as an indicator of longer-term changes in soil carbon stocks. Here we present a two-step function model based on soil temperature and soil moisture, including an exponential decay-in-time component, that is efficient in fitting intermediate-term emission after disk plow followed by a leveling harrow (conventional tillage) and after chisel plow coupled with a roller for clod breaking (reduced tillage). Emission after reduced tillage was described using a non-linear estimator with a determination coefficient (R²) as high as 0.98. The results indicate that when emission after tillage is addressed, it is important to consider an exponential decay in time in order to predict the impact of tillage on short-term emissions.
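
The abstract does not spell out the functional form, so the sketch below should be read as one plausible parameterisation of "temperature and moisture response times an exponential decay in time", fitted with a standard non-linear estimator; the coefficients and the synthetic data are illustrative only.

```python
# One plausible form: F(t, T, M) = a * exp(b*T) * M**c * exp(-k*t),
# where t is days since tillage, T soil temperature, M soil moisture.
import numpy as np
from scipy.optimize import curve_fit

def co2_flux(X, a, b, c, k):
    t, T, M = X
    return a * np.exp(b * T) * M ** c * np.exp(-k * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 60)            # days after tillage
T = 22.0 + 3.0 * np.sin(t / 3.0)          # synthetic soil temperature (degC)
M = 0.25 + 0.05 * np.cos(t / 5.0)         # synthetic soil moisture (v/v)
y = co2_flux((t, T, M), 0.8, 0.06, 0.5, 0.08) * rng.normal(1.0, 0.05, t.size)

popt, _ = curve_fit(co2_flux, (t, T, M), y, p0=[1.0, 0.05, 0.5, 0.1])
resid = y - co2_flux((t, T, M), *popt)
print(f"fitted decay k = {popt[3]:.3f} 1/day, "
      f"R^2 = {1.0 - resid.var() / y.var():.3f}")
```

The decay constant k carries the paper's point: without the exp(-k t) term, a temperature-and-moisture model alone would overpredict emission once the tillage-induced pulse has faded.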

Relevance: 90.00%

Publisher:

Abstract:

In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
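
The energy argument behind DVFS and power gating fits in a few lines: dynamic energy scales with C·V²·f while leakage scales roughly with V and the powered-on area, and running slower at lower voltage trades execution time against both. The constants below are illustrative, not figures from the thesis.

```python
# Back-of-the-envelope DVFS / power-gating energy model for one task.
def energy(task_cycles: float, v: float, f_hz: float,
           c_eff: float = 1e-9, i_leak: float = 0.05,
           active_fraction: float = 1.0) -> float:
    """Energy (J) to run task_cycles at supply voltage v and clock f_hz."""
    t_exec = task_cycles / f_hz                        # execution time (s)
    e_dynamic = c_eff * v ** 2 * task_cycles           # ~ alpha*C*V^2 per cycle
    e_leakage = i_leak * v * t_exec * active_fraction  # gated share is off
    return e_dynamic + e_leakage

cycles = 1e9
nominal = energy(cycles, v=1.1, f_hz=2.0e9)
dvfs = energy(cycles, v=0.8, f_hz=1.0e9)               # V and f scaled together
dvfs_gated = energy(cycles, v=0.8, f_hz=1.0e9, active_fraction=0.6)
print(f"nominal {nominal:.2f} J, DVFS {dvfs:.2f} J, "
      f"DVFS + gating {dvfs_gated:.2f} J")
```

The agent-based approach described above essentially decides, at run time and per region, which of these operating points each part of the NoC-based system should sit at.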

Relevance: 90.00%

Publisher:

Abstract:

Control of an industrial robot is mainly a problem of dynamics. It includes non-linearities, uncertainties and external perturbations that should be considered in the design of control laws. In this work, two control strategies, based on variable structure controllers (VSC) and on a PD control algorithm, are compared with respect to tracking errors in the presence of friction. The controllers' performance is evaluated by adding a static friction model. Simulations and experimental results show that it is possible to diminish tracking errors by using a model-based friction compensation scheme. A SCARA robot is used to illustrate the conclusions of this paper.
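
A single-joint sketch of the idea: the same PD law with and without a model-based friction feed-forward term. The inertia, gains and the Coulomb-plus-viscous friction model below are illustrative stand-ins (the paper uses a static friction model on a SCARA robot), but the effect is the same: compensation removes the steady offset that friction leaves inside the PD controller's dead band.

```python
# PD control of one joint with optional model-based friction compensation.
import numpy as np

I = 0.05                       # joint inertia (kg m^2), assumed
Fc, Fv = 0.4, 0.02             # Coulomb (N m) and viscous (N m s/rad) terms
Kp, Kd = 40.0, 4.0             # PD gains (overdamped closed loop)
dt, T_end = 1e-3, 2.0

def friction(qdot: float) -> float:
    return Fc * np.sign(qdot) + Fv * qdot

def final_error(compensate: bool) -> float:
    q, qdot, q_ref = 0.0, 0.0, 1.0          # step reference of 1 rad
    for _ in range(int(T_end / dt)):
        e = q_ref - q
        tau = Kp * e - Kd * qdot
        if compensate:
            tau += friction(qdot)           # cancel the estimated friction
        qddot = (tau - friction(qdot)) / I  # plant sees the real friction
        qdot += qddot * dt
        q += qdot * dt
    return abs(q_ref - q)

print(f"PD only:                   error = {final_error(False):.4f} rad")
print(f"PD + friction feedforward: error = {final_error(True):.4f} rad")
```

Without compensation the joint stalls once the position error times Kp drops below the Coulomb torque (here around Fc/Kp = 0.01 rad); with an accurate friction model the residual error essentially vanishes.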

Relevance: 90.00%

Publisher:

Abstract:

The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in currents of several kilo-amperes. An alternative for increasing the power level without raising the voltage level is provided by multiphase machines. Multiphase machines are used, for instance, in ship propulsion systems, aerospace applications, electric vehicles and other high-power applications, including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small impedance of the current paths of the harmonic components. However, multiphase machines provide special characteristics compared with their three-phase counterparts: multiphase machines have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume and thus increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on diagonalization of the inductance matrix. The double-star machine is a special type of multiphase machine. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is treated as a parameter. The diagonalization of the inductance matrix results in a simplified model structure in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods to determine the machine inductances by finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation and, consequently, the decoupled d–q machine model are shown to describe the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
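
The central algebraic step, diagonalising the symmetric inductance matrix, can be shown in a few lines of linear algebra. The 4x4 matrix below is a made-up example with a single coupling parameter between the two winding sets; it is not machine data from the thesis, and the real derivation works on the full phase-variable matrix with the displacement angle as a parameter.

```python
# Diagonalising a symmetric inductance matrix with an orthogonal transform.
import numpy as np

Ls, Lm = 5.0e-3, 1.5e-3         # self and inter-set mutual inductance (H)
L = np.array([[Ls, 0.0, Lm, 0.0],
              [0.0, Ls, 0.0, Lm],
              [Lm, 0.0, Ls, 0.0],
              [0.0, Lm, 0.0, Ls]])

# A symmetric matrix is exactly diagonalised by its orthogonal eigenvectors.
eigvals, T = np.linalg.eigh(L)
L_diag = T.T @ L @ T

print("eigenvalues (H):", np.round(eigvals, 5))   # Ls - Lm and Ls + Lm, twice
print("largest off-diagonal residue:", np.abs(L_diag - np.diag(eigvals)).max())
```

In the transformed coordinates the fluxes of the two sets no longer couple, which is what makes the decoupled d–q model, and the mapping of current harmonics into their own controllable reference frame, possible.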

Relevance: 90.00%

Publisher:

Abstract:

Knowing the water level in the pressure vessel of a boiling water reactor is extremely important because of its safety implications. The level is measured with differential pressure measurements that detect the height of water columns. According to the YVL guides of the Finnish Radiation and Nuclear Safety Authority, safety-related measurements must follow the redundancy and diversity principles. Usually the diversity principle has been implemented by using different types of differential pressure transmitters, but a measurement based on a different physical operating principle would be better and would fulfil the diversity principle more completely. A float switch is such a level measurement device based on a physically different principle. Technology intended for a nuclear power plant must be qualified by an independent party before deployment. For the qualification tests, two test facilities were built in the Nuclear Safety Research Unit of Lappeenranta University of Technology in 2011–2013. These test facilities were used to study the operation and characteristics of float switches in various operating conditions of a boiling water reactor. The test facilities required automation systems in order to operate; these were designed mainly by following the design life-cycle model and the content areas of automation engineering. The design of the automation systems began by specifying the requirements set by the experimental arrangements, after which the technology choices were made. Next, the logic software of the automation systems was designed; this thesis mainly focuses on describing that software. The logic software was implemented in the graphical National Instruments LabView programming language. The software had to handle data acquisition, operational automation, safety functions and special tasks related to the experiments. During preliminary tests the software was made to work as desired, and the actual experiments could be performed without significant problems.