843 results for "Hybrid" implementation model
Abstract:
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of the selection candidates obtained from intraspecific crosses in Panicum maximum, as well as the performance of the hybrid progeny of the existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten plants per plot. Green matter yield, total and leaf dry matter yields and leaf percentage were evaluated in five cuts per year during three years. Genetic parameters were estimated and breeding and genotypic values were predicted using the restricted maximum likelihood/best linear unbiased prediction procedure (REML/BLUP). The dominance genetic variance was estimated by adjusting the effect of full-sib families. Individual narrow-sense heritabilities (0.02-0.05), individual broad-sense heritabilities (0.14-0.20) and repeatabilities measured on an individual basis (0.15-0.21) were of low magnitude. Dominance effects for all evaluated characteristics indicated that breeding strategies that exploit heterosis must be adopted. The increase in repeatability for a three-year evaluation period was less than 5%; this increase may serve as a criterion for determining the maximum number of evaluation years to adopt without compromising gain per cycle of selection. The identification of hybrid candidates for future cultivars, and of those that can be incorporated into the breeding program, was based on the genotypic and breeding values, respectively. The prediction of the performance of the hybrid progeny, based on the breeding values of the progenitors, permitted the identification of the best crosses and indicated the best parents to use in crosses.
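To make the reported parameter ranges concrete, a minimal sketch of how individual-level heritabilities and repeatability follow from variance components is given below; the component names mirror a standard REML/BLUP decomposition, and all numeric values are hypothetical placeholders rather than estimates from the study.

```python
# Minimal sketch: individual-level heritabilities and repeatability from
# variance components, as in a REML/BLUP analysis. The numeric values are
# hypothetical placeholders, not estimates from the study.

var_additive = 0.8    # sigma^2_A: additive genetic variance
var_dominance = 2.4   # sigma^2_D: dominance variance (from full-sib family effect)
var_permanent = 0.9   # sigma^2_C: permanent environmental (clone/plot) variance
var_residual = 16.0   # sigma^2_E: residual variance

var_phenotypic = var_additive + var_dominance + var_permanent + var_residual

h2_narrow = var_additive / var_phenotypic                   # narrow-sense h^2
h2_broad = (var_additive + var_dominance) / var_phenotypic  # broad-sense H^2
repeatability = (var_additive + var_dominance + var_permanent) / var_phenotypic

print(f"narrow-sense h^2 = {h2_narrow:.2f}")    # ~0.04, inside 0.02-0.05
print(f"broad-sense H^2 = {h2_broad:.2f}")      # ~0.16, inside 0.14-0.20
print(f"repeatability r = {repeatability:.2f}") # ~0.20, inside 0.15-0.21
```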
Abstract:
Multiscale methods capture the influence of fine-scale processes through a global coarse problem. Although these techniques are usually employed for problems in which the fine-scale processes are described by Darcy's law, they can also be applied to pore-scale simulations and used as a mathematical framework for hybrid methods that couple the Darcy and pore scales. In this work, we consider a pore-scale description of fine-scale processes. The Navier-Stokes equations are numerically solved in the pore geometry to compute the velocity field and obtain generalized permeabilities. In the case of two-phase flow, the dynamics of the phase interface is described by the volume of fluid method with the continuum surface force model. The MsFV method is employed to construct an algorithm that couples a Darcy macro-scale description with a pore-scale description at the fine scale. The hybrid simulation results presented are in good agreement with the fine-scale reference solutions. As the reconstruction of the fine-scale details can be done adaptively, the presented method offers a flexible framework for hybrid modeling.
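The hybrid coupling relies on a generic upscaling step: a pore-scale velocity field, obtained from a Navier-Stokes solve, is averaged and equated to Darcy's law to recover an effective permeability. The sketch below illustrates that step only; the grid, the solid mask and all numbers are hypothetical stand-ins for a real pore-scale solver output.

```python
# Minimal sketch of the upscaling step assumed by hybrid Darcy/pore-scale
# methods: given a pore-scale velocity field computed in a sample geometry
# under a known pressure drop, recover an effective Darcy permeability via
#   <u> = -(k/mu) * dP/L   =>   k = -mu * <u> * L / dP
# All numbers below are hypothetical placeholders.
import numpy as np

mu = 1.0e-3   # fluid viscosity [Pa s]
L = 1.0e-3    # sample length in the flow direction [m]
dP = -10.0    # applied pressure drop P_out - P_in [Pa]

# Stand-in for a pore-scale solver result: x-velocity on a 3D grid,
# zero in solid cells.
rng = np.random.default_rng(0)
u_x = rng.uniform(0.0, 1.0e-4, size=(20, 20, 20))
solid = rng.random((20, 20, 20)) < 0.4   # hypothetical solid mask
u_x[solid] = 0.0

u_mean = u_x.mean()             # Darcy (superficial) velocity
k_eff = -mu * u_mean * L / dP   # effective permeability [m^2]
print(f"effective permeability k = {k_eff:.3e} m^2")
```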
Abstract:
RATIONALE: An objective and simple prognostic model for patients with pulmonary embolism could be helpful in guiding initial intensity of treatment. OBJECTIVES: To develop a clinical prediction rule that accurately classifies patients with pulmonary embolism into categories of increasing risk of mortality and other adverse medical outcomes. METHODS: We randomly allocated 15,531 inpatient discharges with pulmonary embolism from 186 Pennsylvania hospitals to derivation (67%) and internal validation (33%) samples. We derived our prediction rule using logistic regression with 30-day mortality as the primary outcome, and patient demographic and clinical data routinely available at presentation as potential predictor variables. We externally validated the rule in 221 inpatients with pulmonary embolism from Switzerland and France. MEASUREMENTS: We compared mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. MAIN RESULTS: The prediction rule is based on 11 simple patient characteristics that were independently associated with mortality and stratifies patients with pulmonary embolism into five severity classes, with 30-day mortality rates of 0-1.6% in class I, 1.7-3.5% in class II, 3.2-7.1% in class III, 4.0-11.4% in class IV, and 10.0-24.5% in class V across the derivation and validation samples. Inpatient death and nonfatal complications were ≤1.1% among patients in class I and ≤1.9% among patients in class II. CONCLUSIONS: Our rule accurately classifies patients with pulmonary embolism into classes of increasing risk of mortality and other adverse medical outcomes. Further validation of the rule is important before its implementation as a decision aid to guide the initial management of patients with pulmonary embolism.
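The rule itself is a point score mapped to severity classes. The sketch below shows the general structure of such a classifier; the predictor names echo the abstract, but the point values and class cut-offs are hypothetical placeholders, not the published rule.

```python
# Minimal sketch of applying a point-based severity rule of this kind.
# The point values and class cut-offs below are hypothetical placeholders,
# not the published rule.

POINTS = {
    "age": lambda p: p["age"],  # age in years contributes its own value
    "male_sex": lambda p: 10 * p["male_sex"],
    "cancer": lambda p: 30 * p["cancer"],
    "heart_failure": lambda p: 10 * p["heart_failure"],
    # ... remaining predictors omitted for brevity (11 in the actual rule)
}

CLASS_CUTOFFS = [(65, "I"), (85, "II"), (105, "III"), (125, "IV")]  # hypothetical

def severity_class(patient: dict) -> str:
    score = sum(rule(patient) for rule in POINTS.values())
    for cutoff, label in CLASS_CUTOFFS:
        if score <= cutoff:
            return label
    return "V"

print(severity_class({"age": 48, "male_sex": 1, "cancer": 0, "heart_failure": 0}))  # -> "I"
```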
Abstract:
EXECUTIVE SUMMARY : Evaluating Information Security Posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately, this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time the overall security level is globally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation Issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" as well as identifying the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in a holistic and baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter.
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation:
- identification of the key elements within the dimension;
- identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension;
- identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified for that dimension.
The second phase concerns the evaluation of each Information Security dimension by:
- implementing the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection;
- proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements.
Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. The Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. The Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ (translated from French): General context of the thesis. The evaluation of security in general, and of information security in particular, has become for organizations not only a crucial mission but also an increasingly complex one. At present, this evaluation is based mainly on methodologies, best practices, norms or standards that address separately the different aspects making up information security. We consider this way of evaluating security inefficient, because it does not take into account the interactions between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain.
We have identified the need for a global, integrated, systemic and multidimensional approach to information security evaluation. Indeed, and this is the starting point of our thesis, we show that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new evaluation paradigm for security, in order to satisfy the effectiveness and efficiency needs of a given organization. We propose a model that aims to evaluate all dimensions of security holistically, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers information security from a global perspective. Structure of the thesis and topics covered. Our document is structured in three parts. The first, entitled "The problem of information security evaluation", consists of four chapters. Chapter 1 introduces the object of the research as well as the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analysed in order to identify the principal and invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of information security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference base in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in what constitutes the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This makes it possible to identify what we have called the "roots of trust". Chapter 3 presents and analyses the difference and the relationships that exist between the processes of risk management and security management, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way it meets the evaluation requirements as previously presented. In this chapter the underlying concepts relating to the notions of assurance and trust are analysed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform offering a certain level of guarantee, relying on three evaluation attributes, namely: "assurance structure", "process quality", and "achievement of requirements and objectives".
The issues related to each of these evaluation attributes are analysed on the basis of the state of the art of research and literature, of the various existing methods, and of the most common norms and standards in the security domain. On this basis, three different evaluation levels are constructed, namely: the assurance level, the quality level and the maturity level, which together constitute the basis for evaluating the overall security state of an organization. The second part, "Application of the information security assurance assessment model by security domain", also consists of four chapters. The evaluation model already constructed and analysed is, in this part, placed in a specific context according to the four predefined security dimensions, namely: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation:
- identification of the key elements of the evaluation;
- identification of the "Focus Areas" for each dimension, which represent the issues found in that dimension;
- identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that help to resolve or reduce the impact of the risks.
The second phase concerns the evaluation of each dimension presented above. It consists, on the one hand, of the implementation of the general evaluation model for the dimension concerned by:
- relying on the elements specified in the first phase;
- identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection.
On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that can be used by each organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document relates to the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes. Our security evaluation model integrates and is based on numerous sources of expertise, such as best practices, norms, standards, methods and scientific research expertise in the domain. Our constructive proposal responds to a real, as yet unsolved problem faced by all organizations, regardless of size and profile.
This would allow organizations to specify their particular requirements in terms of the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can ensure that their information security is managed appropriately, thus providing a certain level of confidence in the degree of protection achieved. We have integrated into our model the best know-how, experience and expertise currently available internationally, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool deriving from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the company itself, without resorting to additional resources, and also gives it the possibility of better governing its information security.
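A rough sketch of the data structure implied by the two-phase evaluation path (dimensions containing Focus Areas, which contain scored Specific Factors) is given below. The 0-5 scale and the aggregation choices are our illustrative assumptions, not definitions from ISAAM.

```python
# Hedged sketch of the structure implied by the two-phase evaluation path:
# a dimension contains Focus Areas, which contain Specific Factors
# (security measures/controls). The 0-5 maturity scale and the mean/min
# aggregation below are illustrative assumptions only; the thesis defines
# its own assurance, quality and maturity levels.

ORGANIZATIONAL = {  # hypothetical Focus Areas and Specific Factor scores (0-5)
    "security_governance": {"policy_defined": 4, "roles_assigned": 3},
    "asset_management": {"inventory_maintained": 2, "classification_scheme": 3},
}

def focus_area_maturity(factors: dict) -> float:
    # a Focus Area scores the average of its factors (assumption)
    return sum(factors.values()) / len(factors)

def dimension_maturity(dimension: dict) -> float:
    # weakest-link aggregation across Focus Areas, echoing the thesis's
    # "weakest link" motivation (our modelling choice, not ISAAM's)
    return min(focus_area_maturity(f) for f in dimension.values())

print(f"Organizational dimension maturity: {dimension_maturity(ORGANIZATIONAL):.1f}")
```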
Abstract:
We have included an effective description of squark interactions with charginos and neutralinos in the MadGraph MSSM model. This effective description includes the effective Yukawa couplings and an additional logarithmic term which encodes the supersymmetry breaking. We have performed an extensive test of our implementation by comparing the partial decay widths of squarks into charginos and neutralinos obtained using the FeynArts/FormCalc programs with those obtained using the new model file in MadGraph. We present results for the cross-section of top-squark production with subsequent decays into charginos and neutralinos.
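The partial widths being cross-checked are instances of the generic two-body decay width, Gamma = |M|^2 * p / (8 * pi * m^2), with the decay momentum p given by the Kallen function. A schematic sketch follows; the masses and the flat |M|^2 are hypothetical placeholders, while the actual amplitudes (effective Yukawa couplings plus the logarithmic supersymmetry-breaking term) are what FeynArts/FormCalc and the MadGraph model file evaluate.

```python
# Schematic sketch of the generic two-body width entering squark ->
# chargino/neutralino partial widths: Gamma = |M|^2 * p / (8*pi*m^2).
# The flat, spin-averaged |M|^2 below is a hypothetical placeholder.
import math

def kallen(a: float, b: float, c: float) -> float:
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def two_body_width(m: float, m1: float, m2: float, msq_avg: float) -> float:
    """m: decaying mass, m1/m2: product masses, msq_avg: averaged |M|^2."""
    if m <= m1 + m2:
        return 0.0  # decay kinematically closed
    p = math.sqrt(kallen(m * m, m1 * m1, m2 * m2)) / (2.0 * m)
    return msq_avg * p / (8.0 * math.pi * m * m)

# hypothetical stop -> top + neutralino numbers (masses in GeV, |M|^2 in GeV^2)
print(f"Gamma ~ {two_body_width(800.0, 173.0, 200.0, 1.0e4):.3f} GeV")
```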
Abstract:
Cooperative transmission can be seen as a "virtual" MIMO system, where the multiple transmit antennas are implemented in a distributed fashion by the antennas at the source and the relay terminal. Depending on the system design, diversity and/or multiplexing gains are achievable. This design involves the definition of the type of retransmission (incremental redundancy, repetition coding), the design of the distributed space-time codes, the error-correcting scheme, the operation of the relay (decode-and-forward or amplify-and-forward) and the number of antennas at each terminal. The proposed schemes are evaluated in different conditions in combination with forward error correcting (FEC) codes, both for linear and near-optimum (sphere decoder) receivers, for their possible implementation in downlink high-speed packet services of cellular networks. Results show the benefits of coded cooperation over direct transmission in terms of increased throughput. It is shown that multiplexing gains are observed even if the mobile station features a single antenna, provided that cell-wide reuse of the relay radio resource is possible.
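A minimal sketch of the classical 2x1 Alamouti space-time block code, the kind of building block a distributed space-time code spreads over the source and relay "virtual antenna" pair, is given below. Ideal synchronization, flat fading and perfect relay decoding are assumed; this illustrates the principle, not the thesis's specific schemes.

```python
# Hedged sketch of the 2x1 Alamouti space-time block code over a flat
# Rayleigh channel; the two "antennas" stand in for the source and relay.
import numpy as np

rng = np.random.default_rng(1)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # source/relay channels
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)                     # two QPSK symbols

# Two slots: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*)
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r1 = h[0] * s[0] + h[1] * s[1] + noise[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

# Linear Alamouti combining recovers both symbols with full diversity
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
print(np.round([s1_hat, s2_hat], 2))  # close to the transmitted symbols
```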
Abstract:
The objective of this Master's thesis was to study how a procurement strategy could be better implemented in everyday work. A strategy map and balanced scorecard measures were used for the strategy implementation. The study was carried out in the procurement department of Vaasan & Vaasan Oy, a company operating in the bakery business in five countries in the Baltic Sea region. The procurement strategy map and balanced scorecard measures were drawn up on the basis of the existing procurement strategy. The traditional balanced scorecard perspectives were modified to better suit procurement. A supplier perspective was added to the model, and procurement's role as a support function for the main processes was taken into account by changing the customer perspective of the balanced scorecard into an internal customer perspective. The strategy map was also drawn up so that the expectations of procurement's internal customers stand on an equal footing with the financial objectives. The number of balanced scorecard measures was kept small so that their steering effect on operations would be as strong as possible. The measures can be changed as the strategy changes. The thesis found that the strategy map and the balanced scorecard are good tools for turning strategy into concrete actions in a support function as well, when, for example, the perspectives of the model are adapted case by case. Strategy maps and the balanced scorecard should be linked to the strategic planning of the whole company. Using a strategy map and balanced scorecard brings many benefits to procurement: employees perceive better their own contribution to achieving the company's objectives, and the tools used can also support the centralization of procurement across the group's units.
Abstract:
In bubbly flow simulations, the bubble size distribution is an important factor in the determination of hydrodynamics. Besides hydrodynamics, it is crucial for predicting the interfacial area available for mass transfer and the reaction rate in gas-liquid reactors such as bubble columns. The solution of population balance equations is a method which can help to model the size distribution by considering continuous bubble coalescence and breakage. Therefore, in computational fluid dynamics simulations it is necessary to couple CFD with a population balance model (CFD-PBM) to obtain a reliable distribution. In the current work a coupled CFD-PBM model is implemented as FORTRAN subroutines in ANSYS CFX 10 and tested for bubbly flow. This model uses the Multi Phase Multi Size Group approach previously presented by Sha et al. (2006) [18]. The current coupled CFD-PBM method considers an inhomogeneous flow field for the different bubble size groups in Eulerian multi-dispersed phase systems. Considering different velocity fields for the bubbles can give the advantage of a more accurate solution of the hydrodynamics. Compared to available commercial packages, it is also an improved method for predicting the bubble size distribution in multiphase flow.
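A minimal sketch of the population-balance side of such a coupling is shown below: number densities in size groups with doubling volumes evolve under equal-size coalescence and binary breakage. The constant kernels and all values are hypothetical placeholders, not the closures of the actual Multi Phase Multi Size Group implementation.

```python
# Minimal sketch of a discretized population balance: bubble number
# densities in size groups whose volumes double from group to group,
# with equal-size coalescence and binary breakage. The constant kernels
# C and B and all values are hypothetical placeholders.
import numpy as np

n_groups = 6
n = np.zeros(n_groups)
n[2] = 100.0                  # start with all bubbles in one size group
C, B, dt = 1e-3, 0.05, 0.01   # coalescence/breakage kernels, time step

def rhs(n):
    dn = np.zeros_like(n)
    for i in range(n_groups):
        if i > 0:             # gain: two (i-1)-bubbles merge into one i-bubble
            dn[i] += 0.5 * C * n[i - 1] ** 2
            dn[i] -= B * n[i]            # loss: an i-bubble breaks in two
        if i + 1 < n_groups:  # loss: pairs of i-bubbles coalesce upward
            dn[i] -= C * n[i] ** 2
            dn[i] += 2.0 * B * n[i + 1]  # gain: an (i+1)-bubble breaks in two
    return dn

for _ in range(2000):         # explicit Euler time stepping
    n += dt * rhs(n)

volumes = 2.0 ** np.arange(n_groups)   # doubling volume grid (arbitrary units)
print("number densities:", np.round(n, 2))
print("total gas volume:", round(float((n * volumes).sum()), 2))  # conserved, ~400
```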
Abstract:
Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the generation of silos which hamper the establishment of a more general and transversal model. Forensic intelligence has shown some important perspectives, but more general developments are required to address persistent challenges. This will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of traces considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe regarding the monitoring of apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article will specifically address issues related to comparison processes, decision points and organisational issues regarding forensic intelligence (part II).
Abstract:
This study focuses on the development of procurement as part of the execution of plant projects. Its empirical background lies in the project business of Pöyry Oyj, and the chosen viewpoint is that of the company responsible for project administration. The study is very practically oriented: it starts from the problems of procurement and its monitoring and seeks to offer novel solutions to them. Fundamentally the study belongs to industrial engineering and management, although information systems science plays a strong supporting role. The objectives and results of the work relate, as is characteristic of industrial management, to the development of the company's operations, while the tools and solutions used exploit the possibilities offered by information systems science. The study applies the constructive research approach, in which innovative constructions are created to solve genuine real-world problems, thereby producing contributions to industrial engineering and management. The goal was to organize procurement and its monitoring in large plant projects more efficiently. To this end, the working instructions for project administration and procurement were first revised to better meet present-day requirements. On the basis of these instructions, work began on a procurement software system capable of covering all the features described in the instructions. Eventually the procurement software brought new features to project administration and procurement, and these were incorporated into the working instructions. This development work was undertaken so that project administration and procurement in plant projects would perform better, that is, produce the results needed in projects faster, more accurately and with higher quality at lower cost. The study has three kinds of results: improved procurement methods, the operational and computational models underlying the procurement software, and, as an implementation, the procurement application itself. The revised project and procurement instructions describe the improved procurement methods. The descriptions produced while designing and developing the procurement software contain new models both for the procurement process and for monitoring procurement in large plant projects. The software itself is, as a result, an implementation based on the improved procurement methods and the new operational and computational models. The revised project and procurement instructions have been in use at Pöyry Oyj since 1991. Over the years these instructions have supported the execution of hundreds of plant projects and maintained Pöyry Oyj's competitiveness as an international project house. The procurement application, in turn, has been used in several projects and has been observed to reduce the direct labour costs of procurement in plant projects. The application is also considered to bring indirect cost savings in the form of better procurement decisions, but the magnitude of these savings cannot be reliably estimated.
Abstract:
Direct torque control (DTC) has become an accepted vector control method beside current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC the stator flux linkage reference is usually kept constant; achieving the minimum current requires control of the reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented, and the control of the reference above the base speed is also considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
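The initial-angle estimation lends itself to a compact illustration: inductance measured in several directions is fitted to a model that is nonlinear in the rotor angle. The sketch below assumes a standard two-harmonic saliency model, which may differ from the thesis's exact model, and uses hypothetical parameter values.

```python
# Hedged sketch of initial rotor angle estimation: fit inductances measured
# in several directions to L(a) = L0 + L2*cos(2*(a - theta_r)), a standard
# saliency model assumed here for illustration. Values are hypothetical.
import numpy as np
from scipy.optimize import least_squares

def model(params, angles):
    L0, L2, theta_r = params
    return L0 + L2 * np.cos(2.0 * (angles - theta_r))

angles = np.linspace(0.0, np.pi, 12, endpoint=False)  # measurement directions
true_params = (5.0e-3, 0.8e-3, 0.7)                   # hypothetical L0, L2, angle
rng = np.random.default_rng(2)
measured = model(true_params, angles) + 2e-5 * rng.normal(size=angles.size)

fit = least_squares(lambda p: model(p, angles) - measured, x0=(4e-3, 1e-3, 0.0))
theta_est = fit.x[2] % np.pi   # saliency yields the angle only modulo pi;
                               # magnet polarity must be resolved separately
print(f"estimated rotor angle ~ {theta_est:.3f} rad (true 0.700)")
```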
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is supported by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. In normal operation the low exothermicity of the SCR reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself, so a normal mode of operation usually requires a supply of supplementary heat, which increases the overall operating cost of the process. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - corresponds to the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Beside its advantages, the RFR presents as its main disadvantage the 'wash out' phenomenon, namely emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR which is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the necessary information to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such complex interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research studies concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Attention to the above-mentioned aspects is important when higher activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns. The prediction of the pseudo-steady or steady-state reactor performance (regarding conversion, selectivity and thermal behavior) and of the dynamic reactor response during exploitation are also important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort; usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices which are capable of maintaining auto-thermal behavior in the case of low exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was realized. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information in a fast and easy way about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach was used. This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two above-mentioned devices was realized. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation allowed a simplification of the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior in the case of the low exothermic reaction of SCR of NOx with ammonia and low-temperature gas feeding. Besides the influence of the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, was stressed. This analysis is important not only because it allows a comparison between the two devices and an optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics by taking into account perspectives that have not been analyzed yet. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
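The trapping mechanism that distinguishes these devices can be illustrated with a deliberately minimal model: a hot zone advected through a 1D bed with periodic reversal of the flow direction stays inside the bed instead of being washed out. The sketch below uses first-order upwind advection only, with hypothetical parameters and no reaction source.

```python
# Minimal sketch of the reverse-flow idea: periodic reversal of the feed
# side keeps a thermal wave trapped in a 1D bed. Pure upwind advection of
# an initial hot zone; all parameters are hypothetical placeholders.
import numpy as np

nz = 200
dz = 1.0 / nz
u = 0.3               # effective thermal-wave velocity (arbitrary units)
dt = 0.5 * dz / u     # CFL = 0.5, stable for first-order upwind
switch_every = 100    # steps between flow reversals (the switching time)

z = np.linspace(0.0, 1.0, nz)
T = np.where(np.abs(z - 0.5) < 0.1, 600.0, 300.0)  # hot zone mid-bed [K]
T_in = 300.0                                       # cold feed

direction = +1
for step in range(2000):
    if step % switch_every == 0:
        direction = -direction          # reverse the feed side
    Tn = T.copy()
    if direction > 0:                   # feed enters at z = 0
        Tn[1:] = T[1:] - u * dt / dz * (T[1:] - T[:-1])
        Tn[0] = T_in
    else:                               # feed enters at z = 1
        Tn[:-1] = T[:-1] + u * dt / dz * (T[1:] - T[:-1])
        Tn[-1] = T_in
    T = Tn

print(f"maximum bed temperature: {T.max():.0f} K")  # hot zone survives cycling
print(f"hot zone centre: z = {z[T.argmax()]:.2f}")  # and stays inside the bed
```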
Abstract:
Existing digital rights management (DRM) systems, initiatives like Creative Commons, and research works such as digital rights ontologies provide limited support for modelling and managing content value chains. This is becoming a critical issue as content markets start to profit from the possibilities of digital networks and the World Wide Web. The objective is to support the whole copyrighted content value chain across enterprise or business niche boundaries. Our proposal provides a framework that accommodates copyright law and a rich creation model in order to cope with all the stages of the creation life cycle. The dynamic aspects of value chains are modelled using a hybrid approach that combines ontology-based and rule-based mechanisms. The ontology implementation is based on the Web Ontology Language with Description Logic expressivity (OWL-DL), and reasoners are directly used for license checking. For the more complex aspects of the dynamics of content value chains, rule languages are the choice.
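A toy fragment of ontology-plus-rules license checking in this spirit is sketched below: RDF statements describe a creation and a licence, and a query decides whether a use is permitted. The vocabulary is hypothetical; a real system would load the OWL-DL ontology, let a DL reasoner classify, and delegate the dynamic aspects to a rule engine.

```python
# Toy sketch of ontology-based license checking: RDF statements describe a
# creation and a licence, and a SPARQL ASK stands in for the rule layer.
# The namespace and property names are hypothetical placeholders.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/copyrightonto#")  # hypothetical vocabulary
g = Graph()
g.add((EX.song1, EX.derivedFrom, EX.poem1))
g.add((EX.licence1, EX.covers, EX.poem1))
g.add((EX.licence1, EX.permits, EX.Adaptation))

# Rule-like check: an adaptation of a work is allowed if some licence on
# that work permits Adaptation.
allowed = g.query("""
    ASK WHERE {
        ex:song1 ex:derivedFrom ?work .
        ?lic ex:covers ?work ;
             ex:permits ex:Adaptation .
    }""", initNs={"ex": EX})
print(bool(allowed))  # True: the adaptation is licensed
```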
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of their contactless operating principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make the use of AMBs appropriate primarily for specific and highly demanding applications. High-performance and high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and a considerable computational burden. Fortunately, in the last few years advancements in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field programmable gate array (FPGA) circuits are investigated. The optimal design is guided by the physical constraints of the system when selecting the optimal weighting matrices. The plant model is complemented by augmenting appropriate disturbance models. A compensation of the force-field nonlinearities is proposed for decreasing the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations or the vibrations in the measured positions is presented. The theoretical studies are verified by practical experiments utilizing a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. To sum up, this work makes a step towards an embedded single-chip FPGA-based controller for AMBs.
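The LQ design step can be illustrated on a deliberately low-order model: one AMB axis as a rigid mass with destabilizing position stiffness, Euler-discretized, with the gain obtained from the discrete Riccati equation. All parameter values below are hypothetical; the thesis works with high-order flexible-rotor models and augmented disturbance models.

```python
# Minimal sketch of discrete LQ design for one AMB axis: a rigid rotor mass
# with negative position stiffness k_x (open-loop unstable) and current
# gain k_i, Euler-discretized. Parameter values are hypothetical.
import numpy as np
from scipy.linalg import solve_discrete_are

m, k_x, k_i, dt = 10.0, 4.0e5, 200.0, 1.0e-4   # kg, N/m, N/A, s

A = np.array([[1.0, dt],
              [k_x / m * dt, 1.0]])            # destabilizing stiffness term
B = np.array([[0.0],
              [k_i / m * dt]])

Q = np.diag([1.0e6, 1.0])                      # weight position error heavily
R = np.array([[1.0]])                          # weight on control current

P = solve_discrete_are(A, B, Q, R)             # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQ state-feedback gain

eigs = np.linalg.eigvals(A - B @ K)
print("closed-loop |eigenvalues|:", np.round(np.abs(eigs), 4))  # all < 1
```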
Abstract:
Key management has a fundamental role in secure communications. Designing and testing key management protocols is tricky: these protocols must work flawlessly despite any abuse. The main objective of this work was to design and implement a tool that helps to specify such a protocol and makes it possible to test the protocol while it is still under development. The tool generates compile-ready Java code from a key management protocol model. A modelling method for these protocols, based on the Unified Modeling Language (UML), was also developed. The protocol is modelled, exported as XMI and read by the code generator tool. The code generator produces Java code that, after compilation, is immediately executable with a test software.
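A minimal sketch of the XMI-to-Java generation step is given below: a UML class exported as XMI is parsed and a compilable Java skeleton is emitted. The XMI fragment is a simplified, hypothetical structure; real XMI uses tool-specific namespaces and a much richer schema.

```python
# Minimal sketch of the XMI -> Java generation step: parse a UML class
# exported as XMI and emit a compilable Java skeleton. The XMI fragment
# below is a simplified, hypothetical structure.
import xml.etree.ElementTree as ET

XMI = """
<XMI><Model>
  <Class name="KeyExchange">
    <Attribute name="sessionKey" type="byte[]"/>
    <Operation name="initiate" returns="void"/>
  </Class>
</Model></XMI>
"""

root = ET.fromstring(XMI)
for cls in root.iter("Class"):
    lines = [f"public class {cls.get('name')} {{"]
    for attr in cls.iter("Attribute"):
        lines.append(f"    private {attr.get('type')} {attr.get('name')};")
    for op in cls.iter("Operation"):
        lines.append(f"    public {op.get('returns')} {op.get('name')}() {{")
        lines.append("        // TODO: generated protocol logic goes here")
        lines.append("    }")
    lines.append("}")
    print("\n".join(lines))  # compile-ready Java skeleton
```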