938 results for analytical source model
Abstract:
Report for the scientific sojourn carried out at the Université Catholique de Louvain, Belgium, from March until June 2007. In the first part, the impact of important geometrical parameters, such as source and drain thickness, fin spacing and spacer width, on the parasitic fringing capacitance component of multiple-gate field-effect transistors (MuGFETs) is analyzed in depth using finite element simulations. Several architectures, such as single-gate, FinFET (double-gate) and triple-gate devices represented by Pi-gate MOSFETs, are simulated and compared in terms of channel and fringing capacitances for the same occupied die area. Simulations highlight the great impact of diminishing the spacing between fins for MuGFETs and the trade-off between the reduction of parasitic source and drain resistances and the increase of fringing capacitances when Selective Epitaxial Growth (SEG) technology is introduced. The impact of these technological solutions on the transistor cut-off frequencies is also discussed. The second part deals with the study of the effect of volume inversion (VI) on the capacitances of undoped Double-Gate (DG) MOSFETs. For that purpose, we present simulation results for the capacitances of undoped DG MOSFETs using an explicit and analytical compact model. It demonstrates that the transition from the volume inversion regime to dual-gate behaviour is well simulated. The model shows an accurate dependence on the silicon layer thickness, consistent with two-dimensional numerical simulations, for both thin and thick silicon films. Whereas the current drive and transconductance are enhanced in the volume inversion regime, our results show that intrinsic capacitances present higher values as well, which may limit the high-speed (delay time) behaviour of DG MOSFETs under the volume inversion regime.
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
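The qualitative models exchanged via SBML qual are typically logical (e.g. Boolean) networks. As a minimal illustration of the kind of model the format encodes, the sketch below simulates a hypothetical three-gene Boolean network under synchronous updating; the genes and rules are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a qualitative (Boolean) regulatory network of the kind
# SBML qual can encode. The 3-gene network and its rules are hypothetical.

def step(state):
    """One synchronous update of the toy network."""
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,        # C represses A
        "B": a,            # A activates B
        "C": a and b,      # A and B jointly activate C
    }

def trajectory(state, n):
    """States visited over n synchronous updates, including the start state."""
    states = [state]
    for _ in range(n):
        state = step(state)
        states.append(state)
    return states

states = trajectory({"A": True, "B": False, "C": False}, 6)
```

Iterating the update rule reveals the network's attractor: this toy system cycles with period 5, the kind of dynamical property the complementary software tools mentioned above are designed to analyse.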
Abstract:
The objective of this paper is to present a generalized analytical-numerical model of the internal flow in heat pipes. The model is based on a two-dimensional formulation of the energy and momentum equations in the vapour and liquid regions and also in the metallic tube. The numerical solution of the model is obtained by using the LOAD discretization scheme and the SIMPLE numerical code. The flow fields, as well as the pressure fields, for different geometries were obtained and discussed. Copyright © 1996 Elsevier Science Ltd.
Abstract:
In this article, based on the understanding that theoretical models support the comprehension of investigated phenomena, we aim to elucidate the concepts of Norbert Elias's sociological theory, considering that it is an excellent analytical source for understanding the universe of being a teacher, even though the author does not directly address questions related to the field of education. From the concept of figuration, it can be said that the constitution of being a teacher results from the different figurations in which the teacher is immersed, since, according to Elias, people (teachers, parents, administrators, ministers, students, etc.) shape their ideas from all their experiences and, essentially, from the experiences lived within their own group. It is observable that figurations, formed by interdependent groups of people rather than by singular individuals, appear increasingly enlarged in school contexts, with specialized and specific functions (teachers, students, principals, coordinators, supervisors, secretaries, etc.), in groups that become ever more functionally dependent. The chains of interdependence are more differentiated and, consequently, more opaque and harder to control by any group or individual. Therefore, a better understanding will be possible when the figurations at play in Brazilian education are studied empirically. Hence the justification for analyzing the figurations and the webs of interdependence in which teachers are involved. Finally, applying the competition models addressed by Elias makes it possible to highlight the sociological problems of being a teacher, making them more evident and facilitating an understanding of the game so as to reorganize it in terms of balance in the social web.
Abstract:
We present two automatic algorithms, based on the Wiener-Hopf least-squares method, for computing linear digital filters for the sine, cosine and Hankel J0, J1 and J2 transforms. The first algorithm optimizes the parameters abscissa increment, initial abscissa and shift factor used to compute the coefficients of linear digital filters that are assessed through cosine and sine transforms; the second optimizes the parameters abscissa increment and initial abscissa used to compute the coefficients of linear digital filters that are assessed through Hankel J0, J1 and J2 transforms. These algorithms led to proposals of new linear digital filters of 19, 30 and 40 points for the cosine and sine transforms, and of new optimized filters of 37, 27 and 19 points for the J0, J1 and J2 transforms, respectively. The performance of the new filters relative to the filters in the geophysical literature is evaluated using a geophysical model consisting of two half-spaces. As a source, an infinite current line between the half-spaces was used, giving rise to cosine and sine transforms. Better performance was verified in most simulations using the new 19-point cosine filter compared with simulations using the 19-point cosine filter from the literature. Equivalent performance was also verified in simulations using the new 19-point sine filter compared with the 20-point sine filter from the literature. Additionally, a vertical magnetic dipole between the half-spaces was also used as a source, giving rise to J0 and J1 transforms; better performance was verified in most simulations using the new 27-point J1 filter compared with the 47-point J1 filter from the literature.
Equivalent performance was also verified in most simulations using the new 37-point J0 filter compared with the 61-point J0 filter from the literature. A horizontal magnetic dipole between the half-spaces was also used as a source, and the new 37- and 27-point filters for the J0 and J1 transforms performed analogously to what was described above relative to the 61- and 47-point filters from the literature for those transforms. Finally, equivalent performance was verified between the new 37-point J0 and 27-point J1 filters and the 61- and 47-point filters from the literature, respectively, when applied to vertical electrical sounding models (Wenner and Schlumberger). Most of our filters contain few coefficients compared with those generally used in geophysics. This aspect is very important because transforms using linear digital filters are used massively in geophysical numerical problems.
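The least-squares design idea behind such filters can be sketched as follows: the filter approximates a transform F(r) ≈ (1/r) Σ_j w_j f(b_j/r) with log-spaced abscissas b_j, and the coefficients w_j are obtained by solving a linear least-squares problem against a known transform pair. The sampling parameters and filter length below are illustrative, not the optimized values proposed in the thesis.

```python
import numpy as np

# Least-squares design of a digital linear filter for the sine transform,
# in the spirit of the Wiener-Hopf approach described above.
# Known pair used for the design: f(x) = exp(-x)  ->  F(r) = r / (1 + r^2),
# since the sine transform of exp(-x) is r / (1 + r^2).

r = np.logspace(-1, 1, 200)        # output abscissas used to fit the filter
b = np.logspace(-3, 3, 40)         # log-spaced filter abscissas (illustrative)

# Design matrix: A[i, j] = f(b_j / r_i) / r_i
A = np.exp(-b[None, :] / r[:, None]) / r[:, None]
F_true = r / (1.0 + r**2)

# Filter coefficients from linear least squares (Wiener-Hopf normal equations
# are solved here via numpy's SVD-based lstsq for numerical robustness).
w, *_ = np.linalg.lstsq(A, F_true, rcond=None)

F_fit = A @ w
max_rel_err = np.max(np.abs(F_fit - F_true) / F_true)
```

In practice the thesis additionally optimizes the abscissa increment, initial abscissa and shift factor themselves, whereas this sketch fixes them and solves only for the coefficients.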
Abstract:
Extensible Business Reporting Language (XBRL) is being adopted by European regulators as a data standard for the exchange of business information. This paper examines the approach of XBRL International (XII) to the meta-data standard's development and diffusion. We theorise the development of XBRL using concepts drawn from a model of successful open source projects. Comparison of the open source model to XBRL enables us to identify a number of interesting similarities and differences. In common with open source projects, the benefits and progress of XBRL have been overstated and 'hyped' by enthusiastic participants. While XBRL is an open data standard in terms of access to the equivalent of its 'source code', we find that the governance structure of the XBRL consortium is significantly different from a model open source approach. The barrier to participation created by requiring paid membership, together with a focus on transacting business at physical conferences and meetings, is identified as particularly critical. Decisions about the technical structure of XBRL, the regulator-led pattern of adoption and the organisation of XII are discussed. Finally, areas for future research are identified.
Abstract:
The objective of this work was to explore the performance of a recently introduced source extraction method, FSS (Functional Source Separation), in recovering induced oscillatory change responses from extra-cephalic magnetoencephalographic (MEG) signals. Unlike algorithms used to solve the inverse problem, FSS does not make any assumption about the underlying biophysical source model; instead, it makes use of task-related features (functional constraints) to estimate the source(s) of interest. FSS was compared with blind source separation (BSS) approaches such as Principal and Independent Component Analysis (PCA and ICA), which are not subject to any explicit forward solution or functional constraint, but require source uncorrelatedness (PCA) or independence (ICA). A visual MEG experiment with signals recorded from six subjects viewing a set of static horizontal black/white square-wave grating patterns at different spatial frequencies was analyzed. The beamforming technique Synthetic Aperture Magnetometry (SAM) was applied to localize task-related sources; the obtained spatial filters were used to automatically select BSS and FSS components in the spatial area of interest. Source spectral properties were investigated by using Morlet-wavelet time-frequency representations, and significant task-induced changes were evaluated by means of a resampling technique; the resulting spectral behaviours in the gamma frequency band of interest (20-70 Hz), as well as the spatial frequency-dependent gamma reactivity, were quantified and compared among methods. Among the tested approaches, only FSS was able to estimate the expected sustained gamma activity enhancement in primary visual cortex throughout the whole duration of the stimulus presentation for all subjects, and to obtain sources comparable to invasively recorded data.
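The ICA baseline discussed above can be illustrated on a toy problem: two known sources are linearly mixed and FastICA is asked to recover them from the mixtures alone, using only the independence assumption. The signals and mixing matrix below are illustrative assumptions, not the MEG data of the study.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy blind source separation in the spirit of the ICA baseline above.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # smooth oscillatory source
s2 = np.sign(np.sin(3 * t))                # square-wave source
S = np.c_[s1, s2]
S += 0.05 * rng.standard_normal(S.shape)   # small sensor noise

A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                 # hypothetical mixing matrix
X = S @ A.T                                # observed mixtures (samples x channels)

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)               # recovered components (up to order/sign/scale)

# Match each true source to its best-correlated recovered component.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
best = corr.max(axis=1)
```

ICA recovers the sources only up to permutation, sign and scale, which is why the check correlates each true source against all recovered components; FSS, by contrast, pins down the component of interest via an explicit functional constraint.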
Abstract:
An ion chromatography procedure employing an IonPac AC15 concentrator column was used to investigate on-line preconcentration for the simultaneous determination of inorganic anions and organic acids in river water. Twelve organic acids and nine inorganic anions were separated without any interference from other compounds and without carry-over problems between samples. The injection loop was replaced by a Dionex AC15 concentrator column. The proposed procedure employed an auto-sampler that injected 1.5 mL of sample into a KOH mobile phase, generated by an eluent generator at 1.5 mL min-1, which carried the sample to the chromatographic columns (one guard column, model AG15, and one analytical column, model AS15, 250 x 4 mm i.d.). The gradient elution consisted of a 10.0 mmol L-1 KOH solution from 0 to 6.5 min, gradually increased to 45.0 mmol L-1 KOH at 21 min, then immediately returned to and maintained at the initial concentration until the end of the 24 min run. The compounds were eluted and transported to an electro-conductivity detection cell attached to an electrochemical detector. The advantage of using the concentrator column was the capability of performing routine simultaneous determinations from 0.01 to 1.0 mg L-1 for organic acids (acetate, propionic acid, formic acid, butyric acid, glycolic acid, pyruvate, tartaric acid, phthalic acid, methanesulfonic acid, valeric acid, maleic acid, oxalic acid, chlorate and citric acid) and from 0.01 to 5.0 mg L-1 for inorganic anions (fluoride, chloride, nitrite, nitrate, bromide, sulfate and phosphate), without extensive sample pretreatment and with an analysis time of only 24 minutes.
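The gradient elution programme above is a simple piecewise profile, sketched below as a function of run time (values taken directly from the procedure described; the function itself is just an illustration of the profile).

```python
# Gradient elution profile described above: 10.0 mmol/L KOH from 0 to
# 6.5 min, a linear ramp to 45.0 mmol/L at 21 min, then an immediate
# return to the initial concentration until the 24 min run ends.

def koh_concentration(t_min):
    """KOH eluent concentration (mmol/L) at run time t_min (minutes)."""
    if t_min < 0 or t_min > 24:
        raise ValueError("outside the 24 min run")
    if t_min <= 6.5:
        return 10.0
    if t_min <= 21.0:
        # linear ramp from 10.0 mmol/L at 6.5 min to 45.0 mmol/L at 21 min
        return 10.0 + (45.0 - 10.0) * (t_min - 6.5) / (21.0 - 6.5)
    return 10.0
```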
Abstract:
In an attempt to solve the bridge problem faced by many county engineers, this investigation focused on a low cost bridge alternative that consists of using railroad flatcars (RRFC) as the bridge superstructure. The intent of this study was to determine whether these types of bridges are structurally adequate and potentially feasible for use on low volume roads. A questionnaire was sent to the Bridge Committee members of the American Association of State Highway and Transportation Officials (AASHTO) to determine their use of RRFC bridges and to assess the pros and cons of these bridges based on others’ experiences. It was found that these types of bridges are widely used in many states with large rural populations and they are reported to be a viable bridge alternative due to their low cost, quick and easy installation, and low maintenance. A main focus of this investigation was to study an existing RRFC bridge that is located in Tama County, IA. This bridge was analyzed using computer modeling and field load testing. The dimensions of the major structural members of the flatcars in this bridge were measured and their properties calculated and used in an analytical grillage model. The analytical results were compared with those obtained in the field tests, which involved instrumenting the bridge and loading it with a fully loaded rear tandem-axle truck. Both sets of data (experimental and theoretical) show that the Tama County Bridge (TCB) experienced very low strains and deflections when loaded and the RRFCs appeared to be structurally adequate to serve as a bridge superstructure. A calculated load rating of the TCB agrees with this conclusion. Because many different types of flatcars exist, other flatcars were modeled and analyzed. It was very difficult to obtain the structural plans of RRFCs; thus, only two additional flatcars were analyzed. The results of these analyses also yielded very low strains and displacements. 
Taking into account the experiences of other states, the inspection of several RRFC bridges in Oklahoma, the field test and computer analysis of the TCB, and the computer analysis of two additional flatcars, RRFC bridges appear to provide a safe and feasible bridge alternative for low volume roads.
Abstract:
The relationship between electrophysiological and functional magnetic resonance imaging (fMRI) signals remains poorly understood. To date, studies have required invasive methods, have been limited to single functional regions, and thus cannot account for possible variations across brain regions. Here we present a method that uses fMRI data and single-trial electroencephalography (EEG) analyses to assess the spatial and spectral dependencies between the blood-oxygenation-level-dependent (BOLD) responses and the noninvasively estimated local field potentials (eLFPs) over a wide range of frequencies (0-256 Hz) throughout the entire brain volume. This method was applied in a study where human subjects completed separate fMRI and EEG sessions while performing a passive visual task. Intracranial LFPs were estimated from the scalp-recorded data using the ELECTRA source model. We compared statistical images from BOLD signals with statistical images of each frequency of the eLFPs. In agreement with previous studies in animals, we found a significant correspondence between LFP and BOLD statistical images in the gamma band (44-78 Hz) within primary visual cortices. In addition, significant correspondence was observed at low frequencies (<14 Hz) and also at very high frequencies (>100 Hz). Effects within extrastriate visual areas showed a different correspondence that included not only those frequency ranges observed in primary cortices but also additional frequencies. The results therefore suggest that the relationship between electrophysiological and hemodynamic signals might vary both as a function of frequency and of anatomical region.
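Frequency-resolved analyses of this kind typically rest on wavelet decompositions. Below is a minimal Morlet-wavelet power sketch: a 10 Hz test oscillation is decomposed across a grid of frequencies and the time-averaged power should peak at the signal frequency. The test signal, wavelet width and frequency grid are illustrative assumptions.

```python
import numpy as np

# Minimal Morlet-wavelet time-frequency power estimate of the kind used
# in frequency-resolved EEG/LFP analyses.
fs = 256.0
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t)        # 10 Hz test oscillation

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Time-averaged wavelet power at each frequency."""
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)  # Gaussian width in seconds
        tw = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sum(np.abs(wavelet))  # simple amplitude normalisation
        conv = np.convolve(signal, wavelet, mode="same")
        power[i] = np.mean(np.abs(conv) ** 2)
    return power

freqs = np.arange(2.0, 41.0, 1.0)
power = morlet_power(signal, fs, freqs)
peak_freq = freqs[np.argmax(power)]
```

The `n_cycles` parameter sets the usual time-frequency trade-off: more cycles give sharper frequency resolution at the cost of temporal smearing.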
Abstract:
This thesis presents an alternative approach to the analytical design of surface-mounted axial-flux permanent-magnet machines. Emphasis has been placed on the design of axial-flux machines with a one-rotor-two-stators configuration. The design model developed in this study incorporates facilities to include both the electromagnetic and thermal design of the machine, as well as to take into consideration the complexity of the permanent-magnet shapes, which is a typical requirement for the design of high-performance permanent-magnet motors. A prototype machine with a rated output power of 5 kW at a rotation speed of 300 min-1 has been designed and constructed for the purposes of ascertaining the results obtained from the analytical design model. A comparative study of low-speed axial-flux and low-speed radial-flux permanent-magnet machines is presented. The comparative study concentrates on 55 kW machines with rotation speeds of 150 min-1, 300 min-1 and 600 min-1 and is based on calculated designs. A novel comparison method is introduced. The method takes into account the mechanical constraints of the machine and enables comparison of the designed machines with respect to the volume, efficiency and cost aspects of each machine. It is shown that an axial-flux permanent-magnet machine with a one-rotor-two-stators configuration generally has a lower efficiency than a radial-flux permanent-magnet machine if the same electric loading, air-gap flux density and current density are applied to all designs. On the other hand, axial-flux machines are usually smaller in volume, especially when compared to radial-flux machines for which the length ratio (axial length of the stator stack vs. air-gap diameter) is below 0.5. The comparison results also show that radial-flux machines with a low number of pole pairs, p < 4, outperform the corresponding axial-flux machines.
Abstract:
The Department of Archaeology at the University of Turku excavated a dwelling site on the historical village plot of Ihala in Raisio, the so-called Mulli field, where remains of the wooden parts of buildings were found in an exceptionally good state of preservation by Finnish standards. The find is unique in Finnish conditions and has international significance as well, because well-preserved late Iron Age and early medieval rural dwelling sites with surviving wooden remains are rare, especially in the eastern Baltic region. The buildings were reconstructed using the 'Tight Local Analogy' method, in particular the direct historical analogical approach. For this purpose, an archaeological, historical and ethnographic source model was first compiled, selected from geographically and chronologically relevant research material from the northern Baltic region. Information on the buildings and building technology of southwestern Finland was considered the most important part of the model, owing to historical and spatial continuity. The source model was then combined with the archaeological material from Mulli, and the building reconstructions were obtained as the result of the analysis. At least six different buildings on four different building plots could be reconstructed at Mulli. The building technology was based on long horizontal wall logs supporting the roof, joined at the corners either by notched corner joints or by post-and-groove joints. In all buildings the outer wall length was 5-7 metres. Clay and wooden floors were also found, as well as two fireplaces: a clay-domed oven and an open hearth. On the basis of the abundant burnt clay, it can be concluded that the roof was most probably a gabled purlin roof covered with wood and/or turf. All the buildings were of the same type, comprising a larger room and a narrow entrance hall. All the analysed wood was pine. In the outdoor area, middens, ditches, fences and various storage pits were also found. The buildings have been dated from the late 10th century to the late 13th century (cal AD).
Finally, the buildings were examined in their communal setting, together with their temporal position and the inhabitants' various spatial experiences and connections. Ihala in Raisio is analysed through social identity and its material manifestations. These social identities are formed by communication networks at different spatial and communal levels: 1) the household with its everyday activities, and the family and kinship relations with their traditions; 2) local identity: the building, the building site, the dwelling site environment and its use, the farm and the village; 3) the village of Ihala in Raisio in its wider regional context in the northern Baltic region: the contact networks of merchants and craftsmen, and religious identity and its changes.
Abstract:
Today’s electrical machine technology allows increasing the wind turbine output power by an order of magnitude from the technology that existed only ten years ago. However, it is sometimes argued that high-power direct-drive wind turbine generators will prove to be of limited practical importance because of their relatively large size and weight. The limited space for the generator in a wind turbine application, together with the growing use of wind energy, poses a challenge for design engineers who are trying to increase torque without making the generator larger. When it comes to high torque density, the limiting factor in every electrical machine is heat, and if the electrical machine parts exceed their maximum allowable continuous operating temperature, even for a short time, they can suffer permanent damage. Therefore, highly efficient thermal design or cooling methods are needed. One of the promising solutions for enhancing the heat transfer performance of high-power, low-speed electrical machines is direct cooling of the windings. This doctoral dissertation proposes a rotor-surface-magnet synchronous generator with a fractional-slot non-overlapping stator winding made of hollow conductors, through which liquid coolant can be passed directly while current is applied, in order to increase the convective heat transfer capabilities and reduce the generator mass. The dissertation focuses on the electromagnetic design of a liquid-cooled direct-drive permanent-magnet synchronous generator (LC DD-PMSG) for a direct-drive wind turbine application. The analytical calculation of the magnetic field distribution is carried out with the aim of fast and accurate prediction of the main dimensions of the machine, especially the thickness of the permanent magnets, the generator's electromagnetic parameters, and the design optimization. The focus is on a generator design with a fractional-slot non-overlapping winding placed into open stator slots.
This is an a priori selection to guarantee easy manufacturing of the LC winding. A thermal analysis of the LC DD-PMSG, based on a lumped-parameter thermal model, is performed to evaluate the generator's thermal performance. The thermal model was adapted to take into account the uneven copper loss distribution resulting from the skin effect, as well as the effect of temperature on the copper winding resistance and on the thermophysical properties of the coolant. The developed lumped-parameter thermal model and the analytical calculation of the magnetic field distribution can both be integrated with the presented algorithm to optimize an LC DD-PMSG design. Based on an instrumented small prototype with liquid-cooled tooth-coils, the following targets have been achieved: experimental determination of the performance of the direct liquid cooling of the stator winding and validation of the temperatures predicted by the analytical thermal model; proof of the feasibility of manufacturing the liquid-cooled tooth-coil winding; and demonstration of the objectives of the project to potential customers.
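A lumped-parameter thermal model of the kind described above reduces the machine to a small network of thermal nodes connected by conductances, whose steady state follows from a linear system G·T = P. The sketch below solves a deliberately tiny two-node network (winding and stator iron, with fixed coolant and ambient temperatures); all conductances, losses and boundary temperatures are illustrative assumptions, not the dissertation's values.

```python
import numpy as np

# Steady state of a minimal lumped-parameter thermal network.
# Nodes: 0 = winding, 1 = stator iron; coolant and ambient are
# fixed-temperature boundary nodes.
G_w_fe = 5.0       # winding <-> iron conductance, W/K
G_w_cool = 20.0    # winding <-> liquid coolant, W/K (direct cooling path)
G_fe_amb = 8.0     # iron <-> ambient, W/K
T_cool, T_amb = 40.0, 20.0     # boundary temperatures, deg C
P = np.array([800.0, 300.0])   # copper and iron losses, W

# Conductance matrix: each diagonal entry collects all conductances
# leaving that node; off-diagonals couple the internal nodes.
G = np.array([[G_w_fe + G_w_cool, -G_w_fe],
              [-G_w_fe,           G_w_fe + G_fe_amb]])
rhs = P + np.array([G_w_cool * T_cool, G_fe_amb * T_amb])

T = np.linalg.solve(G, rhs)    # [T_winding, T_iron] in deg C
```

The full model in the dissertation refines this picture with temperature-dependent winding resistance and coolant properties, which turns the linear solve into a short fixed-point iteration.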
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space are generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant compared with the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed by flexible serial links and joints. Moreover, by combining the VJM variant and the virtual work principle, a system-wide analytical stiffness model can be easily obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle. Numerical results of its stiffness performance are reported.
For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning in the dissertation. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield infinitely many stiffness performances. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), when a particular trajectory of its end-effector has been prescribed. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with respect to the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine due to the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed on the basis of the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated on a parallel manipulator in a software environment. An experimental study of this feedforward control has also been included in the dissertation. The difficulty of modelling the actual system, owing to the unknown components in its dynamics, is noted.
As a solution, a back-propagation (BP) neural network is proposed for identification of the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. Validation of the BP neural network and the modified Levenberg-Marquardt algorithm is done, respectively, by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation of a parallel manipulator, which represent three different application aspects of this method.
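The DE-based redundancy resolution described above can be sketched on a toy problem: a redundant planar 3R arm (three joints, a 2-D end-effector position) where DE searches for joint angles that reach a prescribed point while optimizing a stiffness proxy, here taken as the condition number of the kinematic Jacobian. The arm geometry, target point and penalty weighting are illustrative assumptions, not the IWR model.

```python
import numpy as np
from scipy.optimize import differential_evolution

L = np.array([1.0, 1.0, 1.0])        # link lengths (illustrative)
target = np.array([1.5, 0.5])        # prescribed end-effector position

def fk(q):
    """Planar forward kinematics: end-effector position for joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)),
                     np.sum(L * np.sin(angles))])

def jacobian(q):
    """2x3 kinematic Jacobian of the planar 3R arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(angles[j:]))
    return J

def objective(q):
    # Heavy penalty keeps the end-effector on target; the Jacobian
    # condition number stands in for the stiffness performance index.
    err = np.linalg.norm(fk(q) - target)
    return 1000.0 * err**2 + np.linalg.cond(jacobian(q))

res = differential_evolution(objective, bounds=[(-np.pi, np.pi)] * 3, seed=1)
reach_error = np.linalg.norm(fk(res.x) - target)
```

Because the arm is redundant, many joint configurations reach the same target; the population-based DE search picks among them by the secondary criterion, which is the same role it plays in the dissertation's joint trajectory planning.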
Abstract:
Model transformation consists in transforming a source model into a target model in accordance with source and target meta-models. We distinguish two types of transformation. The first is exogenous, where the source and target meta-models represent different formalisms and where all elements of the source model are transformed. When a single formalism is concerned, the transformation is endogenous. This type of transformation generally requires two steps: identifying the elements of the source model to be transformed, then transforming those elements. In this thesis, we propose three main contributions related to these transformation problems. The first contribution is the automation of model transformations. We propose to treat the transformation problem as a combinatorial optimization problem in which a target model can be automatically generated from a small number of transformation examples. This first contribution can be applied to exogenous or endogenous transformations (after the elements to be transformed have been detected). The second contribution concerns endogenous transformation, where the elements of the source model to be transformed must be detected. We propose an approach for detecting design defects as a preliminary step to refactoring. This approach is inspired by the principle by which the human immune system detects viruses, called negative selection. The idea is to use good implementation practices to detect the risky parts of the code. The third contribution aims to test a transformation mechanism using an oracle function to detect errors. We adapted the negative selection mechanism, which consists in treating as an error any deviation between the transformation traces to be evaluated and a base of examples containing high-quality transformation traces.
The oracle function computes this dissimilarity, and errors are ranked according to this score. The different contributions were evaluated on large projects, and the results obtained demonstrate their effectiveness.
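The oracle function described above can be sketched as a nearest-neighbour dissimilarity score: each transformation trace is encoded as a feature vector, and its error score is its distance to the closest trace in the base of known-good examples, with candidates then ranked by score. The trace encoding and the data below are illustrative assumptions, not the thesis's actual trace format.

```python
import numpy as np

rng = np.random.default_rng(0)

# Base of good-quality transformation traces, encoded as feature vectors.
good_traces = rng.normal(loc=0.0, scale=1.0, size=(50, 4))

def oracle_score(trace, base):
    """Dissimilarity of a trace to the base: distance to the nearest good trace."""
    return np.min(np.linalg.norm(base - trace, axis=1))

# Traces to evaluate: one consistent with the base, one deviating strongly.
normal_trace = rng.normal(0.0, 1.0, size=4)
deviant_trace = np.full(4, 8.0)

# Rank candidate traces by decreasing score, so likely errors come first.
scores = sorted(
    [("normal", oracle_score(normal_trace, good_traces)),
     ("deviant", oracle_score(deviant_trace, good_traces))],
    key=lambda kv: kv[1], reverse=True)
```

In negative-selection terms, a large distance to every "self" example means no good trace covers the candidate, so it is flagged as a deviation.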