813 results for "Constraint based modelling"


Relevance: 30.00%

Abstract:

This thesis begins by presenting the main characteristics and application fields of the AlGaN/GaN HEMT technology, focusing on reliability aspects essentially due to the presence of low-frequency dispersive phenomena, which limit in several ways the microwave performance of these devices. Based on an equivalent-voltage approach, a new low-frequency device model is presented in which the dynamic nonlinearity of the trapping effect is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are critical for power amplifier design, such as power-added efficiency, dissipated power and internal device temperature. An innovative and low-cost measurement setup for the characterization of the device under low-frequency, large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low-frequency model through suitable procedures explained in detail. The thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal resistance extraction. The novel contribution of the proposed approach concerns the nonlinear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on temperature is quite pronounced. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
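
As a side note on the electrothermal point above: once the thermal resistance depends on temperature, the channel temperature is defined only implicitly and must be solved iteratively. A minimal sketch follows, assuming a hypothetical linear R_th(T) law with placeholder coefficients (not the values extracted in the thesis):

```python
# Illustrative sketch (not the thesis code): channel-temperature estimation
# when the thermal resistance depends on temperature.
# The linear R_th(T) law and all coefficients are hypothetical placeholders.

def r_th(t_channel_c, r0=8.0, alpha=0.004, t_ref_c=25.0):
    """Temperature-dependent thermal resistance [K/W] (assumed linear law)."""
    return r0 * (1.0 + alpha * (t_channel_c - t_ref_c))

def channel_temperature(p_diss_w, t_base_c=25.0, tol=1e-6, max_iter=100):
    """Fixed-point iteration of T = T_base + R_th(T) * P_diss."""
    t = t_base_c
    for _ in range(max_iter):
        t_new = t_base_c + r_th(t) * p_diss_w
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

print(channel_temperature(5.0))  # base temperature + nonlinear self-heating rise
```

With a constant R_th the loop converges in one step; the steeper the assumed R_th(T) slope, the larger the gap between the linear and the nonlinear temperature estimate.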

Relevance: 30.00%

Abstract:

This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic that has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is increasingly urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be modeled easily and solved with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search and encloses concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular, we show its integration within the CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts into CP-based frameworks.
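
For reference, the Local Branching scheme mentioned above restricts the search to a Hamming-distance neighbourhood of the incumbent solution $\bar{x}$; for binary variables with support $\bar{S} = \{ j : \bar{x}_j = 1 \}$, the neighbourhood is defined by the linear constraint (this is Fischetti and Lodi's original MIP form; the CP implementation in the thesis expresses the same neighbourhood with CP constraints):

$$ \Delta(x,\bar{x}) \;=\; \sum_{j \in \bar{S}} (1 - x_j) \;+\; \sum_{j \notin \bar{S}} x_j \;\le\; k. $$

Small values of $k$ give an intensification step around the incumbent, while the reversed constraint $\Delta(x,\bar{x}) \ge k+1$ acts as a diversification mechanism.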

Relevance: 30.00%

Abstract:

The relevance of human joint models has been shown in the literature. In particular, the great importance of models for the simulation of joint passive motion (i.e. motion under virtually unloaded conditions) has been outlined. Such models clarify the role played by the principal anatomical structures of the articulation, enhancing the comprehension of surgical treatments, and in particular the design of total ankle replacements and ligament reconstructions. Equivalent rigid-link mechanisms have proved to be an efficient tool for an accurate simulation of joint passive motion. This thesis focuses on the ankle complex (i.e. the anatomical structure composed of the tibiotalar and the subtalar joints), which plays a considerable role in human locomotion. The lack of interpretative models of this articulation and the poor results of total ankle replacement arthroplasty have strongly suggested devising new mathematical models capable of reproducing the restraining function of each structure of the joint and of replicating the relative motion of the bones which constitute the joint itself. In this context, novel equivalent mechanisms are proposed for modelling ankle passive motion. Their geometry is based on the joint’s anatomical structures. In particular, the role of the main ligaments of the articulation is investigated under passive conditions by means of nine 5-5 fully parallel mechanisms. Based on this investigation, a one-DOF spatial mechanism is developed for modelling the passive motion of the lower leg. The model considers many of the passive structures constituting the articulation, overcoming the limitations of previous models which took into account only a few anatomical elements of the ankle complex. All the models have been identified from experimental data by means of an optimization procedure, as sketched below. The simulated motions have then been compared with the experimental ones, in order to show the efficiency of the approach and thus to deduce the role of each anatomical structure in the kinematic behaviour of the ankle.
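
A schematic of the identification step just mentioned: geometry parameters of the equivalent mechanism are fitted so that the simulated passive motion matches the measured one. Everything in this sketch is a toy stand-in (the kinematic law, the data and the two parameters are hypothetical, not the mechanism solver used in the thesis):

```python
# Hypothetical sketch of parameter identification by least squares.
import numpy as np
from scipy.optimize import least_squares

def simulate_motion(params, flexion_angles):
    """Placeholder for the equivalent-mechanism kinematic solver."""
    a, b = params
    return a * np.sin(flexion_angles) + b  # toy kinematic law

flexion = np.linspace(0.0, 0.5, 20)          # prescribed flexion angles [rad]
measured = 1.2 * np.sin(flexion) + 0.1       # stand-in for experimental data

def residuals(params):
    return simulate_motion(params, flexion) - measured

fit = least_squares(residuals, x0=[1.0, 0.0])
print(fit.x)  # identified geometry parameters
```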

Relevance: 30.00%

Abstract:

This master’s thesis describes the research done at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, which focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. With the information obtained from the scanner, specimen-specific models of trabecular bone are generated for solution with the Finite Element Method (FEM). Along with the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the experimentally measured mechanical response, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction on the biomechanics of trabecular bone (chapter 1) and on the characterization of the mechanics of its tissue using FEM models (chapter 2), the reliability analysis of an experimental procedure is explained (chapter 3), based on the highly scalable numerical solver ParFE. In chapter 4, sensitivity analyses on two different parameters of the micro-FEM model reconstruction are presented. Once the reliability of the modeling strategy has been shown, a recent experimental test layout developed at the LTM is presented (chapter 5). Moreover, the results of the application of the new layout are discussed, with emphasis on the difficulties connected to it and observed during the tests. Finally, a prototype experimental layout for the measurement of deformations in trabecular bone specimens is presented (chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at the LTM.
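
A common way to carry out the tissue-property estimation described above (shown here as an illustrative sketch under assumed numbers, not necessarily the exact LTM procedure) exploits the linearity of the FE solution: the micro-FE model is solved once with a trial tissue modulus, and the modulus is then rescaled so that the model stiffness matches the experimental one.

```python
# Illustrative back-calculation of the tissue modulus; all numbers are
# hypothetical placeholders, not data from the thesis.

E_TISSUE_ASSUMED = 10.0e9   # Pa, trial tissue modulus used in the FE run
K_FE = 3.1e6                # N/m, apparent stiffness predicted by micro-FE
K_EXPERIMENT = 2.6e6        # N/m, stiffness from the mechanical test

# In linear elasticity the FE stiffness scales linearly with E_tissue:
e_tissue = E_TISSUE_ASSUMED * K_EXPERIMENT / K_FE
print(f"back-calculated tissue modulus: {e_tissue / 1e9:.2f} GPa")
```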

Relevance: 30.00%

Abstract:

This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs in order to integrate the available experimental data and obtain in-silico models that can be used for studying, formulating new hypotheses about, and planning experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs of adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Patterns and Local Phase Quantization; moreover, non-binary coding and the multi-threshold approach are introduced there. Chapter II.2 shows that non-binary coding and the multi-threshold approach improve the classification performance on images of cells and sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
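
To make the Part II starting point concrete, below is a minimal sketch of the classical 8-neighbour Local Binary Pattern operator; the non-binary and multi-threshold generalizations studied in the thesis extend this binary comparison and are not shown here.

```python
# Classical LBP code of the centre pixel of a 3x3 patch.
import numpy as np

def lbp_code(patch):
    """Return the 8-bit LBP code of the centre pixel of a 3x3 patch."""
    center = patch[1, 1]
    # clockwise neighbours starting from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # each neighbour contributes one bit: 1 if >= centre, 0 otherwise
    return sum(int(val >= center) << i for i, val in enumerate(neighbours))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 3]])
print(lbp_code(patch))  # integer in [0, 255]
```

A texture descriptor is then the histogram of these codes over the image, which is what the classifier consumes.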

Relevance: 30.00%

Abstract:

Semiconductor technologies are rapidly evolving, driven by the ever higher performance demanded by applications. Thanks to the numerous advantages that it offers, gallium nitride (GaN) is quickly becoming the technology of reference in the field of power amplification at high frequency. The RF power density of AlGaN/GaN HEMTs (High Electron Mobility Transistors) is an order of magnitude higher than that of gallium arsenide (GaAs) transistors. The first demonstration of GaN devices dates back only to 1993. Although some commercial products have become available over the past few years, the development of a new technology is a long process, and the AlGaN/GaN HEMT technology is not yet fully mature: issues related to dispersive phenomena and to reliability are still present. Dispersive phenomena, also referred to as long-term memory effects, have a detrimental impact on RF performance and are due both to the presence of traps in the device structure and to self-heating effects. A better understanding of these problems is needed to further improve the obtainable performance. Moreover, new device models that take these effects into consideration are necessary for accurate circuit design. New characterization techniques are thus needed both to gain insight into these problems and improve the technology, and to develop more accurate device models. This thesis presents the research conducted on the development of new characterization and modelling methodologies for GaN-based devices and on the use of this technology for high-frequency power amplifier applications.

Relevance: 30.00%

Abstract:

The present work describes 52 compounds that were tested for COX/LOX inhibition with additional hydroxyl radical scavenging properties. It was possible to develop a new synthesis strategy for previously undescribed 4,5-diarylisoselenazoles and to shorten an existing synthesis of isothiazolium chlorides from two steps with moderate yields to a single step with high yield. Several COX inhibitors were identified: MSD4a, MSD4h, MSD5a and MSD5h were identified as COX-1, COX-2 and 5-LOX inhibitors. Particularly noteworthy is compound MSD5h, which, in addition to COX-1, COX-2 and 5-LOX inhibition, shows slight activity in the hydroxyl radical scavenger assay, has a calculated clog P value of 2.65, and exhibits hardly any toxic properties in the XTT cytotoxicity assay, even at a concentration of 100 µM. Furthermore, it was possible to show that carboxylic acids have good hydroxyl radical scavenging properties in our Fenton reaction-based test system. The potency of the carboxylic acids MSD8b and MSD11j, compared with the inactive corresponding esters MSD8a and MSD11i, led to investigations of further carboxylic acids and their esters. To elucidate the mechanism of action, the test system was modified to rule out complexation of the iron ions by the carboxylic acids. Using compounds MSD8b and MSD11j, it was demonstrated that they react with the hydroxyl radical without decarboxylating or undergoing other decomposition reactions. In addition to the enzyme inhibition and hydroxyl radical scavenging studies, molecular modelling studies were carried out. The results of the docking studies in COX-1 (1eqg), COX-2 (1cx2) and COX-2 crystal structures mutated into COX-1 (1cx2) lead to a critical assessment of the following approach: it is not necessarily sensible to first design and model structures on the computer and only then synthesise them and test them in enzyme or cell assays. The reason lies in the difficulty of judging how close the chosen model is to reality. The docking studies performed showed the very large influence that the co-crystallised ligand of the underlying crystal structure has on the docking results. Because of a too-small co-crystallised ligand in the COX-1 binding pocket, the docking study classified all compounds as not potent, although some of them were active in the enzyme assay. This could be compensated for by the mutation experiments. From these results it can therefore be concluded that the method of choice is a strategy of synthesising structures, testing them in vitro, and supporting the structure development with molecular modelling studies.

Relevance: 30.00%

Abstract:

This doctoral thesis is devoted to the study of the causal effects of maternal smoking on delivery cost. The economic consequences of smoking in pregnancy have been studied fairly extensively in the USA, while very little is known in the European context. To identify the causal relation between different maternal smoking statuses and delivery cost in the Emilia-Romagna region, two distinct methods were used. The first - geometric multidimensional - is mainly based on the multivariate approach and involves computing and testing the global imbalance, classifying cases in order to generate well-matched comparison groups, and then computing treatment effects. The second - structural modelling - refers to a general methodological account of model-building and model-testing. The main idea of this approach is to decompose the global mechanism into sub-mechanisms through a recursive decomposition of a multivariate distribution.
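
A toy sketch of the final step of the first method (computing treatment effects from well-matched comparison groups); the matching itself and all numbers below are hypothetical placeholders, not the thesis data:

```python
# Within-stratum difference of means, averaged over matched strata.
import numpy as np

# cost, smoker flag and matched-stratum label for a handful of deliveries
cost    = np.array([900., 1100., 950., 1200., 1000., 1300.])
smoker  = np.array([0,    1,     0,    1,     0,     1])
stratum = np.array([0,    0,     1,    1,     2,     2])

effects = [cost[(stratum == s) & (smoker == 1)].mean()
           - cost[(stratum == s) & (smoker == 0)].mean()
           for s in np.unique(stratum)]
print(np.mean(effects))  # average treatment effect estimate
```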

Relevance: 30.00%

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the “Full waveform VTEM” dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.

Relevance: 30.00%

Abstract:

Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model has to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are performed with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
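
As a concrete illustration of a WMC-consistent Markov order 0 scheme, the standard one-dimensional random displacement model reads dx = K'(x) dt + sqrt(2 K(x)) dW: the drift term contains the gradient of the diffusion coefficient, which is why the consistency between K and its derivative stressed above matters. A minimal sketch, with an arbitrary illustrative K(x) profile rather than one of the model fields used in the text:

```python
# 1-D random displacement model (Markov order 0), WMC-consistent.
import numpy as np

def K(x):          # diffusion coefficient [m^2/s], illustrative profile
    return 0.1 + 0.05 * np.sin(x)

def dKdx(x):       # its analytical derivative, consistent with K
    return 0.05 * np.cos(x)

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 5_000
x = np.zeros(1000)                       # particle positions
for _ in range(n_steps):
    dW = rng.standard_normal(x.size) * np.sqrt(dt)
    x += dKdx(x) * dt + np.sqrt(2.0 * K(x)) * dW   # drift + diffusion
print(x.mean(), x.std())
```

If the drift term dKdx is dropped, or if an interpolated K is used with an inconsistent derivative, an initially well-mixed particle distribution develops spurious accumulations, violating the WMC.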

Relevance: 30.00%

Abstract:

Waste management represents an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate in reactors followed by fabric filters. HCl and SO2 conversions were expressed as functions of the reactant flow rates, with model parameters calculated from literature and plant data. The implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the analyzed case study. A laboratory device was also designed and started up to assess the required model parameters.
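
To illustrate the kind of operational relation described above (conversion expressed as a function of the reactant feed), here is a deliberately simplified sketch; the exponential form and the parameter value are illustrative assumptions, not the model identified in the thesis:

```python
# Hypothetical conversion-vs-sorbent-feed relation and its inversion
# to size the sorbent feed for a target removal efficiency.
import math

def conversion(stoich_ratio, k=1.2):
    """Assumed conversion vs. sorbent-to-acid stoichiometric ratio."""
    return 1.0 - math.exp(-k * stoich_ratio)

target = 0.95                               # target HCl conversion
sr_required = -math.log(1.0 - target) / 1.2  # invert the assumed relation
print(f"conversion at SR=2: {conversion(2.0):.3f}; SR for 95%: {sr_required:.2f}")
```

The trade-off optimized in the study lives exactly here: pushing conversion toward 1 demands over-stoichiometric sorbent feeds, which increase both reactant cost and the amount of solid residues, hence the interest in sorbent recycling.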

Relevance: 30.00%

Abstract:

The aim of this research is the development and validation of a comprehensive multibody motorcycle model featuring rigid-ring tires, taking into account both the slope and the roughness of road surfaces. A novel parametrization of the general kinematics of the motorcycle is proposed, using a mixed reference-point and relative-coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to efficiently include rigid-ring kinematics as well as road elevation and slope. The equations of motion for the multibody system are derived symbolically, and the constraint equations arising from the dependent-coordinate formulation are handled using a projection technique. Therefore, the resulting system of equations can be integrated in the time domain using a standard ODE algorithm. The model is validated against maneuvers experimentally measured on the race track, showing consistent results and excellent computational efficiency. In particular, it is also capable of reproducing the chatter vibration of racing motorcycles. The chatter phenomenon, appearing during high-speed cornering maneuvers, consists of a self-excited vertical oscillation of both the front and rear unsprung masses in the frequency range between 17 and 22 Hz. A critical maneuver is numerically simulated, and a self-excited vibration appears, consistent with the experimentally measured chatter vibration. Finally, the driving mechanism of the self-excitation is highlighted and a physical interpretation is proposed.
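
For context, a generic form of the dependent-coordinate equations of motion handled by such a projection technique (a standard multibody formulation, not the thesis's specific symbolic expressions) is

$$ \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \boldsymbol{\Phi}_{\mathbf{q}}^{T}\boldsymbol{\lambda} = \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}}), \qquad \boldsymbol{\Phi}(\mathbf{q},t) = \mathbf{0}. $$

Choosing a matrix $\mathbf{R}$ whose columns span the null space of the constraint Jacobian ($\boldsymbol{\Phi}_{\mathbf{q}}\mathbf{R} = \mathbf{0}$) eliminates the Lagrange multipliers $\boldsymbol{\lambda}$ and reduces the DAE to an ODE in the independent velocities, which is what allows the integration with a standard ODE algorithm mentioned above.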

Relevance: 30.00%

Abstract:

In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the best-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and adds a description of cross-section warping that varies along the wall thickness as well as along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. According to the proposed new kinematics, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Next, a procedure for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of the three-dimensional equilibrium equations and of the RCP procedure. Finally, once the stress reconstruction procedure has been presented, a study of several open issues concerning the constitutive relations in the GBT is carried out. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. An analogous procedure is then presented for the case of laminated beams. Lastly, as a way to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to assess the validity of all the proposed contributions to the field.
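
For orientation, classical GBT kinematics expands the wall-midline displacement field into cross-section deformation modes (textbook form, not the enriched through-thickness warping description proposed in the thesis):

$$ u(s,z) = \sum_{k} u_k(s)\, V_k'(z), \qquad v(s,z) = \sum_{k} v_k(s)\, V_k(z), \qquad w(s,z) = \sum_{k} w_k(s)\, V_k(z), $$

where $s$ runs along the wall midline, $z$ along the beam axis, $(u_k, v_k, w_k)$ are the deformation modes delivered by the cross-section analysis, and $V_k(z)$ are their amplitude functions.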

Relevance: 30.00%

Abstract:

This work illustrates a soil-tunnel-structure interaction study performed by an integrated geotechnical and structural approach based on 3D finite element analyses and validated against experimental observations. The study aims at analysing the response of reinforced concrete framed buildings on discrete foundations in interaction with metro lines. It refers to the case of the twin tunnels of the Milan (Italy) metro line 5, recently built in coarse-grained materials using EPB machines, for which subsidence measurements collected along ground and building sections during tunnelling were available. Settlements measured under free-field conditions are first back-interpreted using Gaussian empirical predictions. Then, the analysis of the in situ measurements is extended to include the evolving response of a 9-storey reinforced concrete building while being undercrossed by the metro line. In the finite element study, the soil mechanical behaviour is described using an advanced constitutive model. The latter, when combined with a proper simulation of the excavation process, proves to realistically reproduce the subsidence profiles under free-field conditions and to capture the interaction phenomena occurring between the twin tunnels during the excavation. Furthermore, when the numerical model is extended to include the building, schematised in a detailed manner, the results are in good agreement with the monitoring data for the different stages of the twin tunnelling. Thus, they indirectly confirm the satisfactory performance of the adopted numerical approach, which also allows a direct evaluation of the structural response as an outcome of the analysis. Further analyses are also carried out modelling the building with different levels of detail. The results highlight that, in this case, the simplified approach based on the equivalent-plate schematisation is inadequate to capture the real tunnelling-induced displacement field. The overall behaviour of the system proves to be mainly influenced by the buried portion of the building, which plays an essential role in the interaction mechanism due to its high stiffness.
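
The Gaussian empirical predictions mentioned above refer to the classical transverse settlement trough (Peck's relation, standard in tunnelling practice):

$$ S(x) = S_{\max} \exp\!\left( -\frac{x^{2}}{2\,i^{2}} \right), \qquad V_s = \sqrt{2\pi}\; i\, S_{\max}, $$

where $x$ is the horizontal distance from the tunnel axis, $S_{\max}$ the maximum settlement above the axis, $i$ the distance of the inflection point from the axis, and $V_s$ the settlement volume per unit tunnel length.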

Relevance: 30.00%

Abstract:

Heart diseases are the leading cause of death worldwide, for both men and women. However, the ionic mechanisms underlying many cardiac arrhythmias and genetic disorders are not completely understood, leading to a limited efficacy of the currently available therapies and leaving many open questions for cardiac electrophysiologists. On the other hand, experimental data availability is still a great issue in this field: most experiments are performed in vitro and/or using animal models (e.g. rabbit, dog and mouse), even when the final aim is to better understand the electrical behaviour of the in vivo human heart, either in physiological or pathological conditions. Computational modelling constitutes a primary tool in cardiac electrophysiology: in-silico simulations, based on the available experimental data, may help to understand the electrical properties of the heart and the ionic mechanisms underlying a specific phenomenon. Once validated, mathematical models can be used for making predictions and testing hypotheses, thus suggesting potential therapeutic targets. This PhD thesis aims to apply computational modelling of the human single-cell cardiac action potential (AP) to three clinical scenarios, in order to gain new insights into the ionic mechanisms involved in the electrophysiological changes observed in vitro and/or in vivo. The first context is blood electrolyte variations, which may occur in patients due to different pathologies and/or therapies. In particular, we focused on extracellular Ca2+ and its effect on the AP duration (APD). The second context is haemodialysis (HD) therapy: in addition to blood electrolyte variations, patients undergo many other changes during HD, e.g. in heart rate, cell volume, pH, and sympatho-vagal balance. The third context is human hypertrophic cardiomyopathy (HCM), a genetic disorder characterised by an increased arrhythmic risk and still lacking a specific pharmacological treatment.
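
For reference, human single-cell AP models of the kind discussed here follow the standard Hodgkin-Huxley-type formulation (a generic form common to this model family):

$$ \frac{dV_m}{dt} = -\frac{I_{\mathrm{ion}} + I_{\mathrm{stim}}}{C_m}, \qquad I_{\mathrm{ion}} = \sum_{k} I_k\!\left(V_m, \mathbf{g}_k, [\mathrm{ion}]\right), $$

where $V_m$ is the membrane potential, $C_m$ the membrane capacitance, and the sum runs over the individual ionic currents, each depending on its gating variables $\mathbf{g}_k$ and on the intra- and extracellular ionic concentrations. Extracellular Ca2+ variations, for instance, enter through the Ca2+-dependent currents and thereby modulate the APD.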