21 results for Numerical One-Loop Integration
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The main object of this thesis is the analysis and quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin (HS) fields. In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. Upon quantization, these (non)linear sigma models describe the dynamics of particles with spin N/2. We have then carefully analyzed the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these spinning particle models describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces the (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces.
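For orientation, the gauged constraint algebra behind such SO(N) worldline models can be written schematically (a standard presentation of the N-extended worldline supersymmetry algebra; conventions and numerical factors vary between references and are not taken verbatim from the thesis):

```latex
% H generates worldline time translations, Q_i (i = 1,...,N) are the
% supercharges, and J_{ij} the SO(N) R-symmetry generators; they are
% gauged by the einbein, the gravitini and the SO(N) gauge fields.
\{Q_i, Q_j\} = 2\,\delta_{ij}\, H , \qquad
[J_{ij}, Q_k] = i\left(\delta_{jk} Q_i - \delta_{ik} Q_j\right), \qquad
[H, Q_i] = 0 .
```

Imposing these constraints on the Hilbert space is what selects the physical states described in the abstract.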
Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS fields. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to hyper-Kähler and quaternionic-Kähler (QK) backgrounds.
Abstract:
The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-VAR set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-VAR retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-VAR are well correlated with the radiosonde measurements. Subsequently, the 1D-VAR technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia-Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature.
To improve the 1D-VAR technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing the physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
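As an illustrative sketch of the variational retrieval idea (not the operational SEVIRI set-up), a 1D-Var analysis minimises a cost function balancing the background state against the observations; with a linear observation operator H the minimiser has a closed form. The state, channels and error covariances below are hypothetical toy values:

```python
import numpy as np

# Toy 1D-Var: 2-element state (temperature [K], specific humidity [kg/kg]).
# Cost: J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx)
def one_d_var(xb, B, y, H, R):
    # Analytic minimiser: xa = xb + K (y - H xb), with gain K
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

xb = np.array([285.0, 0.008])   # background state
B = np.diag([4.0, 1e-6])        # background error covariance
H = np.array([[1.0, 0.0]])      # one channel observing temperature only
R = np.array([[1.0]])           # observation error covariance
y = np.array([287.0])           # observed value

xa = one_d_var(xb, B, y, H, R)
# analysis temperature moves 80% of the way towards the observation
```

The gain weights the increment by the relative sizes of B and R, which is why accurate (flow-dependent) background covariances, as assessed in the last part of the study, matter.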
Abstract:
The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers and subcellular compartments. Due to the difficulties of in vivo experiments on human cells, most measurements are acquired in vitro using animal models (e.g. guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the single ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, the models are computationally demanding for numerical simulations and very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis confirm that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp, is a useful tool for enhancing our understanding of various components of normal cardiac electrophysiology, and also of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.
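A minimal example of such a simplified excitable-cell model is the FitzHugh-Nagumo system (a generic two-variable reduction, not one of the detailed myocyte models used in the thesis), integrated here with forward Euler:

```python
import numpy as np

# FitzHugh-Nagumo: v = fast "membrane potential" variable, w = slow recovery.
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
def simulate_fhn(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=50000):
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

trace = simulate_fhn()
# with sustained stimulus I = 0.5 the model fires repetitive "action potentials"
```

Despite having only two variables, the model reproduces the threshold, upstroke and recovery phases that the far more detailed ionic-current models describe quantitatively.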
Abstract:
Recently, a rising interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how the economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitudes towards the EU. Subsequently, it explores the links between this political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union.
Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; it then tests the model on a panel of EU countries. What are the factors that influence public opinion support for the European Union (EU)? In international relations theory, the idea that citizens' support for the EU depends on the material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kalthenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economic model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries. The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant contributions). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods.
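A deliberately stylised numerical illustration of this trade-off (all functional forms and parameter values here are hypothetical, not the specification estimated in Chapter 2):

```python
# Stylised utilitarian support for union membership: the benefit grows with
# the union's average income (scale economies) and with the efficiency loss
# a country would suffer outside the union; it shrinks with the country's
# own income and with the heterogeneity of preferences.
def net_support(union_income, outside_loss, own_income, heterogeneity,
                het_cost=1.0):
    benefit = union_income + outside_loss
    cost = own_income + het_cost * heterogeneity
    return benefit - cost

# A poorer country in a homogeneous union gains from membership...
assert net_support(union_income=100, outside_loss=10,
                   own_income=80, heterogeneity=5) > 0
# ...while a rich country in a very heterogeneous union may not.
assert net_support(union_income=100, outside_loss=10,
                   own_income=120, heterogeneity=30) < 0
```

The comparative statics match the abstract's statement: support increases in union income and in the cost of exclusion, and decreases in own income and preference heterogeneity.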
In the simple model presented in Chapter 2, support for membership of the union is increasing in the union's average income and in the efficiency loss from remaining outside the union, and decreasing in a country's own average income, while increasing heterogeneity of preferences among countries points to a reduced scope for the union. We then test the model empirically with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens' attitudes towards their country's membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called upon to express their opinion in national referenda, which may well end in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that understanding public attitudes towards the EU is nowadays not only of academic interest, but also highly relevant for policy-making. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy.
Over the last few decades, the double tendency towards supranationalism and regional autonomy that has characterised some European states has taken a very interesting form in this country: Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it never came into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a thorough new constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exercised in the various policy areas? Is their exercise consistent with the normative recommendations of the economic literature on their optimal allocation among different levels of government?
The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for this is, at best, very questionable, indicating that they actually share a larger number of policy tasks than economic theory would suggest. It appears therefore that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be advisable for some competences to be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This may not only lead to a slower and more inefficient policy-making process, but also risks making policy too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
Abstract:
Understanding the complex relationships between the quantities measured by volcanic monitoring networks and shallow magma processes is a crucial step forward for the comprehension of volcanic processes and a more realistic evaluation of the associated hazard. This question is particularly relevant at Campi Flegrei, a quiescent volcanic caldera immediately north-west of Napoli (Italy). The system's activity shows high fumarole release and periodic slow ground movement (bradyseism) accompanied by high seismicity. This activity, together with the high population density and the presence of military and industrial buildings, makes Campi Flegrei one of the areas with the highest volcanic hazard in the world. In this context my thesis has focused on the magma dynamics due to the refilling of shallow magma chambers, and on the geophysical signals detectable by seismic, deformation and gravimetric monitoring networks that are associated with these phenomena. Indeed, the refilling of magma chambers is a process that frequently occurs just before a volcanic eruption; therefore, the ability to identify these dynamics by means of recorded signal analysis is important for evaluating the short-term volcanic hazard. The space-time evolution of the dynamics due to the injection of new magma into the magma chamber has been studied by performing numerical simulations with, and implementing additional features in, the code GALES (Longo et al., 2006), recently developed and still being upgraded at the Istituto Nazionale di Geofisica e Vulcanologia in Pisa (Italy). GALES is a finite element code based on a physico-mathematical, two-dimensional, transient model able to treat fluids as multiphase homogeneous mixtures, from compressible to incompressible. The fundamental equations of mass, momentum and energy balance are discretised both in time and space using the Galerkin Least-Squares and discontinuity-capturing stabilisation techniques.
The physical properties of the mixture are computed as a function of the local conditions of magma composition, pressure and temperature. The model's features make it possible to study a broad range of phenomena characterizing pre- and syn-eruptive magma dynamics in a wide domain, from the volcanic crater to the deep magma feeding zones. The study of the displacement field associated with the simulated fluid dynamics has been carried out with a numerical code developed by the geophysics group at University College Dublin (O'Brien and Bean, 2004b), with whom we started a very profitable collaboration. In this code, seismic wave propagation in heterogeneous media with a free surface (e.g. the Earth's surface) is simulated using a discrete elastic lattice in which particle interactions are governed by Hooke's law. This method makes it possible to take into account medium heterogeneities and complex topography. The initial and boundary conditions for the simulations have been defined within a coordinated project (INGV-DPC 2004-06 V3_2 "Research on active volcanoes, precursors, scenarios, hazard and risk - Campi Flegrei"), to which this thesis contributes, in collaboration with many researchers with expertise on Campi Flegrei in the volcanological, seismic, petrological and geochemical fields. Numerical simulations of magma and rock dynamics have been coupled as described in the thesis. The first part of the thesis consists of a parametric study aimed at understanding the effect of the presence of carbon dioxide in magma on the convection dynamics. Indeed, this volatile was relevant in many Campi Flegrei eruptions, including some commonly considered as references for the future activity of this volcano. A set of simulations has been performed considering an elliptical, compositionally uniform magma chamber refilled from below by a magma with volatile content equal to or different from that of the resident magma.
To this end, a multicomponent non-ideal magma saturation model (Papale et al., 2006), which considers the simultaneous presence of CO2 and H2O, has been implemented in GALES. Results show that the presence of CO2 in the incoming magma increases its buoyancy, promoting convection and mixing. The simulated dynamics produce pressure transients with frequency and amplitude within the sensitivity range of modern geophysical monitoring networks such as the one installed at Campi Flegrei. In the second part, simulations more closely related to the Campi Flegrei volcanic system have been performed. The simulated system has been defined on the basis of conditions consistent with the bulk of knowledge of Campi Flegrei, and in particular of the Agnano-Monte Spina eruption (4100 B.P.), commonly considered as the reference for a future high-intensity eruption in this area. The magmatic system has been modelled as a long dyke refilling a small shallow magma chamber; magmas with trachytic and phonolitic compositions and variable H2O and CO2 volatile contents have been considered. The simulations have been carried out changing the magma injection conditions, the system configuration (magma chamber geometry, dyke size) and the composition and volatile content of the resident and refilling magmas, in order to study the influence of these factors on the simulated dynamics. The simulation results make it possible to follow each step of the ascent of the gas-rich magma into the denser magma, highlighting the details of magma convection and mixing. In particular, the presence of more CO2 in the deep magma results in more efficient and faster dynamics. Through these simulations, the variation of the gravimetric field has been determined. Afterwards, the space-time distribution of stress resulting from the numerical simulations has been used as the boundary condition for simulations of the displacement field imposed by the magmatic dynamics on the rocks.
The properties of the simulated domain (rock density, P- and S-wave velocities) have been based on literature data from active and passive tomographic experiments, obtained through a collaboration with A. Zollo at the Department of Physics of the Federico II University in Napoli. The elasto-dynamic simulations make it possible to determine the variations in the space-time distribution of deformation and the seismic signal associated with the studied magmatic dynamics. In particular, the results show that these dynamics induce deformations similar to those measured at Campi Flegrei, and seismic signals with energy concentrated in the frequency bands typically observed in volcanic areas. The present work shows that an approach based on the solution of the equations describing the physics of the processes within a magmatic fluid and the surrounding rock system is able to recognise and describe the relationships between the geophysical signals detectable at the surface and the deep magma dynamics. Therefore, the results suggest that the combined study of geophysical data and information from numerical simulations may, in the near future, allow a more efficient evaluation of the short-term volcanic hazard.
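The discrete elastic lattice idea used for the displacement-field simulations can be sketched in one dimension: point masses coupled by Hookean springs, advanced with an explicit time step (a toy sketch under these assumptions, not the O'Brien and Bean code):

```python
import numpy as np

# 1-D chain of masses coupled by Hooke's-law springs: a minimal analogue of
# a discrete elastic lattice for wave propagation (free ends, unit spacing).
def propagate(n=200, steps=400, k=1.0, m=1.0, dt=0.1):
    u = np.zeros(n)        # displacements
    v = np.zeros(n)        # velocities
    u[n // 2] = 1.0        # initial point disturbance at the centre
    for _ in range(steps):
        f = np.zeros(n)
        # Hooke's law: force from the two neighbouring springs
        f[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2])
        v += dt * f / m    # semi-implicit Euler update
        u += dt * v
    return u

u = propagate()
# the initial pulse disperses into waves travelling along the chain
```

Medium heterogeneity enters naturally by letting k and m vary from node to node, which is what makes this class of methods convenient for complex structures and topography.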
Abstract:
Magnetic resonance imaging (MRI) is today precluded for patients bearing active implantable medical devices (AIMDs). The great advantages of this diagnostic modality, together with the increasing number of people benefiting from implantable devices, in particular pacemakers (PMs) and cardioverter/defibrillators (ICDs), are prompting the scientific community to study the possibility of extending MRI to implanted patients as well. The MRI-induced specific absorption rate (SAR) and the consequent heating of biological tissues is one of the major concerns that makes patients bearing metallic structures contraindicated for MRI scans. To date, both in vivo and in vitro studies have demonstrated the potentially dangerous temperature increase caused by the radiofrequency (RF) field generated during MRI procedures in the tissues surrounding thin metallic implants. On the other hand, the technical evolution of MRI scanners and of AIMDs, together with published data on the lack of adverse events, have reopened interest in this field and suggest that, under given conditions, MRI can be safely performed on implanted patients as well. With a better understanding of the hazards of performing MRI scans on implanted patients, as well as the development of MRI-safe devices, we may soon enter an era in which this imaging modality can be more widely used to assist in the appropriate diagnosis of patients with devices. In this study both experimental measurements and numerical analyses were performed. The aim of the study is to systematically investigate the effects of the MRI RF field on implantable devices and to identify the elements that play a major role in the induced heating. Furthermore, we aimed to develop a realistic numerical model able to simulate the interactions between an RF coil for MRI and biological tissues implanted with a PM, and to predict the induced SAR as a function of the particular path of the PM lead.
The methods developed and validated during the PhD program led to the design of an experimental framework for the accurate measurement of the PM lead heating induced by MRI systems. In addition, numerical models based on Finite-Difference Time-Domain (FDTD) simulations were validated to obtain a general tool for investigating the large number of parameters and factors involved in this complex phenomenon. The results obtained demonstrate that MRI-induced heating of metallic implants is a real risk that represents a contraindication to extending MRI scans to patients bearing a PM, an ICD, or other thin metallic objects. On the other hand, both the experimental data and the numerical results show that, under particular conditions, MRI procedures might be considered reasonably safe even for an implanted patient. The complexity and the large number of variables involved make it difficult to define a unique set of such conditions: when the benefits of an MRI investigation cannot be obtained using other imaging techniques, the possibility of performing the scan should not be immediately excluded, but careful consideration is always needed.
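The central quantity can be stated compactly: the local specific absorption rate follows from the induced electric field, the tissue conductivity and the tissue density (a textbook relation, independent of the specific FDTD implementation used in the thesis; the numbers below are illustrative, not measured values):

```python
# Local specific absorption rate: SAR = sigma * |E_rms|^2 / rho  [W/kg]
def local_sar(e_field_rms, conductivity, density):
    """e_field_rms in V/m, conductivity in S/m, density in kg/m^3."""
    return conductivity * e_field_rms**2 / density

# Illustrative numbers for a muscle-like tissue in a 1.5 T body coil:
sar = local_sar(e_field_rms=30.0, conductivity=0.7, density=1050.0)
# 0.7 * 900 / 1050 = 0.6 W/kg
```

The hazard near a lead tip comes from the strong local enhancement of E at the conductor's end, which is why the lead path is the decisive parameter investigated numerically.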
Abstract:
The research is part of a survey for the detection of the hydraulic and geotechnical conditions of river embankments funded by the Reno River Basin Regional Technical Service of the Region Emilia-Romagna. The hydraulic safety of the Reno River, one of the main rivers in north-eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundred kilometres) has generated great interest in non-destructive geophysical methods, which, compared to other methods such as drilling, allow for the faster and often less expensive acquisition of high-resolution data. The present work aims to evaluate Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, namely electrical resistivity tomography (ERT), Multi-channel Analysis of Surface Waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods in the regular maintenance and checking of embankment conditions. The first part of this thesis is dedicated to the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the embankments of the Reno River and its tributaries, as well as the description of some geophysical applications to embankments of European and North American rivers, which were used as the bibliographic basis for this thesis.
The second part is an overview of the geophysical methods employed in this research (with particular attention to GPR), reporting their theoretical basis and an in-depth treatment of some techniques for geophysical data analysis and representation as applied to river embankments. The subsequent chapters, following the main scope of this research, which is to highlight the advantages and drawbacks of Ground Penetrating Radar applied to the embankments of the Reno River and its tributaries, show the results obtained by analyzing different cases that could lead to the formation of weakness zones and subsequently to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the acquired data unmatched by other methodologies were recorded. With regard to the drawbacks, some factors related to the attenuation of the propagating waves, due to the varying content of clay, silt and sand, as well as surface effects, have significantly limited the correlation between GPR profiles and geotechnical information, and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar could represent a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the levees of the Reno River and its tributaries. In fact, only the shallower part of the embankment could be investigated, yielding information related only to changes in electrical properties, without any direct numerical measurement. Furthermore, GPR application is ineffective for a preliminary assessment of embankment safety conditions, whereas for detailed campaigns at shallow depth, which aim to achieve immediate results with optimal precision, its use is highly recommended.
The cases in which a multidisciplinary approach was tested reveal an effective interconnection of the various geophysical methodologies employed: qualitative results for the preliminary phase (FDEM), a quantitative and highly reliable description of the subsoil (ERT) and, finally, fast and highly detailed analysis (GPR). As a recommendation for future research, the simultaneous exploitation of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially in the face of a likely flood event, when the entire extent of the embankments must be investigated.
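A basic quantitative element behind GPR interpretation: the radar wave velocity in the embankment material follows from its relative permittivity, and reflector depth from the two-way travel time (standard low-loss GPR relations; the material values below are hypothetical examples, not survey results):

```python
# Radar velocity from relative permittivity; depth from two-way travel time.
C = 0.3  # speed of light in m/ns

def radar_velocity(eps_r):
    """Wave velocity in m/ns for a low-loss material of permittivity eps_r."""
    return C / eps_r**0.5

def reflector_depth(twt_ns, eps_r):
    """Depth in m of a reflector seen at two-way travel time twt_ns."""
    return radar_velocity(eps_r) * twt_ns / 2.0

# For a dry sandy material (eps_r ~ 4): v = 0.15 m/ns, so a stratigraphic
# contact seen at 40 ns two-way time lies at about 3 m depth.
depth = reflector_depth(twt_ns=40.0, eps_r=4.0)
```

The strong dependence of attenuation and velocity on clay and water content is exactly what limited penetration in the Reno River levees, restricting the investigation to the shallower part of the embankment.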
Abstract:
The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (in order to be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, the Hertz and Kalker theories make it possible to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry).
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, the Hertz and Kalker theories make it possible to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered elastic bodies governed by the Navier equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy, but still requires very large computational costs and memory consumption. Due to this high computational load and memory consumption, referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field. However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described.
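The Hertzian force-indentation law invoked by the semi-elastic approach can be sketched in its simplest form (the textbook sphere-on-plane case with hypothetical material and geometry values, not the actual wheel and rail profiles treated in the thesis):

```python
import math

# Hertz normal contact, sphere on plane: F = (4/3) * E* * sqrt(R) * delta^(3/2)
def hertz_force(delta, R, E1, nu1, E2, nu2):
    """Normal force [N] for indentation delta [m] and effective radius R [m]."""
    # Effective contact modulus from the two bodies' elastic constants
    e_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    return (4.0 / 3.0) * e_star * math.sqrt(R) * delta**1.5

# Steel on steel (E ~ 210 GPa, nu ~ 0.3), R = 0.46 m (typical wheel radius):
F = hertz_force(delta=1e-4, R=0.46, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
# an indentation of 0.1 mm yields a force on the order of 1e5 N
```

The nonlinear delta^(3/2) stiffness is what the semi-elastic models evaluate at each detected contact point before passing the normal load to the Kalker tangential problem.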
Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the models have to consider generic railway tracks and generic wheel and rail profiles; 3) the models have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the models have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within multibody models without look-up tables; 5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality.
Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the model has to consider generic railway tracks and generic wheel and rail profiles; 3) the model has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within multibody models; 5) the model has to assure high numerical efficiency and a reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. At this point the contact models have been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature.
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models have been implemented as C S-functions; this particular Matlab architecture allows the Matlab/Simulink and the C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then been implemented also in Simpack Rail, a commercial multibody software for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity. Moreover, we would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
Abstract:
Recently, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with higher performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, provides in fact adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured", way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by some IEC standards (IEC 61131-3, IEC 61499) which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. During the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and the former are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
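As a minimal illustration of the discrete-event view of diagnosis mentioned above (the automaton, its states and its event names are hypothetical examples, not models from this work), a diagnoser can track every state the plant could be in after the observed events and report whether an unobservable fault must, may, or cannot have occurred:

```python
# Minimal discrete-event fault diagnoser sketch.
# The plant is a finite automaton; the 'fault' event is unobservable.
# After each observed event, we keep every (state, fault-flag) estimate
# consistent with the observation history.

# Hypothetical plant: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault"): "broken",   # unobservable failure
    ("broken", "alarm"): "halted",    # observable, only reachable after the fault
}
UNOBSERVABLE = {"fault"}

def unobservable_reach(estimates):
    """Close a set of (state, faulty) estimates under unobservable events."""
    frontier, closed = list(estimates), set(estimates)
    while frontier:
        state, faulty = frontier.pop()
        for (s, e), t in TRANSITIONS.items():
            if s == state and e in UNOBSERVABLE:
                est = (t, True)       # taking 'fault' marks the estimate faulty
                if est not in closed:
                    closed.add(est)
                    frontier.append(est)
    return closed

def diagnose(observed, initial="idle"):
    """Return 'N' (no fault), 'F' (fault certain) or 'U' (uncertain)."""
    estimates = unobservable_reach({(initial, False)})
    for event in observed:
        stepped = {(TRANSITIONS[(s, event)], f)
                   for s, f in estimates if (s, event) in TRANSITIONS}
        estimates = unobservable_reach(stepped)
    flags = {f for _, f in estimates}
    return "U" if flags == {True, False} else ("F" if True in flags else "N")
```

For example, after observing only `start` the fault is possible but not certain (`"U"`), while observing `alarm` makes it certain (`"F"`), since that event is only reachable through the faulty branch.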
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault tolerant control derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain a better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. In Appendix A some concepts and results about Discrete Event Systems are briefly reported, which should help the reader in understanding some crucial points of Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. In Appendix C some component models used in Chapter 5 for formal verification are reported.
Abstract:
This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (as regards, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running in a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, includes the possibility to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, allowing in the latter case the evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool includes different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of the higher computational effort).
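The HIL data flow described above (the real-time processor reads the ECU's actuator commands, advances a plant model by one fixed step, and returns synthetic sensor signals) can be sketched as follows; the first-order engine-speed model, all signal names and all numerical values are illustrative assumptions, not the models of this thesis:

```python
# Sketch of one HIL step: read the actuator command, advance the plant
# model by one fixed real-time step, write back a synthetic sensor value.
# The first-order engine model and its parameters are illustrative only.

DT = 0.001          # fixed real-time step [s]
J = 0.2             # engine inertia [kg m^2], assumed
K_TORQUE = 120.0    # torque per unit throttle [N m], assumed
B_FRICTION = 0.05   # viscous friction [N m s/rad], assumed

def hil_step(omega, ecu_throttle):
    """Advance the engine-speed model one step and return the new state
    plus the 'sensor' value the ECU would read back (rpm)."""
    torque = K_TORQUE * ecu_throttle - B_FRICTION * omega
    omega += (torque / J) * DT            # explicit Euler integration
    rpm_sensor = omega * 60.0 / (2.0 * 3.141592653589793)
    return omega, rpm_sensor

# Run 1 s of simulated time with a constant throttle command from the "ECU"
omega = 0.0
for _ in range(1000):
    omega, rpm = hil_step(omega, ecu_throttle=0.3)
```

In a real HIL setup the loop body would be triggered by the real-time scheduler at exactly `DT` intervals, with `ecu_throttle` sampled from the physical ECU outputs instead of being a constant.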
Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve such objectives, since its graphical programming interface allows flexible and reconfigurable models to be built, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
Abstract:
Ion channels are pore-forming proteins that regulate the flow of ions across biological cell membranes. Ion channels are fundamental in generating and regulating the electrical activity of cells in the nervous system and the contraction of muscular cells. Solid-state nanopores are nanometer-scale pores located in electrically insulating membranes. They can be adopted as detectors of specific molecules in electrolytic solutions. The permeation of ions from one electrolytic solution to another, through a protein channel or a synthetic pore, is a process of considerable importance, and a realistic analysis of the main dependencies of the ion current on the geometrical and compositional characteristics of these structures is highly required. The project described in this thesis is an effort to improve the understanding of ion channels by devising methods for computer simulation that can predict channel conductance from channel structure. This thesis describes the theory, algorithms and implementation techniques used to develop a novel 3-D numerical simulator of ion channels and synthetic nanopores based on the Brownian Dynamics technique. This numerical simulator could represent a valid tool for the study of protein ion channels and synthetic nanopores, making it possible to investigate at the atomic level the complex electrostatic interactions that determine channel conductance and ion selectivity. Moreover, it will provide insights on how parameters like temperature, applied voltage and pore shape can influence ion translocation dynamics. Furthermore, it will help make predictions of the conductance of given channel structures and will add information like the electrostatic potential or the ionic concentrations throughout the simulation domain, helping the understanding of ion flow through membrane pores.
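The core update of a Brownian Dynamics simulation is the overdamped Langevin step, in which each ion drifts under the deterministic force and diffuses under thermal noise. A minimal sketch for a single ion follows; the force field and all numerical values are illustrative assumptions, not parameters from the simulator described in the thesis:

```python
import math
import random

# One Brownian Dynamics update for a single ion (overdamped Langevin):
#   x(t+dt) = x(t) + (D/kT) * F * dt + sqrt(2 D dt) * N(0, 1)

KT = 4.11e-21      # thermal energy at ~298 K [J]
D  = 1.33e-9       # diffusion coefficient of K+ in water [m^2/s], illustrative
DT = 1e-12         # time step [s]

def bd_step(pos, force, rng=random):
    """Advance one ion position (3-vector, meters) by one BD step
    under the given force (3-vector, newtons)."""
    return [x + (D / KT) * f * DT + math.sqrt(2.0 * D * DT) * rng.gauss(0.0, 1.0)
            for x, f in zip(pos, force)]

# Drift of an ion pulled along z by a uniform 1 pN force (illustrative)
pos = [0.0, 0.0, 0.0]
for _ in range(1000):
    pos = bd_step(pos, force=[0.0, 0.0, 1e-12])
```

In the full simulator the force on each ion would come from the electrostatic field (applied voltage, fixed charges, ion-ion interactions) evaluated at the ion's position, and boundary conditions would confine the ions to the pore geometry.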
Abstract:
Laser-driven ion acceleration is a burgeoning field of research and has been attracting a growing number of scientists since the first results reported in 2000, obtained by irradiating thin solid foils with high-power laser pulses. The growing interest is driven by the peculiar characteristics of the produced bunches, the compactness of the whole accelerating system and the very short accelerating length of these all-optical accelerators. Fervent theoretical and experimental work has been done since then. An important part of the theoretical study is done by means of numerical simulations, and the most widely used technique exploits PIC ("Particle In Cell") codes. In this thesis the PIC code AlaDyn, developed by our research group using innovative algorithms, is described. My work has been devoted to the development of the code and to the investigation of laser-driven ion acceleration for different target configurations. Two target configurations for proton acceleration are presented, together with the results of the 2D and 3D numerical investigation. One target configuration consists of a solid foil with a low-density layer attached on the irradiated side. The nearly critical plasma of the foam layer allows a very high energy absorption by the target and an increase of the proton energy up to a factor of 3 when compared to the "pure" TNSA configuration. The differences of this regime with respect to the standard TNSA are described. The case of nearly critical density targets has been investigated with 3D simulations. In this case the laser travels throughout the plasma and exits on the rear side. During the propagation, the laser drills a channel and induces a magnetic vortex that, expanding on the rear side of the target, is the source of a very intense electric field. The protons of the plasma are strongly accelerated up to energies of 100 MeV using a 200 PW laser.
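At the heart of any PIC code is the particle push that advances velocities in the electromagnetic fields interpolated to each particle. A standard choice in many PIC codes is the Boris scheme (whether AlaDyn uses exactly this scheme, and in which relativistic form, is not stated here); a non-relativistic sketch in normalized units (q = m = 1) follows:

```python
# Non-relativistic Boris velocity push in normalized units (q = m = 1).
# Shown only as an illustration of the particle-advance step of a PIC
# cycle: half electric kick, exact magnetic rotation, half electric kick.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, E, B, dt):
    """Advance velocity v by dt in fields E, B (all 3-vectors)."""
    # first half of the electric field acceleration
    v_minus = [vi + 0.5 * dt * Ei for vi, Ei in zip(v, E)]
    # rotation around B: preserves |v| exactly
    t = [0.5 * dt * Bi for Bi in B]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
    v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]
    # second half of the electric field acceleration
    return [vp + 0.5 * dt * Ei for vp, Ei in zip(v_plus, E)]
```

The scheme's appeal is that the magnetic rotation conserves kinetic energy exactly: with E = 0, the speed of the particle is unchanged to machine precision regardless of the time step.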
Abstract:
This research work focuses on a novel multiphase-multilevel ac motor drive system well suited for low-voltage, high-current power applications. Specifically, it considers a six-phase asymmetrical induction motor with open-end stator winding configuration, fed by four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous reference frame control algorithm shares the total dc source power among the four VSIs in each switching cycle with three degrees of freedom. Precisely, the first degree of freedom concerns the current sharing between the two three-phase stator windings. A modified multilevel space vector pulse width modulation then shares the voltage between the single VSIs of each three-phase stator winding with the second and third degrees of freedom, providing proper multilevel output waveforms. A complete model of the whole ac motor drive, based on a three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation software working in the MATLAB environment. The proposed synchronous reference control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator. The effectiveness of the entire ac motor drive system was tested. Simulation results are given in detail to show symmetrical and asymmetrical power sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capabilities in post-fault conditions. A complete set of simulation results is provided for the cases in which one, two or three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed, able to control the four VSIs for PWM communication and synchronization.
An open-loop control scheme based on an inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Both simulation and experimental results are always in good agreement with the theoretical developments.
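The three degrees of freedom described above can be pictured as three sharing coefficients that split the overall reference among the four inverters. The sketch below uses a simple linear sharing law on a single space-vector reference; the variable names and the sharing law itself are illustrative assumptions, not the modulation algorithm developed in the thesis:

```python
# Sketch of how three sharing coefficients could split a voltage
# reference among the four VSIs of the open-end quad-inverter topology.
# The linear sharing law and the names below are illustrative only.

def share_references(v_ref, k1, k2, k3):
    """Split a voltage reference v_ref (complex space vector) among four VSIs.

    k1: power sharing between winding A and winding B (0..1)
    k2: voltage sharing between the two VSIs feeding winding A (0..1)
    k3: voltage sharing between the two VSIs feeding winding B (0..1)
    """
    v_winding_a = k1 * v_ref          # reference assigned to winding A
    v_winding_b = (1.0 - k1) * v_ref  # reference assigned to winding B
    return {
        "VSI1": k2 * v_winding_a,
        "VSI2": (1.0 - k2) * v_winding_a,   # opposite end of winding A
        "VSI3": k3 * v_winding_b,
        "VSI4": (1.0 - k3) * v_winding_b,   # opposite end of winding B
    }

# Balanced case: each VSI synthesizes a quarter of the reference
refs = share_references(v_ref=complex(100.0, 0.0), k1=0.5, k2=0.5, k3=0.5)
```

The post-fault scenarios mentioned above correspond to driving one coefficient to an extreme, e.g. k2 = 1 removes any contribution from VSI2 while the total reference is still synthesized by the remaining inverters.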
Abstract:
Over the past few years, the switch towards renewable sources for energy production has come to be considered necessary for the future sustainability of the world environment. Hydrogen is one of the most promising energy vectors for the storage of low-density renewable sources such as wind, biomasses and sun. The production of hydrogen by the steam-iron process could be one of the most versatile approaches for the employment of different reducing bio-based fuels. The steam-iron process is a two-step chemical looping reaction based (i) on the reduction of an iron-based oxide with an organic compound, followed by (ii) the reoxidation of the reduced solid material by water, which leads to the production of hydrogen. The overall reaction is the water oxidation of the organic fuel (gasification or reforming processes), but the inherent separation of the two semireactions allows the production of carbon-free hydrogen. In this thesis, a steam-iron cycle with methanol is proposed and three different oxides with the generic formula AFe2O4 (A = Co, Ni, Fe) are compared in order to understand how the chemical properties and the structural differences can affect the productivity of the overall process. The modifications occurring in the samples are investigated in depth through the analysis of the used materials. A specific study on the CoFe2O4-based process using both classical and in-situ/ex-situ analysis is reported, employing many characterization techniques such as FTIR spectroscopy, TEM, XRD, XPS, BET, TPR and Mössbauer spectroscopy.
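Assuming, purely for illustration, complete oxidation of the methanol to CO2 and H2O during the reduction step (the actual product distribution depends on the oxide and on the operating conditions studied in the thesis), the two semireactions of the cycle can be written generically for a degree of reduction δ as:

```latex
% Reduction of the spinel by methanol (fuel fully oxidized; illustrative)
\mathrm{AFe_2O_4} + \tfrac{\delta}{3}\,\mathrm{CH_3OH}
  \;\longrightarrow\;
  \mathrm{AFe_2O_{4-\delta}}
  + \tfrac{\delta}{3}\,\mathrm{CO_2} + \tfrac{2\delta}{3}\,\mathrm{H_2O}

% Reoxidation by steam, releasing hydrogen
\mathrm{AFe_2O_{4-\delta}} + \delta\,\mathrm{H_2O}
  \;\longrightarrow\;
  \mathrm{AFe_2O_4} + \delta\,\mathrm{H_2}
```

Summing the two steps, the oxide cancels and the net reaction reduces to the steam reforming of methanol, CH3OH + H2O → CO2 + 3 H2; the looping scheme matters because the CO2 is released only during the reduction step, in a stream inherently separated from the hydrogen produced during reoxidation.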