Abstract:
Recently, a rising interest in political and economic integration/disintegration issues has developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitudes towards the EU. Subsequently, it explores the links between such political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union. Chapter 2 firstly proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; subsequently it tests the model on a panel of EU countries. What are the factors that influence public opinion support for the EU? In international relations theory, the idea that citizens' support for the EU depends on material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kaltenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries.
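To fix ideas, a stylized net-benefit function of the kind used in this literature can be written as follows; this is only an illustrative form in the spirit of Alesina and Spolaore (1997), not the exact specification of the model developed in Chapter 2:

\[
V_i = g(\bar{y}_U) - \lambda\, d(p_i, \bar{p}_U) - t_i ,
\]

where \(g(\bar{y}_U)\) captures the scale benefits that grow with the union's average income \(\bar{y}_U\), \(d(p_i,\bar{p}_U)\) measures how far the harmonized union policy \(\bar{p}_U\) lies from country i's preferred policy \(p_i\), and \(t_i\) is the country's net contribution, increasing in its own income; a citizen supports membership whenever \(V_i\) exceeds the value of remaining outside, which is lower the larger the efficiency loss of non-membership.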
The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods. In the simple model presented in Chapter 2, support for membership of the union is increasing in the union's average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country's average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. Afterwards we empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens' attitudes towards their country's membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that nowadays understanding public attitudes towards the EU is not only of academic interest, but is also highly relevant for policy-making. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. Over the last few decades, the double tendency towards supranationalism and regional autonomy, which has characterised some European States, has taken a very interesting form in this country, because Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a thorough new constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al.
(2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exercised in the various policy areas? Is their exercise consistent with the normative recommendations from the economic literature about their optimum allocation among different levels of government? The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in the legislative activity in many sectors, even where the rationale for that is, at best, very questionable, indicating that they actually share a larger number of policy tasks than suggested by economic theory. It appears therefore that an excessive number of competences are actually shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This not only may lead to a slower and less efficient policy-making process, but also risks making it too complicated for citizens to understand; citizens should instead be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
Abstract:
This Ph.D. thesis presents a general, robust methodology that may cover any type of 2D acoustic optimization problem. A procedure involving the coupling of Boundary Elements (BE) and Evolutionary Algorithms is proposed for systematic geometric modifications of road barriers that lead to designs with ever-increasing screening performance. Numerical simulations involving single- and multi-objective optimizations of noise barriers of varied nature are included in this document. The results disclosed justify the implementation of this methodology by leading to optimal solutions of previously defined topologies that, in general, greatly outperform the acoustic efficiency of classical, widely used barrier designs normally erected near roads.
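To make the proposed coupling concrete, the following Python sketch shows the kind of single-objective evolutionary loop involved, with the Boundary Element acoustic simulation replaced by a placeholder fitness function (the function name be_insertion_loss and all parameter values are hypothetical and not taken from the thesis):

import random

def be_insertion_loss(design):
    # Placeholder for the Boundary Element simulation of the barrier: in the real
    # methodology this would return the screening performance of the candidate geometry.
    return -sum((x - 0.5) ** 2 for x in design)

def evolve(n_vars=8, pop_size=30, generations=50, mut_sigma=0.05):
    # Random initial population of normalized geometric design variables.
    pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=be_insertion_loss, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_vars)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, x + random.gauss(0.0, mut_sigma)))
                     for x in child]                 # Gaussian mutation, clipped to [0, 1]
            children.append(child)
        pop = parents + children
    return max(pop, key=be_insertion_loss)

best = evolve()
print(best, be_insertion_loss(best))

In a multi-objective variant, the scalar fitness would be replaced by a vector of objectives and a Pareto-ranking selection, but the overall loop structure stays the same.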
Abstract:
In calcareous soils, which make up a large share of agricultural soils worldwide, iron availability is limited. Consequently, the whole plant physiology is affected, because of the key role of iron in redox metabolism, resulting in reduced crop yield and quality. Peach cultivation is economically important in northern Italy, and the crop is highly susceptible to iron chlorosis. The management of iron nutrition in peach includes grafting on bicarbonate-tolerant rootstocks; other forms of management may be expensive and environmentally impacting. Four genotypes, used as rootstocks for peach and characterized by different degrees of tolerance to chlorosis, were tested in vitro on optimal and bicarbonate-enriched media. Their redox status and antioxidant responses were assayed; the production and possible roles of nitric oxide (NO) and related compounds were also studied. The most sensitive genotypes showed a stronger reduction of antioxidant enzymatic activities and increased oxidative stress. A high production of NO was found to be associated with resistant genotypes, whereas sensitive genotypes reacted to stress by downregulating nitrosoglutathione reductase activity. Therefore, NO is proposed to improve internal iron availability, or to stimulate iron uptake.
Abstract:
In recent years an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the higher requirement for systems with great performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, buy products in boxes such as food or cigarettes, and so on. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact among themselves in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; directing the operator in charge of the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, in fact provides adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems has been observed. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements in the formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
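As a rough illustration of how Discrete Event Systems concepts support these verification tasks, the following Python sketch models a small plant as a finite automaton and checks by breadth-first reachability whether a fault state can be reached; the states and events are invented for illustration and do not correspond to the models of Appendix C:

from collections import deque

# A plant model as a finite automaton: (state, event) -> next state.
transitions = {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "jam"): "faulted",      # hypothetical fault event
    ("faulted", "reset"): "idle",
}

def reachable(initial, target):
    # Breadth-first search over the event-driven state space.
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return False

# Can the fault state be reached from the initial state?
print(reachable("idle", "faulted"))   # True: the control logic must handle this case

Model-checking tools used in practice answer the same kind of question on much larger, composed models, but the underlying idea is this reachability analysis over discrete states and events.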
Abstract:
Atmospheric CO2 concentration ([CO2]) has increased over the last 250 years, mainly due to human activities. Of total anthropogenic emissions, almost 31% has been sequestered by the terrestrial biosphere. A considerable contribution to this sink comes from temperate and boreal forest ecosystems of the northern hemisphere, which contain a large amount of carbon (C) stored as biomass and soil organic matter. Several potential drivers for this forest C sequestration have been proposed, including increasing atmospheric [CO2], temperature, nitrogen (N) deposition and changes in management practices. However, it is not known which of these drivers are most important. The overall aim of this thesis project was to develop a simple ecosystem model which explicitly incorporates our best understanding of the mechanisms by which these drivers affect forest C storage, and to use this model to investigate the sensitivity of the forest ecosystem to these drivers. I firstly developed a version of the Generic Decomposition and Yield (G’DAY) model to explicitly investigate the mechanisms leading to forest C sequestration following N deposition. Specifically, I modified the G’DAY model to include advances in understanding of C allocation, canopy N uptake, and leaf trait relationships. I also incorporated a simple forest management practice subroutine. Secondly, I investigated the effect of CO2 fertilization on forest productivity with relation to the soil N availability feedback. I modified the model to allow it to simulate short-term responses of deciduous forests to environmental drivers, and applied it to data from a large-scale forest Free-Air CO2 Enrichment (FACE) experiment. Finally, I used the model to investigate the combined effects of recent observed changes in atmospheric [CO2], N deposition, and climate on a European forest stand. The model developed in my thesis project was an effective tool for analysis of effects of environmental drivers on forest ecosystem C storage. Key results from model simulations include: (i) N availability has a major role in forest ecosystem C sequestration; (ii) atmospheric N deposition is an important driver of N availability on short and long time-scales; (iii) rising temperature increases C storage by enhancing soil N availability and (iv) increasing [CO2] significantly affects forest growth and C storage only when N availability is not limiting.
Abstract:
Osmotic Dehydration and Vacuum Impregnation are interesting operations in the food industry with applications in minimal fruit processing and/or freezing, allowing the development of new products with specific innovative characteristics. Osmotic dehydration is widely used for the partial removal of water from cellular tissue by immersion in a hypertonic (osmotic) solution. The driving force for the diffusion of water from the tissue is provided by the difference in water chemical potential between the external solution and the internal liquid phase of the cells. Vacuum Impregnation of porous products immersed in a liquid phase consists of a reduction of pressure in a solid-liquid system (vacuum step) followed by the restoration of atmospheric pressure (atmospheric step). During the vacuum step the internal gas in the product pores expands and partially flows out, while during the atmospheric step there is a compression of the residual gas and the external liquid flows into the pores (Fito, 1994). This process is also a very useful unit operation in food engineering, as it allows the introduction of specific solutes into the tissue which can play different functions (antioxidants, pH regulators, preservatives, cryoprotectants, etc.). The present study attempts to enhance our understanding and knowledge of fruit as a living organism, interacting dynamically with the environment, and to explore metabolic, structural and physico-chemical changes during fruit processing. The use of innovative approaches and/or technologies such as SAFES (Systematic Approach to Food Engineering System), LF-NMR (Low Frequency Nuclear Magnetic Resonance) and GASMAS (Gas in Scattering Media Absorption Spectroscopy) is very promising for studying these phenomena in depth. The SAFES methodology was applied in order to study the irreversibility of the structural changes of kiwifruit during short osmotic treatment times. The results showed that the deformed tissue can recover its initial state 300 min after osmotic dehydration at 25 °C. LF-NMR proved very useful in the study of water status and compartmentalization, permitting separate observation of the three different water populations present in the vacuole, the cytoplasm plus extracellular space, and the cell wall. The GASMAS technique made it possible to study the pressure equilibration after Vacuum Impregnation, showing that after the restoration of atmospheric pressure in the solid-liquid system there was a remaining internal low pressure in the apple tissue that slowly increased until reaching atmospheric pressure, on a time scale that depends on the vacuum applied during the vacuum step. The physiological response of apple tissue to the Vacuum Impregnation process was studied, indicating the possibility of vesicular transport within the cells. Finally, the possibility of extending the freezing tolerance of strawberry fruits impregnated with cryoprotectants was proven.
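In generic textbook form (not the specific formulation adopted in this study), the driving force mentioned above can be written as the difference in water chemical potential, set by the water activities of the osmotic solution and of the cell liquid phase:

\[
\Delta \mu_w = \mu_w^{\text{solution}} - \mu_w^{\text{cell}} = RT \ln\!\left(\frac{a_w^{\text{solution}}}{a_w^{\text{cell}}}\right),
\]

so that a hypertonic solution with lower water activity (\(a_w^{\text{solution}} < a_w^{\text{cell}}\)) gives \(\Delta \mu_w < 0\) and water diffuses out of the tissue until the activities equilibrate.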
Abstract:
In this work, the electron emission from nanoparticles on surfaces was investigated by means of spectroscopic photoelectron microscopy. Specifically, metallic nanoclusters were studied, as self-organized ensembles on silicon or glass substrates, as well as a metal-chalcogenide (MoS2) nanotube prototype on silicon. The main part of the investigations concentrated on the interaction of fs laser radiation with the nanoparticles. The energy of the light quanta was smaller than the work function of the samples studied, so that one-photon photoemission could be ruled out. Our investigations showed that, going from a continuous metal film to cluster films, another emission mechanism appears in competition with multiphoton photoemission and begins to dominate for small clusters. The nature of this new mechanism was investigated by various experiments. The transition from a continuous film to a nanoparticle film is accompanied by an increase of the emission current by more than one order of magnitude. The photoemission intensity grows with decreasing temporal width of the laser pulse, but this dependence becomes less steep with decreasing particle size. The experimental results were explained by different electron emission mechanisms, e.g. multiphoton photoemission (nPPE), thermionic emission and thermally assisted nPPE, as well as optical field emission. The first mechanism prevails for continuous films and particles with sizes above several tens of nanometres, the second and third for films of nanoparticles a few nanometres in size. The microspectroscopic measurements confirmed the 2PPE emission mechanism of thin silver films under "blue" laser excitation (hν = 375-425 nm). The onset of the Fermi level is relatively sharp and shifts by 2hν when the quantum energy is increased, whereas under "red" laser excitation (hν = 750-850 nm) it is clearly broadened. It was found that, with increasing laser power, the yield of low-energy electrons increases more weakly than the yield of higher-energy electrons near the Fermi edge within one spectrum. This is a clear indication of a coexistence of different emission mechanisms within one spectrum. In order to understand the size dependence of the emission behaviour theoretically, a statistical approach to the light absorption of small metal particles was derived and discussed. In additional investigations, the electron emission properties under laser excitation were compared with another type of excitation, the passage of a tunnelling current through a metal cluster film near the percolation threshold. The electrical and emission properties of current-carrying silver cluster films, prepared in a narrow gap (5-25 µm wide) between silver contacts on an insulator, were investigated for the first time with an emission electron microscope (EEM). The electron emission starts in the non-Ohmic regime of the conduction current-voltage curve of the cluster film. We investigated the behaviour of a single emission centre in the EEM. It was found that the emission centres in a current-carrying silver cluster film are point sources of electrons that can sustain high emission current densities (more than 100 A/cm2). The width of the energy distribution of the electrons from a single emission centre was estimated to be about 0.5-0.6 eV.
As the emission mechanism, thermionic emission from the "steady-state" hot electron gas in current-carrying metallic particles is proposed. Size-selected, individual MoS2 nanotubes deposited on Si substrates were investigated with time-of-flight-based two-photon photoemission spectromicroscopy. Under fs laser excitation, the nanotube spectra showed a surprisingly high emission intensity, clearly higher than that of the SiOx substrate surface. In contrast, the tubes were invisible under VUV excitation at hν = 21.2 eV. An ab initio calculation for a MoS2 slab explains the high intensity by a high density of free intermediate states in the two-photon transition at hν = 3.1 eV.
A farm-level programming model to compare the atmospheric impact of conventional and organic farming
Abstract:
A model is developed to represent the activity of a farm using the method of linear programming. The model has two main components: the soil fertility balance and livestock nutrition. According to the first, the farm is supposed to have a total requirement of nitrogen, which is to be met either through internal sources (manure) or through external sources (fertilisers). The second component describes the animal husbandry as having a nutritional requirement which must be satisfied through the internal production of arable crops or the acquisition of feed from the market. The farmer is supposed to maximise the total net income from the agricultural and zootechnical activities by choosing one rotation among those available for the given climate and slope. The perspective of the analysis is a short-run one: the structure of the farm is supposed to be fixed, without the possibility of changing the allocation of permanent crops and the amount of animal husbandry. The model is integrated with an environmental module that describes the role of the farm within the carbon-nitrogen cycle. On the one hand the farm allows carbon to be stored through the photosynthesis of the plants and the accumulation of carbon in the soil; on the other hand some activities of the farm emit greenhouse gases into the atmosphere. The model is tested on some representative farms of the Emilia-Romagna region, proving capable of giving different results for conventional and organic farming and providing first results concerning their different atmospheric impact. Relevant data about the representative farms and the feasible rotations are extracted from the FADN database, with an integration of the coefficients from the literature.
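To illustrate the structure of such a linear program, the following Python sketch sets up a deliberately tiny version with a land constraint, a feed balance and a nitrogen balance; all variables and coefficients are invented for illustration and are not taken from the FADN data used in the thesis:

import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [ha of cereal, ha of forage, t of purchased feed, kg of purchased N].
# Objective: maximise net income = gross margins minus purchase costs (linprog minimises, so negate).
c = np.array([-800.0, -500.0, 250.0, 1.2])

A_ub = np.array([
    [1.0, 1.0, 0.0, 0.0],      # land: total cultivated area cannot exceed the farm area
    [-3.0, -6.0, -1.0, 0.0],   # feed: crop output (t/ha) plus purchased feed must cover livestock needs
    [120.0, 80.0, 0.0, -1.0],  # nitrogen: crop N demand (kg/ha) minus purchased N must not exceed manure N
])
b_ub = np.array([10.0, -40.0, 600.0])   # 10 ha, 40 t feed requirement, 600 kg N from manure

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(res.x, -res.fun)   # optimal crop/feed/fertiliser plan and the corresponding net income

The environmental module described above would then evaluate carbon storage and greenhouse gas emissions on the optimal activity levels res.x, or add further rows to the constraint matrix.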
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis. We focus on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package, the problem of the aliasing effect in the generation of low resolution maps, and the comparison of the Angular Power Spectrum (APS) extraction performances of the optimal QML method, implemented in the code called BolPol, and of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and the Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results have been discussed. The Cromaster code has instead been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, for which high-accuracy devices for separating the polarizations are necessary. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performances are compared to the devices used nowadays.
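As an illustration of the first two items (coordinate change and a pseudo-Cl spectrum), the following short Python sketch uses the public healpy bindings of HEALPix; it is a generic illustration rather than the actual BolPol/Cromaster pipelines, and the input file name is a placeholder:

import healpy as hp

nside = 512
cmb_map = hp.read_map("cmb_map_galactic.fits")        # placeholder input map

# Change of coordinates: rotate the map from the Galactic to the Ecliptic frame.
rot = hp.Rotator(coord=["G", "E"])
cmb_map_ecl = rot.rotate_map_pixel(cmb_map)

# Degrade to low resolution; without proper band-limiting this step can alias
# small-scale power into the low multipoles, which is the effect studied here.
low_res = hp.ud_grade(cmb_map, nside_out=16)

# Pseudo-Cl estimate of the angular power spectrum (not corrected for masking).
cl = hp.anafast(cmb_map_ecl, lmax=3 * nside - 1)
print(len(cl), low_res.size)

A QML estimator such as BolPol instead works directly on the low resolution pixel covariance matrix, which is why it is optimal at large angular scales but computationally limited to low nside.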
Abstract:
One of the most widespread electronic devices is the field effect transistor (FET), contained by the billions in every electronic device. Organic optoelectronics is an emerging field that exploits the unique properties of conjugated organic materials to develop new applications that require a combination of performance, low cost and processability. Organic single crystals are the materials with the best performance and purity among the variety of different forms of organic semiconductors. This thesis is focused on the electrical and optical characterization of Rubrene single crystals, in bulk and thin-film form. Bulk Rubrene is well known, but here thin films were studied for the first time. Current-voltage characterization was performed on three Rubrene thin films of three different thicknesses to extract the charge carrier mobility and to assess their crystalline structure. The results show that mobility increases with thickness. Field effect transistors based on Rubrene thin films on SiO2 have been characterized by current-voltage (I-V) analyses (at several temperatures), revealing hopping conduction. The hopping behaviour is probably due to the lattice mismatch with the substrate or to intrinsic defects of the thin films. To understand the effects of contact resistance we tested the thin films with the Transmission Line Method (TLM). The TLM analysis revealed that the contact resistance is negligible but evidenced a Schottky behaviour in a limited but well-determined range of temperatures. To avoid this effect we carried out an annealing treatment after the electrode evaporation, and we then performed a complete I-V characterization as a function of temperature to extract the electronic density of states (DOS) distribution through the Space Charge Limited Current (SCLC) method. The results show a DOS with an exponential distribution, as expected. The measured mobility of the thin films is about 0.1 cm²/Vs and it increases with the film thickness. Further studies are necessary to investigate the reason and to improve the performance. From the photocurrent spectrum we calculated an Eg of about 2.2 eV, and both the thin films and the bulk show good crystal order. Further measurements are necessary to solve some open problems.
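As an illustration of the TLM idea, the total resistance of devices with different channel lengths is fitted linearly and extrapolated to zero length, where the intercept gives twice the contact resistance; the following Python sketch uses invented numbers purely for illustration, not measured data from this work:

import numpy as np

# Channel lengths (um) and measured total resistances (ohm) of otherwise identical devices.
lengths = np.array([5.0, 10.0, 20.0, 40.0])
r_total = np.array([1.2e6, 2.1e6, 4.0e6, 7.8e6])

# Linear fit R_total = (R_sheet / W) * L + 2 * R_contact; the intercept at L = 0
# isolates the contact contribution from the channel contribution.
slope, intercept = np.polyfit(lengths, r_total, 1)
r_contact = intercept / 2.0
print(f"R_contact ≈ {r_contact:.2e} ohm")

A negligible intercept compared with the channel term, as reported above, indicates that the measured transport is bulk/channel limited rather than contact limited.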
Abstract:
Polymeric membranes represent a promising technology for gas separation processes, thanks to low costs, reduced energy consumption and limited waste production. The present thesis aims at studying the transport properties of two membrane materials suitable for CO2 purification applications. In the first part, a polyimide, Matrimid 5218, has been thoroughly investigated, with particular reference to the effect of thermal treatment, aging and the presence of water vapor in the gas transport process. Permeability measurements showed that the thermal history significantly affects the diffusion of gas molecules across the membrane, influencing also the stability of the separation performances. Subsequently, the effect of water on Matrimid transport properties has been characterized for a wide set of incondensable penetrants. A monotonic reduction of permeability took place with increasing water concentration within the polymer matrix, affecting the investigated gaseous species to the same extent, despite their different thermodynamic and kinetic features. In this view, a novel empirical model, based on the Free Volume Theory, has been proposed to qualitatively describe the phenomenon. Moreover, thanks to its accurate representation of the experimental data, the suggested approach has been combined with a more rigorous thermodynamic tool (the NELF model), allowing an exhaustive description of the influence of water on the single parameters contributing to gas permeation across the membrane. In the second part, the study has focused on the synthesis and characterization of facilitated transport membranes, able to achieve outstanding separation performances thanks to the chemical enhancement of CO2 permeability. In particular, the transport properties have been investigated for high pressure CO2 separation applications, and specific solutions have been proposed to solve the stability issues frequently arising under such severe conditions. Finally, the effects of different process parameters have been investigated, aiming at the identification of the optimal conditions capable of maximizing the separation performance.
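For context, in the solution-diffusion picture the permeability factorizes into a diffusivity and a solubility contribution, and free-volume expressions of the following generic textbook form link diffusivity to the fractional free volume; this is background notation, not the exact empirical model proposed in the thesis:

\[
P = D \cdot S, \qquad D = A \exp\!\left(-\frac{B}{\mathrm{FFV}}\right),
\]

where \(S\) is the penetrant solubility, \(\mathrm{FFV}\) is the fractional free volume of the polymer and \(A\), \(B\) are penetrant-specific constants; sorbed water that reduces the free volume available to the penetrants therefore lowers \(D\), and hence \(P\), for the different incondensable gases in a similar way, consistent with the observations summarized above.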
Abstract:
This work is focused on the study of saltwater intrusion in coastal aquifers, and in particular on the development of conceptual schemes to evaluate the risk associated with it. Saltwater intrusion depends on different natural and anthropic factors, both presenting a strongly aleatory behaviour, that should be considered for an optimal management of the territory and of water resources. Given the uncertainty of the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model without a great computational burden. When the assumptions of classical analytical models are not respected, as often occurs in applications to real case studies, such as the area analyzed in the present work, one can adopt data-driven techniques, based on the analysis of the data characterizing the system under study. It follows that a model can be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
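As general background for the sharp-interface formulation (not the specific model adopted in this work), the classical Ghyben-Herzberg relation links the depth z of the freshwater-saltwater interface below sea level to the freshwater head h above sea level:

\[
z = \frac{\rho_f}{\rho_s - \rho_f}\, h \;\approx\; 40\, h,
\]

with \(\rho_f \approx 1000\ \mathrm{kg/m^3}\) and \(\rho_s \approx 1025\ \mathrm{kg/m^3}\). The large amplification factor explains why small, uncertain changes in head translate into large uncertainty in the interface position, which motivates treating the hydrogeological parameters as random variables and quantifying their influence through global sensitivity analysis.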
Abstract:
In recent years, a substantial step towards greatly increased efficiency has been taken for spin-filter detectors. This is an important prerequisite for spin-resolved measurements using modern electron spectrometers and momentum microscopes. In this doctoral thesis, previous work on the parallel-imaging technique was developed further; this technique relies on the fact that, by exploiting k-parallel conservation in low-energy electron diffraction, an electron-optical image is preserved even after reflection from a crystalline surface. Earlier measurements based on specular reflection from a W(001) surface [Kolbe et al., 2011; Tusche et al., 2011] were extended to a much larger parameter range, and with Ir(001) a new system was investigated which exhibits a much longer lifetime of the cleaned crystal surface in UHV. The scattering-energy and angle-of-incidence "landscape" of the spin sensitivity S and the reflectivity I/I0 of scattered electrons was measured in the range of 13.7-36.7 eV scattering energy and 30°-60° scattering angle. The newly built measurement setup comprises a spin-polarized GaAs electron source and a rotatable electron detector (delay-line detector) for spatially resolved detection of the scattered electrons. The results show several regions with high asymmetry and a large figure of merit (FoM), defined as S² · I/I0. These regions open a path towards a significant improvement of the multichannel spin-filter technique for electron spectroscopy and momentum microscopy. In practical use, the Ir(001) single-crystal surface proved very promising with respect to its longer lifetime in UHV (about one measurement day) combined with a high FoM. The Ir(001) detector was used in combination with a hemispherical analyser in a time-resolved experiment in the femtosecond range at the free-electron laser FLASH at DESY. Good working points proved to be a 45° scattering angle and 39 eV scattering energy, with a usable energy width of 5 eV, as well as 10 eV scattering energy with a narrower profile of < 1 eV but an approximately 10× larger figure of merit. The spin asymmetry reaches values up to 70%, which clearly reduces the influence of instrumental asymmetries. The resulting measurements and energy-angle landscape show quite good agreement with theory (relativistic layer-KKR SPLEED code [Braun et al., 2013; Feder et al., 2012]).
Abstract:
Isochrysis galbana is a widely used strain in aquaculture in spite of its low productivity. To maximize the productivity of processes based on this microalgal strain, a model was developed considering the influence of irradiance, temperature, pH and dissolved oxygen concentration on the photosynthesis and respiration rates. Results demonstrate that this strain tolerates temperatures up to 35 °C but is highly sensitive to irradiances higher than 500 µE·m-2·s-1 and dissolved oxygen concentrations higher than 11 mg·l-1. In collaboration with the research group of the Universidad de Almería, the developed model was validated using data from an industrial-scale outdoor tubular photobioreactor, demonstrating that inadequate temperature and dissolved oxygen concentrations reduce productivity to half of the maximum attainable according to light availability under real outdoor conditions. The developed model is a useful tool for managing working processes, especially in the development of new processes based on this strain and for taking decisions regarding optimal control strategies. The outdoor production of Isochrysis galbana T-iso in industrial-size tubular photobioreactors (3.0 m3) has also been studied. Experiments were performed modifying the dilution rate and evaluating the biomass productivity and quality, in addition to the overall performance of the system. Results confirmed that T-iso can be produced outdoors at commercial scale in continuous mode, productivities up to 20 g·m-2·day-1 of biomass rich in proteins (45%) and lipids (25%) being obtained. The utilization of this type of photobioreactor allows control of the contamination and pH of the cultures, but the daily variation of solar radiation means that the cells inside the reactor are exposed to inadequate dissolved oxygen concentrations and temperatures. Excessive dissolved oxygen reduced the biomass productivity to 68% of the maximum, whereas inadequate temperature reduced it to 63% of the maximum. Thus, by optimally controlling these parameters the biomass productivity can be doubled. These results confirm the potential to produce this valuable strain at commercial scale in optimally designed/operated tubular photobioreactors as a biotechnological industry.
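A compact way to express this kind of model is to multiply a saturating light-response curve by correction factors for temperature and dissolved oxygen (pH can be handled analogously); the functional forms and coefficients in the following Python sketch are illustrative only and are not the calibrated model of the thesis:

import numpy as np

def photosynthesis_rate(irradiance, temperature, dissolved_o2,
                        p_max=1.0, i_k=250.0, t_opt=25.0, t_width=8.0, o2_crit=11.0):
    # Saturating light response (hyperbolic-tangent P-I curve).
    light_term = np.tanh(irradiance / i_k)
    # Gaussian penalty for departures from the optimal temperature.
    temp_term = np.exp(-((temperature - t_opt) / t_width) ** 2)
    # No inhibition below the critical dissolved oxygen level, linear decay above it.
    o2_term = 1.0 if dissolved_o2 <= o2_crit else max(0.0, 2.0 - dissolved_o2 / o2_crit)
    return p_max * light_term * temp_term * o2_term

# Example: strong light, supra-optimal temperature and oxygen build-up reduce the rate.
print(photosynthesis_rate(800.0, 32.0, 14.0))

Coupled with a culture mass balance, a model of this structure can be driven by the measured daily profiles of solar radiation, temperature and dissolved oxygen to predict productivity and to evaluate control strategies.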
Abstract:
The goals of any treatment of cervical spine injuries are: return to maximum functional ability, a minimum of residual pain, decrease of any neurological deficit, a minimum of residual deformity and prevention of further disability. The advantages of surgical treatment are the ability to achieve optimal reduction, immediate stability, direct decompression of the cord and the exiting roots, the need for only minimal external fixation, the possibility of early mobilisation and clearly decreased nursing problems. There are several reasons why those goals can be better reached by anterior surgery. Usually the bony compression of the cord and roots comes from the front, therefore anterior decompression is usually the procedure of choice. Also, anterior stabilisation with a plate is usually simpler than posterior instrumentation. It needs to be stressed that closed reduction by traction can align the fractured spine and indirectly decompress the neural structures in about 70% of cases. The necessary weight is 2.5 kg per level of injury. In the upper cervical spine, the type 2 odontoid fracture is an indication for anterior surgery by direct screw fixation. C1/C2 joint dislocations or fractures, or certain odontoid fractures, can be treated with a fusion of the C1/C2 joint by anterior transarticular screw fixation. In the lower and middle cervical spine, anterior plating combined with an iliac crest or fibular strut graft is the procedure of choice; however, a solid graft can also be replaced by filled solid or expandable vertebral cages. The complication rate of this surgery is low when it is properly executed, and anterior surgery may only be contra-indicated in the case of a significant lesion or locked joints.