18 results for "clean and large throughput differential pumping system"

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergence of the peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at the different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon as a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. On these premises, the thesis first reviews the different approaches already developed for modelling developmental-biology problems, as well as the main models and infrastructures available in the literature for modelling biological systems, analysing their ability to tackle multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. It is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target output; the problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised, whose goal is to generate the early spatial pattern of gap-gene expression. The correctness of the models is shown by comparing the simulation results with real gene-expression data with spatial and temporal resolution, acquired from freely available on-line sources.
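
As a point of reference for the simulation engine mentioned above, here is a minimal sketch of Gillespie's direct method in its textbook form (without the many-species/many-channels optimisations the thesis implements; the function names and the birth-death example are illustrative, not MS-BioNET code):

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, propensity, t_end, rng=None):
    """One trajectory of a well-stirred chemical system (direct method).

    x0:         initial copy numbers, shape (n_species,)
    stoich:     state-change vectors, shape (n_reactions, n_species)
    rates:      kinetic constants passed through to `propensity`
    propensity: callable (x, rates) -> per-reaction propensities a_j >= 0
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.asarray(x0, dtype=float).copy()
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)    # waiting time ~ Exp(a0)
        j = rng.choice(len(a), p=a / a0)  # pick the next reaction
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death process: 0 -> S at rate k1, S -> 0 at rate k2 * #S
stoich = np.array([[+1], [-1]])
prop = lambda x, k: np.array([k[0], k[1] * x[0]])
t, s = gillespie_direct([10], stoich, np.array([5.0, 0.5]), prop, t_end=20.0)
```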

Relevance: 100.00%

Abstract:

The preparation of conformationally hindered molecules and their study by DNMR and computational methods are the core of my thesis. In the first chapter, the conformations and the stereodynamics of symmetrically ortho-disubstituted aryl carbinols and aryl ethers are described. In the second chapter, the structures of axially chiral atropisomers of hindered biphenyl carbinols are studied. In the third chapter, the steric barriers and the π-barrier of 1,8-di-arylbiphenylenes are determined. Interesting atropisomers found in the cases of arylanthrones, arylanthraquinones and arylanthracenes are reported in the fourth chapter. By the combined use of dynamic NMR, ECD spectroscopy and DFT computations, the conformations and the absolute configurations of 2-naphthyl alkyl sulfoxides are studied in the fifth chapter. In the last chapter, a new synthetic route to α,α′-arylated secondary or tertiary alcohols via lithiated O-benzyl carbamates carrying an N-aryl substituent is reported, together with DFT calculations to determine the cyclic intermediate. This work was done in the research group of Prof. Jonathan Clayden at the University of Manchester.
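
Where the abstract mentions determining rotational barriers by DNMR, the relations typically used are the coalescence-rate and Eyring equations, shown here in their textbook form for two-site, equal-population exchange (the full line-shape analysis in such work is more general):

```latex
% Exchange rate at the coalescence temperature T_c of two equal-intensity
% signals separated by \Delta\nu (Hz), and the Eyring free-energy barrier:
k_c = \frac{\pi\,\Delta\nu}{\sqrt{2}} \approx 2.22\,\Delta\nu,
\qquad
\Delta G^{\ddagger} = R\,T_c \ln\!\frac{k_B\,T_c}{h\,k_c}
```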

Relevance: 100.00%

Abstract:

Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications and no central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents that communicate only with neighbours and run distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions that improve performance and deal with more challenging settings, e.g., the derivative-free and the online frameworks. Subsequently, we tackle the recently emerged framework of distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
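
For reference, the Gradient Tracking iteration analysed in the thesis can be sketched as follows: each agent mixes its neighbours' estimates through a doubly-stochastic weight matrix and descends along a locally maintained tracker of the global gradient. The sketch below is a minimal illustration (the toy quadratic problem, the uniform weights and all names are ours, not the thesis's code):

```python
import numpy as np

def gradient_tracking(W, grads, x0, alpha=0.05, iters=500):
    """Minimal Gradient Tracking sketch for min_x sum_i f_i(x) over a network.

    W:     doubly-stochastic weight matrix (n x n); W[i, j] != 0 only for
           neighbouring agents i, j
    grads: list of local gradient callables grad_i(x)
    x0:    initial decision variables, shape (n, d)
    """
    n, _ = x0.shape
    x = x0.copy()
    # Tracker initialisation: s_i^0 = grad f_i(x_i^0)
    s = np.stack([grads[i](x[i]) for i in range(n)])
    for _ in range(iters):
        x_new = W @ x - alpha * s              # consensus + descent step
        s = W @ s + np.stack(
            [grads[i](x_new[i]) - grads[i](x[i]) for i in range(n)]
        )                                      # dynamic average of local gradients
        x = x_new
    return x

# Toy problem: f_i(x) = ||x - c_i||^2, whose global optimum is mean(c_i).
n, d = 4, 2
c = np.random.default_rng(0).normal(size=(n, d))
grads = [lambda x, ci=c[i]: 2.0 * (x - ci) for i in range(n)]
W = np.full((n, n), 1.0 / n)   # complete graph, uniform weights (illustrative)
x_final = gradient_tracking(W, grads, np.zeros((n, d)))
```

The tracker update is the dynamic-average-consensus step that distinguishes Gradient Tracking from plain distributed gradient descent.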

Relevance: 100.00%

Abstract:

The Italian radio telescopes are currently undergoing a major upgrade in response to the growing demand for deep radio observations, such as surveys over large sky areas or observations of vast samples of compact radio sources. The optimised employment of the Italian antennas, originally built mainly for VLBI activities and equipped with a control system (FS – Field System) not tailored to single-dish observations, required important modifications, in particular to the guiding software and data acquisition system. The production of a completely new control system, called ESCS (Enhanced Single-dish Control System), for the Medicina dish started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry; ESCS is also planned to be installed at the Noto site. A substantial part of this thesis consisted in designing and developing subsystems within ESCS, providing the software with the tools needed to carry out large maps: from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of single-dish standard output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed on the Medicina antenna while waiting for the SRT to be completed: an 18–26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is at present the only K-band multi-feed receiver available worldwide. The commissioning of the overall hardware/software system constituted a considerable part of the thesis work. Tests were carried out to verify the stability and capabilities of the system, down to sensitivity levels never before reached at Medicina with the previous observing techniques and hardware. A further aim was to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to quickly cover wide areas of the sky (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project aims at a full-northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed both for ESCS itself and for the multi-feed/backend system. The KNoWS group, of which I am part, supported the commissioning activities, also providing map-making and source-extraction tools in order to complete the necessary data-reduction pipeline and assess the overall scientific capabilities of the system. The K-band observations, carried out in several sessions between December 2008 and March 2010, were accompanied by a 5 GHz test survey during the summer months, which are not suitable for high-frequency observations.
This activity was conceived to check the new analogue backend separately from the multi-feed receiver, and to simultaneously produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar-cap survey intended to complete PMN-GB6 and provide all-sky coverage at 5 GHz).
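
As an illustration of the map-making step referred to above (not ESCS code; the names and the simple cell-averaging scheme are our assumptions), a minimal sketch that grids time-ordered On-The-Fly samples onto a map:

```python
import numpy as np

def grid_otf_samples(ra, dec, flux, ra_edges, dec_edges):
    """Grid time-ordered On-The-Fly samples onto a map by cell averaging.

    ra, dec, flux: 1-D arrays of samples recorded along the fast scans
    *_edges:       pixel boundaries of the output map
    Returns the per-pixel average map and the hit-count map.
    """
    hits, _, _ = np.histogram2d(ra, dec, bins=[ra_edges, dec_edges])
    summed, _, _ = np.histogram2d(ra, dec, bins=[ra_edges, dec_edges],
                                  weights=flux)
    with np.errstate(invalid="ignore", divide="ignore"):
        return summed / hits, hits   # empty pixels come out as NaN
```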

Relevance: 100.00%

Abstract:

This Ph.D. thesis has been carried out in the framework of a long-term, large project devoted to describing the main photometric, chemical, evolutionary and integrated properties of a representative sample of Large and Small Magellanic Cloud (LMC and SMC, respectively) clusters. The globular cluster system of these two irregular galaxies provides a rich resource for investigating stellar and chemical evolution and for obtaining a detailed view of the star-formation history and chemical enrichment of the Clouds. The results discussed here are based on the analysis of high-resolution photometric and spectroscopic datasets obtained with the latest generation of imagers and spectrographs. The principal aims of this project are as follows:
• The study of the AGB and RGB sequences in a sample of MC clusters, through the analysis of a wide near-infrared photometric database including 33 Magellanic globulars, obtained in three observing runs with the near-infrared camera SOFI@NTT (ESO, La Silla).
• The study of the chemical properties of a sample of MC clusters, using optical and near-infrared high-resolution spectra. Three observing runs were secured to our group to observe 9 LMC clusters (with ages between 100 Myr and 13 Gyr) with the optical high-resolution spectrograph FLAMES@VLT (ESO, Paranal) and 4 very young (<30 Myr) clusters (3 in the LMC and 1 in the SMC) with the near-infrared high-resolution spectrograph CRIRES@VLT.
• The study of the photometric properties of the main evolutionary sequences in optical Color-Magnitude Diagrams (CMDs) obtained from HST archive data, with the final aim of dating several clusters by comparing the observed CMDs with theoretical isochrones.
The determination of the age of a stellar population requires an accurate measurement of the Main Sequence (MS) Turn-Off (TO) luminosity and knowledge of the distance modulus, reddening and overall metallicity. For this reason, we limited the age study to the clusters already observed with high-resolution spectroscopy, so as to date only clusters with accurate estimates of the overall metallicity.
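
The dating step relies on the standard distance-modulus relation; schematically, an observed isochrone point is matched as (our notation):

```latex
% Apparent magnitude of an isochrone point of given age and metallicity,
% shifted by the true distance modulus and the extinction:
m = M(\mathrm{age}, \mathrm{[Fe/H]}) + (m - M)_0 + A_\lambda,
\qquad
(m - M)_0 = 5 \log_{10}\!\frac{d}{10\ \mathrm{pc}}
```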

Relevance: 100.00%

Abstract:

Transcription is controlled by promoter-selective transcription factors (TFs), which bind to cis-regulatory enhancer elements, termed hormone response elements (HREs), in a specific subset of genes. Regulation by these factors involves the recruitment of either coactivators or corepressors and direct interaction with the basal transcriptional machinery (1). Hormone-activated nuclear receptors (NRs) are well-characterized transcription factors (2) that bind to the promoters of their target genes and recruit primary and secondary coactivator proteins possessing many of the enzymatic activities required for gene expression (1,3,4). In the present study, using single-cell high-resolution fluorescence microscopy and high-throughput microscopy (HTM) coupled to computational image analysis, we investigated transcriptional regulation controlled by estrogen receptor alpha (ERalpha), in terms of large-scale chromatin remodeling and interaction with the associated coactivator SRC-3 (Steroid Receptor Coactivator-3), a member of the p160 family (28) of primary coactivators. ERalpha is a steroid-dependent transcription factor (16) that belongs to the NR superfamily (2,3) and, in response to the hormone 17β-estradiol (E2), regulates transcription of distinct target genes involved in development, puberty, and homeostasis (8,16). ERalpha spends most of its lifetime in the nucleus and undergoes a rapid (within minutes) intranuclear redistribution following the addition of either agonist or antagonist (17,18,19). We designed a HeLa cell line (PRL-HeLa), engineered with a chromosome-integrated reporter gene array (PRL-array) containing multicopy hormone-response binding elements for ERalpha, derived from the physiological enhancer/promoter region of the prolactin gene. Following GFP-ER transfection of PRL-HeLa cells, we were able to observe in situ the ligand-dependent (i) recruitment of the receptor and associated coregulators to the array, (ii) chromatin remodeling, and (iii) direct transcriptional readout of the reporter gene. Addition of E2 causes a visible opening (decondensation) of the PRL-array, colocalization of RNA Polymerase II, and transcriptional readout of the reporter gene, detected by mRNA FISH. By contrast, when cells were treated with an ERalpha antagonist (tamoxifen or ICI), we observed a dramatic condensation of the PRL-array, displacement of RNA Polymerase II, and complete loss of the transcriptional FISH signal. All p160 family coactivators (28) colocalize with ERalpha at the PRL-array. Steroid Receptor Coactivator-3 (SRC-3/AIB1/ACTR/pCIP/RAC3/TRAM1) is a p160 family member and a known oncogenic protein (4,34). SRC-3 is regulated by a variety of post-translational modifications, including methylation, phosphorylation, acetylation, ubiquitination and sumoylation (4,35). These events have been shown to be important for its interaction with other coactivator proteins and NRs and for its oncogenic potential (37,39). A number of extracellular signaling molecules, such as steroid hormones, growth factors and cytokines, induce SRC-3 phosphorylation (40). These actions are mediated by a wide range of kinases, including extracellular-regulated kinases 1 and 2 (ERK1/2), c-Jun N-terminal kinase, p38 MAPK, and IkB kinases (IKKs) (41,42,43). Here, we report SRC-3 to be a nucleocytoplasmic shuttling protein whose cellular localization is regulated by phosphorylation and by interaction with ERalpha.
Using a combination of high-throughput and fluorescence microscopy, we show that both chemical inhibition (with U0126) and siRNA downregulation of the MAP/ERK1/2 kinase (MEK1/2) pathway induce a cytoplasmic shift in SRC-3 localization, whereas stimulation by EGF signaling enhances its nuclear localization by inducing phosphorylation at T24, S857, and S860, known participants in the regulation of SRC-3 activity (39). Accordingly, the cytoplasmic localization of a non-phosphorylatable SRC-3 mutant further supports these results. In the presence of ERalpha, U0126 also dramatically reduces: hormone-dependent colocalization of ERalpha and SRC-3 in the nucleus; formation of the ER/SRC-3 coimmunoprecipitation complex in cell lysates; localization of SRC-3 at the ER-targeted prolactin promoter array (PRL-array); and transcriptional activity. Finally, we show that SRC-3 can also function as a cotransporter, facilitating the nuclear-cytoplasmic shuttling of the estrogen receptor. While a wealth of studies have revealed the molecular functions of NRs and coregulators, there is a paucity of data on how these functions are spatiotemporally organized in the cellular context. Technically and conceptually, our findings provide a new perspective for evaluating gene transcriptional control and the mechanisms of action of gene regulators.

Relevance: 100.00%

Abstract:

Bread dough, and particularly wheat dough, is probably the most dynamic and complicated rheological system owing to its viscoelastic behaviour, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a wealth of information about dough formulation, structure and processing; this explains why dough rheology has been a matter of investigation for several decades. In this research, the rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of dough and final products such as bread. To study the structural aspects of the food products, image analysis techniques were used to integrate the information coming from the empirical and fundamental rheological measurements. Dough properties were evaluated by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a texture analyser; small-deformation rheological measurements were performed on a controlled stress-strain rheometer; the structure of the different doughs was observed by image analysis; and bread characteristics were studied by texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this aim, the following materials were prepared and analysed:
- frozen dough made without yeast;
- frozen dough and bread made from frozen dough;
- doughs obtained using different fermentation methods;
- doughs made from Kamut® flour;
- dough and bread made with the addition of ginger powder;
- final products coming from different bakeries.
The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small- and large-deformation methods. In general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effect of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs was investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen in the other types of fermentation. The beneficial effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed also when the dough was subjected to low temperatures (24 and 48 hours at 4°C).
Small-deformation oscillatory measurements and large-deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb-grain characteristics are the major quality attributes of bread products. The different samples analysed, "Coppia Ferrarese", "Pane Comune Romagnolo" and "Filone Terra di San Marino", showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of fresh-bread quality, decreased during storage. Using empirical rheological tests we found several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare them; but since these products are handmade, the differences can be regarded as added value. In conclusion, small-deformation (in fundamental units) and large-deformation methods played a significant role in monitoring the influence of the different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of the formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence final quality (e.g. raw materials, bread-making procedure, time and temperature of fermentation and baking).
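
For reference, the TPA parameters mentioned above are obtained from a two-cycle compression curve with the standard definitions (hardness = first-cycle peak force, cohesiveness = ratio of the two compression areas, springiness = ratio of the recovered heights). A minimal sketch, assuming equally sampled force arrays (names are ours):

```python
import numpy as np

def tpa_metrics(force1, force2, height1, height2):
    """Standard TPA parameters from a two-cycle compression test.

    force1, force2:   equally sampled force arrays of the two compressions
    height1, height2: deformation spans recovered in the two cycles
    """
    hardness = float(np.max(force1))                    # peak force, cycle 1
    cohesiveness = float(force2.sum() / force1.sum())   # area ratio (rectangle
                                                        # rule, equal sampling)
    springiness = height2 / height1                     # recovered-height ratio
    return hardness, cohesiveness, springiness
```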

Relevance: 100.00%

Abstract:

In recent years, an ever-increasing degree of automation has been observed in industrial processes. This trend is driven by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency, and low costs of design, realization and maintenance. The growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To appreciate how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. Their complexity is also reflected in the fact that the consortium of machine producers has estimated that around 350 types of manufacturing machines exist. A large number of manufacturing-machine industries, and notably packaging-machine industries, are based in Italy; a particularly high concentration is located in the Bologna area, which for this reason is called the "packaging valley". Usually the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating them to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system governing the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support to machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for implementing both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic-control components that, by focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power-electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic-control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional and the technological view. In industrial-automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic-control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software-engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety of technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also carry out other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, in complex systems fault occurrences increase along with performance.
This is a consequence of the fact that in complex systems such as AMSs, alongside reliable mechanical elements, an increasing number of electronic devices are present, which are by nature more vulnerable. The diagnosis and fault-isolation problem in a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, if necessary, reconfiguring the control system so that faults are tolerated. On this topic, important results for the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software-engineering paradigm as applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined through a novel entity, the Generalized Device, to achieve better reusability and modularity of the control logic. Chapter 5 presents a new Discrete Event Systems-based approach to formal software verification and an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports concepts and results about Discrete Event Systems that should help the reader understand some crucial points of Chapter 5; Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
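
Since the proposed verification and diagnosis approach builds on Discrete Event Systems theory, a minimal DES sketch may help fix ideas: a finite automaton for a hypothetical actuator whose unobservable fault event manifests itself through an observable timeout. This is purely illustrative and is not the thesis's Generalized Actuator model:

```python
from dataclasses import dataclass, field

@dataclass
class Automaton:
    transitions: dict        # (state, event) -> next state
    initial: str
    observable: set = field(default_factory=set)

    def run(self, events):
        """Replay an event trace; return the reached state, or None if the
        trace is not defined in the model."""
        s = self.initial
        for e in events:
            s = self.transitions.get((s, e))
            if s is None:
                return None
        return s

# Hypothetical actuator: 'fault' is unobservable; the observable 'timeout'
# is the symptom that makes the fault diagnosable.
actuator = Automaton(
    transitions={
        ("idle", "start"): "moving",
        ("moving", "done"): "idle",
        ("moving", "fault"): "stuck",
        ("stuck", "timeout"): "alarm",
    },
    initial="idle",
    observable={"start", "done", "timeout"},
)
assert actuator.run(["start", "fault", "timeout"]) == "alarm"
```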

Relevance: 100.00%

Abstract:

The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark-Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is particle identification. The large Time-Of-Flight (TOF) detector is the main particle-identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly-segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for the proton-proton and Pb-Pb beams are detailed as well, with estimates of the efficiencies and sample purities.
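
The quoted momentum reach follows from the standard time-of-flight identification relation (textbook form, our notation): a particle of momentum p covering a path L in a time t has

```latex
\beta = \frac{L}{c\,t},
\qquad
m = \frac{p}{c}\,\sqrt{\frac{1}{\beta^{2}} - 1}
```

so at fixed momentum heavier particles arrive later, and the pi/K and K/p arrival-time separations shrink with increasing momentum until they become comparable to the ~80 ps resolution.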

Relevance: 100.00%

Abstract:

This thesis presents and discusses TEDA, an algorithm for the automatic real-time detection of tsunamis and large-amplitude waves in sea-level records. TEDA has been developed within the Tsunami Research Team of the University of Bologna for coastal tide gauges, and it has been calibrated and tested on the tide-gauge station of Adak Island, Alaska. A preliminary study applying TEDA to offshore buoys in the Pacific Ocean is also presented.
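
The abstract does not detail TEDA's algorithm, so the following is only a generic illustration of real-time detection on a tide-gauge stream (the window length, threshold and all names are hypothetical, not TEDA's actual parameters):

```python
import collections
import statistics

def detect_large_waves(stream, window=360, k=5.0):
    """Generic sliding-window detector (NOT TEDA): flag samples deviating
    from the recent background by more than k standard deviations.

    stream: iterable of sea-level samples (metres), e.g. one per 10 s
    """
    buf = collections.deque(maxlen=window)
    for x in stream:
        if len(buf) == buf.maxlen:
            mu = statistics.fmean(buf)
            sigma = statistics.pstdev(buf) or 1e-9   # avoid zero division
            if abs(x - mu) > k * sigma:
                yield x   # candidate tsunami / large-amplitude wave
        buf.append(x)
```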

Relevance: 100.00%

Abstract:

The worldwide demand for clean, low-fuel-consumption transport promotes the development of safe, high-energy and high-power electrochemical storage and conversion systems. Lithium-ion batteries (LIBs) are today considered the best technology for this application, as demonstrated by the recent interest of the automotive industry in hybrid (HEV) and electric vehicles (EV) based on LIBs. This thesis work, starting from the synthesis and characterization of electrode materials and the use of non-conventional electrolytes, demonstrates that LIBs with novel, safe electrolytes and electrode materials meet the targets of specific energy and power established by the U.S. Department of Energy (DOE) for automotive application in HEVs and EVs. Chapter 2 reports the origin of all chemicals used, the description of the instruments used for synthesis and chemical-physical characterization, the electrode preparation, the battery configurations, and the electrochemical characterization procedures for electrodes and batteries. Since the electrolyte is the most critical point of a battery, particularly in large-format modules, chapter 3 focuses on the characterization of innovative, safe electrolytes based on ionic liquids (characterized by high boiling/decomposition points, thermal and electrochemical stability and appreciable conductivity) and on mixtures of ionic liquid with conventional electrolyte. Chapter 4 discusses the microwave-accelerated sol–gel synthesis of carbon-coated lithium iron phosphate (LiFePO4-C), an excellent cathode material for LIBs thanks to its intrinsic safety and tolerance to abusive conditions, which showed excellent electrochemical performance in terms of specific capacity and stability. Chapter 5 presents the chemical-physical and electrochemical characterization of graphite and titanium-based anode materials in different electrolytes. We also characterized a new anode material, an amorphous SnCo alloy, synthesized with a nanowire morphology that proved to strongly enhance the electrochemical stability of the material during galvanostatic full charge/discharge cycling. Finally, chapter 6 reports different types of batteries, assembled using the LiFePO4-C cathode material with different anode materials and electrolytes, characterized by deep galvanostatic charge/discharge cycles at different C-rates and by the DOE test procedures for evaluating pulse-power capability and available energy. First, we tested a battery with the innovative LiFePO4-C cathode material, a conventional graphite anode and a carbonate-based electrolyte (EC:DMC LiPF6 1M), which easily surpassed the target for power-assist HEV application. Given that the big concern with conventional lithium-ion batteries is the flammability of their highly volatile organic carbonate-based electrolytes, we then made safe batteries with electrolytes based on ionic liquids (ILs). In order to use a graphite anode in an IL electrolyte, we added to the IL 10% w/w of vinylene carbonate (VC), which produces a stable SEI (solid electrolyte interphase) and prevents the graphite-exfoliation phenomenon. We then assembled batteries with LiFePO4-C cathode, graphite anode and PYR14TFSI 0.4m LiTFSI with 10% w/w of VC, which exceeded the DOE targets for HEV application and were stable for over 275 cycles.
We also assembled and characterized "high-safety" batteries with electrolytes based on the pure IL, PYR14TFSI with 0.4m LiTFSI as lithium salt, and on a mixture of this IL and standard electrolyte (PYR14TFSI 50% w/w and EC:DMC LiPF6 50% w/w), using titanium-based anodes (TiO2 and Li4Ti5O12), which are commonly considered safer than graphite under abusive conditions. The batteries bearing the pure ionic liquid did not satisfy the targets for HEV application, but the batteries with the Li4Ti5O12 anode and the 50-50 mixture electrolyte were able to surpass them. We also assembled and characterized a lithium battery (with lithium-metal anode) with a polymeric electrolyte based on poly(ethylene oxide) (PEO20–LiCF3SO3 + 10% ZrO2), which satisfied the targets for EV application and showed very impressive cycling stability. In conclusion, we developed three lithium-ion batteries of different chemistries that proved suitable for application in power-assist hybrid vehicles: graphite/EC:DMC LiPF6/LiFePO4-C, graphite/PYR14TFSI 0.4m LiTFSI with 10% VC/LiFePO4-C, and Li4Ti5O12/PYR14TFSI 50%-EC:DMC LiPF6 50%/LiFePO4-C. We also demonstrated that an all-solid-state polymer lithium battery such as Li/PEO20–LiCF3SO3 + 10% ZrO2/LiFePO4-C is suitable for application in electric vehicles. Furthermore, we developed a promising anode material alternative to graphite, based on an amorphous SnCo alloy.
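
For reference, the specific capacity quoted in galvanostatic cycling follows from the constant current, the discharge time and the active-material mass; a minimal sketch (the numbers in the example are illustrative, not the thesis's results):

```python
def specific_capacity_mah_per_g(current_a, time_s, active_mass_g):
    """Specific capacity of a galvanostatic (constant-current) half cycle.

    Q = I * t, converted from ampere-seconds to the customary mAh/g.
    """
    return current_a * time_s / 3.6 / active_mass_g

# Illustrative numbers only: 1.5 mA sustained for 1 h on 10 mg of active
# material corresponds to 150 mAh/g.
q = specific_capacity_mah_per_g(1.5e-3, 3600.0, 0.010)
```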

Relevance: 100.00%

Abstract:

The continuous advancement and enhancement of wireless systems are enabling new compelling scenarios in which mobile services adapt to the current execution context, represented by the computational resources available at the local device, the current physical location, the people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and this introduces several unsolved complexities, spanning from low-level context-data transmission up to context-data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad-hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of such middleware solutions. We discuss the main functions needed by context-data distribution in large mobile systems, and we advocate the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI, used to reconfigure the main middleware components at runtime. We present the design and implementation of our proposals, in both simulation-based and real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to highlight the broad technical validity of our analysis and solutions under different network deployments and quality constraints.
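
A purely illustrative sketch of what a quality-based contract between a context consumer and a CDDI might look like (all field names and thresholds are hypothetical; the dissertation's actual contract model may differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityContract:
    max_staleness_s: float   # oldest acceptable age of a context item
    min_coverage: float      # fraction of relevant sources aggregated
    max_latency_s: float     # end-to-end delivery bound

def satisfies(c: QualityContract, age_s: float,
              coverage: float, latency_s: float) -> bool:
    """True if a delivered context item honours the consumer's contract;
    a violation would trigger reconfiguration of middleware components."""
    return (age_s <= c.max_staleness_s
            and coverage >= c.min_coverage
            and latency_s <= c.max_latency_s)

contract = QualityContract(max_staleness_s=5.0, min_coverage=0.8,
                           max_latency_s=0.5)
assert satisfies(contract, age_s=2.0, coverage=0.9, latency_s=0.1)
```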

Relevance: 100.00%

Abstract:

The improvement of devices brought about by nanotechnology has put forward new classes of sensors, called bio-nanosensors, which are very promising for the detection of biochemical molecules in a large variety of applications. Their use in labs-on-a-chip could give rise to new opportunities in many fields, from health care and bio-warfare to environmental monitoring and high-throughput screening for the pharmaceutical industry. Bio-nanosensors have great advantages in terms of cost, performance, and parallelization. Indeed, they require very low quantities of reagents and improve the overall signal-to-noise ratio, thanks to the increase of binding-signal variation per unit area and the reduction of stray capacitances. They also give rise to new challenges, such as the need to design high-performance, low-noise integrated electronic interfaces. This thesis concerns the design of high-performance advanced CMOS interfaces for electrochemical bio-nanosensors. The main focus of the thesis is: 1) critical analysis of noise in sensing interfaces, 2) devising new techniques for noise reduction in discrete-time approaches, 3) developing new architectures for low-noise, low-power sensing interfaces. The manuscript reports a multi-project activity focusing on low-noise design and presents two integrated circuits (ICs) developed as examples of advanced CMOS interfaces for bio-nanosensors. The first project concerns a low-noise current-sensing interface for DC and transient measurements of electrophysiological signals; the focus of this research activity is on the noise optimization of the electronic interface. A new noise-reduction technique has been developed to realize an integrated CMOS interface with performance comparable to state-of-the-art instrumentation. The second project aims to realize a stand-alone, high-accuracy electrochemical impedance spectroscopy interface. The system is tailored for conductivity-temperature-depth sensors in environmental applications, as well as for bio-nanosensors. It is based on a band-pass delta-sigma technique and combines low-noise performance with low-power requirements.
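
To illustrate the noise-shaping idea behind the delta-sigma technique mentioned above, here is a behavioural model of a first-order low-pass modulator (the thesis's interface uses a band-pass architecture; this simplified sketch and its names are ours):

```python
import numpy as np

def delta_sigma_first_order(x):
    """Discrete-time behavioural model of a first-order delta-sigma modulator.

    Input samples in [-1, 1]; returns the +/-1 bitstream whose local average
    tracks the input while quantization noise is pushed to high frequencies.
    """
    integ = 0.0
    bits = np.empty(len(x))
    for n, xn in enumerate(x):
        fb = 1.0 if integ >= 0 else -1.0   # 1-bit DAC feedback
        integ += xn - fb                   # integrate the tracking error
        bits[n] = 1.0 if integ >= 0 else -1.0
    return bits

# A slow input is recovered by low-pass filtering (here a moving average):
t = np.arange(20000)
x = 0.5 * np.sin(2 * np.pi * t / 4000.0)
y = delta_sigma_first_order(x)
recovered = np.convolve(y, np.ones(64) / 64, mode="same")
```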

Relevance: 100.00%

Abstract:

In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideally suited to estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large-scale structure (LSS) maps, as well as their individual auto-spectra. This tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all previous harmonic analyses in the literature. We describe the implementation of the QML method in the {\it BolISW} code and demonstrate its accuracy on simulated maps through a Monte Carlo. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. On the contrary, we neglect the noise in temperature, since WMAP is already cosmic-variance dominated on large scales. Because of a discrepancy between the estimated galaxy auto-spectrum and the theoretical model, we use two different galaxy distributions: the first with a constant bias $b$ and the second with a redshift-dependent bias $b(z)$. Finally, we use the angular power spectrum estimates obtained by the QML method to derive constraints on the dark-energy critical density in a flat $\Lambda$CDM model through different likelihood prescriptions. Using just the cross-correlation between WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70\% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ CL (confidence level).
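
Schematically, the QML estimator has the standard Tegmark-style form (our notation, up to convention-dependent factors): for a map vector $\mathbf{x}$ with covariance $\mathbf{C} = \mathbf{S}(C_\ell) + \mathbf{N}$,

```latex
y_\ell = \mathbf{x}^{T}\mathbf{C}^{-1}
         \frac{\partial\mathbf{C}}{\partial C_\ell}
         \mathbf{C}^{-1}\mathbf{x},
\qquad
\hat{C}_\ell = \frac{1}{2}\sum_{\ell'} (F^{-1})_{\ell\ell'}\,
               \bigl(y_{\ell'} - b_{\ell'}\bigr),
\qquad
F_{\ell\ell'} = \frac{1}{2}\,\mathrm{Tr}\!\left[
  \mathbf{C}^{-1}\frac{\partial\mathbf{C}}{\partial C_\ell}
  \mathbf{C}^{-1}\frac{\partial\mathbf{C}}{\partial C_{\ell'}}\right]
```

where $b_\ell$ is the noise-bias term; the inverse-Fisher weighting is what makes the estimator unbiased and minimum-variance.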

Relevance: 100.00%

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for real-time magnitude estimation is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated, with the aim of understanding whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis focuses on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested: the first is a threshold-based method using traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
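
As a purely generic illustration of the threshold-based idea (the thesis's actual method, parameters and thresholds are not given in the abstract; everything here is hypothetical), a check on the peak displacement measured in the first seconds of the P wave:

```python
import numpy as np

def p_wave_alert(displacement_m, dt_s, window_s=3.0, threshold_m=1e-3):
    """True if peak |displacement| within the first `window_s` seconds of
    the P-wave record exceeds the alert threshold (values hypothetical)."""
    n = int(window_s / dt_s)
    return bool(np.max(np.abs(displacement_m[:n])) > threshold_m)

# e.g. 100 Hz record: alert if the first 3 s exceed 1 mm peak displacement
rec = np.zeros(3000)
rec[150] = 2e-3
assert p_wave_alert(rec, dt_s=0.01)
```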