Abstract:
The Seed Vigor Imaging System (SVIS®) software has been successfully used to evaluate seed physiological potential through automated analysis of scanned seedlings. In this research, the efficiency of this system was compared to other accepted tests for assessing seed vigor in distinct lots of cucumber (Cucumis sativus L.) seeds of the Supremo and Safira cultivars. Seeds were subjected to germination, traditional and saturated salt accelerated aging, seedling emergence, seedling length and SVIS analyses (determination of vigor indices and seedling growth uniformity, and of the lengths of the primary root, hypocotyl and whole seedling). It was also determined whether the definition of seedling growth/uniformity ratios affects the sensitivity of the SVIS®. Results showed that SVIS analyses provided consistent identification of seed lot performance and produced information comparable to that from recommended seed vigor tests, demonstrating suitable sensitivity for a rapid and objective evaluation of the physiological potential of cucumber seeds. Analyses of four-day-old cucumber seedlings using the SVIS® are more accurate, and the growth/uniformity ratio does not affect the precision of the results.
Abstract:
Human endogenous retroviruses (HERVs) arise from ancient infections of host germline cells by exogenous retroviruses and constitute 8% of the human genome. Elevated levels of envelope transcripts from HERV-W have been detected in CSF, plasma and brain tissues of patients with Multiple Sclerosis (MS), most of them originating from loci on chromosomes Xq22.3, 15q21.3 and 6q21. However, since the locus Xq22.3 (ERVWE2) lacks the 5' LTR promoter and the putative protein should be truncated by a stop codon, we investigated the ERVWE2 genomic locus in 84 individuals, including MS patients with active HERV-W expression detected in PBMC. In addition, an automated search for promoter sequences in the 20 kb region flanking the ERVWE2 reference sequence was performed. Several putative binding sites for cellular cofactors and enhancers were found, suggesting that transcription may occur via alternative promoters. However, ERVWE2 DNA sequencing of MS patients and healthy individuals revealed that all of them harbor a stop codon at site 39, undermining the expression of a full-length protein. Finally, since plaque formation in the central nervous system (CNS) of MS patients is attributed to immunological mechanisms triggered by an autoimmune attack against myelin, we also investigated the level of similarity between the envelope protein and myelin oligodendrocyte glycoprotein (MOG). Comparison of MOG to the envelope identified five retroviral regions similar to the Ig-like domain of MOG. Interestingly, one of them includes T and B cell epitopes capable of inducing T effector functions and circulating antibodies in rats. In sum, although no DNA substitutions linking ERVWE2 to MS pathogenesis were found, the similarity between the envelope protein and MOG supports the idea that ERVWE2 may be involved in the immunopathogenesis of MS, possibly facilitating recognition of MOG by the immune system.
Although experimental evidence is still awaited, the data presented here may expand the scope of endogenous retrovirus involvement in MS pathogenesis.
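The region-similarity comparison described above can be illustrated, in a much simplified form, by a sliding-window identity scan between two peptide strings. This is only a toy sketch: the sequences below are hypothetical placeholders, not the real Env or MOG peptides, and actual protein comparisons use proper alignment tools with substitution matrices.

```python
# Naive sliding-window similarity scan between two short peptide sequences:
# slide the query along the target and report the best percent identity.
def best_window_identity(query, target):
    w = len(query)
    best_pos, best_id = 0, 0.0
    for i in range(len(target) - w + 1):
        matches = sum(q == t for q, t in zip(query, target[i:i + w]))
        ident = matches / w
        if ident > best_id:
            best_pos, best_id = i, ident
    return best_pos, best_id

# toy sequences (hypothetical, not the actual Env/MOG epitopes)
pos, ident = best_window_identity("KLMQ", "AAKLMQRST")
```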
Abstract:
Background: This study aims to design an empirical test of the sensitivity of prescribing doctors to the price afforded by the patient, and to apply it to population data of primary care dispensations for cardiovascular disease and mental illness in the Spanish National Health System (NHS). Implications for drug policies are discussed. Methods: We used population data on 17 therapeutic groups of cardiovascular and mental illness drugs aggregated by health area to obtain 1424 observations ((8 cardiovascular groups × 70 areas) + (9 psychotropic groups × 96 areas)). All drugs are free for pensioners. For non-pensioner patients, 10 of the 17 therapeutic groups have a reduced copayment (RC) status of only 10% of the price, with a ceiling of €2.64 per pack, while the remaining 7 groups have a full copayment (FC) rate of 40%. Differences in the average price of dispensations to pensioners and non-pensioners were modelled with multilevel regression models to test the following hypotheses: 1) for FC drugs there is a significant positive difference between the average prices of drugs prescribed to pensioners and non-pensioners; 2) for RC drugs there is no significant price differential between pensioner and non-pensioner patients; 3) the price differential of FC drugs prescribed to pensioners and non-pensioners grows with the price of the drug. Results: The average monthly price of dispensations to pensioners and non-pensioners does not differ for RC drugs, but for FC drugs pensioners receive more expensive dispensations than non-pensioners (estimated difference of €9.74 per DDD and month). There is a positive and significant effect of the drug price on the price differential between pensioners and non-pensioners. For FC drugs, each additional euro of drug price increases the differential by nearly half a euro (0.492). We did not find any significant differences in the intensity of the price effect among FC therapeutic groups.
Conclusions: Doctors working in the Spanish NHS seem to be sensitive to the price that patients can afford when they write prescriptions, although alternative hypotheses could also explain these results.
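The third hypothesis, that the pensioner/non-pensioner price differential grows with the drug price, is essentially a regression of the differential on price. The study itself uses multilevel models; the minimal sketch below shows only the fixed-effect idea with ordinary least squares on synthetic, noise-free data, where the 0.492 slope from the abstract is built in and the intercept is hypothetical.

```python
def ols(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Synthetic FC-drug data: differential grows ~0.492 euro per euro of price
prices = [5.0, 10.0, 20.0, 40.0, 80.0]          # hypothetical drug prices
diffs = [0.492 * p + 1.0 for p in prices]       # hypothetical, noise-free
intercept, slope = ols(prices, diffs)
```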
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are increasingly programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
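As a minimal illustration of the allocation/scheduling problem discussed above, the sketch below greedily places precedence-constrained tasks on the processor that finishes them earliest, a classic list-scheduling heuristic of the incomplete-search family the thesis contrasts with its exact methods. The task graph, durations and two-processor platform are hypothetical.

```python
def list_schedule(durations, deps, n_procs):
    """Greedy list scheduling: place each task (taken in a precedence-
    respecting order) on the processor where it finishes earliest."""
    proc_free = [0.0] * n_procs          # time at which each processor idles
    finish = {}                          # task id -> finish time
    for task in sorted(durations):       # assume ids respect precedence order
        ready = max((finish[d] for d in deps.get(task, ())), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[task] = start + durations[task]
        proc_free[p] = finish[task]
    return max(finish.values())          # makespan of the schedule

# hypothetical task graph: t2 and t3 depend on t1, t4 joins t2 and t3
durations = {1: 2.0, 2: 3.0, 3: 1.0, 4: 2.0}
deps = {2: (1,), 3: (1,), 4: (2, 3)}
makespan = list_schedule(durations, deps, n_procs=2)
```

Unlike the complete-search approaches developed in the thesis, a heuristic like this gives no bound on its distance from the optimal makespan.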
Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, sometimes even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructure required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards larger LCDs, to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low power technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others decrease the backlight level while compensating for the luminance reduction, and thus for the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, summarizing the main research contributions discussed throughout the dissertation.
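The backlight autoregulation idea described above can be sketched in a few lines: dim the backlight to a fraction b of full brightness and scale pixel luminance by 1/b so that perceived brightness is roughly preserved, at the cost of clipping the brightest pixels. This is a minimal software model, whereas the thesis offloads the scaling to the hardware image processing unit; the 8-bit pixel range and the linear backlight power model are simplifying assumptions.

```python
def compensate_frame(pixels, b):
    """Scale 8-bit pixel values by 1/b to offset a backlight dimmed to b."""
    return [min(255, round(p / b)) for p in pixels]

def backlight_power(b, p_max=1.0):
    # simplifying assumption: backlight power roughly linear in level b
    return p_max * b

frame = [10, 100, 200, 250]          # hypothetical 8-bit luminance values
out = compensate_frame(frame, 0.5)   # halve the backlight, double the pixels
# -> [20, 200, 255, 255]: the two brightest pixels clip at 255
```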
Abstract:
Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by the observations. In the AUS scheme, assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a quasi-geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, at a reasonable computational cost compared, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, the tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
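The core AUS idea, projecting the observation increment onto a bred unstable direction, can be sketched in the Lorenz 1963 model. This is only a toy twin experiment under strong simplifying assumptions: a single bred vector, noise-free identity observations of the full state, no error covariances, and hypothetical cycle parameters; the actual AUS-BDAS scheme is considerably more refined.

```python
import math

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4(s, dt):
    def add(a, b, h):
        return tuple(ai + h * bi for ai, bi in zip(a, b))
    k1 = lorenz63(s)
    k2 = lorenz63(add(s, k1, dt / 2))
    k3 = lorenz63(add(s, k2, dt / 2))
    k4 = lorenz63(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def aus_cycle(xa, bred, obs, dt=0.01, steps=25, bred_size=1e-3):
    """Forecast control and bred trajectories, then confine the analysis
    increment to the bred (unstable) direction."""
    xf, xp = xa, tuple(a + b for a, b in zip(xa, bred))
    for _ in range(steps):
        xf, xp = rk4(xf, dt), rk4(xp, dt)
    pert = tuple(p - f for p, f in zip(xp, xf))
    e = tuple(p / norm(pert) for p in pert)          # unit unstable direction
    innov = tuple(o - f for o, f in zip(obs, xf))    # observation increment
    proj = sum(ei * di for ei, di in zip(e, innov))
    xa_new = tuple(f + proj * ei for f, ei in zip(xf, e))
    bred_new = tuple(bred_size * ei for ei in e)     # renormalized bred vector
    return xf, xa_new, bred_new

# twin experiment: noise-free observations of the full true state
truth = (1.0, 1.0, 1.0)
xa, bred = (1.2, 0.8, 1.1), (1e-3, 0.0, 0.0)
errors = []
for _ in range(8):
    for _ in range(25):
        truth = rk4(truth, 0.01)
    xf, xa, bred = aus_cycle(xa, bred, truth)
    errors.append((norm(tuple(f - t for f, t in zip(xf, truth))),
                   norm(tuple(a - t for a, t in zip(xa, truth)))))
```

Because the increment is the orthogonal projection of the innovation onto the bred direction, each analysis error is never larger than the corresponding forecast error in this noise-free setting.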
Abstract:
This thesis studies the performance of the Drift Tubes Local Trigger System of the CMS detector. CMS is one of the general purpose experiments that will operate at the Large Hadron Collider at CERN. Results are presented from data collected during the Cosmic Run At Four Tesla (CRAFT) commissioning exercise, a globally coordinated run period in which the full experiment was involved and configured to detect cosmic rays crossing the CMS cavern. These include analyses of the precision and accuracy of the trigger reconstruction mechanism and a measurement of the trigger efficiency. A method to perform system synchronization is also described, together with a comparison of the outcomes of the trigger electronics and its software emulator code.
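A trigger efficiency measurement of the kind mentioned above reduces, in its simplest form, to a counting ratio with a binomial uncertainty. The sketch below uses hypothetical counts; real analyses typically prefer proper binomial intervals (e.g. Clopper-Pearson) near efficiencies of 0 or 1.

```python
import math

def trigger_efficiency(n_triggered, n_total):
    """Efficiency as a counting ratio, with its binomial standard error."""
    eff = n_triggered / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

eff, err = trigger_efficiency(80, 100)   # hypothetical event counts
```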
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature will probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory implied the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well, demonstrating, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board will be deployed within the NEMO Phase 2 tower, in one of the floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic of the current DAQ board discussed in this thesis. As regards the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter, I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
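The pitch/sqrt(12) formula mentioned above is the standard deviation of a uniform distribution over one pixel pitch, i.e. the resolution expected from a purely binary (hit/no-hit) readout. A quick sketch, with a hypothetical 50 um pitch (the APSEL-4D pitch is not stated here):

```python
import math

def binary_readout_resolution(pitch_um):
    """Expected position resolution of a binary pixel readout: pitch/sqrt(12),
    the standard deviation of a uniform distribution over one pitch."""
    return pitch_um / math.sqrt(12)

res = binary_readout_resolution(50.0)   # hypothetical 50 um pixel pitch
```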
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement of 8 m-class telescopes. We know that, in order to obtain the maximum resolution these telescopes allow, we need to correct for atmospheric turbulence. Thanks to adaptive optics systems we can use the full effective potential of these instruments, drawing as much information as possible from the observed sources. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which can assume a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do this, the REC requires a "common language" between the two main AO components: a mapping between sensor space and mirror space, called the interaction matrix (IM). Therefore, in order to operate correctly, an AO system has one main requirement: the measurement of an IM, which provides the calibration of the whole AO system. The IM measurement is a milestone for an AO system and must be performed regardless of the telescope size or class. Usually this calibration step is done by adding to the telescope an auxiliary artificial light source (i.e., a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For large telescopes (more than 8 m, like the Extremely Large Telescopes, ELTs), fiber-based IM measurement requires challenging optical setups that in some cases are also impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source. Such a technique could be used to calibrate the AO system of a telescope of any size.
We want to test this new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: an adaptive secondary mirror with 672 actuators and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and tests of some optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to that of the telescope, testing the main LBT AO components: the pyramid sensor and the adaptive secondary mirror. This enabled the second phase of my PhD thesis: the measurement of the IM using the sinusoidal modulation technique. At first we measured the IM using an auxiliary fiber source to calibrate the system, without any injected disturbance. After that, we applied the calibration technique to measure the IM directly "on sky", i.e., with an atmospheric disturbance added to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower system, are crucial for future developments: the possibility of acquiring the IM directly on sky means that we can calibrate an AO system even for the extremely large telescope class, where classic IM measurement techniques are problematic and sometimes impossible. Finally, we should not forget the reason for all this: the main aim is to observe the universe. Thanks to this new class of large telescopes, and only by using their full capabilities, we will be able to increase our knowledge of the objects observed, because we will resolve more detailed features, discovering, analyzing and understanding the behavior of the universe's components.
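The role of the interaction matrix can be illustrated with a toy linear model: poke each actuator in push-pull to measure the IM column by column, then invert it in the least-squares sense to obtain the reconstructor. This sketches the classic fiber-based calibration (not the sinusoidal modulation technique itself), on a hypothetical 3-slope, 2-actuator system; real AO systems use thousands of slopes and modal pseudo-inversion.

```python
def sensor(cmds, M):
    """Toy linear wavefront sensor: slopes are a fixed linear map of commands."""
    return [sum(M[i][j] * cmds[j] for j in range(len(cmds))) for i in range(len(M))]

def measure_im(M, n_act, amp=0.1):
    """Push-pull IM measurement: poke each actuator +/-amp, difference slopes."""
    cols = []
    for j in range(n_act):
        push = [amp if k == j else 0.0 for k in range(n_act)]
        pull = [-c for c in push]
        sp, sm = sensor(push, M), sensor(pull, M)
        cols.append([(a - b) / (2 * amp) for a, b in zip(sp, sm)])
    return [[cols[j][i] for j in range(n_act)] for i in range(len(M))]

def reconstruct(im, slopes):
    """Least-squares reconstructor for 2 actuators via the normal equations
    (IM^T IM) c = IM^T s, solved with Cramer's rule."""
    n = 2
    ata = [[sum(im[k][i] * im[k][j] for k in range(len(im))) for j in range(n)]
           for i in range(n)]
    atb = [sum(im[k][i] * slopes[k] for k in range(len(im))) for i in range(n)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return [(atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

# hypothetical 3-slope, 2-actuator system
M = [[1.0, 0.2], [0.1, 0.9], [0.3, 0.4]]
true_cmds = [0.5, -0.3]
im = measure_im(M, 2)
rec = reconstruct(im, sensor(true_cmds, M))
```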
Abstract:
The Italian radio telescopes are currently undergoing a major upgrade, in response to the growing demand for deep radio observations such as surveys of large sky areas or observations of vast samples of compact radio sources. The optimal employment of the Italian antennas, originally constructed mainly for VLBI activities and provided with a control system (FS, Field System) not tailored to single-dish observations, required important modifications, in particular to the guiding software and the data acquisition system. The production of a completely new control system for the Medicina dish, called ESCS (Enhanced Single-dish Control System), started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry; ESCS is also planned to be installed at the Noto site. A substantial part of this thesis work consisted in designing and developing subsystems within ESCS, in order to provide the software with tools to carry out large maps, spanning from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of standard single-dish output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed, while waiting for the SRT to be completed, on the Medicina antenna: an 18-26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is the only K-band multi-feed receiver currently available worldwide. The commissioning of the overall hardware/software system constituted a considerable section of the thesis work.
Tests were conducted in order to verify the system stability and its capabilities, down to sensitivity levels that had never been reached at Medicina with the previous observing techniques and hardware. The aim was also to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to cover large areas of the sky quickly (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project aims at a full northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed both for ESCS itself and for the multi-feed/backend system. The KNoWS group, of which I am part, supported the commissioning activities, also providing map-making and source-extraction tools, in order to complete the necessary data reduction pipeline and assess the overall scientific capabilities of the system. The K-band observations, carried out in several sessions over the December 2008-March 2010 period, were accompanied by a 5 GHz test survey performed during the summertime, which is not suitable for high-frequency observations. This activity was conceived to check the new analogue backend separately from the multi-feed receiver, and to simultaneously produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar cap survey complementing PMN-GB6 and providing all-sky coverage at 5 GHz).
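The beam-size/sensitivity trade-off mentioned above follows from diffraction: the primary-beam width scales as the wavelength over the dish diameter. A rough estimate for a 32-m dish at 21 GHz (the 1.22 factor is the Airy-disk convention; the true FWHM depends on the feed illumination taper):

```python
import math

C = 2.998e8  # speed of light, m/s

def beam_fwhm_arcmin(freq_hz, dish_diameter_m, k=1.22):
    """Approximate primary-beam FWHM ~ k * lambda / D, in arcminutes."""
    lam = C / freq_hz
    return math.degrees(k * lam / dish_diameter_m) * 60.0

fwhm = beam_fwhm_arcmin(21e9, 32.0)   # 32-m dish at 21 GHz: ~1.9 arcmin
```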
Abstract:
Construction of a continuous, multidimensional high-performance liquid chromatography system for the separation of proteins and peptides with integrated size-selective sample fractionation. A multidimensional HPLC separation method was developed for proteins and peptides with a molecular weight of <15 kDa. In the first step, the target analytes are separated from higher-molecular-weight and non-ionic components using restricted access materials (RAM) with ion-exchange functionality. The proteins are then separated on an analytical ion-exchange column and on reversed-phase (RP) columns. To avoid sample losses, a continuously operating, fully automated system based on different separation speeds and four parallel RP columns was built. Two RP columns are eluted simultaneously, but with staggered start times, so that flat gradients still provide sufficient separation performance. While the third column is regenerated, the fourth column is loaded by enriching the proteins and peptides at the column head. During the total analysis time of 96 minutes, fractions from the first dimension are transferred to the RP columns at 4-minute intervals and separated within 8 minutes, resulting in 24 RP chromatograms. Test substances included standard proteins as well as proteins and peptides from human hemofiltrate and from lung fibroblast cell culture supernatants. Fractions were also collected and analyzed by MALDI-TOF mass spectrometry. In a single injection, more than 1000 peaks were resolved in the 24 RP chromatograms. The theoretical peak capacity is approximately 3000.
Abstract:
The primary objective of this thesis is to obtain a better understanding of the 3D velocity structure of the lithosphere in central Italy. To this end, I adopted the Spectral-Element Method (SEM) to perform accurate numerical simulations of the complex wavefields generated by the 2009 Mw 6.3 L'Aquila event and by its foreshocks and aftershocks, together with some additional events within our target region. For the mainshock, the source was represented by a finite fault, and different models for central Italy, both 1D and 3D, were tested. Surface topography, attenuation and the Moho discontinuity were also accounted for. Three-component synthetic waveforms were compared to the corresponding recorded data. The results of these analyses show that 3D models, including all the known structural heterogeneities of the region, are essential to accurately reproduce wave propagation. They capture features of the seismograms mainly related to topography or to low-wavespeed areas and, combined with a finite fault model, result in a favorable match between data and synthetics for frequencies up to ~0.5 Hz. We also obtained peak ground velocity maps, which provide valuable information for seismic hazard assessment. The remaining differences between data and synthetics led us to combine SEM with an adjoint method to iteratively improve the available 3D structure model for central Italy. A total of 63 events and 52 stations in the region were considered. We performed five iterations of the tomographic inversion, calculating the misfit function gradient, necessary for the model update, from adjoint sensitivity kernels constructed using only two simulations per event. Our latest updated model features a reduced traveltime misfit function and improved agreement between data and synthetics, although further iterations, as well as refined source solutions, are necessary to obtain a new reference 3D model for central Italy tomography.
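The iterative misfit reduction described above can be illustrated by a one-parameter toy analogue: a uniform-velocity model updated by steepest descent on a least-squares traveltime misfit. In the real inversion the gradient comes from adjoint sensitivity kernels over a 3D model with millions of parameters; the ray lengths, velocities and step length below are all hypothetical.

```python
def traveltime_misfit_and_gradient(v, path_lengths, t_obs):
    """Least-squares traveltime misfit for a uniform-velocity toy model,
    with its analytic gradient (t_pred = L / v for a ray of length L)."""
    misfit, grad = 0.0, 0.0
    for L, t in zip(path_lengths, t_obs):
        r = L / v - t                  # traveltime residual
        misfit += 0.5 * r * r
        grad += r * (-L / v ** 2)      # chain rule through t_pred = L / v
    return misfit, grad

# synthetic "observations" generated by a true velocity of 3.5 km/s
path_lengths = [10.0, 25.0, 40.0]      # ray lengths, km
t_obs = [L / 3.5 for L in path_lengths]

v = 3.0                                # starting model, km/s
for _ in range(50):                    # steepest-descent iterations
    _, grad = traveltime_misfit_and_gradient(v, path_lengths, t_obs)
    v -= 0.02 * grad                   # fixed step length
```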
Resumo:
The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude from the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which imposes substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; it also allows overnight operation, thus saving time. Gas chromatography with flame ionization detection (FID), together with a dual-column approach, enables simultaneous detection with almost equal per-carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system; apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system by discussing the tests that were carried out. Finally, results for the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized by detection limits and uncertainties for each compound, by tests of the calibration mixture conditioning and of the carbon dioxide trap to determine their influence on the analyses, and finally by comparing the responses of calibrated substances over the period when the flight samples were analyzed. Comparison of both systems shows good agreement. However, because of the insufficient capacity of the CO2 trap, the signal of one column was suppressed by carbon dioxide breakthrough to such an extent that its results appeared unreliable.
Plausibility tests for the internal consistency of the given data sets are based on common patterns exhibited by tropospheric NMHCs. All tests show that samples from the first flights do not comply with the expected pattern. Additionally, detected alkene artefacts suggest potential problems with storage or contamination across all measurement flights. The last two flights, # 130-133 and # 166-169, comply with the tests and are therefore analyzed in detail. Samples were analyzed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their aging (NMHC ratios), and detected plumes were compared to the chemical signatures of Asian outflows. In the last chapter, future development of the presented system, with a focus on separation, is outlined. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through illustrations of sample treatment to overview diagrams for the measured flights.
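The "aging" analysis via NMHC ratios mentioned above rests on the fact that two hydrocarbons removed by OH at different rates change their concentration ratio with air-mass age. A minimal sketch of that estimate follows; the rate constants, the assumed OH concentration, and the initial ratio are illustrative values, not the thesis data.

```python
import math

# Photochemical age from the decay of an NMHC ratio: if A and B are
# removed only by OH with rate constants kA > kB, then
#   [A]/[B] = ([A]/[B])_0 * exp(-(kA - kB) * [OH] * t).
# Solving for t gives the air-mass "age". Values below are illustrative.
K_BUTANE = 2.4e-12   # cm^3 molecule^-1 s^-1, approx. OH rate constant
K_ETHANE = 2.5e-13   # cm^3 molecule^-1 s^-1
OH = 1.0e6           # molecules cm^-3, assumed mean OH concentration

def photochemical_age_days(ratio_0, ratio_obs):
    """Age (days) from the decay of the n-butane/ethane ratio."""
    t_seconds = math.log(ratio_0 / ratio_obs) / ((K_BUTANE - K_ETHANE) * OH)
    return t_seconds / 86400.0

print(round(photochemical_age_days(1.0, 0.5), 1))  # ratio halved
```

A halved ratio under these assumed values corresponds to roughly 3-4 days of processing; the absolute number depends strongly on the assumed OH field, which is why such ratios are usually interpreted as relative, not absolute, ages.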
Resumo:
In the present work, a multiphysics simulation of an innovative safety system for light water nuclear reactors is performed, with the aim of increasing the reliability of the main decay heat removal system. The system studied, denoted by the acronym PERSEO (in-Pool Energy Removal System for Emergency Operation), is able to remove the decay power from the primary side of a light water nuclear reactor through a heat suppression pool. The experimental facility, located at the SIET laboratories (Piacenza), is an evolution of the Thermal Valve concept, in which the triggering valve is installed on the liquid side, on a line connecting the two pools at the bottom. During normal operation the valve is closed, while in emergency conditions it opens and the heat exchanger is flooded, with consequent heat transfer from the primary side to the pool side. In order to verify correct system behavior during long-term accidental transients, two main experimental PERSEO tests are analyzed. For this purpose, a coupling is implemented between the one-dimensional system code CATHARE, which reproduces the system-scale behavior, and the three-dimensional CFD code NEPTUNE CFD, which allows a full investigation of the pools and the injector. The coupling between the two codes is realized through the boundary conditions. In a first analysis, the facility is simulated with the system code CATHARE V2.5 to validate the results against the experimental data. The comparison of the numerical results shows a different void distribution under boiling conditions inside the heat suppression pool for the two cases of a single-volume and a three-volume nodalization scheme of the pool. Finally, to improve the investigation of the void distribution inside the pool and of the temperature stratification phenomena below the injector, two- and three-dimensional CFD models with a simplified geometry of the system are adopted.
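The coupling through boundary conditions described above follows a common explicit pattern: at each time step the system-scale solver passes interface conditions to the 3D solver, which returns an updated pool-side state. The sketch below shows only that exchange pattern; both "solvers" are trivial stand-ins (not CATHARE or NEPTUNE CFD), and every function and coefficient is an assumption for illustration.

```python
# Minimal sketch of explicit code coupling via boundary-condition
# exchange. A toy 0D "system code" supplies interface mass flow and
# enthalpy; a toy "pool solver" advances the pool temperature with them.
def system_solver(t_pool):
    """Stand-in for the 1D system code: interface conditions (toy model)."""
    return {"mass_flow": 1.0, "enthalpy": 2000.0 - 2.0 * t_pool}

def cfd_pool_solver(bc, t_pool):
    """Stand-in for the 3D pool solver: one explicit update (toy model)."""
    return t_pool + 1e-4 * bc["mass_flow"] * bc["enthalpy"]

t_pool = 20.0
for step in range(100):                   # time-marching loop
    bc = system_solver(t_pool)            # 1D side -> interface conditions
    t_pool = cfd_pool_solver(bc, t_pool)  # 3D side -> updated pool state

print(round(t_pool, 1))
```

The key design point is that neither solver sees the other's internals: all information crosses the interface as boundary conditions once per step, which is what makes coupling two independently developed codes tractable.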
Resumo:
Dendritic systems, and in particular polyphenylene dendrimers, have recently attracted considerable attention from the synthetic organic chemistry community, as well as from photophysicists, particularly in the search for synthetic model analogues of photoelectric materials for fabricating organic light-emitting diodes (OLEDs), and in even more advanced areas of research such as light-harvesting systems, energy transfer, and non-host devices. Geometrically, dendrimers are unique systems that consist of a core, one or more dendrons, and surface groups. The different parts of the macromolecule can be selected to give the desired optoelectronic and processing properties. Compared to small-molecule or polymeric light-emitting materials, these dendritic materials can combine the benefits of both classes. The high molecular weights of these dendritic macromolecules, as well as the surface groups often attached to the distal ends of the dendrons, can improve solution processability, so that they can be deposited from solution by simple processes such as spin-coating and ink-jet printing. Moreover, even more so than traditional polymeric light-emitting materials, well-defined monodisperse dendrimers possess a purity comparable to that of small molecules, and as such can be fabricated into high-performance OLEDs. Most importantly, the emissive chromophores can be located at the core of the dendrimer, within the dendrons, and/or at the surface of the dendrimer because of its unique dendritic architecture. Therefore, the main goals of this thesis are the design, synthesis, and characterization of novel functional dendrimers, e.g. polytriphenylene dendrimers for blue fluorescent, as well as iridium(III) complex cored polyphenylene dendrimers for green and red phosphorescent, light-emitting diodes.
In addition to the above-mentioned advantages of dendrimer-based OLEDs, the modular molecular architecture and the various functionalized units at different locations in polyphenylene dendrimers open up tremendous scope for tuning a wide range of properties in addition to color, such as intermolecular interactions, charge mobility, quantum yield, and exciton diffusion. In conclusion, research into dendrimer-containing OLEDs combines fundamental aspects of organic semiconductor physics, novel and highly sophisticated organic synthetic chemistry, and elaborate device technology.
Resumo:
A first phase of the research activity was devoted to the study of the state of the art of infrastructure for cycling, bicycle use, and methods of evaluation. In this part, the candidate studied the "bicycle system" in countries with high bicycle use, and in particular in the Netherlands. An evaluation was carried out of the questionnaires of the survey conducted within the European project BICY on mobility in general in 13 cities of the participating countries. The questionnaire was designed, tested, and implemented, and was later validated by a test in Bologna. The results were corrected with information on the demographic situation and compared with official data. The cycling infrastructure analysis was conducted on the basis of information from the OpenStreetMap database. The activity consisted of programming algorithms in Python that extract infrastructure data from the database for a region, then sort and filter the cycling infrastructure while computing attributes such as the lengths of the arcs of the paths. The results obtained were compared with official data where available. The structure of the thesis is as follows: 1. Introduction: description of the state of cycling in several advanced countries, description of methods of analysis and their importance for implementing appropriate cycling policies; supply of and demand for bicycle infrastructure. 2. Survey on mobility: details of the investigation developed and the method of evaluation; the results obtained are presented and compared with official data. 3. Analysis of cycling infrastructure based on information from the OpenStreetMap database: describes the methods and algorithms developed during the PhD; the results obtained by the algorithms are compared with official data. 4. Discussion: the above results are discussed and compared; in particular, cycling demand is compared with the length of the cycle network within a city. 5. Conclusions.
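The kind of Python filtering and arc-length computation described in this abstract can be sketched as follows. This is not the thesis code: the hard-coded sample stands in for an OpenStreetMap extract, and the tag choices (`highway=cycleway`, presence of a `cycleway` key) and data layout are illustrative assumptions.

```python
import math

# Filter ways tagged as cycling infrastructure and sum their arc lengths
# using the haversine great-circle distance between consecutive nodes.
EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def cycleway_length_m(ways):
    """Total length of ways tagged as cycling infrastructure."""
    total = 0.0
    for way in ways:
        tags = way["tags"]
        if tags.get("highway") == "cycleway" or "cycleway" in tags:
            nodes = way["nodes"]  # list of (lat, lon) pairs
            for (la1, lo1), (la2, lo2) in zip(nodes, nodes[1:]):
                total += haversine_m(la1, lo1, la2, lo2)
    return total

sample = [
    {"tags": {"highway": "cycleway"},
     "nodes": [(44.494, 11.342), (44.495, 11.343)]},   # short Bologna-area arc
    {"tags": {"highway": "primary"},
     "nodes": [(44.5, 11.3), (44.6, 11.3)]},           # filtered out
]
print(round(cycleway_length_m(sample)))
```

Summing per-arc great-circle distances in this way is what allows the computed network lengths to be compared against the official figures mentioned above.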