64 results for exploit
Abstract:
Electrochemical biosensors provide an attractive means to analyze the content of a biological sample, since the direct conversion of a biological event into an electronic signal enables the development of cheap, small, portable and simple devices that allow multiplexed, real-time detection. At the same time, nanobiotechnology is drastically revolutionizing biosensor development, and different transduction strategies exploit concepts developed in this field to simplify the analysis for operators and end users, offering higher specificity, higher sensitivity, higher operational stability, integrated sample treatment and shorter analysis times. The aim of this PhD work has been the application of nanobiotechnological strategies to electrochemical biosensors for the detection of biological macromolecules. Specifically, one project focused on the application of a DNA nanotechnology called hybridization chain reaction (HCR) to amplify the hybridization signal in an electrochemical DNA biosensor. Another project concerned the development of an electrochemical biosensor based on a biological model membrane anchored to a solid surface (tBLM), for the recognition of interactions between the lipid membrane and different types of target molecules.
Abstract:
This thesis presents several techniques designed to drive a swarm of robots through an a-priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both are based on the study of interactions between entities (also called agents or units) in Multi-Agent Systems (MAS): the former belongs to the field of Artificial Intelligence, the latter to that of Distributed Systems. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of keeping the units in a desired, controlled formation. This approach preserves the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
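As a hedged illustration of the navigation core mentioned above, a minimal global-best Particle Swarm Optimization loop can be sketched as follows; the objective function, coefficient values and bounds are illustrative assumptions, not the parameters used in the thesis.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimize `objective` with a basic global-best PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: the swarm converges toward a hypothetical goal point (1, 2).
best, val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
```

In the robotic setting the objective would encode distance to the target plus obstacle penalties; here a plain quadratic stands in for it.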
Abstract:
This thesis collects the outcomes of a Ph.D. course in Telecommunications engineering and focuses on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization, two aspects that are critical for modern navigation and communication systems and that constitute the common denominator of the work. The thesis is organized in two parts: the former deals with interference management. We have proposed a novel technique for enhancing the sensitivity of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers that exploits their repetitive characteristics and strongly reduces the interference level at the receiver. The second part deals with code synchronization. In more detail, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and estimates the code epoch, the carrier frequency and the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. The study is particularly interesting since it enables the reduction of the code acquisition time without increasing the computational complexity.
Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme that exploits the code epoch information shared between neighbor nodes, according to the Peer-to-Peer paradigm.
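A minimal sketch of the code-epoch estimation that acquisition circuits perform: correlate the received chips with every cyclic shift of the local replica and pick the correlation peak. This is a textbook serial search, not the cooperative scheme proposed in the thesis; the code length, delay and noise level below are illustrative.

```python
import random

def acquire_code_phase(received, code):
    """Serial-search acquisition: correlate the received chips against
    every cyclic shift of the local replica and return the shift with
    the largest correlation (the estimated code epoch)."""
    n = len(code)
    best_shift, best_corr = 0, float("-inf")
    for shift in range(n):
        corr = sum(received[(i + shift) % n] * code[i] for i in range(n))
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift

# Illustration: a random +/-1 spreading code, a known delay, additive noise.
rng = random.Random(1)
code = [rng.choice((-1, 1)) for _ in range(127)]
delay = 40
rx = [code[(i - delay) % 127] + rng.gauss(0, 0.5) for i in range(127)]
```

The cooperative scheme in the thesis would let neighbor receivers share their estimated epochs to narrow this search window; here a single receiver scans all 127 shifts.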
Abstract:
In this thesis we investigated the versatility and potential applications of different kinds of alkylidene malonates, acetoacetates, malonamides and acetoacetamides. Our research group has devoted great attention to this class of compounds, since alkylidenes can be considered important intermediates in the synthesis of several scaffolds to be inserted into molecules of potential biological and pharmaceutical interest. The increasing use of alkylidenes is due to their ability to react as unsaturated electrophiles and to the possibility of exploiting them as intermediates for the introduction of different kinds of functionalities. The preparation of alkylidene malonates, acetoacetates, malonamides and acetoacetamides is presented in chapter 1. This section deals with the different preparation methods for alkylidenes that we developed over the last few years and with the technologies involved in each synthetic protocol. The reactivity that allowed us to use the alkylidenes as intermediates in the synthesis of scaffolds for biologically active compounds is shown in chapter 2. In particular, we discuss the most important reactions used to obtain the desired molecules and focus on their most interesting aspects. Finally, chapter 3 illustrates the potential applications and the related syntheses of potential bioactive compounds. The synthesized molecules find application in several fields, and for this reason we considered each class of compounds within its related branch of interest.
Abstract:
Hybrid vehicles (HVs), combining a conventional ICE-based powertrain with a secondary energy source that can also be converted into mechanical power, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. Mechanical, Electric, Hydraulic and Pneumatic Hybrid Vehicles. Among these, the Electric (HEV) and Mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly sized, and an effective Supervisory Control must be adopted to coordinate how the different power sources are managed and how they interact. Real-time controllers can be derived starting from the obtained optimal benchmark results. However, the application of these powerful instruments requires a simplified yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially as the complexity of the system grows, as in the HSF-HV system assessed in this Thesis. The first task of this dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It is shown how the chosen modeling paradigm affects both the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel HEV. The emissions level obtained under real-world driving conditions is substantially higher than the usual result obtained in a homologation cycle.
For this reason, the target of the corresponding section of the Thesis is an on-line control strategy capable of guaranteeing that the desired emissions level is respected, while minimizing fuel consumption and avoiding excessive battery depletion.
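To illustrate the kind of benchmark computed with Dynamic Programming, the following is a deliberately toy supervisory-control problem: at each time step the vehicle either burns fuel in the engine or draws one unit of battery charge at zero fuel cost. The model, costs and state grid are hypothetical and far simpler than the HSF-HV model treated in the thesis.

```python
import math

def dp_energy_management(demand, soc0, soc_max, fuel_per_kw=1.0):
    """Backward Dynamic Programming over a discretized battery state
    of charge (SoC). At every step the supervisor either runs the
    engine (fuel cost proportional to demand) or, if charge remains,
    draws one SoC unit from the battery at zero fuel cost.
    Returns (minimal fuel cost, optimal decision sequence)."""
    T = len(demand)
    INF = math.inf
    # cost[t][s]: minimal fuel from step t onward, starting with SoC s.
    cost = [[INF] * (soc_max + 1) for _ in range(T + 1)]
    choice = [[0] * (soc_max + 1) for _ in range(T)]
    for s in range(soc_max + 1):
        cost[T][s] = 0.0  # no cost after the trip ends
    for t in range(T - 1, -1, -1):
        for s in range(soc_max + 1):
            engine = fuel_per_kw * demand[t] + cost[t + 1][s]
            battery = cost[t + 1][s - 1] if s > 0 else INF
            if battery < engine:
                cost[t][s], choice[t][s] = battery, 1
            else:
                cost[t][s], choice[t][s] = engine, 0
    # Roll the optimal policy forward from the initial SoC.
    s, plan = soc0, []
    for t in range(T):
        u = choice[t][s]
        plan.append(u)
        s -= u
    return cost[0][soc0], plan

# With 2 units of charge, DP spends the battery on the two largest demands.
cost, plan = dp_energy_management(demand=[3, 1, 4, 2], soc0=2, soc_max=2)
```

The real benchmark problem has continuous power splits, efficiency maps and emission constraints, but the backward-recursion structure is the same.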
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. Assuming a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is computationally very intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of SCC temperature sensor readings and SCC power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows, accounting for temperature non-uniformities and self-heating during analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform that models in detail the performance, power and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures.
A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
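A sketch of the per-bank refresh adaptation idea: DRAM cell retention time roughly doubles for every ~10 °C of cooling, so cooler banks can be refreshed less often than the worst-case bank. The constants below (64 ms base period at 85 °C, a 10 °C doubling step, a 256 ms cap) are illustrative assumptions, not the calibrated values used in the thesis.

```python
def per_bank_refresh_ms(bank_temps_c, base_period_ms=64.0,
                        ref_temp_c=85.0, doubling_c=10.0,
                        max_period_ms=256.0):
    """Scale each bank's refresh period from the worst-case value at
    `ref_temp_c`, assuming retention time roughly doubles for every
    `doubling_c` degrees of cooling (rule-of-thumb constants)."""
    periods = []
    for t in bank_temps_c:
        p = base_period_ms * 2.0 ** ((ref_temp_c - t) / doubling_c)
        periods.append(min(max_period_ms, p))  # cap very cool banks
    return periods

# A bank at 65 C can be refreshed 4x less often than one at 85 C.
periods = per_bank_refresh_ms([85.0, 75.0, 65.0])
```

Since refresh power is roughly inversely proportional to the refresh period, lengthening the period on cool banks directly cuts the refresh energy that the 3D thermal analysis exposes.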
Abstract:
Modern software systems, in particular distributed ones, are everywhere around us and underpin our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance, and their complexity makes the verification of such properties a very challenging task. It is natural to expect these systems to be reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be useful, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of the Ph.D. was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side, we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side, we study advanced safety properties related to communication in complex distributed systems, such as deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
Abstract:
This thesis analyzes micro and macro aspects of applied fiscal policy issues. The first chapter investigates the extent to which the composition of local budget spending reacts to variations in fiscal rules. I consider the budgets of Italian municipalities and exploit specific changes in the Domestic Stability Pact's rules to perform a difference-in-discontinuities analysis. The results show that imposing a single cap on the total amount of consumption and investment is not as binding as two caps, one for consumption and a different one for investment. More specifically, consumption is driven by changes in wages and services spending, while investment relies on infrastructure movements. In addition, there is evidence that when an increase in investment is achieved, a higher budget deficit level follows. The second chapter analyzes the extent to which fiscal policy shocks can affect macro variables during business cycle fluctuations, distinguishing among three intervention channels: public taxation, consumption and investment. The econometric methodology implemented is a Panel Vector Autoregressive model with a structural characterization. The results show that fiscal shocks have different multipliers in expansion and contraction periods: output does not react during good times, while there are significant effects in bad ones. The third chapter evaluates the effects of fiscal policy announcements by the Italian government on the long-term sovereign bond spread of Italy relative to Germany. After collecting data on relevant fiscal policy announcements, we perform a comparative econometric analysis of the three cabinets that followed one another during the period 2009-2013.
The results suggest that only fiscal policy announcements made by members of Monti's cabinet were effective in significantly influencing the Italian spread in the expected direction, revealing a remarkable credibility gap between Berlusconi's and Letta's governments and Monti's administration.
Abstract:
In this thesis we focus on optimization and simulation techniques applied to strategic, tactical and operational problems arising in the healthcare sector. We first present three applications to the Emilia-Romagna Public Health System (SSR), developed in collaboration with the Agenzia Sanitaria e Sociale dell'Emilia-Romagna (ASSR), a regional center for innovation and improvement in health. The Agenzia launched a strategic campaign aimed at introducing Operations Research techniques as decision-making tools to support technological and organizational innovations. The three applications focus on the forecasting and fund allocation of medical specialty positions, the extension of the breast screening program, and operating theater planning. The case studies exploit the potential of combinatorial optimization, discrete event simulation and system dynamics techniques to solve resource-constrained problems arising within the Emilia-Romagna territory. We then present an application, in collaboration with the Dipartimento di Epidemiologia del Lazio, that focuses on the allocation of population demand for services to regional emergency departments. Finally, a simulation-optimization approach, developed in collaboration with the INESC TEC center of Porto, to evaluate matching policies for the kidney exchange problem is discussed.
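As a simple baseline for the kidney exchange matching policies mentioned above, a greedy search for two-way exchanges (mutually compatible patient-donor pairs) can be sketched as follows; the compatibility matrix is hypothetical, and the thesis's simulation-optimization approach is considerably richer than this greedy policy.

```python
def two_way_exchanges(compatible):
    """Greedy matching of two-way kidney exchanges: pairs (i, j) are
    matched when donor i suits patient j and donor j suits patient i.
    `compatible[i][j]` is True when the donor of pair i can give to
    the patient of pair j."""
    n = len(compatible)
    matched = [False] * n
    exchanges = []
    for i in range(n):
        if matched[i]:
            continue
        for j in range(i + 1, n):
            # Require mutual compatibility for a two-way swap.
            if not matched[j] and compatible[i][j] and compatible[j][i]:
                matched[i] = matched[j] = True
                exchanges.append((i, j))
                break
    return exchanges

# Four incompatible patient-donor pairs; two mutual swaps are possible.
compat = [
    [False, True,  False, False],
    [True,  False, False, False],
    [False, False, False, True ],
    [False, False, True,  False],
]
```

Evaluating a policy like this inside a simulation of arriving pairs, and comparing it against optimal cycle selection, is the kind of question the simulation-optimization framework addresses.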
Abstract:
Shape memory materials (SMMs) represent an important class of smart materials that have the ability to return from a deformed state to their original shape. Thanks to this property, SMMs are utilized in a wide range of innovative applications. The increasing number of applications, and the consequent involvement of industrial players in the field, have motivated researchers to formulate constitutive models able to capture the complex behavior of these materials and to develop robust computational tools for design purposes. This research field is still in progress, especially regarding the prediction of shape memory polymer (SMP) behavior and of important effects characterizing shape memory alloy (SMA) applications. Moreover, the frequent use of shape memory and metallic materials in biomedical devices, particularly in cardiovascular stents, which are implanted in the human body and experience millions of in-vivo cycles driven by blood pressure, clearly indicates the need for a deeper understanding of fatigue/fracture failure in microsize components. The development of reliable stent designs against fatigue is still an open subject in the scientific literature. Motivated by this framework, the thesis focuses on several research issues involving the advanced constitutive, numerical and fatigue modeling of elastoplastic and shape memory materials. Starting from constitutive modeling, the thesis develops refined phenomenological models for reliable descriptions of SMA and SMP behavior. Concerning numerical modeling, the thesis implements the models in numerical software through implicit/explicit time-integration algorithms, to guarantee robust computational tools for practical purposes. The described modeling activities are completed by experimental investigations on SMA actuator springs and polyethylene polymers.
Finally, regarding fatigue modeling, the thesis introduces a general computational approach for the fatigue-life assessment of a classical stent design, exploiting computer-based simulations to prevent failures and modify the design without testing numerous devices.
Abstract:
Multiparental cross designs for mapping quantitative trait loci (QTL) in crops are efficient alternatives to conventional biparental experimental populations because they exploit a broader genetic basis and a higher mapping resolution. We describe the development and deployment of a multiparental recombinant inbred line (RIL) population in durum wheat (Triticum durum Desf.) obtained by crossing four elite cultivars characterized by different traits of agronomic value. A linkage map spanning 2,663 cM and including 7,594 single nucleotide polymorphisms (SNPs) was produced by genotyping 338 RILs with a wheat-dedicated 90K SNP chip. A cluster file was developed for correct allele calling in the framework of the tetraploid durum wheat genome. Based on phenotypic data collected over four field experiments, a multi-trait QTL analysis was carried out for 18 traits of agronomic relevance (including yield, yield components, morpho-physiological and seed quality traits). Across environments, a total of 63 QTL were identified and characterized in terms of the four founder haplotypes. We mapped two QTL for grain yield across environments and 23 QTL for grain yield components. A novel major QTL for the number of grains per spikelet/ear was mapped on chromosome 2A and shown to control up to 39% of the phenotypic variance in this cross. Functionally different QTL alleles, in terms of direction and size of genetic effect, were distributed among the four parents. Based on the occurrence of QTL clusters, we characterized the breeding values (in terms of effects on yield) of most of the QTL for heading and maturity, as well as of the yield component and quality QTL. This multiparental RIL population provides the wheat community with a highly informative QTL mapping resource, enabling the dissection of the genetic architecture of multiple agronomically relevant traits in durum wheat.
Abstract:
The aim of this dissertation is to create a new controlled language, called Español Técnico Simplificado (ETS). Based on the Simplified Technical English (STE) specification, officially known as ASD-STE100, the controlled Spanish ETS takes the form of a metalinguistic document providing technical writers and translators with a set of specific rules for producing technical documents. The implementation strategy begins with a preliminary study of controlled languages similar to STE, such as Français Rationalisé and Simplified Technical Spanish. Using an approach characteristic of corpus linguistics, the proposed solution derives the new controlled language by extracting specific information from an ad-hoc Spanish-language corpus created and queried for this purpose. The results highlight a (controlled) linguistic method for producing technical documentation free of any possible ambiguity. The ETS system is in fact founded on the concept of intelligibility as a necessary condition to be satisfied in the production of a controlled text. Through its macrostructure, the ETS document provides the tools needed to make the controlled text unambiguous. This bipartite structure divides the prescriptions logically: the first part contains syntactic and stylistic rules; the second part contains a dictionary of a limited number of carefully selected lemmas. All of this serves the principle of one-to-one correspondence between signs, in this case in the Spanish language. The project as a whole opens the way to a new language, entirely developed in academia, as an alternative to the existing ones, and stands as a prototype for further research projects.
Abstract:
The discovery of the House of the Two Peristyles at Phoinike opened the way to an intensive revision of all the data on domestic architecture in the region, with comparative studies to the north (southern Illyria) and to the south (the rest of Epirus, i.e. Thesprotia and Molossia). In antiquity this whole area of north-western Greece was characterized by urbanization that was numerically scarce and chronologically late (not before the 4th century BC), apart of course from the Archaic-period Corinthian-Corcyraean colonies of the Adriatic-Ionian area (such as Ambracia, Apollonia and Epidamnos). Alongside rational, planned town layouts (e.g. Cassope, Orraon, Gitani in Thesprotia, Antigonea in Chaonia) there are numerous cases of settlements that grew chiefly in response to the nature of the terrain, often uneven and mountainous (e.g. Dymokastro/Elina in Thesprotia, Çuka e Aitoit in Kestrine), and sometimes simple fortified villages lacking a true urban character. On the other hand, the classical concept of the polis as normally applied to central and southern Greece does not hold here, in a federal state dominated by a pastoral and woodland economy. In this context domestic architecture takes on different characters between the 4th and 1st centuries BC: on the one hand, the orthogonal cities repeat egalitarian schemes with few exceptions, especially at the beginning (4th century BC, as at Cassope and perhaps Gitani); on the other, a marked differentiation emerges from the 3rd century onwards, when different architectural models such as peristyles are adopted, a sign of stronger social differentiation (the cases of Antigonea and also of Byllis in southern Illyria are exemplary). The smaller fortified hilltop centres employ simpler dwelling formulas, which exploit the articulation of the rocky terrain to build houses at different levels, using the natural rock even as walls or floors of the rooms.
Abstract:
In this work I discuss several key aspects of welfare economics and policy analysis, and I propose two original contributions to the growing field of behavioral public policymaking. After providing a historical perspective on welfare economics and an overview of policy analysis processes in the introductory chapter, in chapter 2 I discuss a debated issue of policymaking: the choice of the social welfare function. I contribute to this debate with an original methodological contribution based on the analysis of the quantitative relationships among the different social welfare functional forms commonly used by policy analysts. In chapter 3 I then discuss a behavioral policy, based on the use of lotteries, to counter indirect tax evasion. I show that the predictions of my model, based on non-expected utility, are consistent with observed, and so far unexplained, empirical evidence of the policy's success. Finally, in chapter 4 I investigate by means of a laboratory experiment the effects of social influence on the individual likelihood to engage in altruistic punishment. I show that bystanders' decision to engage in punishment is influenced by the punishment behavior of their peers, and I suggest ways to enact behavioral policies that exploit this finding.
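To make the comparison of social welfare functional forms concrete, here is a minimal sketch of three forms commonly used by policy analysts (utilitarian sum, Rawlsian maximin, Nash product); whether these are the exact forms analyzed in the thesis is an assumption, and the utility vectors are invented.

```python
def utilitarian(u):
    """Benthamite welfare: the sum of individual utilities."""
    return sum(u)

def rawlsian(u):
    """Maximin welfare: the utility of the worst-off individual."""
    return min(u)

def nash(u):
    """Nash (Bernoulli-Nash) welfare: the product of utilities."""
    p = 1.0
    for x in u:
        p *= x
    return p

# Two allocations with the same total utility are ranked identically by
# the utilitarian criterion but differently by the inequality-sensitive ones.
equal, unequal = [5.0, 5.0], [9.0, 1.0]
```

This is precisely why the choice of functional form matters: the same policy outcome can be preferred or rejected depending on how the analyst aggregates individual utilities.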
Abstract:
The aim of this work is to provide a precise and accurate measurement of the 238U(n,gamma) reaction cross-section. This reaction is of fundamental importance for the design calculations of nuclear reactors, as it governs the behaviour of the reactor core. In particular, fast neutron reactors, which are attracting growing interest for their ability to burn radioactive waste, operate in the high-energy region of the neutron spectrum. In this energy region, inconsistencies of up to 15% exist between the available measurements, and the most recent evaluations disagree with each other. In addition, the assessment of nuclear data uncertainty performed for innovative reactor systems shows that the uncertainty in the radiative capture cross-section of 238U should be further reduced to 1-3% in the energy region from 20 eV to 25 keV. To this purpose, identified by the Nuclear Energy Agency as a priority nuclear data need, complementary experiments, one at GELINA and two at the n_TOF facility, were scheduled within the ANDES project of the 7th Framework Programme of the European Commission. This work presents the results of one of the 238U(n,gamma) measurements performed at the n_TOF facility at CERN, carried out with a detection system consisting of two liquid scintillators. The very accurate cross-section from this work is compared with the results of the other measurement performed at the n_TOF facility, which exploits a different and complementary detection technique. The excellent agreement between the two data sets indicates that they can contribute to reducing the cross-section uncertainty down to the required 1-3%.
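As a back-of-the-envelope illustration of how two consistent data sets reduce the overall uncertainty, an inverse-variance weighted combination can be sketched as follows; this is the textbook formula for independent measurements, not the actual n_TOF covariance analysis, and the numbers are invented.

```python
def combine_measurements(values, sigmas):
    """Inverse-variance weighted average of independent measurements
    of the same quantity, returning (mean, combined standard
    uncertainty). Assumes uncorrelated uncertainties."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# Two consistent measurements with 2% uncertainty each combine to ~1.4%,
# showing how complementary experiments push toward the 1-3% target.
mean, sigma = combine_measurements([1.00, 1.02], [0.02, 0.02])
```

With correlated systematic uncertainties the reduction is smaller, which is why the complementary detection techniques at GELINA and n_TOF matter.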