28 results for Application specific integrated circuits
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a fine spatial resolution, which is very computationally intensive. Consequently, thermal analysis of the designs needs to be done at multiple levels of granularity. To further investigate the flow of chip/package thermal analysis, we exploit the Intel Single Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of SCC temperature sensor readings and SCC power consumption. Having the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs with dynamic address remapping capability is built and verified on real hardware.
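The per-DRAM-bank refresh adaptation mentioned above can be illustrated with a minimal sketch. The 64 ms baseline interval and the rule of thumb that retention roughly halves for every 10 degC above 85 degC are textbook assumptions used only for illustration, not figures taken from the thesis.

BASE_INTERVAL_MS = 64.0   # assumed JEDEC-style baseline interval at <= 85 degC
BASE_TEMP_C = 85.0

def refresh_interval_ms(bank_temp_c):
    """Per-bank refresh interval, halved for every 10 degC above 85 degC."""
    if bank_temp_c <= BASE_TEMP_C:
        return BASE_INTERVAL_MS
    halvings = int((bank_temp_c - BASE_TEMP_C) // 10.0) + 1
    return BASE_INTERVAL_MS / (2 ** halvings)

def per_bank_refresh(bank_temps_c):
    """Map each DRAM bank (identified by its lateral/vertical position) to its own interval."""
    return {bank: refresh_interval_ms(t) for bank, t in bank_temps_c.items()}

# Hypothetical bank temperatures taken from the thermal simulator:
print(per_bank_refresh({"bank0_layer0": 92.0, "bank3_layer3": 68.0}))
# -> {'bank0_layer0': 32.0, 'bank3_layer3': 64.0}

A cool bank located far from the hot logic die is thus refreshed less often than a bank stacked directly above a busy core, which is where the refresh-power saving comes from.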
Abstract:
The full exploitation of multi-hop multi-path connectivity opportunities offered by heterogeneous wireless interfaces could enable innovative Always Best Served (ABS) deployment scenarios where mobile clients dynamically self-organize to offer/exploit Internet connectivity at its best. Only novel middleware solutions based on heterogeneous context information can seamlessly enable this scenario: middleware solutions should i) provide translucent access to low-level components, to achieve both fully aware and simplified pre-configured interactions, ii) allow full exploitation of communication interface capabilities, i.e., not only getting but also providing connectivity in a peer-to-peer fashion, thus relieving final users and application developers from the burden of directly managing wireless interface heterogeneity, and iii) treat user mobility as crucial context information, evaluating at provision time the suitability of available Internet points of access differently depending on whether the mobile client is still or in motion. The novelty of this research work resides in three primary points. First of all, it proposes a novel model and taxonomy providing a common vocabulary to easily describe and position solutions in the area of context-aware autonomic management of preferred network opportunities. Secondly, it presents PoSIM, a context-aware middleware for the synergic exploitation and control of heterogeneous positioning systems that facilitates the development and portability of location-based services. PoSIM is translucent, i.e., it can provide application developers with differentiated visibility of the data characteristics and control possibilities of the available positioning solutions, thus dynamically adapting to application-specific deployment requirements and enabling cross-layer management decisions. Finally, it provides the MMHC solution for the self-organization of multi-hop multi-path heterogeneous connectivity. MMHC considers a limited set of practical indicators on node mobility and wireless network characteristics for a coarse-grained estimation of the expected reliability/quality of the multi-hop paths available at runtime. In particular, MMHC manages the durability/throughput-aware formation and selection of different multi-hop paths simultaneously. Furthermore, MMHC provides a novel solution based on adaptive buffers, proactively managed based on handover prediction, to support continuous services, especially by pre-fetching multimedia contents to avoid streaming interruptions.
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to meet ever-increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too high for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the above-introduced constraints. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems on chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system, exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
Cardiac morphogenesis is a complex process governed by evolutionarily conserved transcription factors and signaling molecules. The Drosophila cardiac tube is linear, made of 52 pairs of cardiomyocytes (CMs), which express specific transcription factor genes that have human homologues implicated in Congenital Heart Diseases (CHDs) (NKX2-5, GATA4 and TBX5). The cardiac tube is composed of a rostral portion named aorta and a caudal one called heart, distinguished by morphological and functional differences controlled by Hox genes, key regulators of axial patterning. Overexpression and inactivation of the Hox gene abdominal-A (abd-A), which is expressed exclusively in the heart, revealed that abd-A controls heart identity. The aim of our work is to isolate the heart-specific cis-regulatory sequences of abd-A direct target genes, the realizator genes granting heart identity. In each segment of the heart, four pairs of cardiomyocytes (CMs) express tinman (tin), homologous to NKX2-5, and acquire strong contractile and automatic rhythmic activities. By tyramide-amplified FISH, we found that seven genes, encoding ion channels, pumps or transporters, are specifically expressed in the Tin-CMs of the heart. We initially used available online tools to identify their heart-specific cis-regulatory modules by looking for Conserved Non-coding Sequences containing clusters of binding sites for various cardiac transcription factors, including Hox proteins. Based on these data we generated several reporter gene constructs and transgenic embryos, but none of them showed reporter gene expression in the heart. In order to identify additional abd-A target genes, we performed microarray experiments comparing the transcriptomes of aorta versus heart and identified 144 genes overexpressed in the heart. In order to find the heart-specific cis-regulatory regions of these target genes we developed a new bioinformatic approach in which prediction is based on pattern matching and ordered statistics. We first retrieved Conserved Non-coding Sequences from the alignment between the D. melanogaster and D. pseudoobscura genomes. We scored for combinations of conserved occurrences of ABD-A, ABD-B, TIN, PNR, dMEF2, MADS-box, T-box and E-box sites and ranked these results based on two independent strategies: on one hand we ranked the putative cis-regulatory sequences according to the best-scored ABD-A binding sites, on the other hand we ranked them according to the conservation of binding sites. We then integrated and re-ranked the two independently obtained lists to produce a final rank. We generated nGFP reporter construct flies for in vivo validation. We identified three 1 kb-long heart-specific enhancers. By in vivo and in vitro experiments we are determining whether they are direct abd-A targets, demonstrating the role of a Hox gene in the realization of heart identity. The identified abd-A direct target genes may also be targets of the NKX2-5, GATA4 and/or TBX5 homologues tin, pannier and Doc genes, respectively. The identification of sequences co-regulated by a Hox protein and the homologues of transcription factors causing CHDs will provide a means to test whether these factors function as Hox cofactors granting cardiac specificity to Hox proteins, increasing our knowledge of the molecular mechanisms underlying CHDs. Finally, it may be investigated whether these Hox targets are involved in CHDs.
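The two-list integration step described above can be sketched as follows; the sequence names, scores and the summed-rank merging rule are purely hypothetical illustrations of a rank-aggregation scheme, not the actual pipeline used in the thesis.

def rank(items, key, reverse=True):
    """Return {name: rank} with rank 1 for the best-scoring item."""
    ordered = sorted(items, key=key, reverse=reverse)
    return {name: i + 1 for i, (name, _) in enumerate(ordered)}

# (sequence, best ABD-A site score, fraction of conserved sites) -- made-up values
candidates = [("CNS_a", 9.1, 0.40), ("CNS_b", 7.5, 0.85), ("CNS_c", 8.8, 0.70)]

by_abda = rank([(n, s) for n, s, _ in candidates], key=lambda x: x[1])
by_cons = rank([(n, c) for n, _, c in candidates], key=lambda x: x[1])

# Integrate the two independent orderings by summing ranks and re-sorting.
final = sorted(candidates, key=lambda x: by_abda[x[0]] + by_cons[x[0]])
print([name for name, *_ in final])   # integrated final ranking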
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: • Portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems. • On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption under acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. In fact, at this level there is a lack of information about user application activity and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) in order to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs): Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor System-on-Chip (MPSoC) platforms are increasingly popular for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor: Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and the pixel matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques to change the image content so as to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, and thus for the user-perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications. Thesis Overview: The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor System-on-Chips (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
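The backlight autoregulation idea summarized in this abstract can be illustrated with a minimal sketch: dim the backlight by a factor b and have the image pipeline boost pixel values by 1/b so that perceived luminance is preserved, accepting a small clipping budget. The thresholds, candidate backlight levels and the use of NumPy here are illustrative assumptions, not the dissertation's implementation, which runs on the processor's hardware imaging unit rather than on the CPU.

import numpy as np

def compensate_frame(frame, backlight):
    """Scale an 8-bit frame by 1/backlight so perceived luminance is preserved."""
    boosted = frame.astype(np.float32) / max(backlight, 1e-3)
    return np.clip(boosted, 0, 255).astype(np.uint8)

def pick_backlight(frame, clip_budget=0.01):
    """Lowest backlight level that saturates at most clip_budget of the pixels."""
    for b in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
        if np.mean(frame > 255 * b) <= clip_budget:
            return b
    return 1.0

frame = np.random.randint(0, 200, size=(480, 640), dtype=np.uint8)  # fairly dark frame
b = pick_backlight(frame)
out = compensate_frame(frame, b)   # in the real system this step is offloaded to hardware
print(f"backlight driven at {b:.1f} of full power")

Since display power is roughly proportional to the backlight level, darker content directly translates into power savings while the compensated frame keeps the perceived quality.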
Abstract:
Research in art conservation has been developed since the early 1950s, giving a significant contribution to the conservation-restoration of cultural heritage artefacts. In fact, only through profound knowledge of the nature and condition of the constituent materials can suitable decisions on conservation and restoration measures be adopted and preservation practices enhanced. The study of ancient artworks is particularly challenging as they can be considered heterogeneous and multilayered systems where numerous interactions between the different components, as well as degradation and ageing phenomena, take place. However, the difficulty of physically separating the different layers, owing to their thickness (1-200 µm), can result in the inaccurate attribution of the identified compounds to a specific layer. Therefore, details can only be analysed when the sample preparation method leaves the layer structure intact, for example by embedding cross sections in synthetic resins. Hence, spatially resolved analytical techniques are required not only to exactly characterize the nature of the compounds but also to obtain precise chemical and physical information about ongoing changes. This thesis focuses on the application of FTIR microspectroscopic techniques to cultural heritage materials. The first section introduces the use of FTIR microscopy in conservation science, with particular attention to sampling criteria and sample preparation methods. The second section evaluates and validates the use of different FTIR microscopic analytical methods applied to the different art conservation issues which may be encountered when dealing with cultural heritage artefacts: the characterisation of the artistic execution technique (chapter II-1), studies on degradation phenomena (chapter II-2) and finally the evaluation of protective treatments (chapter II-3). The third and last section is divided into three chapters which underline recent developments in FTIR spectroscopy for the characterisation of paint cross sections, and in particular of thin organic layers: a newly developed preparation method with embedding systems in infrared-transparent salts (chapter III-1), the new opportunities offered by macro-ATR imaging spectroscopy (chapter III-2) and the possibilities achieved with the different FTIR microspectroscopic techniques available nowadays (chapter III-3). In chapter II-1, FTIR microspectroscopy, as a molecular analysis technique, is presented within an integrated approach with other analytical techniques. The proposed sequence is optimized as a function of the limited quantity of sample available, and this methodology permits the identification of the painting materials and the characterisation of the adopted execution technique and state of conservation. Chapter II-2 describes the characterisation of degradation products with FTIR microscopy, since the investigation of the ageing processes encountered in old artefacts represents one of the most important issues in conservation research. Metal carboxylates resulting from the interaction between pigments and binding media are characterized using synthesised metal palmitates, and their production is detected on copper-, zinc-, manganese- and lead- (associated with lead carbonate) based pigments dispersed either in oil or in egg tempera. Moreover, significant effects seem to be obtained with iron and cobalt (acceleration of triglyceride hydrolysis).
For the first time on sienna and umber paints, manganese carboxylates are also observed. Finally, in chapter II-3, FTIR microscopy is combined with further elemental analyses to characterise and estimate the performance and stability of newly developed treatments, which should better fit conservation-restoration problems. In the second part, in chapter III-1, an innovative embedding system in potassium bromide is reported, focusing on the characterisation and localisation of organic substances in cross sections. Not only the identification but also the distribution of proteinaceous, lipidic or resinaceous materials is evidenced directly on different paint cross sections, especially in thin layers of the order of 10 µm. Chapter III-2 describes the use of a conventional diamond ATR accessory coupled with a focal plane array to obtain chemical images of multi-layered paint cross sections. A rapid and simple identification of the different compounds is achieved without the use of any infrared microscope objectives. Finally, the latest FTIR techniques available are highlighted in chapter III-3 in a comparative study for the characterisation of paint cross sections. Results in terms of spatial resolution, data quality and chemical information obtained are presented; in particular, a new FTIR microscope equipped with a linear array detector, which permits reducing the spatial resolution limit to approximately 5 µm, provides very promising results and may represent a good alternative to either mapping or imaging systems.
Abstract:
The last decades have seen an unrivalled growth and diffusion of mobile telecommunications. Several standards have been developed to this purpose, from GSM mobile phone communications to WLAN IEEE 802.11, providing different services for the transmission of signals ranging from voice to high data rate digital communications and Digital Video Broadcasting (DVB). In this wide research and market field, this thesis focuses on Ultra Wideband (UWB) communications, an emerging technology for providing very high data rate transmissions over very short distances. In particular, the presented research deals with the circuit design of enabling blocks for MB-OFDM UWB CMOS single-chip transceivers, namely the frequency synthesizer and the transmission mixer and power amplifier. First we discuss three different models for the simulation of charge-pump phase-locked loops, namely the continuous-time s-domain and discrete-time z-domain approximations and the exact semi-analytical time-domain model. The limitations of the two approximated models are analyzed in terms of the error in the computed settling time as a function of loop parameters, deriving practical conditions under which the different models are reliable for fast-settling PLLs up to fourth order. In addition, a phase noise analysis method based upon the time-domain model is introduced and compared to the results obtained by means of the s-domain model. We compare the three models over the simulation of a fast-switching PLL to be integrated in a frequency synthesizer for WiMedia MB-OFDM UWB systems. In the second part, the theoretical analysis is applied to the design of a 60 mW 3.4-to-9.2 GHz 12-band frequency synthesizer for MB-OFDM UWB based on two wide-band PLLs. The design is presented and discussed up to layout level. A test chip has been implemented in TSMC 90 nm CMOS technology, and measured data are provided. The functionality of the circuit is proved and the specifications are met with state-of-the-art area occupation and power consumption. The last part of the thesis deals with the design of a transmission mixer and a power amplifier for MB-OFDM UWB band group 1. The design has been carried out up to layout level in STMicroelectronics 65 nm CMOS technology. The main characteristics of the system are its wideband behavior (1.6 GHz of bandwidth) and its constant behavior over process parameters, temperature and supply voltage, thanks to the design of dedicated adaptive biasing circuits.
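For reference, a standard textbook form of the continuous-time s-domain approximation mentioned in the abstract is sketched below for a second-order charge-pump PLL with a series-RC loop filter; the symbols (charge-pump current I_p, VCO gain K_v, divider N) and the settling-time estimate are generic, not the thesis' own notation or values.

\[
  G(s) = \frac{I_p K_v}{2\pi N}\,\frac{1 + sRC}{s^2 C}, \qquad
  \omega_n = \sqrt{\frac{I_p K_v}{2\pi N C}}, \qquad
  \zeta = \frac{RC}{2}\,\omega_n ,
\]
\[
  t_s \approx \frac{-\ln\!\bigl(\delta\sqrt{1-\zeta^{2}}\,\bigr)}{\zeta\,\omega_n}
  \quad (\text{settling to a fractional tolerance } \delta,\ \zeta<1).
\]

The z-domain and semi-analytical time-domain models compared in the thesis capture the sampled nature of the phase detector, which is precisely where approximations of this kind lose accuracy for fast-settling loops.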
Abstract:
During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory and bet-hedging strategies in bacterial growth, among many others, cannot be described by continuous stochastic models due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, also in light of experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that add a chemical group, the phosphate group, to a specific substrate and remove it again. I have thus investigated how adding noise to the enzyme (which is usually present in the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the two-dimensional case and the relationship that this method has with the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a population of mouse cells and the quantity observed via fluorescence microscopy is shown.
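The kind of discrete stochastic dynamics studied here with the CME can be sampled with Gillespie's stochastic simulation algorithm; the sketch below does so for a minimal futile cycle. The rate constants, copy numbers and mass-action propensities are illustrative assumptions, not the parameters used in the thesis.

import math, random

def gillespie_futile_cycle(S=100, Sp=0, E_kin=20, E_pho=20,
                           k_kin=0.01, k_pho=0.01, t_end=50.0, seed=1):
    """One SSA trajectory of S <-> Sp driven by a kinase and a phosphatase."""
    random.seed(seed)
    t, trajectory = 0.0, [(0.0, Sp)]
    while t < t_end:
        a1 = k_kin * E_kin * S      # propensity of phosphorylation
        a2 = k_pho * E_pho * Sp     # propensity of dephosphorylation
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
        if random.random() * a0 < a1:
            S, Sp = S - 1, Sp + 1
        else:
            S, Sp = S + 1, Sp - 1
        trajectory.append((t, Sp))
    return trajectory

# The histogram of Sp over many trajectories approximates the stationary
# distribution that the CME predicts for the phosphorylated substrate.
finals = [gillespie_futile_cycle(seed=s)[-1][1] for s in range(200)]
print(sum(finals) / len(finals))   # mean number of phosphorylated molecules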
Abstract:
In many communities, supplying water to the people is a huge task, and whether this essential service can be carried out by the private sector while respecting the right to water is a debated issue. This dissertation investigates the mechanisms through which a 'perceived rights violation' (a specific form of perceived injustice deriving from the violation of absolute moral principles) can promote collective action. Indeed, the literature on morality and collective action suggests that even if many people apparently uphold high moral principles (like human rights), only a minority decides to act in order to defend them. Taking advantage of the political situation in Italy and the recent mobilization for "public water", we hypothesized that, because of its "sacred value", the perceived violation of the right to water facilitates identification with the social movement and activism. Through five studies adopting qualitative and quantitative methods, we confirmed our hypotheses, demonstrating that the perceived violation of the right to water can sustain activism and can influence voting intentions at the referendum for 'public water'. This path to collective action coexists with other 'classical' predictors of collective action, such as instrumental factors (personal advantages, efficacy beliefs) and anger. The perceived rights violation can derive both from personal values (i.e. universalism) and from external factors (i.e. a mobilization campaign). Furthermore, we demonstrated that it is possible to enhance the perceived violation of the right to water, and anger, through a specifically designed communication campaign. The final chapter summarizes the main findings and discusses the results, suggesting some innovative lines of research for the collective action literature.
Abstract:
In general, crop suitability studies take into consideration only pedo-climatic environmental variables. Growing a crop, however, also entails an environmental impact deriving from agronomic practices, and the territory can be more or less sensitive to these impacts depending on its vulnerability. This study aims to develop a methodology for spatially relating the impact of crops to the site-specific characteristics of the territory, so that this aspect is also considered in crop allocation within suitability studies. LCA was used to quantify several impacts of a set of herbaceous food and energy crops; these impacts were related to vulnerability maps built with GIS through the computation of allocation risk coefficients for each crop-vulnerable-area combination. Energy crops were considered as an alternative land use to reduce environmental impact. The case study showed that crop allocation can differ depending on the type and number of impacts considered. The result is a set of maps reporting the optimal crop distributions that minimize impacts, relative to maize and wheat, two food crops of importance in the study area. The crops with the highest impact should be grown in areas of low vulnerability, and vice versa. If environmental risk is the priority, maize, rapeseed, wheat, sunflower and fibre sorghum should be grown only in areas of low or moderate vulnerability, whereas perennial herbaceous energy crops, such as switchgrass (panico), could also be grown in highly vulnerable areas, thus representing an opportunity to increase the sustainability of rural land use. Moreover, the LCA-GIS tool, integrated with maps of current land use, can help assess the degree of environmental sustainability of that use.
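A minimal sketch of the kind of crop-area risk coefficient described above follows; the normalized impact scores, vulnerability classes and the simple product-and-threshold rule are hypothetical illustrations, not the coefficients actually computed in the study.

lca_impact = {"maize": 1.00, "rapeseed": 0.85, "switchgrass": 0.35}   # impact relative to maize
vulnerability = {"area_A": 0.2, "area_B": 0.6, "area_C": 0.9}         # GIS class, low .. high

def risk_coefficient(crop, area):
    """Allocation risk of growing a given crop in a given vulnerable area."""
    return lca_impact[crop] * vulnerability[area]

def allowed_allocations(threshold=0.5):
    """Crop-area combinations whose risk stays below the chosen threshold."""
    return {(c, a): round(risk_coefficient(c, a), 2)
            for c in lca_impact for a in vulnerability
            if risk_coefficient(c, a) <= threshold}

print(allowed_allocations())   # high-impact crops pass only in low-vulnerability areas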
Abstract:
Modern food systems are characterized by high energy intensity as well as by the production of large amounts of waste, residues and food losses. This inefficiency has major consequences in terms of GHG emissions, waste disposal, and natural resource depletion. The research hypothesis is that residual biomass material could contribute to the energy needs of food systems if recovered as an integrated renewable energy source (RES), leading to a significant reduction of the impacts of food systems, primarily in terms of fossil fuel consumption and GHG emissions. In order to assess these effects, a comparative life cycle assessment (LCA) has been conducted to compare two different food systems: a fossil fuel-based system and an integrated system using residues as RES for self-consumption. The food product under analysis has been peach nectar, from cultivation to end-of-life. The aim of this LCA is twofold. On one hand, it allows an evaluation of the energy inefficiencies related to agro-food waste. On the other hand, it illustrates how the integration of bioenergy into food systems could effectively contribute to reducing this inefficiency. Data about inputs and the waste generated have been collected mainly through literature review and databases. The energy balance, GHG emissions (Global Warming Potential) and waste generation have been analyzed in order to identify the relative requirements and contribution of the different segments. An evaluation of the energy "loss" through the different categories of waste made it possible to provide details about the consequences associated with its management and/or disposal. The results should provide an insight into the impacts associated with inefficiencies within food systems. The comparison provides a measure of the potential reuse of wasted biomass and of the amount of energy recoverable, which could represent a first step for the formulation of specific policies on the integration of bioenergies for self-consumption.
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of a calibration, format-translation and data-conditioning software package for the radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance and software implementation. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by employing specific subroutines. Specific attention has been devoted to the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables has proved to be an essential operation to be performed on radiometric data in order to meet the ever more demanding error budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described S/W is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
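To give a flavour of what the tropospheric subroutine does, the sketch below maps a GNSS-derived zenith delay to the slant path of the deep-space link and removes it from a two-way range observable. The 1/sin(elevation) mapping function and the zenith delay values are generic textbook assumptions used only for illustration; the actual software relies on agency-standard calibration products and more accurate mapping functions.

import math

def slant_tropo_delay_m(zenith_hydrostatic_m, zenith_wet_m, elevation_deg):
    """Map zenith hydrostatic and wet delays to the line of sight (1/sin(e) mapping)."""
    m = 1.0 / math.sin(math.radians(elevation_deg))
    return (zenith_hydrostatic_m + zenith_wet_m) * m

def calibrate_two_way_range(observed_range_m, elevation_deg, zhd_m=2.3, zwd_m=0.15):
    """Subtract up-link and down-link tropospheric delays from a two-way range observable."""
    one_way = slant_tropo_delay_m(zhd_m, zwd_m, elevation_deg)
    return observed_range_m - 2.0 * one_way

print(calibrate_two_way_range(8.5e11, elevation_deg=20.0))   # calibrated range in metres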
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks, the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, i.e. the Everett interpretation. It is argued that, while cluster-state quantum computation does not show an Everettian failure in accounting for the computational processes, it threatens that interpretation with being non-explanatory. The analysis presented here should be integrated into a more general work in order to include further frameworks of quantum computation as well, e.g. topological quantum computation. However, what is revealed by this work is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled as "quantum computation".