8 results for Energy methods

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 30.00%

Abstract:

A new multi-energy CT scanner for small animals is being developed at the Physics Department of the University of Bologna, Italy. The system makes use of a set of quasi-monochromatic X-ray beams, with energy tunable in a range from 26 keV to 72 keV. These beams are produced by Bragg diffraction on a Highly Oriented Pyrolytic Graphite crystal. With quasi-monochromatic sources it is possible to perform multi-energy investigations more effectively than with conventional X-ray tubes. Multi-energy techniques allow the extraction of physical information about the materials, such as effective atomic number, mass thickness, and density, which can be used to distinguish and quantitatively characterize the irradiated tissues. The aim of the system is the investigation and development of new preclinical methods for the early detection of tumors in small animals. An innovative technique, Triple-Energy Radiography with Contrast Medium (TER), has been successfully implemented on our system. TER consists in combining a set of three quasi-monochromatic images of an object in order to obtain a corresponding set of three single-tissue images, which are the mass-thickness maps of three reference materials. TER can be applied to the quantitative reconstruction of the mass-thickness map of a contrast medium, because it completely removes the signal due to other tissues (i.e., the structural background noise). The technique is very sensitive to the contrast medium and insensitive to the superposition of different materials. The method is a good candidate for the early detection of tumor angiogenesis in mice. In this work we describe the tomographic system, with a particular focus on the quasi-monochromatic source. Moreover, the TER method is presented together with some preliminary results on small-animal imaging.
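
The decomposition behind a three-energy, three-material technique like TER can be read as a per-pixel linear inversion of the Beer-Lambert law: at each energy Ej, ln(I0/I) = sum over materials i of (mu/rho)_i(Ej) * t_i, where t_i is the mass thickness of reference material i. With three energies and three materials this is a 3x3 system per pixel. A minimal sketch, with purely hypothetical attenuation coefficients (real values would be calibrated or taken from tabulated data at the actual beam energies):

    import numpy as np

    # Hypothetical mass attenuation coefficients (cm^2/g); rows = the three
    # beam energies, columns = the three reference materials (e.g. soft
    # tissue, bone, contrast medium). Values here are placeholders.
    M = np.array([[0.25, 0.45, 4.00],
                  [0.21, 0.30, 1.80],
                  [0.19, 0.22, 0.90]])

    def ter_decompose(log_atten):
        """log_atten: ln(I0/I) measured at the three energies for one pixel.
        Returns the mass thicknesses t_i (g/cm^2) of the three materials."""
        return np.linalg.solve(M, log_atten)

    # One pixel measured at the three quasi-monochromatic energies:
    t = ter_decompose(np.array([0.585, 0.390, 0.301]))  # -> [1.0, 0.3, 0.05]

Applied pixel by pixel, the component associated with the contrast medium gives its mass-thickness map with the other materials' contributions removed, which matches the single-tissue images the abstract describes.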

Relevance: 30.00%

Abstract:

This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is partly explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. Hybrid CP/OR approaches present the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search. Next, we face the allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict-detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
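
To make the problem structure concrete, here is a minimal monolithic CP model of the core scheduling subproblem (precedence-connected activities sharing one finite-capacity resource), written against Google OR-Tools CP-SAT as a stand-in solver on a toy instance. It illustrates the cumulative-plus-precedence structure only, not the thesis's hybrid decomposition, cut-generation, or conflict-based search methods:

    from ortools.sat.python import cp_model

    # Toy instance: four tasks with durations and resource demands, two
    # precedence arcs, one resource of capacity 3. All numbers illustrative.
    durations = [3, 2, 4, 2]
    demands = [2, 1, 2, 1]
    precedences = [(0, 2), (1, 3)]          # task 0 before 2, task 1 before 3
    capacity = 3
    horizon = sum(durations)

    model = cp_model.CpModel()
    starts = [model.NewIntVar(0, horizon, f's{i}') for i in range(4)]
    ends = [model.NewIntVar(0, horizon, f'e{i}') for i in range(4)]
    intervals = [model.NewIntervalVar(starts[i], durations[i], ends[i], f'i{i}')
                 for i in range(4)]
    model.AddCumulative(intervals, demands, capacity)   # finite capacity
    for a, b in precedences:                            # precedence graph
        model.Add(ends[a] <= starts[b])
    makespan = model.NewIntVar(0, horizon, 'makespan')
    model.AddMaxEquality(makespan, ends)
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    solver.Solve(model)
    print([solver.Value(s) for s in starts])

In a hybrid setting, a model like this typically becomes the subproblem: a master model assigns activities to resources, the CP part checks or optimizes the resulting schedule, and cuts are exchanged between the two.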

Relevance: 30.00%

Abstract:

The proper functioning of ion channels is a prerequisite for a normal cell, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from the malfunctioning of the hERG channel, caused either by the binding of drugs or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free-energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared to previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism that has been referred to as "single-vacancy-like" by Roux and coworkers (2001), rather than a "knock-on" mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to be a cause of congenital LQTS. The results provided interesting insights into the reasons for the malfunctioning of the mutant channel. Since they perform critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. A deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, charge equilibration being a condition required to allow proton influx. Ion conduction through M2 was simulated using the metadynamics technique. Based on our results, we suggest that both a potential anion-mediated cation-proton exchange and a direct anion-proton exchange could contribute to explaining the activity of the M2 channel.
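
For context on the free-energy computation: umbrella sampling adds a harmonic bias w_i(x) = (k/2)(x - x_i)^2 along the permeation coordinate in each window i, and the unbiased profile is then recovered by reweighting, commonly with the weighted histogram analysis method (WHAM). A self-contained sketch on synthetic data; the force constant, window spacing, and underlying profile are assumptions, not the thesis's setup:

    import numpy as np

    kT = 0.596                               # kcal/mol at ~300 K
    k = 10.0                                 # assumed bias force constant
    centers = np.linspace(0.0, 10.0, 11)     # window centers along the pore
    edges = np.linspace(0.0, 10.0, 101)
    x = 0.5 * (edges[:-1] + edges[1:])       # histogram bin midpoints

    # Synthetic "simulation" data: sample each biased window from a known
    # toy profile, producing the per-window histograms WHAM needs as input.
    rng = np.random.default_rng(0)
    true_pmf = 2.0 * np.sin(x)
    bias = 0.5 * k * (x[None, :] - centers[:, None]) ** 2
    N = 10_000
    hist = np.empty_like(bias)
    for i in range(len(centers)):
        p = np.exp(-(true_pmf + bias[i]) / kT)
        hist[i] = rng.multinomial(N, p / p.sum())

    # WHAM self-consistent iteration for the window free energies f_i.
    f = np.zeros(len(centers))
    for _ in range(1000):
        denom = (N * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
        p = hist.sum(axis=0) / denom
        f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
        f_new -= f_new[0]
        if np.abs(f_new - f).max() < 1e-8:
            break
        f = f_new
    pmf = -kT * np.log(p)
    pmf -= pmf.min()                         # recovered free-energy profile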

Relevance: 30.00%

Abstract:

The increase in environmental and health concerns, combined with the possibility of exploiting waste as a valuable energy resource, has led to the exploration of alternative methods for final waste disposal. In this context, the energy conversion of Municipal Solid Waste (MSW) in Waste-To-Energy (WTE) power plants is increasing throughout Europe, both in terms of number of plants and capacity, furthered by legislative directives. Due to the heterogeneous nature of waste, some differences with respect to a conventional fossil-fuel power plant have to be considered in the energy conversion process. In fact, as a consequence of well-known corrosion problems, the thermodynamic efficiency of WTE power plants typically ranges between 25% and 30%. The new Waste Framework Directive 2008/98/EC promotes the production of energy from waste by introducing an energy efficiency criterion (the so-called "R1 formula") to evaluate a plant's recovery status. The aim of the Directive is to drive WTE facilities to maximize energy recovery and the utilization of waste heat, in order to substitute energy produced by conventional fossil-fuel-fired power plants. This calls for novel approaches and possibilities to maximize the conversion of MSW into energy. In particular, this motivates an integrated configuration combining a WTE plant and a Gas Turbine (GT), aimed at eliminating, or at least mitigating, the limitations affecting the WTE conversion process and bounding the thermodynamic efficiency of the cycle. The aim of this Ph.D. thesis is to investigate, from a thermodynamic point of view, integrated WTE-GT systems sharing the steam cycle, sharing the flue gas paths, or combining both. The analysis carried out defines the logic governing the matching of the two plants in terms of steam production and steam turbine power output as a function of the thermal power inputs.
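
For reference, the R1 criterion of Annex II of Directive 2008/98/EC is R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef)), with all terms in GJ/year; a plant permitted after 31 December 2008 qualifies as a recovery operation when R1 >= 0.65 (0.60 for older plants). A small sketch with illustrative figures:

    # R1 energy-efficiency criterion (Annex II, Directive 2008/98/EC).
    def r1(Ep, Ef, Ew, Ei):
        """Ep: annual energy produced (electricity weighted by 2.6,
               commercially used heat by 1.1);
        Ef: annual energy input from fuels contributing to steam production;
        Ew: annual energy in the treated waste (net calorific value);
        Ei: annual imported energy, excluding Ew and Ef;
        0.97 accounts for losses through bottom ash and radiation."""
        return (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

    # Purely illustrative figures, in GJ/year:
    print(r1(Ep=2.6 * 180_000, Ef=10_000, Ew=700_000, Ei=5_000))  # ~0.66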

Relevance: 30.00%

Abstract:

In general, crop suitability studies take into account only pedo-climatic environmental variables. However, growing a crop also entails an environmental impact deriving from agronomic practices, and the territory may be more or less sensitive to these impacts depending on its vulnerability. This study develops a methodology to spatially relate the impact of crops to the site-specific characteristics of the territory, so that this aspect is also considered in crop allocation within suitability studies. LCA was used to quantify several impacts of a set of herbaceous food and energy crops, which were related to vulnerability maps built with GIS through the computation of allocation risk coefficients for each crop-vulnerable-area combination. Energy crops were considered as an alternative land use to reduce environmental impact. The case study showed that crop allocation can differ depending on the type and number of impacts considered. The result is a set of maps showing the optimal crop distributions that minimize impacts with respect to maize and wheat, two food crops of importance in the study area. Crops with the highest impact should be grown in areas of low vulnerability, and vice versa. If environmental risk is the priority, maize, rapeseed, wheat, sunflower, and fiber sorghum should be grown only in areas of low or moderate vulnerability, whereas perennial herbaceous energy crops, such as switchgrass, could also be grown in highly vulnerable areas, thus representing an opportunity to increase the sustainability of rural land use. The LCA-GIS tool, integrated with current land-use maps, can also help to assess their degree of environmental sustainability.
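
A minimal sketch of the kind of crop-by-zone scoring such an LCA-GIS coupling produces, assuming (hypothetically) that the allocation risk coefficient is the product of a crop's normalized LCA impact and the zone's vulnerability score; the crops, values, and threshold below are illustrative, not the thesis's coefficients:

    import numpy as np

    # Hypothetical coupling of LCA impact scores with GIS vulnerability
    # classes; the actual coefficients in the study differ.
    crops = ['maize', 'wheat', 'rapeseed', 'switchgrass']
    impact = np.array([0.9, 0.6, 0.8, 0.3])      # normalized LCA impact per crop
    vulnerability = np.array([0.2, 0.5, 0.9])    # low / moderate / high zones

    # Allocation risk coefficient for each crop-zone combination.
    risk = np.outer(impact, vulnerability)

    # List the zones where each crop stays under an (assumed) risk threshold.
    for crop, row in zip(crops, risk):
        allowed = [z for z, r in zip(('low', 'moderate', 'high'), row) if r < 0.5]
        print(crop, '->', allowed)

With these toy numbers the high-impact crops are confined to low- and moderate-vulnerability zones while switchgrass remains admissible everywhere, mirroring the allocation logic described above.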

Relevance: 30.00%

Abstract:

Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workload, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at fine spatial resolution, which is very computationally intensive. Consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal analysis flow, we exploit the Intel Single-Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature sensor readings and power consumption. With the thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power, and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows. It accounts for temperature non-uniformities and self-heating while performing the analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform which models the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs in detail. Moving towards real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures. A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address remapping capability, is built and verified on real hardware.
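
A hedged sketch of the idea behind temperature-aware per-bank refresh: DRAM cells leak faster when hot, so common DDR practice doubles the refresh rate above 85 C. Applying that per bank, from each bank's own temperature, avoids refreshing the whole stack at the worst-case hot-bank rate. The thresholds and per-bank granularity below are illustrative, not the thesis's actual policy:

    # Temperature-aware per-bank refresh interval selection (illustrative).
    BASE_TREFI_US = 7.8          # nominal refresh interval, microseconds

    def refresh_interval_us(bank_temp_c: float) -> float:
        """Pick a per-bank refresh interval from that bank's temperature."""
        if bank_temp_c <= 85.0:
            return BASE_TREFI_US          # normal range: nominal rate
        return BASE_TREFI_US / 2.0        # extended range: refresh twice as often

    # Example: vertically stacked banks see different temperatures in 3D;
    # only the hot banks pay the doubled refresh power.
    bank_temps = [78.0, 83.5, 88.2, 91.0]
    intervals = [refresh_interval_us(t) for t in bank_temps]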

Relevance: 30.00%

Abstract:

Theories and numerical modeling are fundamental tools for understanding, optimizing, and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistic, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the self-injection threshold on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser-wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on clusters of Graphics Processing Units (GPUs), is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
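
To illustrate why PIC cores need redesigning for GPUs, consider the scatter step of the PIC loop: charge deposition from particles to the grid. A minimal cloud-in-cell sketch in 1D, serial and with illustrative parameters only; on a GPU, many particles update the same grid cell concurrently, so the accumulation below becomes a data race that must be handled with atomics, particle sorting, or per-block private grids, which is the kind of restructuring a code like jasmine requires:

    import numpy as np

    # 1D cloud-in-cell (linear weighting) charge deposition, periodic grid.
    n_cells, dx = 64, 1.0
    rho = np.zeros(n_cells)                                   # charge density
    x = np.random.default_rng(0).uniform(0, n_cells * dx, 10_000)  # positions
    w = 1.0                                                   # macro-particle charge

    i = (x / dx).astype(int)        # left grid index of each particle
    frac = x / dx - i               # fractional distance to the left node

    # Scatter each particle's charge to its two neighboring nodes.
    # np.add.at accumulates correctly for repeated indices, the serial
    # analogue of what a GPU must guarantee across concurrent threads.
    np.add.at(rho, i % n_cells, w * (1.0 - frac))
    np.add.at(rho, (i + 1) % n_cells, w * frac)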