24 results for the energy per baryon

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

A new multi-energy CT scanner for small animals is being developed at the Physics Department of the University of Bologna, Italy. The system makes use of a set of quasi-monochromatic X-ray beams, with energy tunable in the range from 26 keV to 72 keV. These beams are produced by Bragg diffraction on a Highly Oriented Pyrolytic Graphite crystal. With quasi-monochromatic sources it is possible to perform multi-energy investigations more effectively than with conventional X-ray tubes. Multi-energy techniques allow physical information to be extracted from the materials, such as effective atomic number, mass thickness and density, which can be used to distinguish and quantitatively characterize the irradiated tissues. The aim of the system is the investigation and development of new pre-clinical methods for the early detection of tumors in small animals. An innovative technique, Triple-Energy Radiography with Contrast Medium (TER), has been successfully implemented on our system. TER consists in combining a set of three quasi-monochromatic images of an object in order to obtain a corresponding set of three single-tissue images, namely the mass-thickness maps of three reference materials. TER can be applied to the quantitative reconstruction of the mass-thickness map of a contrast medium, because it is able to completely remove the signal due to the other tissues (i.e. the structural background noise). The technique is very sensitive to the contrast medium and insensitive to the superposition of different materials. The method is a good candidate for the early detection of tumor angiogenesis in mice. In this work we describe the tomographic system, with a particular focus on the quasi-monochromatic source. Moreover, the TER method is presented together with some preliminary results on small-animal imaging.
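As an illustration of the basis-material decomposition underlying TER, the following Python sketch solves, pixel by pixel, the 3x3 linear system that links the three measured log-attenuations to the mass thicknesses of the three reference materials. The attenuation coefficients are placeholders, not values calibrated on the Bologna system.

```python
import numpy as np

# Mass attenuation coefficients (cm^2/g) of the three reference materials
# at the three beam energies. Placeholder values: in practice they come
# from tabulated data (e.g. NIST) or from calibration scans.
MU = np.array([
    [0.35, 0.60, 2.50],   # energy E1: material A, B, C
    [0.25, 0.35, 1.20],   # energy E2
    [0.20, 0.25, 0.60],   # energy E3
])

def ter_decompose(log_attenuation):
    """Recover the mass-thickness maps (g/cm^2) of three reference materials
    from three quasi-monochromatic log-attenuation images.

    log_attenuation: array of shape (3, H, W) holding ln(I0/I) at each energy.
    Returns an array of shape (3, H, W), one mass-thickness map per material.
    """
    n_energies, h, w = log_attenuation.shape
    # Solve MU @ t = ln(I0/I) independently for every pixel.
    pixels = log_attenuation.reshape(n_energies, -1)
    thickness = np.linalg.solve(MU, pixels)
    return thickness.reshape(3, h, w)

if __name__ == "__main__":
    # Synthetic check: build attenuation images from known thickness maps
    # and verify that the decomposition recovers them.
    true_t = np.random.rand(3, 64, 64)
    atten = np.einsum("em,mhw->ehw", MU, true_t)
    recovered = ter_decompose(atten)
    print(f"max reconstruction error: {np.max(np.abs(recovered - true_t)):.2e}")
```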

Relevance: 100.00%

Abstract:

The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on the second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and the molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented with the software package COMSOL Multiphysics. The simulation method allows the determination of the precise values of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. In addition to the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a complete process for the plant design of a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to cover 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. The chapter describes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS, to evaluate the energy requirements of the building complex; 2) modelling of the ground-coupled heat pumps by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchangers by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy losses during the operation of the plants.
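The thesis evaluates thermal response test data with a COMSOL finite-element model; as a simpler, standard point of comparison (not the method used in the thesis), the sketch below estimates the effective ground thermal conductivity with the classical infinite line source approximation, fitting the late-time slope of the mean fluid temperature against the logarithm of time. Function and variable names are illustrative.

```python
import numpy as np

def ground_conductivity_from_trt(time_s, fluid_temp_c, q_per_m, t_min_hours=10.0):
    """Estimate the effective ground thermal conductivity (W/m/K) from a
    thermal response test using the infinite line source approximation:
        T_f(t) ~ (q / (4*pi*lambda)) * ln(t) + const
    Only late-time data (t > t_min_hours) are used, where the approximation holds.

    time_s       : elapsed time in seconds
    fluid_temp_c : mean fluid temperature (inlet + outlet) / 2, in deg C
    q_per_m      : heat injection rate per metre of borehole (W/m)
    """
    mask = time_s > t_min_hours * 3600.0
    slope, _ = np.polyfit(np.log(time_s[mask]), fluid_temp_c[mask], 1)
    return q_per_m / (4.0 * np.pi * slope)
```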

Relevance: 100.00%

Abstract:

In this PhD thesis the crashworthiness topic is studied with the perspective of developing a small-scale experimental test able to characterize a material in terms of energy absorption. The material properties obtained are then used to validate a numerical model of the experimental test itself. Consequently, the numerical model, calibrated on the specific material, can be extended to more complex structures and used to simulate their energy absorption behavior. The experimental activity started at the University of Washington in Seattle, WA (USA) and continued at the Second Faculty of Engineering, University of Bologna, Forlì (Italy), where the numerical model for the simulation of the experimental test was implemented and optimized.
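The abstract does not specify the energy-absorption metric; a common choice for crush tests, shown here only as a hedged illustration, is to integrate the measured force-displacement curve and normalize by the crushed mass (specific energy absorption).

```python
import numpy as np

def absorbed_energy(force_n, displacement_m):
    """Energy absorbed during a crush test (J): area under the
    force-displacement curve, integrated with the trapezoidal rule."""
    return np.trapz(force_n, displacement_m)

def specific_energy_absorption(force_n, displacement_m, crushed_mass_kg):
    """Specific energy absorption (J/kg), a common crashworthiness metric."""
    return absorbed_energy(force_n, displacement_m) / crushed_mass_kg
```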

Relevance: 100.00%

Abstract:

Let us put ourselves in the shoes of an energy company. Our fleet of electricity production plants mainly includes gas, hydroelectric and waste-to-energy plants, and we have also sold contracts for the supply of gas and electricity. For each year we have to plan the trading of the volumes needed by the plants and customers: it is better to fix the price of these volumes in advance with so-called forward contracts, instead of waiting for the delivery months and exposing ourselves to price uncertainty. The difficulty lies in keeping uncertainty under control in a market that has never shown such extreme scenarios as in recent years: a pandemic, a worsening climate crisis and a war affecting economies around the world have made the energy market more volatile than ever. How can decisions be made in such uncertain contexts? The task is an optimization problem: given a year, we need to choose the optimal planning of volume trading times, so as to meet the needs of our portfolio at the best prices while respecting the liquidity constraints given by the market and the risk constraints imposed by the company. Algorithms are needed for the generation of market scenarios over a finite time horizon, that is, a probabilistic distribution that provides a view of all the dates between now and the end of the year of interest. Algorithms are also needed to solve the optimization problem: we have proposed and compared more than one, starting from a very simple approach that ignores part of the complexity, moving on to a scenario-based approach and finally to a reinforcement learning approach.
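The abstract does not state which stochastic price model is used; as a minimal, purely illustrative sketch of the scenario-generation step, the function below simulates forward-price paths over the horizon with a geometric Brownian motion. All parameter values and names are placeholders.

```python
import numpy as np

def generate_price_scenarios(spot, n_scenarios, n_days, mu=0.0, sigma=0.4, seed=0):
    """Generate daily price scenarios up to the end of the horizon with a
    geometric Brownian motion, used here as a deliberately simple stand-in
    for whatever stochastic price model is actually calibrated to the market.

    Returns an array of shape (n_scenarios, n_days + 1); mu and sigma are
    annualized drift and volatility, assuming 252 trading days per year.
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0
    shocks = rng.standard_normal((n_scenarios, n_days))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
    log_paths = np.cumsum(log_increments, axis=1)
    return spot * np.exp(np.hstack([np.zeros((n_scenarios, 1)), log_paths]))

# Example: 1000 scenarios over the next 180 trading days, spot price 120 EUR/MWh.
scenarios = generate_price_scenarios(spot=120.0, n_scenarios=1000, n_days=180)
```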

Relevance: 100.00%

Abstract:

The work activities reported in this PhD thesis concern the functionalization of composite materials and the realization of energy harvesting devices using nanostructured piezoelectric materials, which can be integrated into the composite without affecting its mechanical properties. The self-sensing composite materials were fabricated by interleaving the piezoelectric elements between the plies of the laminate. The problem of negatively impacting the mechanical properties of the hosting structure was addressed by shaping the piezoelectric materials in appropriate ways. In the case of polymeric piezoelectric materials, the electrospinning technique made it possible to produce highly porous nanofibrous membranes which can be embedded in the hosting matrix without introducing a delamination risk. The flexibility of the polymers was also exploited for the production of flexible tactile sensors. The sensing performance of the specimens was also evaluated in terms of lifetime with fatigue tests. In the case of ceramic piezo-materials, the production and interleaving of nanometric piezoelectric powder only marginally affected the impact resistance of the laminate, which showed enhanced sensing properties. In addition, a model was proposed to predict the piezoelectric response of the self-sensing composite materials as a function of the amount of piezo-phase within the laminate and to adapt its sensing functionalities to quasi-static loads as well. Indeed, one final application of the work was to integrate the piezoelectric nanofibers in the sole of a prosthetic foot in order to detect the walking cycle, which has a period on the order of one second. Finally, the energy harvesting capabilities of the piezoelectric materials were investigated, with the aim of designing wearable devices able to collect energy from the environment and from body movements. The research activities focused both on the power transfer capability to an external load and on the charging of an energy storage unit, such as a supercapacitor.
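As a back-of-the-envelope illustration of the storage-charging problem mentioned above (not a model from the thesis), the sketch below relates the energy stored in a supercapacitor to the average harvested power needed to charge it; all values are assumptions.

```python
def supercap_energy_j(capacitance_f, voltage_v):
    """Energy stored in a supercapacitor charged to a given voltage: E = 1/2 C V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

def charge_time_s(capacitance_f, target_voltage_v, harvested_power_w):
    """Rough charging time, assuming the harvester delivers a constant average
    power to the storage unit (conversion losses neglected)."""
    return supercap_energy_j(capacitance_f, target_voltage_v) / harvested_power_w

# Hypothetical example: a 0.1 F supercapacitor charged to 3 V by ~100 uW
# of harvested power takes roughly 1.25 hours.
print(charge_time_s(0.1, 3.0, 100e-6) / 3600.0, "hours")
```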

Relevance: 100.00%

Abstract:

Today, European cities are the main centres of culture, innovation and economic development. However, hosting about 75% of the population and consuming almost 80% of the energy produced, they contribute significantly to climate change through their greenhouse gas emissions and, at the same time, suffer its most intense effects. The European Community has acknowledged the need for a synergic action that adopts climate mitigation strategies and provides adaptation measures to cope with climate impacts that are by now unavoidable. The orientation of the European Research and Innovation Programmes towards smart and climate-neutral cities shifts the focus from the urban dimension to the district scale. In this perspective, Positive Energy Districts (PEDs) are conceived as newly built districts, but also as ambitious solutions for the regeneration of existing neighbourhoods, which actively manage their energy demand with a net-zero emission balance and a surplus of energy produced from renewables. The doctoral research investigates the PED model, exploring its applicability in the consolidated urban context. Specifically, the thesis develops two original research contributions: the PED-Portfolio and the PED-Toolkit. These contributions propose a systemic approach through which to undertake a path of knowledge and experimentation with the PED model, in a perspective of reducing energy demand, but also of improving the accessibility, liveability and resilience of these districts. In order to verify the applicability of the research results, the developed tools are tested on a pilot area, and the outcomes of this experimentation are then compared with the state of the art and with the main international research lines on PEDs, refining the results of the doctoral project in a research-experimentation-research process.
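As a minimal, purely illustrative sketch of the PED condition described above (the PED-Portfolio and PED-Toolkit themselves are not reproduced here), the function below checks the annual balance between on-site renewable production and district energy demand.

```python
def ped_annual_surplus(demand_kwh, renewable_production_kwh):
    """Very simplified check of the Positive Energy District condition:
    over one year, on-site renewable production must at least cover the
    district energy demand, leaving a positive surplus.

    demand_kwh, renewable_production_kwh: iterables of hourly (or monthly)
    values over one year, in kWh. Returns the annual surplus in kWh
    (positive for a PED under this simplified definition)."""
    return sum(renewable_production_kwh) - sum(demand_kwh)
```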

Relevance: 100.00%

Abstract:

The relation between intercepted light and orchard productivity has long been considered linear, although this dependence seems to depend more on the planting system than on light intensity. At the whole-plant level, an increase in irradiance does not always translate into higher productivity. One of the reasons is the intrinsic inefficiency of plants in using energy: in full light, generally only 5-10% of the total incoming energy is allocated to net photosynthesis. Preserving or improving this efficiency therefore becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex, multilevel machinery able to dissipate the energy excess by quenching heat (Non-Photochemical Quenching), by moving electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and by scavenging the reactive species generated. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease of photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light up to the saturation point; additional PPFD does not improve carboxylation but increases the contribution of the alternative pathways of energy dissipation, and with it ROS production and the risk of photoinhibition. The wide photo-protective apparatus is nevertheless not able to cope with all of the excess incoming energy, and photodamage occurs. Any event that increases the photon pressure and/or decreases the efficiency of the photo-protective mechanisms described above (e.g. thermal stress, water or nutritional deficiency) can exacerbate photoinhibition. In nature only a small fraction of damaged photosystems is usually found, because of an effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energy cost, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve the knowledge of the several strategies adopted to manage the incoming energy, and of the implications of excess light for photodamage, in peach. The thesis is organized in three scientific units. In the first section a new, rapid, non-intrusive, whole-tissue and universal technique for the determination of functional PSII was implemented and validated on different kinds of plants: C3 and C4 species, woody and herbaceous plants, wild type and chlorophyll b-less mutants, monocots and dicots. In the second unit, using a "singular" experimental orchard named the "Asymmetric orchard", the relation between light environment, photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of variations in photon pressure on energy management was considered at the single-leaf level. In the third section the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. Afterwards it was applied in the field, where the influence of a moderate reduction of light and water on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as the fuel for plant life is intrinsically risky because of the constant, high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, in particular peach, and light, analysing the principal strategies plants have developed to manage the incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for the determination of functional PSII, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, also because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence parameter Fv/Fm gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "Asymmetric" training system is a good way to study the relations between light energy, photosynthetic performance and water use in the field. At the whole-plant level, net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement of net carboxylation but rather PSII damage: in the most light-exposed plants about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m-2 s-1) and excess light is dissipated by non-photochemical quenching and by non-net-carboxylative transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance NPQ seems to be limited by lumen pH, because the incoming photon pressure is not sufficient to generate the optimal lumen pH for the full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with excess light by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is more and more activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H+ in the lumen, supporting VDE activation when light would otherwise be limiting. Moreover, the alternative transports seem to act as an important dissipative pathway when high temperature and sub-optimal conductance increase the photoinhibition risk. In peach, a moderate reduction of water and light does not cause a decrease of net carboxylation but, by diminishing the incoming light and the environmental evapo-transpirative demand, reduces stomatal conductance and improves water use efficiency. Therefore, lowering light intensity to levels that are still non-limiting could save water without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy among the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain unrepaired, although more net PSII damage is recorded in plants placed in full light.
Also in this experiment, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH-limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoid ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that in nature only a small amount of damaged PSII is observed indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twice as photoinhibited as shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems have to be repaired and, consequently, the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate how much photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further explored is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on light, water and photosynthate management in peach.
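The full energy partitioning of Kornyeyev and Hendrickson (2007) is not reproduced here; as a hedged illustration of the kind of quantities a quenching analysis starts from, the sketch below computes the standard PAM chlorophyll-fluorescence parameters (Fv/Fm, ΦPSII, NPQ) mentioned in the abstract. The example values are plausible but arbitrary.

```python
def fluorescence_parameters(f0, fm, fs, fm_prime):
    """Standard PAM chlorophyll-fluorescence parameters used as inputs to
    quenching analyses (the full partitioning of Kornyeyev and Hendrickson,
    2007, is not reproduced here).

    f0, fm       : minimal and maximal fluorescence of the dark-adapted leaf
    fs, fm_prime : steady-state and maximal fluorescence in the light
    """
    fv_fm = (fm - f0) / fm                  # maximum quantum yield of PSII
    phi_psii = (fm_prime - fs) / fm_prime   # effective PSII quantum yield
    npq = (fm - fm_prime) / fm_prime        # non-photochemical quenching
    return {"Fv/Fm": fv_fm, "PhiPSII": phi_psii, "NPQ": npq}

# Example with plausible values for a light-adapted peach leaf.
print(fluorescence_parameters(f0=0.2, fm=1.0, fs=0.45, fm_prime=0.6))
```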

Relevance: 100.00%

Abstract:

The research presented in this thesis, as the title suggests, aims at reducing fuel consumption for cars with a strongly sporting character and high specific performance. In particular, all the activities described refer to a well-defined vehicle model, the Maserati Quattroporte. The scenario in which this work is set is that of a strong push towards the reduction of so-called greenhouse gases, i.e. carbon dioxide, in line with the provisions of the Kyoto protocol. The need to reduce CO2 emissions into the atmosphere is affecting every sector of society: from the heating of private buildings to that of industrial plants, from power generation to production processes in the broadest sense. Within this panorama, car manufacturers are clearly called upon to make a considerable effort, since a considerable share of the carbon dioxide produced every day and released into the atmosphere is attributed to automobiles. To the delicate problem of pollution a perhaps even more immediate and direct one must be added, linked to economic reasons. Fossil fuels, as everyone knows, are a non-renewable energy source whose availability depends on deposits located in specific, non-inexhaustible areas of the planet. Moreover, the socio-political situation the Middle East is facing, together with the growing demand from countries whose industrialization has recently started at a dizzying pace, has literally caused the price of oil to soar. Because of this, having a vehicle that is efficient in the broad sense, and therefore has reduced fuel consumption, is in effect a product feature appreciated from a marketing point of view, even in the higher vehicle segments. In this research the problem of fuel consumption has been addressed as a consequence of the overall behaviour of the vehicle in terms of efficiency, assessing the best compromise among the different functional areas making up the vehicle. A substantial part of the work was devoted to the development of a calculation model with which to perform a series of sensitivity analyses on the influence of the various vehicle parameters on overall fuel consumption. On the basis of these indications, a modification of the ratios of the electro-actuated gearbox was proposed, with the aim of optimizing the compromise between fuel consumption and performance without significantly compromising the latter. The proposed solution was actually built and tested on the vehicle, making it possible to verify the results and to carry out an in-depth correlation of the fuel-consumption calculation model. The benefit obtained in terms of range was decidedly significant with reference both to the European and to the US homologation cycles. The repercussions on performance were also analysed and, in this case too, the large amount of data collected made it possible to improve the correlation level of the performance simulation model. The vehicle with the newly proposed gear ratios was then compared with a Maserati Quattroporte prototype equipped with an automatic gearbox and torque converter.
This further activity made it possible to evaluate the different behaviour of the two solutions, both in terms of instantaneous fuel consumption and of overall consumption measured during the main chassis-dynamometer missions prescribed by the regulations. The last section of the work was devoted to the evaluation of the energy efficiency of the vehicle system, understood as the resistance to motion encountered while travelling at a given speed. The coast-down curves of the Quattroporte and of some competitors were investigated experimentally, and measures aimed at reducing the aerodynamic drag coefficient were proposed, under the constraint of not altering the vehicle styling.
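As an illustration of how coast-down measurements are typically reduced (the thesis's own correlation models are not reproduced here), the sketch below fits the usual road-load polynomial F(v) = F0 + F1 v + F2 v^2 from a recorded speed-time coast-down trace; function and variable names are assumptions.

```python
import numpy as np

def road_load_coefficients(time_s, speed_ms, vehicle_mass_kg):
    """Fit the road-load model F(v) = F0 + F1*v + F2*v**2 to a coast-down
    recording (speed versus time with the driveline in neutral). The
    instantaneous resistance is obtained from F = -m * dv/dt.

    Returns (F0, F1, F2) in N, N/(m/s) and N/(m/s)^2 respectively; the
    quadratic term is dominated by aerodynamic drag.
    """
    dv_dt = np.gradient(speed_ms, time_s)
    force = -vehicle_mass_kg * dv_dt
    # Least-squares fit of a quadratic polynomial in speed.
    F2, F1, F0 = np.polyfit(speed_ms, force, 2)
    return F0, F1, F2
```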

Relevance: 100.00%

Abstract:

Biomasses and their possible use as an energy resource are of great interest today, as is the general problem of energy resources. In the present study the key questions of convenience, from both the energy and the economic standpoints, have been addressed without any bias: the problem has been handled starting from "philosophical" bases, disregarding any pre-set ideology or political trend and simply using mathematical approaches as logical tools for defining balances correctly. In this context quantitative indexes such as LCA and EROEI have been widely used, together with multicriteria methods (such as ELECTRE) as decision-supporting tools. This approach makes it possible to remove mythologies, such as the unrealistic concept of clean energy, or the strange idea of biomasses as a magic wand that solves everything in the field of energy. The present study therefore aims to find any aspect potentially useful for society, looking at every possible source of energy without prejudice but also without unrealistic expectations. As far as biomasses are concerned, we studied four very different case studies in great detail, in order to cover as varied a scenario as possible. A relevant result is the need to use biomasses together with other, more efficient sources, especially by recovering by-products from silviculture activities; attention should however be paid to transportation and environmental costs. Another relevant result is the great difficulty of reliably evaluating dedicated cultures as sources of "biomass for energy": the problem has to be carefully evaluated case by case, because what seems useful in one context becomes totally disruptive in another. In any case, the very concept of convenience is not well defined at the macrosystem level: it seems more appropriate to limit this concept to the microsystem level, considering that what sounds fine in a limited, well-defined microsystem may cause great damage in another slightly different, or even very similar, microsystem. This approach seems the right way to settle the controversy about the concept of convenience.
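As a minimal illustration of the EROEI index cited above (the figures are hypothetical, not results from the case studies), the sketch below computes the ratio between delivered and invested energy.

```python
def eroei(energy_returned_mj, energy_invested_mj):
    """Energy Return On Energy Invested: the ratio between the usable energy
    delivered by a source and the energy spent to obtain it. Values close to
    (or below) 1 mean the source is not energetically convenient."""
    return energy_returned_mj / energy_invested_mj

# Hypothetical example: a forestry by-product chain delivering 18 MJ per kg
# of wood chips while spending 3 MJ/kg on harvesting, chipping and transport.
print(eroei(18.0, 3.0))  # 6.0
```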

Relevance: 100.00%

Abstract:

Environmental Management includes many components, among which Environmental Management Systems (EMS), Environmental Reporting and Analysis, Environmental Information Systems and Environmental Communication. In this work two applications are presented: the development and implementation of an Environmental Management System in local administrations, according to the European scheme "EMAS", and the analysis of a territorial energy system through scenario building and environmental sustainability assessment. Both applications are linked by the same objective, which is the quest for more scientifically sound elements; in fact, both EMS and energy planning are often characterized by localism and poor comparability. Emergy synthesis, proposed by the ecologist H.T. Odum and described in his book "Environmental Accounting: Emergy and Environmental Decision Making" (1996), has been chosen and applied as an environmental evaluation tool, in order to complete the analysis with an assessment of the "global value" of goods and processes. In particular, emergy synthesis has been applied in order to improve the evaluation of the significance of environmental aspects in an EMS, and in order to evaluate the environmental performance of three scenarios of future evolution of the energy system. Regarding EMS, this work discusses an application of an EMS together with the CLEAR methodology for environmental accounting, in order to improve the identification of environmental aspects; data regarding the environmental aspects, and the significant ones, of 4 local authorities are also presented, together with a preliminary proposal for integrating the assessment of the significance of environmental aspects with emergy synthesis. Regarding the analysis of an energy system, this work presents the characterization of the current situation together with the overall energy balance and the evaluation of greenhouse gas emissions; moreover, three scenarios of future evolution are described and discussed. The scenarios have been built with the support of the LEAP software ("Long-range Energy Alternatives Planning System" by SEI, the Stockholm Environment Institute). Finally, the emergy synthesis of the current situation and of the three scenarios is shown.
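As a minimal illustration of the emergy accounting used in the thesis (the transformity values below are placeholders, not the ones adopted in the case studies), the sketch computes the total emergy of a process as the sum of its input flows weighted by their unit emergy values.

```python
def total_emergy(inputs):
    """Emergy synthesis in its simplest form: each input flow (in J, g or
    currency units) is multiplied by its Unit Emergy Value (transformity,
    in solar emjoules per unit) and the contributions are summed.

    inputs: list of (amount, unit_emergy_value) pairs.
    Returns total emergy in solar emjoules (seJ)."""
    return sum(amount * uev for amount, uev in inputs)

# Hypothetical example: electricity and diesel fuel feeding a process,
# with placeholder transformities taken from the literature.
flows = [
    (3.6e9, 2.0e5),   # electricity: 1000 kWh (3.6e9 J) at ~2.0e5 seJ/J
    (5.0e9, 1.1e5),   # diesel: 5 GJ at ~1.1e5 seJ/J
]
print(f"{total_emergy(flows):.2e} seJ")
```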

Relevance: 100.00%

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference with the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor node is the video sensor. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing, because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory because of limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must however be carefully minimized. Imaging could play, and already plays, an important role in sensing devices for ambient intelligence.
Computer vision can for instance be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way to distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches constitute a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics which are outlined below. Although the design of a sensor network and of its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend the node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and having a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, namely Model Predictive Control (MPC), to improve the system performance, outperforming naive power management policies.
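As a hedged illustration of the energy-level-dependent trigger described above, the sketch below shows a naive PIR-driven policy; the actual node uses a Model Predictive Control scheme, which is not reproduced here, and all thresholds and names are placeholder assumptions.

```python
# Energy thresholds (in joules) below which the node refuses to power the
# camera or the radio; all values are placeholders for illustration.
CAMERA_COST_J = 2.0
RADIO_COST_J = 1.5
MIN_RESERVE_J = 5.0

def on_pir_event(stored_energy_j, capture_frame, transmit):
    """Naive energy-level-dependent trigger: a PIR wake-up powers the camera
    (and then the radio) only if the stored energy allows it while keeping a
    reserve. The thesis replaces policies of this kind with a Model Predictive
    Controller that also accounts for the forecast solar harvesting."""
    if stored_energy_j < MIN_RESERVE_J + CAMERA_COST_J:
        return "skip"                      # not enough energy: stay asleep
    frame = capture_frame()
    stored_energy_j -= CAMERA_COST_J
    if stored_energy_j >= MIN_RESERVE_J + RADIO_COST_J:
        transmit(frame)
        return "captured+sent"
    return "captured"                      # capture only, defer transmission
```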

Relevance: 100.00%

Abstract:

The energy released during a seismic crisis in volcanic areas is strictly related to the physical processes taking place within the volcanic structure. In particular, Long Period (LP) seismicity, which seems to be related to the oscillation of a fluid-filled crack (Chouet, 1996; Chouet, 2003; McNutt, 2005), can precede or accompany an eruption. The present doctoral thesis is focused on the study of the LP seismicity recorded at the Campi Flegrei volcano (Campania, Italy) during the October 2006 crisis. Campi Flegrei is an active caldera; the combination of an active magmatic system and a densely populated area makes Campi Flegrei a critical volcano. The source dynamics of LP seismicity is thought to be very different from that of other kinds of seismicity (tectonic or volcano-tectonic): it is characterized by a time-sustained source and a low frequency content. These features imply that the duration magnitude, which is commonly used for VT events and sometimes for LPs as well, is unsuitable for the evaluation of LP magnitude. The main goal of this doctoral work was to develop a method for the determination of the magnitude of LP seismicity; it is based on the comparison of the energy of a VT event and of an LP event, linking the energy to the VT moment magnitude. The magnitude of the LP event is thus defined as the moment magnitude of a VT event with the same energy as the LP. We applied this method to the LP data set recorded at the Campi Flegrei caldera in 2006, to an LP data set of Colima volcano recorded in 2005-2006, and to an event recorded at Etna volcano. By testing the method on many waveforms recorded at different volcanoes, we verified its easy applicability and, consequently, its usefulness in the routine and quasi-real-time work of a volcanological observatory.
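The thesis calibrates the energy-magnitude link on the local VT data set; as a generic illustration only, the sketch below inverts the widely used empirical relation log10 E = 1.5 M + 4.8 (E in joules) to obtain an equivalent moment magnitude from the radiated energy.

```python
import math

def equivalent_moment_magnitude(radiated_energy_j):
    """Equivalent moment magnitude for an LP event, defined here as the
    magnitude of a VT event radiating the same seismic energy.

    For illustration the empirical energy-magnitude relation
    log10(E) = 1.5*M + 4.8 (E in joules) is inverted; the thesis instead
    calibrates the energy-magnitude link on the local VT data set.
    """
    return (math.log10(radiated_energy_j) - 4.8) / 1.5

# Example: an LP event radiating ~1e7 J maps to M of about 1.5.
print(round(equivalent_moment_magnitude(1e7), 2))
```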

Relevance: 100.00%

Abstract:

The now-recognized close relationship between climate change and anthropogenic influence has long drawn attention to the greenhouse effect and global warming, as well as to the increase of the atmospheric concentrations of climatically active gases, first of all CO2. Radiocarbon is currently the environmental tracer par excellence, capable of providing, through a "top-down" approach, a valid control tool to discriminate and quantify the fossil or biogenic origin of the carbon dioxide present in the atmosphere. Thus, alongside the traditional application fields of 14C, such as archaeometric dating, new areas are emerging, linked on the one hand to the energy sector, with the problems associated with plant emissions, fuels and the geological storage of CO2, and on the other hand to the rapidly growing market of so-called bio-based products made from renewable raw materials. In this thesis the world of radiocarbon has therefore been explored both from a strictly technical and methodological point of view and from an application point of view, with respect to its many and diverse fields of investigation. An analysis system based on the radiometric method, employing direct absorption of CO2 followed by liquid scintillation counting, was built and validated, introducing technological improvements and procedural refinements aimed at improving the performance of the method in terms of simplicity, sensitivity and reproducibility. The method, although generally representing a good compromise with respect to the methodologies traditionally used for 14C analysis, is at present still inadequate for those application fields where very high precision is required, but it is competitive for the analysis of modern samples with a high 14C concentration. Finally, the experiments carried out on some ionic liquids, although preliminary and not conclusive, open new lines of research on the possibility of using this new class of compounds as media for CO2 capture and 14C analysis by LSC.
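As a minimal illustration of the top-down discrimination between fossil and biogenic CO2 mentioned above (the normalization value is an assumption, not a figure from the thesis), the sketch below converts a measured 14C content, expressed in percent Modern Carbon, into a biogenic carbon fraction.

```python
def biogenic_carbon_fraction(pmc_sample, pmc_reference=100.0):
    """Estimate the biogenic (non-fossil) fraction of the carbon in a sample
    from its radiocarbon content, expressed in percent Modern Carbon (pMC).
    Fossil carbon contains no 14C, so the biogenic fraction is the ratio
    between the sample pMC and the pMC of a fully biogenic reference
    (about 100 pMC, slightly higher in practice because of bomb carbon).
    """
    fraction = pmc_sample / pmc_reference
    return min(max(fraction, 0.0), 1.0)

# Example: a CO2 sample measuring 65 pMC is ~65% biogenic, ~35% fossil.
print(biogenic_carbon_fraction(65.0))
```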

Relevance: 100.00%

Abstract:

In this work the well-known Monte Carlo code FLUKA was used to simulate the GE PETtrace cyclotron (16.5 MeV) installed at the "S. Orsola-Malpighi" University Hospital (Bologna, IT) and routinely used in the production of positron-emitting radionuclides. Simulations yielded estimates of various quantities of interest, including: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and of the vault walls; the activation of the ambient air, in particular the production of 41Ar; and the saturation yields of radionuclides used in nuclear medicine. The simulations were validated against experimental measurements in terms of the physical and transport parameters to be used in the energy range of interest for the medical field. The validated model was then used extensively in several practical applications, including the direct cyclotron production of non-standard radionuclides such as 99mTc, the production of medical radionuclides at the TRIUMF (Vancouver, CA) TR13 cyclotron (13 MeV), the complete design of the new PET facility of the "Sacro Cuore - Don Calabria" Hospital (Negrar, IT), including the ACSI TR19 (19 MeV) cyclotron, the dose field around the energy selection system (degrader) of a proton therapy cyclotron, the design of plug-doors for a new cyclotron facility in which a 70 MeV cyclotron will be installed, and the partial decommissioning of a PET facility, including the replacement of a Scanditronix MC17 cyclotron with a new TR19 cyclotron.
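As a side illustration of the saturation-yield concept mentioned in the abstract (the yield and half-life values below are illustrative, not FLUKA results), the sketch computes the activity produced at the end of an irradiation from the saturation yield, the beam current and the irradiation time.

```python
import math

def activity_at_end_of_bombardment(saturation_yield_mbq_per_ua, current_ua,
                                    irradiation_time_s, half_life_s):
    """Activity produced at the end of a cyclotron irradiation, from the
    saturation yield of the target (MBq/uA): the activity grows towards
    saturation as A(t) = Y_sat * I * (1 - exp(-lambda * t))."""
    decay_constant = math.log(2) / half_life_s
    buildup = 1.0 - math.exp(-decay_constant * irradiation_time_s)
    return saturation_yield_mbq_per_ua * current_ua * buildup

# Example: 18F (half-life ~6586 s), 40 uA beam for 1 hour, assuming a
# saturation yield of 8000 MBq/uA (illustrative value only).
print(activity_at_end_of_bombardment(8000.0, 40.0, 3600.0, 6586.0))
```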

Relevance: 100.00%

Abstract:

The present study is part of a wider research effort aimed at defining design criteria for the optimization of the energy performance of the wineries of small and medium-sized wine-producing farms. Specifically, the research concerns the retrofitting of existing rural buildings of modest size to be converted into warehouses for the storage of bottled wine. The study aims to define analysis criteria for the evaluation of retrofit interventions on such buildings, directed both at improving the energy performance of the building envelope and at reducing the energy demand associated with the operation of any thermal control systems. The research was carried out using the thermal simulation software EnergyPlus to obtain simulated indoor temperature values for the different improvement scenarios considered, and by subsequently defining indicators that make explicit the influence of the main design variables on the indoor temperature of the storage rooms and on the energy demand of the building needed to keep the wine within its comfort temperature range. Among all the possible interventions for improving the energy performance of buildings, those analysed in this study involve the addition of external wall insulation, the insulation of the roof, and the addition of an external vegetated shading structure. The results obtained provide a first indication of the most effective interventions in terms of energy improvement and highlight the usefulness of the proposed criterion in bringing out the critical aspects of the improvement interventions considered. The method defined in the present research is therefore a valid evaluation tool supporting the design of retrofit interventions for rural buildings to be converted into warehouses for the storage of bottled wine.
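The indicators defined in the thesis are not detailed in the abstract; as one plausible, purely illustrative example, the sketch below scores a simulated scenario by the degree-hours spent outside an assumed wine-storage comfort band.

```python
def discomfort_degree_hours(hourly_temps_c, t_min=12.0, t_max=18.0):
    """One plausible indicator for comparing retrofit scenarios: the sum of
    the hourly deviations (degree-hours) of the simulated indoor temperature
    outside an assumed wine-storage comfort band (here 12-18 deg C; the band
    actually used in the thesis is not stated in the abstract)."""
    total = 0.0
    for t in hourly_temps_c:
        if t < t_min:
            total += t_min - t
        elif t > t_max:
            total += t - t_max
    return total

# Example: compare two simulated scenarios over a short period.
baseline = [19.5, 20.0, 21.0, 18.5, 17.0]
insulated = [18.2, 18.5, 18.8, 17.5, 16.8]
print(discomfort_degree_hours(baseline), discomfort_degree_hours(insulated))
```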