15 results for per-survivor processing

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

30.00%

Publisher:

Abstract:

Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present these primary phenolic building blocks are obtained by refining fossil hydrocarbons: 5% of world production comes from coal (which contains 0.2% phenols) through distillation of the tar residue left after coke production, while 95% of current world phenol production comes from the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used in tens of thousands of tons to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low-cost, abundant raw material (250 million tons per year) which does not subtract land use or processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, removing the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes opens up huge processing and marketing opportunities, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyze the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process liquefied up to 60% of the treated matter. The enzymatic treatment was also able to solubilize up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolyzed wheat bran was then developed, based on adsorption/desorption on the styrene-polyvinylbenzene weak cation-exchange resin Amberlite IRA 95. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption working parameters were optimized for the crude enzymatic wheat bran hydrolyzate. The extraction process had an overall yield of 82% and allowed concentrated extracts containing up to 3000 ppm of ferulic acid to be obtained. The crude enzymatic wheat bran hydrolyzate and the concentrated extract were finally used as substrates in a bioconversion process of ferulic acid into vanillin through resting-cell fermentation.
The bioconversion process had a vanillin yield of 60-70% within 5-6 hours of fermentation. Our findings are the first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
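As a rough illustration only (our own assumption that the reported extraction and bioconversion yields refer to the same ferulic acid basis and compose multiplicatively), the overall recovery of vanillin from the solubilized ferulic acid would be on the order of:

```latex
% Illustrative composition of the yields reported above; not a figure from the thesis.
\[
  Y_{\mathrm{overall}} \;=\; Y_{\mathrm{extraction}} \times Y_{\mathrm{bioconversion}}
  \;=\; 0.82 \times (0.60\text{--}0.70) \;\approx\; 0.49\text{--}0.57
\]
```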

Relevance:

30.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:

• Portable mobile devices have modest sizes and weights, and therefore limited resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.

• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.

This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are increasingly programmable, and thus functionally flexible, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration are a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.

Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks onto the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. The problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform.

This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.

Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power display technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, and hence for the user-perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.

Thesis overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined, stream-oriented applications on top of distributed memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
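To make the backlight/compensation trade-off described in this abstract concrete, here is a minimal numerical sketch (our own illustration under an assumed linear display model, not the hardware-assisted pipeline developed in the thesis): the backlight is dimmed to a fraction b of full brightness and pixel values are scaled by 1/b, with clipping of saturated pixels as the source of the residual quality loss.

```python
import numpy as np

def compensate_backlight(image, b):
    """Illustrative backlight-scaling compensation (assumed model): dim the
    backlight to a fraction `b` of full brightness and scale pixel values by
    1/b to preserve perceived luminance. `image` is a float array in [0, 1];
    values that would exceed 1 are clipped, which is where the (small)
    quality degradation comes from."""
    return np.clip(image / b, 0.0, 1.0)

# Example: a frame dimmed to 70% backlight (roughly 30% backlight power saved
# under the assumption that backlight power scales with its level).
frame = np.random.rand(240, 320)          # stand-in for a video frame
out = compensate_backlight(frame, b=0.7)  # frame actually sent to the panel
clipped = np.mean(frame / 0.7 > 1.0)      # fraction of saturated pixels
print(f"clipped pixels: {clipped:.1%}")   # crude proxy for the QoS impact
```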

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems are equipped with hardware resources that allow the execution of very complex applications such as audio and video decoding. The design of such systems must satisfy two opposing requirements: on the one hand it is necessary to provide high computational power, on the other hand stringent constraints on energy consumption must be respected. One of the most widespread trends to answer these opposing needs is to integrate on a single chip a large number of processors characterized by a simplified design and low power consumption. However, to actually exploit the computational power offered by an array of processors, application development methodologies must be heavily revisited. With the advent of multi-processor systems-on-chip (MPSoCs), parallel programming has spread widely in the embedded domain as well; nevertheless, progress in parallel programming has not kept pace with the ability to integrate parallel hardware on a single chip. Besides the introduction of multiple processors, the need to reduce the power consumption of MPSoCs leads to other architectural solutions that directly complicate application development. The design of the memory subsystem, in particular, is a critical problem. Integrating memory banks on chip allows very short access times and very low power consumption; unfortunately, the amount of on-chip memory that can be integrated in an MPSoC is very limited. For this reason it is necessary to add off-chip memory banks, which have a much larger capacity, but also higher power consumption and access times. Most MPSoCs currently on the market devote part of their area budget to the implementation of cache and/or scratchpad memories. Scratchpad memories (SPMs) are often preferred to caches in embedded MPSoCs for reasons of better predictability, smaller area occupation and, above all, lower power consumption. On the other hand, while the use of caches is completely transparent to the programmer, SPMs must be explicitly managed by the application. Exposing the organization of the memory hierarchy to the application makes it possible to exploit its advantages (reduced access times and power consumption) efficiently; in return, obtaining these benefits requires writing applications so that data are suitably partitioned and allocated onto the various memories. The burden of this complex task obviously falls on the programmer. This scenario clearly illustrates the need for programming models and support tools that simplify the development of parallel applications. This thesis presents a framework for the development of software for embedded MPSoCs based on OpenMP. OpenMP is a de facto standard for programming shared-memory multiprocessors, characterized by a simple annotation-based approach to parallelization (compiler directives). Its programming interface allows loop-level parallelism, which is very common in embedded signal processing and multimedia applications, to be expressed naturally and very efficiently. OpenMP is therefore an excellent starting point for the definition of a programming model for MPSoCs, above all because of its ease of use.
On the other hand, to exploit the computational power of an MPSoC efficiently it is necessary to deeply revisit the implementation of the OpenMP support, both in the compiler and in the runtime environment. All the constructs for managing parallelism, work sharing and inter-processor synchronization involve an overhead cost that must be minimized so as not to compromise the benefits of parallelization. This can be achieved only through an accurate analysis of the hardware characteristics and the identification of the potential bottlenecks in the architecture. An implementation of task management, barrier synchronization and data sharing that exploits the hardware resources efficiently makes it possible to obtain high performance and scalability. Data sharing, in the OpenMP model, deserves particular attention. In a shared-memory model, the data structures (arrays, matrices) accessed by the program are physically allocated on a single memory resource reachable by all processors. As the number of processors in a system grows, concurrent access to a single memory resource becomes an evident bottleneck. To relieve the pressure on the memories and on the interconnect, we study and propose data structure partitioning techniques. These techniques require that a single array entity be treated in the program as a collection of many sub-arrays, each of which can be physically allocated on a different memory resource. From the program's point of view, addressing a partitioned array requires that, at every access, additional instructions be executed to recompute the physical destination address. Doing this by hand is clearly a long, complex and error-prone task. For this reason, our partitioning techniques have been integrated into the OpenMP programming interface, which has been significantly extended. Specifically, new directives and clauses allow the programmer to annotate array data to be partitioned and allocated in a distributed fashion across the memory hierarchy. Support tools have also been developed to collect profiling information on the array access patterns. This information is exploited by our compiler to allocate the partitions on the various memory resources while respecting an affinity relation between tasks and data. More precisely, the allocation passes in our compiler assign a given partition to the scratchpad memory local to the processor hosting the task that performs the largest number of accesses to it.
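The per-access address recomputation mentioned above can be sketched as follows; this is a minimal Python illustration of block-partitioning a logical array across memory banks, with names and a plain block scheme of our own choosing, not the OpenMP extension defined in the thesis:

```python
# Minimal sketch of partitioned-array addressing: a logical 1-D array is split
# into equally sized blocks, each placed on a different memory bank (plain
# Python lists stand in for on-chip scratchpad banks).

class PartitionedArray:
    def __init__(self, size, num_banks):
        self.block = (size + num_banks - 1) // num_banks   # elements per bank
        self.banks = [[0] * self.block for _ in range(num_banks)]

    def _locate(self, i):
        # Extra work done at every access: translate the logical index into a
        # (bank, offset) pair -- the "physical address recomputation".
        return i // self.block, i % self.block

    def __getitem__(self, i):
        bank, off = self._locate(i)
        return self.banks[bank][off]

    def __setitem__(self, i, value):
        bank, off = self._locate(i)
        self.banks[bank][off] = value

a = PartitionedArray(size=1024, num_banks=4)
a[700] = 3.14                  # logical index 700 lands in bank 2, offset 188
print(a[700], a._locate(700))
```

In the thesis framework this translation is hidden behind the extended OpenMP directives and generated by the compiler, rather than written by hand as above.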

Relevance:

30.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging could play, and indeed plays, an important role in sensing devices for ambient intelligence.

Computer vision can, for instance, be used for recognizing persons and objects and for recognizing behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: more eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. Although the design of a sensor network and of its sensor nodes is strictly application-dependent, the physical implementation of an individual wireless node is constrained by a number of metrics that should almost always be considered. Among them:

• small form factor, to reduce node intrusiveness;

• low power consumption, to reduce battery size and to extend node lifetime;

• low cost, for widespread diffusion.

These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.

Low-power video sensor nodes and video processing algorithms. In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip, while the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.

Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.

Multimodal surveillance. In several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and to the wireless module. This approach has been shown to extend node lifetime and possibly result in continuous operation of the node. Being low cost, passive (thus low power) and presenting a limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of stand-alone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
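A minimal sketch of the energy-level-dependent PIR trigger idea described above (our own simplified illustration with assumed thresholds and a trivial energy model, not the MPC-based policy developed in the thesis) could look like this:

```python
# Simplified illustration of an energy-level-dependent PIR trigger: the camera
# and radio are woken up on motion only if the remaining energy budget allows
# it. The threshold values and the per-event cost are assumptions.

CAMERA_COST_J = 2.0      # assumed energy per capture-and-transmit event (joules)
MIN_RESERVE_J = 50.0     # assumed reserve to keep the node alive until recharge

def on_pir_event(battery_energy_j, pir_motion_detected):
    """Return True if the power-hungry video subsystem should be activated."""
    if not pir_motion_detected:
        return False
    # Activate camera + radio only when the energy left after the event
    # stays above the survival reserve.
    return battery_energy_j - CAMERA_COST_J > MIN_RESERVE_J

print(on_pir_event(battery_energy_j=120.0, pir_motion_detected=True))  # True
print(on_pir_event(battery_energy_j=51.0,  pir_motion_detected=True))  # False
```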

Relevance:

30.00%

Publisher:

Abstract:

Satellite SAR (Synthetic Aperture Radar) interferometry is a valid technique for digital elevation model (DEM) generation, providing metric accuracy even without ancillary data of good quality. Depending on the situation, the interferometric phase can be interpreted both as topography and as a displacement that possibly occurred between the two acquisitions. Once these two components have been separated, it is possible to produce a DEM from the first or a displacement map from the second. InSAR DEM generation in the cryosphere is not a straightforward operation, because almost every interferometric pair also contains a displacement component which, even if small, could introduce huge errors into the final product when interpreted as topography during the phase-to-height conversion step. Considering a glacier, and assuming the linearity of its velocity flux, it is therefore necessary to differentiate at least two pairs in order to isolate the topographic residue only. In the case of an ice shelf, the displacement component in the interferometric phase is determined not only by the flow of the glacier but also by the different tide heights at the two acquisitions. In fact, even if the two scenes of the interferometric pair are acquired at the same time of day, only the main tidal terms disappear in the interferogram, while the smaller ones do not cancel out completely and therefore show up as displacement fringes. Given the availability of tide gauges (or, as an alternative, of an accurate tidal model) it is possible to calculate a tidal correction to be applied to the differential interferogram. It is important to be aware that the tidal correction is applicable only if the position of the grounding line is known, which is often a controversial matter. This thesis describes the methodology applied to the generation of the DEM of the Drygalski ice tongue in Northern Victoria Land, Antarctica. The displacement has been determined both interferometrically and from the coregistration offsets of the two scenes. Particular attention has been devoted to investigating the role of some parameters, such as timing annotations and orbit reliability. Results have been validated in a GIS environment by comparison with GPS displacement vectors (for the displacement map and the InSAR DEM) and ICESat GLAS points (for the InSAR DEM).
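As background for the differencing step described above, the (unwrapped, flattened) interferometric phase is commonly written as the sum of a topographic term and a line-of-sight displacement term. This is the standard first-order decomposition from the InSAR literature, not a formula reproduced from the thesis:

```latex
% Standard first-order decomposition of the flattened interferometric phase.
% Symbols: lambda = radar wavelength, B_perp = perpendicular baseline,
% R = slant range, theta = incidence angle, h = topographic height,
% d_LOS = line-of-sight displacement between the two acquisitions.
\[
  \Delta\phi \;\approx\;
  \underbrace{\frac{4\pi}{\lambda}\,\frac{B_{\perp}}{R\,\sin\theta}\,h}_{\text{topography}}
  \;+\;
  \underbrace{\frac{4\pi}{\lambda}\,d_{\mathrm{LOS}}}_{\text{displacement (flow + residual tide)}}
\]
```

Differencing two interferograms with known baselines cancels the common displacement term (under the linear-flow assumption) and leaves the topographic residue used for the DEM.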

Relevance:

30.00%

Publisher:

Abstract:

The present work investigates qualitative aspects of products that fall outside the classic view of Italian food production, with the exception of the apricot, a fruit that has nonetheless been less studied with the methods considered here. The development of computer systems and of advanced software dedicated to the statistical processing of data has permitted the application of advanced technologies, including to the analysis of niche products. Near-infrared spectroscopic analysis has been applied in the chemical industry for over twenty years and was subsequently applied in the food industry with great success for non-destructive in-line and off-line analysis. The work presented below ranges from the use of spectroscopy for the determination of some rheological indices of ice cream to the characterization of the main quality indices of apricots and fresh dates, and to the determination of the production areas of pistachio. Alongside the spectroscopy, different methods of multivariate analysis are illustrated, either for the interpretation of spectra or for the construction of qualitative estimation models. The thesis is divided into four separate studies, one per product, each introduced by its own premise and closed by its own bibliography. These studies are preceded by a general discussion on the state of the art and the basics of NIR spectroscopy.

Relevance:

30.00%

Publisher:

Abstract:

The Mediterranean diet is rich in healthy substances such as fibres, vitamins and phenols. Often these molecules are lost during food processing. Olive oil milling waste waters, brans and grape skins are some of the most relevant agri-food by-products in the Mediterranean countries. These wastes are still rich in extremely valuable molecules, such as phenolic antioxidants, which have several interesting health-promoting properties. Using innovative, environmentally friendly technologies based on the rational use of enzymatic treatments, it is possible to obtain from agri-food by-products new ingredients containing antioxidants that can be used as functional ingredients to produce fortified foods. These foods, having health-protecting/promoting properties on top of their traditional nutritional properties, are attracting consumers' attention due to the increasing awareness of health protection through prevention. The use of these new ingredients in different food preparations was studied in order to evaluate the effects that food processing might have on the antioxidant fraction, the effect of these ingredients on the appearance of the foods, and the impact in terms of taste and scent, crucial features for the acceptability of the final product. Using these new ingredients it was possible to produce antioxidant bread, pasta, cheese, cookies and ice cream. These food products retain very well the antioxidant properties conferred by the added ingredients, despite the very different treatments that were performed. The foods obtained had good palatability and in some cases the final product also enjoyed good success on the market.

Relevance:

30.00%

Publisher:

Abstract:

This study focuses on the different applications of thermal remote sensing in urban areas. Infrared radiation and its interactions with the Earth's atmosphere, the main laws governing radiative heat exchange, the characteristics of the sensors and the different applications of thermography are described first. The characteristic aspects of satellite thermography, aimed mainly at the evaluation of the Urban Heat Island phenomenon, are then treated in detail: the available sensors, the correction methodologies for atmospheric effects, the estimation of surface emissivity and the computation of pixel surface temperature are described. The experiments carried out on the Bologna area using ASTER multispectral images are then illustrated: the results show that the Urban Heat Island is indeed detectable over the urban area, even though its quantification is complex. The potential and limits of airborne thermography, its different uses, the operational survey modes and the algorithms used to compute the surface temperature of building roofs are then described. Through the analysis of some previous experiences, the influence of the atmosphere, the modelling of its effects on the measured radiance and the different methods for emissivity estimation are discussed. The European project EnergyCity is then introduced, aimed at creating a GeoWeb spatial decision support system for the reduction of energy consumption and greenhouse gas production in seven cities of Central Europe. The survey modes and the processing activities applied to the digital datasets to create surface temperature maps to be implemented in the SDSS are illustrated. Finally, the experiments carried out on the thermal images acquired in February 2010 over the city of Treviso are described: the images were transformed into a georeferenced mosaic of radiometric temperature through geometric and radiometric corrections, and the latter, after the emissivity correction, was transformed into a mosaic of surface temperature.
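The atmospheric and emissivity corrections mentioned above typically rest on the standard thermal-infrared radiative transfer equation, given here for reference in its textbook single-channel form (not an equation reproduced from the thesis):

```latex
% Standard single-channel TIR radiative transfer: the at-sensor radiance is the
% surface-emitted term plus reflected downwelling and upwelling atmospheric terms.
% Symbols: epsilon = surface emissivity, B(T_s) = Planck radiance at surface
% temperature T_s, tau = atmospheric transmissivity, L_up / L_down = upwelling /
% downwelling atmospheric radiance.
\[
  L_{\mathrm{sensor}} \;=\;
  \bigl[\varepsilon\,B(T_s) + (1-\varepsilon)\,L^{\downarrow}\bigr]\,\tau
  \;+\; L^{\uparrow}
\]
```

Surface temperature is obtained by solving for B(T_s) and inverting the Planck function once the emissivity and the atmospheric terms have been estimated.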

Relevance:

30.00%

Publisher:

Abstract:

Over the past years the fruit and vegetable industry has become interested in the application of both osmotic dehydration and vacuum impregnation as mild technologies, because of their low temperature and energy requirements. Osmotic dehydration is a partial dewatering process obtained by immersing cellular tissue in a hypertonic solution. The diffusion of water from the vegetable tissue to the solution is usually accompanied by the simultaneous counter-diffusion of solutes into the tissue. Vacuum impregnation is a unit operation in which porous products are immersed in a solution and subjected to a two-step pressure change. In the first step (vacuum) the pressure in the solid-liquid system is reduced and the gas in the product pores expands, partially flowing out. When the atmospheric pressure is restored (second step), the residual gas in the pores is compressed and the external liquid flows into the pores. This unit operation allows specific solutes to be introduced into the tissue, e.g. antioxidants, pH regulators, preservatives and cryoprotectants. Fruits and vegetables interact dynamically with the environment, and the present study attempts to enhance our understanding of the structural, physico-chemical and metabolic changes of plant tissues upon the application of technological processes (osmotic dehydration and vacuum impregnation) by following a multi-analytical approach. Macro (low-frequency nuclear magnetic resonance), micro (light microscopy) and ultrastructural (transmission electron microscopy) measurements, combined with textural and differential scanning calorimetry analyses, allowed evaluating the effects of individual osmotic dehydration or vacuum impregnation processes on (i) the interaction between air and liquid in real plant tissues, (ii) the water state of the plant tissue and (iii) the cell compartments. Isothermal calorimetry, respiration and photosynthesis determinations were used to investigate the metabolic changes upon the application of osmotic dehydration or vacuum impregnation. The proposed multi-analytical approach should enable both better designs of processing technologies and better estimation of their effects on the tissue.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs, in order to integrate the available experimental data and obtain in-silico models to be used for studying, formulating new hypotheses about, and planning experiments on aspects that are not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs of adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes; this model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; the non-binary coding and the multi-threshold approach are also introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular/sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
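For reference, here is a minimal Python sketch of the basic 8-neighbour Local Binary Pattern mentioned above; this is the standard textbook operator, of which the non-binary coding and multi-threshold variants studied in the thesis are extensions:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 Local Binary Pattern: each pixel is replaced by an 8-bit code
    whose bits record whether each neighbour is >= the centre pixel.
    `img` is a 2-D grayscale array; border pixels are left as code 0."""
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre, code = img[y, x], 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy, x + dx] >= centre:
                    code |= 1 << bit
            codes[y, x] = code
    return codes

patch = np.random.randint(0, 256, (16, 16))
hist = np.bincount(lbp_8neighbour(patch).ravel(), minlength=256)  # texture feature
```

The 256-bin histogram of codes is the texture feature typically fed to a classifier; the multi-threshold idea replaces the single >= comparison with several thresholded comparisons per neighbour.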

Relevance:

30.00%

Publisher:

Abstract:

Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still some difficulties in achieving an accurate quantification of perfusion parameters, due for example to the algorithms employed, to the mathematical model, to the patient's weight and cardiac output, and to the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented, in order to investigate the causes of variability of this technique. First, an analysis was carried out to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and on the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; in any case, the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison of the results with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of a Standardized Perfusion Index (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of perfusion maps was analyzed. The ROI approach and the map approach provide related values of BFa, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa and the analysis and definition of a simplified technique to compute the SPV parameter represent an improvement in the field of liver perfusion CT analysis.
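For context, the maximum slope method referred to above is usually stated as follows; this is the standard formulation from the perfusion CT literature, not reproduced from the thesis:

```latex
% Maximum slope method for arterial blood flow: the steepest upslope of the
% tissue enhancement curve divided by the peak arterial enhancement, under the
% assumption of no venous outflow during the measurement window.
\[
  \mathrm{BF}_a \;=\;
  \frac{\left[\dfrac{dC_{\mathrm{tissue}}(t)}{dt}\right]_{\max}}
       {\bigl[C_{\mathrm{artery}}(t)\bigr]_{\max}}
\]
```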

Relevance:

30.00%

Publisher:

Abstract:

Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection that improves the biopsy protocol. In particular, this thesis contributes with: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance, reducing the data collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on real-time 3D echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
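As a reminder of what a level-set segmentation framework evolves, a commonly used region-based formulation (the Chan-Vese model, cited here as a representative example rather than necessarily the exact functional used in the thesis) is:

```latex
% Chan-Vese level-set evolution: phi is the level-set function whose zero level
% set is the segmentation boundary, I the image, c1/c2 the mean intensities
% inside and outside the contour, and mu, lambda1, lambda2 weighting parameters.
\[
  \frac{\partial \phi}{\partial t} \;=\;
  \delta_{\epsilon}(\phi)\left[
    \mu\,\operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)
    \;-\;\lambda_1\,(I-c_1)^2
    \;+\;\lambda_2\,(I-c_2)^2
  \right]
\]
```

Each per-voxel update depends only on local neighbours, which is what makes such evolutions well suited to GPU parallelization.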

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Many research activities are therefore focused on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operative conditions. The systematic error in the acquired data is thus compensated, in order to increase accuracy. Moreover, the definition of a 3D thermogram is examined: the geometrical information of an object and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize the temperature values and make the thermal data independent of the thermal camera's point of view.

Relevance:

30.00%

Publisher:

Abstract:

Machine Learning techniques are very useful because they allow the information available in real time to be exploited to the fullest. The Random Forests method can be counted among the most recent and best-performing Machine Learning techniques. Exploiting the characteristics and potential of this method, this doctoral thesis addresses two different case studies, from which two different forecasting models were developed. The first case study focuses on the main rivers of the Emilia-Romagna region, characterized by very short response times. The choice of these rivers was not accidental: in recent years, several flood events, largely of the "flash flood" type, have occurred in these basins. The second case study concerns the main sections of the Po river, where the flood wave propagation time is longer than in the watercourses of the first case study. Starting from a large amount of data, the first step was to select and define the input data according to the objectives to be achieved, for both case studies. For the model concerning the Emilia-Romagna rivers, only observed data were taken into consideration, whereas for the Po river basin the observed data were complemented by forecast data coming from the Mike11 NAM/HD modelling chain. Exploiting one of the main features of the Random Forests method, a probability of occurrence was estimated: this aspect is fundamental both at the technical stage and at the decision stage for any civil protection intervention activity. Data processing and model development were carried out in the R environment. At the end of the validation phase, the encouraging results obtained made it possible to integrate the model developed in the first case study into the operational architecture of FEWS.
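As an illustration of how a Random Forest can return an exceedance probability rather than a bare yes/no forecast, here is a generic scikit-learn sketch in Python with invented feature names and synthetic data; the thesis models themselves were built in R on observed (and, for the Po, forecast) data:

```python
# Generic illustration of a Random Forest producing a probability of occurrence
# (e.g. probability that a water-level threshold will be exceeded). Features
# and data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical predictors: recent rainfall, upstream level, soil moisture.
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Probability of threshold exceedance for a new observation: the fraction of
# trees voting for the "exceedance" class.
new_obs = np.array([[0.8, 0.6, 0.3]])
p_exceed = model.predict_proba(new_obs)[0, 1]
print(f"estimated exceedance probability: {p_exceed:.2f}")
```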

Relevance:

30.00%

Publisher:

Abstract:

This research aims to analyze tools and methods for the application of H-BIM, understanding its critical issues and providing useful solutions in this field. At the same time, its purpose is not limited to the simple production of semantically structured, parametric 3D models starting from a point cloud obtained with a digital survey; rather, it seeks to define the criteria and methods for applying H-BIM within the entire process. The chosen methodological approach starts from the knowledge of the state of the art of H-BIM, with the study of the current regulations on the subject and of the most relevant case studies. A complete critical review of the literature on BIM and H-BIM technology was carried out, analyzing experiences in the use of BIM technology in the global construction sector. Furthermore, in order to promote intelligent solutions within Facility Management, it was necessary to analyze the critical issues in current procedures, to review the processes and methods used to collect and manage data, and to identify adequate procedures to ensure a successful implementation. The procedural and operational potential linked to the systematic use of digital innovations from the Facility Management perspective was highlighted, together with the study of data acquisition, data processing and post-production tools. Testing was carried out on specific cases for the analysis of the Scan-to-BIM phase, differentiated by type of use, construction date, ownership and location. The path followed made it possible to highlight the meaning and implications of the use of BIM in Facility Management, on the basis of a differentiation of the applications of the BIM model as the underlying conditions vary. Finally, conclusions were drawn and recommendations formulated regarding the future use of H-BIM technology in the construction sector, in particular by defining the emerging frontier of the Digital Twin as a necessary vehicle in the future of Construction 4.0.