8 results for Work flow
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological conditions. These techniques are widely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other emerging fields, e.g. Brain Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g. fMRI, PET-CT) can be unfeasible because of their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and intensive data processing. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), in which the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility that currently limits the capabilities of these techniques severely. Moreover, processing the recorded data requires computationally intensive regularization techniques, which penalizes applications with tight temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using GPUs, in order to provide solutions in reasonable times and meet the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
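The computational bottleneck the abstract refers to is the regularized inversion of a large forward model (a lead-field matrix) relating sources or conductivities to scalp potentials, which is exactly the kind of dense linear algebra GPUs accelerate well. Below is a minimal Python sketch of a Tikhonov (minimum-norm) inversion on a toy problem; the problem size, noise level and regularization weight are illustrative assumptions, not the head models or the actual work-flow of the thesis.

```python
# Minimal sketch of the regularized inversion that makes EEG/EIT reconstruction
# computationally heavy. The lead-field matrix L, noise level and regularization
# weight are illustrative assumptions, not values from the thesis.
import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_sources = 64, 5000                    # assumed problem size
L = rng.standard_normal((n_electrodes, n_sources))    # lead-field (forward model)
x_true = np.zeros(n_sources)
x_true[rng.choice(n_sources, 3)] = 1.0                # sparse "activity"
v = L @ x_true + 0.01 * rng.standard_normal(n_electrodes)  # scalp potentials

lam = 1e-2                                            # Tikhonov regularization weight
# Minimum-norm estimate: x = L^T (L L^T + lam I)^-1 v
G = L @ L.T + lam * np.eye(n_electrodes)
x_hat = L.T @ np.linalg.solve(G, v)

print("correlation with true sources:", np.corrcoef(x_true, x_hat)[0, 1].round(3))
```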
Abstract:
In the era of precision medicine and big medical data sharing, it is necessary to manage the work-flow of digital radiological big data in a productive and effective way. In particular, it is now possible to extract information "hidden" in digital images in order to create diagnostic algorithms that help clinicians set up more personalized therapies, which are a particular target of modern oncological medicine. The digital images generated by a patient have a "texture" structure that is not visible but encoded; it is "hidden" because it cannot be recognized by sight alone. Thanks to artificial intelligence, pre- and post-processing software and mathematical calculation algorithms, a classification can be performed based on the non-visible data contained in radiological images. Being able to calculate the volume of tissue body composition could lead to clustered classes of patients inserted in standard morphological reference tables, based on human anatomy distinguished by gender and age, and perhaps in the future also by race. Furthermore, the branch of "morpho-radiology" is a useful means of addressing problems regarding personalized therapies, which is particularly needed in the oncological field. Currently, oncological therapies are no longer based on generic drugs but on targeted, personalized therapy. The lack of gender- and age-specific therapy tables could be filled thanks to the application of morpho-radiology data analysis.
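As an illustration of the kind of "non-visible" texture information such pipelines quantify, the following Python sketch computes a single gray-level co-occurrence matrix (GLCM) contrast feature; the toy images, number of gray levels and choice of feature are assumptions made here for illustration, not the radiomics software used in the thesis.

```python
# Minimal sketch of a texture feature of the kind radiomics pipelines compute:
# a gray-level co-occurrence matrix (GLCM) for horizontally adjacent pixels and
# its contrast. Inputs and parameters are illustrative assumptions.
import numpy as np

def glcm_contrast(img, levels=8):
    # Quantize a [0, 1] image to a small number of gray levels.
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of gray levels in horizontally adjacent pixels.
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))   # GLCM contrast

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth gradient
noisy = rng.random((64, 64))                       # speckled texture
print(glcm_contrast(smooth), glcm_contrast(noisy)) # the noisy image scores higher
```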
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. In recent years several methodologies have been developed to evaluate the hazard associated with such a complex phenomenon, whose velocity, impact force and poor temporal predictability are responsible for its high hazard level. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and to assess sources of error, in order to understand how the empirical uncertainties influence the predictions; on the other hand, the research addresses the possibility of performing hazard analysis starting from the identification of susceptible debris-flow catchments and the definition of their activity level. 25 well-documented debris-flow events have been back-analyzed with the DFlowz model (Berti and Simoni, 2007): derived from empirical relations between event volume and the planimetric and cross-sectional inundated areas, the code delineates the areas affected by an event from information about the volume, the preferential flow path and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the predictions and involves the calibration of the model based on factors describing the uncertainty of the semi-empirical relationships. The general assumptions on which the model is based have been verified, although the predictive capabilities are affected by the uncertainties of the empirical scaling relationships, which necessarily have to be taken into account and depend mostly on errors in the estimation of the deposited volume. In addition, in order to test the predictive capabilities of physically based models, some events have been simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf and the Swiss Federal Institute for Snow and Avalanche Research (SLF), adopts a one-phase approach based on Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth and the flow velocity in each cell of the input DTM. As regards hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing the current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transportation and deposition, their seasonal patterns of occurrence, and the role played by bedrock geology in sediment transfer.
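DFlowz-type delineation rests on semi-empirical scaling relations in which both the cross-sectional area A and the planimetric area B of the deposit grow with event volume V as V to the power 2/3. The short Python sketch below shows how such relations are evaluated; the coefficients are illustrative order-of-magnitude values, not those calibrated in this study.

```python
# Minimal sketch of the semi-empirical scaling behind inundation delineation:
# A = k_a * V**(2/3) and B = k_b * V**(2/3). The coefficients below are
# illustrative order-of-magnitude values, not the thesis' calibrated ones.
def inundated_areas(volume_m3, k_a=0.1, k_b=20.0):
    """Return (cross-sectional area, planimetric area) in m^2 for a volume in m^3."""
    a_cross = k_a * volume_m3 ** (2.0 / 3.0)
    b_plan = k_b * volume_m3 ** (2.0 / 3.0)
    return a_cross, b_plan

for v in (1e3, 1e4, 1e5):   # small to large events
    a, b = inundated_areas(v)
    print(f"V = {v:8.0f} m^3  ->  A = {a:7.0f} m^2,  B = {b:8.0f} m^2")
```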
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution to direct and inverse problems of uncertainty propagation. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
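To fix ideas, the sketch below shows a one-dimensional Polynomial Chaos Expansion in Python: a toy model with a standard-normal input is projected onto probabilists' Hermite polynomials by least squares, and its mean and variance are read directly from the coefficients. The model, degree and sample size are illustrative assumptions; the thesis' own code is a multi-dimensional MATLAB implementation.

```python
# Minimal 1-D PCE sketch: fit Hermite coefficients by regression, then recover
# statistics from them. Model, degree and sample size are illustrative.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

f = lambda x: np.exp(0.5 * x)          # toy model with a Gaussian input
degree, n_samples = 6, 2000

x = rng.standard_normal(n_samples)
Psi = hermevander(x, degree)           # He_0(x) ... He_degree(x), column-wise
coeffs, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)

# Orthogonality E[He_i He_j] = i! * delta_ij gives mean and variance directly.
factorials = np.array([math.factorial(i) for i in range(degree + 1)])
mean_pce = coeffs[0]
var_pce = np.sum(coeffs[1:] ** 2 * factorials[1:])

print(f"PCE mean ~ {mean_pce:.4f}  (exact {np.exp(0.125):.4f})")
print(f"PCE variance ~ {var_pce:.4f}  (exact {np.exp(0.5) - np.exp(0.25):.4f})")
```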
Abstract:
Nanotechnologies are expanding rapidly because of the opportunities the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and the quality control of nanotech products on the market. A consequence of the increasing number of applications of nanomaterials is that the EU regulatory authorities are introducing the obligation for companies that make use of nanomaterials to acquire analytical platforms for the assessment of the size parameters of the nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow-Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS to the characterization of liposomes in a wide range of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the strengths and weaknesses of the method. Afterwards, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal-Organic Frameworks, are discussed. Lastly, the applicability of HF5-ICP-MS to the rapid screening of samples of relevant nanorisk is shown: rather than a deep and comprehensive characterization, a quick and practical methodology is presented that within a few steps provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
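The sizing performed by flow field-flow fractionation ultimately relies on the link between a particle's diffusion coefficient and its hydrodynamic diameter given by the Stokes-Einstein relation. A minimal Python sketch follows; the temperature, viscosity and example diffusion coefficient are illustrative assumptions, not measurements from this work.

```python
# Minimal sketch of the physics underlying flow-FFF sizing: retention is
# governed by the diffusion coefficient D, and the Stokes-Einstein relation
# D = kT / (3*pi*eta*d_h) links D to the hydrodynamic diameter d_h.
# Temperature, viscosity and the example D value are illustrative assumptions.
from math import pi

K_B = 1.380649e-23        # Boltzmann constant, J/K

def hydrodynamic_diameter(D, temperature=298.15, viscosity=8.9e-4):
    """Hydrodynamic diameter (m) from a diffusion coefficient D (m^2/s) in water."""
    return K_B * temperature / (3 * pi * viscosity * D)

D_example = 4.4e-12       # m^2/s, typical of a ~100 nm liposome
d_h = hydrodynamic_diameter(D_example)
print(f"d_h ~ {d_h * 1e9:.0f} nm")
```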
Abstract:
The purpose of the first part of the research activity was to develop an aerobic cometabolic process in packed bed reactors (PBRs) to treat real groundwater contaminated by trichloroethylene (TCE) and 1,1,2,2-tetrachloroethane (TeCA). In an initial screening conducted in batch bioreactors, different groundwater samples from 5 wells of the contaminated site were fed with 5 growth substrates. The work led to the selection of butane as the best growth substrate, and to the development and characterization, from the site's indigenous biomass, of a suspended-cell consortium capable of degrading TCE with 90 % mineralization of the organic chlorine. A kinetic study conducted in batch and continuous-flow PBRs led to the identification of the best carrier. A kinetic study of butane and TCE biodegradation indicated that the attached-cell consortium is characterized by lower TCE-specific degradation rates and by a lower level of mutual butane-TCE inhibition. A 31 L bioreactor was designed and set up to scale up the experiment. The second part of the research focused on the biodegradation of 4 polymers, with and without chemical pre-treatments: linear low-density polyethylene (LLDPE), polypropylene (PP), polystyrene (PS) and polyvinyl chloride (PVC). Initially, the 4 polymers were subjected to different chemical pre-treatments: ozonation and UV/ozonation, in the gaseous and aqueous phases. It was found that, for LLDPE and PP, coupling UV and ozone in the gas phase is the most effective way to oxidize the polymers and generate carbonyl groups on the polymer surface. In further tests, the effect of the chemical pre-treatment on polymer biodegradability was studied. Gas-phase ozonated and virgin polymers were incubated aerobically with (a) a pure strain, (b) a mixed bacterial culture, and (c) a fungal culture, together with saccharose as a co-substrate.
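A common way to describe cometabolic degradation with mutual inhibition between the growth substrate (butane) and the co-substrate (TCE) is a Michaelis-Menten rate in which each compound raises the apparent half-saturation constant of the other. The Python sketch below illustrates this competitive-inhibition form; all parameter values are assumptions for illustration, not the kinetic constants fitted in the thesis.

```python
# Minimal sketch of Michaelis-Menten kinetics with mutual competitive
# inhibition between butane and TCE. All parameters are illustrative.
def competitive_rate(k_max, S, K_s, S_inhib, K_inhib, biomass):
    """Degradation rate of S with competitive inhibition by a second substrate."""
    return k_max * biomass * S / (K_s * (1.0 + S_inhib / K_inhib) + S)

# Illustrative parameters (concentrations in mg/L, rates per mg biomass per day)
k_but, K_but = 1.0, 0.5
k_tce, K_tce = 0.05, 1.0
X = 100.0                          # biomass, mg/L

for S_but in (0.0, 2.0, 10.0):     # increasing butane concentration
    r_tce = competitive_rate(k_tce, 0.5, K_tce, S_but, K_but, X)
    print(f"butane = {S_but:4.1f} mg/L  ->  TCE rate = {r_tce:.3f} mg/L/day")
```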
Abstract:
An essential role in the global energy transition is attributed to Electric Vehicles (EVs). The energy for EV traction can be generated by renewable energy sources (RES), also at the local level through distributed power plants such as photovoltaic (PV) systems. However, EV integration with electrical systems might not be straightforward. The intermittency of RES, combined with high and uncontrolled aggregate EV charging, requires an evolution toward new planning approaches and paradigms for energy systems. In this context, this work aims to provide a practical solution for the integration of EV charging in electrical systems with RES. A method for predicting the power required by an EV fleet at a charging hub (CH) is developed in this thesis. The proposed forecasting method considers the main parameters on which the charging demand depends. The results of the EV charging forecasting method are analyzed in depth under different scenarios. To reduce the intermittency of the EV load, methods for managing the charging power of EVs are proposed. The main target is to provide Charging Management Systems (CMS) that modulate EV charging to optimize specific performance indicators such as system self-consumption, peak load reduction, and PV exploitation. Controlling the EV charging power to achieve specific optimization goals is also known as Smart Charging (SC). The proposed techniques are applied to real-world scenarios, demonstrating the performance improvements obtained with SC strategies. A viable alternative for maximizing integration with intermittent RES generation is energy storage. Battery Energy Storage Systems (BESS) can act as a buffer between peak load and RES production. A sizing algorithm for PV+BESS integration in EV charging hubs is provided. The sizing aims to optimize the system's energy and economic performance. The results provide an overview of the optimal size the PV+BESS plant should have to improve overall system performance in different scenarios.
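As an illustration of what a Smart Charging rule can look like, the Python sketch below caps EV charging at the available PV surplus and falls back to the grid only when needed to meet the energy demand by departure, which raises self-consumption and clips the peak. The PV profile, charger limit and energy demand are illustrative assumptions, not the CMS or the real-world data used in the thesis.

```python
# Minimal sketch of a PV-following smart-charging rule: charge from PV surplus
# when available, and from the grid only as the departure deadline approaches.
# All inputs are illustrative assumptions.
def smart_charge(pv_kw, energy_need_kwh, p_max_kw, dt_h=1.0):
    """Return a charging schedule (kW per step) that prefers PV surplus."""
    schedule = []
    remaining = energy_need_kwh
    for step, pv in enumerate(pv_kw):
        steps_left = len(pv_kw) - step
        # Minimum energy that must be delivered now to stay feasible.
        must_charge = max(0.0, remaining - p_max_kw * dt_h * (steps_left - 1))
        p = min(p_max_kw, max(pv, must_charge / dt_h), remaining / dt_h)
        schedule.append(round(p, 2))
        remaining -= p * dt_h
    return schedule

pv_profile = [0, 0, 1, 3, 6, 7, 6, 3, 1, 0]      # kW over a 10-hour window
print(smart_charge(pv_profile, energy_need_kwh=30, p_max_kw=7))
```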
Abstract:
The main focus of this work is to define a numerical methodology to simulate an aerospike engine and then to analyse the performance of DemoP1, a small aerospike demonstrator built by Pangea Aerospace. The aerospike is a promising solution for building more efficient engines than current ones. Its main advantage is expansion adaptation, which allows optimal expansion to be reached over a wide range of ambient pressures, delivering more thrust than an equivalent bell-shaped nozzle. The main drawbacks are the design of the cooling system and the manufacturing of the spike, but nowadays these issues appear to be overcome by additive manufacturing. The simulations are performed with dbnsTurbFoam, an OpenFOAM solver designed to simulate supersonic compressible turbulent flows. This work is divided into five chapters. The first is a short introduction. The second gives a brief summary of the theoretical performance of the aerospike. The third introduces the numerical methodology used to simulate a compressible supersonic flow. In the fourth chapter, the solver is verified against an experiment found in the literature, and in the fifth chapter the simulations of the DemoP1 engine are illustrated.
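The expansion-adaptation advantage mentioned above follows from the nozzle thrust equation F = mdot*v_e + (p_e - p_a)*A_e: a fixed bell nozzle has a single exit pressure p_e, while an ideally adapted spike keeps p_e close to the ambient pressure p_a at every altitude. The Python sketch below compares the two under ideal isentropic assumptions; all gas properties and chamber conditions are illustrative, not DemoP1 data.

```python
# Minimal sketch of fixed-nozzle vs ideally adapted thrust over altitude.
# Gas properties, chamber conditions and mass flow are illustrative assumptions.
from math import sqrt

gamma, R = 1.2, 350.0          # exhaust gas properties
p_c, T_c = 5e6, 3200.0         # chamber pressure (Pa) and temperature (K)
mdot = 5.0                     # propellant mass flow, kg/s
p_design = 20e3                # fixed bell nozzle designed for 20 kPa ambient

def v_exit(p_exit):
    """Ideal exit velocity for isentropic expansion from chamber to p_exit."""
    return sqrt(2 * gamma * R * T_c / (gamma - 1)
                * (1 - (p_exit / p_c) ** ((gamma - 1) / gamma)))

# Exit area of the fixed nozzle from continuity at the design condition.
T_e = T_c * (p_design / p_c) ** ((gamma - 1) / gamma)
rho_e = p_design / (R * T_e)
A_e = mdot / (rho_e * v_exit(p_design))

for p_a in (101325.0, 50e3, 20e3, 1e3):    # sea level to near vacuum
    f_bell = mdot * v_exit(p_design) + (p_design - p_a) * A_e
    f_spike = mdot * v_exit(max(p_a, 1e3))  # adapted: exit pressure ~ ambient
    print(f"p_a = {p_a / 1e3:6.1f} kPa   bell: {f_bell / 1e3:5.1f} kN"
          f"   adapted: {f_spike / 1e3:5.1f} kN")
```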