Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and the analysis and modelling of primate neural activity. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
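The regularized deconvolution step can be illustrated with a minimal frequency-domain sketch. The code below is not the thesis's stochastic RF model; it assumes a known transducer pulse and applies a standard Tikhonov/Wiener-type inverse filter, showing how the regularization weight controls the trade-off between resolution enhancement and noise amplification.

```python
import numpy as np

def tikhonov_deconvolution(rf_signal, pulse, reg=1e-2):
    """Frequency-domain regularized (Tikhonov/Wiener-like) deconvolution.

    rf_signal : measured RF trace (1-D array)
    pulse     : estimated transducer pulse (1-D array)
    reg       : regularization weight controlling noise amplification
    """
    n = len(rf_signal)
    H = np.fft.rfft(pulse, n)          # transfer function of the transducer pulse
    Y = np.fft.rfft(rf_signal, n)      # spectrum of the measured signal
    # Regularized inverse filter: H* / (|H|^2 + reg)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)
    return np.fft.irfft(X, n)

# Toy usage: a sparse reflectivity convolved with a Gaussian-modulated pulse.
t = np.arange(256)
pulse = np.exp(-((t - 16) / 4.0) ** 2) * np.cos(0.8 * (t - 16))
reflectivity = np.zeros(256)
reflectivity[[60, 100, 180]] = [1.0, -0.7, 0.5]
rf = np.convolve(reflectivity, pulse, mode="same") + 0.01 * np.random.randn(256)
estimate = tikhonov_deconvolution(rf, pulse, reg=1e-2)
```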
Abstract:
The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on second-law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows the determination of accurate values of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. In addition to the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a thorough process for the plant design of a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to match 70% of domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter describes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate the energy requirements of the building complex; 2) modelling of the ground-coupled heat pumps by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchanger by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy loss during the operation of the plants.
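As a point of reference for the thermal response test evaluation, the sketch below applies the classical infinite line-source approximation to synthetic TRT data to obtain a first estimate of the effective ground thermal conductivity; the finite-element model described above refines this estimate and also recovers the grout properties. All numbers and names here are illustrative assumptions.

```python
import numpy as np

def ground_conductivity_from_trt(time_s, mean_fluid_temp, heat_rate_w,
                                 borehole_length_m, t_min_s=10 * 3600):
    """Effective ground thermal conductivity from TRT data, using the classical
    infinite line-source approximation for large times:

        T_f(t) ~ (Q / (4*pi*lambda*H)) * ln(t) + const

    hence lambda = Q / (4*pi*H*slope), where slope is dT_f / d(ln t).
    """
    mask = time_s >= t_min_s                     # discard early-time (borehole-dominated) data
    slope, _ = np.polyfit(np.log(time_s[mask]), mean_fluid_temp[mask], 1)
    return heat_rate_w / (4.0 * np.pi * borehole_length_m * slope)

# Synthetic example: Q = 5 kW, H = 100 m, true lambda = 2.0 W/(m K).
t = np.linspace(3600, 72 * 3600, 500)
T_f = 5000.0 / (4 * np.pi * 2.0 * 100.0) * np.log(t) + 12.0
print(ground_conductivity_from_trt(t, T_f, 5000.0, 100.0))   # ~2.0 W/(m K)
```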
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
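The relative-photometry step for short-term light curves can be sketched as a simple ensemble differential-photometry calculation on the aperture photometry catalogues; the snippet below uses hypothetical flux arrays and omits the quality-control stages of the actual pipeline.

```python
import numpy as np

def differential_light_curve(target_flux, comparison_fluxes):
    """Relative photometry: build a short-term light curve of the target by
    dividing its aperture flux by the summed flux of comparison stars.

    target_flux       : (n_frames,) aperture flux of the candidate SPSS
    comparison_fluxes : (n_frames, n_comp) aperture fluxes of comparison stars
    Returns differential magnitudes normalised to zero median.
    """
    reference = comparison_fluxes.sum(axis=1)        # ensemble comparison signal
    dmag = -2.5 * np.log10(target_flux / reference)
    return dmag - np.median(dmag)

# Hypothetical example: 50 frames, 3 comparison stars, a constant target.
rng = np.random.default_rng(0)
comp = rng.normal(1e5, 300, size=(50, 3))
target = rng.normal(5e4, 150, size=50)
print(differential_light_curve(target, comp).std())  # scatter of the light curve
```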
Abstract:
The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behaviour of the system to the condition of a particular person and their social context; from this need an interdisciplinary field called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's needs undermines the rigid linearity of the classical model, which has been overtaken by the "berry picking" model; the latter explains how search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed, because it is now expressed through the generation of the query and its own context. In fact, the method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing any IR system led to developing the method as a middleware of interaction between the user and the IR system. The system therefore has just two possible actions: rewriting the query and reordering the results. These actions are equivalent to those described in PS, which generally exploits information derived from the analysis of user behaviour, whereas the proposed approach exploits knowledge provided by the user. The thesis goes further and proposes a novel assessment procedure, according to the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
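A toy sketch of the two middleware actions (query rewriting from a local knowledge base and result reordering) is shown below; the data structures and the scoring are illustrative assumptions, not the system described in the thesis.

```python
def rewrite_query(query, knowledge_base):
    """Expand a user query with context terms drawn from a local knowledge base.

    knowledge_base maps a concept to related terms supplied by the user,
    e.g. {"jaguar": ["car", "dealer"]} when the user's context is automotive.
    """
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(knowledge_base.get(term, []))
    return " ".join(dict.fromkeys(expanded))          # de-duplicate, keep order

def rerank(results, context_terms):
    """Reorder results returned by the underlying IR system by counting how
    many context terms appear in each result snippet."""
    def score(result):
        snippet = result["snippet"].lower()
        return sum(snippet.count(t) for t in context_terms)
    return sorted(results, key=score, reverse=True)

kb = {"jaguar": ["car", "dealer", "engine"]}
print(rewrite_query("jaguar price", kb))              # 'jaguar price car dealer engine'
results = [{"snippet": "Jaguar speed in the wild"},
           {"snippet": "Jaguar car dealer prices"}]
print(rerank(results, ["car", "dealer", "engine"])[0]["snippet"])
```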
Abstract:
The continuous advancements and enhancements of wireless systems are enabling new compelling scenarios where mobile services can adapt to the current execution context, represented by the computational resources available at the local device, the current physical location, the people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and that introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad-hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of this type of middleware solution. We discuss the main functions needed by context data distribution in large mobile systems, and we advocate the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI, used to reconfigure the main middleware components at runtime. We present the design and implementation of our proposals, both in simulation-based and in real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to highlight the wide technical validity of our analysis and solutions under different network deployments and quality constraints.
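A quality-based contract of the kind advocated above can be pictured as a small data structure plus a violation check that triggers runtime reconfiguration; the field names below are assumptions for illustration, not the middleware's actual API.

```python
from dataclasses import dataclass

@dataclass
class QualityContract:
    """Quality-based contract between a context consumer and the CDDI
    (illustrative fields only; names are assumptions)."""
    max_staleness_s: float      # how old delivered context data may be
    min_delivery_ratio: float   # fraction of updates that must reach the consumer
    max_latency_s: float        # end-to-end delivery latency bound

def contract_violated(contract, staleness_s, delivery_ratio, latency_s):
    """Return True when the observed quality falls outside the contract,
    signalling the CDDI to reconfigure its distribution components at runtime
    (e.g., switch from ad-hoc to infrastructure-based dissemination)."""
    return (staleness_s > contract.max_staleness_s
            or delivery_ratio < contract.min_delivery_ratio
            or latency_s > contract.max_latency_s)

c = QualityContract(max_staleness_s=5.0, min_delivery_ratio=0.9, max_latency_s=1.0)
print(contract_violated(c, staleness_s=7.2, delivery_ratio=0.95, latency_s=0.4))  # True
```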
Abstract:
Neoplastic overgrowth depends on the cooperation of several mutations ultimately leading to major rearrangements in cellular behaviour. The molecular crosstalk occurring between precancerous and normal cells strongly influences the early steps of the tumourigenic process as well as later stages of the disease. Precancerous cells are often removed by cell death from normal tissues, but the mechanisms responsible for such fundamental safeguard processes remain in part elusive. To gain insight into these phenomena, I took advantage of the clonal analysis methods available in Drosophila to study the phenotypes due to loss of function of the neoplastic tumour suppressor lethal giant larvae (lgl). I found that lgl mutant cells growing in wild-type imaginal wing discs are subject to the phenomenon of cell competition and are eliminated by JNK-dependent cell death because they express very low levels of the dMyc oncoprotein compared to those in the surrounding tissue. Indeed, in non-competitive backgrounds lgl mutant clones are able to overgrow and upregulate dMyc, overwhelming the neighbouring tissue and forming tumourous masses that display several cancer hallmarks. These phenotypes are completely abolished by reducing dMyc abundance within mutant cells, while increasing it in lgl clones growing in a competitive context re-establishes their tumourigenic potential. Similarly, the neoplastic growth observed upon the oncogenic cooperation between lgl mutation and activated Ras/Raf/MAPK signalling was found to be characterised by, and dependent on, the ability of cancerous cells to upregulate dMyc with respect to the adjacent normal tissue, through both transcriptional and post-transcriptional mechanisms, thereby confirming its key role in lgl-induced tumourigenesis. These results provide the first evidence that the dMyc oncoprotein is required in lgl mutant tissue to promote invasive overgrowth in developing and adult epithelial tissues, and that dMyc abundance inside versus outside lgl mutant clones plays a key role in driving neoplastic overgrowth.
Abstract:
The aim of this thesis is to apply multilevel regression models in the context of household surveys. The hierarchical structure in this type of data is characterized by many small groups. In recent years, comparative and multilevel analyses in the field of perceived health have grown considerably. The purpose of this thesis is to develop a multilevel analysis with three levels of hierarchy for the Physical Component Summary outcome in order to: evaluate the magnitude of the within- and between-group variance at each level (individual, household and municipality); explore which covariates affect perceived physical health at each level; compare the model-based and design-based approaches in order to establish the informativeness of the sampling design; and estimate a quantile regression for hierarchical data. The target population is Italian residents aged 18 years and older. Our study shows a high degree of homogeneity among level-1 units belonging to the same group, with an intraclass correlation of 27% in a two-level null model. Almost all of the variance is explained by level-1 covariates. In fact, in our model the explanatory variables with the greatest impact on the outcome are disability, inability to work, age and chronic diseases (18 pathologies). An additional analysis was performed using a novel estimation procedure, the Linear Quantile Mixed Model, here named Multilevel Linear Quantile Regression. This gives us the possibility to describe more generally the conditional distribution of the response through the estimation of its quantiles, while accounting for the dependence among the observations. This represents a great advantage of our models with respect to classic multilevel regression. The median regression with random effects proves to be more efficient than the mean regression in representing the central tendency of the outcome. A more detailed analysis of the conditional distribution of the response at other quantiles highlighted a differential effect of some covariates along the distribution.
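The intraclass correlation reported above comes from a variance-components (null) model. A minimal sketch of that computation on synthetic two-level data (the thesis fits a three-level model on real survey data) is shown below, using statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic two-level data (individuals nested in households); values are
# illustrative, not the survey data analysed in the thesis.
rng = np.random.default_rng(1)
households = np.repeat(np.arange(200), 4)                 # 200 households, 4 members each
u = rng.normal(0, 3.0, 200)[households]                   # household random effect
pcs = 50 + u + rng.normal(0, 5.0, households.size)        # Physical Component Summary outcome
data = pd.DataFrame({"pcs": pcs, "household": households})

# Null (intercept-only) model with a household random intercept.
fit = smf.mixedlm("pcs ~ 1", data, groups=data["household"]).fit()
var_between = fit.cov_re.iloc[0, 0]                       # level-2 (household) variance
var_within = fit.scale                                    # level-1 (individual) variance
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.2f}")                                 # share of variance between households
```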
Abstract:
The importance of banks and financial markets relies on the fact that they promote economic efficiency by allocating savings efficiently to profitable investment opportunities. An efficient banking system is a key determinant of financial stability. The theory of market failure forms the basis for understanding financial regulation. Following the detrimental economic and financial consequences in the aftermath of the crisis, academics and policymakers started to focus their attention on the construction of an appropriate regulatory and supervisory framework for the banking sector. This dissertation aims at understanding the impact of regulation and supervision on banks’ performance, focusing on two emerging market economies, Turkey and Russia, and at examining the way in which regulations matter for financial stability and banking performance from a law & economics perspective. A review of the theory of banking regulation, particularly as applied to emerging economies, shows that the efficiency of certain solutions regarding banking regulation is open to debate. Therefore, in the context of emerging countries, whether a certain approach is efficient or not will be presented as an empirical question to which this dissertation will try to find an answer.
Abstract:
Complex network analysis has turned out to be a very promising field of research, as testified by the many research projects and works that span different fields. Those analyses have usually focused on characterizing a single aspect of the system, and a study that considers the many informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that is able to inspect networks in the two most important dimensions, space and time. To achieve this goal, we studied each dimension separately and investigated how the variation of the constituting parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We proposed a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of networks. We then kept the space axis fixed and isolated the dynamics behind the network evolution process. We detected new instincts that trigger social network utilization and spread the adoption of novel communities. We formalized this enhanced social network evolution by adopting special nodes (called sirens) that, thanks to their ability to attract new links, are able to construct efficient connection patterns. We simulated the dynamics of the system by considering three well-known growth models. Applying this framework to real and synthetic networks, we showed that the sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. In order to provide a concrete context for our findings, we formalized the cost of setting up such an enhancement and provided the best combinations of the system's parameters, such as the number of sirens, their time span of utilization and their attractiveness.
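The siren mechanism can be sketched as a preferential-attachment growth in which one node receives a temporary boost to its attractiveness; the snippet below is a toy version with assumed parameters, not the three growth models analysed in the thesis.

```python
import random
import networkx as nx

def grow_with_siren(n_nodes, m_edges, siren_boost=20, siren_active_until=200, seed=0):
    """Preferential-attachment growth with one 'siren' node whose attractiveness
    is artificially boosted for a limited time span (toy version, illustrative
    parameters)."""
    random.seed(seed)
    g = nx.complete_graph(m_edges + 1)
    siren = 0                                          # designate node 0 as the siren
    for new_node in range(m_edges + 1, n_nodes):
        weights = {v: g.degree(v) for v in g.nodes}
        if new_node < siren_active_until:              # siren attracts extra links while active
            weights[siren] += siren_boost
        targets = random.choices(list(weights), weights=list(weights.values()), k=m_edges)
        g.add_edges_from((new_node, t) for t in set(targets))
    return g

g = grow_with_siren(1000, 3)
print(nx.average_clustering(g), g.degree(0))           # the siren ends up highly connected
```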
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still some difficulties in the accurate quantification of perfusion parameters due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were carried out to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; the slope method, however, is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Results compared with manual selection made it possible to identify the best automatic algorithm to compute BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI approach and the map approach provide correlated BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
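The maximum slope method mentioned above admits a compact illustration: BFa is the peak slope of the tissue TAC divided by the peak arterial enhancement. The sketch below uses synthetic curves, not patient data, and ignores corrections (e.g., for patient weight or cardiac output) discussed in the thesis.

```python
import numpy as np

def bfa_maximum_slope(time_s, tissue_tac, arterial_tac):
    """Arterial Blood Flow (BFa) by the maximum slope method:

        BFa = max( d(tissue TAC)/dt ) / peak arterial enhancement

    time_s, tissue_tac and arterial_tac are 1-D arrays (s and HU).
    The result is returned in mL/min per 100 mL of tissue.
    """
    slope = np.gradient(tissue_tac, time_s)           # HU / s
    bfa_per_s = slope.max() / arterial_tac.max()      # (mL blood / mL tissue) per second
    return bfa_per_s * 60.0 * 100.0                   # mL/min/100 mL

# Illustrative synthetic curves (not patient data).
t = np.arange(0, 60, 1.0)
aorta = 300.0 * np.exp(-((t - 20) / 6.0) ** 2)
tissue = 40.0 * np.exp(-((t - 28) / 10.0) ** 2)
print(bfa_maximum_slope(t, tissue, aorta))
```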
Abstract:
The farmers' communities' approach to the Slow Food vision, their perception of the role of Slow Food in supporting their activity, and their appreciation of and expectations from participating in the Mother Earth event were studied. The Unified Theory of Acceptance and Use of Technology (UTAUT) model was adopted in an agro-food sector context. A survey was conducted and 120 questionnaires were collected from farmers attending Mother Earth in Turin in 2010. The descriptive statistical analysis showed that both Slow Food membership and participation in the Mother Earth meeting were much appreciated for the support provided to the farmers' business and the contribution to a more sustainable and fair development. A positive social, environmental and psychological impact on farmers also emerged. Results also showed an interesting perspective on the possible universality of the Slow Food and Mother Earth values. Farmers declared that Slow Food supports them by preserving biodiversity, orienting them towards the use of local resources and reducing chemical inputs. Many farmers mentioned language/culture and administrative/bureaucratic issues as obstacles to being a member of the movement and to participating in the event. Participation in Mother Earth gives an opportunity to exchange information with other farmers' communities and to attend seminars and debates helpful for their business development. The absolute majority of positive answers associated with the farmers' willingness to relate to Slow Food and to participate in the next Mother Earth editions negatively influenced the UTAUT model results. A factor analysis showed that the variables associated with the UTAUT model constructs Performance Expectancy and Effort Expectancy were consistent, able to explain the construct variability, and reliably measured. Their inclusion in a simpler Technology Acceptance Model could be considered in future research.
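A factor analysis of the kind described above can be sketched on synthetic Likert-type items assumed to load on the Performance Expectancy and Effort Expectancy constructs; the snippet below is illustrative only and does not reproduce the survey data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic responses for 6 items assumed to measure two UTAUT constructs
# (Performance Expectancy: items 0-2, Effort Expectancy: items 3-5).
rng = np.random.default_rng(2)
pe = rng.normal(0, 1, (120, 1))
ee = rng.normal(0, 1, (120, 1))
items = np.hstack([pe + rng.normal(0, 0.4, (120, 3)),
                   ee + rng.normal(0, 0.4, (120, 3))])

# Two-factor model with varimax rotation; loadings show which items group together.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))   # items 0-2 load on one factor, items 3-5 on the other
```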
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersive spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
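For reference, SAFE formulations of this kind typically reduce the wave propagation problem to a polynomial eigenvalue problem in the axial wavenumber k at each frequency ω; a standard form (general notation, not necessarily the exact matrices of the prestressed or coupled formulations developed here) is:

```latex
\left[\,\mathbf{K}_1 + \mathrm{i}\,k\,\mathbf{K}_2 + k^{2}\,\mathbf{K}_3 \;-\; \omega^{2}\,\mathbf{M}\,\right]\mathbf{U} = \mathbf{0},
```

where K1, K2 and K3 are stiffness matrices arising from the finite-element discretization of the cross-section, M is the mass matrix and U collects the nodal displacements; for damped waveguides the wavenumber k is complex, its real part giving the phase velocity and its imaginary part the attenuation.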
Abstract:
This thesis proposes an integrated holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motoneuron and motor unit level and from there to the more general biomechanical alterations. Chapter 1 reports a list of the various definitions of fatigue spanning several contexts. In Chapter 2, the electrophysiological changes in terms of motor unit behaviour and descending neural drive to the muscle are studied extensively, together with the biomechanical adaptations they induce. Chapter 3 reports a study based on the observation of temporal features extracted from sEMG signals, which points to the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study on sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies were applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis on motor unit spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered as a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel integrative way is it possible to derive a clear model and make correct assessments.
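The classical temporal/spectral indicators that motivate the more robust bi-dimensional parameter can be sketched as follows: per-window RMS amplitude and mean power frequency of the sEMG, whose opposite trends (amplitude up, spectrum compressed) are the usual myoelectric manifestations of fatigue. The window length, sampling rate and signal below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def fatigue_indicators(emg, fs=2048, win_s=1.0):
    """Classical sEMG fatigue tracking: per-window RMS (amplitude) and mean
    frequency (spectral compression) during a sustained contraction."""
    win = int(win_s * fs)
    rms, mnf = [], []
    for start in range(0, len(emg) - win + 1, win):
        seg = emg[start:start + win]
        rms.append(np.sqrt(np.mean(seg ** 2)))
        f, pxx = welch(seg, fs=fs, nperseg=min(1024, win))
        mnf.append(np.sum(f * pxx) / np.sum(pxx))   # mean power frequency
    return np.array(rms), np.array(mnf)

# Synthetic example (toy signal, not real sEMG): amplitude slowly increases.
fs = 2048
t = np.arange(0, 30, 1 / fs)
emg = np.random.randn(t.size) * (1 + 0.02 * t)
rms, mnf = fatigue_indicators(emg, fs)
```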
Abstract:
The main goal of this thesis is to facilitate the process of industrial automated systems development by applying formal methods to ensure the reliability of such systems. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verifying it. This approach tackles the state-explosion problem with modelling patterns and new algorithms aimed at verifying the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
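One basic building block of diagnosability analysis in the Discrete Event Systems framework is the observer of a partially observed automaton; the minimal sketch below (which ignores fault labelling and the distributed aspects addressed in the thesis) shows how the observable behaviour is determinized.

```python
def unobservable_reach(states, transitions, observable):
    """Closure of a state set under unobservable events."""
    reach, frontier = set(states), list(states)
    while frontier:
        s = frontier.pop()
        for (src, ev), dst in transitions.items():
            if src == s and ev not in observable and dst not in reach:
                reach.add(dst)
                frontier.append(dst)
    return frozenset(reach)

def build_observer(initial, transitions, observable):
    """Observer automaton of a partially observed DES, a basic ingredient of
    diagnoser-based diagnosability checks (fault labels omitted here)."""
    start = unobservable_reach({initial}, transitions, observable)
    observer, frontier = {start: {}}, [start]
    while frontier:
        current = frontier.pop()
        for ev in observable:
            nxt = {transitions[(s, ev)] for s in current if (s, ev) in transitions}
            if not nxt:
                continue
            nxt = unobservable_reach(nxt, transitions, observable)
            observer[current][ev] = nxt
            if nxt not in observer:
                observer[nxt] = {}
                frontier.append(nxt)
    return observer

# Tiny example: 'f' is an unobservable fault event, 'a' and 'b' are observable.
trans = {(0, "a"): 1, (0, "f"): 2, (2, "a"): 3, (1, "b"): 1, (3, "b"): 3}
print(build_observer(0, trans, observable={"a", "b"}))
```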
Abstract:
Geochemical mapping is a valuable tool for the management of territory that can be used not only in the identification of mineral resources and in geological, agricultural and forestry studies, but also in the monitoring of natural resources, providing solutions to environmental and economic problems. Stream sediments are widely used in the sampling campaigns carried out by governments and research groups worldwide because of their broad representativeness of rocks and soils, their ease of sampling, and the possibility of conducting very detailed sampling. In this context, the environmental role of stream sediments provides a good basis for the implementation of environmental management measures: the composition of river sediments is an important factor in understanding the complex dynamics that develop within catchment basins, and they therefore represent a critical environmental compartment, since they can persistently incorporate pollutants after a process of contamination and release them into the biosphere if the environmental conditions change. It is essential to determine whether the concentrations of certain elements, in particular heavy metals, are the result of natural erosion of rocks containing high concentrations of specific elements or are generated as residues of human activities in a given study area. This PhD thesis aims to extract the widest spectrum of information from an extensive database on the stream sediments of the Romagna rivers. The study involved low- and high-order streams in the mountain and hilly areas, but also the sediments of the floodplain area, where intensive agriculture is practised. The geochemical signals recorded by the stream sediments are interpreted in order to reconstruct the natural variability related to bedrock and soil contributions, the effects of river dynamics, and the anomalous sites, and, through the calculation of background values, to evaluate their level of degradation and predict the environmental risk.
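The background-value calculation can be sketched with one of the common robust criteria, median ± 2·MAD; the element, concentrations and threshold below are illustrative assumptions, since the thesis may employ a different procedure.

```python
import numpy as np

def geochemical_background(values):
    """Background range for an element in stream sediments using the robust
    median +/- 2*MAD criterion (one of several common approaches).  Values
    above the upper limit are flagged as potential anomalies."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) * 1.4826    # scaled MAD, robust sigma estimate
    return med - 2 * mad, med + 2 * mad

# Illustrative nickel concentrations (mg/kg) with two anomalous samples.
ni = np.array([32, 35, 30, 28, 33, 31, 36, 29, 120, 95])
low, high = geochemical_background(ni)
print(high, ni[ni > high])                            # upper background limit and anomalies
```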