260 results for INFORMATICA


Abstract:

Wireless Sensor Networks (WSNs) have been receiving widespread attention since their low cost made them easily accessible. One of the key elements of WSNs is distributed sensing. When the precise location of a signal of interest is unknown across the monitored region, distributing many sensors randomly or uniformly may yield a better representation of the monitored random process than a traditional sensor deployment. In a typical WSN application the data sensed by the nodes is sent to one (or more) central device, denoted as sink, which collects the information and can either act as a gateway towards other networks (e.g. the Internet), where the data can be stored, or process the data in order to command actuators to perform special tasks. In such a scenario, a dense sensor deployment may create bottlenecks when many nodes compete for access to the channel. Even though channel-access mitigation methods exist, concurrent (parallel) transmissions may still occur. In this study, always within the scope of monitoring applications, the development of two industrial projects with dense sensor deployments (the eDIANA project, funded by the European Commission, and the Centrale Adriatica project, funded by Coop Italy) and the measurement results from several different test-beds highlighted the need for a mathematical analysis of concurrent transmissions. To the best of our knowledge, the literature offers no mathematical analysis of concurrent transmissions in the 2.4 GHz PHY of IEEE 802.15.4. The thesis presents the experiences of the eDIANA and Centrale Adriatica projects and a mathematical analysis of concurrent transmissions, going from O-QPSK chip demodulation up to the packet reception rate for several different types of theoretical demodulators. There is very good agreement between the measurements reported so far in the literature and the mathematical analysis.
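
The final step of such an analysis is the mapping from channel quality to packet reception rate. A minimal sketch is given below, using a commonly cited bit-error approximation for the 2.4 GHz O-QPSK/DSSS PHY and assuming independent bit errors over the frame; it is only an illustrative stand-in for the thesis's demodulator models, and a concurrent transmitter would enter it simply as extra noise in the SNR term.

```python
# Sketch: packet reception rate (PRR) vs. SNR for the IEEE 802.15.4 2.4 GHz PHY,
# using a commonly cited DSSS/O-QPSK bit-error approximation over AWGN.
# This is not the thesis's concurrent-transmission demodulator model.
from math import comb, exp

def ber_802154(snr_linear):
    """Approximate bit error rate for the 2.4 GHz O-QPSK/DSSS PHY (AWGN)."""
    return (8.0 / 15.0) * (1.0 / 16.0) * sum(
        (-1) ** k * comb(16, k) * exp(20.0 * snr_linear * (1.0 / k - 1.0))
        for k in range(2, 17)
    )

def prr(snr_db, frame_bytes=50):
    """Packet reception rate assuming independent bit errors over the frame."""
    snr = 10 ** (snr_db / 10.0)
    return (1.0 - ber_802154(snr)) ** (8 * frame_bytes)

for snr_db in range(-4, 9, 2):
    print(f"SNR = {snr_db:+d} dB  ->  PRR = {prr(snr_db):.3f}")
```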

Abstract:

Advances in biomedical signal acquisition systems for motion analysis have led to low-cost and ubiquitous wearable sensors which can be used to record movement data in different settings. This implies the potential availability of large amounts of quantitative data. It is then crucial to identify and extract the information of clinical relevance from the large amount of available data. This quantitative and objective information can be an important aid to clinical decision making. Data mining is the process of discovering such information in databases through data processing, selection of informative data, and identification of relevant patterns. The databases considered in this thesis store motion data from wearable sensors (specifically accelerometers) and clinical information (clinical data, scores, tests). The main goal of this thesis is to develop data mining tools which can provide quantitative information to the clinician in the field of movement disorders. The thesis focuses on motor impairment in Parkinson's disease (PD). Different databases related to Parkinson's subjects at different stages of the disease were considered; each database is characterized by the data recorded during a specific motor task performed by different groups of subjects. The data mining techniques used in this thesis are feature selection (a technique used to find relevant information and to discard useless or redundant data), classification, clustering, and regression. The aims were to identify subjects at high risk for PD, characterize the differences between early PD subjects and healthy ones, characterize PD subtypes, and automatically assess the severity of symptoms in the home setting.
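
A sketch of what such a pipeline can look like in code is given below: feature selection followed by a classifier, evaluated with cross-validation. The feature dimensionality, the estimators and the synthetic labels are illustrative assumptions, not the actual tools or data of the thesis.

```python
# Sketch: feature selection + classification of accelerometer-derived features,
# in the spirit of the data-mining pipeline described above. Feature counts and
# estimators are illustrative assumptions; the data here is purely synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 20))          # 80 subjects, 20 gait/posture features
y = rng.integers(0, 2, size=80)        # 0 = control, 1 = early PD (synthetic labels)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=5)),   # keep the 5 most informative features
    ("svm", SVC(kernel="rbf", C=1.0)),
])
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```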

Abstract:

Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming, frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique followed again by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences have been manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices have been computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement. Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
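
The similarity measure at the core of the registration step is normalized cross-correlation. A minimal sketch of that basic building block is shown below; the thesis extends it into a multiscale, level-set-coupled scheme, which is not reproduced here.

```python
# Sketch: normalized cross-correlation (NCC) between two equally sized image
# patches, the similarity measure underlying the registration step above.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Example: compare a reference patch with a shifted version of itself.
ref = np.random.default_rng(1).normal(size=(32, 32))
shifted = np.roll(ref, shift=2, axis=1)
print("NCC(ref, ref)     =", ncc(ref, ref))       # ~1.0
print("NCC(ref, shifted) =", ncc(ref, shifted))   # lower
```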

Abstract:

Tracking activities during daily life and assessing movement parameters is essential for complementing the information gathered in confined environments, such as clinical and physical activity laboratories, for the assessment of mobility. Inertial measurement units (IMUs) are used to monitor human movement for prolonged periods of time and without space limitations. The focus of this study was to provide a robust, low-cost and unobtrusive solution for evaluating human motion using a single IMU. The first part of the study focused on monitoring and classification of daily life activities. A simple method that analyses the variations in the signal was developed to distinguish two types of activity intervals: active and inactive. A neural classifier was used to classify active intervals; the angle with respect to gravity was used to classify inactive intervals. The second part of the study focused on the extraction of gait parameters using a single IMU attached to the pelvis. Two complementary methods were proposed for gait parameter estimation. The first was a wavelet-based method developed for the estimation of gait events. The second was developed for estimating step and stride length during level walking, using the estimates of the first; a special integration algorithm was extended to operate on each gait cycle using a specially designed Kalman filter. The developed methods were also applied in various scenarios. The activity monitoring method was used in a PRIN’07 project to assess the mobility levels of individuals living in an urban area, and was applied to volleyball players to analyze their fitness levels by monitoring their daily life activities. The methods proposed in these studies provide a simple, unobtrusive and low-cost solution for monitoring and assessing activities outside of controlled environments.
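
The first step of the activity-monitoring part, splitting the recording into active and inactive windows and labelling the inactive ones by their inclination with respect to gravity, can be sketched as below. Window length and thresholds are assumed values for illustration, not those of the thesis, and the neural classifier of active intervals is omitted.

```python
# Sketch: splitting a tri-axial accelerometer recording into "active" and
# "inactive" windows from the signal variance, and classifying inactive
# windows by the tilt angle with respect to gravity. Thresholds and window
# length are illustrative assumptions, not the values used in the thesis.
import numpy as np

FS = 50                     # sampling rate [Hz] (assumed)
WIN = FS                    # 1-second analysis window
VAR_THRESHOLD = 0.02        # [g^2], assumed activity threshold

def classify(acc: np.ndarray):
    """acc: (N, 3) acceleration in g. Returns one label per window."""
    labels = []
    for start in range(0, len(acc) - WIN + 1, WIN):
        w = acc[start:start + WIN]
        magnitude = np.linalg.norm(w, axis=1)
        if magnitude.var() > VAR_THRESHOLD:
            labels.append("active")              # e.g. walking, to be sub-classified
        else:
            # tilt of the mean gravity vector with respect to the vertical axis
            g = w.mean(axis=0)
            tilt = np.degrees(np.arccos(np.clip(g[2] / np.linalg.norm(g), -1, 1)))
            labels.append("standing" if tilt < 45 else "lying")
    return labels

acc = np.tile([0.0, 0.0, 1.0], (5 * FS, 1))      # 5 s of quiet standing (synthetic)
print(classify(acc))
```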

Abstract:

The thesis examines the forms of commercial promotion found on the Web, which are characterized not so much by a normal evolution as by continuous metamorphoses that redefine the concept of advertising every day. The aim is to analyse the legal framework applicable to Web advertising, given the variety of forms and modalities it can take. The work reviews the characteristics that distinguish on-line commercial advertising from traditional advertising; among them, particular importance is attached to its capacity to establish a direct, unmediated relationship between business and consumer. The thesis then addresses the problem of identifying, given the a-territorial character of the Net, the law applicable to web advertising, and moves on to a survey of the relevant European and Italian rules, without neglecting those issued through self-regulation. Ample space is finally devoted to examining the most recent advertising techniques, highlighting their technical aspects, which are essential for a correct assessment of the legal issues. In particular, the paid placement service offered by the main search engines (keyword advertising) and the tools for tracking users' on-line behaviour, which enable targeted advertising campaigns (on-line behavioural advertising), are examined in depth. The Web, in fact, no longer merely offers the possibility of overcoming spatial, linguistic or temporal barriers and of widening one's visibility, but also of reaching the "interested" user, hence a potential buyer. The most critical aspects of these new advertising practices are assessed, and the legal rules that may apply to them are examined, also in the light of the main national and European case law on the subject, as well as of North American and self-regulatory experiences.

Abstract:

Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has only recently been isolated from graphite. This material shows very attractive physical properties, such as superior carrier mobility, current-carrying capability and thermal conductivity. In consideration of that, graphene has been the object of extensive investigation as a promising candidate for nanometer-scale electronic devices. In this work, graphene nanoribbons (GNRs), narrow strips of graphene in which a band-gap is induced by the quantum confinement of carriers in the transverse direction, have been studied. As experimental GNR-FETs are still far from ideal, mainly due to their large width and edge roughness, an accurate description of the physical phenomena occurring in these devices is required to obtain valuable predictions about the performance of these novel structures. A code has been developed for this purpose and used to investigate the performance of 1 to 15-nm wide GNR-FETs. Due to the importance of an accurate description of quantum effects in the operation of graphene devices, a full-quantum transport model has been adopted: the electron dynamics is described by a tight-binding (TB) Hamiltonian and transport is solved within the formalism of the non-equilibrium Green's functions (NEGF). Both ballistic and dissipative transport are considered; the electron-phonon interaction is included within the self-consistent Born approximation. In consideration of their different energy band-gaps, narrow GNRs are expected to be suitable for logic applications, while wider ones could be promising candidates as channel material for radio-frequency applications.
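
To fix ideas on the TB+NEGF machinery, the sketch below computes the ballistic transmission of a short one-dimensional tight-binding chain with a single on-site barrier, using the standard relation T(E) = Tr[Gamma_L G Gamma_R G†]. It is a minimal toy model with assumed parameters, not the thesis's GNR Hamiltonian, and it omits electron-phonon scattering.

```python
# Sketch: ballistic NEGF transmission through a short 1D tight-binding chain,
# a minimal illustration of the TB+NEGF approach mentioned above.
import numpy as np

t = -2.7          # hopping energy [eV], typical value for carbon pz orbitals
N = 6             # number of sites in the channel
eta = 1e-6j       # small imaginary part for the retarded Green's function

H = np.zeros((N, N), dtype=complex)
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t
H[N // 2, N // 2] = 0.5          # a small on-site barrier in the middle of the chain

def surface_gf(E):
    """Surface Green's function of a semi-infinite 1D chain (analytic form)."""
    x = (E + eta) / (2 * abs(t))
    return (x - 1j * np.sqrt(1 - x ** 2 + 0j)) / abs(t)

def transmission(E):
    gs = surface_gf(E)
    sigma_L = np.zeros((N, N), dtype=complex); sigma_L[0, 0] = t ** 2 * gs
    sigma_R = np.zeros((N, N), dtype=complex); sigma_R[-1, -1] = t ** 2 * gs
    G = np.linalg.inv((E + eta) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))

for E in (-1.0, 0.0, 1.0):
    print(f"T(E = {E:+.1f} eV) = {transmission(E):.3f}")
```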

Abstract:

The research project belongs to the field of judicial informatics, which studies the information systems deployed in judicial offices with the aim of improving the efficiency of the service and providing a lever for reducing the long duration of proceedings, with the ultimate goal of better guaranteeing the rights of citizens and increasing the competitiveness of the country. The specific object of study of the research project is the use of ICT in criminal proceedings. This is a less studied area than civil proceedings, yet the efficiency crisis is no less felt there: the backlog to be cleared as of 30 June 2011 was quantified at 3.4 million criminal proceedings, with an average time to conclusion of four years and nine months. Looking at the criminal trial through the lens of information systems design means seeing an uninterrupted flow of information that includes realities located upstream and downstream of the trial itself: from the transmission of the crime report (notitia criminis) to the execution of the sentence. From this perspective, the importance of proper information management becomes evident: the quantity and accuracy of the information, and the speed with which it can be accessed, are so crucial to criminal proceedings that the efficiency of the information system and the quality of the justice delivered are strongly interrelated. The research project aims to identify the conditions under which efficiency can actually be achieved and, above all, to verify which technological choices can preserve, or even strengthen, the principles and guarantees of the criminal trial. Criminal proceedings in fact involve fundamental rights of the individual, such as personal liberty, dignity and privacy, rights that are protected through a wide range of procedural guarantees such as the presumption of innocence, the right of defence, the right to an adversarial hearing and the rehabilitative purpose of punishment.

Abstract:

The objective of the research is to analyse the impact of the so-called "open" culture in the light of the current state of the World Wide Web. In particular, it considers the genesis of the movement, starting from its roots in hacker culture, and its evolution into the free-software philosophy, with the ultimate aim of identifying the current role of the open-source model in the existing scenario. The introduction of the concept of Open Access completes the research, also considering the recent reaffirmation of knowledge as a commons within the Information Society.

Abstract:

In the last few years, a new generation of Business Intelligence (BI) tools called BI 2.0 has emerged to meet the new and ambitious requirements of business users. BI 2.0 not only introduces brand new topics, but in some cases re-examines past challenges from new perspectives, depending on market changes and needs. In this context, the term pervasive BI has gained increasing interest as an innovative and forward-looking perspective. This thesis investigates three different aspects of pervasive BI: personalization, timeliness, and integration. Personalization refers to the capacity of BI tools to customize the query result according to the user who takes advantage of it, facilitating the fruition of BI information by different types of users (e.g., front-line employees, suppliers, customers, or business partners). In this direction, the thesis proposes a model for On-Line Analytical Processing (OLAP) query personalization that reduces the query result to the most relevant information for the specific user. Timeliness refers to the timely provision of business information for decision-making. In this direction, the thesis defines a new Data Warehouse (DW) methodology, Four-Wheel-Drive (4WD), that combines traditional development approaches with agile methods; the aim is to accelerate project development and reduce software costs, so as to decrease the number of DW project failures and favour the penetration of BI tools even in small and medium-sized companies. Integration refers to the ability of BI tools to allow users to access information wherever it can be found, using the device they prefer. To this end, the thesis proposes the Business Intelligence Network (BIN), a peer-to-peer data warehousing architecture in which a user can formulate an OLAP query on their own system and retrieve relevant information from both the local system and the DWs of the network, preserving their autonomy and independence.
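
As a rough illustration of the personalization idea, the sketch below post-processes a toy cube query so that only the facts most relevant to a hypothetical user profile are returned. The profile format and scoring rule are invented for the example and are not the OLAP personalization model proposed in the thesis.

```python
# Sketch: a toy form of OLAP result personalization. A cube query (a group-by
# in pandas) is post-processed so that only the rows most relevant to the
# current user profile are returned.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "East", "East"],
    "product": ["A", "B", "A", "B", "A", "B"],
    "revenue": [120, 80, 200, 50, 90, 160],
})

# Full OLAP-style aggregation: revenue by region and product.
cube = sales.groupby(["region", "product"], as_index=False)["revenue"].sum()

# Hypothetical user profile: a sales agent mainly interested in the South
# region and product A; weights express how relevant each value is to them.
profile = {"region": {"South": 1.0, "North": 0.3, "East": 0.1},
           "product": {"A": 1.0, "B": 0.4}}

cube["relevance"] = (cube["region"].map(profile["region"])
                     * cube["product"].map(profile["product"]))
personalized = cube.sort_values("relevance", ascending=False).head(3)
print(personalized[["region", "product", "revenue"]])
```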

Abstract:

The research activity characterizing the present thesis was mainly centered on the design, development and validation of methodologies for the estimation of stationary and time-varying connectivity between different regions of the human brain during specific complex cognitive tasks. Such activity involved two main aspects: i) the development of a stable, consistent and reproducible procedure for functional connectivity estimation with a high impact on the neuroscience field, and ii) its application to real data from healthy volunteers eliciting specific cognitive processes (attention and memory). In particular, the methodological issues addressed in the present thesis consisted in finding an approach, applicable in the neuroscience field, able to: i) include all the cerebral sources in the connectivity estimation process; ii) accurately describe the temporal evolution of connectivity networks; iii) assess the significance of connectivity patterns; iv) consistently describe relevant properties of brain networks. The advancements provided in this thesis allowed the derivation of quantifiable descriptors of cognitive processes during a high-resolution EEG experiment involving subjects performing complex cognitive tasks.
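
As a minimal, pairwise stand-in for the connectivity estimators developed in the thesis (which operate on all cortical sources and track their time variation), the sketch below estimates the magnitude-squared coherence between two synthetic EEG channels sharing an alpha rhythm.

```python
# Sketch: a basic functional-connectivity estimate between two EEG channels
# using magnitude-squared coherence on synthetic data. This pairwise spectral
# measure is only a simple stand-in for the multivariate, time-varying
# estimators studied in the thesis.
import numpy as np
from scipy.signal import coherence

fs = 256                      # sampling rate [Hz] (assumed)
t = np.arange(0, 10, 1 / fs)  # 10 s of synthetic data
rng = np.random.default_rng(0)

alpha = np.sin(2 * np.pi * 10 * t)                 # shared 10 Hz (alpha) rhythm
ch1 = alpha + 0.5 * rng.normal(size=t.size)        # "parietal" channel
ch2 = 0.8 * alpha + 0.5 * rng.normal(size=t.size)  # "occipital" channel

f, coh = coherence(ch1, ch2, fs=fs, nperseg=512)
band = (f >= 8) & (f <= 12)
print("mean alpha-band coherence: %.2f" % coh[band].mean())
```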

Abstract:

The diagnosis, grading and classification of tumours have benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series is used for tissue classification. The main issue with these schemes is that they do not have a direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for segmentation and classification of breast lesions.
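
The bi-compartmental fitting step can be sketched as below for the standard Tofts model, C_t(t) = Ktrans * integral of Cp(tau) * exp(-kep (t - tau)) dtau, estimated pixel-wise with non-linear regression. The arterial input function, noise level and parameter values are synthetic assumptions for illustration only.

```python
# Sketch: fit of the standard Tofts tracer-kinetic model to a synthetic
# tissue concentration curve, of the kind performed pixel-by-pixel in
# model-based DCE-MRI analysis.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)                     # minutes
cp = 5.0 * t * np.exp(-t / 0.5)               # toy arterial input function
dt = t[1] - t[0]

def tofts(t, ktrans, kep):
    """Discrete convolution approximation of the Tofts model."""
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: t.size] * dt

true_ktrans, true_kep = 0.25, 0.9             # [1/min], synthetic "tissue"
rng = np.random.default_rng(0)
ct = tofts(t, true_ktrans, true_kep) + 0.01 * rng.normal(size=t.size)

(ktrans_hat, kep_hat), _ = curve_fit(tofts, t, ct, p0=(0.1, 0.5), bounds=(0, 5))
print(f"Ktrans = {ktrans_hat:.3f} /min (true {true_ktrans}), "
      f"kep = {kep_hat:.3f} /min (true {true_kep})")
```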

Abstract:

This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs in order to integrate the available experimental data and obtain in-silico models that can be used to study, formulate new hypotheses and plan experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs of adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; the non-binary coding and the multi-threshold approach are also introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular and sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
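
For the texture part, the classical binary-coded descriptor named above can be sketched as follows: a plain 3x3 Local Binary Pattern operator whose code histogram serves as the texture feature. The non-binary and multi-threshold variants studied in the thesis are not shown.

```python
# Sketch: a plain 3x3 Local Binary Pattern (LBP) operator, the classical
# binary-coded texture descriptor cited above.
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Return the LBP code (0..255) of every interior pixel."""
    # offsets of the 8 neighbours, enumerated clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        mask = (neighbour >= center).astype(np.uint8)
        codes |= mask << bit
    return codes

img = np.random.default_rng(0).integers(0, 256, size=(8, 8))
hist = np.bincount(lbp_3x3(img).ravel(), minlength=256)   # texture histogram
print("non-empty LBP bins:", np.count_nonzero(hist))
```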

Abstract:

This Ph.D. dissertation reports on the work performed at the Wireless Communication Laboratory of the University of Bologna and the National Research Council, as well as, for six months, at the Fraunhofer Institute for Integrated Circuits (IIS) in Nürnberg. The work of this thesis is in the area of wireless communications, especially with regard to cooperative communication aspects in narrow-band and ultra-wideband systems, cooperative link characterization, network geometry, power allocation techniques, and synchronization between nodes. The core of this work is the development of a general framework for the design and analysis of wireless cooperative communication systems, which depends on the propagation environment, the transmission technique, the diversity method, the power allocation for various scenarios, and the relay positions. The optimal power allocation for minimizing the bit error probability at the destination is derived. In addition, a synchronization algorithm for master-slave communications is proposed, with the aim of jointly compensating the clock drift and offset of the wireless nodes composing the network.
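
To illustrate the flavour of the power-allocation problem, the sketch below splits a fixed power budget between the source and the relay of a two-hop amplify-and-forward link by maximizing the standard equivalent end-to-end SNR. The channel gains are arbitrary assumptions, and maximizing this SNR is only a proxy for the bit-error-probability minimization actually carried out in the thesis.

```python
# Sketch: splitting a fixed power budget between source and relay in a
# two-hop amplify-and-forward link, by maximizing the well-known equivalent
# end-to-end SNR  snr_eq = snr1*snr2 / (snr1 + snr2 + 1).
import numpy as np

g_sr, g_rd = 1.0, 4.0        # source->relay and relay->destination gains (assumed)
noise = 1.0                  # noise power at both receivers (assumed)
p_total = 10.0               # total transmit power budget

def snr_eq(alpha):
    """Equivalent AF SNR when a fraction alpha of the power goes to the source."""
    snr1 = alpha * p_total * g_sr / noise
    snr2 = (1 - alpha) * p_total * g_rd / noise
    return snr1 * snr2 / (snr1 + snr2 + 1)

alphas = np.linspace(0.01, 0.99, 197)
best = alphas[np.argmax([snr_eq(a) for a in alphas])]
print(f"best power split: {best:.2f} to the source, "
      f"snr_eq = {snr_eq(best):.2f} (equal split: {snr_eq(0.5):.2f})")
```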

Abstract:

Complex network analysis has turned out to be a very promising field of research, as testified by the many research projects and works that span different fields. Such analyses have usually focused on characterizing a single aspect of the system, and a study that considers the many informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that is able to inspect networks along the two most important dimensions, space and time. To achieve this goal, we studied them separately and investigated how the variation of the constituting parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We proposed a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of networks. We then kept the space axis fixed and isolated the dynamics behind the network evolution process. We identified new mechanisms that trigger social network utilization and spread the adoption of novel communities. We formalized this enhanced social network evolution by adopting special nodes (called sirens) that, thanks to their ability to attract new links, are able to construct efficient connection patterns. We simulated the dynamics of the system by considering three well-known growth models. Applying this framework to real and synthetic networks, we showed that the sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. In order to provide a concrete context for our findings, we formalized the cost of setting up such an enhancement and provided the best combinations of the system's parameters, such as the number of sirens, their time span of utilization and their attractiveness.
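
A toy version of siren-enhanced growth can be sketched as below: a preferential-attachment process in which a couple of nodes receive an artificially boosted attractiveness. The boost factor, the sizes and the growth rule are illustrative assumptions and are not the three growth models actually used in the study.

```python
# Sketch: a toy preferential-attachment growth process with a few "siren"
# nodes whose attractiveness is artificially boosted, mimicking the idea of
# special nodes that accelerate network maturation.
import random
import networkx as nx

def grow(n_nodes=200, m=2, sirens=(), boost=10.0, seed=0):
    random.seed(seed)
    g = nx.complete_graph(m + 1)
    for new in range(m + 1, n_nodes):
        # attachment weight = degree, multiplied by `boost` for siren nodes
        weights = [g.degree(v) * (boost if v in sirens else 1.0) for v in g]
        targets = random.choices(list(g.nodes), weights=weights, k=m)
        g.add_node(new)
        g.add_edges_from((new, t) for t in set(targets))
    return g

plain = grow()
with_sirens = grow(sirens={0, 1})
print("avg shortest path, plain      :", round(nx.average_shortest_path_length(plain), 2))
print("avg shortest path, with sirens:", round(nx.average_shortest_path_length(with_sirens), 2))
```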

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes in order to prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data-set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
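
The single-node CS pipeline at the heart of this comparison can be sketched as below: random Gaussian measurements of a synthetic sparse signal, recovered with Orthogonal Matching Pursuit. Sizes, sparsity level and the choice of OMP (rather than the distributed or Kronecker variants studied later) are assumptions for illustration.

```python
# Sketch: the basic compressive-sensing pipeline on a toy sparse signal:
# random Gaussian measurements followed by sparse recovery with OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

phi = rng.normal(size=(m, n)) / np.sqrt(m)                # sensing matrix
y = phi @ x                                                # compressed samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi, y)
x_hat = omp.coef_
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"compression ratio: {n/m:.1f}x, relative reconstruction error: {err:.3f}")
```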