934 results for Computer-aided diagnosis
Abstract:
The ecological success of organisms depends mainly on their phenotype. An important component of the phenotype is functional morphology, because it influences the performance of a given organism in a given environment and therefore reflects its ecology. Disparities in functional morphology or in development between species can thus lead to ecological differences. This project evaluates the role of mechanisms of morphological variation in producing ecological differences between species within the hybrid fishes of the Chrosomus eos-neogaeus complex. Using X-ray micro-computed tomography and 3D geometric morphometrics, the shape of the jaw elements is described in order to compare morphological variation and developmental differences among members of the C. eos-neogaeus complex. Hybrids show as much phenotypic variation as the parental species and display novel, so-called transgressive, phenotypes. Hybrids also differ markedly from the parental species in their allometry and in their phenotypic integration. Finally, they appear to be plastic and able to modify their phenotype to occupy several environments. Taken together, these results suggest that changes in hybrid development lead to phenotypic and ecological differentiation from the parental species.
Abstract:
Conventional rockmass characterization and analysis methods for geotechnical assessment in mining, civil tunnelling, and other excavations consider only the intact rock properties and the discrete fractures that are present and form blocks within rockmasses. Field logging and classification protocols are based on historically useful but highly simplified design techniques, including direct empirical design and empirical strength assessment for simplified ground reaction and support analysis. As modern underground excavations go deeper and enter higher-stress environments with complex excavation geometries and associated stress paths, healed structures within initially intact rock blocks, such as sedimentary nodule boundaries and hydrothermal veins, veinlets, and stockwork (termed intrablock structure), have an increasing influence on rockmass behaviour and should be included in modern geotechnical design. Because conventional design relies on geotechnical classification methods that predate computer-aided analysis, these complexities are ignored. Given the comparatively complex, sophisticated, and powerful numerical simulation and analysis techniques now practically available to the geotechnical engineer, this research is driven by the need for enhanced characterization of intrablock structure for application to numerical methods. Intrablock structure governs stress-driven behaviour at depth and gravity-driven disintegration for large shallow spans, and controls ultimate fragmentation. This research addresses the characterization of intrablock structure and the understanding of its behaviour at laboratory testing and excavation scales, and presents new methodologies and tools to incorporate intrablock structure into geotechnical design practice. A new field characterization tool, the Composite Geological Strength Index, is used for outcrop or excavation face evaluation and provides direct input to continuum numerical models with implicit rockmass structure. A brittle overbreak estimation tool for complex rockmasses is developed using field observations. New methods to evaluate the geometrical and mechanical properties of intrablock structure are developed. Finally, laboratory direct shear testing protocols for interblock structure are critically evaluated and extended to intrablock structure for the purpose of determining input parameters for numerical models with explicit structure.
Abstract:
In 2017, chronic respiratory diseases accounted for almost four million deaths worldwide. Unfortunately, current treatments are not definitive for such diseases. This unmet medical need forces the scientific community to increase efforts in the identification of new therapeutic solutions. PI3K delta plays a key role in the mechanisms that promote the chronic airway inflammation underlying asthma and COPD. The first part of this project was dedicated to the identification of novel PI3K delta inhibitors. A first SAR expansion of a hit, previously identified in an HTS campaign, was carried out. A library of 43 analogues was synthesised taking advantage of an efficient synthetic approach. This allowed the identification of an improved hit with nanomolar enzymatic potency and moderate selectivity for PI3K delta over the other PI3K isoforms. However, this compound exhibited low potency in cell-based assays, which was related to suboptimal physicochemical and ADME properties. Analysis of the X-ray crystal structure of this compound in human PI3K delta guided a second, tailored SAR expansion that led to improved cellular potency and solubility. The second part of the thesis focused on the rational design and synthesis of new macrocyclic inhibitors of the Rho-associated protein kinases (ROCKs). Inhibition of these kinases has been associated with vasodilating effects; therefore, ROCKs could represent attractive targets for the treatment of pulmonary arterial hypertension (PAH). Known ROCK inhibitors suffer from low selectivity across the kinome, and the design of macrocyclic inhibitors was considered a promising strategy to obtain improved selectivity. Known inhibitors from the literature were evaluated for opportunities for macrocyclization using a knowledge-based approach supported by Computer-Aided Drug Design (CADD). The identification of a macrocyclic ROCK inhibitor with enzymatic activity in the low micromolar range against ROCK II represented a promising result that validated this innovative approach to the design of new ROCK inhibitors.
Abstract:
Alongside the more traditional methods used to date, additive technologies have undergone considerable evolution in component production in recent years. They allow a wide range of applications using different materials depending on the sector of application. In particular, FDM (Fused Deposition Modeling) 3D printing is one of the most widespread and economically competitive additive manufacturing processes. Current finite element analysis (FEM) methods and CAE (Computer-Aided Engineering) technologies are not able to study 3D models of printed components, since the final result depends on process and environmental parameters. For this reason, an in-depth study of the meso-structure of the printed component is needed to extend FEM analysis to this class of components. The aim of the proposed work is to create a homogeneous element that accurately represents the behaviour of a component produced by FDM 3D printing; this is achieved through the definition and analysis of a representative volume element (RVE). Through the homogenization technique, the defined volume captures the main mechanical characteristics of the printed structure, enabling new analyses and optimizations. This approach makes it possible to run FEM analyses on components to be printed and to predict their mechanical properties from given printing parameters, thus allowing FDM technology to become increasingly one of the main low-cost industrial processes.
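A minimal sketch of the simplest form of the homogenization idea described above: a volume-average (rule-of-mixtures) estimate of the effective elastic modulus of the printed meso-structure. The filament modulus and void fractions below are illustrative assumptions, not values from the thesis, and direction-dependent behaviour would require the full RVE model in FEM.

```python
# Rule-of-mixtures (Voigt-style) estimate of the effective modulus of an FDM
# meso-structure: stiffness scales with the solid fraction of deposited material.
# Numerical values are assumptions for illustration only.

def effective_modulus(e_filament_mpa: float, void_fraction: float) -> float:
    """Upper-bound estimate of the homogenized elastic modulus."""
    return e_filament_mpa * (1.0 - void_fraction)

if __name__ == "__main__":
    E_PLA = 3300.0  # MPa, typical modulus of a PLA filament (assumed)
    for phi in (0.05, 0.10, 0.20):  # candidate void fractions set by print parameters
        print(f"void fraction {phi:.2f} -> E_eff ~ {effective_modulus(E_PLA, phi):.0f} MPa")
```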
Abstract:
Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift, exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19-related drugs. The main highlight emerging from this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be applied to other diseases. To conclude, network-based tools are proving suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating multi-domain information.
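A minimal sketch of the random walk with restart (RWR) idea used for DTI link prediction, under stated assumptions: the toy adjacency matrix and restart probability below are illustrative, and the construction of the heterogeneous drug-target network and of COVIDrugNet is not reproduced here.

```python
# Random walk with restart on a small undirected network: nodes close to the
# seed accumulate probability mass; unlinked nodes with high scores are
# candidate missing links (e.g., putative drug-target interactions).
import numpy as np

def random_walk_with_restart(adjacency: np.ndarray, seed: int,
                             restart_prob: float = 0.3,
                             tol: float = 1e-8) -> np.ndarray:
    """Return steady-state visiting probabilities starting from a seed node."""
    col_sums = adjacency.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0              # guard against isolated nodes
    transition = adjacency / col_sums          # column-stochastic transition matrix

    p0 = np.zeros(adjacency.shape[0])
    p0[seed] = 1.0                             # restart vector concentrated on the seed
    p = p0.copy()
    while True:
        p_next = (1 - restart_prob) * transition @ p + restart_prob * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 4-node network (assumed): rank non-neighbours of node 0 by their score.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(random_walk_with_restart(A, seed=0))
```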
Abstract:
Nowadays, product development in all its phases plays a fundamental role in the industrial chain. The need for a company to compete at a high level and to respond quickly to market demands, engineering the product rapidly and with a high level of quality, has pushed industry towards new, more advanced methods and processes. In recent years the field has been moving away from 2D-based design and production and towards Model Based Definition. With this approach, increasingly complex systems become easier to deal with and, above all, cheaper to obtain. Thanks to Model Based Definition it is possible to share data in a lean and simple way across the entire engineering and production chain of the product; the great advantage of this approach is precisely the uniqueness of the information. In this thesis work, the approach is applied in the context of tolerances with the aid of CAD/CAT software. Tolerance analysis, or dimensional variation analysis, is a way to understand how sources of variation in part dimensions and assembly constraints propagate between parts and assemblies, and how that variation affects the ability of a design to meet its requirements. It is critically important to note that tolerance directly affects the cost and performance of products. Worst Case Analysis (WCA) and statistical analysis (RSS) are the two principal methods in dimensional variation analysis. The thesis aims to show the advantages of statistical dimensional analysis by creating and examining various case studies, using PTC CREO software for CAD modeling and CETOL 6σ for tolerance analysis. Moreover, a comparison between manual and 3D analysis is provided, focusing on the information lost in the 1D case. The results obtained highlight the need to use this approach from the early stages of the product design cycle.
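A minimal worked example contrasting the two stack-up methods named above, Worst Case Analysis and root-sum-of-squares, for a simple 1D chain of dimensions. The tolerance values are illustrative assumptions, not one of the thesis case studies modelled in PTC CREO / CETOL 6σ.

```python
# Worst-case vs. RSS tolerance stack-up for a chain of stacked parts.
import math

tolerances = [0.10, 0.05, 0.20, 0.08]  # symmetric +/- tolerances of each part (mm, assumed)

wca = sum(tolerances)                              # every part at its extreme simultaneously
rss = math.sqrt(sum(t ** 2 for t in tolerances))   # statistical (root-sum-of-squares) combination

print(f"Worst-case stack-up: +/- {wca:.3f} mm")
print(f"RSS stack-up:        +/- {rss:.3f} mm")
# RSS is tighter because simultaneous extremes are statistically unlikely, which is
# why statistical analysis typically permits looser, cheaper individual tolerances.
```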
Abstract:
Segmentation is the partition of an image into structurally or semantically coherent areas. In medical imaging, it is used to identify and outline clinical Regions of Interest (ROIs), such as tumour lesions, which are then examined through semi-automatic and automatic analyses or targeted by localized treatments. Segmentation of tumour lesions, whether assisted or automatic, consists of identifying the pixels or voxels, in images or volumes, that belong to the tumour. In the assisted technique the physician draws the ROI, while the automatic one is performed by trained software, including Computer Aided Detection (CAD) systems. Using computer vision techniques, numerical characteristics (features) with diagnostic, predictive, or prognostic value are extracted from the ROIs. The goal of this thesis is to design and develop assisted-segmentation software that allows the physician to draw one or more ROIs simply and effectively, in an organized and structured way, for later processing, analysis, and visualization. Starting from Aliza, an open-source viewer for radiological examinations in DICOM format, the graphical interface was extended to handle drawing, organization, and automatic storage of the ROIs. In addition, an automatic procedure was implemented to process and analyse ROIs drawn on prostate tumour lesions, predicting for each one the probability of clinically non-significant and significant cancer (the latter with a worse prognosis). To this end, a linear classifier based on a Support Vector Machine was trained on a population of 89 patients with 117 lesions (56 clinically significant), obtaining, in testing, accuracy = 77%, sensitivity = 86%, and specificity = 69%. The developed system assists the radiologist by providing a non-binding second opinion that supports the definition of the clinical picture and prognosis, as well as the therapeutic choices.
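A minimal sketch, in the spirit of the classification step described above, of training a linear SVM on ROI features and reporting accuracy, sensitivity, and specificity. The synthetic features are assumptions standing in for the radiomic features of the prostate lesions; this is not the thesis' 89-patient data set or pipeline.

```python
# Linear SVM on synthetic ROI features with accuracy / sensitivity / specificity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(117, 10))   # 117 lesions, 10 ROI features (synthetic, assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=117) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy   :", accuracy_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred))               # recall on the positive class
print("specificity:", recall_score(y_te, pred, pos_label=0))  # recall on the negative class
```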
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, from which one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
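A minimal sketch of the two kernel functions compared in the study (Gaussian and exponential radial basis functions) plugged into a standard SVM as custom kernels. The toy feature vectors stand in for the wavelet/Lyapunov features extracted from EEG segments; they and the chosen radius are assumptions, not the study's data or calibrated values.

```python
# Gaussian and exponential RBF kernels used as callable kernels in scikit-learn's SVC.
import numpy as np
from sklearn.svm import SVC

def gaussian_rbf(X, Y, radius=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 * radius^2))"""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * radius ** 2))

def exponential_rbf(X, Y, radius=1.0):
    """k(x, y) = exp(-||x - y|| / radius)"""
    d = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    return np.exp(-d / radius)

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))        # 40 EEG segments, 8 features (synthetic, assumed)
y = (X[:, 0] > 0).astype(int)       # toy labels: "normal" vs. "epileptic"

for name, kernel in [("Gaussian", gaussian_rbf), ("Exponential", exponential_rbf)]:
    clf = SVC(kernel=lambda A, B, k=kernel: k(A, B, radius=2.0)).fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))
```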
Abstract:
Nursing diagnoses associated with alterations of urinary elimination require different interventions. Nurses who are not specialists need support to diagnose and manage patients with disturbances of urinary elimination. The aim of this study was to present a model based on fuzzy logic for the differential diagnosis of alterations in urinary elimination, considering nursing diagnoses approved by the North American Nursing Diagnosis Association, 2001-2002. Fuzzy relations and the maximum-minimum composition approach were used to develop the system. The model's performance was evaluated on 195 cases from the database of a previous study, resulting in 79.0% total concordance and 19.5% partial concordance when compared with the panel of experts. Total discordance was observed in only three cases (1.5%). The agreement between the model and the experts was excellent (kappa = 0.98, P < .0001) or substantial (kappa = 0.69, P < .0001) when considering the overestimative accordance (accordance counted when at least one diagnosis was equal) and the underestimative discordance (discordance counted when at least one diagnosis was different), respectively. The model presented here showed good performance and a simple theoretical structure, therefore demanding few computational resources.
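A minimal sketch of the maximum-minimum composition of fuzzy relations, the operation the diagnostic model is built on. The membership values below (patient signs/symptoms versus urinary-elimination diagnoses) are illustrative assumptions, not the relations elicited from the expert panel in the study.

```python
# Max-min composition of fuzzy relations: (R o S)[i, k] = max_j min(R[i, j], S[j, k]).
import numpy as np

def max_min_composition(R: np.ndarray, S: np.ndarray) -> np.ndarray:
    """Compose fuzzy relations R (m x n) and S (n x p) into an m x p relation."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# Patient-to-symptom membership degrees (1 patient x 3 symptoms, assumed).
patient = np.array([[0.8, 0.3, 0.6]])
# Symptom-to-diagnosis fuzzy relation (3 symptoms x 2 diagnoses, assumed).
relation = np.array([[0.9, 0.2],
                     [0.4, 0.7],
                     [0.5, 0.8]])

# Degree of membership of the patient in each candidate diagnosis.
print(max_min_composition(patient, relation))
```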
Abstract:
In this paper, we propose a method based on association rule mining to enhance the diagnosis of medical images (mammograms). It combines low-level features automatically extracted from images with high-level knowledge from specialists to search for patterns. Our method analyzes medical images and automatically generates diagnosis suggestions by mining association rules. The suggestions are used to accelerate the image analysis performed by specialists and to provide them with an alternative to consider. The proposed method uses two new algorithms, PreSAGe and HiCARe. The PreSAGe algorithm combines feature selection and discretization in a single step, reducing the mining complexity. Experiments show that PreSAGe is highly suitable for performing feature selection and discretization in medical images. HiCARe is a new associative classifier with a distinctive property: it assigns multiple keywords per image to suggest a diagnosis with high accuracy. Our method was applied to real datasets, and the results show high sensitivity (up to 95%) and accuracy (up to 92%), allowing us to claim that association rules are a powerful means of assisting the diagnosis task.
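A minimal sketch of the underlying association-rule idea, showing how a rule linking discretized image features to a diagnosis keyword is scored by support and confidence. This is a generic illustration only; it is not an implementation of the PreSAGe or HiCARe algorithms, and the feature "items" are assumptions.

```python
# Support and confidence of an association rule over toy image "transactions".
# Each transaction holds discretized low-level features plus the specialist keyword.
transactions = [
    {"texture=high", "density=high", "diagnosis=malignant"},
    {"texture=high", "density=low",  "diagnosis=benign"},
    {"texture=high", "density=high", "diagnosis=malignant"},
    {"texture=low",  "density=low",  "diagnosis=benign"},
]

def support(itemset: set) -> float:
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent: set, consequent: set) -> float:
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

rule_lhs = {"texture=high", "density=high"}
rule_rhs = {"diagnosis=malignant"}
print("support   :", support(rule_lhs | rule_rhs))
print("confidence:", confidence(rule_lhs, rule_rhs))
```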
Abstract:
The article describes an attempt to improve student learning outcomes in a computer networks course by making lectures more active learning experiences. Quick quizzes, group and individual exercises, the review of student questions, and multiple breaks were incorporated into the weekly three-hour lectures. Student responses to the modified lectures were overwhelmingly positive: over 85% of respondents agreed that the lectures aided understanding, and large majorities found the individual activities useful to their learning. Although student examination performance improved over the previous year, performance on an examination question designed to assess deep understanding remained unchanged.