13 results for Images - Computational methods

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

80.00%

Publisher:

Abstract:

Progress in repair and strengthening techniques for steel structures using fibre-reinforced polymers (FRP). 1.1 Introduction to the recurring problems of steel structures. Modern structures of a certain importance, such as skyscrapers or bridges, have very long construction times and high costs, so their durability is of fundamental importance, meaning a long service life and low maintenance costs; maintenance is understood here also as a way of keeping the structure at predefined performance levels. The definition of performance includes load-bearing capacity, durability, functionality and aesthetic appearance. If the performance level drops too low, it becomes necessary to intervene to restore the initial characteristics of the structure. Structures with a long service life, as is the case for most civil and building structures, will have to satisfy new or modified requirements: means of transport, for example, have become heavier and more widespread, and vehicle speeds have increased, which also leads to greater dynamic loads.

Relevance:

80.00%

Publisher:

Abstract:

The multimodal biological activity of ergot alkaloids has been known to humankind since the Middle Ages. Synthetically modified ergot alkaloids are used for the treatment of various medical conditions. Despite great progress in organic synthesis, the total synthesis of ergot alkaloids remains a major challenge due to the complexity of their polycyclic structure with multiple stereogenic centres. This project developed a new domino reaction between indoles bearing a Michael acceptor at the 4-position and nitroethene, leading to potential ergot alkaloid precursors in highly enantioenriched form. The reaction was optimised and applied to a large variety of substrates with good results. Although all attempts to further modify the obtained polycyclic structure unfortunately failed, a reaction was found that produces the diastereoisomer of the polycyclic product in excellent yields. The compounds synthesised were characterised by NMR and ESI-MS analysis, confirming their structure, and their enantiomeric excess was determined by chiral stationary phase HPLC. The mechanism of the reaction was investigated by DFT calculations, showing the formation of a key bicoordinated nitronate intermediate and fully accounting for the results observed with all substrates. The relative and absolute configurations of the adducts were determined by a combination of NMR, ECD and computational methods.

Relevance:

80.00%

Publisher:

Abstract:

The present dissertation aims at simulating the construction of lexicographic layouts for an Italian combinatory dictionary based on real linguistic data, extracted from corpora by means of computational methods. This work is based on the assumption that the intuition of the native speaker, or of the lexicographer who manually extracts and classifies all the relevant data, is not adequate to provide sufficient information on the meaning and use of words. Therefore, a study of the real use of language is required, and this is particularly true for dictionaries that collect the combinatory behaviour of words, where the task of the lexicographer is to identify the typical combinations in which a word occurs. This study is conducted in the framework of the CombiNet project, aimed at studying Italian word combinations and at building an online, corpus-based combinatory lexicographic resource for the Italian language. This work is divided into three chapters. Chapter 1 describes the criteria considered for the classification of word combinations according to the work of Ježek (2011). Chapter 1 also contains a brief comparison between the most important Italian combinatory dictionaries and the BBI Dictionary of Word Combinations, in order to describe how word combinations are treated in these lexicographic resources. Chapter 2 describes the main computational methods used for the extraction of word combinations from corpora, taking into account the advantages and disadvantages of the two methods. Chapter 3 focuses on the practical work carried out in the framework of the CombiNet project, with reference to the tools and resources used (EXTra, LexIt and the "La Repubblica" corpus). Finally, the extracted data and the lexicographic layout of the lemmas to be included in the combinatory dictionary are discussed, namely the words "acqua" (water), "braccio" (arm) and "colpo" (blow, shot, stroke).
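
As an illustration of the statistical side of such extraction, the sketch below ranks candidate word pairs from a toy corpus by pointwise mutual information (PMI), a standard association measure; it is a generic example, not the EXTra or LexIt pipeline used in the CombiNet project, and the miniature corpus is invented.

```python
# Minimal sketch of statistical collocation extraction via pointwise mutual
# information (PMI). Generic illustration only; the toy corpus is invented.
import math
from collections import Counter
from itertools import tee

corpus = "bere un bicchiere di acqua fresca e poi bere acqua fresca di nuovo".split()

def bigrams(tokens):
    a, b = tee(tokens)
    next(b, None)
    return zip(a, b)

unigram_counts = Counter(corpus)
bigram_counts = Counter(bigrams(corpus))
n_tokens = len(corpus)
n_bigrams = max(n_tokens - 1, 1)

def pmi(w1, w2):
    """PMI(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) )."""
    p_joint = bigram_counts[(w1, w2)] / n_bigrams
    p_w1 = unigram_counts[w1] / n_tokens
    p_w2 = unigram_counts[w2] / n_tokens
    return math.log2(p_joint / (p_w1 * p_w2)) if p_joint > 0 else float("-inf")

# Rank candidate word combinations by PMI (descending).
ranked = sorted(bigram_counts, key=lambda bg: pmi(*bg), reverse=True)
for w1, w2 in ranked[:5]:
    print(f"{w1} {w2}\tPMI = {pmi(w1, w2):.2f}")
```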

Relevance:

80.00%

Publisher:

Abstract:

Benzoquinone was found to be an effective co-catalyst in the ruthenium/NaOEt-catalyzed Guerbet reaction. Its behavior as a co-catalyst was therefore investigated through experimental and computational methods. The distribution of reaction products shows that the benzoquinone supplement increases the reaction rate from the beginning of the process, with a minimal effect on the selectivity toward alcoholic species. DFT calculations were performed to investigate two hypotheses for the kinetic effect: i) a hydrogen-storage mechanism, or ii) basic co-catalysis by 4-hydroxyphenolate. The most promising results were found for the latter hypothesis, in which a new mixed mechanism for the aldol condensation step of the Guerbet process involves hydroquinone (i.e. the reduced form of benzoquinone) as the proton source instead of ethanol. This mechanism was found to be energetically more favorable than aldol condensation in the absence of the additive, suggesting that the hydroquinone derived from benzoquinone could be the key species affecting the kinetics of the overall process. To verify this theoretical hypothesis, new phenol derivatives were tested as additives in the Guerbet reaction. The outcomes confirmed that an aromatic acid (stronger than ethanol) can improve the reaction kinetics. Lastly, theoretical product distributions were simulated and compared with the experimental ones, using the DFT computations to build the kinetic models.
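
As a hedged illustration of how DFT energies typically feed a kinetic model, the sketch below converts free-energy barriers into rate constants with the Eyring equation; the barrier values, temperature and pathway labels are placeholders, not the quantities computed in the thesis.

```python
# Minimal sketch of turning DFT free-energy barriers into rate constants with
# the Eyring equation, k = (kB*T/h) * exp(-dG‡ / (R*T)). The barrier values
# below are placeholders, not the ones computed in the thesis.
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(dg_kcal_mol: float, temperature: float = 423.15) -> float:
    """Rate constant (1/s) for a free-energy barrier given in kcal/mol."""
    dg_j_mol = dg_kcal_mol * 4184.0
    return (KB * temperature / H) * math.exp(-dg_j_mol / (R * temperature))

# Hypothetical barriers for two competing aldol-condensation pathways.
barriers = {"ethanol as proton source": 28.0, "hydroquinone as proton source": 24.5}
for pathway, dg in barriers.items():
    print(f"{pathway}: dG‡ = {dg} kcal/mol -> k ~ {eyring_rate(dg):.3e} s^-1")
```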

Relevance:

30.00%

Publisher:

Abstract:

The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al. (2012). Model parameters were randomized to generate a population of cell models with different properties, simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) compared with the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effects on pacemaking activity.
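
The population-based regression analysis described above can be sketched as follows; a simple surrogate function stands in for the full Maltsev and Severi ODE models, and the parameter names and surrogate formula are illustrative assumptions only.

```python
# Minimal sketch of population-based regression sensitivity analysis, with a
# placeholder surrogate in place of the Maltsev/Severi SAN ODE models.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
param_names = ["g_CaL", "g_Kr", "I_f", "SERCA", "I_NaK"]
n_models, n_params = 300, len(param_names)

# Log-normal random scale factors around the baseline parameter values.
scales = np.exp(rng.normal(0.0, 0.2, size=(n_models, n_params)))

def surrogate_cycle_length(s):
    """Toy stand-in for 'run the SAN model, measure cycle length (ms)'."""
    g_cal, g_kr, i_f, serca, i_nak = s
    return 320.0 / (0.4 * i_f + 0.3 * serca + 0.2 * g_cal
                    + 0.05 * i_nak + 0.1 / g_kr) + rng.normal(0.0, 2.0)

cycle_length = np.array([surrogate_cycle_length(s) for s in scales])

# Regress log-outputs on log-parameters: coefficients are sensitivities.
reg = LinearRegression().fit(np.log(scales), np.log(cycle_length))
for name, coef in sorted(zip(param_names, reg.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:6s} sensitivity of CL: {coef:+.3f}")
```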

Relevance:

30.00%

Publisher:

Abstract:

In this work we study a polyenergetic and multimaterial model for breast image reconstruction in Digital Tomosynthesis, taking into consideration the variety of materials forming the object and the polyenergetic nature of the X-ray beam. The modelling of the problem leads to the resolution of a high-dimensional nonlinear least-squares problem that, being an ill-posed inverse problem, requires some form of regularization. We test two main classes of methods: the Levenberg-Marquardt method (together with the Conjugate Gradient method for the computation of the descent direction) and two limited-memory BFGS-like (L-BFGS) methods. We perform experiments for different values of the regularization parameter (constant or varying at each iteration), tolerances and stopping conditions. Finally, we analyse the performance of the various methods by comparing relative errors, numbers of iterations, run times and the quality of the reconstructed images.
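
As a minimal sketch of the L-BFGS class of methods applied to a regularized nonlinear least-squares problem, the example below minimizes 0.5·||F(x) − y||² + 0.5·λ·||x||² with scipy's L-BFGS-B implementation; the forward model F is a toy nonlinearity, not the polyenergetic tomosynthesis operator studied in the thesis.

```python
# Minimal sketch of a Tikhonov-regularized nonlinear least-squares problem
# solved with an L-BFGS method. The forward model F(x) = exp(-A x) is a toy
# stand-in for the polyenergetic tomosynthesis operator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m = 200, 400
A = rng.normal(size=(m, n)) / np.sqrt(n)
x_true = rng.random(n)
y = np.exp(-A @ x_true) + rng.normal(scale=1e-3, size=m)  # noisy measurements
lam = 1e-2                                                # regularization parameter

def objective(x):
    f = np.exp(-A @ x)
    r = f - y
    value = 0.5 * r @ r + 0.5 * lam * x @ x
    grad = -A.T @ (f * r) + lam * x                       # analytic gradient
    return value, grad

res = minimize(objective, x0=np.zeros(n), jac=True, method="L-BFGS-B",
               options={"maxiter": 500, "ftol": 1e-12})
rel_err = np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true)
print(f"converged: {res.success}, relative error: {rel_err:.3f}")
```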

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether the tumor will undergo microvascular infiltration (MVI), the initial stage of metastasis development. The input data for this work were partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two U-Net models have been implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models have been evaluated with the Intersection-over-Union (IoU) and Dice Coefficient (DC) metrics. The outcomes obtained for the automatic liver segmentation are quite good (IoU = 0.82; DC = 0.35); the outcomes obtained for the automatic tumor segmentation (IoU = 0.35; DC = 0.46) are instead affected by some limitations: it can be stated that the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose is to obtain the CT images of the HCC tumors needed for feature extraction. The 14 Haralick features calculated from the 3D GLCM, the 120 radiomic features and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be seen as a classification problem: each tumor needs to be classified either as "MVI positive" or "MVI negative". Feature selection techniques are implemented to identify the most descriptive features for the problem at hand, and a set of classification models are then trained and compared. The models with the best performances (around 80-84% ± 8-15%) are the XGBoost classifier, the SGD classifier and the logistic regression models (without penalization and with Lasso, Ridge or Elastic Net penalization).
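
A minimal sketch of the feature-selection-plus-classification step is given below; it uses synthetic stand-ins for the 153 features and scikit-learn's GradientBoostingClassifier in place of XGBoost to keep the dependencies standard, so all names and settings are illustrative assumptions only.

```python
# Minimal sketch of the MVI classification step: feature selection followed by
# a cross-validated comparison of classifiers. Synthetic data stand in for the
# 153 Haralick/radiomic/clinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "MVI positive / MVI negative" dataset with 153 features.
X, y = make_classification(n_samples=200, n_features=153, n_informative=20,
                           random_state=0)

models = {
    "gradient boosting (XGBoost stand-in)": GradientBoostingClassifier(random_state=0),
    "SGD classifier": SGDClassifier(random_state=0),
    "logistic regression (L2/Ridge)": LogisticRegression(max_iter=5000),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=30),  # keep 30 most descriptive features
                         clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.2f} ± {scores.std():.2f}")
```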

Relevance:

30.00%

Publisher:

Abstract:

The dissertation starts by describing the phenomena related to the increasing importance recently acquired by satellite applications. The spread of this technology comes with implications, such as an increase in maintenance costs, from which derives the interest in developing advanced techniques that favor greater autonomy of spacecraft in health monitoring. Machine learning techniques are widely employed to lay the foundation for effective systems specialized in fault detection by examining telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware. In the framework of outlier detection, the dissertation addresses unsupervised machine learning methods, in which no prior knowledge of the data behavior is assumed. Specifically, two models are considered, namely Local Outlier Factor and One-Class Support Vector Machines. Their performances are compared in terms of both prediction accuracy and computational cost. Both models are trained and tested on the same sets of time-series data in a variety of settings, aimed at gaining insight into the effect of increasing dimensionality. The results obtained show that both models, combined with a proper tuning of their characteristic parameters, successfully fulfill the role of outlier detectors for multivariate time-series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves to be more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model adapts well to unforeseen patterns.
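
A minimal sketch of the comparison, assuming randomly generated stand-ins for the telemetry data and illustrative hyperparameters, is the following:

```python
# Minimal sketch: Local Outlier Factor vs One-Class SVM on multivariate,
# telemetry-like data. Random data stand in for actual spacecraft telemetry.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Nominal multivariate samples (training) and a test set with injected anomalies.
X_train = rng.normal(0.0, 1.0, size=(2000, 8))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(480, 8)),
                    rng.normal(6.0, 1.0, size=(20, 8))])  # 20 anomalous samples
y_true = np.r_[np.ones(480), -np.ones(20)]                 # +1 nominal, -1 outlier

models = {
    "Local Outlier Factor": LocalOutlierFactor(n_neighbors=35, novelty=True),
    "One-Class SVM": OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"),
}
for name, model in models.items():
    model.fit(X_train)
    y_pred = model.predict(X_test)          # +1 inlier, -1 outlier
    detected = np.sum((y_pred == -1) & (y_true == -1))
    false_alarms = np.sum((y_pred == -1) & (y_true == 1))
    print(f"{name}: detected {detected}/20 anomalies, {false_alarms} false alarms")
```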

Relevance:

30.00%

Publisher:

Abstract:

When it comes to designing a structure, architects and engineers want to join forces in order to create and build the most beautiful and efficient building possible. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between the two professions. In architecture, there has always been a particular interest in creating new shapes and types of structure, inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking about and designing structures, a mindset that led, through study, to the current best practices in construction. However, at a certain point both disciplines were limited by traditional manufacturing constraints. Over the last decades, much technological progress has been made, making it possible to go beyond today's manufacturing constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Both technologies allow for exploring and building unusual and complex structural shapes, in addition to reducing costs and environmental impacts. In this study, the author makes use of these technologies and assesses their potential, first to design an aesthetically pleasing tree-like column and then to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalog of the latter and the methods used to establish it are then proposed and discussed. Finally, the buckling behavior of the cross-section is assessed considering both standard steel and WAAM material properties.
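
As a hedged illustration of the final buckling assessment, the sketch below applies the Euler critical-load formula P_cr = π²EI/(KL)² to an illustrative column geometry, contrasting standard steel with an assumed, placeholder elastic modulus for WAAM-printed steel; the assessment in the thesis is not necessarily limited to this formula.

```python
# Minimal sketch of an Euler buckling check, P_cr = pi^2 * E * I / (K*L)^2,
# comparing standard structural steel with an assumed (placeholder) reduced
# elastic modulus for WAAM-printed steel. Geometry values are illustrative,
# not the thesis cross-section.
import math

def euler_critical_load(E, I, L, K=1.0):
    """Critical buckling load in N (E in Pa, I in m^4, L in m, K = effective-length factor)."""
    return math.pi**2 * E * I / (K * L) ** 2

I_section = 2.0e-6       # second moment of area of the column, m^4 (illustrative)
L = 3.0                  # column length, m
materials = {
    "standard steel (E = 210 GPa)": 210e9,
    "WAAM steel, assumed (E = 160 GPa)": 160e9,
}
for name, E in materials.items():
    P_cr = euler_critical_load(E, I_section, L, K=1.0)   # pinned-pinned column
    print(f"{name}: P_cr ~ {P_cr / 1e3:.0f} kN")
```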

Relevance:

30.00%

Publisher:

Abstract:

In order to estimate depth through supervised deep-learning-based stereo methods, it is necessary to have access to precise ground-truth depth data. While the gathering of precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances, with high precision, on difficult surfaces (which exhibit non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground-truth data without having to rely on external sensors. In this thesis, two different approaches have been tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
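
One classic hand-crafted confidence measure usable for proxy labeling is the left-right consistency check; the sketch below applies it to two precomputed disparity maps, with the caveat that the thesis may rely on different or additional confidence measures.

```python
# Minimal sketch of a left-right consistency check used as a confidence
# measure for proxy labeling. It assumes two precomputed disparity maps
# (left-to-right and right-to-left) from any non-learned stereo algorithm;
# pixels whose disparities disagree are discarded from the noisy labels.
import numpy as np

def left_right_consistency_mask(disp_left, disp_right, threshold=1.0):
    """Return a boolean mask of pixels passing the left-right consistency check."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # For each left pixel, look up the disparity at the matching right-image column.
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_reprojected = disp_right[ys, x_right]
    return np.abs(disp_left - disp_reprojected) <= threshold

# Toy example: a constant 5-pixel disparity with one corrupted estimate.
disp_l = np.full((4, 8), 5.0)
disp_r = np.full((4, 8), 5.0)
disp_l[1, 3] = 12.0
mask = left_right_consistency_mask(disp_l, disp_r)
print(mask.sum(), "of", mask.size, "pixels kept as proxy labels")
```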

Relevance:

30.00%

Publisher:

Abstract:

Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images that had undergone a diagnostic evaluation for Alzheimer's disease: 540 evaluations were positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensionality of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality-reduction map. The interpolant was tested on the PET images via leave-one-out cross-validation (LOOCV), where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method based on PCA (MSSIM = 0.79 ± 0.06), which gave clearly poorer-quality reconstructed images than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians themselves, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved by refining the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
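
Two of the ingredients mentioned above, the Isomap embedding and the SSIM-based scoring of a reconstruction (here a PCA round trip, standing in for the comparison method), can be sketched as follows; the image data are random placeholders rather than amyloid-beta PET scans, and the thesis-specific scale-free inverse interpolation is not reproduced.

```python
# Minimal sketch: Isomap embedding of flattened images and SSIM-based scoring
# of a PCA round-trip reconstruction. Random arrays stand in for PET images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))            # placeholder "PET" slices
X = images.reshape(len(images), -1)           # flatten to (n_samples, n_features)

# Non-linear low-dimensional embedding of the dataset.
embedding = Isomap(n_neighbors=10, n_components=5).fit_transform(X)
print("Isomap embedding shape:", embedding.shape)

# PCA round trip as a baseline reconstruction, scored with the mean SSIM.
pca = PCA(n_components=5).fit(X)
X_rec = pca.inverse_transform(pca.transform(X))
ssims = [structural_similarity(img, rec.reshape(32, 32), data_range=1.0)
         for img, rec in zip(images, X_rec)]
print(f"MSSIM = {np.mean(ssims):.3f} ± {np.std(ssims):.3f}")
```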

Relevance:

30.00%

Publisher:

Abstract:

Worldwide, biodiversity is decreasing due to climate change, habitat fragmentation and agricultural intensification. Bees are essential crop pollinators, but their abundance and diversity are decreasing as well. For their conservation, it is necessary to assess the status of bee populations. Field data collection methods are expensive and time-consuming; therefore, new methods based on remote sensing have recently been adopted. In this study we tested the possibility of using flower cover diversity estimated from UAV images (FCD-UAV) to assess bee diversity and abundance in 10 agricultural meadows in the Netherlands. To do so, field data on flower and bee diversity and abundance were collected during a campaign in May 2021. Furthermore, RGB images of the areas were collected using an Unmanned Aerial Vehicle (UAV) and post-processed into orthomosaics. Lastly, the Random Forest machine learning algorithm was applied to estimate the FCD of the species detected in each field. The resulting FCD was expressed with the Shannon and Simpson diversity indices, which were then correlated with bee Shannon and Simpson diversity indices, abundance and species richness. The results showed a positive relationship between FCD-UAV and in-situ collected data on bee diversity (evaluated with the Shannon index), abundance and species richness. The strongest relationship was found between FCD (Shannon index) and bee abundance, with R2 = 0.52. Good correlations were also found with bee species richness (R2 = 0.39) and bee diversity (R2 = 0.37). The R2 values of the relationships between FCD (Simpson index) and bee abundance, species richness and diversity were slightly lower (0.45, 0.37 and 0.35, respectively). Our results suggest that the proposed method, based on the coupling of UAV imagery and machine learning for the assessment of flower species diversity, could be developed into a valuable tool for large-scale, standardized and cost-effective monitoring of flower cover and of habitat quality for bees.
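
A minimal sketch of the diversity indices and the correlation step, using invented flower-cover fractions and bee abundances in place of the Dutch field data, is the following:

```python
# Minimal sketch of Shannon/Simpson diversity indices computed from per-field
# flower-cover fractions, correlated with bee abundance. All values are
# invented placeholders, not the field data of the study.
import numpy as np
from scipy.stats import linregress

def shannon(p):
    """Shannon diversity H' = -sum(p_i * ln p_i), ignoring zero covers."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def simpson(p):
    """Simpson diversity expressed here as 1 - sum(p_i^2) (Gini-Simpson form)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p**2)

# Hypothetical per-meadow flower-cover fractions and observed bee abundances.
flower_cover = np.array([[0.50, 0.30, 0.15, 0.05],
                         [0.70, 0.20, 0.05, 0.05],
                         [0.25, 0.25, 0.25, 0.25],
                         [0.90, 0.05, 0.03, 0.02],
                         [0.40, 0.40, 0.10, 0.10]])
bee_abundance = np.array([34, 21, 48, 12, 40])

fcd_shannon = np.array([shannon(row) for row in flower_cover])
fit = linregress(fcd_shannon, bee_abundance)
print(f"Shannon FCD vs bee abundance: R2 = {fit.rvalue**2:.2f}")
```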