3 results for Formal Methods. Component-Based Development. Competition. Model Checking
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
Osteoporosis is one of the major causes of mortality among the elderly. Nowadays, areal bone mineral density (aBMD) is used as the diagnostic criterion for osteoporosis; however, it is only a moderate predictor of femur fracture risk and does not capture the effect of some anatomical and physiological properties on the estimation of bone strength. Data from past research suggest that most fragility fractures of the femur occur in patients with aBMD values outside the pathological range. Subject-specific finite element (FE) models derived from computed tomography data are considered better tools for non-invasively assessing hip fracture risk. In particular, the Bologna Biomechanical Computed Tomography (BBCT) is an in silico methodology that uses a subject-specific FE model to predict bone strength. Several studies have demonstrated that the modelling pipeline can increase the predictive accuracy of osteoporosis detection and assess the efficacy of new antiresorptive drugs. However, one critical aspect that must be properly addressed before the technology can be used in clinical practice is the assessment of the model's credibility. The aim of this study was to define and perform verification and uncertainty quantification analyses on the BBCT methodology, following the risk-based credibility assessment framework recently proposed in the VV-40 standard. The analyses focused on the main verification tests used in computational solid mechanics: force and moment equilibrium checks, mesh convergence analyses, a mesh quality metrics study, and evaluation of the uncertainties associated with the definition of the boundary conditions and the material properties mapping. The results of these analyses showed that the FE model is correctly implemented and solved. The operation that most strongly affects the model results is the material properties mapping step. This work represents an important step that, together with the ongoing clinical validation activities, will contribute to demonstrating the credibility of the BBCT methodology.
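As an illustration of the first of these verification tests, the sketch below shows how a global force and moment equilibrium check might be run on the nodal output of a solved FE model. This is a minimal sketch under assumed data shapes (arrays of nodal coordinates, applied loads, and solver reaction forces), not the actual BBCT verification code; the tolerance and the simply supported beam used in the demo are illustrative only.

```python
import numpy as np

def equilibrium_check(coords, applied, reactions, rel_tol=1e-6):
    """Check global force and moment equilibrium of a solved FE model.

    coords, applied, reactions: (n_nodes, 3) arrays holding nodal
    positions, externally applied loads, and solver reaction forces.
    """
    net = applied + reactions                     # net force at each node
    f_res = np.linalg.norm(net.sum(axis=0))       # global force residual
    m_res = np.linalg.norm(np.cross(coords, net).sum(axis=0))  # moment about origin
    f_scale = np.abs(applied).sum() + 1e-30       # normalisation (avoids /0)
    m_scale = np.abs(np.cross(coords, applied)).sum() + 1e-30
    return f_res / f_scale < rel_tol and m_res / m_scale < rel_tol

# Demo on a simply supported beam: a 100 N load at mid-span is
# balanced by two 50 N support reactions.
coords    = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
applied   = np.array([[0.0, 0.0, 0.0], [0.0, -100.0, 0.0], [0.0, 0.0, 0.0]])
reactions = np.array([[0.0, 50.0, 0.0], [0.0, 0.0, 0.0], [0.0, 50.0, 0.0]])
print(equilibrium_check(coords, applied, reactions))  # True
```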
Abstract:
Artificial Intelligence is reshaping the fashion industry in several ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions, and forecast the success of specific fashion products. However, this is a challenging endeavour, as the data they possess are huge, complex, and multi-modal. The most common way to search for fashion products online is by matching keywords against phrases in product descriptions, which are often cluttered, inadequate, and inconsistent across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and does not guarantee finding relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed that model both the product image and its description to solve these problems. However, the suggested solutions do not effectively exploit the semantic and syntactic information of these modalities, nor the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs, in order to exploit the relations within and between the two modalities, and employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to the current state-of-the-art deep learning architectures.
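As a rough illustration of the graph-based idea, the sketch below builds a syntactic graph from a product description using a dependency parse: tokens become nodes and dependency relations become edges, so related attributes stay connected regardless of word order. This is a minimal sketch using spaCy and networkx, not the thesis's actual pipeline; the model name and the example description are assumptions.

```python
import spacy          # requires: python -m spacy download en_core_web_sm
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def description_to_graph(text: str) -> nx.Graph:
    """Turn a product description into a syntactic graph.

    Nodes are tokens (with text and part-of-speech attributes); edges
    follow the dependency parse, so an attribute like "long" stays
    linked to the noun it modifies ("sleeve").
    """
    doc = nlp(text)
    g = nx.Graph()
    for token in doc:
        g.add_node(token.i, text=token.text, pos=token.pos_)
        if token.head.i != token.i:      # skip the root's self-reference
            g.add_edge(token.i, token.head.i, dep=token.dep_)
    return g

g = description_to_graph("Long-sleeve cotton shirt with a button-down collar")
print(g.number_of_nodes(), g.number_of_edges())
```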
Abstract:
Planning is an important sub-field of artificial intelligence (AI) focused on letting intelligent agents deliberate on the most adequate course of action to attain their goals. Given the recent growth in the number of critical domains and systems that exploit planning in their internal procedures, there is an increasing need for planning systems to become more transparent and trustworthy. Along this line, planning systems are now required to produce not only plans but also explanations of those plans, or of the way they were attained. To address this need, a new research area is emerging in the AI panorama: eXplainable AI (XAI), within which explainable planning (XAIP) is a pivotal sub-field. As a recent domain, XAIP is far from mature: no consensus has been reached in the literature about what explanations are, how they should be computed, and what they should explain in the first place. Furthermore, existing contributions are mostly theoretical, and software implementations are rarely more than preliminary. To overcome these issues, in this thesis we design an explainable planning framework bridging the gap between theoretical contributions from the literature and software implementations. More precisely, taking inspiration from the state of the art, we develop a formal model for XAIP and a software tool enabling its practical exploitation. Accordingly, the contribution of this thesis is four-fold. First, we review the state of the art of XAIP, supplying an outline of its most significant contributions from the literature. We then generalise these contributions into a unified model for XAIP, aimed at supporting model-based contrastive explanations. Next, we design and implement an algorithm-agnostic library for XAIP based on our model. Finally, we validate our library from a technological perspective via an extensive testing suite, and we assess its performance and usability through a set of benchmarks and end-to-end examples.
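To make the notion of a model-based contrastive explanation concrete, the sketch below answers a question of the form "why this plan, rather than one satisfying my alternative (the foil)?" by re-planning under the foil constraint and comparing plan costs. This is a minimal sketch against an assumed planner interface (the cost and replan_with_foil callables are hypothetical), not the API of the library developed in this thesis.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

Plan = Sequence[str]  # a plan as an ordered list of action names

@dataclass
class ContrastiveExplanation:
    """Outcome of comparing the proposed plan against the user's foil."""
    original_cost: float
    foil_cost: Optional[float]  # None if no plan satisfies the foil

    def summary(self) -> str:
        if self.foil_cost is None:
            return "The alternative you asked about admits no valid plan."
        delta = self.foil_cost - self.original_cost
        return (f"The alternative is feasible but changes the cost by "
                f"{delta:+.1f} relative to the proposed plan.")

def explain_why_not(plan: Plan,
                    cost: Callable[[Plan], float],
                    replan_with_foil: Callable[[], Optional[Plan]],
                    ) -> ContrastiveExplanation:
    """Answer "why this plan, and not one meeting my foil?" by re-planning
    under the foil constraint and comparing the resulting plan costs."""
    foil_plan = replan_with_foil()
    return ContrastiveExplanation(
        original_cost=cost(plan),
        foil_cost=None if foil_plan is None else cost(foil_plan),
    )

# Toy usage with unit action costs (cost = plan length).
exp = explain_why_not(
    plan=["pick_up", "move", "drop"],
    cost=len,
    replan_with_foil=lambda: ["pick_up", "move", "move", "drop"],
)
print(exp.summary())  # ... changes the cost by +1.0 ...
```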