968 results for SCANNER - MODELLI 3D - BENI CULTURALI


Relevance: 100.00%

Abstract:

This is the first part of the slides shown and discussed during the lectures.

Relevance: 100.00%

Abstract:

This is the second and final set of slides from the lectures.

Relevance: 100.00%

Abstract:

A new multi-energy CT scanner for small animals is being developed at the Physics Department of the University of Bologna, Italy. The system makes use of a set of quasi-monochromatic X-ray beams with energies tunable from 26 keV to 72 keV. These beams are produced by Bragg diffraction on a Highly Oriented Pyrolytic Graphite crystal. With quasi-monochromatic sources it is possible to perform multi-energy investigations more effectively than with conventional X-ray tubes. Multi-energy techniques allow the extraction of physical information about the materials, such as effective atomic number, mass thickness, and density, which can be used to distinguish and quantitatively characterize the irradiated tissues. The aim of the system is the investigation and development of new preclinical methods for the early detection of tumors in small animals. An innovative technique, Triple-Energy Radiography with Contrast Medium (TER), has been successfully implemented on our system. TER consists in combining a set of three quasi-monochromatic images of an object to obtain a corresponding set of three single-tissue images, which are the mass-thickness maps of three reference materials. TER can be applied to the quantitative reconstruction of the mass-thickness map of a contrast medium, because it completely removes the signal due to other tissues (i.e., the structural background noise). The technique is very sensitive to the contrast medium and insensitive to the superposition of different materials. The method is a good candidate for the early detection of tumor angiogenesis in mice. In this work we describe the tomographic system, with a particular focus on the quasi-monochromatic source. Moreover, the TER method is presented together with some preliminary results on small-animal imaging.
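At its core, a triple-energy decomposition like TER is a per-pixel linear system: at each of the three beam energies the measured log-attenuation is modeled as the sum of the mass attenuation coefficients of three reference materials weighted by their mass thicknesses, and inverting that 3x3 system yields one mass-thickness map per material. The Python/NumPy sketch below illustrates the idea; the attenuation matrix and phantom are placeholder values for illustration, not data from the Bologna system.

```python
import numpy as np

# Rows: the three beam energies; columns: the three reference materials.
# Entries are mass attenuation coefficients (mu/rho, cm^2/g) -- placeholder
# values for illustration only.
M = np.array([
    [0.50, 4.00, 0.35],   # low energy
    [0.25, 1.50, 0.22],   # mid energy
    [0.18, 0.70, 0.17],   # high energy
])

def ter_decompose(log_atten):
    """Solve M @ t = log_atten for the mass thicknesses t (g/cm^2).

    log_atten: array of shape (3, H, W) with -ln(I/I0) at the three energies.
    Returns an array of shape (3, H, W): one mass-thickness map per material.
    """
    e, h, w = log_atten.shape
    pixels = log_atten.reshape(3, -1)           # (3, H*W)
    t = np.linalg.solve(M, pixels)              # one 3x3 solve over all pixels
    return t.reshape(3, h, w)

# Synthetic demo: a phantom made of known thicknesses of the three materials.
t_true = np.zeros((3, 4, 4))
t_true[2, 1:3, 1:3] = 0.05                      # a small "contrast medium" inset
projections = np.tensordot(M, t_true, axes=1)   # forward model
t_rec = ter_decompose(projections)
assert np.allclose(t_rec, t_true)               # background-free recovery
```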

Relevance: 100.00%

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes that are strongly correlated with the groups of individuals. Even though the methods for analysing such data are today well developed and close to reaching a standard organization (through the efforts of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method exists to answer it.

The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes; however, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists in a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets are acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data.

Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that can play a crucial role: in some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option in any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3), and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class, G2, as a test set to obtain, for each G2 sample, the probability of membership in class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
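The resampling logic behind MultiSAM can be sketched compactly. In this illustrative Python version a Welch t-test stands in for the SAM statistic (an assumption made for brevity; the actual method applies SAM itself), and a probe's score is simply the number of balanced subsamples of the MPC in which it is flagged as differentially expressed.

```python
import numpy as np
from scipy import stats

def multisam_scores(lpc, mpc, n_iter=1000, alpha=0.05, seed=0):
    """Score each probe by its recurrence as differentially expressed.

    lpc: (n_lpc_samples, n_probes) expression matrix, less populated class.
    mpc: (n_mpc_samples, n_probes) expression matrix, more populated class.
    Returns an integer score per probe in [0, n_iter].
    """
    rng = np.random.default_rng(seed)
    n_lpc = lpc.shape[0]
    scores = np.zeros(lpc.shape[1], dtype=int)
    for _ in range(n_iter):
        # Draw a balanced subsample of the MPC, same size as the LPC.
        idx = rng.choice(mpc.shape[0], size=n_lpc, replace=False)
        # Welch t-test per probe (stand-in for the SAM statistic).
        _, p = stats.ttest_ind(lpc, mpc[idx], axis=0, equal_var=False)
        scores += (p < alpha).astype(int)
    return scores

# Toy demo: 5 vs 40 samples, 200 probes, with the first 10 probes shifted.
rng = np.random.default_rng(1)
lpc = rng.normal(size=(5, 200)); lpc[:, :10] += 2.0
mpc = rng.normal(size=(40, 200))
scores = multisam_scores(lpc, mpc, n_iter=200)
print("high-scoring probes:", np.flatnonzero(scores > 0.6 * 200))
```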

Relevance: 100.00%

Abstract:

Motivation: A topical issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new genome sequencing technologies in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from genome projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function, and their reciprocal interactions.

Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work, a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
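The genome-level discrimination step can be pictured as a standard pipeline: represent each coding sequence (or genome) by its codon-frequency vector, project with PCA, and separate thermophilic from mesophilic entries with a linear classifier. The sketch below uses scikit-learn on synthetic data; the feature construction matches the description above, while the classifier choice and the data are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]  # 64 codons

def codon_frequencies(cds):
    """Relative codon frequencies of a coding sequence (length multiple of 3)."""
    counts = dict.fromkeys(CODONS, 0)
    for i in range(0, len(cds) - 2, 3):
        codon = cds[i:i + 3]
        if codon in counts:
            counts[codon] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[c] / total for c in CODONS])

# Synthetic stand-in data: rows are codon-frequency vectors, labels mark
# thermophilic (1) vs mesophilic (0) entries. Real input would be coding
# sequences from the 116-genome database.
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(64), size=300)
y = rng.integers(0, 2, size=300)
X[y == 1, :4] += 0.02   # inject a weak compositional signal for the demo

clf = make_pipeline(PCA(n_components=10), LinearSVC())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```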

Relevance: 100.00%

Abstract:

The main problem of cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the large amount of scattered radiation that is added to the primary radiation (the signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering and methods to reduce its effects are therefore necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results for the environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 × 70 mm², and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which could be patented by Empa. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Besides, it has offered a basis for a new scatter correction approach by which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT, with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
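In its simplest projection-domain form, a scatter correction of this kind subtracts the (here MC-estimated) scatter signal from each measured projection before log-normalization; it is this subtraction that removes the cupping artifact. A minimal NumPy sketch, with synthetic numbers standing in for measured and simulated projections:

```python
import numpy as np

def scatter_corrected_log_projection(i_meas, i_scatter, i0, eps=1e-6):
    """Log-normalized projection after subtracting a scatter estimate.

    i_meas:    measured intensity (primary + scatter), shape (H, W)
    i_scatter: scatter estimate for this view, e.g. from a Monte Carlo model
    i0:        flat-field (unattenuated) intensity
    """
    primary = np.clip(i_meas - i_scatter, eps, None)  # avoid log of <= 0
    return -np.log(primary / i0)

# Synthetic demo: a flat object with a uniform scatter background.
i0 = 1000.0
true_log_atten = np.full((4, 4), 2.0)
primary = i0 * np.exp(-true_log_atten)
i_meas = primary + 50.0                     # add scatter
naive = -np.log(i_meas / i0)                # underestimates attenuation -> cupping
corrected = scatter_corrected_log_projection(i_meas, 50.0, i0)
print(naive[0, 0], corrected[0, 0])         # corrected recovers ~2.0
```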

Relevance: 100.00%

Abstract:

Research in art conservation has developed since the early 1950s, making a significant contribution to the conservation and restoration of cultural heritage artefacts. In fact, only through a profound knowledge of the nature and condition of the constituent materials can suitable decisions on conservation and restoration measures be adopted and preservation practices enhanced. The study of ancient artworks is particularly challenging, as they can be considered heterogeneous and multilayered systems in which numerous interactions between the different components, as well as degradation and ageing phenomena, take place. However, the difficulty of physically separating the different layers, owing to their thickness (1-200 µm), can result in the inaccurate attribution of the identified compounds to a specific layer. Therefore, details can only be analysed when the sample preparation method leaves the layer structure intact, as for example in the preparation of cross sections embedded in synthetic resins. Hence, spatially resolved analytical techniques are required, not only to characterize exactly the nature of the compounds but also to obtain precise chemical and physical information about ongoing changes. This thesis focuses on the application of FTIR microspectroscopic techniques to cultural heritage materials. The first section introduces the use of FTIR microscopy in conservation science, with particular attention to sampling criteria and sample preparation methods. The second section evaluates and validates different FTIR microscopic analytical methods applied to the art conservation issues that may be encountered in dealing with cultural heritage artefacts: the characterisation of the artistic execution technique (chapter II-1), studies on degradation phenomena (chapter II-2), and finally the evaluation of protective treatments (chapter II-3). The third and last section is divided into three chapters that present recent developments in FTIR spectroscopy for the characterisation of paint cross sections, and in particular of thin organic layers: a newly developed preparation method with embedding systems in infrared-transparent salts (chapter III-1), the new opportunities offered by macro-ATR imaging spectroscopy (chapter III-2), and the possibilities achieved with the different FTIR microspectroscopic techniques available today (chapter III-3). In chapter II-1, FTIR microspectroscopy, as a molecular analysis, is presented in an integrated approach with other analytical techniques. The proposed sequence is optimized as a function of the limited quantity of sample available, and this methodology makes it possible to identify the painting materials and characterise the execution technique adopted and the state of conservation. Chapter II-2 describes the characterisation of degradation products with FTIR microscopy, since the investigation of the ageing processes encountered in old artefacts represents one of the most important issues in conservation research. Metal carboxylates resulting from the interaction between pigments and binding media are characterized using synthesised metal palmitates, and their production is detected on copper-, zinc-, manganese- and lead- (associated with lead carbonate) based pigments dispersed either in oil or in egg tempera. Moreover, significant effects seem to be obtained with iron and cobalt (acceleration of the hydrolysis of the triglycerides). For the first time, manganese carboxylates are also observed on sienna and umber paints. Finally, in chapter II-3, FTIR microscopy is combined with further elemental analyses to characterise and assess the performance and stability of newly developed treatments, which should better address conservation-restoration problems. In the second part, chapter III-1 reports an innovative embedding system in potassium bromide, focusing on the characterisation and localisation of organic substances in cross sections. Not only the identification but also the distribution of proteinaceous, lipidic or resinaceous materials is evidenced directly on different paint cross sections, especially in thin layers on the order of 10 µm. Chapter III-2 describes the use of a conventional diamond ATR accessory coupled with a focal plane array to obtain chemical images of multi-layered paint cross sections. A rapid and simple identification of the different compounds is achieved without the use of any infrared microscope objectives. Finally, the latest FTIR techniques available are highlighted in chapter III-3 in a comparative study for the characterisation of paint cross sections. The results obtained in terms of spatial resolution, data quality and chemical information are presented; in particular, a new FTIR microscope equipped with a linear array detector, which permits reducing the spatial resolution limit to approximately 5 µm, provides very promising results and may represent a good alternative to either mapping or imaging systems.

Relevance: 100.00%

Abstract:

Monte Carlo (MC) simulation techniques are becoming very common in the medical physics community. MC can be used for modeling Single Photon Emission Computed Tomography (SPECT) and for dosimetry calculations. 188Re is a promising candidate for radiotherapy, and understanding the mechanisms of the radioresponse of tumor cells "in vitro" is of crucial importance as a first step before "in vivo" studies. The dosimetry of 188Re, used to target different cancer cell lines, has been evaluated with the MC code GEANT4. The simulations estimate the average energy deposition per event in the biological samples. The development of prototypes for medical imaging, based on LaBr3:Ce scintillation crystals coupled with a position-sensitive photomultiplier, has been studied using GEANT4 simulations. Having tested, in the simulation, surface treatments different from the one applied to the crystal used in our experimental measurements, we found that the energy resolution (ER) and the spatial resolution (SR) could in principle be improved by machining the lateral surfaces of the crystal in a different way. We have then studied a system able to acquire both echographic and scintigraphic images, to give the medical operator the complete anatomical and functional information for tumor diagnosis. The scintigraphic part of the detector is simulated with GEANT4, and first attempts to reconstruct tomographic images have been made using a standard back-projection algorithm as the reconstruction method. The proposed camera is based on slant collimators and LaBr3:Ce crystals. Within the field of view (FOV) of the camera, it is possible to distinguish point sources located in air at a distance of about 2 cm from each other. Under particular conditions of uptake, tumor depth and dimension, the preliminary results show that the signal-to-noise ratio (SNR) values obtained are higher than the standard detection limit.
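Unfiltered back-projection, the standard algorithm mentioned above, smears each projection value back across the image along its acquisition direction and averages over the views. A minimal parallel-beam sketch in NumPy (illustrative only; the actual camera uses slant collimators, which change the geometry):

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered back-projection of a parallel-beam sinogram.

    sinogram:   (n_angles, n_detectors) array of projection values
    angles_deg: acquisition angle of each projection, in degrees
    size:       side length of the square output image
    """
    n_det = sinogram.shape[1]
    # Image-plane coordinates centered on the rotation axis.
    xs = np.arange(size) - size / 2.0
    x, y = np.meshgrid(xs, xs)
    image = np.zeros((size, size))
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate hit by each pixel for this view.
        s = x * np.cos(theta) + y * np.sin(theta)
        idx = np.clip(np.round(s + n_det / 2.0).astype(int), 0, n_det - 1)
        image += proj[idx]   # smear the projection back across the image
    return image / len(angles_deg)
```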

Relevance: 100.00%

Abstract:

The study concerns a survey of the historical dimension-stone quarries in the province of Bologna, starting from around 1870, the date of the first written records of Bolognese quarries. The research could count on the help of Dr. Stefano Segadelli and Dr. Maria Teresa De Nardo, geologists of the Emilia-Romagna region, who made their knowledge and the region's publications available for this purpose. It emerged that no localization of these quarries exists in the literature, so an attempt was made to georeference them using the ArcGIS software, enriching them with the information collected during the research. At present there are no active dimension-stone quarries in Bologna, so all the sources encountered provided partial data which, taken together, yielded a satisfactory overview of the situation at the beginning of the last century. The sources studied were, in brief: the quarry register (catasto cave) of the Emilia-Romagna region, the pre-existing shapefiles with quarry locations, the "Uso del Suolo" (land use) publications, and the data provided by the technical offices of the municipalities in which the quarries were active. Four lithotypes were quarried in the province: sandstone, limestone, gypsum and ophiolite. Ophiolite was worked only sporadically, and such workings are unlikely to be repeated given the risk of encountering asbestos in these formations; it is therefore probable that no new quarries will be opened. Gypsum was a great resource at the end of the nineteenth century, with many quarries open in the Vena del Gesso. That area has become the Parco dei Gessi Bolognesi, leaving to the Borgo Rivola quarry the task of meeting the regional demand. Limestone is mostly used as aggregate, but there are also formations suitable for use as blocks. The true protagonist of the Bolognese panorama remains sandstone, which has always been used to build the towns and cities of the province. Its quarries, numerous and of small size, are very often difficult to find because of their subsequent renaturalization. There are, however, prospects of seeing quarries of this material reopened at Monte Finocchia, through the stabilization of a landslide, and perhaps also through the will of the mayors of mountain communities who are sensitive to this subject. To obtain a "live" description of the current situation, Dr. Maurizio Aiuola, geologist of the Province of Bologna, and the surveyor Massimo Romagnoli of the Emilia-Romagna region were interviewed; they provided an exhaustive overview of the problems that led the region to concentrate extraction in a few large sites rather than many quarries of modest size, and of the future possibilities. The large quarries, being few, are more easily monitored by the region and more easily restored, given the economic resources of their operators. One of the problems that emerged, which works against the opening of smaller extraction areas, is the fierce competition from foreign materials, which cost, for the same quality, about half as much as the Italian material. An example of this could be examined in the municipality of Sestola (MO), where, thanks to the help and explanations of the surveyor Edo Giacomelli, it was documented how the imported granite and pietra di Luserna in use meet the requirements of strength and frost resistance that a country subject to harsh winters demands of its stones, unlike some sandstones already in place coming from the municipality of Bagno di Romagna.

In the light of this example, a brief LCA of this trade was carried out using the SimaPro software, with the help of the engineer Cristian Chiavetta, in which the transport of 1000 m³ of sandstone from Shanghai (China) to Bologna and from Karachi (Pakistan) to Bologna was hypothesized and compared with the emissions involved in transporting the same quantity of material from the municipality of Monghidoro (BO) to the centre of Bologna. As expected, transport from distant countries entails an environmental impact hardly comparable with the local one, in terms of consumption of organic and inorganic resources and the consequent greenhouse gas emissions. The reopening of local quarries was therefore hypothesized, not for construction but for restoration purposes: there are in fact many listed buildings and monuments in the province, and when these have to be restored, where should the necessary material, matching the stone already in place, be quarried? On this point, two further interviews were carried out, with Professor Francesco Eleuteri, architect at the Soprintendenza dei Beni Culturali in Bologna, and Professor Gian Carlo Grillini, geologist-petrographer and restoration expert. What emerged is that there is currently no satisfactory overview of the stone heritage of the province: besides the georeferencing, an adequate mineralogical-petrographic and physical-mechanical characterization of what was quarried in the past is missing. The hypothesis of reopening quarries for restoration purposes is conceivable, but it does not seem to be the greatest need at present, since restoration is mostly carried out without substitutions or integrations, except in rare cases. It is nevertheless always useful to have a map at hand that can correlate a historical building with the extraction area of its material, and both professors therefore hoped for a continuation of the research. It can be concluded that the research can continue with a better and more effective localization of the quarries on the ground, also drawing on the knowledge of the local population, and with a practical part concerning the mineralogical-petrographic and physical-mechanical characterization. These data can be useful when historical research is carried out on the artistic heritage of Bologna, and should the reopening of an extraction area be hypothesized.
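The order of magnitude of the transport comparison can be reproduced with a back-of-the-envelope calculation: tonnes moved × distance × an emission factor per tonne-kilometre for each transport mode. All figures in the sketch below (density, distances, emission factors) are rough illustrative assumptions, not SimaPro outputs.

```python
# Rough comparison of transport emissions for 1000 m^3 of sandstone, in the
# spirit of the LCA described above. All numbers are illustrative assumptions:
# density, distances and emission factors would need to be sourced properly.

DENSITY_T_PER_M3 = 2.3          # assumed sandstone density, t/m^3
VOLUME_M3 = 1000.0
mass_t = DENSITY_T_PER_M3 * VOLUME_M3

# (route, distance_km, g CO2 per tonne-km) -- ocean freight vs local truck.
routes = [
    ("Shanghai -> Bologna (ship)", 16000.0, 15.0),
    ("Karachi -> Bologna (ship)", 8000.0, 15.0),
    ("Monghidoro -> Bologna (truck)", 40.0, 90.0),
]

for name, dist_km, ef_g_tkm in routes:
    co2_t = mass_t * dist_km * ef_g_tkm / 1e6   # grams -> tonnes
    print(f"{name}: {co2_t:,.0f} t CO2")
```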