11 results for Segmented polyurethanes
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The southern Apennines of Italy have experienced several destructive earthquakes in both historic and recent times. The present-day seismicity, characterized by small-to-moderate magnitude earthquakes, was used as a probe to gain a deeper knowledge of the fault structures where the largest earthquakes occurred in the past. To infer a three-dimensional seismic image, both the problem of data quality and the selection of a reliable and robust tomographic inversion strategy were addressed. Data quality was improved by developing optimized procedures for the measurement of P- and S-wave arrival times, through the use of polarization filtering and the application of a refined re-picking technique based on cross-correlation of waveforms. An iterative, linearized, damped tomographic inversion technique, combined with a multiscale inversion strategy, was adopted. The retrieved P-wave velocity model indicates the presence of a strong velocity variation along a direction orthogonal to the Apenninic chain. This variation defines two domains characterized by relatively low and high velocity values, respectively. From the comparison of the inferred P-wave velocity model with a portion of a structural section available in the literature, the high-velocity body was correlated with the Apulia carbonate platform, whereas the low-velocity bodies were associated with the basinal deposits. The deduced Vp/Vs ratio is lower than 1.8 in the shallower part of the model, while at depths between 5 km and 12 km it increases up to 2.1 in correspondence with the area of higher seismicity. This confirms that areas characterized by higher Vp/Vs values are more prone to generate earthquakes, in response to the presence of fluids and higher pore pressures.
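The damped, linearized inversion step described above can be sketched in miniature (a hypothetical two-parameter toy solving the normal equations, not the thesis' actual multiscale implementation):

```python
# Toy sketch of one damped least-squares update: solve
# (G^T G + damping^2 I) dm = G^T r for a 2-parameter model,
# where G is the sensitivity matrix and r the travel-time residuals.

def damped_lsq_step(G, residuals, damping):
    """Return the model perturbation (dm0, dm1) for a 2-column G."""
    a = sum(g[0] * g[0] for g in G) + damping ** 2
    b = sum(g[0] * g[1] for g in G)
    d = sum(g[1] * g[1] for g in G) + damping ** 2
    r0 = sum(g[0] * r for g, r in zip(G, residuals))
    r1 = sum(g[1] * r for g, r in zip(G, residuals))
    det = a * d - b * b          # 2x2 determinant of the damped normal matrix
    return ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)
```

With zero damping and consistent data the step recovers the exact perturbation; the damping term trades resolution for stability, which is why it is combined with a multiscale strategy in practice.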
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into a set of independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax, able to capture a posteriori only the content of a digital document. It is compared with other languages and proposals, in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user advantages, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
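The idea of projecting a document onto a small set of objects and composition rules can be illustrated with a toy validator (the pattern names here are hypothetical simplifications, not the actual seven IML patterns):

```python
# Toy pattern-based document model: a document is a nested
# (pattern, children) tree, valid only if it respects a tiny
# set of composition rules. Pattern names are illustrative.

PATTERNS = {"container", "block", "inline", "atom"}

def validate(node):
    """Check a (pattern, children) tree against the toy composition rules."""
    pattern, children = node
    if pattern not in PATTERNS:
        return False
    if pattern == "atom" and children:
        return False            # atoms are leaves
    if pattern == "inline" and any(c[0] == "block" for c in children):
        return False            # inline content may not nest blocks
    return all(validate(c) for c in children)
```

The appeal of such a projection is exactly what the abstract claims: conversion tools only ever need to handle a handful of cases.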
Abstract:
Research into new biocompatible and easily implantable materials continuously proposes new molecules and substances with biological, chemical and physical characteristics increasingly suited to aesthetic and reconstructive surgery and to the development of biomedical devices such as cardiovascular prostheses. Two classes of polymeric biomaterials seem best to meet these requirements: “hydrogels”, which include polyalkylimide (PAI) and polyvinylalcohol (PVA), and “elastomers”, which include polyurethanes (PUs). Over the last decade the former have found wide application in soft tissue augmentation, owing to their similarity to this tissue in terms of high water content, elasticity and oxygen permeability (Dini et al., 2005). The latter, on the other hand, are widely used in cardiovascular applications (catheters, vascular grafts, ventricular assist devices, total artificial hearts) thanks to their good mechanical properties and hemocompatibility (Zdrahala R.J. and Zdrahala I.J., 1999). In the biocompatibility evaluation of these synthetic polymers, which is important for their potential use in clinical applications, a fundamental aspect is the knowledge of the polymers' cytotoxicity and of the effects of their interaction with cells, in particular with the cell populations involved in inflammatory responses, i.e. monocytes/macrophages. In light of the above, the aim of this study is to understand the in vitro effects of PAI, PVA and PU on three cell lines that represent three different stages of macrophagic differentiation: U937 pro-monocytes, THP-1 monocytes and RAW 264.7 macrophages. Cytotoxicity was evaluated by measuring the rate of viability with MTT and Neutral Red assays, and by morphological analysis under the light microscope in time-course experiments.
The influence of these polymers on monocyte/macrophage activation was evaluated in terms of cell adhesion, monocyte differentiation into macrophages, antigen distribution, aspecific phagocytosis, fluid-phase endocytosis, and release of pro-inflammatory cytokines (TNF-α, IL-1β, IL-6) and nitric oxide (NO). In conclusion, our studies indicate that the three polymeric biomaterials are highly biocompatible, since they scarcely affected the viability of U937, THP-1 and RAW 264.7 cells. Moreover, we found that even though hydrogels and polyurethanes influence monocyte/macrophage differentiation (depending on the particular type of cell and polymer), they are immunocompatible, since they did not induce significantly high cytokine release. For these reasons their clinical applications are strongly encouraged.
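The viability readout from an MTT or Neutral Red assay reduces to a normalization of optical densities; a minimal sketch, with hypothetical values, assuming viability is reported relative to an untreated control:

```python
# Toy viability computation from assay optical densities (ODs).
# Values and blank correction are illustrative assumptions.

def percent_viability(od_treated, od_control, od_blank=0.0):
    """Viability (%) of treated cells relative to untreated control."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)
```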
Abstract:
The OPERA experiment aims at the direct observation of ν_mu -> ν_tau oscillations in the CNGS (CERN Neutrinos to Gran Sasso) neutrino beam produced at CERN; since the ν_e contamination in the CNGS beam is low, OPERA will also be able to study the sub-dominant oscillation channel ν_mu -> ν_e. OPERA is a large-scale hybrid apparatus divided into two supermodules, each equipped with electronic detectors, an iron spectrometer and a highly segmented ~0.7 kton target section made of Emulsion Cloud Chamber (ECC) units. During my research work in the Bologna laboratory I took part in the set-up of the automatic scanning microscopes, studying and tuning the scanning system's performance and efficiency with emulsions exposed to a test beam at CERN in 2007. Once the triggered bricks were distributed to the collaboration laboratories, my work centered on the procedure used for the localization and reconstruction of neutrino events.
Abstract:
The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is its particle identification. The large Time-Of-Flight (TOF) detector is the main particle identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for proton-proton and Pb-Pb beams are detailed as well, with estimates of the efficiencies and sample purities.
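TOF particle identification rests on relating the measured flight time, path length and momentum to the particle mass; a minimal sketch of that kinematic relation (units and values are illustrative assumptions, not ALICE software):

```python
# Toy TOF mass computation: m = p * sqrt((c*t/L)^2 - 1),
# with p in GeV/c, L in metres, t in nanoseconds.
import math

C = 0.299792458  # speed of light in m/ns

def tof_mass(p_gev, path_m, time_ns):
    """Particle mass (GeV/c^2) from momentum, flight path and time."""
    ratio = C * time_ns / path_m
    return p_gev * math.sqrt(ratio * ratio - 1.0)
```

The separation power degrades with momentum because flight-time differences between species shrink, which is why pi/K identification is quoted only up to ~2.5 GeV/c.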
Abstract:
In this study new tomographic models of Colombia were calculated, using the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period, the improvement of the seismic network yielded more stable hypocentral results with respect to older datasets and allowed the computation of new 3D Vp and Vp/Vs models. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia down to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela, due to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of an E-SE subducting Nazca lithosphere beneath central and southern Colombia. The north-south changes in the Wadati-Benioff zone, in the Vp and Vp/Vs patterns and in volcanism show that the downgoing plate is segmented by E-W directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector represent most of the Colombian seismicity and are concentrated in the 100-170 km depth interval, beneath the Eastern Cordillera. Here a massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, show, conversely, a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here a "normally thickened" oceanic crust is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.
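The Vp/Vs ratio underlying such models is classically estimated from paired P- and S-travel times; a minimal Wadati-style sketch (a toy simplification, not the thesis' full 3D tomographic estimate):

```python
# Toy Wadati-style estimate: with travel times measured from the
# origin time, ts - tp = (Vp/Vs - 1) * tp, so a least-squares fit
# through the origin of (ts - tp) vs tp yields Vp/Vs.

def wadati_vpvs(tp, ts):
    """Estimate Vp/Vs from matched P and S travel-time lists."""
    num = sum(p * (s - p) for p, s in zip(tp, ts))
    den = sum(p * p for p in tp)
    return 1.0 + num / den
```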
Abstract:
The Zero Degree Calorimeter (ZDC) of the ATLAS experiment at CERN is placed in the TAN of the LHC collider, covering the pseudorapidity region above 8.3. It is composed of two calorimeters, each longitudinally segmented into four modules, located at 140 m from the IP exactly on the beam axis. The ZDC can detect neutral particles during pp collisions and is a tool for diffractive physics. Here we present results on the forward photon energy distribution obtained using p-p collision data at sqrt{s} = 7 TeV. First, pi0 reconstruction is used to calibrate the detector with photons; then we show results on the forward photon energy distribution in p-p collisions, together with the same distribution obtained using MC generators. Finally, a comparison between data and MC is shown.
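The pi0-based calibration relies on reconstructing the diphoton invariant mass; a minimal sketch of the underlying kinematic relation (the actual ZDC calibration procedure is of course more involved):

```python
# Toy diphoton invariant mass for two massless photons:
# m = sqrt(2 * E1 * E2 * (1 - cos(theta))), energies in GeV.
import math

def diphoton_mass(e1, e2, opening_angle_rad):
    """Invariant mass (GeV) of a photon pair from energies and opening angle."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle_rad)))
```

Calibrating on the known pi0 mass peak fixes the photon energy scale, after which the forward photon energy spectrum can be compared with MC.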
Abstract:
Since the first subdivisions of the brain into macro-regions, it has always been assumed a priori that, given the heterogeneity of neurons, different areas host specific functions and process unique information in order to generate behaviour. Moreover, the various sensory inputs coming from different sources (eye, skin, proprioception) flow from one macro-area to another, being constantly computed and updated. Therefore, especially for non-contiguous cortical areas, one does not expect to find the same information. From this point of view, it would seem inconceivable that the motor and parietal cortices, which differ in the information they encode and in their anatomical position in the brain, could show very similar neural dynamics. In the present thesis, by analyzing the population activity of parietal areas V6A and PEc with machine learning methods, we argue that such a simplified view of brain organization does not reflect the actual neural processes. We reliably detected a number of neural states that were tightly linked to distinct periods of the task sequence, i.e. the planning and execution of movement and the holding of the target, as already observed in motor cortices. The states before and after the movement could be further segmented into two states related to different stages of movement planning and arm posture processing. Rather unexpectedly, we found that activity during the movement could be parsed into two states of equal duration, temporally linked to the acceleration and deceleration phases of the arm. Our findings suggest that, at least during arm reaching in 3D space, the posterior parietal cortex (PPC) shows low-level population neural dynamics remarkably similar to those found in the motor cortices. In addition, the present findings suggest that computational processes in the PPC could be better understood if studied with a dynamical systems approach rather than as a mosaic of single units.
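State segmentation of population activity can be illustrated with a deliberately simple toy: a one-dimensional two-state clustering of a population-rate time series (the thesis uses richer population-level machine learning methods; this is only a hypothetical sketch):

```python
# Toy two-state segmentation: a 1-D k-means that labels each time
# bin of a population firing-rate trace as 'low' (0) or 'high' (1).

def two_state_labels(rates, iters=20):
    """Assign each time bin to one of two states by alternating
    nearest-mean labeling and mean updates."""
    lo, hi = min(rates), max(rates)
    labels = [0] * len(rates)
    for _ in range(iters):
        labels = [0 if abs(r - lo) <= abs(r - hi) else 1 for r in rates]
        n0, n1 = max(1, labels.count(0)), max(1, labels.count(1))
        lo = sum(r for r, l in zip(rates, labels) if l == 0) / n0
        hi = sum(r for r, l in zip(rates, labels) if l == 1) / n1
    return labels
```

Contiguous runs of one label then play the role of "neural states" tied to task epochs, in the spirit of the analysis described above.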
Abstract:
Synthetic polymers constitute a wide class of materials which has enhanced the quality of human life, providing comfort and innovation. However, increasing production and incorrect waste management are leading to the occurrence of polymers in the environment, generating concern. To understand the extent of this issue, analytical investigation holds an essential position. Standardised methods have not been established yet, and additional studies are required to improve the present knowledge. The main aim of this thesis was to provide comprehensive information about the potential of pyrolysis coupled with gas chromatography and mass spectrometry (Py-GC-MS) for polymer investigation, from characterisation to identification and quantification in complex matrices. Water-soluble polymers (poly(dimethylsiloxanes), PDMS, bearing poly(ethylene glycol), PEG, side chains) and water-insoluble polymers (microplastics, MPs, and bioplastics) were studied. The different studies revealed the possibility of identifying heterogeneous classes of polymers, fingerprinting the presence of PDMS copolymers and distinguishing chemically different polyurethanes (PURs). The occurrence of secondary reactions during the pyrolysis of polymer mixtures was observed as a possible drawback; pyrolysis products indicative of secondary reactions and their reaction mechanisms were identified. Py-GC-MS also proved fundamental for identifying the polymers composing commercial bioplastic items. The results helped identify chemicals that have the potential to migrate into seawater. Investigations of environmental samples demonstrated the capability of Py-GC-MS to provide reliable, reproducible and comparable results on polymers in complex matrices (PEG-PDMS in sewage sludges; PURs and other MPs in road dusts and spider webs).
Critical issues were found especially in quantitation, such as the retrieval of reference materials, the construction of reliable calibration protocols and the occurrence of bias due to interferences between pyrolysis products. This thesis pursues the broader goal of developing harmonised and standardised methods for the environmental investigation of polymers, which is fundamental to assess the real state of the environment.
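Quantitation by Py-GC-MS typically rests on an external calibration of marker-peak area against standard mass; a minimal sketch (hypothetical function and data, not the thesis' calibration protocol):

```python
# Toy external calibration: least-squares line of pyrolysis-marker
# peak area vs. known standard mass, inverted for an unknown sample.

def quantify(areas_std, masses_std, area_unknown):
    """Return the estimated mass corresponding to area_unknown."""
    n = len(masses_std)
    mx = sum(masses_std) / n
    my = sum(areas_std) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(masses_std, areas_std))
             / sum((x - mx) ** 2 for x in masses_std))
    intercept = my - slope * mx
    return (area_unknown - intercept) / slope
```

The quantitation pitfalls noted above (matrix interferences, biased calibration standards) show up here as distortions of the slope and intercept.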
Abstract:
Background: There is wide variation in the recurrence risk of Non-Small-Cell Lung Cancer (NSCLC) within the same Tumor Node Metastasis (TNM) stage, suggesting that other parameters are involved in determining this probability. Radiomics allows the extraction of quantitative information from images that can be used for clinical purposes. The primary objective of this study is to develop a radiomic prognostic model that predicts 3-year disease-free survival (DFS) of resected Early Stage (ES) NSCLC patients. Material and Methods: 56 pre-surgery non-contrast Computed Tomography (CT) scans were retrieved from the PACS of our institution and anonymized. They were then automatically segmented with an open-access deep learning pipeline and reviewed by an experienced radiologist to obtain 3D masks of the NSCLC. Images and masks underwent resampling, normalization and discretization. From the masks, hundreds of Radiomic Features (RF) were extracted using PyRadiomics. The RF were then reduced to select the most representative features, and the remaining RF were used in combination with clinical parameters to build a DFS prediction model using leave-one-out cross-validation (LOOCV) with Random Forest. Results and Conclusion: Poor agreement between the radiologist and the automatic segmentation algorithm (Dice score of 0.37) was found. Therefore, another experienced radiologist manually segmented the lesions, and only stable and reproducible RF were kept. 50 RF demonstrated a high correlation with DFS, but only one was confirmed when clinicopathological covariates were added: Busyness, a Neighbouring Gray Tone Difference Matrix feature (HR 9.610). 16 clinical variables (which comprised TNM) were used to build the LOOCV model, demonstrating a higher Area Under the Curve (AUC) when RF were included in the analysis (0.67 vs 0.60), but the difference was not statistically significant (p=0.5147).
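The segmentation agreement quoted above is the Dice similarity coefficient; a minimal sketch of its computation on binary masks (flattened to 0/1 sequences for simplicity):

```python
# Toy Dice similarity: 2|A ∩ B| / (|A| + |B|) for binary masks.

def dice(mask_a, mask_b):
    """Dice coefficient of two equal-length binary (0/1) masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

A value of 0.37, as reported for the automatic pipeline, indicates substantial mismatch between the two segmentations, motivating the manual re-segmentation.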