908 results for group-theoretical methods
Abstract:
This research aimed to extend knowledge and understanding of young people in stigmatized areas and their construction of group identity. With a focus on Roma youths in Konik, Montenegro, and their involvement in hip-hop, we wanted to explore what this culture meant to them in relation to their context. An ethnographic approach was used to collect the empirical data, through observations, interpretation of music lyrics and qualitative semi-structured interviews. Five young Roma boys from Konik, all involved in hip-hop, were interviewed. Theoretical perspectives on identity, youth culture and stigmatization were central. In addition, Bourdieu's theory of cultural capital was emphasized and connected to youth and hip-hop. The empirical material showed that involvement in hip-hop provided the Roma youths with a group identity that they referred to in positive terms. Contextual factors of stigmatization excluded the Roma group from the majority population, and the engagement in hip-hop created a possibility for the youths to be someone. The cultural capital gained through hip-hop was not used to verify and legitimize an authentic Roma identity; rather, it was a way for them to create boundaries towards the negative elements in their community.
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies; free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to decide which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).
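As an illustration of the peak-alignment idea, the following is a minimal correlation-based sketch: a sample chromatographic profile is shifted by the lag that maximizes its correlation with a reference profile. It is a generic illustration of restoring bilinearity by alignment, not the specific method proposed in papers III and IV; all names and numbers are hypothetical.

```python
import numpy as np

def align_to_reference(reference, sample, max_shift=50):
    """Shift `sample` so that it best overlaps `reference`.

    Generic correlation-based peak alignment: the lag that maximizes the
    cross-correlation within +/- max_shift points is applied to the sample.
    """
    lags = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(reference, np.roll(sample, lag)) for lag in lags]
    best_lag = lags[int(np.argmax(scores))]
    return np.roll(sample, best_lag), best_lag

# Toy example: two Gaussian peaks, one shifted by 7 points.
t = np.arange(500)
reference = np.exp(-0.5 * ((t - 200) / 10.0) ** 2)
sample = np.exp(-0.5 * ((t - 207) / 10.0) ** 2)
aligned, lag = align_to_reference(reference, sample)
print("estimated shift:", lag)   # -7 (the sample is moved back by 7 points)
```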
Abstract:
The age and growth of the sand sole Pegusa lascaris from the Canarian Archipelago were studied from 2107 fish collected between January 2005 and December 2007. To find an appropriate method for age determination, sagittal otoliths were observed by surface reading and by frontal sectioning, and the results were compared. The two methods did not differ significantly in estimated age, but the surface-reading method is superior in terms of cost and time efficiency. The sand sole has a moderate life span, with ages up to 10 years recorded. Individuals grow quickly in their first two years, attaining approximately 48% of their maximum standard length; after the second year, their growth rate drops rapidly as energy is diverted to reproduction. Males and females show dimorphism in growth, with females reaching a slightly greater length and age than males. Von Bertalanffy, seasonalized von Bertalanffy, Gompertz, and Schnute growth models were fitted to length-at-age data. Akaike weights for the seasonalized von Bertalanffy growth model indicated that the probability of choosing the correct model from the group of models used was >0.999 for males and females. The seasonalized von Bertalanffy growth parameters estimated were: L∞ = 309 mm standard length, k = 0.166 yr⁻¹, t0 = −1.88 yr, C = 0.347, and ts = 0.578 for males; and L∞ = 318 mm standard length, k = 0.164 yr⁻¹, t0 = −1.653 yr, C = 0.820, and ts = 0.691 for females. Fish standard length and otolith radius are closely correlated (R² = 0.902). The relation between standard length and otolith radius is described by a power function (a = 85.11, v = 0.906).
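For reference, below is a minimal sketch of the Somers-type seasonalized von Bertalanffy growth function evaluated with the male parameters reported above; the exact parameterization fitted in the study may differ slightly from this common form.

```python
import numpy as np

def seasonalized_vbgf(t, L_inf, k, t0, C, ts):
    """Somers-type seasonalized von Bertalanffy growth function.

    L(t) = L_inf * (1 - exp(-k*(t - t0) - S(t) + S(t0)))
    with S(x) = (C*k / (2*pi)) * sin(2*pi*(x - ts)).
    """
    S = lambda x: (C * k / (2 * np.pi)) * np.sin(2 * np.pi * (x - ts))
    return L_inf * (1.0 - np.exp(-k * (t - t0) - S(t) + S(t0)))

# Male parameters reported in the abstract (standard length in mm, age in years).
ages = np.linspace(0, 10, 11)
lengths = seasonalized_vbgf(ages, L_inf=309, k=0.166, t0=-1.88, C=0.347, ts=0.578)
print(np.round(lengths, 1))
```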
Abstract:
Slope failure occurs in many areas throughout the world, and it becomes an important problem when it interferes with human activity, as the resulting disasters cause loss of life and property damage. In this research we investigate slope failure through centrifuge modeling, in which a reduced-scale model, N times smaller than the full-scale prototype, is used while the acceleration is increased by N times (compared with the gravitational acceleration) to preserve the stress and strain behavior. In brief, the aims of this research, "Centrifuge modeling of sandy slopes", are: 1) to test the reliability of centrifuge modeling as a tool to investigate the behavior of sandy slope failure; 2) to understand how the failure mechanism is affected by changing the slope angle and to obtain useful information for design. To achieve this goal the work is arranged as follows. Chapter one: centrifuge modeling of slope failure. This chapter provides a general view of the context of the work: we explain what a slope failure is, how it happens and which tools are available to investigate this phenomenon. We then introduce the technology used to study this topic, the geotechnical centrifuge. Chapter two: testing apparatus. In the first section of this chapter we describe all the procedures and facilities used to perform a test in the centrifuge. We then explain the characteristics of the soil (Nevada sand), such as the dry unit weight, water content and relative density, and its strength parameters (c, φ), which were determined in the laboratory through triaxial tests. Chapter three: centrifuge tests. This part of the document presents all the results from the centrifuge tests, namely the acceleration at failure for each model tested and its failure surface. In our case study we tested models with the same soil and geometric characteristics but different angles; the angles tested in this research were 60°, 75° and 90°. Chapter four: slope stability analysis. We introduce the features and the concept of the software ReSSA (2.0), which allows us to calculate the theoretical failure surfaces of the prototypes. We then show the comparisons between the experimental failure surfaces of the prototypes, traced in the laboratory, and those calculated by the software. Chapter five: conclusion. The conclusion of the research presents the results obtained in relation to the two main aims mentioned above.
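A minimal sketch, with assumed numbers, of the similarity principle that centrifuge modeling relies on: a model N times smaller than the prototype, spun at N times gravity, reproduces the prototype stress at homologous points. The unit weight, slope height and scale factor below are illustrative only.

```python
def vertical_stress(unit_weight, depth, g_level=1.0):
    """Vertical stress at a given depth: sigma_v = g_level * gamma * depth."""
    return g_level * unit_weight * depth

# Hypothetical prototype: 6 m high sandy slope, dry unit weight 16 kN/m^3.
gamma = 16.0          # kN/m^3 (assumed, for illustration only)
H_prototype = 6.0     # m
N = 50                # scale factor: model is N times smaller, spun at N g

sigma_prototype = vertical_stress(gamma, H_prototype)             # at 1 g
sigma_model = vertical_stress(gamma, H_prototype / N, g_level=N)  # at N g
print(sigma_prototype, sigma_model)   # identical: 96.0 kN/m^2 each
```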
Abstract:
Motivation: A topical issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information that they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, by taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, showing that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences of 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
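A minimal sketch of the kind of analysis mentioned above, applying Principal Component Analysis to a codon-composition matrix with scikit-learn; the codon-usage data below is randomly generated as a stand-in, so the example only illustrates the workflow, not the thesis's dataset or results.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical codon-usage matrix: one row per coding sequence,
# one column per codon (64 columns), values are relative frequencies.
rng = np.random.default_rng(0)
codon_freqs = rng.dirichlet(np.ones(64), size=200)   # stand-in for real data

pca = PCA(n_components=2)
scores = pca.fit_transform(codon_freqs)

# In a cross-genomic analysis of this kind, sequences from thermophilic and
# mesophilic genomes would be inspected in this reduced space to see whether
# codon composition alone separates the two groups.
print("explained variance ratio:", pca.explained_variance_ratio_)
```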
Abstract:
The main problem with cone beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the large amount of scattered radiation that is added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering, and methods to reduce its effects, are therefore necessary to improve the image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results for the environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front face of 70 x 70 mm² and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which could then be patented by Empa. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool for optimizing the design of the CT system and for evaluating the contribution of the scattered radiation to the image. Besides, it has offered a basis for a new scatter correction approach by which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
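As a heavily simplified illustration of projection-level scatter correction, the sketch below subtracts an estimated scatter signal (e.g. from a Monte Carlo model) from each measured projection before log-normalization, which is the basic mechanism by which cupping is reduced; the actual patented correction algorithm developed in the thesis is considerably more elaborate, and all numbers here are invented.

```python
import numpy as np

def correct_projection(measured, scatter_estimate, flat_field, eps=1e-6):
    """Subtract an estimated scatter signal from a measured projection and
    convert to line integrals via the usual log-normalization."""
    primary = np.clip(measured - scatter_estimate, eps, None)
    return -np.log(primary / flat_field)

# Toy numbers: a 4-pixel detector row, flat field of 1000 counts.
measured = np.array([400.0, 350.0, 300.0, 380.0])
scatter = np.array([120.0, 130.0, 135.0, 125.0])   # e.g. from a Monte Carlo model
flat = np.full(4, 1000.0)

print(correct_projection(measured, scatter, flat))
```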
Abstract:
In this thesis, numerical methods aimed at determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that the calculation of modes higher than the fundamental mode becomes possible. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is touched upon. Although the modified power iteration method is a computationally expensive algorithm, its main advantage is its robustness, i.e. the method always converges to the desired eigenfunctions without the user having to set any parameter of the algorithm. On the other hand, the Arnoldi method, which requires some parameters to be defined by the user, is a very efficient method for calculating eigenfunctions of large sparse systems of equations with minimum computational effort. These methods are thereafter used for the off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor using, for instance, the Decay Ratio as a stability indicator might be difficult if the contribution from each of the modes is not separated from the others. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after using the Arnoldi method to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and regional oscillations. Furthermore, such oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
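A minimal numpy sketch of power iteration with deflation, which conveys the basic idea of extracting modes beyond the fundamental one; the thesis's modified power iteration for the two-group diffusion problem (a non-symmetric generalized eigenproblem with adjoint modes) and the Explicitly-Restarted Arnoldi method are of course more involved. The test matrix is a stand-in for a discretized diffusion operator.

```python
import numpy as np

def power_iteration_deflated(A, n_modes=3, iters=2000, tol=1e-10):
    """Dominant eigenpairs of a symmetric matrix A by power iteration,
    deflating previously found eigenvectors at every step."""
    n = A.shape[0]
    vecs, vals = [], []
    for _ in range(n_modes):
        x = np.random.default_rng(0).standard_normal(n)
        for _ in range(iters):
            # Project out the modes already found (deflation).
            for v in vecs:
                x -= np.dot(v, x) * v
            y = A @ x
            y_norm = np.linalg.norm(y)
            if np.linalg.norm(y / y_norm - x) < tol:
                x = y / y_norm
                break
            x = y / y_norm
        vals.append(x @ A @ x)        # Rayleigh quotient of the converged vector
        vecs.append(x)
    return np.array(vals), np.array(vecs)

# Small symmetric test matrix standing in for a discretized diffusion operator.
A = np.diag([5.0, 3.0, 2.0, 1.0])
print(power_iteration_deflated(A, n_modes=2)[0])   # approx [5., 3.]
```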
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers because of its features. P2P networks shift from the classic client-server approach to a high level of decentralization in which there is no central control and all the nodes should be able not only to request services, but to provide them to other peers as well. While on the one hand such a high level of decentralization might lead to interesting properties such as scalability and fault tolerance, on the other hand it introduces many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment systems. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale might consist in obtaining them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria in the game and to reach them so as to make the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both of the methods mentioned above. Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, which consists in having each peer periodically try to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind the algorithm is that low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We focused our efforts on studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (tit-for-tat-like mechanism) to the swarm topology.
We found fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew inspiration from the knowledge of cooperation formation and maintenance mechanisms, derived from the development and analysis of SLAC and SLACER, to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and to tackle free-riding and cheating nodes.
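A hypothetical, much-reduced sketch of the copy-and-mutate dynamics that SLAC-like tag algorithms rely on: each round a randomly chosen node compares its utility with another node and, if the other performs better, copies its strategy and group ("tag"), with occasional mutation. The payoff scheme and parameters are invented for illustration and do not reproduce the published SLAC/SLACER specification.

```python
import random

random.seed(1)
N_NODES, N_GROUPS, MUTATION = 50, 10, 0.01

class Node:
    def __init__(self):
        self.cooperate = random.random() < 0.5   # strategy
        self.group = random.randrange(N_GROUPS)  # "tag" / neighbourhood id
        self.utility = 0.0

def play_round(nodes):
    """Cooperators pay a cost that benefits their group; defectors only collect."""
    for n in nodes:
        peers = [m for m in nodes if m.group == n.group and m is not n]
        helpers = sum(m.cooperate for m in peers)
        n.utility = 1.0 * helpers - (0.5 if n.cooperate else 0.0)

def evolve(nodes):
    """Copy-and-mutate step: a node imitates a better-performing node."""
    a, b = random.sample(nodes, 2)
    if b.utility > a.utility:
        a.cooperate, a.group = b.cooperate, b.group
    if random.random() < MUTATION:
        a.cooperate = not a.cooperate
    if random.random() < MUTATION:
        a.group = random.randrange(N_GROUPS)

nodes = [Node() for _ in range(N_NODES)]
for _ in range(2000):
    play_round(nodes)
    evolve(nodes)
print("cooperators:", sum(n.cooperate for n in nodes), "/", N_NODES)
```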
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data form the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in the assignment of sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and on coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the current databases of molecular functions and structures.
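A minimal sketch of the kind of double constraint described above: a pairwise (e.g. BLAST-derived) hit is retained only if the sequence identity and the alignment coverage on both sequences exceed chosen thresholds, which is what keeps a single shared domain from linking otherwise unrelated multi-domain proteins. The thresholds and field names are illustrative, not those adopted in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    query: str
    subject: str
    identity: float      # percent identity of the alignment
    aln_len: int         # alignment length (residues)
    query_len: int
    subject_len: int

def passes_constraints(hit, min_identity=40.0, min_coverage=0.9):
    """Keep a hit only if identity and coverage on BOTH sequences are high."""
    cov_q = hit.aln_len / hit.query_len
    cov_s = hit.aln_len / hit.subject_len
    return hit.identity >= min_identity and min(cov_q, cov_s) >= min_coverage

hits = [
    Hit("P1", "P2", identity=62.0, aln_len=280, query_len=300, subject_len=290),
    Hit("P1", "P3", identity=55.0, aln_len=120, query_len=300, subject_len=500),
]
print([passes_constraints(h) for h in hits])   # [True, False]
```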
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of kernels in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions that adapt to the statistical properties of the dataset, thus reducing their sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
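For concreteness, below is a compact sketch of the classic Collins-Duffy subset tree kernel, the standard convolution kernel referred to above, with trees encoded as nested tuples and the usual decay factor λ; it illustrates the baseline formulation, not the positional or DAG-based variants proposed in the thesis.

```python
from itertools import product

# Trees encoded as nested tuples: (label, child_1, ..., child_k); leaves are (label,).
T1 = ("S", ("NP", ("D", ("a",)), ("N", ("cat",))), ("VP", ("V", ("sleeps",))))
T2 = ("S", ("NP", ("D", ("a",)), ("N", ("dog",))), ("VP", ("V", ("sleeps",))))

def nodes(tree):
    yield tree
    for child in tree[1:]:
        yield from nodes(child)

def production(node):
    """A node's 'production': its label followed by the labels of its children."""
    return (node[0],) + tuple(child[0] for child in node[1:])

def delta(n1, n2, lam=0.5):
    """Collins & Duffy recursion: weighted count of shared fragments rooted at n1, n2."""
    if len(n1) == 1 or len(n2) == 1:          # bare terminals are not fragments
        return 0.0
    if production(n1) != production(n2):
        return 0.0
    if all(len(c) == 1 for c in n1[1:]):      # pre-terminal node (children are leaves)
        return lam
    result = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        result *= 1.0 + delta(c1, c2, lam)
    return result

def subset_tree_kernel(t1, t2, lam=0.5):
    return sum(delta(n1, n2, lam) for n1, n2 in product(nodes(t1), nodes(t2)))

print(subset_tree_kernel(T1, T2))             # weighted count of shared fragments
```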
Abstract:
This thesis evaluated in vivo and in vitro enamel permeability in different physiological and clinical conditions by means of SEM inspection of replicas of the enamel surface obtained from polyvinyl siloxane impressions subsequently cast in polyether impression material. This technique, non-invasive and risk-free, allows the evaluation of fluid outflow from the enamel surface and is able to detect the presence of small quantities of fluid, visualized as droplets. Fluid outflow on the enamel surface represents enamel permeability. This property is of paramount importance in enamel physiology and pathology, although its effective role in adhesion, caries pathogenesis and prevention is still not fully understood. The aim of the studies proposed was to evaluate changes in enamel permeability under different conditions and to correlate the findings with current knowledge about enamel physiology, caries pathogenesis, and fluoride and etching treatments. To obtain confirmed data, the replica technique was supported by other specific techniques such as Raman and IR spectroscopy and EDX analysis. The first study visualized fluid movement through dental enamel in vivo, confirmed that enamel is a permeable substrate and demonstrated that age and enamel permeability are closely related. Samples from subjects of different ages showed a decreasing number and size of droplets with increasing age: freshly erupted permanent teeth showed many droplets covering the entire enamel surface, and droplets in permanent teeth were prominent along the enamel perikymata. These results, obtained through SEM inspection of replicas, allowed innovative remarks on enamel physiology. An analogous test was developed to evaluate the permeability of primary enamel. The results of this second study showed that primary enamel has a substantial permeability, with droplets covering the entire enamel surface without any specific localization, in accordance with histological features, and without changes during aging or signs of post-eruptive maturation. These results confirmed clinical data showing a higher caries susceptibility for primary enamel and suggested a strong relationship between this susceptibility and enamel permeability. Topical fluoride application represents the gold standard for caries prevention, although the mechanism of the cariostatic effect of fluoride still needs to be clarified. The effects of topical fluoride application on enamel permeability were evaluated. In particular, two different treatments (NaF and APF), with different pH, were examined. The major product of topical fluoride application was the deposition of CaF2-like globules. Inspection of replicas before and after both treatments, at different time intervals and after specific additional clinical interventions, showed that such globules formed in vivo could be removed by professional toothbrushing, sonically, and chemically by KOH. The results obtained in relation to enamel permeability showed that fluoride treatments temporarily reduced enamel water permeability when the CaF2-like globules were removed. The in vivo persistence of decreased enamel permeability after removal of the CaF2 globules was demonstrated for 1 h for NaF-treated teeth and for at least 7 days for APF-treated teeth. Important clinical considerations follow from these results.
In fact, the caries-preventing action of fluoride application may be due, in part, to its ability to decrease enamel water permeability, and CaF2-like globules seem to be indirectly involved in enamel protection over time by maintaining low permeability. Other results, obtained by metallographic microscopy and SEM/EDX analyses of fluoride-releasing and non-fluoride-releasing orthodontic resins, demonstrated the relevance of topical fluoride application in decreasing demineralization marks and modifying the chemical composition of the enamel in the treated area. The data obtained in both experiments confirmed the efficacy of fluoride in caries prevention and contributed to clarifying its mechanism of action. Adhesive dentistry is the gold standard for caries treatment and tooth rehabilitation and is founded on important chemical and physical principles involving both the enamel and dentine substrates. In particular, acid etching of dental enamel is usually employed in bonding procedures to increase microscopic roughness. Different acids have been tested in the literature, suggesting several etching procedures. The acid-induced structural transformations in enamel after different etching treatments were evaluated by means of Raman and IR spectroscopy, and these findings were correlated with enamel permeability. Conventional etching with 37% phosphoric acid gel (H3PO4) for 30 s and etching with 15% HCl for 120 s were investigated. Raman and IR spectroscopy showed that treatment with both hydrochloric and phosphoric acids induced a decrease in the carbonate content of the enamel apatite. At the same time, both acids induced the formation of HPO42- ions. After H3PO4 treatment the bands due to the organic component of enamel decreased in intensity, while they increased after HCl treatment. Replicas of H3PO4-treated enamel showed a strongly reduced permeability, while replicas of the 15% HCl-treated samples showed a maintained permeability. A decrease in the enamel organic component, as obtained after H3PO4 treatment, involves a decrease in enamel permeability, while an increase in the organic matter (achieved by HCl treatment) maintains enamel permeability. These results suggest a correlation between the amount of organic matter, enamel permeability and caries. The results of the different studies carried out in this thesis contributed to clarifying and improving knowledge about enamel properties, with important repercussions for theoretical and clinical aspects of Dentistry.
Abstract:
The theory of the 3D multipole probability tomography method (3D GPT), to image the source poles, dipoles, quadrupoles and octopoles of a geophysical vector or scalar field dataset, is developed. A geophysical dataset is assumed to be the response of an aggregation of poles, dipoles, quadrupoles and octopoles. These physical sources are used to reconstruct, without a priori assumptions, the most probable position and shape of the true buried geophysical sources, by determining the location of their centres and of the critical points of their boundaries, such as corners, wedges and vertices. This theory is then adapted to the geoelectrical, gravity and self-potential methods. A few synthetic examples using simple geometries and three field examples are discussed in order to demonstrate the notably enhanced resolution power of the new approach. First, the application to a field example related to a dipole-dipole geoelectrical survey carried out in the archaeological park of Pompei is presented. The survey was aimed at recognizing remains of the ancient Roman urban network, including roads, squares and buildings, which were buried under the thick pyroclastic cover that fell during the 79 AD Vesuvius eruption. The revealed anomaly structures are ascribed to well-preserved remnants of some aligned walls of Roman edifices, buried and partially destroyed by the 79 AD Vesuvius pyroclastic fall. Then, a field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging as accurately as possible the differential mass density structure within the first few km of depth inside the volcanic apparatus. An assemblage of vertical prismatic blocks appears to be the most probable gravity model of the Etna apparatus within the first 5 km of depth below sea level. Finally, an experimental SP dataset collected in the Mt. Somma-Vesuvius volcanic district (Naples, Italy) is processed in order to define the location and shape of the sources of two SP anomalies of opposite sign detected in the northwestern sector of the surveyed area. The modelled sources are interpreted as the polarization state induced by an intense hydrothermal convective flow mechanism within the volcanic apparatus, from the free surface down to a depth of about 3 km b.s.l.
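A schematic sketch of the normalized cross-correlation on which source occurrence probabilities of this kind are built: the measured anomaly is correlated with the theoretical field of an elementary source placed at each trial position, and the normalized value measures how probable a source at that position is. The 1/r² point-pole kernel and the 1D profile geometry are illustrative assumptions, not the thesis's full 3D multipole formulation.

```python
import numpy as np

# Synthetic 1D "measured" anomaly along a profile x, produced by a buried pole.
x = np.linspace(-50.0, 50.0, 201)            # station positions (m)

def pole_field(x, x0, depth, strength=1.0):
    """Field of an elementary pole at (x0, depth): simple 1/r^2 decay (illustrative)."""
    return strength / ((x - x0) ** 2 + depth ** 2)

observed = pole_field(x, x0=10.0, depth=8.0)

def occurrence_probability(observed, x, trial_x, trial_depth):
    """Normalized cross-correlation between the data and a trial pole's field."""
    s = pole_field(x, trial_x, trial_depth)
    return np.trapz(observed * s, x) / np.sqrt(np.trapz(observed**2, x) * np.trapz(s**2, x))

# Scan trial positions; the maximum should fall near the true source (x0=10, depth=8).
grid = [(tx, td, occurrence_probability(observed, x, tx, td))
        for tx in np.arange(-20, 21, 5.0) for td in (4.0, 8.0, 12.0)]
print(max(grid, key=lambda g: g[2]))
```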
Abstract:
This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides for the development of glycoconjugate vaccines and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate based on its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Based on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with that of the long, isolated GAS polysaccharide. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein. This oligomannose epitope has been a subject of synthetic vaccine development. The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation was important for developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates and their immunogenic properties.
Abstract:
This thesis is focused on the development of heteronuclear correlation methods in solid-state NMR spectroscopy, where the spatial dependence of the dipolar coupling is exploited to obtain structural and dynamical information in solids. Quantitative results on dipolar coupling constants are extracted by means of spinning-sideband analysis in the indirect dimension of the two-dimensional experiments. The principles of sideband analysis were established, and are currently widely used, in the group of Prof. Spiess for the special case of homonuclear 1H double-quantum spectroscopy. The generalization of these principles to the heteronuclear case is presented, with special emphasis on naturally abundant 13C-1H systems. For proton spectroscopy in the solid state, line narrowing is of particular importance and is here achieved by very fast sample rotation at the magic angle (MAS), with frequencies up to 35 kHz. Under these conditions the heteronuclear dipolar couplings are suppressed and have to be recoupled in order to achieve an efficient excitation of the observed multiple-quantum modes. Heteronuclear recoupling is most straightforwardly accomplished by performing the well-known REDOR experiment, in which pi-pulses are applied every half rotor period. This experiment was modified by the insertion of an additional spectroscopic dimension, such that heteronuclear multiple-quantum experiments can be carried out which, as shown experimentally and theoretically, closely resemble homonuclear double-quantum experiments. Variants are presented which are well suited for the recording of high-resolution 13C-1H shift-correlation and spinning-sideband spectra, by means of which spatial proximities and quantitative dipolar coupling constants, respectively, of heteronuclear spin pairs can be determined. Spectral editing of 13C spectra is shown to be feasible with these techniques. Moreover, order phenomena and dynamics in columnar mesophases with 13C in natural abundance were investigated. Two further modifications of the REDOR concept allow the correlation of 13C with quadrupolar nuclei, such as 2H. The spectroscopic handling of these nuclei is challenging in that they cover large frequency ranges, and with the new experiments it is shown how the excitation problem can be tackled or circumvented altogether, respectively. As an example, one of the techniques is used for the identification of a previously unknown motional process of the H-bonded protons in the crystalline parts of poly(vinyl alcohol).
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of the dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study, when excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to a forced-vibration test. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis we decided to use the continuous wavelet transform (CWT), which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping values and vibration modes. The application to ambient vibrations yields accurate modal parameters of the system, although some important observations must be made regarding the damping estimates. The fourth chapter again deals with post-processing the data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests on the Humber Bridge in England, performed by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to establish what type of model is able to capture the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research.
These concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a wavelet-transform-based procedure for identification in tests with unknown input, and finally the problem of the 3D modelling of systems with many degrees of freedom and different types of uncertainty.
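A minimal sketch of the classical FFT-based FRF estimation that the second chapter starts from, using the standard H1 estimator (cross-spectrum divided by input auto-spectrum) on a simulated single-degree-of-freedom response with assumed parameters; the ellipse-based and wavelet-based procedures developed in the thesis go beyond this baseline.

```python
import numpy as np
from scipy import signal

# Simulated SDOF system (assumed values): natural frequency 5 Hz, damping ratio 2 %.
fs, T = 200.0, 60.0
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)
force = rng.standard_normal(t.size)                     # broadband random input

wn, zeta = 2 * np.pi * 5.0, 0.02
sdof = signal.TransferFunction([1.0], [1.0, 2 * zeta * wn, wn ** 2])
_, response, _ = signal.lsim(sdof, force, t)            # displacement output

# H1 estimator: FRF = S_fx / S_ff (cross-spectrum over input auto-spectrum).
f, S_ff = signal.welch(force, fs=fs, nperseg=1024)
_, S_fx = signal.csd(force, response, fs=fs, nperseg=1024)
frf = S_fx / S_ff

peak = f[np.argmax(np.abs(frf))]
print(f"estimated natural frequency ~ {peak:.2f} Hz")   # close to 5 Hz
```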