894 results for Moduli in modern mapping theory


Relevance: 100.00%

Abstract:

This research aimed to extend knowledge and understanding of young people in stigmatized areas and their construction of group identity. Focusing on Roma youths in Konik, Montenegro, and their involvement in hip-hop, we wanted to explore what this culture meant to them in relation to their context. An ethnographic approach was used to collect the empirical data, through observations, interpretation of music lyrics, and qualitative semi-structured interviews. Five young Roma boys from Konik, all involved in hip-hop, were interviewed. Theoretical perspectives on identity, youth culture and stigmatization were central. In addition, Bourdieu's theory of cultural capital was emphasized and connected to youth and hip-hop. The empirical material showed that involvement in hip-hop provided the Roma youths with a group identity that they referred to in positive terms. Contextual factors of stigmatization excluded the Roma group from the majority population, and engagement in hip-hop created a possibility for the youths to be someone. The cultural capital gained through hip-hop was not used to verify and legitimate an authentic Roma identity; rather, it was a way for them to create boundaries against the negative elements in their community.

Relevance: 100.00%

Abstract:

Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, switching chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores which can open and close in response to different stimuli (gating), and once open select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and on the numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels, because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive, and was used for an extensive analysis of the molecular determinants of the channel conductance.

The first record of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents. The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling: once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. Such physically based models can provide an atomic description of ion channel functioning and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level.

In Chapter 3 and Chapter 4 ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged: the atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single channel conductance.

The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs is compared. Both experimental measurements and molecular modeling were used to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule.

Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be used to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured, yet all the available crystallographic structures of KcsA refer to a channel occupied by potassium ions, and to conduct water molecules potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation were still unknown, and were investigated by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions; however, the simulation of such an extreme condition could help to identify the structural conformations, and so the functional states, accessible to potassium ion channels.

The last chapter of the thesis deals with the atomic structure of the alpha-Hemolysin channel. Alpha-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible usage in technological applications. The atomic structure of alpha-Hemolysin was revealed by X-ray crystallography, but several pieces of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single channel currents and numerical simulations.

This thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
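For reference, the steady-state Poisson-Nernst-Planck system underlying Chapters 3-4 couples a drift-diffusion flux for each ion species to the electrostatic potential. In standard notation (concentrations c_i, valences z_i, diffusion coefficients D_i), and as a reference form rather than the thesis's exact discretization:

\[
\mathbf{J}_i = -D_i\!\left(\nabla c_i + \frac{z_i e}{k_B T}\, c_i \nabla\phi\right),\qquad
\nabla\cdot\mathbf{J}_i = 0,\qquad
\nabla\cdot(\varepsilon\nabla\phi) = -e\sum_i z_i c_i - \rho_{\mathrm{fixed}},
\]

where rho_fixed is the permanent charge of the protein; the channel current then follows by integrating the ionic fluxes over a cross-section of the pore.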

Relevance: 100.00%

Abstract:

In this work we aim to propose a new approach to preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue by a decision-oriented method focused on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to keep. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model gives the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities p̂_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of p̂_i's relative to areas declared at high risk (where the null hypothesis is rejected), by averaging the p̂_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat is not higher than a prefixed value; we call these FDR-hat based decision (or selection) rules.

The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate, over-estimation of the FDR causing a loss of power and under-estimation of the FDR producing a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation on sets constituted by all the p̂_i's below a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high; in such scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
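A minimal sketch of the FDR-hat selection rule described above, assuming the per-area posterior null probabilities p̂_i have already been estimated by MCMC (function and variable names are illustrative, not the thesis's actual code):

```python
import numpy as np

def fdr_select(p_null, alpha=0.05):
    """Select high-risk areas so that the estimated FDR stays at or below alpha.

    p_null: posterior probabilities of the null hypothesis (absence of risk)
            for each area, e.g. estimated by MCMC.
    Returns the indices of the areas declared at high risk.
    """
    p = np.asarray(p_null, dtype=float)
    order = np.argsort(p)                      # strongest rejections first
    # cumulative mean of the p_null's over the rejected set = FDR-hat
    running_fdr = np.cumsum(p[order]) / np.arange(1, p.size + 1)
    k = int(np.sum(running_fdr <= alpha))      # largest set with FDR-hat <= alpha
    return order[:k]
```

Because the cumulative mean of the sorted p̂_i's is nondecreasing, taking the largest prefix keeps the estimated FDR, i.e. the average posterior null probability over the rejected set, at or below the prefixed level.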

Relevance: 100.00%

Abstract:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi; this shows up, for instance, in the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory for calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies for higher-order process calculi must be carried out directly on them, exploiting their peculiarities. This dissertation contributes to such foundational studies. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained. We develop the basic theory of this core calculus and rely on it to study the expressive power of constructs universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
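For orientation, a core process-passing calculus of the kind described (in the spirit of HOcore) can be given by a grammar along the following lines; the concrete syntax here is an illustrative assumption, not necessarily the dissertation's:

\[
P \;::=\; a(x).\,P \;\mid\; \overline{a}\langle P\rangle \;\mid\; P \parallel P \;\mid\; x \;\mid\; \mathbf{0}
\]

with input prefix, output of a process, parallel composition, process variables, and the inactive process. Communication substitutes the emitted process for the bound variable, mirroring beta-reduction in the lambda-calculus.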

Relevance: 100.00%

Abstract:

The thesis begins by comparing specific regularization methods in quantum field theory with the Epstein-Glaser procedure for the perturbative construction of the S-matrix. Since the Epstein-Glaser procedure can itself be used as a regularization scheme and is, moreover, based exclusively on physically motivated postulates, this comparison yields a criterion for the admissibility of other regularization methods. Besides establishing this admissibility, a further essential result of the comparison is a new, practically applicable and consistent regularization scheme, the modified BPHZ scheme. It is demonstrated on one-loop diagrams from QED (electron self-energy, vacuum polarization and vertex correction). In contrast to the widely used dimensional regularization, this scheme is applicable without restriction to chiral theories as well; as an example, the U(1) anomaly arising in an axial extension of the QED Lagrangian density is computed. At the level of multi-loop diagrams, the comparison of the Epstein-Glaser construction with the well-known BPHZ scheme on several examples from Phi^4 theory, among them the so-called sunrise diagram, shows that the subdiagrams contributing to the regularization according to the forest formula of the BPHZ scheme can be restricted to a smaller class. This result is likewise relevant for the practice of regularization, since it already leads to a simplification at the level of the subdiagrams to be taken into account.
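For context, the BPHZ forest formula mentioned above subtracts subdivergences via Taylor operators applied to the integrands of renormalization parts; in Zimmermann's standard notation (a reference form, not the thesis's modified scheme):

\[
R_\Gamma \;=\; \sum_{U \in \mathcal{F}(\Gamma)} \prod_{\gamma \in U} \left(-t^{\,d(\gamma)}\right) I_\Gamma ,
\]

where the sum runs over all forests U of renormalization parts gamma (the empty forest contributing the unsubtracted integrand I_Gamma) and t^{d(gamma)} denotes the Taylor subtraction to the superficial degree of divergence of gamma. The thesis's result restricts the class of subdiagrams that must be included in this sum.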

Relevance: 100.00%

Abstract:

The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon, by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We were able to show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
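The basic Mellin-Barnes splitting underlying such a rewriting is standard; for Re(lambda) > 0 and a contour separating the poles of Gamma(lambda+z) from those of Gamma(-z):

\[
\frac{1}{(A+B)^{\lambda}} \;=\; \frac{1}{2\pi i\,\Gamma(\lambda)} \int_{-i\infty}^{+i\infty}\! dz\;\, \Gamma(\lambda+z)\,\Gamma(-z)\, \frac{B^{z}}{A^{\lambda+z}} .
\]

Applying this splitting to the propagator denominators of the two-loop two-point function yields the double Mellin-Barnes representation referred to above; closing the contour and summing residues produces the nested sums handled by nestedsums.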

Relevance: 100.00%

Abstract:

The Dutch astronomer Willem de Sitter is known for his now-famous controversy with Einstein from 1916 to 1918, in which relativistic cosmology was founded. In this context his name is associated with the cosmological model he created, which he devised as a counterexample to Einstein's physical intuition. Although this debate has already been analyzed in works on the history of science, de Sitter's role in the reception and dissemination of general relativity has not yet received the attention it deserves in the mainstream of Einstein studies. The present investigation aims to demonstrate his central importance for research on general relativity within the Leiden community. Like Eddington, de Sitter was one of the few astronomers who combined both sufficient training and the necessary interests to pursue first the special and then the general theory of relativity. He first engaged with the relativity principle (Einstein's first postulate of special relativity) in 1911; two years later he found evidence for the constancy of the speed of light (Einstein's second postulate). De Sitter's interest in theories of gravitation reaches back even further and can be traced to 1908. Moreover, he followed Einstein's attempts to construct a field-theoretic approach to gravitation, including the controversial Einstein-Grossmann theory of 1913. These circumstances show clearly that de Sitter's better-known work on general relativity was a consequence of his preceding research, and not the result of a sudden engagement with Einstein's theory of relativity beginning only in 1916.

Relevance: 100.00%

Abstract:

In modern farming systems, economic interests make reducing the risks related to transport an important goal. Increasing attention is directed to the welfare of animals in transit, also in view of the new existing facilities. In recent years, results from the study of farm animal behaviour have been used as a tool to assess welfare. In this thesis, behavioural patterns were analysed jointly with blood variables to evaluate the stress response of piglets and young bulls during transport. Since animal behaviour can differ between individuals, and these differences can affect animal responses to aversive situations, individual behavioural characteristics were taken into account. For young bulls selected for genetic evaluation, individual behaviour was investigated before, during and after transport, while for piglets a tested classification methodology and behavioural tests were adopted to observe their coping characteristics. The aim of this thesis was to analyse the behavioural and physiological response of young bulls and piglets to transport, and to investigate whether coping characteristics may affect how piglets cope with aversive situations. The thesis is composed of four experimental studies. The first aims to identify the best existing methodology for classifying piglet coping styles among those credited in the literature. The second investigates the differences in response to novel situations of piglets with different coping styles. The last two studies evaluate the stress response of piglets and young bulls to road transportation. The results obtained show that transport did not affect the behaviour and homeostasis of the young animals, which respond in a different way from adults. However, the understanding of individual behavioural characteristics and the use of behavioural patterns, in addition to blood analyses, need further investigation in order to become useful tools for assessing animal responses in aversive situations.

Relevance: 100.00%

Abstract:

The general theme of the present inquiry concerns the role of training and continuous updating of knowledge and skills in relation to the concepts of employability and social vulnerability. The empirical research covered the calendar year 2010, specifically from 13 February 2010 to 31 December 2010; the data refer to a very specific context, namely courses funded by the Emilia-Romagna region and targeted at workers on cassa integrazione in deroga (an exceptional wage-supplementation scheme) domiciled in the region. The investigations were performed in a vocational training body accredited by Emilia-Romagna for the provision of publicly funded training courses. The quantitative data collected are limited to the region and distributed across all the provinces of Emilia-Romagna. The inquiry addresses the role of continuing education throughout life and the importance of updating knowledge and skills, as privileged instruments for coping with the instability of the labor market and as a strategy for reducing the risk of unemployment. Based on the different strategies that employees put in place during their professional careers, we introduce two concepts that have become common in the so-called knowledge society, namely social vulnerability and employability. In modern organizations, what becomes relevant is the knowledge that workers bring and the relationships that develop between people, which allow such knowledge and skills to multiply and spread. Knowledge thus becomes the primary productive force, defined by Davenport and Prusak (1998) as a "fluid combination of experience, values, contextual information and specialist knowledge that provides a framework for the evaluation and assimilation of new experience and new information". Learning at work is not always explicit and conscious, nor enjoyable for everyone, especially outside of a training intervention. The inquiry then addresses the specific issue of training in the current, increasingly deconstructed labor market.

Relevance: 100.00%

Abstract:

The doctoral thesis of Dr. Wu Gongqing is the fruit of three years of study and research conducted using the research facilities of the Fondazione per le Scienze Religiose Giovanni XXIII in Bologna. The aim of the work the candidate presents is to offer a picture of the reception of one of Origen's major works, the Contra Celsum, in the culture of modern Europe. The point of view chosen for this investigation is that of the editions and translations the text underwent from 1481 to the end of the eighteenth century. The structure of the work follows the succession of the various editions, with a chapter devoted to the humanist editions and their impact on Italian and European culture between the fifteenth and sixteenth centuries. There follow chapters devoted to the editions of Hoeschel, Spencer, Bouhéreau and Delarue, Mosheim and Tamburini. In each chapter the researcher examines the various editions and translations, analysing their principal literary characteristics, their style, their relationship to the manuscript tradition and their diffusion, and seeking to place each of them in its specific historical-cultural context.

Relevance: 100.00%

Abstract:

The only nuclear-model-independent method for the determination of nuclear charge radii of short-lived radioactive isotopes is the measurement of the isotope shift. For light elements (Z < 10), extremely high accuracy is required in both experiment and theory, and so far it has been reached only for He and Li. The nuclear charge radii of the lightest elements are of great interest because they have isotopes which exhibit so-called halo nuclei. Such nuclei are characterized by a very exotic nuclear structure: they have a compact core and an area of less dense nuclear matter that extends far from this core. Examples of halo nuclei are 6^He, 8^He, 11^Li and 11^Be, which is investigated in this thesis. Furthermore, these isotopes are of interest because up to now only for such few-nucleon systems can the nuclear structure be calculated ab initio. At the Institut für Kernchemie at the Johannes Gutenberg-Universität Mainz, two approaches with different accuracy were developed, with the goal of measuring the isotope shifts between (7,10,11)^Be^+ and 9^Be^+ in the D1 line. The first approach is laser spectroscopy on laser-cooled Be^+ ions trapped in a linear Paul trap; the accessible accuracy should be on the order of some 100 kHz. In this thesis, two types of linear Paul traps were developed for this purpose. Moreover, the peripheral experimental setup was simulated and constructed; it allows the efficient deceleration of fast ions with an initial energy of 60 keV down to some eV and an efficient transport into the ion trap. For one of the Paul traps ion trapping could already be demonstrated, while the optical detection of captured 9^Be^+ ions could not be completed, because the development work was delayed by the second approach. The second approach uses the technique of collinear laser spectroscopy, which has been applied over the last 30 years to measure isotope shifts of plenty of heavier isotopes. For light elements (Z < 10), it was so far not possible to reach the accuracy required to extract information about nuclear charge radii. The combination of collinear laser spectroscopy with the most modern methods of frequency metrology finally permitted the first-time determination of the nuclear charge radii of (7,10)^Be and the one-neutron halo nucleus 11^Be at the COLLAPS experiment at ISOLDE/CERN. In the course of the work reported in this thesis, it was possible to measure the absolute transition frequencies and the isotope shifts in the D1 line for the Be isotopes mentioned above with an accuracy of better than 2 MHz. Combination with the most recent calculations of the mass effect allowed the extraction of the nuclear charge radii of (7,10,11)^Be with a relative accuracy better than 1%. The nuclear charge radius decreases continuously from 7^Be to 10^Be and increases again for 11^Be. This result is compared with predictions of ab-initio nuclear models, which reproduce the observed trend; particularly the "Green's Function Monte Carlo" and the "Fermionic Molecular Dynamics" models show very good agreement.
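The extraction of charge radii rests on the standard decomposition of the isotope shift between isotopes A and A' into a mass shift and a field shift; with the mass-shift constant K from atomic theory and the field-shift factor F:

\[
\delta\nu_{IS}^{A,A'} \;=\; K\,\frac{m_{A'}-m_{A}}{m_{A}\,m_{A'}} \;+\; F\,\delta\langle r_c^{2}\rangle^{A,A'} ,
\]

so measuring the isotope shift and computing the mass shift, which dominates for such light systems and hence demands the high theoretical accuracy mentioned above, isolates the change in the mean-square charge radius.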

Relevance: 100.00%

Abstract:

Materials that can mold the flow of elastic waves of certain energy in certain directions are called phononic materials. The present thesis deals essentially with such phononic systems, which are structured on the mesoscale (<1 µm), and with their individual components. Such systems show interesting phononic properties in the hypersonic region, i.e., at frequencies in the GHz range. It is shown that colloidal systems are excellent model systems for the realization of such phononic materials. Therefore, different structures and particle architectures are investigated by Brillouin light scattering, the inelastic scattering of light by phonons.

The experimental part of this work is divided into three chapters. Chapter 4 is concerned with the localized mechanical waves in the individual spherical colloidal particles, i.e., with their resonance or eigenvibrations. The investigation of these vibrations with regard to the environment of the particles, their chemical composition, and the influence of temperature on nanoscopically structured colloids allows novel insights into the physical properties of colloids at small length scales. Furthermore, some general questions concerning light scattering on such systems, in dispute so far, are convincingly addressed.

Chapter 5 is a study of traveling mechanical waves in colloidal systems consisting of ordered and disordered colloids in a liquid or elastic matrix. Such systems show acoustic band gaps, which can be explained geometrically (Bragg gap) or by the interaction of the acoustic band with the eigenvibrations of the individual spheres (hybridization gap). While the latter has no analogue in photonics, the presence of strong phonon scatterers, when a large elastic mismatch between the composite components exists, can largely impact phonon propagation, in analogy to strong multiple light scattering systems. The former is exemplified in silica-based phononic structures, which opens the door to new ways of manipulating sound propagation.

Chapter 6 describes the first measurement of the elastic moduli in so-called 'stable organic glasses' newly fabricated by physical vapor deposition.

In brief, this thesis explores novel phenomena in colloid-based hypersonic phononic structures, utilizing a versatile microfabrication technique along with different colloid architectures provided by materials science, and applying a non-destructive optical experimental tool to record dispersion diagrams.
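In Brillouin light scattering, the probed phonon wave vector and frequency follow from the scattering geometry; for a medium of refractive index n, laser wavelength lambda and scattering angle theta, the standard relations are

\[
q \;=\; \frac{4\pi n}{\lambda}\,\sin\!\frac{\theta}{2},\qquad f \;=\; \frac{v\,q}{2\pi},
\]

with v the sound velocity, so scanning the frequency shift as a function of q directly yields the dispersion diagrams mentioned above, including the Bragg and hybridization gaps.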

Relevance: 100.00%

Abstract:

Modern internal combustion engines are becoming ever more complex. The introduction of the EURO VI emissions standard will require a significant reduction of pollutants at the exhaust; the most critical issue is the reduction of NOx for Diesel engines, to be added to the reductions already in force under the previous standards. Typically, the development of a new engine involves a series of specific tests on the test bench. The ever greater number of combustion control parameters, arising as a consequence of the greater mechanical complexity of the engine itself, causes an exponential increase in the number of tests needed to characterize the whole system. The goal of this doctoral project is to build a real-time combustion analysis system implementing several algorithms not yet present in modern engine control units, paying particular attention to the choice of the hardware on which to implement the analysis algorithms. The result is a Rapid Control Prototyping (RCP) platform that exploits most of the sensors already present in a production vehicle, and that can shorten the time and cost of powertrain testing by reducing the need for a posteriori analyses of previously acquired data in favour of a larger amount of computation performed in real time. The proposed solution guarantees upgradability, i.e. the possibility of keeping the computing platform at the highest technological level, keeping obsolescence and replacement costs at bay. This property translates into the need to maintain compatibility between hardware and software of different generations, making it possible to replace the components that limit performance without redesigning the software.
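As an illustration of the kind of algorithm such a platform would run in real time, here is a minimal sketch computing the indicated mean effective pressure (IMEP) from a sampled in-cylinder pressure trace; the function name and the pressure/volume inputs are illustrative assumptions, not the thesis's actual code:

```python
import numpy as np

def imep(pressure, volume):
    """Indicated mean effective pressure over one engine cycle.

    pressure: in-cylinder pressure samples over one full, closed cycle [Pa]
    volume:   corresponding cylinder volume samples [m^3]
    """
    work = np.trapz(pressure, volume)            # integral of p dV over the cycle [J]
    displacement = volume.max() - volume.min()   # displaced volume [m^3]
    return work / displacement                   # IMEP [Pa]
```

Per-cycle quantities like this, computed as samples arrive, are what allow an RCP platform to replace a posteriori analysis of previously logged data.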

Relevance: 100.00%

Abstract:

The main objective of this research is to improve the comprehension of the processes controlling the formation of caves and karst-like morphologies in quartz-rich lithologies (more than 90% quartz), like quartz-sandstones and metamorphic quartzites. In the scientific community, the processes currently considered most responsible for these formations are described by the "Arenisation Theory". This implies a slow but pervasive dissolution of the quartz grain/mineral boundaries, increasing the overall porosity until the rock becomes incohesive and can be easily eroded by running waters. The loose sands produced by the weathering processes are then evacuated to the surface through piping processes driven by the infiltration of water from the fracture network or the bedding planes. To deal with these problems we adopted a multidisciplinary approach, through the exploration and study of several cave systems in different tepuis. The first step was to build a theoretical model of the arenisation process, considering the most recent knowledge about the dissolution kinetics of quartz, the intergranular/grain-boundary diffusion processes, and the primary diffusion porosity, in the simplified setting of an open fracture crossed by a continuous flow of undersaturated water. The results of the model were then compared with the world's largest dataset (more than 150 analyses) of water geochemistry collected to date on the tepuis, in both surface and cave settings. All these studies made it possible to verify the importance and the effectiveness of the arenisation process, which is confirmed to be the main process responsible for the primary formation of these caves and of the karst-like surface morphologies. The numerical modelling and the field observations allowed estimating a possible age of the cave systems of around 20-30 million years.
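A model of this kind typically rests on the standard transition-state-theory rate law for quartz dissolution, with undersaturation driving the flux; schematically (k: rate constant, A: reactive surface area, Omega: saturation ratio of dissolved silica):

\[
\mathrm{SiO_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{H_4SiO_4},\qquad
r \;=\; k\,A\,\bigl(1-\Omega\bigr),\qquad \Omega=\frac{[\mathrm{H_4SiO_4}]}{K_{eq}} ,
\]

where the very low solubility and slow kinetics of quartz at Earth-surface temperatures are what make arenisation a slow but pervasive process operating over geological timescales.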

Relevance: 100.00%

Abstract:

I studied the possibility of solving the cosmological moduli problem (CMP), which arises from the compactification of the extra dimensions, by means of a period of low-energy inflation (thermal inflation). The work consists of five chapters. The first introduces the reader to the moduli problem, starting from Kaluza-Klein theory. The second deals entirely with the CMP and with other cosmological problems associated with moduli. The third describes thermal inflation and the conditions under which it works. The fourth chapter examines the problem of moduli stabilization in type IIB string theory, describing both the KKLT mechanism and the LVS (Large Volume Scenario). The last chapter consists of the computation of the dilution of the moduli brought about by thermal inflation, formulated first in a general setting and finally applied to the LVS. The possibility of two epochs of thermal inflation is also examined, in order to obtain a more efficient dilution of the moduli. In the LVS there are two moduli, differing in mass and lifetime. The lighter one is subject to the CMP, and it turns out that even after two periods of thermal inflation there is still an excessive number of these fields: while thermal inflation dilutes their initial density, it also causes a strong regeneration, due essentially to the characteristics of the modulus.
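Schematically, the dilution computed in the last chapter acts through the entropy released when thermal inflation ends; the modulus abundance is suppressed by the entropy-production factor

\[
\left.\frac{n_\phi}{s}\right|_{\mathrm{after}} \;=\; \frac{1}{\Delta}\left.\frac{n_\phi}{s}\right|_{\mathrm{before}},\qquad
\Delta \;\equiv\; \frac{s_{\mathrm{after}}}{s_{\mathrm{before}}} ,
\]

while the same epoch can regenerate moduli, which is the competing effect found above for the light LVS modulus.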