936 results for Short range order correlations
Abstract:
Electronic devices based on organic semiconductors have gained increasing attention in nanotechnology, especially in the field of field-effect transistors and photovoltaics. A promising class of materials in this research field are polycyclic aromatic hydrocarbons (PAHs). Alkyl substitution of these graphene-like molecules results in self-organization into one-dimensional columnar superstructures and provides solubility and processability. The nano-phase separation between the π-stacking aromatic cores and the disordered peripheral alkyl chains leads to the formation of thermotropic mesophases. Hexa-peri-hexabenzocoronene (HBC), an example of a PAH, exhibits some of the highest charge carrier mobilities among mesogens, which makes HBC derivatives promising candidates for electronic devices. Prerequisites for efficient charge carrier transport between electrodes are a high purity of the material, to reduce possible trapping sites for charge carriers, and a pronounced, defect-free, long-range order. Appropriate processing techniques are required to induce a high degree of alignment in the discotic material over macroscopic dimensions. Highly ordered supramolecular structures of different discotics, in particular of HBC derivatives, have been obtained by solution processing using the zone-casting technique, zone-melting or simple extrusion. Simplicity and the fabrication of highly oriented columnar structures over long ranges are the most essential advantages of these zone-processing methods. A close relation between molecular design, self-aggregation and the processing conditions has been revealed. The long-range order achieved by zone-casting proved to be suitable for field-effect transistors (FETs).
Abstract:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and to aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes such as yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already under way in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit designed to implement a multicellular behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of the nucleic acids, transcription, translation and the methods of regulating gene expression. An introduction to Synthetic Biology completes the chapter. The second chapter describes the synthetic genetic circuit conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression, together with intercellular communication via small molecules, to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated.
The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentration of the various transcription factors that act on the different promoters of the circuit. A list of the parameters and a justification of their values closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis makes use of a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. A particular kind of scatter plot is used to rank the parameters: starting from an initial condition in which all the parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are in the follower state and only a few are in the leader state appears to be the most difficult task, because a small number of leader cells struggles to produce enough signal to turn the rest of the microcolony into the follower state. A microcolony in which the majority of cells are followers can be obtained by increasing the production of signal as much as possible. Reaching the goal of a microcolony that is split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module.
At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement this behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that allow the growth rate to be reliably downregulated in specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a Python script that was used to distribute the simulations over a Linux cluster and the Matlab code developed to analyze the data.
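The cost function driving the sensitivity analysis is described only qualitatively above. As a minimal sketch of the idea (the state names, target fractions and data below are illustrative, not values taken from the thesis), it could score a simulated microcolony by how far the final state fractions deviate from the desired outcome:

```python
def state_fractions(cells):
    """Fraction of cells in each state at the end of a simulated run."""
    counts = {}
    for state in cells:
        counts[state] = counts.get(state, 0) + 1
    return {state: n / len(cells) for state, n in counts.items()}

def cost(cells, target):
    """Sum of squared deviations between observed and target fractions."""
    observed = state_fractions(cells)
    states = set(observed) | set(target)
    return sum((observed.get(s, 0.0) - target.get(s, 0.0)) ** 2 for s in states)

# Hypothetical example: aim for a colony split in half between
# leaders and followers, observe a 40/60 split.
colony = ["leader"] * 40 + ["follower"] * 60
print(round(cost(colony, {"leader": 0.5, "follower": 0.5}), 6))  # 0.02
```

A lower cost means the simulated outcome is closer to the wanted one, so ranking parameters by how much perturbing them reduces this cost singles out the components worth tuning.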
Abstract:
The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenes, untranslated regions (UTR) and regulatory sequences. To accomplish this task, I transformed the nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful for recognizing them. I then assessed the existence of short-range dependence between the binary sequences computed on the basis of the different dichotomic classes, using three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, namely Mutual Information (MI) and Sρ, a normalized version of the Bhattacharya-Hellinger-Matusita distance. The results show a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error detection and correction mechanism. Further studies are certainly needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of this mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
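As a rough illustration of the pipeline described above, the sketch below encodes a nucleotide sequence as a binary string and then measures short-range dependence with lagged mutual information, one of the three dependence measures mentioned. The purine/pyrimidine split used here is only a stand-in for the actual dichotomic classes of the mathematical model, which are defined on codons; the sequence is an arbitrary example:

```python
from collections import Counter
from math import log2

def to_binary(seq, rule={"A": 0, "G": 0, "C": 1, "T": 1}):
    """Encode a nucleotide string as a 0/1 sequence via a dichotomic rule.
    The purine/pyrimidine rule here is illustrative only."""
    return [rule[base] for base in seq]

def mutual_information(bits, lag):
    """Mutual information (in bits) between the sequence and a lagged copy.
    High MI at small lags indicates short-range dependence."""
    pairs = list(zip(bits, bits[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

bits = to_binary("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA")
print(round(mutual_information(bits, 1), 3))
```

Comparing such MI profiles across lags for CDS versus intronic or intergenic sequences is the kind of contrast the analysis relies on.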
Abstract:
A novel nanosized and addressable sensing platform, based on membrane-coated plasmonic particles, for the detection of protein adsorption using dark-field scattering spectroscopy of single particles has been established. To this end, a detailed analysis of the deposition of gold nanorods on differently functionalized substrates was performed with respect to various factors (such as pH, ionic strength, concentration of the colloidal suspension and incubation time) in order to find the optimal conditions for obtaining a homogeneous distribution of particles at the desired surface number density. The possibility of successfully draping lipid bilayers over the gold particles immobilized on glass substrates depends on the careful adjustment of parameters such as membrane curvature and adhesion properties, and is demonstrated with complementary techniques such as phase-imaging AFM, fluorescence microscopy (including FRAP) and single-particle spectroscopy. The functionality and sensitivity of the proposed sensing platform are unequivocally certified by the resonance shifts of the plasmonic particles, individually interrogated with single-particle spectroscopy, upon the adsorption of streptavidin to biotinylated lipid membranes. This new detection approach, which employs particles as nanoscopic reporters for biomolecular interactions, ensures a highly localized sensitivity that offers the possibility of screening lateral inhomogeneities of native membranes. As an alternative to the 2D array of gold nanorods, short-range-ordered arrays of nanoholes in optically transparent gold films, or regular arrays of truncated-tetrahedron-shaped particles, were built by means of colloidal nanolithography on transparent substrates. Technical issues, mainly related to the optimization of the mask deposition conditions, were successfully addressed such that extended areas of homogeneously nanostructured gold surfaces were achieved.
The adsorption of the proteins annexin A1 and prothrombin on multicomponent lipid membranes, as well as the hydrolytic activity of the phospholipase PLA2, were investigated with classical techniques such as AFM, ellipsometry and fluorescence microscopy. First, the issues of lateral phase separation in membranes of various lipid compositions and the dependence of the domain configuration (sizes and shapes) on the membrane content are addressed. It is shown that the tendency for phase segregation of gel and fluid phase lipid mixtures is accentuated in the presence of divalent calcium ions for membranes containing anionic lipids, as compared to neutral bilayers. Annexin A1 adsorbs preferentially and irreversibly on preformed phosphatidylserine (PS) enriched lipid domains but, depending on the PS content of the bilayer, the protein itself may induce clustering of the anionic lipids into areas with high binding affinity. Corroborated evidence from AFM and fluorescence experiments confirms the hypothesis of a specifically increased hydrolytic activity of PLA2 on the highly curved regions of membranes, due to facilitated access of the lipase to the cleavage sites of the lipids. The influence of the nanoscale gold surface topography on the adhesion of lipid vesicles is unambiguously demonstrated, and this reveals, at least in part, an answer to the controversial question in the literature about the behavior of lipid vesicles interacting with bare gold substrates. The possibility of forming monolayers of lipid vesicles on chemically untreated gold substrates decorated with gold nanorods opens new perspectives for biosensing applications that involve the radiative decay engineering of the plasmonic particles.
Abstract:
Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to master it by applying the principles of engineering. The recent growth in the number and distribution of devices with decent computational and communication abilities, suddenly accelerated by the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. At the same time, communication technologies seem to be focussing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data is, however, challenging. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and by identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns in order to create new spatial structures.
Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
Abstract:
This work presents a comprehensive study of fundamental properties of the calcite CaCO3(10.4) surface and of related mineral surfaces, made possible not only by the use of non-contact atomic force microscopy but, above all, by the measurement of force fields. The absolute surface orientation, as well as the underlying atomic-scale process, were successfully identified for the calcite (10.4) surface. The adsorption of chiral molecules on calcite is relevant to biomineralization, which makes an understanding of the surface symmetry indispensable. Measuring the surface force field at the atomic level is a central aspect of this: such a force map not only illuminates the interaction of the surface with molecules, which is important for biomineralization, but also makes it possible to identify atomic-scale processes and thus surface properties. The introduction of a highly flexible measurement protocol guarantees the reliable measurement of the surface force field, which is not available commercially. The conversion of the raw ∆f data into the vertical force Fz is, however, not a trivial procedure, especially when smoothing of the data is considered. This work describes in detail how Fz can be calculated correctly for the experimental conditions of this work. It is further described how the lateral forces Fy and the dissipation Γ were obtained, in order to exploit the full potential of this measurement method. To understand atomic-scale processes on surfaces, the short-range chemical forces Fz,SR are of the utmost importance. Long-range contributions must be fitted to Fz and subtracted from it. This is an error-prone task, which was mastered in this work by finding three independent criteria that determine the onset zcut of Fz,SR, a quantity of central importance for this task.
A detailed error analysis shows that the mutual deviation of the lateral forces is the criterion that yields trustworthy Fz,SR. This is the first study to provide a criterion for the determination of zcut, complemented by a detailed error analysis. With the knowledge of Fz,SR and Fy it was possible to identify one of the fundamental properties of the CaCO3(10.4) surface: the absolute surface orientation. A strong tilt of the imaged objects
Abstract:
Short-range nucleon-nucleon correlations in nuclei (NN SRC) carry important information on nuclear structure and dynamics. NN SRC have been extensively probed through two-nucleon knockout reactions in both pion and electron scattering experiments. We report here on the detection of two-nucleon knockout events from neutrino interactions and discuss their topological features as possibly involving NN SRC content in the target argon nuclei. The ArgoNeuT detector in the Main Injector neutrino beam at Fermilab has recorded a sample of 30 fully reconstructed charged-current events in which the leading muon is accompanied by a pair of protons at the interaction vertex, 19 of which have both protons above the Fermi momentum of the Ar nucleus. Out of these 19 events, four are found with the two protons in a strictly back-to-back, high-momentum configuration directly observed in the final state; these can be associated with pionless nucleon-resonance mechanisms involving a pre-existing short-range correlated np pair in the nucleus. Another fraction (four events) of the remaining 15 events have a reconstructed back-to-back configuration of an np pair in the initial state, a signature compatible with a one-body quasi-elastic interaction on a neutron in an SRC pair. The detection of these two subsamples of the collected (mu- + 2p) events suggests that mechanisms directly involving nucleon-nucleon SRC pairs in the nucleus are active and can be efficiently explored in neutrino-argon interactions with the LAr TPC technology.
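A back-to-back topology of a proton pair can be quantified by the cosine of the opening angle between the two reconstructed momenta, which approaches -1 for opposite vectors. The sketch below shows the idea; the cut value and the momentum vectors are illustrative, not the analysis cuts used by ArgoNeuT:

```python
import math

def cos_opening_angle(p1, p2):
    """Cosine of the angle between two 3-momentum vectors."""
    dot = sum(a * b for a, b in zip(p1, p2))
    norm1 = math.sqrt(sum(a * a for a in p1))
    norm2 = math.sqrt(sum(b * b for b in p2))
    return dot / (norm1 * norm2)

def back_to_back(p1, p2, cut=-0.95):
    """True when the momenta are nearly opposite (cosine below `cut`).
    The cut of -0.95 is a hypothetical threshold for illustration."""
    return cos_opening_angle(p1, p2) < cut

# Two nearly opposite proton momenta (GeV/c), chosen for illustration.
print(back_to_back((0.4, 0.0, 0.1), (-0.39, 0.02, -0.11)))  # True
```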
Abstract:
A bitopic ligand, 4-(3,5-dimethylpyrazol-4-yl)-1,2,4-triazole (Hpz-tr) (1), containing two different heterocyclic moieties was employed for the design of copper(II)–molybdate solids under hydrothermal conditions. In the multicomponent CuII/Hpz-tr/MoVI system, a diverse set of coordination hybrids, [Cu(Hpz-tr)2SO4]·3H2O (2), [Cu(Hpz-tr)Mo3O10] (3), [Cu4(OH)4(Hpz-tr)4Mo8O26]·6H2O (4), [Cu(Hpz-tr)2Mo4O13] (5), and [Mo2O6(Hpz-tr)]·H2O (6), was prepared and characterized. A systematic investigation of these systems in the form of a ternary crystallization diagram approach was utilized to show the influence of the molar ratios of the starting reagents, the metal (CuII and MoVI) sources, the temperature, etc., on the outcome of the reaction products. Complexes 2–4 dominate throughout a wide crystallization range of the composition triangle, while the other two compounds 5 and 6 crystallize as minor phases in a narrow concentration range. In the crystal structures of 2–6, the organic ligand behaves as a short [N–N]-triazole linker between metal centers: Cu···Cu in 2–4, Cu···Mo in 5, and Mo···Mo in 6, while the pyrazolyl function remains uncoordinated. This explains the exclusive formation of low-dimensional coordination motifs: 1D for 2, 4, and 6 and 2D for 3 and 5. In all cases, the pyrazolyl group is involved in H bonding (H-donor/H-acceptor) and is responsible for π–π stacking, thus connecting the chain and layer structures into more complicated H-bonding architectures. These compounds possess moderate thermal stability up to 250–300 °C. Magnetic measurements were performed for 2–4, revealing in all three cases antiferromagnetic exchange interactions between neighboring CuII centers and, for compound 4, long-range order with a net moment below a Tc of 13 K.
Abstract:
Recent works (Evelpidou et al., 2012) suggest that the modern tidal notch is disappearing worldwide due to sea level rise over the last century. In order to assess this hypothesis, we measured modern tidal notches at several sites along the Mediterranean coasts. We report observations on tidal notches cut along carbonate coasts at 73 sites in Italy, France, Croatia, Montenegro, Greece, Malta and Spain, plus additional observations carried out outside the Mediterranean. At each site, we measured notch width and depth, and we described the characteristics of the biological rim at the base of the notch. We correlated these parameters with wave energy, tide gauge datasets and rock lithology. Our results suggest that considering 'the development of tidal notches the consequence of midlittoral bioerosion' (as done in Evelpidou et al., 2012) is a simplification that can lead to misleading results, such as stating that notches are disappearing. Important roles in notch formation can also be played by wave action, the rate of karst dissolution, salt weathering, and wetting and drying cycles. Notch formation can, of course, also be augmented and favoured by bioerosion, which can, in particular cases, be the main process of notch formation and development. Our dataset shows that notches are carved by an ensemble of processes rather than by a single one, both today and in the past, and that it is difficult, if not impossible, to disentangle them and establish which one prevails. We therefore show that tidal notches are still forming, challenging the hypothesis that sea level rise has drowned them.
Abstract:
RESUMEN The short-range atmospheric dispersion of ammonia (NH3) emitted by agricultural sources, and its subsequent deposition to soil and vegetation, can lead to the degradation of vulnerable ecosystems and to soil acidification. NH3 deposition is usually highest next to the emitting source, so the negative impacts of these emissions are generally greatest in those areas. Under Community legislation, several member states use inverse dispersion models to estimate the impacts of emissions in the vicinity of designated nature conservation areas. A recent review of methods for assessing short-range NH3 impacts recommended comparing different models in order to identify important differences between the methods used by the various EU countries. On the basis of this recommendation, this doctoral thesis compares and evaluates the atmospheric NH3 concentration predictions of several models under both real and hypothetical conditions that pose a potential impact on ecosystems (including those under a Mediterranean climate). In this context, several inverse modelling techniques for inferring NH3 emissions were also compared and evaluated. Finally, a simple mathematical model was developed to calculate NH3 concentrations and the NH3 deposition velocity at vulnerable ecosystems close to an emitting source. The model intercomparison involved the evaluation of four dispersion models (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 and LADD v2010) over a wide range of hypothetical cases (dispersion of NH3 from different types of agricultural emission sources). The smallest difference between the mean concentrations estimated by the different models was obtained for simple scenarios.
The convergence between the model predictions was lowest for the scenario involving the dispersion of NH3 from a mechanically ventilated livestock house. In this case, the ADMS model predicted significantly lower concentrations than the other models. An explanation for these differences may lie in the interaction of the different plume-rise and boundary-layer parameterisations. The four dispersion models were applied to two real cases of NH3 dispersion: a pig farm in Falster (Denmark) and another in North Carolina (USA). The mean annual concentrations estimated by the models were similar for the American case (emissions from naturally ventilated houses and a slurry lagoon). Comparison of the model predictions with mean annual concentrations measured in situ, together with the application of established statistical model-acceptance criteria, led to the conclusion that all four models performed acceptably for this scenario. The same was not true for the Danish case (mechanically ventilated house), where the LADD model did not perform well owing to the absence of plume-rise processes. Dispersion models often give poor results under low wind speed conditions because the dispersion theory on which they are based is not applicable in such conditions. For situations with frequent drops in wind speed, current modelling guidance proposes using a model that performs well under those conditions, especially when the assessment is intended to inform regulatory policy. This may not always be possible owing to insufficient meteorological data, in which case the only option would be to use a more common model, such as the advanced Gaussian models ADMS or AERMOD.
To evaluate the suitability of these models for low wind speed conditions, both were applied to a case study under Mediterranean conditions, which entail successive periods of low wind speed. The study focused on the dispersion of NH3 from a pig farm in Segovia (central Spain). Mean monthly NH3 concentrations were measured at 21 locations around the farm, and high-resolution concentration measurements were also made at a single location during a one-week campaign. Two strategies to improve the model response to low wind speeds were evaluated: 'no zero wind' (NZW), which replaced calm periods with the minimum wind speed threshold, and 'accumulated calm emissions' (ACE), which forced the model to compute the total emissions of a calm period in the first subsequent non-calm hour. Owing to the large uncertainties in the model inputs (NH3 emission rate, source exit velocity, boundary layer parameters, etc.), the same case was used to evaluate the uncertainty of the model predictions and to consider how that uncertainty can be taken into account in model evaluations. A dynamic emission model, modified for the Mediterranean climate, was used to estimate the temporal variability of NH3 emissions, and a comparison was made between the dynamic emissions and a constant emission rate. The predicted uncertainty associated with the input uncertainty was 67-98% of the mean value for ADMS and 53-83% of the mean value for AERMOD. Most of this uncertainty was due to uncertainty in the source emission rate (~50%), followed by that in the meteorological conditions (~10-20%) and that associated with exit velocities (~5-10%).
AERMOD predicted higher concentrations than ADMS, and more of its simulations met the acceptability criteria when predictions were compared with the measured mean annual concentrations. However, the ADMS predictions correlated better spatially with the measurements. Using the estimated dynamic emission values improved the performance of ADMS but worsened that of AERMOD, and strategies aimed at improving the latter had similarly contradictory effects. To compare different inverse modelling techniques, several models (ADMS, LADD and WindTrax) were applied to a non-agricultural case, a penguin colony in Antarctica. This case was chosen because it offered the opportunity to obtain the first experimental emission factor for an Antarctic penguin colony, and because conditions were favourable given the near-total absence of background concentrations. The modelling showed sufficient agreement between the estimates obtained by the three models, allowing an emission factor for the colony of 1.23 g NH3 per breeding pair per day to be defined (with an uncertainty range of 0.8-2.54 g NH3 per breeding pair per day). Subsequent applications of inverse modelling techniques to agricultural cases also showed good statistical agreement between the emissions estimated by the different models. It can therefore be concluded that inverse modelling is a robust technique for estimating NH3 emission rates. Screening models provide a quick, approximate estimate of environmental impacts and are a useful tool for impact assessments, since they make it possible to eliminate cases that present a low potential risk of damage.
In this way, modelling resources can be devoted to cases where the possibility of damage is greater. The Simple Calculation of Ammonia Impact Limits (SCAIL) model was developed to provide an estimate of the mean NH3 concentration and the dry deposition rate associated with an agricultural source. This screening technique, based on the LADD model, was evaluated and calibrated against several datasets and finally validated using independent concentration measurements made near sources. In general, SCAIL performed well according to the established statistical criteria. This work has identified situations in which the concentrations predicted by dispersion models are similar, as opposed to others in which the predictions differ markedly between models. Some models are not designed to simulate certain scenarios, in that they do not include relevant processes or the scenarios lie beyond the limits of their applicability. One example is the LADD model, which is not applicable to sources with a significant exit velocity because it does not include a plume-rise parameterisation. The evaluation of a simple scheme combining plume rise and increased turbulence at the source improved the model's performance, although further tests are needed to make progress in this direction. Even models that are applicable and that include the relevant processes do not always give similar predictions, and the reasons for this are still unknown. For example, AERMOD predicts higher concentrations than ADMS for NH3 dispersion from mechanically ventilated livestock houses. There is evidence suggesting that ADMS underestimates concentrations in these situations owing to a high wind speed threshold.
Conversely, there is evidence that AERMOD overestimates concentrations owing to overestimation at low wind speeds; however, a simple modification of the meteorological pre-processor appears to improve the model's performance considerably. It is very important that these differences between model predictions be taken into account in regulatory assessment processes carried out by the competent bodies. This can be done by applying the most suitable model for each case or, better still, by using multiple or hybrid models. ABSTRACT Short-range atmospheric dispersion of ammonia (NH3) emitted by agricultural sources and its subsequent deposition to soil and vegetation can lead to the degradation of sensitive ecosystems and acidification of the soil. Atmospheric concentrations and dry deposition rates of NH3 are generally highest near the emission source and so environmental impacts to sensitive ecosystems are often largest at these locations. Under European legislation, several member states use short-range atmospheric dispersion models to estimate the impact of ammonia emissions on nearby designated nature conservation sites. A recent review of assessment methods for short-range impacts of NH3 recommended an intercomparison of the different models to identify whether there are notable differences between the assessment approaches used in different European countries. Based on this recommendation, this thesis compares and evaluates the atmospheric concentration predictions of several models used in these impact assessments for various real and hypothetical scenarios, including Mediterranean meteorological conditions. In addition, various inverse dispersion modelling techniques for the estimation of NH3 emission rates are also compared and evaluated, and a simple screening model to calculate the NH3 concentration and dry deposition rate at a sensitive ecosystem located close to an NH3 source was developed.
The model intercomparison evaluated four atmospheric dispersion models (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 and LADD v2010) for a range of hypothetical case studies representing atmospheric dispersion from several agricultural NH3 source types. The best agreement between the mean annual concentration predictions of the models was found for simple scenarios with area and volume sources. The agreement was worst for the scenario representing dispersion from a mechanically ventilated livestock house, for which ADMS predicted significantly smaller concentrations than the other models. These differences appear to result from the interaction of different plume-rise and boundary-layer parameterisations. All four dispersion models were applied to two real case studies of NH3 dispersion from pig farms in Falster (Denmark) and North Carolina (USA). The mean annual concentration predictions of the models were similar for the USA case study (emissions from naturally ventilated pig houses and a slurry lagoon). The comparison of model predictions with mean annual measured concentrations and the application of established statistical model acceptability criteria concluded that all four models performed acceptably for this case study. This was not the case for the Danish case study (mechanically ventilated pig house), for which the LADD model did not perform acceptably due to the lack of plume-rise processes in the model. Regulatory dispersion models often perform poorly in low wind speed conditions because their dispersion theory is inapplicable at low wind speeds. For situations with frequent low wind speed periods, current modelling guidance for regulatory assessments is to use a model that can handle these conditions in an acceptable way. 
This may not always be possible due to insufficient meteorological data, and so the only option may be to carry out the assessment using a more common regulatory model, such as the advanced Gaussian models ADMS or AERMOD. In order to assess the suitability of these models for low wind conditions, they were applied to a Mediterranean case study that included many periods of low wind speed. The case study was the dispersion of NH3 emitted by a pig farm in Segovia, Central Spain, for which mean monthly atmospheric NH3 concentration measurements were made at 21 locations surrounding the farm, as well as high-temporal-resolution concentration measurements at one location during a one-week campaign. Two strategies to improve model performance in low wind speed conditions were tested: ‘no zero wind’ (NZW), which replaced calm periods with the minimum threshold wind speed of the model, and ‘accumulated calm emissions’ (ACE), which forced the model to release the total emissions accumulated over a calm period during the first subsequent non-calm hour. Due to large uncertainties in the model input data (NH3 emission rates, source exit velocities, boundary layer parameters), the case study was also used to assess model prediction uncertainty and how this uncertainty can be taken into account in model evaluations. A dynamic emission model modified for the Mediterranean climate was used to estimate the temporal variability in NH3 emission rates, and a comparison was made between the simulations using the dynamic emissions and a constant emission rate. Prediction uncertainty due to model input uncertainty was 67-98% of the mean value for ADMS and 53-83% for AERMOD. Most of this uncertainty was due to source emission rate uncertainty (~50%), followed by uncertainty in the meteorological conditions (~10-20%) and in the exit velocities (~5-10%). 
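The two calm-period strategies described above can be sketched as a simple pre-processing step over an hourly time series. The sketch below is illustrative only: the function names and the 0.5 m/s threshold are assumptions, not values taken from the thesis.

```python
# Illustrative sketch of the two calm-period handling strategies:
# 'no zero wind' (NZW) and 'accumulated calm emissions' (ACE).
MODEL_MIN_WIND = 0.5  # m/s; assumed minimum wind speed accepted by the model

def no_zero_wind(wind_speeds):
    """NZW: replace calm hours with the model's minimum threshold wind speed."""
    return [max(u, MODEL_MIN_WIND) for u in wind_speeds]

def accumulated_calm_emissions(wind_speeds, emissions):
    """ACE: carry emissions from calm hours forward and release the
    accumulated total in the first subsequent non-calm hour."""
    out, carried = [], 0.0
    for u, q in zip(wind_speeds, emissions):
        if u < MODEL_MIN_WIND:       # calm hour: emit nothing, accumulate
            carried += q
            out.append(0.0)
        else:                        # first non-calm hour: release the backlog
            out.append(q + carried)
            carried = 0.0
    return out

winds = [0.2, 0.3, 1.4, 2.0]         # two calm hours, then wind picks up
emis = [1.0, 1.0, 1.0, 1.0]          # g/s, constant hourly emission
print(no_zero_wind(winds))                       # [0.5, 0.5, 1.4, 2.0]
print(accumulated_calm_emissions(winds, emis))   # [0.0, 0.0, 3.0, 1.0]
```

NZW changes only the meteorological input, while ACE changes only the emission time series, which is why the two strategies can have quite different effects on the predicted concentrations.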
AERMOD predicted higher concentrations than ADMS, and more of its simulations met the model acceptability criteria when compared with the annual mean measured concentrations. However, the ADMS predictions were better correlated spatially with the measurements. The use of dynamic emission estimates improved the performance of ADMS but worsened the performance of AERMOD, and the application of strategies to improve model performance had similarly contradictory effects. In order to compare different inverse modelling techniques, several models (ADMS, LADD and WindTrax) were applied to a non-agricultural case study of a penguin colony in Antarctica. This case study was used since it gave the opportunity to provide the first experimentally derived emission factor for an Antarctic penguin colony and also had the advantage of negligible background concentrations. There was sufficient agreement between the emission estimates obtained from the three models to define an emission factor for the penguin colony (1.23 g NH3 per breeding pair per day, with an uncertainty range of 0.8-2.54 g NH3 per breeding pair per day). This emission estimate compared favourably to the value obtained using a simple micrometeorological technique (aerodynamic gradient) of 0.98 g ammonia per breeding pair per day (95% confidence interval: 0.2-2.4 g ammonia per breeding pair per day). Further application of the inverse modelling techniques to a range of agricultural case studies also demonstrated good agreement between the emission estimates. It is concluded, therefore, that inverse dispersion modelling is a robust technique for estimating NH3 emission rates. Screening models that can provide a quick and approximate estimate of environmental impacts are a useful tool for impact assessments because they can be used to filter out cases that potentially have a minimal environmental impact, allowing resources to be focussed on more potentially damaging cases. 
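In its simplest linear form, inverse dispersion modelling exploits the fact that modelled concentration scales linearly with emission rate: the dispersion model is run with a unit emission rate, and the unknown source strength is recovered as the ratio of measured to modelled concentration. A minimal sketch, with invented numbers purely for illustration:

```python
def inverse_emission_estimate(c_measured, c_model_unit, background=0.0):
    """Estimate a source emission rate by scaling a unit-emission model run.

    c_measured   : measured concentration at the receptor (ug/m3)
    c_model_unit : modelled concentration at the same receptor for a
                   unit emission rate of 1 g/s (ug/m3 per g/s)
    background   : background concentration to subtract (ug/m3)
    """
    return (c_measured - background) / c_model_unit

# Hypothetical numbers: a receptor measures 12 ug/m3 over a negligible
# background, and the model predicts 4 ug/m3 for a 1 g/s source.
q = inverse_emission_estimate(12.0, 4.0)
print(q)  # 3.0 g/s
```

The Antarctic case study above was attractive precisely because the near-zero background makes the subtraction step trivial.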
The Simple Calculation of Ammonia Impact Limits (SCAIL) model was developed as a screening model to provide an estimate of the mean NH3 concentration and dry deposition rate downwind of an agricultural source. This screening tool, based on the LADD model, was evaluated and calibrated with several experimental datasets and then validated using independent concentration measurements made near sources. Overall, SCAIL performed acceptably according to established statistical criteria. This work has identified situations where the concentration predictions of dispersion models are similar and other situations where the predictions are significantly different. Some models are simply not designed to simulate certain scenarios since they do not include the relevant processes or are beyond the limits of their applicability. An example is the LADD model, which is not applicable to sources with significant exit velocity since the model does not include a plume-rise parameterisation. The testing of a simple scheme combining a momentum-driven plume rise and increased turbulence at the source improved model performance, but more testing is required. Even models that are applicable and include the relevant processes do not always give similar predictions, and the reasons for this need to be investigated. AERMOD, for example, predicts higher concentrations than ADMS for dispersion from mechanically ventilated livestock housing. There is evidence to suggest that ADMS underestimates concentrations in these situations due to a high wind speed threshold. Conversely, there is also evidence that AERMOD overestimates concentrations in these situations due to overestimation at low wind speeds. However, a simple modification to the meteorological pre-processor appears to improve the performance of the model. It is important that these differences between the predictions of these models are taken into account in regulatory assessments. 
This can be done by applying the most suitable model for the assessment in question or, better still, using multiple or hybrid models.
Resumo:
This paper presents a W-band high-resolution radar sensor for short-range applications. Low-cost technologies have been properly selected in order to implement a versatile and easily scalable radar system. A large operational bandwidth of 9 GHz, required for obtaining high-range resolution, is attained by means of a frequency multiplication-based architecture. The system characterization to identify the performance-limiting stages and the subsequent design optimization are presented. The assessment of system performance for several representative applications has been carried out.
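The link between the 9 GHz operational bandwidth and the attainable range resolution follows from the standard radar relation ΔR = c / (2B); a quick numerical check (the relation is textbook material, not a figure quoted in the abstract):

```python
# Range resolution of a wideband radar: delta_R = c / (2 * B),
# where B is the signal bandwidth.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

# With the 9 GHz operational bandwidth reported for the sensor:
dr = range_resolution(9e9)
print(f"{dr * 100:.2f} cm")  # ~1.67 cm
```

This centimetre-level resolution is what motivates the frequency-multiplication architecture: generating a clean 9 GHz sweep directly at W-band is costly, so it is synthesized at a lower frequency and multiplied up.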
Resumo:
This thesis falls within the field of Multiband Orthogonal Frequency Division Multiplexing Ultra Wideband (MB-OFDM UWB), which has gained great importance in high-data-rate wireless communications over the last decade. UWB emerged to satisfy the growing demand for low-cost, high-speed indoor and home wireless connections. The large available bandwidth, the potential for high transmission rates, low complexity and low power consumption, together with a low implementation cost, represent a unique opportunity for UWB to become a widely used solution for Wireless Personal Area Network (WPAN) applications. UWB is defined as any transmission that occupies a bandwidth of more than 20% of its center frequency, or more than 500 MHz. In 2002, the Federal Communications Commission (FCC) ruled that UWB transmissions may legally operate in the 3.1 to 10.6 GHz range at a transmit power of -41.3 dBm/Hz. Under the FCC guidelines, UWB technology can provide enormous capacity in short-range communications. Considering Shannon's capacity equation, increasing the channel capacity requires a linear increase in bandwidth, whereas a similar increase in channel capacity would require an exponential increase in transmission power. In recent years, different UWB developments have been studied extensively in several areas, among which the MB-OFDM UWB wireless communication protocol is considered the best choice and has been adopted as an ISO/IEC standard for WPANs. Combining OFDM modulation with data transmission using frequency-hopping techniques, the MB-OFDM UWB system is able to support data rates ranging from 55 to 480 Mbps over distances of up to 10 meters. 
The MB-OFDM technology is expected to have very low power consumption and to occupy a very small silicon area, providing low-cost solutions that satisfy market demands. To meet these expectations, MB-OFDM UWB research and development must face several challenges, such as high-sensitivity synchronization, low-complexity constraints, strict power limitations, scalability, and flexibility. Such challenges require state-of-the-art digital signal processing, capable of producing systems that can take full advantage of the UWB spectrum and enable future indoor wireless applications. This thesis focuses on the complete optimization of a digital MB-OFDM UWB baseband transceiver system, with the objective of researching and designing a wireless communication subsystem for Wireless Visual Sensor Network applications. The inherent complexity of the FFT/IFFT processors and the synchronization system, together with the high operating frequency of all the processing elements, becomes the bottleneck for the design and implementation of a low-power MB-OFDM-based digital UWB baseband system. The proposed transceiver aims for low power and low complexity under the premise of high performance. Optimizations are carried out at both the algorithmic and the architectural level for all the elements of the system. A power-efficient hardware architecture is first proposed for the modules corresponding to the core computation kernels. 
For the Fast Fourier Transform (FFT/IFFT) processing, a mixed-radix algorithm based on a pipelined architecture is proposed, and a cost-speed balanced Viterbi Decoder (VD) module has been developed with the aim of reducing power consumption and increasing processing speed. A simple sign-bit correlator for symbol timing synchronization has also been implemented; this correlator is used to detect and synchronize the OFDM packets robustly and accurately. State-of-the-art technologies have been employed to develop the processing subsystems and to integrate the complete system. The target device for the proposed system is a Xilinx Virtex 5 XC5VLX110T FPGA, on which the proposed transceiver system has been implemented and validated. This work presents an algorithm and an architecture designed with a hardware/software co-design philosophy for the development of complex FPGA systems. The main objective of the proposed strategy is to find an efficient methodology for designing a configurable, optimized FPGA system with the minimum possible effort spent on the verification procedure, thereby shortening the system development period. The presented co-design methodology has the advantage of being easy to use, covers all the steps from the algorithm proposal to the hardware verification, and can be widely extended to almost any kind of FPGA development. Since only the digital baseband transceiver system has been developed in this work, verifying the transmitted signals through the wireless channel in real communication environments still requires RF components and an analog front-end. 
However, using the aforementioned hardware/software co-simulation methodology, the digital transmitter and receiver systems can communicate through the channel models proposed by IEEE 802.15.3a, implemented in MATLAB. Therefore, by simply adjusting the characteristics of each channel model, for example the mean excess delay and the center frequency, we can estimate the behavior of the proposed system in different scenarios and environments. The main contributions of this thesis are: • A novel mixed-radix 128-point FFT algorithm using a multipath pipelined architecture is proposed. The complex multipliers for each processing stage are designed using a modified shift-add architecture. The system word length and the twiddle-factor word length are compared and selected based on Signal to Quantization Noise Ratio (SQNR) and power analysis. • The performance of the IFFT processor is analyzed under different Block Floating Point (BFP) arithmetic configurations for overflow control, in order to find the best IFFT architecture based on the proposed FFT processor. • An innovative low-complexity timing synchronization and compensation scheme, consisting of Packet Detector (PD) and Timing Offset Estimation functions, is employed in the MB-OFDM UWB receiver system. By simplifying the cross-correlation and maximum likelihood functions to sign-bit-only operations, the computational complexity is significantly reduced. • A 64-state soft-decision Viterbi Decoder system using a high-speed radix-4 add-compare-select architecture is proposed. The Two-pointer Even algorithm is also introduced into the trace-back unit with the aim of achieving hardware efficiency. 
• Several state-of-the-art technologies are integrated into the complete baseband transceiver system, with the aim of implementing a highly optimized UWB communication system. • An improved design flow is proposed for the implementation of complex systems, which can be used for general Field-Programmable Gate Array (FPGA) designs. This design flow not only dramatically reduces the time needed for functional verification, but also provides automatic analyses, such as errors and output delays, for the implemented hardware system. • A virtual communication environment is established for the validation of the proposed MB-OFDM transceiver system. This method makes it easy and convenient to analyze the digital baseband system, without an analog front-end, under different communication environments. This doctoral thesis is organized in six chapters. Chapter 1 gives a brief introduction to the UWB field and the related work, along with the motivation for the development of the MB-OFDM system. Chapter 2 presents the general information and the requirements of the MB-OFDM UWB wireless communication protocol. Chapter 3 describes the architecture of the digital MB-OFDM baseband transceiver system; the proposed algorithm and the architecture of each processing element are detailed in this chapter. The design challenges of such a system involve trade-off discussions among design complexity, power consumption, hardware cost, system performance, and other aspects. Chapter 4 describes the hardware/software co-design methodology; each step of the design flow is detailed with examples encountered during the development of the system. 
Taking advantage of this design strategy, the virtual communication procedure is carried out to test and analyze the proposed transceiver architecture. The experimental results of the co-simulation and the synthesis report of the FPGA system implementation are given in Chapter 5. Finally, Chapter 6 presents the conclusions and future work, together with the results derived from this doctoral project. ABSTRACT In recent years, the Wireless Visual Sensor Network (WVSN) has drawn great interest in the wireless communication research area. WVSNs enable a wealth of new applications such as building security control, image sensing, and target localization. However, current wireless communication protocols (e.g. ZigBee, Wi-Fi, and Bluetooth) cannot fully satisfy the demands of high data rate, low power consumption, short range, and high robustness; a new communication protocol is highly desired for such applications. The Ultra Wideband (UWB) wireless communication protocol, which has increased in importance in the high-data-rate wireless communication field, is emerging as an important topic for WVSN research. UWB has emerged as a technology that offers great promise to satisfy the growing demand for low-cost, high-speed digital wireless indoor and home networks. The large bandwidth available, the potential for high data rate transmission, and the potential for low complexity and low power consumption, along with low implementation cost, all present a unique opportunity for UWB to become a widely adopted radio solution for future Wireless Personal Area Network (WPAN) applications. UWB is defined as any transmission that occupies a bandwidth of more than 20% of its center frequency, or more than 500 MHz. In 2002, the Federal Communications Commission (FCC) mandated that UWB radio transmission can legally operate in the range from 3.1 to 10.6 GHz at a transmitter power of -41.3 dBm/Hz. 
Under the FCC guidelines, the use of UWB technology can provide enormous capacity over short communication ranges. Considering Shannon's capacity equation, increasing the channel capacity requires a linear increase in bandwidth, whereas a similar increase in channel capacity would require an exponential increase in transmission power. In recent years, several different UWB developments have been widely studied in different areas, among which the MB-OFDM UWB wireless communication protocol is considered to be the leading choice and has recently been adopted in the ISO/IEC standard for WPANs. By combining OFDM modulation and data transmission using frequency hopping techniques, the MB-OFDM UWB system is able to support various data rates, ranging from 55 to 480 Mbps, over distances up to 10 meters. The MB-OFDM technology is expected to consume very little power and silicon area, as well as provide low-cost solutions that can satisfy consumer market demands. To fulfill these expectations, MB-OFDM UWB research and development have to cope with several challenges, which consist of high-sensitivity synchronization, low-complexity constraints, strict power limitations, scalability, and flexibility. Such challenges require state-of-the-art digital signal processing expertise to develop systems that can take full advantage of the UWB spectrum and support future indoor wireless applications. This thesis focuses on the full optimization of the MB-OFDM UWB digital baseband transceiver system, aiming at researching and designing a wireless communication subsystem for the Wireless Visual Sensor Networks (WVSNs) application. The inherently high complexity of the FFT/IFFT processor and the synchronization system, together with the high operating frequency of all processing elements, becomes the bottleneck for the hardware design and implementation of a low-power MB-OFDM-based UWB digital baseband system. 
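The bandwidth-versus-power trade-off cited from Shannon above, C = B log2(1 + S/N), is easy to verify numerically: at a fixed SNR, doubling the bandwidth doubles the capacity, whereas doubling the capacity at a fixed bandwidth requires the SNR to grow as (1 + SNR)^2 - 1. The bandwidth and SNR values below are arbitrary illustration figures.

```python
import math

def capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

B, snr = 500e6, 15.0            # 500 MHz bandwidth, SNR = 15 (~11.8 dB)
c1 = capacity(B, snr)            # baseline: 500e6 * log2(16) = 2 Gbit/s
c2 = capacity(2 * B, snr)        # doubling bandwidth doubles capacity

# Doubling capacity at fixed bandwidth instead requires
# snr' = (1 + snr)**2 - 1, i.e. exponential growth in power:
snr_needed = (1.0 + snr) ** 2 - 1.0  # 255, a 17x power increase
print(c1 / 1e9, c2 / 1e9, snr_needed)  # 2.0 4.0 255.0
```

This asymmetry is exactly why UWB trades the cheap resource (spectrum) for the expensive one (transmit power).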
The proposed transceiver system targets low power and low complexity under the premise of high performance. Optimizations are made at both the algorithm and the architecture level for each element of the transceiver system. Low-power, hardware-efficient structures are first proposed for the core computation modules: a mixed-radix-algorithm-based pipelined architecture is proposed for the Fast Fourier Transform (FFT/IFFT) processor, and a cost-speed balanced Viterbi Decoder (VD) module is developed, with the aim of lowering the power consumption and increasing the processing speed. In addition, a low-complexity sign-bit-correlation-based symbol timing synchronization scheme is presented so as to detect and synchronize the OFDM packets robustly and accurately. Moreover, several state-of-the-art technologies are used for developing the other processing subsystems, and an entire MB-OFDM digital baseband transceiver system is integrated. The target device for the proposed transceiver system is a Xilinx Virtex 5 XC5VLX110T FPGA board. In order to validate the proposed transceiver system on the FPGA board, a unified algorithm-architecture-circuit hardware/software co-design environment for complex FPGA system development is presented in this work. The main objective of the proposed strategy is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible spent on the system verification procedure, so as to speed up the system development period. The presented co-design methodology has the advantages of being easy to use, covering all steps from algorithm proposal to hardware verification, and being applicable to almost all kinds of FPGA development. Because only the digital baseband transceiver system is developed in this thesis, the validation of transmitting signals through a wireless channel in real communication environments still requires the analog front-end and RF components. 
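The sign-bit simplification mentioned above replaces the full-precision multiplications of a cross-correlator with sign agreements, so the hardware needs no multipliers at all. A schematic sketch (the 8-sample pattern and the values below are invented for illustration and are not the MB-OFDM preamble):

```python
def sign(x):
    """Quantize a sample to its sign bit (+1 / -1)."""
    return 1 if x >= 0 else -1

def sign_bit_correlate(samples, preamble):
    """Cross-correlate using sign bits only: each tap contributes +1 on
    sign agreement and -1 on disagreement, so no multipliers are needed."""
    s = [sign(x) for x in samples]
    p = [sign(x) for x in preamble]
    return sum(a * b for a, b in zip(s, p))

# Toy example: a known 8-sample pattern detected in a noisy input
# whose signs all match the pattern.
preamble = [0.9, -1.1, 0.8, 1.2, -0.7, -0.9, 1.0, -1.3]
received = [0.7, -0.9, 1.1, 0.9, -0.5, -1.2, 0.8, -1.0]
print(sign_bit_correlate(received, preamble))  # 8 -> correlation peak
```

A packet detector then slides this correlator over the input and declares a packet when the output crosses a threshold; the amplitude information lost by the 1-bit quantization costs some noise robustness but removes the dominant hardware cost.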
However, by using the aforementioned hardware/software co-simulation methodology, the transmitter and receiver digital baseband systems can communicate with each other through the channel models proposed by the IEEE 802.15.3a research group, established in MATLAB. Thus, by simply adjusting the characteristics of each channel model, e.g. the mean excess delay and the center frequency, we can estimate the transmission performance of the proposed transceiver system in different communication situations. The main contributions of this thesis are: • A novel mixed-radix 128-point FFT algorithm using a multipath pipelined architecture is proposed. The complex multipliers for each processing stage are designed using modified shift-add architectures. The system word-length and twiddle word-length are compared and selected based on Signal to Quantization Noise Ratio (SQNR) and power analysis. • The IFFT processor performance is analyzed under different Block Floating Point (BFP) arithmetic configurations for overflow control, so as to find the best IFFT architecture based on the proposed FFT processor. • An innovative low-complexity timing synchronization and compensation scheme, which consists of Packet Detector (PD) and Timing Offset Estimation (TOE) functions, is employed for the MB-OFDM UWB receiver system. By simplifying the cross-correlation and maximum likelihood functions to sign-bit-only operations, the computational complexity is significantly reduced. • A 64-state soft-decision Viterbi Decoder system using a high-speed radix-4 Add-Compare-Select architecture is proposed. The Two-pointer Even algorithm is also introduced into the Trace Back unit with the aim of hardware efficiency. • Several state-of-the-art technologies are integrated into the complete baseband transceiver system, with the aim of implementing a highly optimized UWB communication system. 
• An improved design flow is proposed for complex system implementation, which can be used for general Field-Programmable Gate Array (FPGA) designs. The design method not only dramatically reduces the time for functional verification, but also provides automatic analyses, such as error and output-delay reports, for the implemented hardware systems. • A virtual communication environment is established for validating the proposed MB-OFDM transceiver system. This methodology proves easy to use and convenient for analyzing the digital baseband system without an analog front-end under different communication environments. This PhD thesis is organized in six chapters. Chapter 1 gives a brief introduction to the UWB field and the related work, along with the motivation of the MB-OFDM system development. Chapter 2 presents the general information and requirements of the MB-OFDM UWB wireless communication protocol. Chapter 3 presents the architecture of the MB-OFDM digital baseband transceiver system; the design of the proposed algorithm and architecture for each processing element is detailed in this chapter. The design challenges of such a system involve trade-off discussions among design complexity, power consumption, hardware cost, system performance, and other aspects; all these factors are analyzed and discussed. Chapter 4 proposes the hardware/software co-design methodology; each step of this design flow is detailed with examples that we met during system development. Then, taking advantage of this design strategy, the Virtual Communication procedure is carried out so as to test and analyze the proposed transceiver architecture. Experimental results from the co-simulation and the synthesis report of the implemented FPGA system are given in Chapter 5. Chapter 6 includes conclusions and future work, as well as the results derived from this PhD work.
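The add-compare-select (ACS) recursion at the heart of the Viterbi decoder mentioned above can be illustrated for a single trellis state; the sketch below is a generic radix-2 step with invented metrics (a radix-4 unit, as used in the thesis, effectively merges two such steps per clock), not the thesis's 64-state implementation.

```python
def acs(path_metrics, predecessors, branch_metrics):
    """One add-compare-select step for a single target trellis state.

    path_metrics   : dict state -> accumulated path metric
    predecessors   : states that have a branch into the target state
    branch_metrics : dict state -> metric of that branch
    Returns (survivor_state, new_metric) using the min-metric convention.
    """
    # Add: extend every incoming path by its branch metric.
    candidates = {s: path_metrics[s] + branch_metrics[s] for s in predecessors}
    # Compare & select: keep the best (smallest) candidate as survivor.
    survivor = min(candidates, key=candidates.get)
    return survivor, candidates[survivor]

# Toy 2-predecessor example (all metrics are made up):
pm = {0: 3.0, 1: 5.0}        # accumulated metrics of states 0 and 1
bm = {0: 2.0, 1: 0.5}        # branch metrics into the target state
survivor, metric = acs(pm, [0, 1], bm)
print(survivor, metric)       # (0, 5.0): path via state 0 survives
```

The survivor decisions recorded at every state and time step are what the trace-back unit later walks backwards to emit the decoded bits.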
Resumo:
Martensitic transformations (MTs) are defined as a change in the crystal structure to form a coherent phase, or multi-variant domain structures, out of the parent phase with the same composition, through small shuffles or cooperative atomic movements. Over the past century, MTs have been discovered in different materials, from steels to shape memory alloys, ceramics and smart materials. All of them show remarkable properties such as high mechanical strength, shape memory, superelasticity effects or ferroic functionalities such as piezoelectricity, electro- and magnetostriction, etc. Various models/theories have been developed, in synergy with the development of solid state physics, to understand why MTs generate such varied and rich microstructures with very interesting properties. Among the best accepted theories is the Phenomenological Theory of Martensitic Crystallography (PTMC), which predicts the habit plane and the orientation relationships between austenite and martensite. The reinterpretation of the PTMC theory within a continuum mechanics framework (CM-PTMC) explains the formation of multi-variant domain structures, while the Landau theory with inertial dynamics unravels the physical mechanisms of precursors and other dynamic behaviors. Crystal lattice dynamics reveals the acoustic softening of the lattice strain waves that gives rise to weak first-order displacive transformations. Despite the differences between the static and dynamic theories, given their origins in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), these theories must be inherently connected to one another and show certain common elements within a unified perspective of physics. 
However, the physical connections and distinctions between the theories/models have not been addressed to date, even though they are of critical importance for improving MT models and for the integrated development of models of coupled displacive-diffusive transformations. This thesis therefore began with two clear objectives. The first was to find the physical connections and distinctions among the MT models by means of a detailed theoretical analysis and numerical simulations. The second objective was to expand the Landau model to be able to study MTs in polycrystals, in the case of coupled displacive-diffusive transformations, and in the presence of dislocations. Starting with a review of the background, this work presents the physical foundations of the current MT models. Their ability to predict MTs is clarified by means of theoretical analysis and simulations of the microstructural evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with an irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and to the microelasticity model for predicting the static features of the MT, but provides a better interpretation of the dynamic behaviors. However, the applications of the Landau model to structural materials are limited by its complexity. The first result of this thesis is therefore the development of a nonlinear Landau model with an irreducible representation of the strains and inertial dynamics for polycrystals. The simulation demonstrates that the proposed model is physically consistent with CM-PTMC in its static description, and also permits the prediction of the classical 'C-shaped' phase diagram of martensitic nucleation modes activated by the combination of quenching temperature and applied stress conditions, correlated with the Landau transformation energy. 
Subsequently, the Landau model of MT is integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and the short-range diffusion of the elements during MT in steel. The displacive-diffusive transformation model includes the effects of grain-boundary relaxation for heterogeneous nucleation and the spatio-temporal evolution of the diffusion potentials and chemical mobilities through the coupling of computational tools and CALPHAD-type thermo-kinetic databases. The model is applied to study the microstructural evolution of polycrystalline carbon steels processed by quenching and partitioning (Q&P) in 2D. The microstructure and composition obtained from the simulation are compared with the available experimental data. The results show the important role played by the differences in diffusional mobility between the austenite and martensite phases in the carbon distribution in steels. Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model, in order to account for the morphological differences between steels and shape memory alloys with the same symmetry breaking. The nucleation of dislocations, the formation of 'butterfly' martensite and the redistribution of carbon after tempering are well reproduced in the 2D simulations of the microstructural evolution of representative steels. With these simulations we show that, by including the dislocations, a good agreement with the experimental data is obtained for these steels with respect to the morphology of the twin boundaries, the existence of retained austenite within the martensite, etc. Thus, based on an integrated model and on the codes developed during this thesis, a multiscale, multi-field modelling tool has been created. 
This tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase-field/Landau models at the mesoscale, and also includes the principles of crystallography and crystal-lattice dynamics at the microscale.
ABSTRACT
Martensitic transformation (MT), in a narrow sense, is defined as the change of the crystal structure to form a coherent phase, or multi-variant domain structures, from a parent phase with the same composition, by small shuffles or co-operative movements of atoms. Over the past century, MTs have been discovered in materials ranging from steels to shape memory alloys, ceramics, and smart materials. They lead to remarkable properties such as high strength, shape memory/superelasticity effects, and ferroic functionalities including piezoelectricity, electro- and magnetostriction, etc. Various theories/models have been developed, in synergy with the development of solid-state physics, to understand why MTs can generate these rich microstructures and give rise to intriguing properties. Among the well-established theories, the Phenomenological Theory of Martensitic Crystallography (PTMC) is able to predict the habit plane and the orientation relationship between austenite and martensite. The re-interpretation of the PTMC theory within a continuum mechanics framework (CM-PTMC) explains the formation of multi-variant domain structures, while the Landau theory with inertial dynamics unravels the physical origins of precursors and other dynamic behaviors. Crystal lattice dynamics unveils the acoustic softening of the lattice strain waves leading to the weak first-order displacive transformation, etc. Though differing in statics or dynamics owing to their origins in different branches of physics (e.g. continuum mechanics or crystal lattice dynamics), these theories should be inherently connected with each other and show certain elements in common within a unified perspective of physics.
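As an illustrative sketch (not the thesis's specific functional), a strain-based Landau free energy for a cubic-to-tetragonal MT is commonly written in the two deviatoric strains e_2 and e_3; the cubic term is what makes the transition weakly first order, consistent with the acoustic-softening picture above. Coefficients here are schematic:

```latex
% Schematic triple-well Landau free energy, cubic-to-tetragonal MT.
% A typically softens with temperature, A \propto (T - T_0); B, C > 0.
F(e_2, e_3) = \frac{A}{2}\left(e_2^2 + e_3^2\right)
            + \frac{B}{3}\, e_3\left(e_3^2 - 3 e_2^2\right)
            + \frac{C}{4}\left(e_2^2 + e_3^2\right)^2
```

The three tetragonal variants appear as the three degenerate minima of this potential below the transformation temperature.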
However, the physical connections and distinctions among the theories/models have not been addressed yet, although they are critical to further improving the models of MTs and to developing integrated models for more complex displacive-diffusive coupled transformations. Therefore, this thesis started with two objectives. The first one was to reveal the physical connections and distinctions among the models of MT by means of detailed theoretical analyses and numerical simulations. The second objective was to expand the Landau model to be able to study MTs in polycrystals, in the case of displacive-diffusive coupled transformations, and in the presence of dislocations. Starting with a comprehensive review, the physical kernels of the current models of MTs are presented. Their ability to predict MTs is clarified by means of theoretical analyses and simulations of the microstructure evolution of cubic-to-tetragonal and cubic-to-trigonal MTs in 3D. This analysis reveals that the Landau model with an irreducible representation of the transformation strain is equivalent to the CM-PTMC theory and the microelasticity model in predicting the static features of MTs, but provides a better interpretation of the dynamic behaviors. However, the applications of the Landau model in structural materials are limited by its complexity. Thus, the first result of this thesis is the development of a nonlinear Landau model with an irreducible representation of strains and inertial dynamics for polycrystals. The simulation demonstrates that the updated model is physically consistent with the CM-PTMC in statics, and also permits the prediction of a classical 'C-shaped' phase diagram of martensitic nucleation modes activated by the combination of quenching temperature and applied stress conditions interplaying with the Landau transformation energy.
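A minimal sketch of what "inertial dynamics" means here, assuming a displacement field u and a free-energy functional F (the symbols and the damping term are illustrative, not the thesis's exact formulation): unlike purely relaxational (time-dependent Ginzburg-Landau) dynamics, the equation of motion keeps the inertial term, so strain waves and underdamped precursor phenomena can be captured.

```latex
\rho \,\frac{\partial^2 u_i}{\partial t^2}
  = \frac{\partial \sigma_{ij}}{\partial x_j}
  - \gamma \,\frac{\partial u_i}{\partial t},
\qquad
\sigma_{ij} = \frac{\delta F}{\delta \varepsilon_{ij}},
\qquad
\varepsilon_{ij} = \tfrac{1}{2}\left(\partial_i u_j + \partial_j u_i\right)
```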
Next, the Landau model of MT is further integrated with a quantitative diffusional transformation model to elucidate the atomic relaxation and short-range diffusion of elements during the MT in steel. The model for displacive-diffusive transformations includes the effects of grain-boundary relaxation for heterogeneous nucleation and the spatio-temporal evolution of diffusion potentials and chemical mobilities by means of coupling with a CALPHAD-type thermo-kinetic calculation engine and database. The model is applied to study the microstructure evolution of polycrystalline carbon steels processed by the Quenching and Partitioning (Q&P) route in 2D. The simulated mixed microstructure and composition distribution are compared with available experimental data. The results show the important role played by the differences in diffusion mobility between austenite and martensite in carbon partitioning in these steels. Finally, a multi-field model is proposed by incorporating a coarse-grained dislocation model into the developed Landau model to account for the morphological differences between steels and shape memory alloys with the same symmetry breaking. Dislocation nucleation, the formation of 'butterfly' martensite, and the redistribution of carbon after tempering are well represented in the 2D simulations of the microstructure evolution of representative steels. With these simulations, we demonstrate that the dislocations account for the experimental observations of rough twin boundaries, retained austenite within martensite, etc. in steels. Thus, based on the integrated model and the in-house codes developed in this thesis, a preliminary multi-field, multiscale modeling tool has been built. The new tool couples thermodynamics and continuum mechanics at the macroscale with diffusion kinetics and phase-field/Landau models at the mesoscale, and also includes the essentials of crystallography and crystal lattice dynamics at the microscale.
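The role of the austenite/martensite mobility contrast in carbon partitioning can be illustrated with a minimal 1-D finite-difference sketch. All names and values below are hypothetical placeholders, not CALPHAD data or the thesis's in-house code:

```python
import numpy as np

def partition_carbon(n=100, steps=2000, dt=0.1):
    """Toy 1-D carbon partitioning during the Q&P partitioning step.

    Left half: supersaturated martensite (fast carbon diffusion, BCC).
    Right half: austenite (slow carbon diffusion, FCC).
    Explicit conservative scheme, zero-flux boundaries, dx = 1.
    """
    x = np.arange(n)
    c = np.where(x < n // 2, 1.0, 0.05)   # illustrative carbon content
    M = np.where(x < n // 2, 1.0, 0.01)   # illustrative mobility contrast
    for _ in range(steps):
        Mf = 0.5 * (M[:-1] + M[1:])       # mobility at cell faces
        flux = Mf * (c[1:] - c[:-1])      # M * dc/dx at the faces
        dc = np.zeros(n)
        dc[1:-1] = flux[1:] - flux[:-1]   # flux divergence in the interior
        dc[0], dc[-1] = flux[0], -flux[-1]  # zero-flux boundaries
        c = c + dt * dc
    return c

c = partition_carbon()
# Carbon leaves the martensite near the interface and enriches the
# adjacent austenite; the low FCC mobility keeps the enrichment localized.
```

Because the scheme is conservative, total carbon is preserved, so the mobility contrast shows up purely as an asymmetric, localized enrichment profile on the austenite side of the interface.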
Resumo:
Transcriptional repressors can be characterized by their range of action on promoters and enhancers. Short-range repressors interact over distances of 50-150 bp to inhibit, or quench, either upstream activators or the basal transcription complex. In contrast, long-range repressors act over several kilobases to silence basal promoters. We describe recent progress in characterizing the functional properties of one such long-range element in the Drosophila embryo and discuss the contrasting types of gene regulation that are made possible by short- and long-range repressors.
Resumo:
Positioned nucleosomes contribute to both the structure and the function of the chromatin fiber and can play a decisive role in controlling gene expression. We have mapped, at high resolution, the translational positions adopted by limiting amounts of core histone octamers reconstituted onto 4.4 kb of DNA comprising the entire chicken adult beta-globin gene, its enhancer, and flanking sequences. The octamer displays extensive variation in its affinity for different positioning sites, the range exhibited being about 2 orders of magnitude greater than that of the initial binding of the octamer. Strong positioning sites are located 5' and 3' of the globin gene and in the second intron but are absent from the coding regions. These sites exhibit a periodicity (approximately 200 bp) similar to the average spacing of nucleosomes on the inactive beta-globin gene in vivo, which could indicate their involvement in packaging the gene into higher-order chromatin structure. Overlapping, alternative octamer positioning sites commonly exhibit spacings of 20 and 40 bp, but not of 10 bp. These short-range periodicities could reflect features of the core particle structure contributing to the pronounced sequence-dependent manner in which the core histone octamer interacts with DNA.
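The ~200 bp periodicity of strong positioning sites described above can be recovered from a site map by autocorrelating a base-pair occupancy vector. A minimal sketch on synthetic data (the site list is invented for illustration; a real map would come from the reconstitution experiments):

```python
import numpy as np

def dominant_spacing(positions, length, max_lag=400):
    """Return the lag (in bp) with the strongest autocorrelation,
    i.e. the dominant spacing between positioning sites."""
    occ = np.zeros(length)
    occ[np.asarray(positions)] = 1.0
    # full autocorrelation; keep the non-negative lags only
    ac = np.correlate(occ, occ, mode="full")[length - 1:]
    return int(np.argmax(ac[1:max_lag + 1])) + 1  # skip the lag-0 peak

# Synthetic map: strong sites every 200 bp across a 4.4 kb fragment,
# mimicking the periodicity reported for the beta-globin locus.
sites = np.arange(0, 4400, 200)
```

The same routine applied with a smaller `max_lag` to a map of overlapping alternative sites would expose the short-range 20/40 bp spacings in the same way.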