985 results for Wormhole Framework Structures
Abstract:
This thesis is concerned with in-situ time-, temperature- and pressure-resolved synchrotron X-ray powder diffraction investigations of a variety of inorganic compounds with two-dimensional layer structures and three-dimensional framework structures. In particular, phase stability, reaction kinetics, thermal expansion and compressibility at non-ambient conditions have been studied for 1) phosphates with composition MIV(HPO4)2·nH2O (MIV = Ti, Zr); 2) pyrophosphates and pyrovanadates with composition MIVX2O7 (MIV = Ti, Zr and X = P, V); 3) molybdates with composition ZrMo2O8. The results are compiled in seven published papers and two manuscripts. Reaction kinetics for the hydrothermal synthesis of α-Ti(HPO4)2·H2O and the intercalation of alkane diamines in α-Zr(HPO4)2·H2O were studied using time-resolved experiments. In the high-temperature transformation of γ-Ti(PO4)(H2PO4)·2H2O to TiP2O7, three intermediate phases, γ'-Ti(PO4)(H2PO4)·(2-x)H2O, β-Ti(PO4)(H2PO4) and Ti(PO4)(H2P2O7)0.5, were found to crystallise at 323, 373 and 748 K, respectively. A new tetragonal three-dimensional phosphate phase called τ-Zr(HPO4)2 was prepared, and its structure was subsequently determined and refined using the Rietveld method. In the high-temperature transformation from τ-Zr(HPO4)2 to cubic α-ZrP2O7, two new orthorhombic intermediate phases were found. The first intermediate phase, ρ-Zr(HPO4)2, forms at 598 K, and the second phase, β-ZrP2O7, at 688 K. Their respective structures were solved using direct methods and refined using the Rietveld method. In-situ high-pressure studies of τ-Zr(HPO4)2 revealed two new phases, tetragonal ν-Zr(HPO4)2 and orthorhombic ω-Zr(HPO4)2, that crystallise at 1.1 and 8.2 GPa, respectively. The structure of ν-Zr(HPO4)2 was solved and refined using the Rietveld method. The high-pressure properties of the pyrophosphates ZrP2O7 and TiP2O7, and the pyrovanadate ZrV2O7, were studied up to 40 GPa.
Both pyrophosphates display smooth compression up to the highest pressures, while ZrV2O7 undergoes a phase transformation at 1.38 GPa from cubic to pseudo-tetragonal β-ZrV2O7 and becomes X-ray amorphous at pressures above 4 GPa. In-situ high-pressure studies of trigonal α-ZrMo2O8 revealed the existence of two new phases, monoclinic δ-ZrMo2O8 and triclinic ε-ZrMo2O8, that crystallise at 1.1 and 2.5 GPa, respectively. The structure of δ-ZrMo2O8 was solved by direct methods and refined using the Rietveld method.
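Compressibility results like those above are commonly obtained by fitting an equation of state to the measured pressure-volume data. As a minimal sketch (the parameters below are purely illustrative, not the refined values from the thesis), the second-order Birch-Murnaghan equation of state can be inverted to find the compression at a given pressure:

```python
def birch_murnaghan_2nd(V, V0, K0):
    """Second-order Birch-Murnaghan equation of state: P(V) in GPa."""
    x = V0 / V
    return 1.5 * K0 * (x ** (7 / 3) - x ** (5 / 3))

def volume_at_pressure(P, V0, K0):
    """Invert P(V) by bisection (P is monotonically decreasing in V)."""
    lo, hi = 0.5 * V0, V0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if birch_murnaghan_2nd(mid, V0, K0) > P:
            lo = mid  # too compressed: the solution lies at larger V
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters only (hypothetical unit-cell volume and bulk modulus):
V0, K0 = 100.0, 80.0  # A^3, GPa
for P in (1.1, 8.2):  # the pressures where tau-Zr(HPO4)2 transformed
    V = volume_at_pressure(P, V0, K0)
    print(P, round(V / V0, 4))  # fractional compression V/V0 at pressure P
```

Fitting V0 and K0 against diffraction-derived unit-cell volumes at each pressure point is what yields the bulk modulus reported in such studies.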
Abstract:
Mixed ammonia-water vapor postsynthesis treatment provides a simple and convenient method for stabilizing mesostructured silica films. X-ray diffraction, transmission electron microscopy, nitrogen adsorption/desorption, and solid-state NMR (13C, 29Si) were applied to study the effects of mixed ammonia-water vapor at 90 °C on the mesostructure of the films. An increased cross-linking of the silica network was observed. Subsequent calcination of the silica films was seen to cause a bimodal pore-size distribution, with an accompanying increase in the volume and surface-area ratios of the primary (d = 3 nm) to secondary (d = 5-30 nm) pores. Additionally, the mixed ammonia-water treatment was observed to narrow the primary pore-size distribution. These findings have implications for thin-film-based applications and devices, such as sensors, membranes, or surfaces for heterogeneous catalysis.
Abstract:
Theory predicts the water hexamer to be the smallest water cluster with a three-dimensional hydrogen-bonding network as its minimum-energy structure. There are several possible low-energy isomers, and calculations with different methods and basis sets assign them different relative stabilities. Previous experimental work has provided evidence for the cage, book, and cyclic isomers, but no experiment has identified multiple coexisting structures. Here, we report that broadband rotational spectroscopy in a pulsed supersonic expansion unambiguously identifies all three isomers; we determined their oxygen framework structures by means of oxygen-18-substituted water (H₂¹⁸O). Relative isomer populations at different expansion conditions establish that the cage isomer is the minimum-energy structure. Rotational spectra consistent with predicted heptamer and nonamer structures have also been identified.
Abstract:
It is becoming increasingly clear that the cell nucleus is a highly structured organelle. Because of its tight compartmentalization, it is generally believed that a framework must exist that is responsible for maintaining this spatial organization. Over the last twenty years, many investigations have been devoted to identifying the nuclear framework. Structures isolated by different techniques have been obtained in vitro and are variously referred to as the nuclear matrix, nucleoskeleton or nuclear scaffold. Many different functions, such as DNA replication and repair, and mRNA transcription, processing and transport, have been described as occurring in close association with these structures. However, there is still much debate as to whether any of these preparations corresponds to a nuclear framework that exists in vivo. In this article we summarize the most commonly used methods for obtaining preparations of nuclear frameworks, and we also stress the possible artifacts that can be created in vitro during the isolation procedures. Emphasis is also placed on the protein composition of the frameworks, as well as on some possible signalling functions that have recently been described as occurring in tight association with the nuclear matrix.
Abstract:
The structural transformations between cesium silver-copper cyanides under modest conditions, both in solution and in the solid state, are described. Three new cesium silver(I) copper(I) cyanides with three-dimensional (3-D) framework structures were prepared as single crystals from a one-pot reaction initially heated under hydrothermal conditions. The first product to appear, Cs3Ag2Cu3(CN)8 (I), when left in contact with the supernatant, produced CsAgCu(CN)3 (II) and CsAgCu(CN)3·1/3H2O (III) over a few months via a series of thermodynamically controlled cascade reactions. Crystals of the hydrate (III) can be dehydrated to polycrystalline CsAgCu(CN)3 (II) on heating at 100 °C in a remarkable solid-state transformation involving substantial breaking and reconnection of metal-cyanide linkages. Astonishingly, the conversion between the two known polymorphs of CsAg2Cu(CN)4, which also involves a major change in connectivity and topology, occurs at 180 °C as a single-crystal-to-single-crystal transformation. Structural features of note in these materials include the presence of helical copper-cyanide chains in (I) and (II), which in the latter compound produce a chiral material. In (II) and (III), the silver-copper cyanide networks are both self- and interpenetrating, features also seen in the known polymorphs of CsAg2Cu(CN)4.
Abstract:
The most important class of zeotype compounds are the thio- and selenophosphates of the transition metals. The aim of this dissertation was the synthesis and characterization of new uranium thiophosphates. The compounds prepared contain tetravalent uranium cations coordinated by eight sulfur atoms. Since the thiophosphate anions act in most cases as bidentate ligands, three-dimensional networks with pseudo-tetrahedrally coordinated metal centres are formed. In the compound U(P2S6)2, three identical diamond-like networks interpenetrate, thereby achieving optimal filling of space. Introducing alkali-metal cations into the system leads to a large number of new compounds whose properties are determined by the stoichiometry of the starting materials and by the cation radii. For example, the crystal structure of Na2U(PS4)2 contains two-dimensional anionic [U(PS4)2]n layers, whereas the analogous compound CsLiU(PS4)2 possesses a porous three-dimensional framework structure. A comparison of the quaternary and quinary compounds investigated shows a correlation between the cation radius and the pore diameter, which suggests that the alkali-metal cations act as templates in the assembly of the anionic substructure. The new compounds were synthesized from reactive polysulfide fluxes or by dissolving amorphous precursors in molten alkali-metal chlorides. The crystal structures were determined by single-crystal X-ray methods. A comparison of the magnetic properties of the compounds shows that U(IV) is present in all cases investigated. The substances exhibit paramagnetic behaviour; in UP2S7 and CsLiU(PS4)2, antiferromagnetic interactions between neighbouring uranium atoms are also detectable.
Abstract:
The enormous impact of crystal engineering on modern solid-state chemistry benefits from the connection between a typically basic science field and the word "engineering". Regrettably, the engineering aspects of organic or metal-organic crystalline materials have so far been limited to descriptive structural features, sometimes entangled with topological aspects, but only rarely connected with true material design. Such design should include not only the fabrication and structural description of the solids at the micro- and nanoscopic level, but also proper reverse engineering, a fundamental discipline for engineers. Translated into scientific language, reverse crystal engineering refers to a dedicated and accurate analysis of how the building blocks contribute to generating a given material property. This would enable a more appropriate design of new crystalline materials. We propose here the application of reverse crystal engineering to the optical properties of organic and metal-organic framework structures, applying the distributed atomic polarizability approach that we have extensively investigated in the past few years [1,2].
Abstract:
Structural robustness is an emergent concept related to the structural response to damage. At present, robustness is not well defined and much controversy still surrounds the subject. Although robustness has seen growing interest as a consequence of the catastrophic effects of extreme events, the concept can also be very useful when applied to more probable exposure scenarios, such as deterioration, among others. This paper intends to be a contribution to the definition of structural robustness, especially in the analysis of reinforced concrete structures subjected to corrosion. To achieve this, several proposed robustness definitions and indicators, as well as commonly misunderstood concepts, are first analyzed and compared. From this point, and aiming at a concept that can be applied to most types of structures and damage scenarios, a robustness definition is proposed. To illustrate the proposed concept, an example of corroded reinforced concrete structures is analyzed using nonlinear numerical methods based on a continuum strong discontinuities approach and isotropic damage models for concrete. Finally, the robustness of the presented example is assessed.
Abstract:
A novel framework for the probabilistic-based structural assessment of existing structures, which combines model identification and reliability assessment procedures while considering different sources of uncertainty in an objective way, is presented in this paper. A short description of structural assessment applications reported in the literature is given first. Then, the developed model identification procedure, supported by a robust optimization algorithm, is presented. Special attention is given to both experimental and numerical errors, which are considered in this algorithm's convergence criterion. An updated numerical model is obtained from this process. The reliability assessment procedure, which considers a probabilistic model of the structure under analysis, is then introduced, incorporating the results of the model identification procedure. The developed model is then updated, as new data are acquired, through a Bayesian inference algorithm, explicitly addressing statistical uncertainty. Finally, the developed framework is validated with a set of reinforced concrete beams, which were loaded up to failure in the laboratory.
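The Bayesian updating step described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; a conjugate normal-normal model with made-up numbers stands in for it, showing how statistical uncertainty shrinks as new test data arrive:

```python
import math

# Prior belief about a structural resistance parameter (hypothetical
# values: e.g. a beam capacity in kN with its prior standard deviation).
prior_mean, prior_var = 100.0, 15.0 ** 2
meas_var = 5.0 ** 2  # assumed measurement-error variance

def bayes_update(mean, var, observation, obs_var):
    """Conjugate normal-normal update: returns posterior mean and variance."""
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mean = post_var * (mean / var + observation / obs_var)
    return post_mean, post_var

# Each newly acquired data point tightens the posterior.
mean, var = prior_mean, prior_var
for obs in [92.0, 95.0, 93.5]:
    mean, var = bayes_update(mean, var, obs, meas_var)

print(round(mean, 2), round(math.sqrt(var), 2))  # posterior mean, std. dev.
```

The posterior standard deviation after three observations is well below the prior's 15 units, which is the "explicitly addressing statistical uncertainty" aspect in miniature.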
Abstract:
Includes bibliography
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, as a result, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small- and medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
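The quantization problem underlying this thesis can be illustrated with a minimal sketch (a generic illustration, not HOPLITE or the thesis's models): rounding a signal to a given number of fractional bits introduces a round-off error whose power can be estimated by Monte-Carlo simulation and compared against the classical uniform-quantization prediction q²/12, where q = 2^-b is the quantization step:

```python
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def noise_power(frac_bits, n_samples=100000, seed=0):
    """Monte-Carlo estimate of the mean squared round-off error."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        e = x - quantize(x, frac_bits)
        err += e * e
    return err / n_samples

# Uniform-quantization theory predicts noise power q^2 / 12, q = 2^-b.
for b in (8, 12, 16):
    q = 2.0 ** -b
    print(b, noise_power(b), q * q / 12)
```

Word-length optimization then searches over `frac_bits` assignments, trading this noise against hardware cost; the greedy searches the thesis accelerates operate on exactly this kind of cost/accuracy evaluation.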
Abstract:
This paper presents a new framework based on optimal control for defining new dynamic visual controllers to carry out the guidance of any serial-link structure. The proposed general method employs optimal control to obtain the desired behaviour in the joint space based on a specified cost function which determines how the control effort is distributed over the joints. The proposed approach allows the development of new direct visual controllers for any mechanical joint system with redundancy. Finally, the authors show experimental results and verifications on a real robotic system for some of the controllers derived from the control framework.
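How a cost function distributes control effort over the joints can be sketched with a standard weighted least-squares resolved-rate law (a textbook construction, not the paper's controller; the Jacobian, error and weights below are made up): minimising qdot^T W qdot subject to the task constraint J qdot = k e gives the closed-form solution qdot = W^-1 J^T (J W^-1 J^T)^-1 k e.

```python
import numpy as np

def weighted_rate_control(J, e, W, gain=1.0):
    """Joint velocities that satisfy J @ qdot = gain * e while
    minimising the weighted effort qdot.T @ W @ qdot."""
    Winv = np.linalg.inv(W)
    JWJt = J @ Winv @ J.T
    return Winv @ J.T @ np.linalg.solve(JWJt, gain * e)

# Illustrative redundant system: 2 task dimensions, 3 joints.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.8]])   # hypothetical task Jacobian
e = np.array([0.1, -0.05])        # visual feature error
W = np.diag([1.0, 1.0, 10.0])     # cost: penalize joint 3 heavily

qdot = weighted_rate_control(J, e, W)
print(qdot)  # joint 3 moves least, since its effort is most costly
```

Changing the diagonal of W redistributes the same task motion across the redundant joints, which is the role the paper's cost function plays in its optimal-control formulation.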
Abstract:
A major weakness among the loading models proposed in recent years for pedestrians walking on flexible structures is the various uncorroborated assumptions made in their development. This applies to the spatio-temporal characteristics of pedestrian loading and to the nature of multi-object interactions. To alleviate this problem, a framework for the determination of localised pedestrian forces on full-scale structures is presented using wireless attitude and heading reference systems (AHRS). An AHRS comprises a triad of tri-axial accelerometers, gyroscopes and magnetometers managed by a dedicated data-processing unit, allowing motion in three-dimensional space to be reconstructed. A pedestrian loading model based on a single-point inertial measurement from an AHRS is derived and shown to perform well against benchmark data collected on an instrumented treadmill. Unlike other models, the current model does not take any predefined form, nor does it require any extrapolation as to the timing and amplitude of pedestrian loading. In order to assess correctly the influence of a moving pedestrian on the behaviour of a structure, an algorithm for tracking the point of application of the pedestrian force is developed based on data from a single AHRS attached to a foot. A set of controlled walking tests with a single pedestrian was conducted on a real footbridge for validation purposes. A remarkably good match between the measured and simulated bridge responses is found, confirming the applicability of the proposed framework.
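The idea of a single-point inertial loading model can be sketched in its simplest form: treating the pedestrian as a lumped mass m, Newton's second law gives the vertical ground reaction force as F(t) ≈ m(g + a(t)), where a(t) is the measured vertical acceleration of the body's centre of mass. The sketch below uses synthetic data and is a simplification of the authors' framework, which additionally handles sensor orientation and force localisation:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vertical_grf(mass_kg, accel_series):
    """Approximate vertical ground reaction force from a single
    body-mounted accelerometer: F(t) = m * (g + a(t))."""
    return [mass_kg * (G + a) for a in accel_series]

# Synthetic vertical acceleration of the body centre of mass while
# walking at a ~2 Hz pacing rate (illustrative, not measured data).
f_pace, m = 2.0, 75.0
t = [i / 100.0 for i in range(200)]  # 2 s sampled at 100 Hz
a = [0.3 * G * math.sin(2 * math.pi * f_pace * ti) for ti in t]

force = vertical_grf(m, a)
static = m * G  # static body weight
print(max(force) / static)  # dynamic load factor peaks near 1.3 here
```

Because the model is driven directly by the measured acceleration time history, it needs no predefined footfall shape, which mirrors the abstract's point that the model "does not take any predefined form".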
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08