Abstract:
New Sr-Nd-Pb-Hf data require the existence of at least four mantle components in the genesis of basalts from the North Atlantic Igneous Province (NAIP): (1) one (or more likely a small range of) enriched component(s) within the Iceland plume, (2) a depleted component within the Iceland plume (distinct from the shallow N-MORB source), (3) a depleted sheath surrounding the plume and (4) shallow N-MORB source mantle. These components have been available since the major phase of igneous activity associated with plume head impact during Paleogene times. In Hf-Nd isotope space, samples from Iceland, DSDP Leg 49 (Sites 407, 408 and 409), ODP Legs 152 and 163 (southeast Greenland margin), the Reykjanes Ridge, Kolbeinsey Ridge and DSDP Leg 38 (Site 348) define fields that are oblique to the main ocean island basalt array and extend toward a component with higher 176Hf/177Hf than the N-MORB source available prior to arrival of the plume, as indicated by the compositions of Cretaceous basalts from Goban Spur (~95 Ma). Aside from Goban Spur, only basalts from Hatton Bank on the oceanward side of the Rockall Plateau (DSDP Leg 81) lie consistently within the field of N-MORB, which indicates that the compositional influence of the plume did not reach this far south and east ~55 Ma ago. Thus, Hf-Nd isotope systematics are consistent with previous studies which indicate that shallow MORB-source mantle does not represent the depleted component within the Iceland plume (Thirlwall, J. Geol. Soc. London 152 (1995) 991-996; Hards et al., J. Geol. Soc. London 152 (1995) 1003-1009; Fitton et al., 1997 doi:10.1016/S0012-821X(97)00170-2). They also indicate that the depleted component is a long-lived and intrinsic feature of the Iceland plume, generated during an ancient melting event in which a mineral (such as garnet) with a high Lu/Hf was a residual phase.
Collectively, these data suggest a model for the Iceland plume in which a heterogeneous core, derived from the lower mantle, consists of 'enriched' streaks or blobs dispersed in a more depleted matrix. A distinguishing feature of both the enriched and depleted components is high Nb/Y for a given Zr/Y (i.e. positive DeltaNb), but the enriched component has higher Sr and Pb isotope ratios, combined with lower epsilon-Nd and epsilon-Hf. This heterogeneous core is surrounded by a sheath of depleted material, similar to the depleted component of the Iceland plume in its epsilon-Nd and epsilon-Hf, but with lower 87Sr/86Sr, 208Pb/204Pb and negative DeltaNb; this material was probably entrained from near the 670 km discontinuity when the plume stalled at the boundary between the upper and lower mantle. The plume sheath displaced more normal MORB asthenosphere (distinguished by its lower epsilon-Hf for a given epsilon-Nd or Zr/Nb ratio), which existed in the North Atlantic prior to plume impact. Preliminary data on MORBs from near the Azores plume suggest that much of the North Atlantic may be 'polluted' not only by enriched plume material but also by depleted material similar to the Iceland plume sheath. If this hypothesis is correct, it may provide a general explanation for some of the compositional diversity and variations in inferred depth of melting (Klein and Langmuir, 1987 doi:10.1029/JB092iB08p08089) along the MAR in the North Atlantic.
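As an illustration of how such components combine, the mixing relations implied above can be sketched with a minimal two-component calculation in epsilon-Nd / epsilon-Hf space. The endmember compositions and concentrations below are hypothetical placeholders, not values from the study:

```python
# Two-component mixing in epsilon-Nd / epsilon-Hf space.
# Endmember concentrations (ppm) and epsilon values are illustrative
# placeholders, not measured values.

def mix_epsilon(f, c1, eps1, c2, eps2):
    """Epsilon value of a mixture with mass fraction f of component 1.

    Epsilon values mix weighted by element concentration, since epsilon
    is (to first order) linear in the isotope ratio.
    """
    return (f * c1 * eps1 + (1 - f) * c2 * eps2) / (f * c1 + (1 - f) * c2)

# Hypothetical enriched plume component (1) and depleted sheath (2)
Nd1, eNd1 = 30.0, +4.0
Nd2, eNd2 = 5.0, +9.0
Hf1, eHf1 = 5.0, +8.0
Hf2, eHf2 = 1.5, +15.0

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    eNd = mix_epsilon(f, Nd1, eNd1, Nd2, eNd2)
    eHf = mix_epsilon(f, Hf1, eHf1, Hf2, eHf2)
    print(f"f(enriched)={f:.2f}  eNd={eNd:+.2f}  eHf={eHf:+.2f}")
```

Because the Nd/Hf concentration ratios of the two endmembers differ, the mixing trajectory in epsilon-Nd vs. epsilon-Hf space is hyperbolic rather than straight, which is one reason oblique arrays can arise.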
Abstract:
The proposed origins for the Enriched Mantle I component are many and various, and some require an arbitrary addition of an exotic component, be it pure sediment or an enriched melt from the subcontinental lithosphere. Together with Pitcairn, the Walvis Ridge is the 'type-locality' for the Enriched Mantle I (EMI) component. We analyzed basalts from DSDP Site 525A, Site 527 and Site 528 on the Walvis Ridge with the aim of constraining the history of its source. The isotopic compositions we measured for the three sites overlap with the values obtained by Richardson et al. (1982a) and extend towards less radiogenic Sr and more radiogenic Pb and Nd isotopic compositions. We used our new trace element and radiogenic isotope (Hf, Nd, Pb and Sr) characterization in combination with the literature data to produce the simplest possible model that satisfies the trace element and isotopic constraints. Although the elevated 207Pb/204Pb with respect to 206Pb/204Pb predicts an ancient origin for EMI, none of the proposed origins had modeled it as such. The data are consistent with the EMI composition being formed by the addition of a melt to a mantle with bulk Earth-like composition, followed by extraction of a low-degree melt. The timing of these two events is such that the metasomatism has to have taken place prior to 4 Ga and the subsequent melt removal before 3.5 Ga. This confirms the expectation of an ancient character for the EMI component. The Walvis Ridge data show two distinct two-component mixing trends: one formed by the less enriched Site 527 and Site 528 basalts and one formed by the Site 525A basalts. The two trends have the EMI endmember in common. The less depleted end of the Site 527-Site 528 basalts is FOZO-like and can be explained by the addition of a recycled component (basaltic oceanic crust plus sediment). This recycled component was altered during subduction.
The sense and magnitude of the chemical fractionation resulting from the subduction alteration are in agreement with dehydration experiments on basalts and sediment. Compared to other EMI-like basalts, the Walvis Ridge basalts have flatter REE patterns and show less fractionation between large ion lithophile elements and heavy REE. Using the isotopic compositions as constraints on the parent-daughter ratios, we were able to model the trace element patterns of the basalts as melting of between 5 and 10% for Site 525A and between 10 and 15% for the depleted end of the Site 528-Site 527 array. In all cases a significant portion of the melting takes place in the garnet stability field.
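The melting model invoked above can be sketched with the standard batch melting equation, C_liq = C_0 / (D + F(1 - D)). The bulk partition coefficients and source concentration used here are generic garnet-bearing placeholders, not the fitted values from the study:

```python
# Modal batch melting: C_liq = C_0 / (D + F * (1 - D))
# D: bulk solid/melt partition coefficient, F: melt fraction.
# D values below are generic garnet-peridotite placeholders.

def batch_melt(c0, d, f):
    """Concentration in the liquid for modal batch melting."""
    return c0 / (d + f * (1.0 - d))

# Heavy REE are compatible in garnet (large D), so residual garnet
# depresses HREE in the melt and steepens the REE pattern.
c0 = 1.0                                   # source at 1x chondrite
D = {"La": 0.002, "Sm": 0.05, "Lu": 0.5}   # assumed bulk D with garnet

for F in (0.05, 0.10, 0.15):
    pattern = {el: round(batch_melt(c0, d, F), 2) for el, d in D.items()}
    print(f"F={F:.2f}  {pattern}")
```

The qualitative behavior is the relevant point: at the 5-15% melt fractions modeled, incompatible elements (La) are strongly enriched while garnet-compatible HREE (Lu) stay near source levels, flattening or steepening the pattern depending on F.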
Abstract:
The Astoria submarine fan, located off the coast of Washington and Oregon, has grown throughout the Pleistocene from continental input delivered by the Columbia River drainage system. Enormous floods from the sudden release of glacial lake water occurred periodically during the Pleistocene, carrying vast amounts of sediment to the Pacific Ocean. DSDP Site 174, located on the southern distal edge of the Astoria Fan, is composed of 879 m of terrigenous sediments. The section is divided into two major units separated by a distinct seismic discontinuity: an upper, turbidite fan unit (Unit I), and an underlying finer-grained unit (Unit II). Both units have overlapping ranges of Nd and Hf isotope compositions, with the majority of samples having epsilon-Nd values of -7.1 to -15.2 and epsilon-Hf values of -6.2 to -20.0; the most notable exception is the uppermost sample in the section, which is identical to modern Columbia River sediment. Nd depleted mantle model ages for the site range from 2.0 to 1.2 Ga and are consistent with derivation from cratonic Proterozoic source regions, rather than Cenozoic and Mesozoic terranes proximal to the Washington-Oregon coast. The Astoria Fan sediments have significantly less radiogenic Nd (and Hf) isotopic compositions than present-day Columbia River sediment (epsilon-Nd = -3 to -4; [Goldstein, S.J., Jacobsen, S.B., 1987. Nd and Sr isotopic systematics of river water suspended material: implications for crustal evolution. Earth. Planet. Sci. Lett. 87, 249-265; doi:10.1016/0012-821X(88)90013-1]), and suggest that outburst flooding, tapping Proterozoic source regions, was the dominant sediment transport mechanism in the genesis and construction of the Astoria Fan. Pb isotopes form a highly linear 207Pb/204Pb - 206Pb/204Pb array, and indicate the sediments are a binary mixture of two disparate sources with isotopic compositions similar to Proterozoic Belt Supergroup metasediments and Columbia River Basalts.
The combined major, trace and isotopic data argue that outburst flooding was responsible for depositing the majority (top 630 m) of the sediment in the Astoria Fan.
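A binary Pb mixture of the kind inferred above is exactly linear in 207Pb/204Pb - 206Pb/204Pb space, because both ratios share the common 204Pb denominator. A minimal sketch with hypothetical endmember values (not the measured compositions):

```python
# Binary mixing of Pb between two sediment sources. Endmember Pb
# concentrations and isotope ratios below are rough illustrative picks.

def mix_ratio(f, c1, r1, c2, r2):
    """Isotope ratio of a mixture with mass fraction f of source 1.

    c1, c2: Pb concentrations; r1, r2: isotope ratios of the endmembers.
    """
    return (f * c1 * r1 + (1 - f) * c2 * r2) / (f * c1 + (1 - f) * c2)

# Hypothetical endmembers: Belt Supergroup-like metasediment vs CRB-like
belt = {"Pb": 20.0, "r206": 19.5, "r207": 15.75}
crb  = {"Pb": 5.0,  "r206": 18.8, "r207": 15.60}

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    r206 = mix_ratio(f, belt["Pb"], belt["r206"], crb["Pb"], crb["r206"])
    r207 = mix_ratio(f, belt["Pb"], belt["r207"], crb["Pb"], crb["r207"])
    print(f"f(Belt)={f:.2f}  206/204={r206:.3f}  207/204={r207:.3f}")
```

Even though the mass-fraction spacing along the line is non-uniform (it depends on the Pb concentration contrast), every mixture plots on the same straight line, which is why a highly linear array is diagnostic of binary mixing.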
Abstract:
Oceanic sediments contain the products of erosion of continental crust, biologic activity and chemical precipitation. These processes create a large diversity in their chemical and isotopic compositions. Here we focus on the influence of the distance from a continental platform on the trace element and isotopic compositions of sediments deposited on the ocean floor, and highlight the role of zircons in decoupling high-field-strength elements and Hf isotopic compositions from other trace elements and Nd isotopic compositions. We report major and trace element concentrations as well as Sr and Hf isotopic data for 80 sediments from the Lesser Antilles forearc region. The trace-element characteristics and the Sr and Hf isotopic compositions are generally dominated by detrital material from the continental crust but are also variably influenced by chemical or biogenic carbonate and pure biogenic silica. Next to the South American continent, at DSDP Site 144 and on Barbados Island, the sediments, coarse quartz arenites, exhibit marked Zr and Hf excesses that we attribute to the presence of zircon. In contrast, the sediments from DSDP Site 543, which were deposited farther away from the continental platform, consist of fine clay and show strong deficiencies in Zr and Hf. The enrichment or depletion of Zr-Hf is coupled to large changes in Hf isotopic compositions (-30 < epsilon-Hf < +4) that vary independently of the Nd isotopes. We interpret this feature as a clear expression of the "zircon effect" suggested by Patchett and coauthors in 1984. Zircon-rich sediments deposited next to the South American continent have very low epsilon-Hf values inherited from old zircons. In contrast, in detrital clay-rich sediments deposited a few hundred kilometers farther north, the mineral fraction is devoid of zircon, and the sediments have drastically higher epsilon-Hf values inherited from finer, clay-rich continental material.
In the two DSDP sites, average Hf isotope compositions are very unradiogenic relative to other oceanic sediments worldwide (epsilon-Hf = -14.4 and -7.4), and they define the low-Hf end member of the sedimentary field in Hf-Nd space. Their compositions correspond to end members that, when mixed with mantle, are able to reproduce the pattern of volcanic rocks from the Lesser Antilles. More generally, we find a relationship between Nb/Zr ratios and the vertical deviation of Hf isotope ratios from the Nd-Hf terrestrial array, and we suggest that this relationship can be used as a tool to distinguish sediment input from melting-induced fractionation in the formation of arc lavas.
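The vertical deviation from the Nd-Hf terrestrial array can be quantified as a Delta-epsilon-Hf relative to a published fit of the array; the coefficients used below (epsilon-Hf = 1.36 epsilon-Nd + 2.95, after Vervoort and coworkers) are taken as an assumption, and the sample values are invented for illustration:

```python
# Vertical deviation of Hf isotopes from the Nd-Hf terrestrial array.
# Array fit eps_Hf = 1.36 * eps_Nd + 2.95 is an assumed published
# parameterization; sample values below are illustrative only.

def delta_eps_hf(eps_hf, eps_nd, slope=1.36, intercept=2.95):
    """Deviation of a sample above (+) or below (-) the terrestrial array."""
    return eps_hf - (slope * eps_nd + intercept)

# Illustrative zircon-rich vs clay-rich sediments at the same eps_Nd:
samples = {
    "zircon-rich arenite": (-30.0, -10.0),   # (eps_Hf, eps_Nd)
    "clay-rich mud":       (+2.0,  -10.0),
}
for name, (ehf, end) in samples.items():
    print(f"{name}: delta-eps-Hf = {delta_eps_hf(ehf, end):+.2f}")
```

The sign of the deviation separates the two sediment types even at identical epsilon-Nd, which is the decoupling the "zircon effect" produces.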
Abstract:
The 50 km-long West Valley segment of the northern Juan de Fuca Ridge is a young, extension-dominated spreading centre, with volcanic activity concentrated in its southern half. A suite of basalts dredged from the West Valley floor, the adjacent Heck Seamount chain, and a small near-axis cone here named Southwest Seamount, includes a spectrum of geochemical compositions ranging from highly depleted normal (N-) MORB to enriched (E-) MORB. Heck Seamount lavas have chondrite-normalized La/Sm ≈ 0.3, 87Sr/86Sr = 0.70235 - 0.70242, and 206Pb/204Pb = 18.22 - 18.44, requiring a source which is highly depleted in trace elements both at the time of melt generation and over geologic time. The E-MORB from Southwest Seamount have chondrite-normalized La/Sm ≈ 1.8, 87Sr/86Sr = 0.70245 - 0.70260, and 206Pb/204Pb = 18.73 - 19.15, indicating a more enriched source. Basalts from the West Valley floor have chemical compositions intermediate between these two end-members. As a group, West Valley basalts form a two-component mixing array in element-element and element-isotope plots which is best explained by magma mixing. Evidence for crustal-level magma mixing in some basalts includes mineral-melt chemical and isotopic disequilibrium, but mixing of melts at depth (within the mantle) may also occur. The mantle beneath the northern Juan de Fuca Ridge is modelled as a plum-pudding, with "plums" of enriched, amphibole-bearing peridotite floating in a depleted matrix (DM). Low degrees of melting preferentially melt the "plums", initially removing only the amphibole component and producing alkaline to transitional E-MORB. Higher degrees of melting tap both the "plums" and the depleted matrix to yield N-MORB. The subtly different isotopic compositions of the E-MORBs compared to the N-MORBs require that any enriched component in the upper mantle was derived from a depleted source.
If the enriched component crystallized from fluids with a DM source, the "plums" could evolve to their more evolved isotopic compositions after a period of 1.5-2.0 Ga. Alternatively, the enriched component could have formed recently from fluids with a less depleted source than DM, such as subducted oceanic crust. A third possibility is that enriched material might be dispersed as "plums" throughout the upper mantle, transported from depth by mantle plumes.
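The 1.5-2.0 Ga ingrowth scenario can be sketched with the standard 87Rb-87Sr decay equation; the initial ratio and the Rb/Sr assigned to the "plums" below are assumptions for illustration, not the paper's model parameters:

```python
import math

# Radiogenic ingrowth of 87Sr/86Sr in an enriched "plum":
# (87Sr/86Sr)_t = (87Sr/86Sr)_0 + (87Rb/86Sr) * (exp(lambda * t) - 1)

LAMBDA_RB87 = 1.42e-11  # decay constant of 87Rb, per year

def sr_growth(sr0, rb_sr_87_86, t_years):
    """87Sr/86Sr after t_years of closed-system decay of 87Rb."""
    return sr0 + rb_sr_87_86 * (math.exp(LAMBDA_RB87 * t_years) - 1.0)

sr0 = 0.7020                       # depleted-mantle-like initial, assumed
for rb_sr in (0.05, 0.10):         # assumed 87Rb/86Sr of the "plum"
    for t_ga in (1.5, 2.0):
        sr_now = sr_growth(sr0, rb_sr, t_ga * 1e9)
        print(f"87Rb/86Sr={rb_sr}  t={t_ga} Ga  ->  87Sr/86Sr={sr_now:.5f}")
```

The point of the exercise is only that modest parent/daughter enrichment, given 1.5-2.0 Ga of closed-system aging, shifts 87Sr/86Sr by a few times 10^-3, comparable to the E-MORB vs. N-MORB contrast described above.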
Abstract:
A compact planar array with parasitic elements is studied for use in MIMO systems. Classical compact arrays suffer from high coupling, which degrades both correlation and matching efficiency. A proper matching network mitigates these drawbacks, although its bandwidth is low and it may increase the antenna size. The proposed antenna makes use of parasitic elements to improve both correlation and efficiency. Specific software based on the Method of Moments (MoM) has been developed to analyze radiating structures with several feed points. The array is optimized through a Genetic Algorithm to determine the positions of the parasitic elements so as to fulfill different figures of merit. The proposed design provides the required correlation and matching efficiency for good performance over a significant bandwidth.
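The optimization loop described above can be sketched as a simple genetic algorithm. The fitness function here is a toy placeholder, since the paper evaluates candidate positions with its in-house MoM solver:

```python
import random

# Sketch of a genetic algorithm searching parasitic-element positions.
# The fitness below is a stand-in (it just rewards positions near two
# arbitrary targets); the real cost combines correlation and matching
# efficiency computed by a MoM solver.

random.seed(1)

def fitness(positions):
    """Placeholder cost: minimized when the two parasitic elements sit
    near 0.25 and 0.75 (e.g. in wavelengths along the ground plane)."""
    return abs(positions[0] - 0.25) + abs(positions[1] - 0.75)

def evolve(pop_size=30, generations=60, mutation=0.05):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]      # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # arithmetic crossover plus Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, mutation)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best positions:", [round(x, 3) for x in best])
```

Because the best individual always survives, the best fitness is monotonically non-increasing across generations, which is the property that makes such loops usable even with expensive electromagnetic evaluations.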
Abstract:
A planar antenna is introduced that works as a portable system for X-band satellite communications. This antenna is low-profile and modular, with dimensions of 40 × 40 × 2.5 cm. It is composed of a square array of 144 printed circuit elements that cover a wide bandwidth (14.7%) for transmission and reception along with dual and interchangeable circular polarization. A radiation efficiency above 50% is achieved by a low-loss stripline feeding network. This printed antenna has a 3 dB beamwidth of 5°, a maximum gain of 26 dBi and an axial ratio under 1.9 dB over the entire frequency band. The complete design of the antenna is shown, and the measurements are compared with simulations to reveal very good agreement.
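As a rough consistency check (not part of the paper), the quoted gain and beamwidth follow from standard aperture formulas; the 8 GHz mid-band frequency assumed below is an illustration, not a figure from the abstract:

```python
import math

# Back-of-envelope check of a 40 cm x 40 cm aperture at X-band.

c = 3e8
f = 8.0e9                     # assumed mid-band frequency, Hz
lam = c / f                   # wavelength, m (~3.75 cm)

side = 0.40                   # aperture side, m
area = side * side
eff = 0.5                     # quoted radiation efficiency (>50%)

# Aperture gain: G = eff * 4*pi*A / lambda^2
gain = eff * 4 * math.pi * area / lam ** 2
gain_dbi = 10 * math.log10(gain)

# Half-power beamwidth of a uniform square aperture: ~0.886 * lam / D rad
hpbw_deg = math.degrees(0.886 * lam / side)

print(f"gain ~ {gain_dbi:.1f} dBi, HPBW ~ {hpbw_deg:.1f} deg")
```

Both estimates land in the same ballpark as the quoted 26 dBi and 5° beamwidth, with the remaining gap attributable to feed-network losses and taper.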
Abstract:
This paper presents general systems that can be used to control the phase between elements in an antenna array. Because digital phase shifters have become strategic components, with steps taken by the U.S. Government to restrict their export, their price has increased due to low supply in the market. It is therefore necessary to adopt solutions that still allow the design and construction of antenna arrays. A system based on a group of staggered phase shifters with external switching is shown, which can be extrapolated to a complete array.
Abstract:
Nowadays, more and more base stations are equipped with active conformal antennas. These antenna designs combine phase shift systems with multibeam networks, providing multi-beam capability and interference rejection, which optimize multiple channel systems. GEODA is a conformal adaptive antenna system designed for satellite communications. Operating at 1.7 GHz with circular polarization, it is possible to track and communicate with several satellites at once thanks to its adaptive beam. The antenna is based on a set of similar triangular arrays that are divided into subarrays of three elements called 'cells'. Transmit/Receive (T/R) modules manage beam steering by shifting the phases. A more accurate steering of the GEODA antenna could be achieved by using a multibeam network. Several multibeam network designs based on the Butler network are presented.
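Beam steering by phase shifting, as performed by the T/R modules, reduces to applying a progressive phase across the elements. A sketch for a uniform linear slice of the array, with half-wavelength spacing assumed for illustration:

```python
import math

# Progressive phase for steering a uniform linear array: element n
# needs phase -k * d * n * sin(theta) to steer the beam to angle theta.
# Half-wavelength spacing at 1.7 GHz is an assumption.

c = 3e8
f = 1.7e9
lam = c / f
d = lam / 2.0                      # assumed element spacing
k = 2 * math.pi / lam              # free-space wavenumber

def element_phases(n_elements, theta_deg):
    """Progressive phases (degrees, wrapped to [-180, 180))."""
    theta = math.radians(theta_deg)
    phases = []
    for n in range(n_elements):
        phi = -math.degrees(k * d * n * math.sin(theta))
        phases.append((phi + 180.0) % 360.0 - 180.0)
    return phases

print(element_phases(3, 30.0))     # steer a 3-element cell to 30 degrees
```

With d = lambda/2, steering to 30° requires exactly -90° per element, which is the kind of quantized step a digital phase shifter or a multibeam network supplies.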
Abstract:
A multibeam antenna study based on a Butler network is undertaken in this document. These antenna designs combine phase shift systems with multibeam networks to optimize multiple channel systems. The system works at 1.7 GHz with circular polarization. Specifically, simulation results and measurements of a 3-element triangular subarray are shown. A 45-element triangular array is formed from these subarrays. Using triangular subarrays, side lobes and crossing points are reduced.
Abstract:
A structure with six inputs and three outputs, which can be used to obtain six simultaneous beams with a 3-element triangular array, is presented. The beam-forming network is obtained by combining balanced and unbalanced hybrid couplers, and allows six main beams to be obtained with sixty degrees of separation in the azimuth direction. Simulations and measurements showing the performance of the array, together with other detailed results, are presented.
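A sketch of the six-beam geometry: the excitation phases that point a 3-element triangular cell toward six azimuths 60° apart. The element spacing (half-wavelength triangle side) is an assumption for illustration; this computes ideal phases, not the actual coupler network:

```python
import math

# Excitation phases steering a 3-element triangular cell to azimuths
# 60 degrees apart. Triangle side of 0.5 wavelength is assumed.

lam = 1.0                          # work in wavelengths
k = 2 * math.pi / lam
side = 0.5 * lam
r = side / math.sqrt(3.0)          # circumradius of the triangle

# Element positions at the vertices, centered on the centroid
elems = [(r * math.cos(a), r * math.sin(a))
         for a in (math.radians(90), math.radians(210), math.radians(330))]

def beam_phases(az_deg):
    """Phases (radians) steering the main beam to azimuth az_deg."""
    az = math.radians(az_deg)
    return [-k * (x * math.cos(az) + y * math.sin(az)) for x, y in elems]

for az in range(0, 360, 60):
    print(az, [round(p, 3) for p in beam_phases(az)])
```

Opposite beams need exactly negated phase sets (since the element positions are symmetric about the centroid), which is the symmetry a network of balanced and unbalanced hybrids can exploit.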
Abstract:
A compact array of monopoles with a slotted ground plane is analyzed for use in MIMO systems. Compact arrays usually suffer from high coupling, which significantly degrades MIMO benefits. The main drawbacks can be solved with a matching network, although it tends to provide low bandwidth. The studied design is an array of monopoles with a slot in the ground plane. The slot shape is optimized with a Genetic Algorithm and in-house electromagnetic software based on MoM in order to fulfill the main figures of merit within a significant bandwidth.
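A common figure of merit behind the correlation requirement in such compact MIMO arrays is the envelope correlation estimated from S-parameters (the Blanch-Romeu-Corbella formula, valid for lossless antennas); the sample S values below are made up for illustration:

```python
# Envelope correlation of a two-port MIMO antenna estimated from
# S-parameters (lossless-antenna assumption). Sample values are
# hypothetical, not measured data.

def envelope_correlation(s11, s12, s21, s22):
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# Well-matched, weakly coupled example (hypothetical):
rho = envelope_correlation(0.1 + 0j, 0.2 + 0j, 0.2 + 0j, 0.1 + 0j)
print(f"envelope correlation ~ {rho:.4f}")
```

Low reflection (|S11|, |S22|) and low coupling (|S12|, |S21|) drive the correlation toward zero, which is why reducing coupling via the slotted ground plane directly improves MIMO performance.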
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects to improve the efficiency, the accuracy and the reliability of the analysis results still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider array accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way.
This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy, unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds of Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used for automatically producing verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
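The incremental fixed-point idea in (2) can be sketched as a worklist over a call graph that re-analyzes only methods whose summaries actually change after an edit. The "analysis" below is a toy cost summary (an integer), not COSTA's actual abstract domains, and the call graph is hypothetical:

```python
from collections import defaultdict

# Toy incremental fixed-point: method summaries are integers (own cost
# plus callees' summaries); after a change, only affected methods are
# re-analyzed by propagating along the caller edges.

calls = {                        # hypothetical call graph: m -> callees
    "main": ["a", "b"],
    "a": ["c"],
    "b": ["c"],
    "c": [],
}
local_cost = {"main": 1, "a": 2, "b": 3, "c": 4}

callers = defaultdict(set)       # reverse edges, to find who is affected
for m, cs in calls.items():
    for callee in cs:
        callers[callee].add(m)

def analyze(m, summaries):
    """Toy summary: own cost plus callees' current summaries."""
    return local_cost[m] + sum(summaries.get(c, 0) for c in calls[m])

def fixpoint(changed, summaries):
    """Re-analyze from the changed methods, visiting callers only when
    a summary actually changes (the incremental step)."""
    work = set(changed)
    while work:
        m = work.pop()
        new = analyze(m, summaries)
        if summaries.get(m) != new:
            summaries[m] = new
            work |= callers[m]   # only affected callers re-enter
    return summaries

summaries = fixpoint(set(calls), {})    # initial whole-program analysis
print(summaries)

local_cost["c"] = 10                    # simulated program change in c
summaries = fixpoint({"c"}, summaries)  # incremental re-analysis
print(summaries)
```

In the second run only c and its (transitive) callers are revisited; a method not reachable backwards from the change would never enter the worklist, which is the saving an incremental analyzer aims for.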