952 results for Regular array


Relevance:

20.00%

Publisher:

Abstract:

Introduction: The chemical composition of water determines its physical properties and the character of the processes proceeding in it: freezing temperature, volume of evaporation, density, color, transparency, filtration capacity, etc. The presence of chemical elements in solution confers on waters special physical properties that exert a significant influence on their circulation, creates the conditions necessary for the development and habitation of flora and fauna, and imparts to ocean waters chemical features that radically distinguish them from land waters (Alekin & Liakhin, 1984). Hydrochemical information helps to determine elements of water circulation and convection depth, makes it easier to distinguish water masses, and gives additional knowledge of the climatic variability of ocean conditions. Hydrochemical information is also a necessary part of biological research. Water chemical composition can be the governing characteristic determining the possibility and limits of use of marine objects, both stationary and moving, in sea water. The subject of hydrochemistry is the dynamics of chemical composition, i.e. the processes of its formation and the hydrochemical conditions of water bodies (Alekin & Liakhin, 1984). The hydrochemical processes of the Arctic Ocean are the least known; some information on them can be found only in scattered publications. A generalizing study of hydrochemical conditions in the Arctic Ocean, based on expeditions conducted in 1948-1975, was carried out by Rusanov et al. (1979). The "Atlas of the World Ocean: the Arctic Ocean" contains a special section, "Hydrochemistry" (Gorshkov, 1980). Typical vertical profiles, transects and maps for depths of 0, 100, 300, 500, 1000, 2000 and 3000 m are given in that section for the following parameters: dissolved oxygen, phosphate, silicate, pH and the alkaline-chlorine coefficient. The maps were constructed using data from expeditions conducted in 1948-1975.
The illustrations reflect the main features of the distribution of the hydrochemical elements over a multi-year period and represent a static image of hydrochemical conditions. The distribution of the hydrochemical elements at the ocean surface is given for two seasons, winter and summer; for the other depths, mean annual fields are given. The aim of the present Atlas is to describe the hydrochemical conditions of the Arctic Ocean on the basis of a greater body of hydrochemical information, covering the years 1948-2000, using up-to-date methods of analysis and electronic forms of presentation. The most widespread characteristics determined in water samples were used as hydrochemical indices: dissolved oxygen, phosphate, silicate, pH, total alkalinity, nitrite and nitrate. An important characteristic of the water salt composition, salinity, was considered in the Oceanographic Atlas of the Arctic Ocean (1997, 1998). The presentation of hydrochemical characteristics in this Hydrochemical Atlas is broader than that of the former Atlas (Gorshkov, 1980). Maps of the climatic distribution of the hydrochemical elements were constructed for all standard depths, and the seasonal variability of the hydrochemical parameters is given not only for the surface but also for the underlying standard depths down to and including 400 m. Statistical characteristics of the hydrochemical elements are given for the first time. Detailed accuracy estimates of the initial data and of the map construction are also given in the Atlas. The calculated root-mean-square deviations and the maximum and minimum values of the parameters demonstrate the limits of their variability over the analyzed period of observations. Therefore, the Atlas summarizes not only investigations of chemical statics but also demonstrates some elements of chemical dynamics.
Digital arrays of the hydrochemical elements, computed at the nodes of a regular grid, are a new form of presentation in the Atlas. The same grid and the same boxes were used as in the creation of the joint US-Russian climatic Oceanographic Atlas, which makes it possible to combine the hydrochemical and oceanographic information of these Atlases. The first block of digital arrays contains climatic characteristics calculated directly from observational data; these were not calculated for regions without observations, so the information arrays for those regions have gaps. The other block of climatic information in gridded form was obtained with the help of an objective analysis of the observational data. The objective analysis procedure allowed climatic estimates of the hydrochemical characteristics to be obtained for the whole water area of the Arctic Ocean, including regions not covered by observations. The objective-analysis data can be widely used, in particular in hydrobiological investigations and in modeling the hydrochemical conditions of the Arctic Ocean. The array of initial measurements forms a separate block. It includes all the available materials of hydrochemical observations in the form in which they were presented in the different sources. While keeping in mind that this array contains some amount of corrupted information, the authors of the Atlas considered it necessary to store this information in its primary form. Methods of data quality control may be developed in the future as hydrochemical information accumulates, and attitudes may change toward the data that were rejected according to the procedure accepted in the Atlas.
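The two kinds of gridded product described above can be sketched as follows. This is an illustrative toy, not the Atlas's actual procedure: box means leave NaN gaps where no observations exist, while a simple Gaussian-weighted (Barnes-type) objective analysis yields an estimate at every grid node; grid size and weighting radius are arbitrary choices here.

```python
import numpy as np

def grid_climatology(lon, lat, val, nx=10, ny=10):
    """Mean value per grid box; NaN marks boxes without observations."""
    xi = np.linspace(lon.min(), lon.max(), nx + 1)
    yi = np.linspace(lat.min(), lat.max(), ny + 1)
    grid = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            inbox = ((lon >= xi[i]) & (lon < xi[i + 1]) &
                     (lat >= yi[j]) & (lat < yi[j + 1]))
            if inbox.any():
                grid[j, i] = val[inbox].mean()
    return grid

def objective_analysis(lon, lat, val, gx, gy, radius=2.0):
    """Gaussian-weighted estimate at every grid node, gaps included."""
    est = np.empty(gx.shape)
    for idx in np.ndindex(gx.shape):
        d2 = (lon - gx[idx]) ** 2 + (lat - gy[idx]) ** 2
        w = np.exp(-d2 / (2 * radius ** 2))
        est[idx] = np.sum(w * val) / np.sum(w)
    return est
```

The first function reproduces the "direct observations" block, including its gaps; the second fills the whole domain, as the objective-analysis block does for regions without observations.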
The Hydrochemical Atlas of the Arctic Ocean is the first specialized electronic generalization of hydrochemical observations in the Arctic Ocean and completes the program of joint efforts of Russian and US specialists in preparing a series of atlases for the Arctic. The published Oceanographic Atlas (1997, 1998), the Atlas of Arctic Meteorology and Climate (2000), the Ice Atlas of the Arctic Ocean prepared for publication, and the Hydrochemical Atlas of the Arctic Ocean represent a unified series of fundamental generalizations of empirical knowledge of the nature of the Arctic Ocean at the climatic level. The Hydrochemical Atlas of the Arctic Ocean was elaborated as the result of the joint efforts of the SRC of the RF AARI and IARC. Dr. Ye. Nikiforov was scientific supervisor of the Atlas, Dr. R. Colony was manager on behalf of the USA, and Dr. L. Timokhov on behalf of Russia.


A compact planar array with parasitic elements is studied for use in MIMO systems. Classical compact arrays suffer from high coupling, which degrades both correlation and matching efficiency. A proper matching network mitigates these drawbacks, although its bandwidth is low and it may increase the antenna size. The proposed antenna makes use of parasitic elements to improve both correlation and efficiency. Specific software based on the Method of Moments (MoM) has been developed to analyze radiating structures with several feed points. The array is optimized with a Genetic Algorithm that determines the parasitic element positions so as to fulfill different figures of merit. The proposed design provides the correlation and matching efficiency required for good performance over a significant bandwidth.
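The optimization loop can be sketched as a minimal Genetic Algorithm over element positions. This is a generic sketch: the fitness function below is a toy stand-in (it merely penalizes crowded elements), whereas the paper evaluates correlation and matching efficiency with its in-house MoM solver, which is not reproduced here.

```python
import random

def fitness(positions):
    # Hypothetical objective: keep elements spread out (penalize crowding).
    penalty = sum(1.0 / (abs(a - b) + 1e-3)
                  for i, a in enumerate(positions)
                  for b in positions[i + 1:])
    return -penalty

def genetic_algorithm(n_elems=4, pop_size=30, generations=50,
                      span=10.0, mutation=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, span) for _ in range(n_elems)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_elems)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation:              # mutation
                child[rng.randrange(n_elems)] = rng.uniform(0, span)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Swapping the toy `fitness` for an electromagnetic figure of merit (correlation, matching efficiency) turns this skeleton into the kind of loop the abstract describes.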


A planar antenna is introduced that works as a portable system for X-band satellite communications. The antenna is low-profile and modular, with dimensions of 40 × 40 × 2.5 cm. It is composed of a square array of 144 printed circuit elements that covers a wide bandwidth (14.7%) for transmission and reception, along with dual and interchangeable circular polarization. A radiation efficiency above 50% is achieved by a low-loss stripline feeding network. This printed antenna has a 3 dB beamwidth of 5°, a maximum gain of 26 dBi and an axial ratio under 1.9 dB over the entire frequency band. The complete design of the antenna is shown, and the measurements are compared with simulations, revealing very good agreement.
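A quick back-of-envelope check relates these figures. Assuming (my assumptions, not stated in the abstract) an operating frequency of 8 GHz and the full 40 cm × 40 cm panel as the radiating aperture, the ideal aperture gain G = 4πA/λ², scaled by the ~50% radiation efficiency, lands in the same ballpark as the measured 26 dBi:

```python
import math

c = 3e8               # speed of light, m/s
f = 8e9               # assumed X-band frequency, Hz (not given in the abstract)
wavelength = c / f
area = 0.40 * 0.40    # aperture area, m^2 (assumes the whole panel radiates)
efficiency = 0.5      # reported radiation efficiency (above 50%)

gain = efficiency * 4 * math.pi * area / wavelength**2
gain_dbi = 10 * math.log10(gain)
print(f"Estimated gain: {gain_dbi:.1f} dBi")   # a few dB above 26 dBi
```

The remaining gap between this estimate and the measured 26 dBi is plausibly taken up by feed-network loss and aperture taper, which the simple formula ignores.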


This paper presents general systems that can be used to control the phase between elements in an antenna array. Digital phase shifters have become strategic devices, and the steps taken by the U.S. Government to restrict their export have raised their price owing to the low supply on the market. It is therefore necessary to adopt solutions that still allow the design and construction of antenna arrays. A system based on a group of staggered phase shifters with external switching is shown, which can be extrapolated to a complete array.
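To see why per-element phase control is the crux, here is a generic illustration (not the paper's specific system): the progressive phase each element of a uniform linear array needs in order to steer the beam, and the effect of quantizing it to the discrete states of a digital phase shifter.

```python
import math

def steering_phases(n_elements, spacing_wl, theta_deg):
    """Ideal element phases (degrees) for steering to theta_deg.
    spacing_wl is the element spacing in wavelengths."""
    theta = math.radians(theta_deg)
    return [(-360.0 * spacing_wl * n * math.sin(theta)) % 360.0
            for n in range(n_elements)]

def quantize(phase_deg, bits):
    """Round a phase to the nearest state of a `bits`-bit phase shifter."""
    step = 360.0 / (2 ** bits)
    return (round(phase_deg / step) * step) % 360.0

ideal = steering_phases(n_elements=8, spacing_wl=0.5, theta_deg=30.0)
coarse = [quantize(p, bits=4) for p in ideal]   # 22.5-degree steps
```

Any scheme that reproduces these progressive phases, whether with one digital shifter per element or with staggered shifters selected by external switching, steers the beam equally well.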


Nowadays, more and more base stations are equipped with active conformal antennas. These antenna designs combine phase-shift systems with multibeam networks, providing multi-beam capability and interference rejection, which optimizes multiple-channel systems. GEODA is a conformal adaptive antenna system designed for satellite communications. Operating at 1.7 GHz with circular polarization, it can track and communicate with several satellites at once thanks to its adaptive beam. The antenna is based on a set of similar triangular arrays that are divided into subarrays of three elements called 'cells'. Transmit/Receive (T/R) modules manage beam steering by shifting the phases. More accurate steering of the GEODA antenna could be achieved by using a multibeam network. Several multibeam network designs based on the Butler network are presented.
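For reference, the ideal transfer matrix of a conventional 4×4 Butler matrix can be sketched as below. This is the classic textbook case, shown only to illustrate the principle; the GEODA cells use 3-element triangular subarrays, which call for the balanced/unbalanced hybrid variant discussed in the paper. Feeding input m excites all outputs with equal amplitude and a progressive phase of (2m−3)·45°, producing four distinct simultaneous beams.

```python
import cmath
import math

N = 4
progressions = [-135, -45, 45, 135]   # degrees per element, one per beam port

# B[n][m]: coupling from beam port m to antenna element n.
B = [[cmath.exp(-1j * math.radians(progressions[m]) * n) / math.sqrt(N)
      for m in range(N)] for n in range(N)]

def col_dot(a, b):
    """Inner product of two beam-port columns of B."""
    return sum(B[n][a].conjugate() * B[n][b] for n in range(N))

# A lossless beam-forming network must be unitary: beam columns are
# orthonormal, so the four beams can coexist without cross-talk.
```

The orthogonality of the columns is exactly why a Butler network provides independent simultaneous beams from one aperture.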


A multibeam antenna study based on the Butler network is undertaken in this document. These antenna designs combine phase-shift systems with multibeam networks to optimize multiple-channel systems. The system works at 1.7 GHz with circular polarization. Specifically, simulation results and measurements of a 3-element triangular subarray are shown. A 45-element triangular array will be formed from these subarrays. Using triangular subarrays, side lobes and crossing points are reduced.


A structure with six inputs and three outputs, which can be used to obtain six simultaneous beams with a triangular array of 3 elements, is presented. The beam-forming network is obtained by combining balanced and unbalanced hybrid couplers and provides six main beams with sixty degrees of separation in the azimuth direction. Simulations and measurements showing the performance of the array, together with other detailed results, are presented.


A compact array of monopoles with a slotted ground plane is analyzed for use in MIMO systems. Compact arrays usually suffer from high coupling, which significantly degrades the MIMO benefits. The main drawbacks can be solved with a matching network, although it tends to provide a low bandwidth. The studied design is an array of monopoles with a slot in the ground plane. The slot shape is optimized with a Genetic Algorithm and in-house electromagnetic software based on the Method of Moments (MoM) in order to fulfill the main figures of merit over a significant bandwidth.


Surface-tension-induced convection in a liquid bridge held between two parallel, coaxial, solid disks is considered. The surface tension gradient is produced by a small temperature gradient parallel to the undisturbed surface. The study is performed using a mathematical regular perturbation approach based on a small parameter, ε, which measures the deviation of the imposed temperature field from its mean value. The first-order velocity field is given by a Stokes-type problem (viscous terms are dominant) with relatively simple boundary conditions. The first-order temperature field is that imposed from the end disks on a liquid bridge immersed in a non-conductive fluid; radiative effects are assumed to be negligible. The second-order temperature field, which accounts for convective effects, is split into three components: one due to the bulk motion and the other two due to the distortion of the free surface. The relative importance of these components in terms of the heat transfer to or from the end disks is assessed.
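In outline, the regular-perturbation structure described above can be written as follows (a generic sketch; the symbols and scalings here are illustrative, not necessarily those of the paper):

```latex
% Expansion of the fields in the small parameter \epsilon
\begin{aligned}
  \mathbf{v} &= \epsilon\,\mathbf{v}_1 + \epsilon^{2}\,\mathbf{v}_2 + O(\epsilon^{3}),\\
  T &= T_0 + \epsilon\,T_1 + \epsilon^{2}\,T_2 + O(\epsilon^{3}).
\end{aligned}
```

Collecting powers of ε, the O(ε) momentum balance contains no convective term and is therefore a Stokes-type problem, while the convective term 𝐯·∇T first appears at O(ε²), since it couples ε𝐯₁ with εT₁; this is why convective effects enter only through the second-order temperature field.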


Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, the accuracy and the reliability of the analysis results still needed further investigation. This thesis tackles this need from the following four perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. This thesis presents two extensions to this approach: the first is to consider array accesses in addition to object fields; the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way.
This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. This thesis focuses on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code: COSTA derives upper bounds of Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of this work is to show that the proposed tool cooperation can automatically produce verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems; in this model, objects are the concurrency units, communicating via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. This thesis proposes a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, keeps the cost of the diverse distributed components separate.
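The core of the incremental idea in perspective (2) can be sketched in a few lines. This is a toy illustration, not the thesis's actual multi-domain algorithm: cost summaries are cached per method, and a change to one method only forces recomputation of that method and its transitive callers, found by following the call graph in reverse; everything else is reused.

```python
from collections import deque

def affected_methods(callers, changed):
    """All methods whose summaries must be recomputed after `changed`.
    `callers` maps each method to the list of methods that call it."""
    seen, queue = {changed}, deque([changed])
    while queue:
        m = queue.popleft()
        for caller in callers.get(m, ()):   # reverse call-graph edges
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# Hypothetical call graph: main -> f -> g, and main -> h.
callers = {"g": ["f"], "f": ["main"], "h": ["main"]}
print(affected_methods(callers, "g"))   # {'g', 'f', 'main'}; 'h' is reused
```

The thesis's contribution is to make this reuse sound and fine-grained: cost summaries are stored so that only the components of a cost function that depend on the changed method are rebuilt.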


An array of cylindrical liquid crystal (LC) microlenses has been designed and built, and a study of its electro-optical behavior has been carried out. The lenticular array is novel in terms of the materials employed in its fabrication: nickel has been used as the key material for implementing a high-resistivity electrode. The combination of the high-resistivity electrode with the LC (whose parallel impedance is high) gives rise to a reactive divider that provides a hyperbolic voltage gradient from the center to the edge of each lens. This effect, together with the homogeneous alignment of the LC molecules, allows the generation of a refractive index gradient, so that the device behaves as a GRIN (GRadient INdex) lens. To characterize its operation, its phase profile has been analyzed using interferometric methods and image processing. In addition, different measurements of angular contrast have been performed.
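The hyperbolic voltage gradient can be sketched by modeling the reactive divider as a distributed RC line, which is my reading of the abstract rather than its stated model: a high-resistivity electrode (resistance r per unit length) over the capacitive LC layer (capacitance c per unit length) obeys V″(x) = jωrc·V(x), whose solution when driven from both lens edges is a hyperbolic-cosine profile. All numbers below are illustrative, not those of the fabricated device.

```python
import cmath
import math

def voltage_profile(x, half_width, v_edge, r, c, w):
    """Complex voltage at position x (x = 0 is the lens center)."""
    gamma = cmath.sqrt(1j * w * r * c)   # propagation constant of the RC line
    return v_edge * cmath.cosh(gamma * x) / cmath.cosh(gamma * half_width)

a = 0.5e-3                               # lens half-width, m (illustrative)
xs = [i * a / 10 for i in range(11)]
profile = [abs(voltage_profile(x, a, 5.0, 1e9, 1e-7, 2 * math.pi * 1e3))
           for x in xs]
# |V| grows monotonically from the center to the edge of the lens
```

Since the LC's effective index depends on the local voltage, this monotonic center-to-edge voltage profile translates into the refractive-index gradient that makes the device act as a GRIN lens.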


In large antenna arrays with many antenna elements, the number of measurements required to characterize the array is very demanding in cost and time. This letter presents a new offline calibration process for active antenna arrays that reduces the number of measurements through subarray-level characterization. The letter embraces measurement, characterization, and calibration as a global procedure, assessing the most adequate calibration technique and the computation of the compensation matrices. The procedure has been fully validated with measurements of a 45-element triangular panel array designed for Low Earth Orbit (LEO) satellite tracking, compensating the degradation due to gain and phase imbalances and mutual coupling.
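The role of a compensation matrix can be sketched as follows. This is an illustrative toy, not the letter's actual procedure: if the measured response of a subarray to the ideal excitations is a matrix M that lumps together gain/phase imbalances and mutual coupling, then applying C = M⁻¹ to the excitations restores the intended element weights. The numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3                                   # one 3-element cell/subarray

# Synthetic error model: per-element gain/phase imbalance plus a mild,
# uniform mutual coupling between neighbors.
errors = np.diag(np.exp(1j * rng.uniform(-0.3, 0.3, n))
                 * rng.uniform(0.8, 1.2, n))
coupling = np.eye(n) + 0.1 * (np.ones((n, n)) - np.eye(n))
M = errors @ coupling                   # measured response matrix

C = np.linalg.inv(M)                    # compensation matrix
restored = M @ C                        # array response after compensation
# `restored` is the identity: the intended excitations are recovered.
```

Characterizing at subarray level, as the letter proposes, means such an M (and hence C) is measured once per subarray type rather than for every element of the full 45-element panel.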