985 results for multielectrode array
Abstract:
A CE system featuring an array of 16 contactless conductivity detectors was constructed. The detectors were arranged along a 70 cm section of a capillary with a total length of 100 cm and allow the monitoring of separation processes. As the detectors cannot be accommodated in a conventional commercial instrument, a purpose-built set-up employing a sequential injection manifold had to be used to automate the fluid handling. Conductivity measurements can be considered universal for electrophoresis, and thus any changes in ionic composition can be monitored. The progress of the separation of Na(+) and K(+) is demonstrated. The potential of the system for the study of processes in CZE is shown in two examples. The first demonstrates the differences in the development of peaks originating from a sample plug with a purely aqueous background compared to a plug containing the analyte ions in the buffer. The second example visualizes the opposite migration of cations and anions from a sample plug that had been placed in the middle of the capillary.
Abstract:
We report on a lung-on-chip array that mimics the pulmonary parenchymal environment, including the thin alveolar barrier and the three-dimensional cyclic strain induced by breathing movements. A micro-diaphragm used to stretch the alveolar barrier is inspired by the in-vivo diaphragm, the main muscle responsible for inspiration. The design of this device aims not only to reproduce as closely as possible the in-vivo conditions found in the lung parenchyma, but also to make its handling easy and robust. An innovative concept, based on the reversible bonding of the device, is presented that enables accurate control of the concentration of cells cultured on the membrane by easily accessing both sides of the membrane. The functionality of the alveolar barrier could be restored by co-culturing epithelial and endothelial cells that formed tight monolayers on each side of a thin, porous and stretchable membrane. We showed that cyclic stretch significantly affects the permeability properties of epithelial cell layers. Furthermore, we could also demonstrate that the strain influences the metabolic activity and the cytokine secretion of primary human pulmonary alveolar epithelial cells obtained from patients. These results demonstrate the potential of this device and confirm the importance of the mechanical strain induced by breathing in pulmonary research.
Abstract:
HYPOTHESIS To evaluate the feasibility and the results of insertion of two types of electrode arrays in a robotically assisted surgical approach. BACKGROUND Recent publications demonstrated that robot-assisted surgery allows the implantation of free-fitting electrode arrays through a cochleostomy drilled via a narrow bony tunnel (DCA). We investigated whether electrode arrays from different manufacturers could be used with this approach. METHODS Cone-beam CT imaging was performed on five cadaveric heads after placement of fiducial screws. Relevant anatomical structures were segmented, and the DCA trajectory, including the position of the cochleostomy, was defined to target the center of the scala tympani while reducing the risk of lesions to the facial nerve. Med-El Flex 28 and Cochlear CI422 electrodes were implanted on both sides, and their position was verified by cone-beam CT. Finally, temporal bones were dissected to assess the occurrence of damage to anatomical structures during DCA drilling. RESULTS The cochleostomy site was directed into the scala tympani in 9 of 10 cases. The insertion of electrode arrays was successful in 19 of 20 attempts. No facial nerve damage was observed. The average difference between the planned and the postoperative trajectory was 0.17 ± 0.19 mm at the level of the facial nerve. The average depth of insertion was 305.5 ± 55.2 and 243 ± 32.1 degrees with the Med-El and Cochlear arrays, respectively. CONCLUSIONS Robot-assisted surgery is a reliable tool for cochlear implantation through a cochleostomy. Technical solutions must be developed to improve electrode array insertion using this approach.
Abstract:
Magnetic resonance imaging (MRI) is a non-invasive technique that offers excellent soft tissue contrast for characterizing soft tissue pathologies. Diffusion tensor imaging (DTI) is an MRI technique that has been shown to have the sensitivity to detect subtle pathology that is not evident on conventional MRI.

Rats are commonly used as animal models in characterizing spinal cord pathologies, including spinal cord injury (SCI), cancer, and multiple sclerosis. These pathologies can affect both the thoracic and cervical regions, and complete characterization of these pathologies using MRI requires DTI characterization in both regions. Prior to the application of DTI for investigating pathologic changes in the spinal cord, it is essential to establish DTI metrics in normal animals.

To date, in-vivo DTI studies of rat spinal cord have used implantable coils for high signal-to-noise ratio (SNR) and spin-echo pulse sequences for reduced geometric distortions. Implantable coils have several disadvantages, including: (1) the invasive nature of implantation, (2) loss of SNR due to frequency shift with time in longitudinal studies, and (3) difficulty in imaging the cervical region. While echo planar imaging (EPI) offers much shorter acquisition times compared to spin-echo imaging, EPI is very sensitive to static magnetic field inhomogeneities, and the existing shimming techniques implemented on the MRI scanner do not perform well on the spinal cord because of its geometry.

In this work, an integrated approach has been implemented for in-vivo DTI characterization of rat spinal cord in the thoracic and cervical regions. A three-element phased array coil was developed for improved SNR and extended spatial coverage. A field-map shimming technique was developed to minimize the geometric distortions in EPI images. Using these techniques, EPI-based DWI images were acquired with an optimized diffusion encoding scheme from 6 normal rats, and the DTI-derived metrics were quantified.

The phantom studies indicated higher SNR and smaller bias in the estimated DTI metrics than in previous studies in the cervical region. In-vivo results indicated no statistical difference in the DTI characteristics of either gray matter or white matter between the thoracic and cervical regions.
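The DTI-derived metrics mentioned above are not spelled out in the abstract; the two most common ones, mean diffusivity (MD) and fractional anisotropy (FA), follow directly from the eigenvalues of the fitted diffusion tensor. A minimal sketch using the standard textbook definitions (not code from the thesis):

```python
import math

def dti_metrics(eigenvalues):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the
    three eigenvalues of a diffusion tensor, using the standard
    definitions:
      MD = (l1 + l2 + l3) / 3
      FA = sqrt(3/2) * ||(l1, l2, l3) - MD|| / ||(l1, l2, l3)||
    """
    l1, l2, l3 = eigenvalues
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# Isotropic diffusion (gray-matter-like): FA is 0
md, fa = dti_metrics((0.7e-3, 0.7e-3, 0.7e-3))   # units: mm^2/s
# Strongly anisotropic (white-matter-like) tensor: FA approaches 1
md2, fa2 = dti_metrics((1.7e-3, 0.2e-3, 0.2e-3))
```

FA distinguishes ordered white matter (high FA) from gray matter (low FA), which is why the abstract compares the two tissue types separately.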
Abstract:
A compact planar array with parasitic elements is studied for use in MIMO systems. Classical compact arrays suffer from high coupling, which degrades correlation and matching efficiency. A proper matching network mitigates these drawbacks, although its bandwidth is low and it may increase the antenna size. The proposed antenna makes use of parasitic elements to improve both correlation and efficiency. Specific software based on the Method of Moments (MoM) has been developed to analyze radiating structures with several feed points. The array is optimized with a Genetic Algorithm to determine the positions of the parasitic elements so as to fulfill different figures of merit. The proposed design provides the correlation and matching efficiency required for good performance over a significant bandwidth.
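The Genetic Algorithm loop described above (evaluate candidate element positions with an electromagnetic solver, select, recombine, mutate) can be sketched generically. The cost function here is a toy stand-in for the MoM-derived figures of merit, and all names and parameters are illustrative, not taken from the paper:

```python
import random

def toy_cost(positions):
    """Hypothetical surrogate for the real figures of merit: in the
    actual design the cost would come from a MoM solver; here we just
    penalize element spacings that deviate from 0.5 (half-wavelength)."""
    return sum((positions[i + 1] - positions[i] - 0.5) ** 2
               for i in range(len(positions) - 1))

def genetic_search(n_elems=4, pop=30, gens=60, seed=1):
    """Elitist GA over sorted element positions in [0, 3] wavelengths."""
    rng = random.Random(seed)
    def individual():
        return sorted(rng.uniform(0.0, 3.0) for _ in range(n_elems))
    population = [individual() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=toy_cost)
        survivors = population[: pop // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_elems)         # one-point crossover
            child = sorted(a[:cut] + b[cut:])
            if rng.random() < 0.3:                  # small Gaussian mutation
                i = rng.randrange(n_elems)
                child[i] += rng.gauss(0.0, 0.05)
                child = sorted(child)
            children.append(child)
        population = survivors + children
    return min(population, key=toy_cost)

best = genetic_search()
```

Swapping `toy_cost` for a call into a full-wave solver recovers the structure of the optimization described in the abstract.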
Abstract:
A planar antenna is introduced that works as a portable system for X-band satellite communications. The antenna is low-profile and modular, with dimensions of 40 × 40 × 2.5 cm. It is composed of a square array of 144 printed circuit elements that covers a wide bandwidth (14.7%) for transmission and reception, along with dual and interchangeable circular polarization. A radiation efficiency above 50% is achieved by a low-loss stripline feeding network. This printed antenna has a 3 dB beamwidth of 5°, a maximum gain of 26 dBi and an axial ratio under 1.9 dB over the entire frequency band. The complete design of the antenna is shown, and the measurements are compared with simulations to reveal very good agreement.
Abstract:
This paper presents general schemes that can be used to control the phase between elements in an antenna array. Because digital phase shifters have become strategic components and the U.S. Government has taken steps to restrict their export, their price has increased due to the low supply in the market. It is therefore necessary to adopt alternative solutions for the design and construction of antenna arrays. A system based on a group of staggered phase shifters with external switching is shown, which can be extrapolated to a complete array.
Abstract:
Nowadays, more and more base stations are equipped with active conformal antennas. These antenna designs combine phase shift systems with multibeam networks, providing multi-beam capability and interference rejection, which optimizes multiple channel systems. GEODA is a conformal adaptive antenna system designed for satellite communications. Operating at 1.7 GHz with circular polarization, it can track and communicate with several satellites at once thanks to its adaptive beam. The antenna is based on a set of similar triangular arrays that are divided into subarrays of three elements called `cells'. Transmit/Receive (T/R) modules manage beam steering by shifting the phases. More accurate steering of the GEODA antenna could be achieved by using a multibeam network. Several multibeam network designs based on the Butler network are presented.
Abstract:
A multibeam antenna study based on the Butler network is undertaken in this document. These antenna designs combine phase shift systems with multibeam networks to optimize multiple channel systems. The system will work at 1.7 GHz with circular polarization. Specifically, simulation results and measurements of a 3-element triangular subarray are shown. A 45-element triangular array will be formed from these subarrays. Using triangular subarrays, side lobes and crossing points are reduced.
Abstract:
A structure with six inputs and three outputs that can be used to obtain six simultaneous beams with a triangular array of 3 elements is presented. The beam-forming network is obtained by combining balanced and unbalanced hybrid couplers and yields six main beams separated by sixty degrees in azimuth. Simulations and measurements showing the performance of the array, together with other detailed results, are presented.
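The paper's specific 6-input network is built from balanced and unbalanced hybrids; the general principle behind Butler-style beam-forming can be illustrated with the textbook case, where an ideal N×N Butler matrix acts as a scaled DFT: driving one input port excites all elements with uniform amplitude and a progressive phase, and the beams of different ports are mutually orthogonal. A minimal sketch (standard theory, not the paper's circuit):

```python
import cmath
import math

def butler_matrix(n):
    """Ideal n x n Butler matrix modeled as a scaled DFT: row m gives
    the element excitations when input port m is driven - uniform
    amplitude 1/sqrt(n) with a progressive phase step of -2*pi*m/n."""
    return [[cmath.exp(-2j * math.pi * m * k / n) / math.sqrt(n)
             for k in range(n)] for m in range(n)]

def excite(matrix, port):
    """Element excitations when a single input port is driven."""
    return matrix[port]

B = butler_matrix(4)
# Beams from different ports are orthogonal: the inner product of the
# excitations of ports 0 and 1 vanishes.
dot = sum(a * b.conjugate() for a, b in zip(excite(B, 0), excite(B, 1)))
```

Each progressive phase step points the beam in a different fixed direction, which is how a single network produces several simultaneous beams with a constant angular separation.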
Abstract:
A compact array of monopoles with a slotted ground plane is analyzed for use in MIMO systems. Compact arrays usually suffer from high coupling, which significantly degrades MIMO benefits. The main drawbacks can be solved with a matching network, although it tends to provide a low bandwidth. The studied design is an array of monopoles with a slot in the ground plane. The slot shape is optimized with a Genetic Algorithm and in-house electromagnetic software based on the Method of Moments (MoM) in order to fulfill the main figures of merit within a significant bandwidth.
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing a program as a function of its input data sizes, without actually having to execute the program. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects that improve the efficiency, accuracy and reliability of the analysis results still need to be investigated further. This thesis tackles this need from the following four perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses that keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first is to consider array accesses (in addition to object fields), while the second focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to re-analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change to the program is made, is able to reconstruct the upper bounds of all affected methods incrementally. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on developing a formal framework for verifying the resource guarantees obtained by the analyzers, instead of verifying the tools themselves. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA derives upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used to automatically produce verified resource guarantees. (4) Distribution and concurrency are today mainstream. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units, and they communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
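The incremental analysis described in point (2) rests on a simple dependency idea: a change to a method can only invalidate the cost summaries of that method and of its (transitive) callers. A generic worklist sketch of that propagation step (illustrative only, not the thesis's actual multi-domain algorithm):

```python
def affected_methods(callers, changed):
    """Transitively collect the methods whose analysis results may be
    invalidated by a change: the changed methods plus all of their
    direct and indirect callers.

    `callers` maps a method name to the set of methods that call it;
    `changed` is the set of edited methods."""
    affected = set(changed)
    worklist = list(changed)
    while worklist:
        m = worklist.pop()
        for caller in callers.get(m, ()):
            if caller not in affected:
                affected.add(caller)
                worklist.append(caller)
    return affected

# Hypothetical call graph: main -> helper -> leaf, plus an unrelated
# method `other`. Editing `leaf` invalidates all three methods on the
# chain, while `other` keeps its previously computed cost summary.
callers = {"leaf": {"helper"}, "helper": {"main"}, "other": set()}
result = affected_methods(callers, {"leaf"})
```

Only the methods in `result` would be re-analyzed; everything else reuses its stored cost summary, which is where the efficiency gain over whole-program re-analysis comes from.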
Abstract:
An array of liquid crystal (LC) cylindrical microlenses has been designed and built, and a study of its electro-optical behavior has been carried out. The lenticular array is novel in terms of the materials used in its fabrication. Nickel has been used as the key material for the implementation of a high-resistivity electrode. The combination of the high-resistivity electrode with the LC (whose parallel impedance is high) gives rise to a reactive voltage divider that provides a hyperbolic voltage gradient from the center to the edge of each lens. This effect, together with the homogeneous alignment of the LC molecules, allows the generation of a refractive index gradient, so that the device behaves as a GRIN (GRadient Refraction INdex) lens. To characterize its operation, its phase profile has been analyzed using interferometric methods and image processing. In addition, different angular contrast measurements have also been carried out.
Abstract:
In large antenna arrays with many antenna elements, the number of measurements required to characterize the array is very demanding in cost and time. This letter presents a new offline calibration process for active antenna arrays that reduces the number of measurements through subarray-level characterization. The letter embraces measurements, characterization, and calibration as a global procedure, assessing the most adequate calibration technique and the computation of the compensation matrices. The procedure has been fully validated with measurements of a 45-element triangular panel array designed for Low Earth Orbit (LEO) satellite tracking, compensating for the degradation due to gain and phase imbalances and mutual coupling.
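The gain/phase-imbalance part of such a calibration reduces to a per-element complex correction: once the complex response of each element (or subarray port) has been measured, each compensation weight is the ratio of the desired response to the measured one. A minimal sketch of that diagonal correction (illustrative values; the letter's full procedure, including mutual-coupling compensation, needs a full matrix rather than one weight per element):

```python
import cmath

def compensation_weights(measured, reference=1 + 0j):
    """Per-element compensation for gain and phase imbalance: each
    weight is the complex ratio reference / measured, so that
    measured[k] * weights[k] == reference for every element."""
    return [reference / g for g in measured]

# Hypothetical measured element responses: element 1 has a gain error
# (0.8x) and +0.2 rad phase error, element 2 is 1.2x with -0.4 rad.
measured = [1.0 + 0j,
            0.8 * cmath.exp(1j * 0.2),
            1.2 * cmath.exp(-1j * 0.4)]
weights = compensation_weights(measured)
corrected = [g * w for g, w in zip(measured, weights)]  # all equal to 1
```

After applying the weights, all elements present the same amplitude and phase, which is the precondition for the beams of the array to form as designed.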