165 results for Agilent


Relevance:

10.00%

Publisher:

Abstract:

Summary: The classical design of microwave circuits relies fundamentally on the use of s-parameters, owing to their ability to successfully characterize the behavior of any linear circuit. The relationship between s-parameters, current measurement systems, and linear simulation tools has driven their success and extensive use in both the design and the characterization of microwave circuits and subsystems. However, despite the wide acceptance of s-parameters in the microwave community, the main drawback of this formulation is its inability to predict the behavior of real non-linear systems. One of the main challenges facing microwave designers today is the development of an analogous framework that integrates non-linear modeling, large-signal measurement systems, and non-linear simulation environments, with the aim of extending the capabilities of s-parameters to large-signal operating regimes and thus obtaining an infrastructure for the reliable and efficient characterization and design of non-linear circuits. Following this philosophy, several proposals have been developed in recent years, such as the X-parameters of Agilent Technologies and the Cardiff model, which seek to provide this common platform in the large-signal domain. Within this context, one objective of the present Thesis is to analyze the feasibility of using X-parameters in the design and simulation of oscillators for microwave transceivers. Another relevant aspect of the analysis and design of linear microwave circuits is the availability of simple analytical methods, based on the transistor s-parameters, that directly and quickly yield the load and source impedances needed to meet the required design specifications for gain, output power, efficiency, or input and output matching, as well as the analytical determination of key design parameters such as the stability factor or the power-gain contours. The development of an analytical design formulation based on X-parameters, similar to the one available in small signal, would therefore enable its use in non-linear applications and poses a new challenge that is addressed in this work. The main objective of the present Thesis is thus the elaboration of an analytical methodology, based on the use of X-parameters, for the design of non-linear circuits, playing a role similar to that of s-parameters in the design of linear microwave circuits. Such analytical design methods would significantly improve the large-signal design procedures currently available and considerably reduce design time, yielding much more efficient techniques.

Abstract: In the linear world, classical microwave circuit design relies on s-parameters, owing to their ability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation and analysis tools has facilitated their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limitation in predicting the behavior of real non-linear systems. Nowadays, the challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware, and non-linear simulation environments in order to extend s-parameter capabilities to the non-linear regime and thus provide the infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis aims to demonstrate the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context. Furthermore, the classical analysis and design of linear microwave transistor-based circuits rests on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions and analytically determine fundamental parameters such as the stability factor, the power-gain contours, or the input/output match. Hence, the development of similar analytical design tools that extend s-parameter capabilities from small-signal design to non-linear applications is a new challenge faced in the present work. Therefore, the development of an analytical design framework, based on load-independent X-parameters, constitutes the core of this Thesis. These analytical non-linear design approaches would significantly improve current large-signal design processes and dramatically decrease the required design time, thus yielding more efficient approaches.
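The abstracts do not reproduce the formulation itself; for orientation, a commonly published form of the load-independent X-parameter model (the polyharmonic-distortion linearization around a large-signal drive |A11|) is sketched below, with notation taken from the open literature rather than from the Thesis:

```latex
% Scattered wave B_{pm} at port p, harmonic m, for incident waves A_{qn};
% P = e^{j\varphi(A_{11})} tracks the phase of the large-signal drive A_{11},
% and the sums run over all (q,n) other than the drive itself.
B_{pm} = X^{FB}_{pm}\!\left(|A_{11}|\right) P^{m}
       + \sum_{(q,n)\neq(1,1)} X^{S}_{pm,qn}\!\left(|A_{11}|\right) P^{\,m-n} A_{qn}
       + \sum_{(q,n)\neq(1,1)} X^{T}_{pm,qn}\!\left(|A_{11}|\right) P^{\,m+n} A^{*}_{qn}
```

In the small-signal limit the X^S terms reduce to ordinary s-parameters and the X^T terms vanish, which is the sense in which the formulation extends s-parameters to large signal.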

Relevance:

10.00%

Publisher:

Abstract:

Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western World. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection, and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependencies and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing-value imputation, probe quality checks, data smoothing, and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes, two of which are directly involved in cancer progression and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values in the range of 0.955 to 0.997).
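The classification pipeline summarized above (Bayesian classifiers, bootstrap-based variable selection, AUC evaluation) can be sketched generically. A minimal illustration in Python, assuming a Gaussian naive Bayes classifier as a stand-in for the Bayesian classifiers actually induced, and a random placeholder matrix in place of the real 8,104-probe data:

```python
# Minimal sketch: supervised classification of expression profiles with
# bootstrap-based probe selection and AUC evaluation. X and y are random
# placeholders shaped like the study (31 tumoral + 33 non-tumoral samples,
# 8,104 probes); GaussianNB stands in for the Bayesian classifiers used.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8104))
y = np.array([1] * 31 + [0] * 33)

# Bootstrap resampling: count how often each probe ranks in the top k.
k, n_boot = 50, 100
counts = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.choice(len(y), size=len(y), replace=True)
    if len(np.unique(y[idx])) < 2:      # resample must contain both classes
        continue
    scores, _ = f_classif(X[idx], y[idx])
    counts[np.argsort(scores)[-k:]] += 1

panel = np.argsort(counts)[-k:]         # consensus "biomarker panel"
proba = cross_val_predict(GaussianNB(), X[:, panel], y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))
```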

Relevance:

10.00%

Publisher:

Abstract:

Scope: Today, about 2-8% of the population of Western countries exhibits some type of food allergy, whose impact ranges from localized symptoms confined to the oral mucosa to severe anaphylactic reactions. Consumed worldwide, lettuce is a vegetable of the Compositae family that can elicit allergic reactions. To date, however, only one lipid transfer protein has been described in allergic reactions to lettuce. The aim of this study was to identify potential new allergens involved in lettuce allergy. Methods and results: Sera were obtained from 42 Spanish lettuce-allergic patients recruited at the outpatient clinic. IgE-binding proteins were detected by SDS-PAGE and immunoblotting. Molecular characterization of IgE-binding bands was performed by MS. Thaumatin was purified using the Agilent 3100 OFFGEL system. The IgE-binding bands recognized by the sera of more than 50% of patients were identified as a lipid transfer protein (9 kDa), a thaumatin-like protein (26 kDa), and an aspartyl protease (35 and 45 kDa). ELISA inhibition studies were performed to confirm the IgE reactivity of the purified allergen. Conclusion: Two new major lettuce allergens, a thaumatin-like protein and an aspartyl protease, have been identified and characterized. These allergens may be used to improve both the diagnosis and the treatment of lettuce-allergic patients.

Relevance:

10.00%

Publisher:

Abstract:

The Electronic and Microelectronic Design Group (GDEM) of the Universidad Politécnica de Madrid works, among other things, on the study and improvement of power consumption in embedded systems. It is in this group, and on this topic, that the project presented here took shape. According to an article in the online magazine Revista de Electrónica Embebida, an embedded system is one "controlled by a microprocessor which, thanks to the programming it incorporates or that must be incorporated into it, performs a specific function for which it has been designed, integrating internally most of the elements needed to perform that function". The reason for studying this topic is the ever-greater presence of embedded systems in our daily lives, driven by the tendency to endow with "intelligence" anything that can make our lives a little easier. Such systems can be found in factories, citizen-service offices, home security systems, watches, mobile phones, washing machines, ovens, vacuum cleaners, and virtually any device imaginable. Despite their great advantages, major drawbacks remain. The biggest problem today is the autonomy of the system itself, since these devices are often battery-powered to aid portability. Efforts are therefore being made to give such systems energy-saving and decision-making capabilities that could help double battery life. A clear example is today's smartphones: almost indispensable devices whose autonomy may be only one day. This is impractical for users when travelling, working, or in other situations of heavy use without access to the mains. Hence the need to investigate, without improving the system's hardware, a way to improve this situation. This project works along that line, creating an automatic measurement system that generates the currents used as inputs to verify the acquisition system which, together with the BeagleBoard, enables decision-making regarding energy consumption. To build this system, we use various tools available in the GDEM laboratory, chiefly the Agilent power supply and the BeagleBoard as the main working tools. The main objective is the simulation of signals that, after a conversion and processing stage, represent the consumption of each of the parts that can make up a generic embedded system. The system therefore acts as a test bench that simulates this consumption so that the system's microprocessor can make decisions based on it.

ABSTRACT. The Electronic and Microelectronic Design Group (GDEM) of the Universidad Politécnica de Madrid is engaged, among other things, in improving the power consumption of embedded systems. It is in this group, and on this subject, that the project presented here took shape. According to an article in the online magazine Revista de Electrónica Embebida, an embedded system is "one controlled by a microprocessor which, thanks to the programming it includes, carries out the specific function it has been designed for, integrating most of the elements necessary for performing that function". The reason for studying this subject is the growing presence of embedded systems in our daily life, due to the tendency to provide "intelligence" to everything that can make our lives easier. We can find this kind of system in factories, offices, security systems, watches, mobile phones, washing machines, ovens, vacuum cleaners and, in short, in any kind of machine we can think of. Despite their large advantages, some drawbacks remain. Nowadays, the most important problem is the autonomy of systems that have to be supplied by batteries, which is what makes them portable. This project therefore pursues an energy-saving capability for the system, together with the ability to take decisions, in order to double battery autonomy. Smartphones are a clear example: they are very successful products, but their autonomy is just one day. This is not practical at all for users who have to travel, work, or do any activity that involves heavy use of the phone without a socket nearby. That is the reason for investigating a way to improve this situation. This project works along this line, creating an automatic system that generates the currents for verifying the acquisition system that, with the BeagleBoard, will help in taking decisions regarding energy consumption. To carry out this system, we need different tools that can be found in the laboratory of the group mentioned above, such as the Agilent power supply and the BeagleBoard as the main working tools. The main goal is the simulation of signals that, after a conversion process, represent the consumption of each of the parts of a generic embedded system. The system will therefore be a test bench that simulates the consumption and sends it to the processor so that the microprocessor system may take decisions.
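As a hypothetical sketch of the decision stage such a test bench feeds, the snippet below converts raw acquisition counts into currents and makes a simple power-management choice. Every hardware parameter (ADC reference, resolution, sense resistor, threshold) is an illustrative assumption, and read_adc() merely stands in for the real acquisition board:

```python
# Hypothetical sketch of the decision stage: ADC counts -> shunt voltage ->
# current, then a simple power-management choice. read_adc() stands in for
# the real acquisition board; all constants are illustrative assumptions.
import random

VREF_MV = 1800.0            # assumed 1.8 V ADC reference
ADC_BITS = 12               # assumed converter resolution
SHUNT_OHMS = 10.0           # assumed current-sense resistor
SLEEP_THRESHOLD_MA = 50.0   # assumed decision threshold

def read_adc() -> int:
    """Stand-in for the real acquisition channel: one raw ADC count."""
    return random.randint(0, 2**ADC_BITS - 1)

def counts_to_ma(counts: int) -> float:
    """Count -> shunt drop in mV -> current in mA (I = V / R)."""
    v_mv = counts * VREF_MV / (2**ADC_BITS - 1)
    return v_mv / SHUNT_OHMS

for _ in range(5):                       # a few measurement cycles
    window = [counts_to_ma(read_adc()) for _ in range(16)]
    avg_ma = sum(window) / len(window)
    if avg_ma < SLEEP_THRESHOLD_MA:      # decision the microprocessor could take
        print(f"{avg_ma:6.1f} mA -> low load, a sleep state could be requested")
    else:
        print(f"{avg_ma:6.1f} mA -> normal operation")
```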

Relevance:

10.00%

Publisher:

Abstract:

Hydrogen isotope values (δD) of sedimentary terrestrial leaf waxes such as n-alkanes or n-acids have been used to map and understand past changes in rainfall amount in the tropics, because the δD of precipitation is commonly assumed to be the first-order control on leaf wax δD. Plant functional types and their photosynthetic pathways can also affect leaf wax δD, but these biological effects are rarely taken into account in paleo studies relying on this rainfall proxy. To investigate how biological effects may influence δD values, we here present a 37,000-year record of δD and stable carbon isotopes (δ13C) measured on four n-alkanes (n-C27, n-C29, n-C31, n-C33) from a marine sediment core collected off the Zambezi River mouth. Our paleo δ13C records suggest that the individual n-alkanes had different C3/C4 proportional contributions. n-C29 was mostly derived from vegetation dominated by C3 dicots (trees, shrubs, and forbs) throughout the entire record. In contrast, the longer-chain n-C33 and n-C31 were mostly contributed by C4 grasses during the Glacial period but shifted to a mixture of C4 grasses and C3 dicots during the Holocene. Strong correlations between δD and δ13C values of n-C33 (correlation coefficient R² = 0.75, n = 58) and n-C31 (R² = 0.48, n = 58) suggest that their δD values were strongly influenced by changes in the relative contributions of C3/C4 plant types, in contrast to n-C29 (R² = 0.07, n = 58). Within regions with variable C3/C4 input, we conclude that δD values of n-C29 are the most reliable and unbiased indicator of past changes in rainfall, and that δD and δ13C values of n-C31 and n-C33 are sensitive to C3/C4 vegetation changes. Our results demonstrate that a robust interpretation of palaeohydrological data using n-alkane δD requires additional knowledge of regional changes in the vegetation from which the n-alkanes are synthesized, and that the combination of δD and δ13C values of multiple n-alkanes can help to differentiate biological effects from those related to the hydrological cycle.
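The vegetation-effect test described above comes down to computing R² between paired δD and δ13C series for each homologue. A minimal sketch with synthetic placeholder values (n = 58, as in the record; the numbers are not the study's data):

```python
# Sketch of the deltaD-vs-delta13C screening: a high R^2 (like n-C33) flags a
# strong C3/C4 vegetation imprint on deltaD; a low R^2 (like n-C29) does not.
# The series below are synthetic placeholders with n = 58, not the study data.
import numpy as np

rng = np.random.default_rng(1)
d13c = rng.normal(-25.0, 3.0, size=58)               # per-sample delta13C
dd_c33 = 4.0 * d13c + rng.normal(0.0, 5.0, size=58)  # correlated, n-C33-like
dd_c29 = rng.normal(-150.0, 8.0, size=58)            # uncorrelated, n-C29-like

for name, dd in [("n-C33-like", dd_c33), ("n-C29-like", dd_c29)]:
    r = np.corrcoef(d13c, dd)[0, 1]
    print(f"{name}: R^2 = {r**2:.2f}")
```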

Relevance:

10.00%

Publisher:

Abstract:

Purpose: Meibomian-derived lipid secretions are well characterised, but their subsequent fate in the ocular environment is less well understood. Phospholipids are thought to facilitate the interface between the aqueous and lipid layers of the tear film and to be involved in ocular lubrication processes. We have extended our previous studies on phospholipid levels in the tear film to encompass the fate of polar and non-polar lipids in progressive accumulation and aging processes on both conventional and silicone-modified hydrogel lenses. This is an important aspect of the developing understanding of the role of lipids in the clinical performance of silicone hydrogels. Method: Several techniques were used to identify lipids in the tear film. Mass-spectrometric methods included Agilent 1100-based liquid chromatography coupled to mass spectrometry (LC-MS) and Perkin Elmer gas chromatography-mass spectrometry (GC-MS). Thin layer chromatography (TLC) was used to separate lipids on the basis of increasing solvent polarity. Routine assay of lipid extractions from patient-worn lenses was carried out using a Hewlett Packard 1090 liquid chromatograph coupled to both UV and Agilent 1100 fluorescence detection. A range of histological, optical, and electron microscope techniques was used in deposit analysis. Results: Progressive lipid uptake was assessed in various ways, including composition changes with wear time, differential lipid penetration into the lens matrix and, particularly, the extent to which lipids become unextractable as a function of wear time. Solvent-based separation and HPLC gave consistent results, indicating that the polarity of lipid classes decreased as follows: phospholipids/fatty acids > triglycerides > cholesterol/cholesteryl esters. Tear lipids were found to show autofluorescence, which underpinned the value of fluorescence microscopy and of fluorescence detection coupled with HPLC separation. The most fluorescent lipids were found to be cholesteryl esters; histological techniques coupled with fluorescence microscopy indicated that the white spots ("jelly bumps") formed on silicone hydrogel lenses contain a high proportion of cholesteryl esters. Lipid profiles averaged over 30 symptomatic and 30 asymptomatic contact lens wearers were compiled. Peak classes were split into cholesterol (C), cholesteryl esters (CE), glycerides (G), and polar fatty acids/phospholipids (PL). The asymptomatic/symptomatic lipid ratio was 0.6 ± 0.1 for all classes except one: the cholesterol ratio was 0.2 ± 0.05. Significantly, the PL ratio was no different from that of any other class except cholesterol. Chromatography indicated that lipid polarity decreased with depth of penetration and that lipid extractability decreased with wear time. Conclusions: Meibomian lipid composition differs from that in the tear film and on worn lenses. Although the same broad lipid classes were obtained by extraction from all lenses and all patients studied, quantities vary with wear and material. Lipid extractability diminishes with wear time regardless of the cleaning regime used. Dry eye symptoms in contact lens wear are frequently linked to lipid layer behaviour but seem to relate more to total lipid than to specific composition. Understanding the detail of lipid-related processes is an important element of improving the clinical performance of materials and care solutions.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: The use of PHMB as a disinfectant in contact lens multipurpose solutions has been at the centre of much debate in recent times, particularly in relation to the issue of solution-induced corneal staining. Clinical studies have been carried out which suggest different effects with individual contact lens materials used in combination with specific PHMB-containing care regimes. There does not appear to be, however, a reliable analytical technique that would detect and quantify with any degree of accuracy the specific levels of PHMB that are taken up and released from individual solutions by the various contact lens materials. Methods: PHMB is a mixture of positively charged polymer units of varying molecular weight with a maximum absorbance wavelength of 236 nm. On the basis of these properties, a range of assays including capillary electrophoresis, HPLC, a nickel-nioxime colorimetric technique, mass spectrometry, UV spectroscopy, and ion chromatography were assessed, paying particular attention to their constraints and detection levels. Particular interest was focused on the relative advantage of contactless conductivity compared to UV and mass spectrometry detection in capillary electrophoresis (CE). This study provides an overview of the comparative performance of these techniques. Results: The UV absorbance of PHMB solutions ranging from 0.0625 to 50 ppm was measured at 236 nm. Within this range the calibration curve appears to be linear; however, absorbance values below 1 ppm (0.0001%) were extremely difficult to reproduce. The concentration of PHMB in solutions is in the range of 0.0002-0.00005%, and our investigations suggest that levels of PHMB below 0.0001% (levels encountered in uptake and release studies) cannot be accurately estimated, in particular when analysing complex lens care solutions, which can contain competitively absorbing, and thus interfering, species. The use of separative methodologies such as CE with UV detection alone is similarly limited. Alternative techniques, including contactless conductivity detection, offer greater discrimination in complex solutions together with the opportunity for dual-channel detection. Preliminary results achieved by TraceDec contactless conductivity detection (gain 150%, offset 150) in conjunction with the Agilent capillary electrophoresis system, using a bare fused-silica capillary (extended light path, 50 µm i.d., total length 64.5 cm, effective length 56 cm) and a cationic buffer at pH 3.2, show great potential, with reproducible PHMB split peaks. Conclusions: PHMB-based solutions are commonly associated with the potential to invoke corneal staining in combination with certain contact lens materials. However, the terminology 'PHMB-based solution' is used primarily because PHMB itself has yet to be adequately implicated as the causative agent of the staining and compromised corneal cell integrity. The lack of well-characterised, adequately sensitive assays, coupled with the range of additional components that characterise individual care solutions, poses a major barrier to the investigation of PHMB interactions in the lens-wearing eye.
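The reproducibility limit discussed above can be illustrated with a simple calibration-curve sketch over the stated 0.0625-50 ppm range; the absorbance values below are simulated, not measured:

```python
# Sketch of the UV (236 nm) calibration described above: fit a Beer-Lambert
# line over the 0.0625-50 ppm standards, then back-calculate an unknown.
# The slope and noise level are assumptions; no measured data are used.
import numpy as np

rng = np.random.default_rng(2)
conc_ppm = np.array([0.0625, 0.125, 0.25, 0.5, 1, 2, 5, 10, 25, 50])
absorbance = 0.021 * conc_ppm + rng.normal(0.0, 0.002, conc_ppm.size)

slope, intercept = np.polyfit(conc_ppm, absorbance, 1)
print(f"calibration: A = {slope:.4f} * C {intercept:+.4f}")

# Near the noise floor (< 1 ppm, i.e. < 0.0001%) the back-calculated value
# is dominated by baseline noise - the reproducibility problem noted above.
unknown_A = 0.0004
print("estimated concentration:", (unknown_A - intercept) / slope, "ppm")
```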

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this work is to establish the application of a fully automated microfluidic chip-based protein separation assay in tear analysis. It is rapid, requires small sample volumes, and is vastly superior to, and more convenient than, comparable conventional gel electrophoresis assays. The protein sizing chip technology was applied to three specific fields of analysis. Firstly, tear samples were collected regularly from subjects to establish the baseline effects of tear stimulation, tear state, and patient health. Secondly, tear samples were taken from lens-wearing eyes, and thirdly, the use of microfluidic technology was assessed as a means to investigate a novel area of tear analysis, which we have termed the 'tear envelope'. Utilising the Agilent 2100 Bioanalyzer in combination with the Protein 200 Plus LabChip kit, these studies investigated tear proteins in the range of 14-200 kDa. Particular attention was paid to the relative concentrations of lysozyme, tear lipocalin, secretory IgA (sIgA), IgG, and lactoferrin, together with the overall tear electropherogram 'fingerprint'. Furthermore, whilst lens-tear interaction studies are generally thought of as investigations into the effects of tear components on the contact lens material, i.e. deposition studies, this report addresses the reverse phenomenon: the effect of the lens, and particularly the newly inserted lens, on tear fluid composition and dynamics. The use of microfluidic technology provides a significant advance in tear studies and should prove invaluable in tear diagnostics and contact lens performance analysis.

Relevance:

10.00%

Publisher:

Abstract:

Recent advances in InGaN/GaN growth technology have positioned InGaN-based white LEDs to move into the area of general, everyday lighting. Monolithic white LEDs with multiple QWs were previously demonstrated by Damilano et al. [1] in 2001. However, several challenges have yet to be overcome for InGaN-based monolithic white LEDs to establish themselves as an alternative to other day-to-day lighting sources [2,3]. Alongside the key characteristics of luminous efficacy and EQE, the colour rendering index (CRI) and correlated colour temperature (CCT) are important characteristics for these structures [2,4]. The investigated monolithic white structures were similar to those described in [5] and contained blue and green InGaN multiple QWs, without a short-period superlattice between them, emitting at 440 nm and 530 nm, respectively. The electroluminescence (EL) measurements were performed in CW and pulsed-current modes. An integrating sphere (Labsphere CDS 600 spectrometer) and a pulse generator (Agilent 8114A) were used to perform the measurements. The CCT and the green/blue radiant flux ratio were investigated at operating currents extended from 100 mA to 2 A, using current pulses from 100 ns to 100 µs with a duty cycle varying from 1% to 95%. A strong dependence of the CCT on the duty cycle was demonstrated, with the CCT decreasing by more than a factor of three at high duty cycles (shown at a 300 mA pulse operating current) (Fig. 1). Pulse width variation appears to have a negligible effect on the CCT (Fig. 1). To account for Joule heating, duty cycles above 1% were considered an overheated mode. For the 1% duty cycle it was demonstrated that the CCT was tunable over a threefold range by modulating the input current and pulse width (Fig. 2). It was also demonstrated that the luminous flux can be kept independent of pulse width variation for a constant pulse current (Fig. 3).
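The CCT figures themselves come from the Labsphere spectrometer output; purely for orientation, McCamy's standard approximation shows how a CCT value follows from a CIE 1931 (x, y) chromaticity pair (the example chromaticities are illustrative, not measured):

```python
# McCamy's standard approximation for CCT from CIE 1931 (x, y) chromaticity.
# Shown only to illustrate how a CCT figure is derived; the study used the
# Labsphere CDS 600 output directly. Example chromaticities are illustrative.
def cct_mccamy(x: float, y: float) -> float:
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(round(cct_mccamy(0.44, 0.40)), "K")  # warm white, roughly 2900 K
print(round(cct_mccamy(0.31, 0.32)), "K")  # cool white, roughly 6700 K
```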

Relevance:

10.00%

Publisher:

Abstract:

We investigated controls on the water chemistry of a South Ecuadorian cloud forest catchment which is partly pristine and partly converted to extensive pasture. From April 2007 to May 2008, water samples were taken weekly to biweekly at nine different subcatchments and screened for differences in electrical conductivity, pH, anion composition, and element composition. A principal component analysis was conducted to reduce the dimensionality of the data set and define the major factors explaining variation in the data. Three main factors were isolated by a subset of 10 elements (Ca2+, Ce, Gd, K+, Mg2+, Na+, Nd, Rb, Sr, Y), explaining around 90% of the data variation. Land use was the major factor controlling and changing the water chemistry of the subcatchments. A second factor was associated with the concentration of rare earth elements in water, presumably highlighting other anthropogenic influences such as gravel excavation or road construction. Around 12% of the variation was explained by the third component, which was defined by the occurrence of Rb and K and represents the influence of vegetation dynamics on element accumulation and wash-out. Comparison of baseflow and fastflow concentrations led to the assumption that a significant portion of storm flow is contributed by soil water from around 30 cm depth, as revealed by increased rare earth element concentrations in fastflow samples. Our findings demonstrate the utility of multi-tracer principal component analysis for studying tropical headwater streams and emphasize the need for effective land management in cloud forest catchments.
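A minimal sketch of the multi-tracer PCA workflow, assuming standardized concentrations and scikit-learn; the data matrix is a random placeholder rather than the catchment data set:

```python
# Minimal sketch of the multi-tracer PCA: standardize the element
# concentrations, extract three components, and inspect loadings. The data
# matrix is a random placeholder, not the catchment measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Ca", "Ce", "Gd", "K", "Mg", "Na", "Nd", "Rb", "Sr", "Y"]
X = np.random.default_rng(3).lognormal(size=(200, len(elements)))

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratios:", pca.explained_variance_ratio_)

# The loadings show which elements drive each factor, e.g. a Rb/K-dominated
# third component as reported for vegetation dynamics.
for i, load in enumerate(pca.components_, start=1):
    top = [elements[j] for j in np.argsort(np.abs(load))[::-1][:3]]
    print(f"PC{i} dominated by: {top}")
```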

Relevance:

10.00%

Publisher:

Abstract:

The fractal self-similarity property is studied to develop frequency selective surfaces (FSSs) with several rejection bands. In particular, Gosper fractal curves are used to define the shapes of the FSS elements. Due to the difficulty of fabricating the fine details of the FSS elements, the analysis is developed for elements with up to three fractal levels. The simulation was carried out using Ansoft Designer software. To validate the results, several FSS prototypes with fractal elements were fabricated. In the fabrication process, the fractal elements were designed using computer-aided design (CAD) tools. The prototypes were measured using a network analyzer (model N3250A, Agilent Technologies). MATLAB was used to compare measured and simulated results. The use of fractal elements in the FSS structures showed that high fractal levels can reduce the size of the elements while decreasing the bandwidth. We also investigated the effect produced by cascading FSS structures. The considered cascaded structures are composed of two FSSs separated by a dielectric layer, whose thickness is varied to determine the effect produced on the bandwidth of the coupled geometry. In particular, two FSS structures were coupled through dielectric layers of air and of fiberglass. For comparison of results, we designed, fabricated, and measured several FSS prototypes in isolated and coupled structures. Agreement was observed between simulated and measured results. It was also observed that the use of cascaded FSS structures increases the FSS bandwidths and, in particular cases, the number of resonant frequencies in the considered frequency range. In future work, we will investigate the effects of using different types of fractal elements in isolated, multilayer, and coupled FSS structures for applications in planar filters, high-gain microstrip antennas, and microwave absorbers.
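The Gosper element outlines can be generated from the curve's standard L-system before being drawn with CAD tools; a small sketch of that generation step (turtle interpretation with 60° turns; the CAD export itself is omitted):

```python
# Sketch: generate a Gosper ("flowsnake") element outline from the curve's
# standard L-system (A -> A-B--B+A++AA+B-, B -> +A-BB--B-A++A+B, 60° turns).
# The resulting vertex list is what would be handed to the CAD layout step.
import cmath
import math

RULES = {"A": "A-B--B+A++AA+B-", "B": "+A-BB--B-A++A+B"}

def gosper(level: int) -> str:
    s = "A"
    for _ in range(level):
        s = "".join(RULES.get(c, c) for c in s)
    return s

def to_points(path: str, step: float = 1.0):
    """Turtle interpretation: A/B draw forward, +/- turn by 60 degrees."""
    pos, heading = 0 + 0j, 1 + 0j
    turn = cmath.exp(1j * math.pi / 3)
    pts = [(0.0, 0.0)]
    for c in path:
        if c in "AB":
            pos += heading * step
            pts.append((pos.real, pos.imag))
        elif c == "+":
            heading *= turn
        elif c == "-":
            heading /= turn
    return pts

pts = to_points(gosper(2))   # level 2; the prototypes used up to level 3
print(len(pts), "vertices")
```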

Relevance:

10.00%

Publisher:

Abstract:

This work investigates the behavior of planar microstrip structures with fractal and helical elements. In particular, the conventional elements of frequency selective surfaces (FSSs) were replaced with fractal and helical shapes. The dielectric substrate used was fiberglass (FR-4), with a thickness of 1.5 mm, a relative permittivity of 4.4, and a loss tangent of 0.02. For the fractal FSSs, Dürer's fractal geometry was adopted; for the others, a helical geometry. To make the measurements, we used two horn antennas in direct line of sight, connected by coaxial cable to a vector network analyzer. Selected prototypes were built and measured. Based on preliminary results, practical applications were sought for structures obtained by cascading. The FSSs with Dürer fractal elements showed the multiband behavior provided by the fractal geometry, while the bandwidth narrowed as the fractal iteration level increased, making the frequency response more selective, with a higher quality factor. A parametric analysis examined the variation of the air layer between cascaded structures. Cascaded fractal-element structures presented tri-band behavior for certain air-layer thicknesses and find applications in the licensed 2.5 GHz (2.3-2.7 GHz) and 3.5 GHz (3.3-3.8 GHz) bands. For the FSSs with helical elements, six structures were considered, namely H0, H1, H2, H3, H4, and H5. Their electromagnetic behavior was analyzed both separately and in cascade. Preliminary results, from the separate analysis of the structures as well as the cascades, show that the bandwidth increases as the thickness of the air layer increases. The cascaded helical-element structures find applications in the X band (8.0-12.0 GHz) and the unlicensed 5 GHz band (5.25-5.85 GHz). For the numerical and experimental characterization of the structures discussed, the commercial software Ansoft Designer and an Agilent N5230A vector network analyzer were used, respectively.
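A rough way to reason about the cascading experiments is the textbook equivalent-circuit approximation: each FSS screen modeled as a shunt resonant branch on a transmission line, the air or dielectric spacer as a line section, and the whole stack multiplied out as ABCD matrices. This is not the full-wave Ansoft Designer analysis used in the work, and the L, C, and thickness values are illustrative only:

```python
# Equivalent-circuit sketch of two cascaded FSS screens: each screen is a
# shunt series-LC branch (band-stop response) and the spacer a transmission
# line section; ABCD matrices are multiplied and |S21| evaluated. This is a
# circuit approximation, not the full-wave analysis; all values illustrative.
import numpy as np

Z0 = 376.73                  # free-space wave impedance, ohms
C_LIGHT = 2.998e8            # m/s

def fss_abcd(f, L=4e-9, C=0.05e-12):
    """Shunt series-LC branch; transmission zero near 11 GHz for these values."""
    w = 2 * np.pi * f
    y = 1.0 / (1j * w * L + 1.0 / (1j * w * C))
    return np.array([[1, 0], [y, 1]], dtype=complex)

def spacer_abcd(f, d, eps_r=1.0):
    """Air (eps_r=1) or dielectric layer of thickness d as a line section."""
    beta = 2 * np.pi * f * np.sqrt(eps_r) / C_LIGHT
    z = Z0 / np.sqrt(eps_r)
    return np.array([[np.cos(beta * d), 1j * z * np.sin(beta * d)],
                     [1j * np.sin(beta * d) / z, np.cos(beta * d)]])

def s21(f, d):
    A, B, C, D = (fss_abcd(f) @ spacer_abcd(f, d) @ fss_abcd(f)).ravel()
    return 2.0 / (A + B / Z0 + C * Z0 + D)

freqs = np.linspace(1e9, 15e9, 281)
for d in (3e-3, 6e-3):       # varying the air-layer thickness, as in the text
    t_db = 20 * np.log10(np.abs([s21(f, d) for f in freqs]))
    print(f"d = {d * 1e3:.0f} mm: min |S21| = {t_db.min():.1f} dB")
```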

Relevance:

10.00%

Publisher:

Abstract:

We quantified pigment biomarkers by high-performance liquid chromatography (HPLC) to obtain a broad taxonomic classification of the microphytobenthos (MPB), i.e. identification of dominant taxa. Three replicate sediment cores were collected at 0, 50, and 100 m along transects 5-9 in Heron Reef lagoon (n=15) (Fig. 1). Transects 1-4 could not be processed because the means to have the samples analysed by HPLC were not available at the time of field data collection. Cores were stored frozen, and scrapes taken from the top of each one were placed in cryovials immersed in dry ice. Samples were sent to the laboratory (CSIRO Marine and Atmospheric Research, Hobart, Australia), where pigments were extracted with 100% acetone for fifteen hours at 4°C after vortex mixing (30 seconds) and sonication (15 minutes). Samples were then centrifuged and filtered prior to the analysis of pigment composition with a Waters Alliance HPLC system equipped with a photodiode array detector. Pigments were separated using a Zorbax Eclipse XDB-C8 stainless steel 150 mm x 4.6 mm ID column with 3.5 µm particle size (Agilent Technologies) and a binary gradient system with an elevated column temperature, following a modified version of the Van Heukelem and Thomas (2001) method. The separated pigments were detected at 436 nm and identified against standard spectra using Waters Empower software. Standards for HPLC system calibration were obtained from Sigma (USA) and DHI (Denmark).