864 results for Operation based method


Relevance: 80.00%

Abstract:

The previously described Nc5-specific PCR test for the diagnosis of Neospora caninum infections was used to develop a quantitative PCR assay which allows the determination of infection intensities within different experimental and diagnostic sample groups. The quantitative PCR was performed by using a dual fluorescent hybridization probe system and the LightCycler Instrument for online detection of amplified DNA. This assay was successfully applied for demonstrating the parasite proliferation kinetics in organotypic slice cultures of rat brain which were infected in vitro with N. caninum tachyzoites. This PCR-based method of parasite quantitation with organotypic brain tissue samples can be regarded as a novel ex vivo approach for exploring different aspects of cerebral N. caninum infection.
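The abstract does not spell out the calibration, but LightCycler-style quantitative PCR is typically read off a standard curve: cycle-threshold (Ct) values from a serial dilution of known copy numbers are fitted linearly against log10(copies), and unknowns are inverted through that line. A minimal sketch with hypothetical dilution-series numbers (a perfectly efficient reaction loses about 3.32 cycles per ten-fold dilution):

```python
import math

def fit_standard_curve(copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept
    from a serial-dilution standard series."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx = sum(x) / n
    my = sum(ct_values) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct_values))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate the starting copy number."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical ten-fold dilution series (copies, Ct), slope = -3.32.
standards = [1e6, 1e5, 1e4, 1e3]
cts = [15.0, 18.32, 21.64, 24.96]
m, b = fit_standard_curve(standards, cts)
print(round(quantify(21.64, m, b)))  # ~10000 copies
```

The slope also encodes amplification efficiency (efficiency = 10^(-1/slope) - 1), which is why a value near -3.32 is usually required before a run is trusted.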

Relevance: 80.00%

Abstract:

In several extensions of the Standard Model, the top quark can decay into a bottom quark and a light charged Higgs boson H⁺, t → bH⁺, in addition to the Standard Model decay t → bW. Since W bosons decay to the three lepton generations equally, while H⁺ may predominantly decay into τν, charged Higgs bosons can be searched for using the violation of lepton universality in top quark decays. The analysis in this paper is based on 4.6 fb⁻¹ of proton-proton collision data at √s = 7 TeV collected by the ATLAS experiment at the Large Hadron Collider. Signatures containing leptons (e or μ) and/or a hadronically decaying tau (τ_had) are used. Event yield ratios between e + τ_had and e + μ, as well as between μ + τ_had and μ + e, final states are measured in the data and compared to predictions from simulations. This ratio-based method reduces the impact of systematic uncertainties in the analysis. No significant deviation from the Standard Model predictions is observed. With the assumption that the branching fraction B(H⁺ → τν) is 100%, upper limits in the range 3.2%-4.4% can be placed on the branching fraction B(t → bH⁺) for charged Higgs boson masses m_H+ in the range 90-140 GeV. After combination with results from a search for charged Higgs bosons in tt̄ decays using the τ_had + jets final state, upper limits on B(t → bH⁺) can be set in the range 0.8%-3.4%, for m_H+ in the range 90-160 GeV.
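The appeal of a ratio-based measurement is that any multiplicative factor common to both final states (luminosity, a shared efficiency scale) cancels exactly, leaving mostly the statistical term. A small sketch with hypothetical event yields and Poisson errors only (the ATLAS analysis of course treats systematics far more carefully):

```python
import math

def yield_ratio(n1, n2):
    """Event-yield ratio with its Poisson statistical uncertainty
    propagated in quadrature: sigma_R = R * sqrt(1/n1 + 1/n2)."""
    r = n1 / n2
    return r, r * math.sqrt(1.0 / n1 + 1.0 / n2)

# Hypothetical yields; a detector-efficiency factor common to both
# channels drops out of the ratio entirely.
eff = 0.9
r_obs, dr = yield_ratio(eff * 1200, eff * 1150)
r_nom, _ = yield_ratio(1200.0, 1150.0)
print(round(r_obs, 6), round(r_nom, 6))  # identical: the common factor cancels
```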

Relevance: 80.00%

Abstract:

Membrane proteins carry out functions such as nutrient uptake, ATP synthesis or transmembrane signal transduction. An increasing number of reports indicate that cellular processes are underpinned by regulated interactions between these proteins. Consequently, functional studies of these networks at a molecular level require co-reconstitution of the interacting components. Here, we report a SNARE protein-based method for incorporation of multiple membrane proteins into artificial membrane vesicles of well-defined composition, and for delivery of large water-soluble substrates into these vesicles. The approach is used for in vitro reconstruction of a fully functional bacterial respiratory chain from purified components. Furthermore, the method is used for functional incorporation of the entire F1F0 ATP synthase complex into native bacterial membranes from which this component had been genetically removed. The novel methodology offers a tool to investigate complex interaction networks between membrane-bound proteins at a molecular level, which is expected to generate functional insights into key cellular functions.


Relevance: 80.00%

Abstract:

In this paper, we confirm, with absolute certainty, a conjecture on a certain oscillatory behaviour of higher auto-ionizing resonances of atoms and molecules beyond a threshold. These results not only definitely settle a more than 30 year old controversy in Rittby et al. (1981 Phys. Rev. A 24, 1636–1639 (doi:10.1103/PhysRevA.24.1636)) and Korsch et al. (1982 Phys. Rev. A 26, 1802–1803 (doi:10.1103/PhysRevA.26.1802)), but also provide new and reliable information on the threshold. Our interval-arithmetic-based method allows one, for the first time, to enclose and to exclude resonances with guaranteed certainty. The efficiency of our approach is demonstrated by the fact that we are able to show that the approximations in Rittby et al. (1981 Phys. Rev. A 24, 1636–1639 (doi:10.1103/PhysRevA.24.1636)) do lie near true resonances, whereas the approximations of higher resonances in Korsch et al. (1982 Phys. Rev. A 26, 1802–1803 (doi:10.1103/PhysRevA.26.1802)) do not, and further that there exist two new pairs of resonances as suggested in Abramov et al. (2001 J. Phys. A 34, 57–72 (doi:10.1088/0305-4470/34/1/304)).
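The paper's interval-arithmetic machinery is far more elaborate (it certifies complex resonances with directed rounding), but the core idea of a *guaranteed enclosure* can be illustrated in one dimension: a sign change of a continuous function certifies a root inside the bracket, and bisection shrinks the bracket while preserving the certificate. A minimal sketch, up to floating-point rounding:

```python
def enclose_root(f, a, b, tol=1e-12):
    """Bisection with a sign-change certificate: given f(a)*f(b) < 0 for a
    continuous f, return an interval of width <= tol that is guaranteed
    (up to floating-point rounding) to contain a root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("no sign change on [a, b]")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m, m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return a, b

lo, hi = enclose_root(lambda x: x * x - 2.0, 1.0, 2.0)
print(lo, hi)  # brackets sqrt(2)
```

True interval arithmetic replaces each floating-point operation with an outward-rounded interval operation, so the certificate survives rounding error as well; that is what allows the paper to both enclose and exclude resonances rigorously.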

Relevance: 80.00%

Abstract:

Palaeoclimatic information can be retrieved from the diffusion of the stable water isotope signal during firnification of snow. The diffusion length, a measure of the amount of diffusion a layer has experienced, depends on the firn temperature and the accumulation rate. We show that the estimation of the diffusion length using power spectral densities (PSDs) of the record of a single isotope species can be biased by uncertainties in the spectral properties of the isotope signal prior to diffusion. By using a second water isotope and calculating the difference in diffusion lengths between the two isotopes, this problem is circumvented. We study the PSD method applied to two isotopes in detail and additionally present a new forward-diffusion method for retrieving the differential diffusion length, based on the Pearson correlation between the two isotope signals. The two methods are discussed and extensively tested on synthetic data generated in a Monte Carlo manner. We show that calibration of the PSD method with these synthetic data is necessary for the differential diffusion length to be determined objectively. The correlation-based method proves to be a good alternative to the PSD method, as it yields precision equal to or somewhat higher than that of the PSD method. The use of synthetic data also allows us to estimate the accuracy and precision of the two methods and to choose the best sampling strategy to obtain past temperatures with the required precision. In addition to the application to synthetic data, the two methods are tested on stable-isotope records from the EPICA (European Project for Ice Coring in Antarctica) ice core drilled in Dronning Maud Land, Antarctica, showing that reliable firn temperatures can be reconstructed, with a typical uncertainty of 1.5 and 2 °C for the Holocene period and 2 and 2.5 °C for the last glacial period for the correlation and PSD methods, respectively.
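Diffusion acts on the isotope record like a Gaussian filter, so the power spectrum of a diffused signal decays as exp(-(2πfσ)²), where σ is the diffusion length. A minimal sketch recovering σ from an idealized, noise-free spectrum by a linear fit of ln P against f² (the paper's estimator additionally handles measurement noise and the pre-diffusion spectrum, which are omitted here, and σ = 0.08 m is an assumed value for illustration):

```python
import math

def diffusion_length_from_psd(freqs, psd):
    """Recover sigma from a spectrum of the form
    P(f) = P0 * exp(-(2*pi*f*sigma)**2)
    via a least-squares fit of ln P vs f^2 (slope = -(2*pi*sigma)**2)."""
    x = [f * f for f in freqs]
    y = [math.log(p) for p in psd]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return math.sqrt(-slope) / (2.0 * math.pi)

# Synthetic noise-free spectrum with an assumed sigma of 0.08 m.
sigma_true = 0.08
freqs = [0.5 * k for k in range(1, 20)]
psd = [math.exp(-(2 * math.pi * f * sigma_true) ** 2) for f in freqs]
print(round(diffusion_length_from_psd(freqs, psd), 6))  # 0.08
```

The differential method applies the same idea to two isotopes and works with the *difference* of the two fitted σ² values, which is why the poorly known pre-diffusion spectrum largely drops out.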

Relevance: 80.00%

Abstract:

MRSI grids frequently show spectra with poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, in the subset of spectra containing only the cases that were classified every time in the same way by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency domain skewness and kurtosis, as well as time domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min), and without loss of accuracy (agreement between classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine.
The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
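The exact feature definitions belong to the paper, but the most important feature types it names (frequency-domain skewness and kurtosis, time-domain SNR over a window) can be sketched from central moments; the data below are hypothetical:

```python
import math

def skewness_kurtosis(x):
    """Sample skewness and (non-excess) kurtosis from central moments."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

def time_domain_snr(signal, noise_window):
    """Crude SNR: peak amplitude over the RMS of a noise-only window."""
    rms = math.sqrt(sum(v * v for v in noise_window) / len(noise_window))
    return max(abs(v) for v in signal) / rms

skew, kurt = skewness_kurtosis([1.0, 2.0, 3.0, 4.0, 5.0])
print(skew, kurt)  # skewness 0.0 for symmetric data, kurtosis 1.7
print(time_domain_snr([0.0, 0.0, 5.0, 0.0], [1.0, -1.0, 1.0, -1.0]))  # 5.0
```

Features of this kind, computed per spectrum, become the columns of the matrix fed to the random forest classifier.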

Relevance: 80.00%

Abstract:

Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between the fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to continuously adjust the beam search space is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML or perhaps maximum parsimony (MP)) should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (a potentially costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects the most suitable method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
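The dissertation's algorithm adjusts the beam width using topology-reliability information; the underlying beam-search skeleton it builds on keeps the k best partial solutions at each step (rather than one, as in greedy search, or all, as in exhaustive search). A generic sketch on a toy maximization problem:

```python
def beam_search(start, expand, score, beam_width, steps):
    """Generic beam search: at each step expand every state in the beam
    and keep only the `beam_width` highest-scoring candidates."""
    beam = [start]
    for _ in range(steps):
        candidates = [c for s in beam for c in expand(s)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return max(beam, key=score)

# Toy problem: build a 4-bit string maximizing the number of 1s.
best = beam_search(
    start="",
    expand=lambda s: [s + "0", s + "1"] if len(s) < 4 else [],
    score=lambda s: s.count("1"),
    beam_width=2,
    steps=4,
)
print(best)  # "1111"
```

For topology search, `expand` would generate neighbouring tree topologies and `score` would be the tree's fit to the distance data; the beam width is what the reliability information would modulate.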

Relevance: 80.00%

Abstract:

Renal cell carcinoma (RCC) is the most common malignant tumor of the kidney. Characterization of RCC tumors indicates that the most frequent genetic event associated with the initiation of tumor formation involves a loss of heterozygosity or a cytogenetic aberration on the short arm of human chromosome 3. A tumor suppressor locus, Nonpapillary Renal Carcinoma-1 (NRC-1, OMIM ID 604442), has previously been mapped to a 5–7 cM region on chromosome 3p12 and shown to induce rapid tumor cell death in vivo, as demonstrated by functional complementation experiments.

To identify the gene that accounts for the tumor suppressor activities of NRC-1, fine-scale physical mapping was conducted with a novel real-time quantitative PCR-based method developed in this study. As a result, NRC-1 was mapped within a 4.6-Mb region defined by two unique sequences within UniGene clusters Hs.41407 and Hs.371835 (78,545 kb–83,172 kb in the NCBI build 31 physical map). The putative tumor suppressor gene Robo1/Dutt1 was excluded as a candidate for NRC-1. Furthermore, a transcript map containing eleven candidate genes was established for the 4.6-Mb region. Analyses of gene expression patterns with real-time quantitative RT-PCR assays showed that one of the eleven candidate genes in the interval (TSGc28) is down-regulated in 15 out of 20 tumor samples compared with matched normal samples. Three exons of this gene have been identified by RACE experiments, although additional exon(s) seem to exist. Further gene characterization and functional studies are required to confirm the gene as a true tumor suppressor gene.

To study the cellular functions of NRC-1, gene expression profiles of three tumor-suppressive microcell hybrids, each containing a functional copy of NRC-1, were compared with those of the corresponding parental tumor cell lines using 16K oligonucleotide microarrays. Differentially expressed genes were identified. Analyses based on the Gene Ontology showed that introduction of NRC-1 into tumor cell lines activates genes in multiple cellular pathways, including cell cycle, signal transduction, cytokines and stress response. NRC-1 is likely to induce cell growth arrest indirectly through WEE1.
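The abstract does not state the normalization scheme, but down-regulation in real-time RT-PCR comparisons of this kind is commonly quantified with the 2^-ΔΔCt method: the target gene's Ct is normalized to a reference gene, then tumour is compared with matched normal tissue. A minimal sketch with hypothetical Ct values:

```python
def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """Relative expression by the 2^-ddCt method: normalize the target
    gene to a reference gene, then compare tumour to matched normal."""
    d_ct_tumor = ct_target_tumor - ct_ref_tumor
    d_ct_normal = ct_target_normal - ct_ref_normal
    return 2.0 ** -(d_ct_tumor - d_ct_normal)

# A gene needing 2 extra cycles in tumour (after normalization) is
# expressed at about a quarter of the normal level, i.e. down-regulated.
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25
```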

Relevance: 80.00%

Abstract:

Research has shown that disease-specific health-related quality of life (HRQoL) instruments are more responsive than generic instruments to particular disease conditions. However, only a few studies have used disease-specific instruments to measure HRQoL in hemophilia. The goal of this project was to develop a disease-specific utility instrument that measures patient preferences for various hemophilia health states. The visual analog scale (VAS), a ranking method, and the standard gamble (SG), a choice-based method incorporating risk, were used to measure patient preferences. Study participants (n = 128) were recruited from the UT/Gulf States Hemophilia and Thrombophilia Center and stratified by age: 0–18 years and 19+.

Test-retest reliability was demonstrated for both the VAS and SG instruments: overall within-subject correlation coefficients were 0.91 and 0.79, respectively. Results showed statistically significant differences in responses between pediatric and adult participants when using the SG (p = .045). However, no significant differences were shown between these groups when using the VAS (p = .636). When responses to the VAS and SG instruments were compared, statistically significant differences in both the pediatric (p < .0001) and adult (p < .0001) groups were observed. Data from this study also demonstrated that persons with hemophilia of varying disease severity, as well as those who were HIV infected, were able to evaluate a range of health states for hemophilia. This has important implications for the study of quality of life in hemophilia and the development of disease-specific HRQoL instruments.

The utility measures obtained from this study can be applied in economic evaluations that analyze the cost/utility of alternative hemophilia treatments. Results derived from the SG indicate that age can influence patients' preferences regarding their state of health. This may have implications for considering treatment options based on the mean age of the population under consideration. Although both instruments independently demonstrated reliability and validity, the results indicate that the two measures may not be interchangeable.
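In a standard gamble, the utility assigned to a health state is the indifference probability p itself (u = p·1 + (1-p)·0), and test-retest reliability of the kind reported above is summarized by a within-subject correlation across sessions. A minimal sketch with entirely hypothetical session data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two rating sessions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical SG utilities (indifference probabilities) from two
# sessions of the same five participants.
session1 = [0.90, 0.75, 0.60, 0.85, 0.40]
session2 = [0.88, 0.70, 0.65, 0.80, 0.45]
print(round(pearson_r(session1, session2), 3))
```

A Pearson r near the study's 0.79-0.91 range would, as in the paper, indicate acceptable test-retest reliability.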

Relevance: 80.00%

Abstract:

Hypertension (HT) is mediated by the interaction of many genetic and environmental factors. Previous genome-wide linkage analysis studies have found many loci that show linkage to HT or blood pressure (BP) regulation, but the results have generally been inconsistent. Gene-by-environment interaction is among the reasons that potentially explain these inconsistencies between studies. Here we investigate the influence of gene-by-smoking (GxS) interaction on HT and BP in European American (EA), African American (AA) and Mexican American (MA) families from the GENOA study. A variance-component-based method was used to perform genome-wide linkage analysis of systolic blood pressure (SBP), diastolic blood pressure (DBP) and HT status, as well as bivariate analysis of SBP and DBP, for smokers, non-smokers and the combined groups. The most significant results were found for SBP in MA. The strongest signal was on chromosome 17q24 (LOD = 4.2), which increased to LOD = 4.7 in the bivariate analysis, but there was no evidence of GxS interaction at this locus (p = 0.48). Two signals were identified in only one group: on chromosome 15q26.2 (LOD = 3.37) in non-smokers and on chromosome 7q21.11 (LOD = 1.4) in smokers, both of which had strong evidence of GxS interaction (p = 0.00039 and 0.009, respectively). There were also two other signals, one on chromosome 20q12 (LOD = 2.45) in smokers, which became much stronger in the combined sample (LOD = 3.53), and one on chromosome 6p22.2 (LOD = 2.06) in non-smokers. Neither peak had very strong evidence of GxS interaction (p = 0.08 and 0.06, respectively). A fine-mapping association study was performed using 200 SNPs in 30 genes located under the linkage signals on chromosomes 15 and 17. Under the chromosome 15 peak, the association analysis identified 6 SNPs accounting for a 7 mmHg increase in SBP in MA non-smokers. For the chromosome 17 linkage peak, the association analysis identified 3 SNPs accounting for a 6 mmHg increase in SBP in MA. However, none of these SNPs remained significant after correcting for multiple testing, and accounting for them in the linkage analysis produced very small reductions in the linkage signal.

The linkage analysis of BP traits taking smoking status into account produced very interesting signals for SBP in the MA population. The fine-mapping association analysis gave some insight into the contribution of some SNPs to two of the identified signals, but since these SNPs did not remain significant after multiple-testing correction and did not explain the linkage peaks, more work is needed to confirm these exploratory results and to identify the culprit variations under these linkage peaks.
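The LOD scores quoted above are base-10 logarithms of the likelihood ratio between the linkage model and the no-linkage null; a minimal sketch with hypothetical natural-log likelihoods:

```python
import math

def lod_score(loglik_linked, loglik_null):
    """LOD score: log10 of the likelihood ratio of linkage vs. no
    linkage. LOD >= 3 is the classical genome-wide threshold."""
    return (loglik_linked - loglik_null) / math.log(10)

# Hypothetical natural-log likelihoods: a difference of ~9.7 units
# gives a LOD a little above 4.2, comparable to the 17q24 signal.
print(round(lod_score(-1000.0, -1009.7), 2))
```

In a variance-component framework the two log-likelihoods come from fitting the trait model with and without a locus-specific variance component; the GxS tests compare models with and without an interaction term in the same way.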

Relevance: 80.00%

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or a higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for comparing group trajectories, and their corresponding confidence intervals, in HLGM analyses, because there is a lack of appropriate, standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM, and a lack of computing packages in the popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that, even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in an HLGM analysis provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods for constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggested the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they are not met.
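The key point above (a non-significant average slope difference can still imply a large standardized difference at later time points) can be sketched for a simple linear growth model; the dissertation's functions are more general, and the fixed effects and pooled SD below are hypothetical:

```python
def trajectory(t, intercept, slope):
    """Model-implied linear growth trajectory at time t."""
    return intercept + slope * t

def effect_size_at(t, group1, group2, pooled_sd):
    """Standardized difference (d) between two model-implied group
    trajectories at a specific time point."""
    diff = trajectory(t, *group1) - trajectory(t, *group2)
    return diff / pooled_sd

# Hypothetical HLGM fixed effects: same intercept, slopes differing by 0.5.
g_treat, g_ctrl, sd = (10.0, 2.0), (10.0, 1.5), 4.0
print([round(effect_size_at(t, g_treat, g_ctrl, sd), 2) for t in (0, 4, 8)])
# [0.0, 0.5, 1.0] -- small early, large by the end of follow-up
```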

Relevance: 80.00%

Abstract:

We developed a new FPGA-based method for coincidence detection in positron emission tomography. The method requires few device resources and no specific peripherals in order to resolve coincident digital pulses within a time window of a few nanoseconds. It has been validated on a low-end Xilinx Spartan-3E and provided coincidence resolutions below 6 ns. This resolution depends directly on the signal propagation properties of the target device and the maximum available clock frequency, so it is expected to improve considerably on higher-end FPGAs.
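On the FPGA the pairing is done in hardware, but the coincidence logic itself can be expressed in software as a two-pointer sweep over time-sorted pulse streams, pairing events whose timestamps fall within the window. A minimal sketch with hypothetical timestamps:

```python
def coincidences(ts_a, ts_b, window_ns):
    """Pair pulses from two detector channels whose timestamps (sorted,
    in ns) differ by at most `window_ns`, via a two-pointer sweep."""
    pairs, j = [], 0
    for ta in ts_a:
        # Skip channel-B pulses that are already too early to match ta.
        while j < len(ts_b) and ts_b[j] < ta - window_ns:
            j += 1
        # Collect every channel-B pulse inside [ta - window, ta + window].
        k = j
        while k < len(ts_b) and ts_b[k] <= ta + window_ns:
            pairs.append((ta, ts_b[k]))
            k += 1
    return pairs

# With a 6 ns window, only the first and third events coincide.
a = [100, 250, 400]
b = [103, 310, 398]
print(coincidences(a, b, 6))  # [(100, 103), (400, 398)]
```

The hardware version achieves the same effect by comparing pulse edges against a delayed copy of the other channel, which is why the achievable window is bounded by signal propagation delays and clock frequency.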

Relevance: 80.00%

Abstract:

Los arrays de ranuras son sistemas de antennas conocidos desde los años 40, principalmente destinados a formar parte de sistemas rádar de navíos de combate y grandes estaciones terrenas donde el tamaño y el peso no eran altamente restrictivos. Con el paso de los años y debido sobre todo a importantes avances en materiales y métodos de fabricación, el rango de aplicaciones de este tipo de sistemas radiantes creció en gran medida. Desde nuevas tecnologías biomédicas, sistemas anticolisión en automóviles y navegación en aviones, enlaces de comunicaciones de alta tasa binaria y corta distancia e incluso sistemas embarcados en satélites para la transmisión de señal de televisión. Dentro de esta familia de antennas, existen dos grupos que destacan por ser los más utilizados: las antennas de placas paralelas con las ranuras distribuidas de forma circular o espiral y las agrupaciones de arrays lineales construidos sobre guia de onda. Continuando con las tareas de investigación desarrolladas durante los últimos años en el Instituto de Tecnología de Tokyo y en el Grupo de Radiación de la Universidad Politécnica de Madrid, la totalidad de esta tesis se centra en este último grupo, aunque como se verá se separa en gran medida de las técnicas de diseño y metodologías convencionales. Los arrays de ranuras rectas y paralelas al eje de la guía rectangular que las alimenta son, sin ninguna duda, los modelos más empleados debido a la fiabilidad que presentan a altas frecuencias, su capacidad para gestionar grandes cantidades de potencia y la sencillez de su diseño y fabricación. Sin embargo, también presentan desventajas como estrecho ancho de banda en pérdidas de retorno y rápida degradación del diagrama de radiación con la frecuencia. Éstas son debidas a la naturaleza resonante de sus elementos radiantes: al perder la resonancia, el sistema global se desajusta y sus prestaciones degeneran. 
En arrays bidimensionales de slots rectos, el campo eléctrico queda polarizado sobre el plano transversal a las ranuras, correspondiéndose con el plano de altos lóbulos secundarios. Esta tesis tiene como objetivo el desarrollo de un método sistemático de diseño de arrays de ranuras inclinadas y desplazadas del centro (en lo sucesivo “ranuras compuestas”), definido en 1971 como uno de los desafíos a superar dentro del mundo del diseño de antennas. La técnica empleada se basa en el Método de los Momentos, la Teoría de Circuitos y la Teoría de Conexión Aleatoria de Matrices de Dispersión. Al tratarse de un método circuital, la primera parte de la tesis se corresponde con el estudio de la aplicabilidad de las redes equivalentes fundamentales, su capacidad para recrear fenómenos físicos de la ranura, las limitaciones y ventajas que presentan para caracterizar las diferentes configuraciones de slot compuesto. Se profundiza en las diferencias entre las redes en T y en ! y se condiciona la selección de una u otra dependiendo del tipo de elemento radiante. Una vez seleccionado el tipo de red a emplear en el diseño del sistema, se ha desarrollado un algoritmo de cascadeo progresivo desde el puerto alimentador hacia el cortocircuito que termina el modelo. Este algoritmo es independiente del número de elementos, la frecuencia central de funcionamiento, del ángulo de inclinación de las ranuras y de la red equivalente seleccionada (en T o en !). Se basa en definir el diseño del array como un Problema de Satisfacción de Condiciones (en inglés, Constraint Satisfaction Problem) que se resuelve por un método de Búsqueda en Retroceso (Backtracking algorithm). Como resultado devuelve un circuito equivalente del array completo adaptado a su entrada y cuyos elementos consumen una potencia acorde a una distribución de amplitud dada para el array. 
En toda agrupación de antennas, el acoplo mutuo entre elementos a través del campo radiado representa uno de los principales problemas para el ingeniero y sus efectos perjudican a las prestaciones globales del sistema, tanto en adaptación como en capacidad de radiación. El empleo de circuito equivalente se descartó por la dificultad que suponía la caracterización de estos efectos y su inclusión en la etapa de diseño. En esta tesis doctoral el acoplo también se ha modelado como una red equivalente cuyos elementos son transformadores ideales y admitancias, conectada al conjunto de redes equivalentes que representa el array. Al comparar los resultados estimados en términos de pérdidas de retorno y radiación con aquellos obtenidos a partir de programas comerciales populares como CST Microwave Studio se confirma la validez del método aquí propuesto, el primer método de diseño sistemático de arrays de ranuras compuestos alimentados por guía de onda rectangular. Al tratarse de ranuras no resonantes, el ancho de banda en pérdidas de retorno es mucho mas amplio que el que presentan arrays de slots rectos. Para arrays bidimensionales, el ángulo de inclinación puede ajustarse de manera que el campo quede polarizado en los planos de bajos lóbulos secundarios. Además de simulaciones se han diseñado, construido y medido dos prototipos centrados en la frecuencia de 12GHz, de seis y diez elementos. Las medidas de pérdidas de retorno y diagrama de radiación revelan excelentes resultados, certificando la bondad del método genuino Method of Moments - Forward Matching Procedure desarrollado a lo largo de esta tésis. Abstract The slot antenna arrays are well known systems from the decade of 40s, mainly intended to be part of radar systems of large warships and terrestrial stations where size and weight were not highly restrictive. 
Over the years, mainly due to significant advances in materials and manufacturing methods, the range of applications of this type of radiating systems grew significantly. From new biomedical technologies, collision avoidance systems in cars and aircraft navigation, short communication links with high bit transfer rate and even embedded systems in satellites for television broadcast. Within this family of antennas, two groups stand out as being the most frequent in the literature: parallel plate antennas with slots placed in a circular or spiral distribution and clusters of waveguide linear arrays. To continue the vast research work carried out during the last decades in the Tokyo Institute of Technology and in the Radiation Group at the Universidad Politécnica de Madrid, this thesis focuses on the latter group, although it represents a technique that drastically breaks with traditional design methodologies. The arrays of slots straight and parallel to the axis of the feeding rectangular waveguide are without a doubt the most used models because of the reliability that they present at high frequencies, its ability to handle large amounts of power and their simplicity of design and manufacturing. However, there also exist disadvantages as narrow bandwidth in return loss and rapid degradation of the radiation pattern with frequency. These are due to the resonant nature of radiating elements: away from the resonance status, the overall system performance and radiation pattern diminish. For two-dimensional arrays of straight slots, the electric field is polarized transverse to the radiators, corresponding to the plane of high side-lobe level. This thesis aims to develop a systematic method of designing arrays of angled and displaced slots (hereinafter "compound slots"), defined in 1971 as one of the challenges to overcome in the world of antenna design. The used technique is based on the Method of Moments, Circuit Theory and the Theory of Scattering Matrices Connection. 
Being a circuitry-based method, the first part of this dissertation corresponds to the study of the applicability of the basic equivalent networks, their ability to recreate the slot physical phenomena, their limitations and advantages presented to characterize different compound slot configurations. It delves into the differences of T and ! and determines the selection of the most suitable one depending on the type of radiating element. Once the type of network to be used in the system design is selected, a progressive algorithm called Forward Matching Procedure has been developed to connect the proper equivalent networks from the feeder port to shorted ending. This algorithm is independent of the number of elements, the central operating frequency, the angle of inclination of the slots and selected equivalent network (T or ! networks). It is based on the definition of the array design as a Constraint Satisfaction Problem, solved by means of a Backtracking Algorithm. As a result, the method returns an equivalent circuit of the whole array which is matched at its input port and whose elements consume a power according to a given amplitude distribution for the array. In any group of antennas, the mutual coupling between elements through the radiated field represents one of the biggest problems that the engineer faces and its effects are detrimental to the overall performance of the system, both in radiation capabilities and return loss. The employment of an equivalent circuit for the array design was discarded by some authors because of the difficulty involved in the characterization of the coupling effects and their inclusion in the design stage. In this thesis the coupling has also been modeled as an equivalent network whose elements are ideal transformers and admittances connected to the set of equivalent networks that represent the antennas of the array. 
By comparing the estimated results in terms of return loss and radiation with those obtained from popular commercial software such as CST Microwave Studio, the validity of the proposed method is fully confirmed; it represents the first systematic design method for compound-slot arrays fed by rectangular waveguide. Since these slots do not operate at resonance, the return-loss bandwidth is much wider than that of longitudinal-slot arrays. For two-dimensional arrays, the tilt angle can be adjusted so that the field is polarized in the plane of low side-lobe level. Besides the full-wave simulations, two X-band prototypes of six and ten elements have been designed, built, and measured, showing excellent agreement with the expected results. These facts certify that the Method of Moments - Forward Matching Procedure technique developed in this thesis is valid and trustworthy.

Relevância:

80.00% 80.00%

Publicador:

Resumo:

The development of reliable clonal propagation technologies is a requisite for performing Multi-Varietal Forestry (MVF). Somatic embryogenesis is considered the tissue-culture-based method most suitable for operational breeding of forest trees. Vegetative propagation is very difficult when tissues are taken from mature donors, making clonal propagation of selected trees almost impossible. We have been able to induce somatic embryogenesis in leaves taken from mature oak trees, including cork oak (Quercus suber). This important species of the Mediterranean ecosystem produces cork regularly, which confers significant economic value on it. In a previous paper we reported the establishment of a field trial to compare the growth of plants of somatic origin vs. zygotic origin, and of somatic plants from mature trees vs. somatic plants from juvenile seedlings. For that purpose, somatic seedlings were regenerated by somatic embryogenesis from five selected cork oak trees and from young plants of their half-sib progenies. They were planted in the field together with acorn-derived plants of the same families. After the first growth period, seedlings of zygotic origin were twice the height of somatic seedlings, while somatic plants of adult and juvenile origin showed similar growth. Here we provide data on height and diameter increases after two additional growth periods. In the second period, growth parameters of zygotic seedlings were also significantly higher than those of somatic ones, but there were no significant differences in height increase between zygotic seedlings and somatic plants of mature origin. In the third growth period, height and diameter increases of somatic seedlings cloned from the selected trees did not differ from those of zygotic seedlings, which were still higher than those of plants obtained from somatic embryos of the sexual progeny.
Therefore, somatic seedlings of mature origin do not seem to be affected by a possible ageing effect, and plants from somatic embryos tend to reduce the initial advantage of plants from acorns.