973 results for Error in substance


Relevance:

80.00%

Publisher:

Abstract:

Radiation therapy for patients with intact cervical cancer is frequently delivered as primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, its prescription and delivery must take previously delivered dose into account. This project aims to address the major issues of this complex process in order to improve treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given before the completion of ICBT, assumptions must be made about the dose that positive lymph nodes will receive from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing normal-tissue dose during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which represents only the worst-case scenario; that worst case is often an exaggeration when highly conformal techniques such as intensity modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method. The mean registration errors over all voxels within the bladder and rectum were 5 and 6 mm, respectively. Finally, the third aim explored the potential of proton therapy to reduce normal-tissue dose. A major physical advantage of protons over photons is that protons stop after delivering their dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determine population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]), with additional margins of 0.8 cm and 0.9 cm, respectively. For external iliac nodes, lateral and PA beams required additional margins of 0.4 cm and 0.9 cm, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and about the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
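
To make the volumetric summation step concrete, the following is a minimal Python sketch of voxel-wise dose accumulation through a deformable registration. The displacement-field format, the co-registered grids, and all array names are illustrative assumptions, not the project's actual pipeline.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose, dvf):
    """Resample a dose grid through a displacement vector field (DVF).

    dose : (Z, Y, X) array of dose values on the moving grid.
    dvf  : (3, Z, Y, X) displacement, in voxels, mapping each reference
           voxel to its corresponding position in the moving dose grid.
    """
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in dose.shape), indexing="ij")
    coords = np.stack([zz + dvf[0], yy + dvf[1], xx + dvf[2]])
    return map_coordinates(dose, coords, order=1, mode="nearest")

# Voxel-wise total dose: EBRT dose plus the two ICBT fractions mapped onto
# the EBRT reference anatomy (all grids assumed the same size here):
# total = ebrt_dose + warp_dose(icbt1_dose, dvf1) + warp_dose(icbt2_dose, dvf2)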

Relevance:

80.00%

Publisher:

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The methods currently used for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming the n-cycle raw data into n-1 difference values. Linear regression was then applied to the natural logarithm of the transformed data, and the amplification efficiency and the initial number of DNA molecules were calculated for each PCR run. To evaluate the new method, we compared it, in terms of accuracy and precision, with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum fluorescence. Three criteria (threshold identification, maximum R², and maximum slope) were employed to select the data points used for fitting. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, giving an accurate estimate of the initial DNA amount and a reasonable estimate of PCR amplification efficiency. When the maximum R² and maximum slope criteria were used, the original linear regression method gave an accurate estimate of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
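
The taking-difference fit described above is simple enough to sketch directly. If the raw signal follows F_c = B + N0·E^c with a constant background B, the first difference dF_c = F_{c+1} - F_c = N0·(E-1)·E^c no longer contains B, and ln(dF_c) is linear in the cycle number c. The snippet below is a minimal sketch of that idea; restricting the fit to exponential-phase cycles, as the selection criteria above do, is noted in a comment.

import numpy as np

def taking_difference_fit(fluor):
    """Taking-difference linear regression for qPCR (sketch).

    Fits ln(dF_c) = ln(N0*(E-1)) + c*ln(E), so the unknown background
    cancels in the difference. Returns (efficiency E, initial amount N0).
    """
    fluor = np.asarray(fluor, dtype=float)
    c = np.arange(len(fluor))
    dF = np.diff(fluor)                 # n readings -> n-1 differences
    keep = dF > 0                       # log defined only for positive increments;
                                        # in practice, also restrict to the
                                        # exponential phase via a criterion above
    slope, intercept = np.polyfit(c[:-1][keep], np.log(dF[keep]), 1)
    E = np.exp(slope)                   # amplification efficiency (1 < E <= 2)
    N0 = np.exp(intercept) / (E - 1.0)  # estimated initial template amount
    return E, N0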

Relevance:

80.00%

Publisher:

Abstract:

The rate at which hydrothermal precipitates accumulate, as measured by the accumulation rate of manganese, can be used to identify periods of anomalous hydrothermal activity in the past. From a preliminary study of Sites 597 and 598, four periods of anomalously high hydrothermal activity prior to 6 Ma have been identified: 8.5 to 10.5 Ma, 12 to 16 Ma, 17 to 18 Ma, and 23 to 27 Ma. The 18-Ma anomaly is the largest and is associated with the jump in spreading from the fossil Mendoza Ridge to the East Pacific Rise, whereas the 23-to-27-Ma anomaly is correlated with the birth of the Galapagos Spreading Center and the resultant ridge reorganization. The 12-to-16-Ma and 8.5-to-10.5-Ma anomalies are correlated with periods of anomalously high volcanism around the rim of the Pacific Basin and may be related to other periods of ridge reorganization along the East Pacific Rise. There is no apparent correlation between periods of fast spreading at 19°S and periods of high hydrothermal activity. We thus suggest that the periods when hydrothermal activity and crustal alteration at mid-ocean ridges are most pronounced may be periods of large-scale ridge reorganization.
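
For readers unfamiliar with the metric, a manganese accumulation rate combines concentration, dry bulk density, and linear sedimentation rate in the standard mass-accumulation-rate formula. The values below are illustrative only and are not taken from the Sites 597 and 598 data.

# Illustrative Mn accumulation-rate calculation (all values hypothetical).
mn_concentration = 0.02        # g Mn per g dry sediment (2 wt%)
dry_bulk_density = 0.7         # g/cm^3
sed_rate_cm_per_kyr = 0.5      # linear sedimentation rate

# Mass accumulation rate of Mn, in g/cm^2/kyr:
mar_mn = mn_concentration * dry_bulk_density * sed_rate_cm_per_kyr
print(f"Mn accumulation rate: {mar_mn:.4f} g/cm^2/kyr")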

Relevance:

80.00%

Publisher:

Abstract:

We present a high-resolution magnetostratigraphy and relative paleointensity (RPI) record derived from the upper 85 meters of IODP Site U1336, an equatorial Pacific early to middle Miocene succession recovered during Expedition 320/321. The magnetostratigraphy is well resolved, with reversals typically located to within a few centimeters, resulting in a well-constrained age model. The lowest normal polarity interval, from 85 to 74.87 meters, is interpreted as the upper part of Chron C6n (18.614-19.599 Ma). Another 33 magnetozones occur from 74.87 to 0.85 m, which are interpreted to represent the continuous sequence of chrons from Chron C5Er (18.431-18.614 Ma) up to the top of Chron C5An.1n (12.014 Ma). We identify three possible new subchrons within Chrons C5Cn.1n, C5Bn.1r, and C5ABn. Sedimentation rates vary from about 7 to 15 m/Myr, with a mean of about 10 m/Myr. We observe rapid apparent changes in sedimentation rate at geomagnetic reversals between ~16 and 19 Ma that indicate a calibration error in the geomagnetic polarity timescale (ATNTS2004). The remanence is carried mainly by non-interacting particles of fine-grained magnetite, which have first-order reversal curve (FORC) distributions characteristic of biogenic magnetite. Given the relative homogeneity of the remanence carriers throughout the 85-m-thick succession and the quality with which the remanence is recorded, we have constructed an RPI record that provides new insights into middle Miocene geomagnetic field behavior. The RPI record indicates a gradual decline in field strength between 18.5 Ma and 14.5 Ma and shows no discernible link between RPI and either chron duration or polarity state.
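
The apparent sedimentation-rate changes mentioned above come from simple interval arithmetic between dated reversal horizons. A minimal sketch (depths and ages hypothetical, not the Site U1336 picks) shows how an isolated rate spike can flag a miscalibrated chron boundary rather than a real depositional change.

import numpy as np

# Depths (m) of reversal horizons and their timescale ages (Ma) -- hypothetical.
depth = np.array([10.2, 18.7, 27.5, 36.0, 45.8])
age   = np.array([13.0, 14.0, 15.0, 16.0, 17.0])

# Interval sedimentation rates, in m/Myr, between successive reversals.
rates = np.diff(depth) / np.diff(age)
print(rates)  # an isolated spike may point to a miscalibrated chron boundary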

Relevance:

80.00%

Publisher:

Abstract:

Peridotites (diopside-bearing harzburgites) found at 13°N on the Mid-Atlantic Ridge fall into two compositional groups. Peridotites P1 are plagioclase-free rocks with minerals of uniform composition and Ca-pyroxene strongly depleted in highly incompatible elements. Peridotites P2 bear evidence of interaction with basic melt: mafic veinlets; wide variations in mineral composition; enrichment of minerals in highly incompatible elements (Na, Zr, and LREE); enrichment of minerals in moderately incompatible elements (Ti, Y, and HREE) from the P1 level to abundances 4-10 times higher toward the contacts with mafic aggregates; and exotic mineral assemblages (Cr-spinel + rutile and Cr-spinel + ilmenite in peridotite, and pentlandite + rutile in mafic veinlets). The anomalous incompatible-element enrichment of minerals from peridotites P2 occurred at the spinel-plagioclase facies boundary, which corresponds to a pressure of about 0.8-0.9 GPa. Temperature and oxygen fugacity were estimated from spinel-orthopyroxene-olivine equilibria. Peridotites P1, with their uniform mineral compositions, record a temperature of last complete recrystallization of 940-1050°C and an oxygen fugacity at the FMQ buffer within calculation error. In peridotites P2, local assemblages have different compositions of coexisting minerals, reflecting repeated partial recrystallization during heating to magmatic temperatures (above 1200°C) and subsequent reequilibration at temperatures decreasing to 910°C and at oxygen fugacities significantly above the FMQ buffer (delta log fO2 = 1.3-1.9). The mafic veins are considered to be crystallization products of a basic melt enriched in Mg and Ni through interaction with the peridotite. The geochemical type of the melt, reconstructed from equilibrium with Ca-pyroxene, is T-MORB, with (La/Sm)_N ≈ 1.6 and (Ce/Yb)_N ≈ 2.3, which is consistent with the compositional variations of modern basaltic lavas in this segment of the Mid-Atlantic Ridge, including new data on quenched basaltic glasses.
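
The chondrite-normalized ratios quoted for the reconstructed melt follow the usual normalization convention. A small sketch, using approximate CI-chondrite values after McDonough & Sun (1995) and hypothetical sample concentrations, illustrates the calculation.

# Chondrite-normalized REE ratios such as (La/Sm)_N and (Ce/Yb)_N.
# CI-chondrite values (ppm), approximate, after McDonough & Sun (1995);
# the melt concentrations are hypothetical, not data from this study.
CHONDRITE = {"La": 0.237, "Ce": 0.613, "Sm": 0.148, "Yb": 0.161}

def normalized_ratio(sample, num, den):
    """(num/den)_N = (sample[num]/chondrite[num]) / (sample[den]/chondrite[den])"""
    return (sample[num] / CHONDRITE[num]) / (sample[den] / CHONDRITE[den])

melt = {"La": 3.2, "Ce": 9.0, "Sm": 1.25, "Yb": 1.03}   # ppm, illustrative
print(normalized_ratio(melt, "La", "Sm"))  # ~1.6, a T-MORB-like signature
print(normalized_ratio(melt, "Ce", "Yb"))  # ~2.3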

Relevance:

80.00%

Publisher:

Abstract:

We present modern B/Ca core-top calibrations for the epifaunal benthic foraminifer Nuttallides umbonifera and the infaunal Oridorsalis umbonatus to test whether B/Ca values in these species can be used for the reconstruction of paleo-Δ[CO3^2-]. O. umbonatus originated in the Late Cretaceous and remains extant, whereas N. umbonifera originated in the Eocene and is the closest extant relative of Nuttallides truempyi, which ranges from the Late Cretaceous through the Eocene. We measured B/Ca in both species in 35 Holocene sediment samples from the Atlantic, Pacific, and Southern Oceans. B/Ca values in epifaunal N. umbonifera (~85-175 µmol/mol) are consistently lower than the values reported for epifaunal Cibicidoides (Cibicides) wuellerstorfi (130-250 µmol/mol), though the sensitivity of B/Ca to Δ[CO3^2-] in N. umbonifera (1.23 ± 0.15) is similar to that in C. wuellerstorfi (1.14 ± 0.048). In addition, we show that B/Ca values of paired N. umbonifera and its extinct ancestor, N. truempyi, from Eocene cores are indistinguishable within error. In contrast, both the B/Ca (35-85 µmol/mol) and the Δ[CO3^2-] sensitivity (0.29 ± 0.20) of core-top O. umbonatus are considerably lower (as in other infaunal species), and this offset extends into the Paleocene. Thus the B/Ca of N. umbonifera and its ancestor can be used to reconstruct bottom water Δ[CO3^2-], whereas O. umbonatus B/Ca appears to be buffered by porewater [CO3^2-] and is better suited for constraining long-term drift in seawater B/Ca.
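
Applying such a core-top calibration downcore amounts to inverting a linear fit. Below is a minimal sketch assuming a relation of the form B/Ca = intercept + sensitivity × Δ[CO3^2-], with the N. umbonifera sensitivity quoted above; the intercept and input value are hypothetical.

def delta_co3_from_bca(bca, slope, intercept):
    """Invert a linear core-top calibration B/Ca = intercept + slope * D[CO3^2-].

    bca in umol/mol; returns D[CO3^2-] (units set by the calibration,
    conventionally umol/kg). Slope and intercept come from the core-top fit.
    """
    return (bca - intercept) / slope

# N. umbonifera sensitivity from the core-top fit (~1.23); intercept illustrative.
print(delta_co3_from_bca(bca=140.0, slope=1.23, intercept=120.0))  # ~16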

Relevance:

80.00%

Publisher:

Abstract:

Drilling at Sites 534 and 603 of the Deep Sea Drilling Project recovered thick sections of Berriasian through Aptian white limestones to dark gray marls, interbedded with claystones and clastic turbidites. Progressive thermal demagnetization removed a normal-polarity overprint carried by goethite and/or pyrrhotite. The resulting characteristic magnetization is carried predominantly by magnetite. Directions and reliability of the characteristic magnetization of each sample were computed using least-squares line-fits of the magnetization vectors. The corrected true mean inclinations of the sites suggest that the western North Atlantic underwent approximately 6° of steady southward motion between the Berriasian and Aptian stages. The magnetic polarity patterns of the two sites, when plotted on stratigraphic columns of the pelagic sediments without turbidite beds, display a fairly consistent magnetostratigraphy through most of the Hauterivian-Barremian interval, using dinoflagellate and nannofossil events and facies changes in the pelagic sediment as controls on the correlations. The composite magnetostratigraphy appears to include most of the features of the M-sequence block model of magnetic anomalies from M1 to M10N (Barremian-Hauterivian) and from M16 to M23 (Berriasian-Tithonian). The Valanginian magnetostratigraphy of the sites does not exhibit reversed-polarity intervals corresponding to M11 to M13 of the M-sequence model; this may be the result of poor magnetization, of a major unrecognized hiatus in the early to middle Valanginian in the western North Atlantic, or of an error in the standard block model. Based on these tentative polarity-zone correlations, the Hauterivian/Barremian boundary occurs in or near reversed-polarity Chron M7 or M5, depending on whether the dinoflagellate or the nannofossil zonation, respectively, is used; the Valanginian/Hauterivian boundary, as defined by the dinoflagellate zonation, is near reversed-polarity Chron M10N.
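
The ~6° of southward motion is the kind of estimate obtained by converting corrected mean inclinations to paleolatitudes with the geocentric-axial-dipole relation tan I = 2 tan λ. The sketch below uses illustrative inclinations, not the measured values.

import numpy as np

def paleolatitude(inclination_deg):
    """Geocentric axial dipole: tan(I) = 2 * tan(latitude)."""
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inclination_deg))))

# Illustrative corrected mean inclinations for two stratigraphic levels.
for inc in (35.0, 27.0):
    print(f"I = {inc:4.1f} deg -> paleolatitude = {paleolatitude(inc):5.1f} deg")
# The difference between the two paleolatitudes gives the net N-S motion.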

Relevance:

80.00%

Publisher:

Abstract:

Millennial-scale dry events in the Northern Hemisphere monsoon regions during the last glacial period are commonly attributed to southward shifts of the Intertropical Convergence Zone (ITCZ) associated with an intensification of the northeasterly (NE) trade wind system during intervals of reduced Atlantic meridional overturning circulation (AMOC). Using high-resolution pollen records of the last deglaciation from the continental slope off Senegal, we show that one of the longest and most extreme droughts in western Sahel history, which occurred during North Atlantic Heinrich Stadial 1 (HS1), displayed a succession of three major phases. These progressed from an interval of maximum representation of Saharan pollen elements between ~19 and 17.4 kyr BP, indicating the onset of aridity and intensified NE trade winds, through a millennial interlude of reduced input of Saharan pollen and increased input of Sahelian pollen, to a final phase between ~16.2 and 15 kyr BP characterized by a second maximum of Saharan pollen abundances. This change in the pollen assemblage indicates a mid-HS1 interlude of NE trade wind relaxation between two distinct trade wind maxima, along with an intensified mid-tropospheric African Easterly Jet (AEJ), indicating a substantial change in West African atmospheric processes. The pollen data thus suggest that although the NE trades had weakened, the Sahel drought remained severe during this interval. Therefore, a simple strengthening of the trade winds and a southward shift of the West African monsoon trough alone cannot fully explain millennial-scale Sahel droughts during periods of AMOC weakening. Instead, we suggest that an intensification of the AEJ is needed to explain the persistence of the drought during HS1. Simulations with the Community Climate System Model indicate that an intensified AEJ during periods of reduced AMOC affected North African climate by enhancing moisture divergence over the West African realm, thereby extending the Sahel drought for about 4000 years.

Relevance:

80.00%

Publisher:

Abstract:

Greenland ice core records indicate that the last deglaciation (~7-21 ka) was punctuated by numerous abrupt climate reversals involving temperature changes of up to 5°C-10°C within decades. However, the cause of many of these events is uncertain. A likely candidate is the input of deglacial meltwater from the Laurentide Ice Sheet (LIS) to the high-latitude North Atlantic, which disrupted ocean circulation and triggered cooling. Yet direct evidence of meltwater input for many of these events has so far remained undetected. In this study, we use the geochemistry (paired Mg/Ca-δ18O) of planktonic foraminifera from a sediment core south of Iceland to reconstruct the input of freshwater to the northern North Atlantic during abrupt deglacial climate change. Our record can be placed on the same timescale as the ice cores and therefore provides a direct comparison between the timing of freshwater input and climate variability. Meltwater events coincide with the onset of numerous cold intervals, including the Older Dryas (14.0 ka), two events during the Allerød (at ~13.1 and 13.6 ka), the Younger Dryas (12.9 ka), and the 8.2 ka event, supporting a causal link between these abrupt climate changes and meltwater input. During the Bølling-Allerød warm interval, we find that periods of warming are associated with an increased meltwater flux to the northern North Atlantic, which in turn induces abrupt cooling, a cessation of meltwater input, and eventual climate recovery. This implies that a feedback between climate and meltwater input produced a highly variable climate. A comparison with published data sets suggests that this feedback likely involved fluctuations of the southern margin of the LIS causing rerouting of LIS meltwater between southern and eastern drainage outlets, as proposed by Clark et al. (2001, doi:10.1126/science.1062517).
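
The paired Mg/Ca-δ18O approach separates temperature from seawater δ18O roughly as sketched below: Mg/Ca constrains calcification temperature, and that temperature is then removed from the calcite δ18O to isolate the seawater (freshwater) signal. The calibration constants and the linearized paleotemperature equation are typical literature forms assumed for illustration, not the calibrations used in this study.

import numpy as np

# Assumed calibration constants (typical literature values, NOT from this paper):
A, B = 0.09, 0.38            # Mg/Ca = B * exp(A * T), T in deg C

def temperature_from_mgca(mgca):
    """Exponential Mg/Ca palaeothermometer."""
    return np.log(mgca / B) / A

def d18o_seawater(d18o_calcite, temp_c):
    """Linearized palaeotemperature equation (Shackleton-type form):
    T ~ 16.9 - 4.38*(d18Oc - d18Ow)  =>  d18Ow = d18Oc - (16.9 - T)/4.38.
    Meltwater input drives d18Ow toward lighter (more negative) values."""
    return d18o_calcite - (16.9 - temp_c) / 4.38

t = temperature_from_mgca(1.1)                     # mmol/mol, illustrative
print(t, d18o_seawater(d18o_calcite=2.0, temp_c=t))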

Relevance:

80.00%

Publisher:

Abstract:

Multibeam data were collected during R/V Polarstern cruise ARK-XXII/2 to the central Arctic Ocean. The multibeam sonar system was an ATLAS HYDROSWEEP DS2. The data are unprocessed and may contain outliers and blunders. Because of a transducer installation error, the data are affected by large systematic errors and must not be used for grid calculations or charting projects.

Relevance:

80.00%

Publisher:

Abstract:

The Tokai-to-Kamioka (T2K) neutrino experiment measures neutrino oscillations using an almost pure muon neutrino beam produced at the J-PARC accelerator facility. The T2K muon monitor was installed to measure the direction and stability of the muon beam that is produced together with the muon neutrino beam. The systematic error in the muon beam direction measurement was estimated, using data and MC simulation, to be 0.28 mrad. During beam operation, the proton beam has been controlled using measurements from the muon monitor, and the direction of the neutrino beam has been tuned to within 0.3 mrad of the designed beam axis. In order to understand the muon beam properties, a measurement of the absolute muon yield at the muon monitor was conducted with an emulsion detector. The number of muon tracks was measured to be (4.06 ± 0.05) × 10⁴ cm⁻², normalized to 4 × 10¹¹ protons on target with 250 kA horn operation. The result is in agreement with the prediction, which is corrected based on hadron production data.

Relevance:

80.00%

Publisher:

Abstract:

This Doctoral Thesis deals with the application of meshless methods to eigenvalue problems, mainly free vibrations and buckling. The analysis focuses on aspects such as the numerical solution of the eigenvalue problem with these methods, the computational cost, and the feasibility of using non-consistent mass or geometric stiffness matrices. Furthermore, the error is analyzed in detail, with the aim of identifying its main sources and obtaining the key factors that enable faster convergence. Although a wide variety of apparently independent meshless methods can currently be found in the literature, the relationships among them have been analyzed; the outcome of this assessment is that those methods can be grouped into a limited number of categories and that the Element-Free Galerkin Method (EFGM) is representative of the most important one. The EFGM has therefore been selected as the reference for the numerical analyses. Many of the error sources of a meshless method stem from its interpolation/approximation algorithm. In the EFGM, this algorithm is known as Moving Least Squares (MLS), a particular case of the Generalized Moving Least Squares (GMLS). The accuracy of the MLS depends on the following factors: the order of the polynomial basis p(x), the features of the weight function w(x), and the shape and size of the support domain of this weight function. The individual contribution of each of these factors, along with the interactions among them, has been studied for both regular and irregular arrangements of nodes, by reducing each contribution to a single quantifiable parameter. This assessment is applied to a range of one- and two-dimensional benchmark structural problems and considers the error not only in terms of eigenvalues (natural frequencies or buckling loads, as appropriate) but also in terms of eigenvectors.
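
Since the MLS approximation is the error source under study, a minimal one-dimensional sketch may help fix ideas: the three accuracy factors named above appear explicitly as the basis order, the weight function, and its support size. The cubic-spline weight and the support radius are illustrative choices, not those of the Thesis.

import numpy as np

def mls_shape_functions(x, nodes, support=0.3, order=1):
    """1D Moving Least Squares shape functions at evaluation point x (sketch).

    Uses a cubic-spline weight over the given support radius and a
    polynomial basis [1, x, ..., x**order].
    """
    r = np.abs(x - nodes) / support
    w = np.where(r <= 0.5, 2/3 - 4*r**2 + 4*r**3,
        np.where(r <= 1.0, 4/3 - 4*r + 4*r**2 - (4/3)*r**3, 0.0))
    P = np.vander(nodes, order + 1, increasing=True)    # rows p(x_i)
    px = np.vander([x], order + 1, increasing=True)[0]  # p(x)
    A = P.T @ (w[:, None] * P)                          # moment matrix
    return px @ np.linalg.solve(A, (w[:, None] * P).T)  # phi_i(x)

nodes = np.linspace(0.0, 1.0, 11)
u = np.sin(np.pi * nodes)          # nodal values of a test field
phi = mls_shape_functions(0.42, nodes)
print(phi.sum(), phi @ u)          # partition of unity (~1) and the approximation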

Relevance:

80.00%

Publisher:

Abstract:

Several issues concerning the current use of speech interfaces are discussed, and the design and development of a speech interface that enables air traffic controllers to command and control their terminals by voice is presented. Special emphasis is placed on the comparison between laboratory experiments and field experiments, in which a set of ergonomics-related effects are detected that cannot be observed in controlled laboratory experiments. The paper presents both the objective and the subjective performance obtained in the field evaluation of the system with student controllers at an air traffic control (ATC) training facility. The system exhibits low word recognition error rates (0.4% in Spanish and 1.5% in English) and low command error rates (6% in Spanish and 10.6% in English in the field tests). The subjective impression has also been positive, encouraging future development and integration phases in the Spanish ATC terminals designed by Aeropuertos Españoles y Navegación Aérea (AENA).
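
Word-level figures like those quoted are conventionally obtained from a Levenshtein alignment between the reference transcript and the recognizer output. The sketch below shows the standard computation; it is not the scoring code used in this evaluation.

def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    via the standard Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[-1][-1] / len(ref)

print(word_error_rate("climb to flight level three three zero",
                      "climb to level three three zero"))  # one deletion -> 1/7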

Relevance:

80.00%

Publisher:

Abstract:

Given the innovative character of the scientific and technological objectives set at the start of this Doctoral Thesis, the initial literature review was complemented by a continuous bibliographic update throughout the work, focused on developments in instrumentation, applications, and interpretation processes for electrical resistivity tomography. Several measurement campaigns were carried out during the development of the thesis; these campaigns differed both in the geological settings and anthropic conditions in which the geoelectrical profiles were measured and in the depth and objectives of the prospection. In these campaigns, tomographic sections were obtained with the Res2DInv software using its default configuration, and a structural interpretation of the sections was performed. As the core of this research, several optimization processes were undertaken: (i) comparison and optimization of the electrode arrays used to measure resistivity profiles, in order to obtain tomographic sections with the highest resolution in different geological settings; (ii) selection of the representation ranges (linear and logarithmic) and of the chromatic gradation that, being generally applicable, allow the best graphical interpretation for non-expert users; and (iii) design of criteria for selecting the damping factor, a critical parameter in the tomographic inversion of surface geoelectrical profiles, so that the resulting models achieve the best trade-off between reliability and error with respect to the measured values. In parallel with these processes, a normalized quality index for the inversion, termed IRI, was defined; it evaluates the degree to which both the obtained resistivity values and their spatial distribution can be considered reliable. The IRI is based on the influence that the initial resistivity of the inversion model has on the value of each cell of the inversion mesh. An exhaustive statistical analysis verified that, unlike other indices used to date, the IRI is affected neither by the investigation depth and the resolution of the arrays nor by the resistivity distribution of the subsoil; this independence is a distinguishing characteristic of the IRI.
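
The abstract does not give the formula behind the IRI, so the following is only a hypothetical sketch of the stated idea: quantify, cell by cell, how strongly the inverted resistivity depends on the starting model, and normalize the result. The function name and the repeated-inversion scheme are assumptions for illustration.

import numpy as np

def influence_index(inversions):
    """Hypothetical per-cell index in the spirit of the IRI described above.

    inversions : (k, n_cells) array of inverted resistivities from k runs
                 that differ only in the initial (homogeneous) model.
    Returns values in [0, 1]: 0 marks cells insensitive to the start model
    (reliable), 1 the most start-model-dependent cell.
    """
    logs = np.log10(np.asarray(inversions, dtype=float))
    spread = logs.max(axis=0) - logs.min(axis=0)   # per-cell dependence
    return spread / spread.max()                   # normalize to [0, 1]

# Example: three inversion runs over a 5-cell mesh (values illustrative).
runs = np.array([[100, 210, 55, 400, 80],
                 [105, 190, 60, 300, 82],
                 [ 98, 230, 52, 550, 79]])
print(influence_index(runs))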

Relevance:

80.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications set by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas lined with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used regardless of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, an in-depth review of the state of the art was carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to transform the measured field to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors, and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms for extrapolating functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, require no additional measurements. Noise is the error most widely studied in this Thesis; three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The third back-propagates the measured field to a surface with the same geometry as the measurement surface but closer to the AUT and then also applies spatial filtering. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
The method for suppressing reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical, and partial spherical near-field measurements is the third error analyzed in this Thesis. The method for reducing this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, is also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, easing its identification within the measurement environment and its later replacement; the second computationally removes the leakage effect without requiring the replacement of the faulty component.
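
Of the three noise-filtering alternatives, the modal-filtering idea is the easiest to sketch for a planar range: spectral content outside the visible region of the plane-wave spectrum carries no propagating-field information, so it can be zeroed before the near-field-to-far-field transformation. The sketch below is an illustration under ideal sampling assumptions, not the Thesis implementation.

import numpy as np

def filter_planar_nearfield(E, dx, wavelength):
    """Zero the plane-wave spectrum outside the visible region |k_t| <= k0.

    E : complex (Ny, Nx) tangential field sampled on a plane with spacing dx.
    Propagating modes satisfy kx**2 + ky**2 <= k0**2; spectral content
    beyond that circle (largely noise here) does not reach the far field.
    """
    k0 = 2 * np.pi / wavelength
    ny, nx = E.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(E)
    spectrum[KX**2 + KY**2 > k0**2] = 0.0      # modal (spectral) filter
    return np.fft.ifft2(spectrum)

# Example: noisy synthetic 64 x 64 scan at lambda/2 sampling.
rng = np.random.default_rng(0)
E = np.ones((64, 64), complex) + 0.1 * rng.standard_normal((64, 64))
E_filtered = filter_planar_nearfield(E, dx=0.5, wavelength=1.0)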