905 results for Methods : Statistical
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids and a threefold higher incidence rate and relative risk for clinical uterine fibroid development compared with women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate disease burden. With increased interest in understanding uterine fibroid development, and with race being a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants are defined and that the operational definition of race as a variable is reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review include OVID Medline, NLM PubMed, EBSCOhost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from the data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for review screening. The initial database yield included 947 articles; after duplicate extraction, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify further articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine citations for exclusion and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether they met the inclusion/exclusion criteria.
Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting prevalence data reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups: of these 3 studies, 2 reported a lower prevalence of risk factors for black women than for other races, contrary to hypothesis, and none reported a conceptual framework for the use of race. Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but most (51%) failed to report a method of determining the actual race of the participants, and far fewer (4%, only four South American studies) reported a conceptual framework and/or operational definition of race as a variable. However, most (95%) of all studies reported race-based health outcomes. The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities but fails to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would help to elucidate what race is actually measuring, and the risk of exposures for that measurement.
Abstract:
Mixed longitudinal designs are important study designs in many areas of medical research. Mixed longitudinal studies have several advantages over cross-sectional or pure longitudinal studies, including shorter study completion time and the ability to separate time and age effects, and are thus an attractive choice. Statistical methodology for general longitudinal studies has developed rapidly over the last few decades. A common approach to statistical modeling in studies with mixed longitudinal designs has been the linear mixed-effects model incorporating an age or time effect. The general linear mixed-effects model is considered an appropriate choice for analyzing repeated-measurements data in longitudinal studies. However, common applications of the linear mixed-effects model to mixed longitudinal studies often incorporate age as the only random effect and fail to take the cohort effect into consideration when conducting statistical inference on age-related trajectories of outcome measurements. We believe special attention should be paid to cohort effects when analyzing data from mixed longitudinal designs with multiple overlapping cohorts; this has therefore become an important statistical issue to address. This research aims to address statistical issues related to mixed longitudinal studies. The proposed study examined the existing statistical analysis methods for mixed longitudinal designs and developed an alternative analytic method that incorporates effects from multiple overlapping cohorts as well as from subjects of different ages. The study used simulation to evaluate the performance of the proposed analytic method by comparing it with the commonly used model. Finally, the study applied the proposed analytic method to data collected by an existing study, Project HeartBeat!, which had previously been evaluated using traditional analytic techniques. Project HeartBeat!
is a longitudinal study of cardiovascular disease (CVD) risk factors in childhood and adolescence using a mixed longitudinal design. The proposed model was used to evaluate four blood lipids, adjusting for age, gender, race/ethnicity, and endocrine hormones. The results of this dissertation suggest that the proposed analytic model could be a more flexible and reliable choice than the traditional model in terms of fitting the data to provide more accurate estimates in mixed longitudinal studies. Conceptually, the proposed model described in this study has useful features, including consideration of effects from multiple overlapping cohorts, and is an attractive approach for analyzing data from mixed longitudinal design studies.
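The cohort-effect problem described above can be shown with a small simulation. This is a hedged sketch with invented numbers (entry ages, cohort shifts, sample sizes), not the dissertation's actual model; ordinary least squares with cohort indicator variables stands in for the full mixed-effects formulation. When overlapping cohorts differ systematically and age is the only effect in the model, the age slope absorbs the cohort differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixed longitudinal design: three overlapping cohorts
# entering at ages 8, 10 and 12, each followed for 4 annual visits.
entry_ages = [8, 10, 12]
n_per_cohort, n_visits = 50, 4
true_slope = 2.0
cohort_shift = {0: 0.0, 1: 3.0, 2: 6.0}  # cohort effect, correlated with entry age

rows = []
for c, entry in enumerate(entry_ages):
    for _ in range(n_per_cohort):
        for v in range(n_visits):
            age_iv = entry + v
            y_iv = true_slope * age_iv + cohort_shift[c] + rng.normal(0, 1)
            rows.append((age_iv, c, y_iv))
age = np.array([r[0] for r in rows], float)
coh = np.array([r[1] for r in rows])
y = np.array([r[2] for r in rows])

# Naive model: age only, ignoring the cohort effect.
X_naive = np.column_stack([np.ones_like(age), age])
slope_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][1]

# Cohort-aware model: age plus cohort indicator dummies.
X_coh = np.column_stack([np.ones_like(age), age,
                         (coh == 1).astype(float), (coh == 2).astype(float)])
slope_coh = np.linalg.lstsq(X_coh, y, rcond=None)[0][1]
```

With these synthetic numbers the naive age slope is biased well above the true value of 2, while the cohort-aware fit recovers it, which is the kind of distortion the proposed method is meant to avoid.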
Abstract:
Assemblages of organic-walled dinoflagellate cysts (dinocysts) from 116 marine surface samples have been analysed to assess the relationship between the spatial distribution of dinocysts and modern local environmental conditions [e.g. sea surface temperature (SST), sea surface salinity (SSS), productivity] in the eastern Indian Ocean. Results from the percentage analysis and from statistical methods such as multivariate ordination analysis and end-member modelling indicate the existence of three distinct environmental and oceanographic regions in the study area. Region 1 is located in western and eastern Indonesia and is controlled by high SSTs and a low nutrient content of the surface waters. The Indonesian Throughflow (ITF) region (Region 2) is dominated by heterotrophic dinocyst species, reflecting the region's high productivity. Region 3 encompasses the area offshore north-west and west Australia, which is characterised by the water masses of the Leeuwin Current, a saline and nutrient-depleted southward current featuring energetic eddies.
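The ordination step can be sketched in miniature. This is an illustrative example only, with invented assemblage profiles and sample counts rather than the study's 116 samples: PCA (one common multivariate ordination method) applied to percentage data should separate samples drawn from distinct environmental regions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical relative-abundance profiles (percent of 4 taxa) for three
# environmental regions; samples within a region share a profile plus
# multinomial counting noise.
profiles = np.array([[60, 20, 10, 10],
                     [10, 65, 15, 10],
                     [15, 10, 25, 50]], float)
samples, labels = [], []
for r, p in enumerate(profiles):
    counts = rng.multinomial(300, p / p.sum(), size=40)
    samples.append(100 * counts / 300)   # convert counts to percentages
    labels += [r] * 40
X = np.vstack(samples)

# PCA via SVD of the centred data matrix; first two axes = ordination plane.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Samples from the same region should cluster around their centroid.
centroids = np.array([scores[np.array(labels) == r].mean(0) for r in range(3)])
within = np.mean([np.linalg.norm(scores[k] - centroids[labels[k]])
                  for k in range(len(labels))])
between = min(np.linalg.norm(centroids[i] - centroids[j])
              for i in range(3) for j in range(i + 1, 3))
```

On the ordination plane the between-region centroid distances dwarf the within-region scatter, which is the pattern that lets such analyses delineate distinct regions.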
Abstract:
An image processing observational technique for the stereoscopic reconstruction of the wave form of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired wave form is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with the other elements presented, to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
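The structure of such a cost functional can be illustrated with a much simpler 1-D analogue. This sketch is not the paper's method (no stereo imagery, no multigrid): it minimizes, by plain gradient descent, a functional with the same three ingredients, a data term, a smoothness prior, and a weak statistical constraint that pulls the variance of the estimate toward a target value.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D analogue: recover a wave profile h from noisy observations d by
# minimizing E(h) = |h - d|^2 + alpha*|grad h|^2 + beta*(var(h) - v0)^2,
# i.e. data term + smoothness prior + weak statistical constraint.
x = np.linspace(0, 2 * np.pi, 200)
truth = np.sin(3 * x)
d = truth + rng.normal(0, 0.3, x.size)

alpha, beta, v0 = 2.0, 0.5, truth.var()
h = d.copy()
step = 0.05
for _ in range(2000):
    lap = np.zeros_like(h)
    lap[1:-1] = h[:-2] - 2 * h[1:-1] + h[2:]        # discrete Laplacian
    # gradient of E(h); d var(h)/d h_i = 2*(h_i - mean)/N
    grad = 2 * (h - d) - 2 * alpha * lap \
         + 4 * beta * (h.var() - v0) * (h - h.mean()) / h.size
    h -= step * grad                                 # gradient descent step

err_obs = np.mean((d - truth) ** 2)   # error of raw observations
err_rec = np.mean((h - truth) ** 2)   # error of the reconstruction
```

The reconstruction error falls well below the raw observation error: the smoothness prior suppresses the noise while the weak variance constraint counteracts the amplitude shrinkage that pure smoothing would cause.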
Abstract:
We present a remote sensing observational method for the measurement of the spatio-temporal dynamics of ocean waves. Variational techniques are used to recover a coherent space-time reconstruction of oceanic sea states given stereo video imagery. The stereoscopic reconstruction problem is expressed in a variational optimization framework. There, we design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations as well as spatial and temporal regularizers. A nested iterative scheme is devised to numerically solve, via 3-D multigrid methods, the system of partial differential equations resulting from the optimality condition of the energy functional. The output of our method is the coherent, simultaneous estimation of the wave surface height and radiance at multiple snapshots. We demonstrate our algorithm on real data collected offshore. Statistical and spectral analyses are performed, and a comparison with an existing sequential method is presented.
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and the effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
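The simulation design can be sketched as follows. This is a hedged, minimal version of such a Monte Carlo study, with invented parameter values (5 experiments, 10 subjects per group, a large effect): it estimates the power of WMD and of the parametric response ratio by repeatedly simulating a meta-analysis and counting how often each pooled estimate is significant.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_meta(n_exp=5, n_subj=10, effect=0.8, sd=1.0, reps=500):
    """Crude power estimate for two aggregation methods (illustrative only):
    WMD = inverse-variance weighted mean difference; RR = parametric
    response ratio pooled on the log scale."""
    hits_wmd = hits_rr = 0
    for _ in range(reps):
        diffs, dvars, logs, lvars = [], [], [], []
        for _ in range(n_exp):
            ctrl = rng.normal(10.0, sd, n_subj)
            trt = rng.normal(10.0 + effect, sd, n_subj)
            diffs.append(trt.mean() - ctrl.mean())
            dvars.append(trt.var(ddof=1) / n_subj + ctrl.var(ddof=1) / n_subj)
            logs.append(np.log(trt.mean() / ctrl.mean()))
            lvars.append(trt.var(ddof=1) / (n_subj * trt.mean() ** 2)
                         + ctrl.var(ddof=1) / (n_subj * ctrl.mean() ** 2))
        w = 1 / np.array(dvars)
        if abs((w * diffs).sum() / w.sum()) / np.sqrt(1 / w.sum()) > 1.96:
            hits_wmd += 1
        w2 = 1 / np.array(lvars)
        if abs((w2 * logs).sum() / w2.sum()) / np.sqrt(1 / w2.sum()) > 1.96:
            hits_rr += 1
    return hits_wmd / reps, hits_rr / reps

power_wmd, power_rr = simulate_meta()
```

Repeating such runs over a grid of experiment counts, sample sizes, variances and effect sizes, and tabulating power alongside reliability, is the kind of evidence the guidelines described here rest on.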
Abstract:
This thesis addresses the automatic detection and tracking of vehicles using computer vision techniques with an on-board monocular camera. This problem has attracted great interest from the automotive industry and the scientific community, since it is the first step towards driver assistance, accident prevention and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no completely satisfactory solution has yet been found, and it therefore remains an open research topic. The main challenges posed by vision-based detection and tracking are the high variability among vehicles, a background that changes dynamically due to camera motion, and the need to operate in real time. In this context, this thesis proposes a unified framework for vehicle detection and tracking that addresses the problems described through a statistical approach. The framework comprises three main blocks, i.e., hypothesis generation, hypothesis verification, and vehicle tracking, which are carried out sequentially. Nevertheless, the exchange of information between the different blocks is encouraged in order to achieve the highest possible degree of adaptation to changes in the environment and to reduce the computational cost. To address the first task, hypothesis generation, two complementary methods are proposed, based respectively on the analysis of appearance and of scene geometry. For this purpose, the use of a transformed domain in which the perspective of the original image is removed is especially interesting, since this domain allows fast scanning of the image and therefore efficient generation of vehicle location hypotheses.
The final candidates are obtained through a collaborative framework between the original and the transformed domains. A supervised learning method is adopted for hypothesis verification. Thus, some of the most popular feature extraction methods are evaluated, and new descriptors are proposed based on knowledge of vehicle appearance. To evaluate the classification effectiveness of these descriptors, and given that no public databases suited to the described problem exist, a new database has been generated on which extensive tests have been carried out. Finally, a methodology for the fusion of the different classifiers is presented, and the combinations offering the best results are discussed. The core of the proposed framework is a Bayesian tracking method based on particle filters. Contributions are made to the three fundamental elements of these filters: the inference algorithm, the dynamic model and the observation model. In particular, the use of an MCMC-based sampling method is proposed, which avoids the high computational cost of traditional particle filters and therefore makes the joint modelling of multiple vehicles computationally feasible. Moreover, the aforementioned transformed domain allows the definition of a constant-velocity dynamic model, since the smooth motion of vehicles on highways is preserved. Finally, an observation model that integrates different features is proposed. In particular, in addition to vehicle appearance, the model also takes into account all the information received from the previous processing blocks. The proposed method runs in real time on a general-purpose computer and yields outstanding results compared with traditional methods.
ABSTRACT This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to addressing it in recent years, no satisfactory solution has yet been devised and thus it remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting the knowledge on vehicle appearance.
Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
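The tracking core described above can be illustrated in miniature. This is a hedged toy sketch, not the thesis's tracker: a plain bootstrap particle filter (standard resampling rather than the proposed MCMC sampling, and 1-D rather than multi-vehicle) with exactly the constant-velocity dynamic model that the rectified domain makes valid, tracking a noisily observed position.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy constant-velocity particle filter. State = (position, velocity);
# observations are positions corrupted by Gaussian noise of sd 0.5.
T, n_part = 40, 500
true_v = 1.5
truth = true_v * np.arange(1, T + 1)
obs = truth + rng.normal(0, 0.5, T)

parts = np.zeros((n_part, 2))            # columns: position, velocity
parts[:, 1] = rng.normal(0, 2, n_part)   # uncertain initial velocity
est = []
for z in obs:
    # predict: constant-velocity motion plus small process noise
    parts[:, 0] += parts[:, 1] + rng.normal(0, 0.1, n_part)
    parts[:, 1] += rng.normal(0, 0.05, n_part)
    # update: weight particles by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((z - parts[:, 0]) / 0.5) ** 2)
    w /= w.sum()
    est.append((w * parts[:, 0]).sum())
    # multinomial resampling
    parts = parts[rng.choice(n_part, n_part, p=w)]

rmse = np.sqrt(np.mean((np.array(est) - truth) ** 2))
```

After a few observations the particle cloud locks onto the unknown velocity and the position estimate tracks the target with error below the observation noise; the MCMC sampling proposed in the thesis replaces this resampling step to keep the joint multi-vehicle state space tractable.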
Abstract:
Training and assessment paradigms for laparoscopic surgical skills are evolving from traditional mentor–trainee tutorship towards structured, more objective and safer programs. Accreditation of surgeons requires reaching a consensus on the metrics and tasks used to assess surgeons' psychomotor skills. Ongoing development of tracking systems and software solutions has allowed for the expansion of novel training and assessment means in laparoscopy. The current challenge is to adapt and include these systems within training programs, and to exploit their possibilities for evaluation purposes. This paper describes the state of the art in research on measuring and assessing psychomotor laparoscopic skills. It gives an overview of tracking systems as well as of the metrics and advanced statistical and machine learning techniques employed for evaluation purposes. The latter have the potential to be used as an aid in deciding on the surgical competence level, which is an important aspect when the accreditation of surgeons in particular, and patient safety in general, are considered. The prospects of these methods and tools make them complementary means for the assessment of surgical motor skills, especially in the early stages of training. Successful examples such as the Fundamentals of Laparoscopic Surgery should help drive a paradigm change to structured curricula based on objective parameters. These may improve the accreditation of new surgeons, as well as optimize their already overloaded training schedules.
Abstract:
Following the success achieved in previous research projects using non-destructive methods to estimate the physical and mechanical aging of particle and fibre boards, this paper studies the relationships between aging and physical and mechanical changes, using non-destructive measurements of oriented strand board (OSB). 184 pieces of OSB from a French source were tested to analyze their actual physical and mechanical properties. The same properties were estimated using acoustic non-destructive methods (ultrasound and stress wave velocity) during a physical laboratory aging test. Propagation wave velocity was recorded with the sensors aligned, edge to edge, and at an angle of 45 degrees with both sensors on the same face of the board, because aligned measurements are not possible on site. The velocity results are always higher for the 45-degree measurements. From the results of the statistical analysis, it can be concluded that there is a strong relationship between the acoustic measurements and the decline in the physical and mechanical properties of the panels due to aging. The authors propose several models to estimate the physical and mechanical properties of the boards, as well as their degree of aging. The best results are obtained using ultrasound, although the difference in comparison with the stress wave method is not very significant. A reliable prediction of the degree of deterioration (aging) of the boards is presented.
Abstract:
This paper studies the relationship between aging, physical changes and the results of non-destructive testing of plywood. 176 pieces of plywood were tested to analyze their actual and estimated density using non-destructive methods (screw withdrawal force and ultrasound wave velocity) during a laboratory aging test. From the results of the statistical analysis, it can be concluded that there is a strong relationship between the non-destructive measurements carried out and the decline in the physical properties of the panels due to aging. The authors propose several models to estimate board density. The best results are obtained with ultrasound. A reliable prediction of the degree of deterioration (aging) of the boards is presented.
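The kind of estimation model proposed in these two board studies can be sketched as a simple regression of a physical property on a non-destructive measurement. All numbers below are invented for illustration (a synthetic velocity–density law), not the papers' data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical example: estimate board density (kg/m^3) from ultrasound
# wave velocity (m/s) with an ordinary least-squares linear model.
n = 176
velocity = rng.uniform(4000, 6000, n)
density = 0.12 * velocity + 50 + rng.normal(0, 30, n)   # synthetic relation

X = np.column_stack([np.ones(n), velocity])
beta, *_ = np.linalg.lstsq(X, density, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, density)[0, 1]   # goodness of the fit
```

Fitting such a model on sound boards and applying it during an aging test is one way to quantify the "strong relationship" the statistical analysis reports: the drop in measured velocity translates directly into a predicted drop in density.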
Abstract:
Ontologies and taxonomies are widely used to organize concepts, providing the basis for activities such as indexing and serving as background knowledge for NLP tasks. As such, the translation of these resources would prove useful for adapting these systems to new languages. However, we show that the nature of these resources is significantly different from the "free-text" paradigm used to train most statistical machine translation systems. In particular, we see significant differences in the linguistic nature of these resources, and such resources have rich additional semantics. We demonstrate that as a result of these linguistic differences, standard SMT methods, in particular evaluation metrics, can produce poor performance. We then look to the task of leveraging these semantics for translation, which we approach in three ways: by adapting the translation system to the domain of the resource; by examining whether semantics can help to predict the syntactic structure used in translation; and by evaluating whether we can use existing translated taxonomies to disambiguate translations. We present some early results from these experiments, which shed light on the degree of success we may have with each approach.
Abstract:
At present there is much literature on the advantages and disadvantages of different methods of statistical and dynamical downscaling of climate variables projected by climate models. Less attention has been paid to other, indirect variables, like runoff, which play a significant role in evaluating the impact of climate change on hydrological systems. Runoff presents a much greater bias in climate models than other climate variables, like temperature or precipitation. It is therefore very important to identify the methods that minimize bias when downscaling runoff from the gridded results of climate models to the basin scale.
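One widely used statistical bias-correction technique in this setting is quantile mapping: each modelled value is mapped onto the observed distribution through the two empirical CDFs. The sketch below is illustrative only, with synthetic "observed" and "modelled" runoff series (the model is deliberately biased high and too variable); it is not tied to any particular study in this listing.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily runoff: observations and a biased model series.
obs = rng.gamma(2.0, 10.0, 3000)
model = rng.gamma(2.0, 10.0, 3000) * 1.4 + 5.0   # biased high, over-dispersed

# Empirical quantiles define the transfer function.
qs = np.linspace(0, 1, 101)
model_q = np.quantile(model, qs)
obs_q = np.quantile(obs, qs)

def correct(x):
    """Quantile-map model values onto the observed distribution."""
    p = np.interp(x, model_q, qs)    # probability under the model CDF
    return np.interp(p, qs, obs_q)   # inverse of the observed CDF

corrected = correct(model)
bias_raw = model.mean() - obs.mean()
bias_cor = corrected.mean() - obs.mean()
```

The corrected series reproduces the observed distribution almost exactly, which is why quantile-type mappings are among the candidates to examine when the goal is minimizing bias in downscaled runoff.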
Abstract:
Pre-operative planning has become an essential task in highly complex surgeries and therapies, especially those involving soft tissue. One example where pre-operative planning is of great interest is liver surgery. Such planning comprises the accurate detection and identification of individual lesions and vessels, as well as the correct segmentation and volumetric estimation of the functional liver. This process is very important because it determines both whether the patient is a suitable candidate for surgical therapy and the approach to be followed in the procedure. Soft-tissue radiotherapy is a second example where planning is required, for both conventional external radiotherapy and intraoperative radiotherapy. Planning comprises the segmentation of the tumour and vulnerable organs and the estimation of the dosimetry. Functional liver segmentation and volumetric estimation for surgical planning are usually obtained from computed tomography (CT) images. Likewise, in radiotherapy planning, radiation targets are normally delineated on CT. However, advances in magnetic resonance (MR) imaging technologies are progressively offering additional advantages. For example, the detection rate of hepatic metastases has been found to be significantly higher in Gd–EOB–DTPA-enhanced MR than in CT. Therefore, recent studies have highlighted the importance of combining CT and MR information to achieve the highest possible level of accuracy in radiotherapy and to facilitate an accurate description of liver lesions. In order to improve pre-operative planning in both scenarios, a non-rigid image registration algorithm is clearly required. However, the vast majority of commercial systems only provide rigid registration methods.
Voxel intensity measures have proved to be robust image similarity criteria and, among them, Mutual Information (MI) is always the first choice in multimodal registration. However, one of the main problems of MI is the absence of spatial information and the assumption that the statistical relationships between the images are homogeneous throughout their whole domain. The hypothesis of this thesis is that incorporating spatial organ information into the registration process can improve its robustness and quality, taking advantage of the availability of clinical segmentations. In this work, a 3D non-rigid multimodal registration scheme using a new metric called the Organ-Focused Mutual Information metric (OF-MI) is proposed and validated, and compared with the classical formulation of Mutual Information. This improves registration results in problematic areas by incorporating regional information into the similarity criterion, taking advantage of the actual availability of segmentations in standard clinical protocols, and allowing the statistical dependence between the two image modalities to differ between organs or regions. The proposed method has been applied to the registration of CT and Gd–EOB–DTPA-enhanced MR, as well as to the registration of CT and MR images for rectal intraoperative radiotherapy planning. Additionally, a supporting 3D segmentation algorithm based on Level-Sets has been developed to incorporate the organ information into the registration. The segmentation algorithm has been specifically designed for the volumetric estimation of healthy functional liver and has shown good performance on a set of abdominal CT images.
The results show a statistically significant improvement of OF-MI compared with classical Mutual Information in the registration quality measures, both with simulated data (p<0.001) and with real data, in hepatic registration of CT and Gd–EOB–DTPA-enhanced MR and in registration for rectal radiotherapy planning using multi-organ OF-MI (p<0.05). Additionally, OF-MI gives more stable results with less dispersion than Mutual Information, and more robust behaviour with respect to changes in the signal-to-noise ratio and to parameter variation. The OF-MI metric proposed in this thesis always shows equal or better accuracy than classical Mutual Information and can therefore be a very good alternative in applications where the robustness of the method and the ease of parameter selection are particularly important. Abstract Pre-operative planning has become an essential task in complex surgeries and therapies, especially for those affecting soft tissue. One example where soft tissue pre-operative planning is of high interest is liver surgery. It involves the accurate detection and identification of individual liver lesions and vessels as well as the proper functional liver segmentation and volume estimation. This process is very important because it determines whether the patient is a suitable candidate for surgical therapy and the type of procedure. Soft tissue radiation therapy is a second example where planning is required for both conventional external and intraoperative radiotherapy. It involves the segmentation of the tumor target and vulnerable organs and the estimation of the planned dose. Functional liver segmentations and volume estimations for surgery planning are commonly estimated from computed tomography (CT) images. Similarly, in radiation therapy planning, targets to be irradiated and healthy and vulnerable tissues to be protected from irradiation are commonly delineated on CT scans.
However, developments in magnetic resonance imaging (MRI) technology are progressively offering advantages. For instance, the hepatic metastasis detection rate has been found to be significantly higher in Gd–EOB–DTPA-enhanced MRI than in CT. Therefore, recent studies highlight the importance of combining the information from CT and MRI to achieve the highest level of accuracy in radiotherapy and to facilitate accurate liver lesion description. In order to improve those two soft tissue pre-operative planning scenarios, an accurate nonrigid image registration algorithm is clearly required. However, the vast majority of commercial systems only provide rigid registration. Voxel intensity measures have been shown to be robust measures of image similarity, and among them, Mutual Information (MI) is always the first candidate in multimodal registrations. However, one of the main drawbacks of Mutual Information is the absence of spatial information and the assumption that statistical relationships between images are the same over the whole domain of the image. The hypothesis of the present thesis is that incorporating spatial organ information into the registration process may improve the registration robustness and quality, taking advantage of the availability of clinical segmentations. In this work, a multimodal nonrigid 3D registration framework using a new Organ-Focused Mutual Information metric (OF-MI) is proposed, validated and compared to the classical formulation of the Mutual Information (MI). It improves registration results in problematic areas by adding regional information to the similarity criterion, taking advantage of the actual availability of segmentations in standard clinical protocols, and allowing the statistical dependence between the two modalities to differ among organs or regions.
The proposed method is applied to CT and T1-weighted delayed Gd–EOB–DTPA-enhanced MRI registration, as well as to the registration of CT and MRI images in rectal intraoperative radiotherapy planning. Additionally, a 3D support segmentation algorithm based on Level-Sets has been developed for the incorporation of the organ information into the registration. The segmentation algorithm has been specifically designed for healthy and functional liver volume estimation, demonstrating good performance on a set of abdominal CT studies. Results show a statistically significant improvement in registration quality measures with OF-MI compared to MI, with both simulated data (p<0.001) and real data, in liver applications registering CT and Gd–EOB–DTPA-enhanced MRI and in registration for rectal radiotherapy planning using multi-organ OF-MI (p<0.05). Additionally, OF-MI presents more stable results with smaller dispersion than MI and a more robust behavior with respect to SNR changes and parameter variation. The proposed OF-MI always presents equal or better accuracy than the classical MI and can consequently be a very convenient alternative in applications where the robustness of the method and the ease of parameter selection are particularly important.
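The core idea, that the CT–MR intensity relationship may differ per organ, can be demonstrated numerically. The sketch below is a hedged toy version of the metric, not the thesis's OF-MI formulation: it computes histogram-based mutual information globally, and then separately inside each region of a synthetic segmentation mask, for a pair of synthetic "CT" and "MR" images whose intensity mapping flips between regions.

```python
import numpy as np

rng = np.random.default_rng(6)

def mutual_information(a, b, bins=32):
    """MI (in nats) from the joint intensity histogram of two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum()

def region_focused_mi(a, b, mask, bins=32):
    """Toy organ-focused variant: evaluate MI separately per region."""
    return sum(mutual_information(a[mask == r], b[mask == r], bins)
               for r in np.unique(mask))

# Synthetic "CT" and "MR": the intensity mapping differs between the two
# halves of the image, as it may between organs in real multimodal data.
ct = rng.normal(0, 1, (64, 64))
mask = np.zeros((64, 64), int)
mask[:, 32:] = 1
mr = np.where(mask == 0,
              2.0 * ct + 0.1 * rng.normal(0, 1, ct.shape),
              -3.0 * ct + 0.1 * rng.normal(0, 1, ct.shape))

mi_global = mutual_information(ct, mr)
mi_region = region_focused_mi(ct, mr, mask)
```

Globally, the two conflicting mappings blur the joint histogram and cap the measurable dependence, whereas the per-region evaluation sees a nearly deterministic relationship inside each region; this is the effect the organ-focused metric exploits during registration.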
Abstract:
Two different methods to reduce the noise power in the far-field pattern of an antenna as measured in cylindrical near-field (CNF) are proposed. Both methods are based on the same principle: the data recorded in the CNF measurement, assumed to be corrupted by white Gaussian and space-stationary noise, are transformed into a new domain where it is possible to filter out a portion of the noise. The filtered data are then used to calculate a far-field pattern with less noise power than the one obtained from the measured data without applying any filtering. Statistical analyses are carried out to derive expressions for the signal-to-noise ratio improvement achieved with each method. Although the two alternatives share the same underlying idea, there are important differences between them. The first applies modal filtering, requires oversampling and improves the far-field pattern in all directions. The second employs spatial filtering on the antenna plane, does not require oversampling, and improves the far-field pattern only in the forward hemisphere. Several examples are presented using both simulated and measured near-field data to verify the effectiveness of the methods.
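The shared principle, transforming noisy measured data into a domain where the signal occupies few coefficients and filtering the rest, can be illustrated with a 1-D analogue. This is not the paper's CNF formulation: it is a generic modal (Fourier) filter applied to a synthetic band-limited "measurement" corrupted by white Gaussian noise, with invented mode counts.

```python
import numpy as np

rng = np.random.default_rng(7)

# Band-limited synthetic signal: only the first n_modes Fourier modes
# are occupied, mimicking a field whose modal content is known a priori.
N, n_modes = 1024, 20
coeffs = rng.normal(0, 1, n_modes) + 1j * rng.normal(0, 1, n_modes)
spectrum = np.zeros(N, complex)
spectrum[:n_modes] = coeffs
signal = np.fft.ifft(spectrum).real * N
noisy = signal + rng.normal(0, 0.5, N)            # white Gaussian noise

# Modal filtering: zero every coefficient the signal cannot occupy
# (positive modes >= n_modes and their negative-frequency mirrors).
F = np.fft.fft(noisy)
F[n_modes:N - n_modes + 1] = 0
filtered = np.fft.ifft(F).real

snr = lambda x: 10 * np.log10(np.sum(signal ** 2) / np.sum((x - signal) ** 2))
improvement = snr(filtered) - snr(noisy)          # SNR gain in dB
```

Because the white noise spreads its power uniformly over all modes while the signal lives in a few, discarding the signal-free modes removes most of the noise power without touching the signal, which is the mechanism behind the SNR-improvement expressions derived in the paper.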
Abstract:
There is remarkable and growing concern about quality control at present, which has led to the search for methods capable of effectively addressing reliability analysis as part of statistics. Managers, researchers and engineers must understand that 'statistical thinking' is not just a set of statistical tools. They should start approaching 'statistical thinking' from a 'system' perspective, that is, developing systems that combine specific statistical tools and other methodologies for an activity. The aim of this article is to encourage them (engineers, researchers and managers) to develop this new way of thinking.