933 results for Groundwater flow, Well flow, Analytical solution, Unconfined flow, Imaginary error function
Abstract:
CONTEXT Both subclinical thyroid dysfunction and frailty are common among older individuals, but data on the relationship between these 2 conditions are conflicting. OBJECTIVE The purpose of this study was to assess the cross-sectional and prospective associations between subclinical thyroid dysfunction and frailty and the 5 frailty subdomains (sarcopenia, weakness, slowness, exhaustion, and low activity). SETTING AND DESIGN The Osteoporotic Fractures in Men Study is a prospective cohort study. PARTICIPANTS Men older than 65 years (n = 1455) were classified into 3 groups of thyroid status: subclinical hyperthyroidism (n = 26, 1.8%), subclinical hypothyroidism (n = 102, 7.0%), and euthyroidism (n = 1327, 91.2%). MAIN OUTCOME MEASURES Frailty was defined using a slightly modified Cardiovascular Health Study Index: men with 3 or more criteria were considered frail, men with 1 to 2 criteria were considered intermediately frail, and men with no criteria were considered robust. We assessed the cross-sectional relationship between baseline thyroid function and the 3 categories of frailty status (robust/intermediate/frail) as well as the prospective association between baseline thyroid function and subsequent frailty status and mortality after a 5-year follow-up. RESULTS At baseline, compared with euthyroid participants, men with subclinical hyperthyroidism had an increased likelihood of greater frailty status (adjusted odds ratio, 2.48; 95% confidence interval, 1.15-5.34), particularly among men aged <74 years at baseline (odds ratio for frailty, 3.63; 95% confidence interval, 1.21-10.88). After 5 years of follow-up, baseline subclinical hypothyroidism and hyperthyroidism were not consistently associated with overall frailty status or frailty components. CONCLUSION Among community-dwelling older men, subclinical hyperthyroidism, but not subclinical hypothyroidism, is associated with increased odds of prevalent but not incident frailty.
Abstract:
SOMS is a general surrogate-based multistart algorithm, used in combination with any local optimizer to find global optima of computationally expensive functions with multiple local minima. SOMS differs from previous multistart methods in that the multistart algorithm uses a surrogate approximation to reduce the number of function evaluations needed to identify the most promising points from which to start each nonlinear programming local search. SOMS's numerical results are compared with four well-known methods, namely Multi-Level Single Linkage (MLSL), MATLAB's MultiStart, MATLAB's GlobalSearch, and GLOBAL. In addition, we propose a class of wavy test functions that mimic the wavy nature of objective functions arising in many black-box simulations. Extensive comparisons of algorithms on the wavy test functions and on earlier standard global-optimization test functions are carried out for a total of 19 different test problems. The numerical results indicate that SOMS performs favorably in comparison to alternative methods and does especially well on wavy functions when the number of allowed function evaluations is limited.
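As a rough illustration of the surrogate-assisted multistart idea described above, here is a minimal Python sketch. It is not the authors' SOMS implementation; the function name, parameters, and the RBF surrogate are assumptions chosen for brevity.

```python
# Minimal sketch of surrogate-assisted multistart (illustrative only).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def surrogate_multistart(f, bounds, n_init=20, n_starts=5, n_candidates=500, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))   # initial design points
    y = np.array([f(x) for x in X])                   # expensive evaluations
    surrogate = RBFInterpolator(X, y)                 # cheap approximation of f
    # Rank many random candidates by surrogate value; start local searches
    # only from the most promising ones, saving expensive evaluations.
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    best_starts = cand[np.argsort(surrogate(cand))[:n_starts]]
    results = [minimize(f, x0, bounds=list(zip(lo, hi))) for x0 in best_starts]
    return min(results, key=lambda r: r.fun)

# Usage: res = surrogate_multistart(lambda x: np.sum(x**2), [(-5, 5), (-5, 5)])
```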
Abstract:
In the peripheral sensory nervous system, the neuronal expression of voltage-gated sodium channels (Navs) is crucial for the transmission of nociceptive information, since these channels give rise to the upstroke of the action potential (AP). The Nav family comprises nine isoforms with distinct biophysical properties. Studies of mutations associated with increased or absent pain sensitivity in humans, together with other expression studies, have highlighted Nav1.7, Nav1.8, and Nav1.9 as the most important contributors to the control of nociceptive neuronal electrogenesis. Modulating their expression and/or function can alter the shape of the AP and consequently modify nociceptive transmission, a process observed in persistent pain conditions. Post-translational modification (PTM) of Navs is a well-known process that modifies their expression and function. In chronic pain syndromes, the release of inflammatory molecules into the direct environment of dorsal root ganglia (DRG) sensory neurons leads to abnormal activation of enzymes that induce Nav PTM. The addition of small molecules, i.e., peptides, phosphoryl groups, ubiquitin moieties and/or carbohydrates, can modify the function of Navs in two different ways: via direct physical interference with Nav gating, or via the control of Nav trafficking. Both mechanisms have a profound impact on neuronal excitability. In this review we discuss the roles of protein kinases A, B, and C, mitogen-activated protein kinases, and Ca2+/calmodulin-dependent kinase II in peripheral chronic pain syndromes. We also discuss more recent findings that ubiquitination of Nav1.7 by Nedd4-2 and the effect of methylglyoxal on Nav1.8 are implicated in the development of experimental neuropathic pain. We address the potential roles of other PTMs in chronic pain and highlight the need for further investigation of Nav PTMs in order to develop new pharmacological tools to alleviate pain.
Abstract:
Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect that individual investigational sites enrolling small numbers of patients may have on the overall result. Can the presence of small centers cause an ineffective treatment to appear effective when the treatment-by-center interaction is not statistically significant? In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of these 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170, and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions. Standard analysis-of-variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment effect, center effect, and treatment-by-center interaction; another investigates treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error. We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
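A sketch of the kind of simulation described above, under assumptions made here for illustration (20 centers, 12 of them small, a null treatment effect, and a two-way ANOVA with interaction); this is not the author's code or exact design:

```python
# Simulate a 20-center, two-arm trial and test treatment-by-center interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = []
for center in range(20):
    n_per_arm = 2 if center < 12 else 10      # 12 small centers (assumed sizes)
    effect = 0.0                              # ineffective treatment scenario
    for arm in (0, 1):
        y = rng.normal(arm * effect, 1.0, n_per_arm)
        rows += [dict(center=center, arm=arm, y=v) for v in y]
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(arm) * C(center)", data=df).fit()
print(anova_lm(model))   # F-tests for treatment, center, and their interaction
```

Repeating such runs many times and counting significant results is what yields the empirical power and type I error rates the abstract reports on.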
Abstract:
Cyclic fluctuations of the atmospheric temperature at the dam site, of the water temperature in the reservoir, and of the intensity of solar radiation on the faces of the dam cause significant stresses in the body of concrete dams. These stresses can be evaluated first by introducing into the analysis models a linear temperature distribution statically equivalent to the real temperature distribution in the dam; the stress values obtained from this first step must then be complemented (especially near the dam faces) with the stress values resulting from the difference between the real temperature law and the linear law at each node. In the case of arch gravity dams, because they combine the characteristics of an arch dam with a thick section, both types of temperature-induced stresses are of similar importance. Thermal stress values are directly linked to a series of factors: atmospheric and water temperature and intensity of solar radiation at the dam site, site latitude, azimuth of the dam, as well as the geometrical characteristics of the dam and the thermal properties of concrete. This thesis first presents a complete study of the physical phenomenon of heat exchange between the environment and the dam itself, and establishes the participation scheme of all parameters involved in the problem considered. A detailed documentary review of available methods and techniques is then carried out, both for the estimation of environmental thermal loads and for the evaluation of the stresses induced by these loads. Variation ranges are also established for the main parameters. The definition of the geometrical parameters of the dam is based on the description of a wide set of arch gravity dams built in Spain and abroad. As a practical reference for the parameters defining the thermal action of the environment, a set of zones in which the thermal parameters take homogeneous values was established for Spain. The mean value and variation range of the atmospheric temperature were then determined for each zone from series of historical values. Summer and winter temperature increases caused by solar radiation were also defined for each zone. Since the hypothesis of thermal stratification in the reservoir was adopted, the maximum and minimum temperatures reached at the bottom of the reservoir were determined for each climatic zone, as well as the law of temperature variation as a function of depth. Various dam-and-foundation configurations were analysed by means of 3D finite element models, each subjected to different load combinations. The seasonal thermal behaviour of sections of variable thickness was analysed by applying numerical techniques to one-dimensional models. Contrasting the results of both analyses led to detailed, parameterized conclusions on the influence of environmental thermal action on the stress conditions of the structure.
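As background for the one-dimensional seasonal analysis mentioned above, the textbook solution for a surface temperature oscillating with angular frequency ω acting on a semi-infinite solid of thermal diffusivity α (a standard conduction result, not a formula taken from the thesis) is:

```latex
% Periodic surface temperature penetrating a semi-infinite solid:
% T_m = mean temperature, A = surface amplitude, omega = 2*pi / period.
T(x,t) = T_m + A \, e^{-x\sqrt{\omega/2\alpha}}
         \cos\!\left(\omega t - x\sqrt{\frac{\omega}{2\alpha}}\right)
```

The exponential decay with depth x is why thick arch-gravity sections respond so differently near the faces than in the core.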
Abstract:
Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size, and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis focuses on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects in professional settings. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiencies, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white, and several light colors, and secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be resolved in the optical design. However, there is no need for perfect uniformity in the spot light, owing to thresholds in the visual perception of the human eye. Therefore, a mathematical description of the level of color uniformity with regard to visual perception is required. This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided the research, Chapter 2 introduces the scientific basics of color uniformity in spotlights, including: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires different handling of the data, along with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human-factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first human-factor experiment yielded the perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic and weighted functions of the spatial color distribution, leading to the Usl function. The second human-factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function, concerning its application range and conditions as well as its limitations and restrictions, is carried out in Chapter 5. Measured and simulated data of several optical systems are compared, and fields of application are discussed along with validations and restrictions of the function. Chapter 6 presents the design of spotlight systems and their optimization. An evaluation shows the analysis of reflector-based and TIR lens systems. The simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. It was found that no single system performs best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of the thesis and gives an outlook on further investigation topics.
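For reference, the basic CIELAB color difference on which such spatial uniformity metrics build (a standard colorimetric formula, not the thesis's Usl definition) is:

```latex
% CIELAB color difference between two points of the spot pattern
\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}
```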
Abstract:
The ubiquitin-dependent proteolysis of mitotic cyclin B, which is catalyzed by the anaphase-promoting complex/cyclosome (APC/C) and ubiquitin-conjugating enzyme H10 (UbcH10), begins around the time of the metaphase–anaphase transition and continues through G1 phase of the next cell cycle. We have used cell-free systems from mammalian somatic cells collected at different cell cycle stages (G0, G1, S, G2, and M) to investigate the regulated degradation of four targets of the mitotic destruction machinery: cyclins A and B, geminin H (an inhibitor of S phase identified in Xenopus), and Cut2p (an inhibitor of anaphase onset identified in fission yeast). All four are degraded by G1 extracts but not by extracts of S phase cells. Maintenance of destruction during G1 requires the activity of a PP2A-like phosphatase. Destruction of each target depends on the presence of an N-terminal destruction box motif, is accelerated by additional wild-type UbcH10, and is blocked by dominant negative UbcH10. Destruction of each is terminated by a dominant activity that appears in nuclei near the start of S phase. Previous work indicates that the APC/C-dependent destruction of anaphase inhibitors is activated after chromosome alignment at the metaphase plate. In support of this, we show that addition of dominant negative UbcH10 to G1 extracts blocks destruction of the yeast anaphase inhibitor Cut2p in vitro, and that injection of dominant negative UbcH10 blocks anaphase onset in vivo. Finally, we report that injection of dominant negative Ubc3/Cdc34, whose role in G1–S control is well established and which has been implicated in kinetochore function during mitosis in yeast, dramatically interferes with congression of chromosomes to the metaphase plate. These results demonstrate that the regulated ubiquitination and destruction of critical mitotic proteins is highly conserved from yeast to humans.
Abstract:
An empirical model based on constant flux is presented for chloride transport through concrete under atmospheric exposure conditions. A continuous supply of chlorides is assumed as a constant mass flux at the exposed concrete surface. The model is applied to experimental chloride profiles obtained from a real marine structure, and the results are compared with the classical error-function model. The proposed model shows some advantages: it yields a better predictive capacity than the classical error-function model, and the previously observed increases in chloride surface concentration are compatible with it. Nevertheless, the predictive capacity of the model can fail if the concrete microstructure changes with time. The model seems appropriate for well-matured concretes exposed to a marine environment under atmospheric conditions.
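For comparison, the two boundary conditions referenced above lead to standard solutions of Fick's second law (classical diffusion-theory results written here in generic notation; the paper's own model may differ in detail). The constant-flux case is where the integral of the complementary error function, ierfc, enters:

```latex
% Classical constant-surface-concentration (error-function) model:
C(x,t) = C_s \,\operatorname{erfc}\!\left(\frac{x}{2\sqrt{Dt}}\right)

% Constant-surface-flux model (F = surface mass flux, D = diffusion coeff.):
C(x,t) = 2F\sqrt{t/D}\;\operatorname{ierfc}\!\left(\frac{x}{2\sqrt{Dt}}\right),
\qquad
\operatorname{ierfc}(z) = \frac{e^{-z^{2}}}{\sqrt{\pi}} - z\,\operatorname{erfc}(z)
```

Under the flux boundary condition, the surface concentration C(0,t) grows in proportion to √t, which is consistent with the surface-concentration increases the abstract mentions.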
Abstract:
A new approach based on nonlocal density functional theory is proposed to determine the pore size distribution (PSD) of activated carbons and the energetic heterogeneity of the pore walls. The energetic heterogeneity is modeled with an energy distribution function (EDF) describing the distribution of the solid-fluid potential well depth (this distribution is a Dirac delta function for an energetically homogeneous surface). The approach allows the simultaneous determination of the PSD (assuming slit-shaped pores) and the EDF from nitrogen or argon isotherms at their respective boiling points, using a set of local isotherms calculated for a range of pore widths and solid-fluid potential well depths. It is found that the structure of the pore wall surface differs significantly from that of graphitized carbon black. This could be attributed to defects in the crystalline structure of the surface, active oxide centers, the finite size of the pore walls (in either wall thickness or pore length), and so forth. Those factors depend on the precursor and on the carbonization and activation process, and hence provide a fingerprint for each adsorbent. The approach allows very accurate correlation of the experimental adsorption isotherm and leads to PSDs that are simpler and more realistic than those obtained with the original nonlocal density functional theory.
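The inversion described above has the form of a generalized adsorption integral equation; in notation chosen here purely for illustration (H = pore width, ε = solid-fluid well depth), the measured isotherm is matched by a weighted superposition of the computed local isotherms:

```latex
% Generalized adsorption integral solved for the joint distribution f(H, eps):
N_{\text{exp}}(P) = \int\!\!\int n_{\text{local}}(P, H, \varepsilon)\,
                    f(H, \varepsilon)\, dH\, d\varepsilon
```

Discretizing H and ε reduces this to a (typically regularized) non-negative least-squares problem for f.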
Abstract:
Minimization of a sum-of-squares or cross-entropy error function leads to network outputs that approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
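A minimal sketch of the Mixture Density Network idea in PyTorch (layer sizes, names, and the single hidden layer are assumptions for brevity, not the paper's implementation): the network emits mixing coefficients, means, and widths of a Gaussian mixture, and training minimizes the negative log-likelihood.

```python
# Minimal Mixture Density Network sketch (illustrative only).
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_in, n_hidden, n_components):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)          # mixing coefficients
        self.mu = nn.Linear(n_hidden, n_components)          # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)   # log std devs

    def forward(self, x):
        h = self.hidden(x)
        return torch.log_softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h)

def mdn_nll(log_pi, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted Gaussian mixture.
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))                # per-component
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Usage sketch: model = MDN(1, 32, 3); loss = mdn_nll(*model(x), y)
```

Because the full conditional density is modelled, multi-valued targets appear as multiple mixture components rather than being averaged away.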
Abstract:
It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.
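In the sum-of-squares case, the reduced first-derivative regularizer has the familiar Tikhonov form. Written here in standard notation as a sketch of the result (ν denotes the input noise variance, y_k the network outputs, p(x) the input density):

```latex
% Regularized error: noise variance nu scales a first-derivative penalty
\tilde{E} = E + \nu\,\Omega,
\qquad
\Omega = \frac{1}{2} \int \sum_{k}
         \left\lVert \nabla_{x}\, y_{k}(x) \right\rVert^{2} p(x)\, dx
```

Since Ω involves only squared first derivatives of the network mapping, it is non-negative, which is what makes direct minimization safe where the second-derivative form was not.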
Abstract:
Mixture Density Networks (MDNs) are a well-established method for modelling conditional probability densities, which is useful for complex multi-valued functions where regression methods (such as MLPs) fail. In this paper we extend earlier research on a regularisation method for a special case of MDNs to the general case, using evidence-based regularisation, and we show how the Hessian of the MDN error function can be evaluated using R-propagation. The method is tested on two data sets and compared with early stopping.
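R-propagation delivers Hessian-vector products without ever forming the full Hessian. A generic sketch of the same quantity computed via double backpropagation in PyTorch (an equivalent formulation chosen for illustration, not the paper's code):

```python
# Hessian-vector product H v = d/dtheta (grad(E) . v), via two backward passes.
import torch

def hessian_vector_product(loss, params, vec):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

# Usage sketch: hv = hessian_vector_product(loss, list(model.parameters()), v)
```

Repeated products of this kind are all that evidence-based (Bayesian) regularisation needs from the Hessian.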
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
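The variance-estimating networks and the conventional benchmark differ essentially in their assumed noise model. As a generic illustration (not the cited papers' code), the criterion a network minimizes when it predicts both a conditional mean and an input-dependent variance is the Gaussian negative log-likelihood:

```python
# Gaussian NLL with input-dependent variance (heteroscedastic noise model).
import torch

def heteroscedastic_nll(mu, log_var, y):
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

# With log_var held constant this reduces (up to additive constants) to the
# sum-of-squares error used by the conventional benchmark network.
```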
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
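Case (ii) above corresponds to a standard nonlinear least-squares fit. A minimal sketch with assumed variable names and synthetic data (illustrative, not the authors' processing chain):

```python
# Fit a Lorentzian to noisy Brillouin gain data; the extremum position is
# the fitted resonance frequency f0, with its standard error from pcov.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, gamma):
    return a / (1.0 + ((f - f0) / gamma) ** 2)

f = np.linspace(10.6e9, 11.0e9, 41)          # predetermined probe frequencies (Hz)
rng = np.random.default_rng(0)
data = lorentzian(f, 1.0, 10.8e9, 30e6) + rng.normal(0, 0.02, f.size)

popt, pcov = curve_fit(lorentzian, f, data, p0=(1.0, 10.8e9, 40e6))
f0_est, f0_std = popt[1], np.sqrt(pcov[1, 1])
```

The paper's contribution goes beyond such per-fit standard errors: it derives the full closed-form PDF of the fitted-parameter errors from the PDFs of the noisy intensity data.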