998 results for WEIGHTING FUNCTIONS


Relevance: 60.00%

Abstract:

An updated analysis of observed stratospheric temperature variability and trends is presented on the basis of satellite, radiosonde, and lidar observations. Satellite data include measurements from the series of NOAA operational instruments, including the Microwave Sounding Unit covering 1979–2007 and the Stratospheric Sounding Unit (SSU) covering 1979–2005. Radiosonde results are compared for six different data sets, incorporating a variety of homogeneity adjustments to account for changes in instrumentation and observational practices. Temperature changes in the lower stratosphere show cooling of 0.5 K/decade over much of the globe for 1979–2007, with some differences in detail among the different radiosonde and satellite data sets. Substantially larger cooling trends are observed in the Antarctic lower stratosphere during spring and summer, in association with development of the Antarctic ozone hole. Trends in the lower stratosphere derived from radiosonde data are also analyzed for a longer record (back to 1958); trends for the presatellite era (1958–1978) have a large range among the different homogenized data sets, implying large trend uncertainties. Trends in the middle and upper stratosphere have been derived from updated SSU data, taking into account changes in the SSU weighting functions due to observed atmospheric CO2 increases. The results show mean cooling of 0.5–1.5 K/decade during 1979–2005, with the greatest cooling in the upper stratosphere near 40–50 km. Temperature anomalies throughout the stratosphere were relatively constant during the decade 1995–2005. Long records of lidar temperature measurements at a few locations show reasonable agreement with SSU trends, although sampling uncertainties are large in the localized lidar measurements. Updated estimates of the solar cycle influence on stratospheric temperatures show a statistically significant signal in the tropics (30N–S), with an amplitude (solar maximum minus solar minimum) of 0.5 K (lower stratosphere) to 1.0 K (upper stratosphere).
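
The trend and solar-cycle numbers quoted above come from regression analyses of deseasonalized anomaly time series. As a minimal sketch of that machinery (not the paper's actual regression, which also handles autocorrelation, volcanic terms and data gaps), the following assumes hypothetical monthly anomalies and a solar proxy:

```python
import numpy as np

def decadal_trend(anomalies, solar_proxy):
    """Estimate a linear trend (K/decade) and a solar-cycle coefficient
    from monthly temperature anomalies via ordinary least squares.

    anomalies   : monthly mean temperature anomalies (K)
    solar_proxy : normalized solar index (e.g. F10.7), same length
    """
    t = np.arange(len(anomalies)) / 120.0              # time in decades
    X = np.column_stack([np.ones_like(t), t, solar_proxy])  # intercept, trend, solar
    beta, *_ = np.linalg.lstsq(X, anomalies, rcond=None)
    return beta[1], beta[2]                            # K/decade, K per unit proxy

# Synthetic check: -0.5 K/decade cooling plus a 0.8 K solar signal
rng = np.random.default_rng(0)
t = np.arange(29 * 12)
solar = 0.5 * (1 + np.sin(2 * np.pi * t / (11 * 12)))  # ~11-year cycle, 0..1
y = -0.5 * t / 120 + 0.8 * solar + rng.normal(0, 0.2, t.size)
print(decadal_trend(y, solar))  # approximately (-0.5, 0.8)
```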

Relevance: 60.00%

Abstract:

A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
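
The convective/stratiform blending can be pictured as a convex combination whose weight derives from a ratio of time scales. Below is a minimal sketch under assumed names and an assumed logistic weight; the paper's actual weighting functions are built from its own dynamical and microphysical time scales:

```python
import numpy as np

def blended_precip(p_conv, p_strat, tau_conv, tau_advect):
    """Blend purely convective and purely stratiform precipitation
    diagnostics with a weight based on a ratio of time scales.

    The logistic form of the weight is an illustrative assumption:
    when convective overturning is fast relative to advection across
    the terrain, the convective diagnostic dominates, and vice versa.
    """
    ratio = tau_advect / tau_conv    # large ratio -> convection has time to develop
    w_conv = ratio / (1.0 + ratio)   # weight in [0, 1)
    return w_conv * p_conv + (1.0 - w_conv) * p_strat

# Example: area-averaged rates (mm/h) for the two limiting regimes
print(blended_precip(p_conv=4.0, p_strat=1.5, tau_conv=600.0, tau_advect=3000.0))
```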

Relevance: 60.00%

Abstract:

A method is proposed for merging different nadir-sounding climate data records using measurements from high-resolution limb sounders to provide a transfer function between the different nadir measurements. The two nadir-sounding records need not be overlapping so long as the limb-sounding record bridges between them. The method is applied to global-mean stratospheric temperatures from the NOAA Climate Data Records based on the Stratospheric Sounding Unit (SSU) and the Advanced Microwave Sounding Unit-A (AMSU), extending the SSU record forward in time to yield a continuous data set from 1979 to present, and providing a simple framework for extending the SSU record into the future using AMSU. SSU and AMSU are bridged using temperature measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is of high enough vertical resolution to accurately represent the weighting functions of both SSU and AMSU. For this application, a purely statistical approach is not viable since the different nadir channels are not sufficiently linearly independent, statistically speaking. The near-global-mean linear temperature trends for extended SSU for 1980–2012 are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade⁻¹ (95 % confidence) for channels 1, 2 and 3, respectively. The extended SSU temperature changes are in good agreement with those from the Microwave Limb Sounder (MLS) on the Aura satellite, with both exhibiting a cooling trend of ~0.6 ± 0.3 K decade⁻¹ in the upper stratosphere from 2004 to 2012. The extended SSU record is found to be in agreement with high-top coupled atmosphere–ocean models over the 1980–2012 period, including the continued cooling over the first decade of the 21st century.
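
The bridging idea can be sketched as follows: apply each nadir channel's weighting function to the limb-sounder temperature profiles to get synthetic nadir brightness temperatures, then use each record's mean departure from its synthetic counterpart to place the two records on a common scale. This is a deliberate simplification of the published method (which also treats noise, sampling and instrument drift); all names and interfaces are illustrative:

```python
import numpy as np

def synthesize_channel(T_profiles, weights, dz):
    """Integrate limb-sounder temperature profiles (shape [time, levels])
    against a nadir channel weighting function (shape [levels]) to obtain
    synthetic nadir brightness temperatures."""
    w = weights / np.trapz(weights, dx=dz)   # normalize weighting function
    return np.trapz(T_profiles * w, dx=dz, axis=1)

def bridge_offset(nadir_a, syn_a, nadir_b, syn_b):
    """Offset that places record B on record A's scale, using each record's
    mean departure from its limb-derived synthetic series over the limb
    period (all four inputs restricted to that period)."""
    return np.mean(nadir_a - syn_a) - np.mean(nadir_b - syn_b)

# merged = np.concatenate([ssu_series, amsu_series + bridge_offset(...)])
```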

Relevance: 60.00%

Abstract:

Risk attitudes are known to be sensitive to large variations in stakes, but little is known about their sensitivity to moderate variations. This matters for studies that compare risk attitudes between countries or over time. I find that stake variations of ±20% affect only utility, while larger variations may also affect probability weighting. Surprisingly, the effect on weighting functions is larger for losses than for gains. It is also more pronounced for risk than for uncertainty.
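
For readers outside this literature, a standard one-parameter probability weighting function is the Tversky–Kahneman (1992) form

$$ w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}, \qquad 0 < \gamma \le 1, $$

where γ < 1 produces the characteristic inverse-S shape: small probabilities are overweighted and moderate-to-large ones underweighted. Whether this particular parametric family is the one estimated here is not stated in the abstract; in such families, stake effects on probability weighting appear as shifts in the estimated curvature parameter.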

Relevance: 60.00%

Abstract:

The Boundary Element Method (BEM) is a discretisation technique for solving partial differential equations which, for certain problems, offers important advantages over domain techniques. Despite the large reduction in CPU time that can be achieved, some 3D problems remain untreatable today because of the extremely large number of degrees of freedom (dof) involved in the boundary description. Model reduction seems to be an appealing choice for both accurate and efficient numerical simulation. However, in the BEM a reduction in the number of degrees of freedom does not imply a significant reduction in CPU time, because in this technique most of the computing time is spent constructing the discrete system of equations. A reduction in the number of weighting functions as well therefore seems to be a key point for efficient boundary element simulations.
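
The point can be made concrete with a sketch: if only the trial space is reduced, every row of the dense influence matrix must still be assembled; reducing the set of weighting (test) functions cuts the number of assembled rows, which is where the time actually goes. A collocation-flavoured illustration under assumed interfaces, not a production BEM code:

```python
import numpy as np

def reduced_bem_solve(assemble_row, V, row_ids):
    """Doubly reduced collocation-BEM solve (sketch).

    assemble_row(i) -> (h_i, b_i): row i of the BEM influence matrix and
    the corresponding right-hand-side entry (one boundary integral per
    weighting function). Restricting assembly to `row_ids` (fewer
    weighting functions) is where the CPU time is saved, since building
    the dense system dominates the cost; V (n_dof x m) is a reduced
    basis for the unknown, so only a small k x m system remains.
    """
    rows = [assemble_row(i) for i in row_ids]
    A = np.array([h for h, _ in rows]) @ V   # k x m, with k, m << n_dof
    b = np.array([b for _, b in rows])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V @ coeffs                        # approximate boundary solution
```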

Relevance: 60.00%

Abstract:

Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LED lighting offers advantages in efficiency, energy consumption, design, size and light quality, and after more than 50 years of research on LED improvements its relevance for illumination is growing rapidly. This thesis focuses on one important field of application: spotlights. Spotlights are used to focus light onto defined areas and onto outstanding objects under professional conditions. Such high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiency, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors, and secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source with optical elements can cause chromatic inhomogeneities in the spatial and angular light distribution, which must be resolved in the optical design. Perfect uniformity of the spot is not required, however, because of thresholds in the visual perception of the human eye. A mathematical description of the level of color uniformity with respect to visual perception is therefore needed.

The thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided this research, Chapter 2 introduces the scientific basics of color uniformity in spotlights: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in evaluating color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution over a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function addresses different factors that influence vision and requires its own handling of the data, together with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for estimating the perceived color uniformity of spotlights is derived. It is based on two sets of human-factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first experiment yielded a perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic functions and the weighted function of the spatial color distribution, leading to the Usl function. The second experiment tested the perception of spotlights under varied ambient conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals can be replaced by a standardized merit function. Chapter 5 presents the validation of the Usl function with respect to its application range and conditions as well as its limitations and restrictions; measured and simulated data from several optical systems are compared, and fields of application, validations and restrictions of the function are discussed. Chapter 6 presents spotlight system designs and their optimization, evaluating reflector-based and TIR-lens systems. The simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. No single system performed best in all categories, and excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of the thesis and gives an outlook on further research topics.
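
Two of the basic descriptors from Chapter 3 are easy to state concretely. Below is a minimal sketch in CIELAB, deliberately leaving out the luminance weighting, contrast-sensitivity filtering and experimental calibration that the full Usl applies:

```python
import numpy as np

def color_uniformity_descriptors(lab):
    """Two basic descriptors for a spot pattern given per-point CIELAB
    values (array of shape [n_points, 3]): the maximum color difference
    and the average color deviation, both measured as the Euclidean
    distance Delta E*ab from the pattern's mean color."""
    mean_lab = lab.mean(axis=0)
    delta_e = np.linalg.norm(lab - mean_lab, axis=1)  # Delta E*ab per point
    return delta_e.max(), delta_e.mean()

# Example: a nearly uniform warm-white spot with one chromatic artifact
lab = np.tile([80.0, 5.0, 12.0], (100, 1))
lab[7] = [78.0, 12.0, 4.0]   # colored-shadow region
print(color_uniformity_descriptors(lab))
```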

Relevance: 60.00%

Abstract:

At the level of the cochlear nucleus (CN), the auditory pathway divides into several parallel circuits, each of which provides a different representation of the acoustic signal. Here, the representation of the power spectrum of an acoustic signal is analyzed for two CN principal cells—chopper neurons of the ventral CN and type IV neurons of the dorsal CN. The analysis is based on a weighting function model that relates the discharge rate of a neuron to first- and second-order transformations of the power spectrum. In chopper neurons, the transformation of spectral level into rate is a linear (i.e., first-order) or nearly linear function. This transformation is a predominantly excitatory process involving multiple frequency components, centered in a narrow frequency range about best frequency, that usually are processed independently of each other. In contrast, type IV neurons encode spectral information linearly only near threshold. At higher stimulus levels, these neurons are strongly inhibited by spectral notches, a behavior that cannot be explained by level transformations of first- or second-order. Type IV weighting functions reveal complex excitatory and inhibitory interactions that involve frequency components spanning a wider range than that seen in choppers. These findings suggest that chopper and type IV neurons form parallel pathways of spectral information transmission that are governed by two different mechanisms. Although choppers use a predominantly linear mechanism to transmit tonotopic representations of spectra, type IV neurons use highly nonlinear processes to signal the presence of wide-band spectral features.
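
The weighting function model referred to here can be written compactly. Up to the paper's notational details, the discharge rate $r$ is expanded around a reference spectrum as

$$ r(\mathbf{S}) \approx r_0 + \sum_i w_i S_i + \sum_{i,j} w_{ij}\, S_i S_j, $$

where $S_i$ is the level of the $i$-th frequency component relative to the reference, the $w_i$ are first-order (linear) weights, and the $w_{ij}$ are second-order weights. In this language, chopper responses are captured by the first-order term with narrowly tuned excitatory $w_i$, whereas type IV responses at high stimulus levels are not captured even with the $w_{ij}$ included.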

Relevance: 60.00%

Abstract:

We introduce a new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known Grey Level Co-occurrence Matrix (GLCM) method. The method deviates significantly from GLCM in that features are extracted, not via a fixed 2D weighting function of co-occurrence matrix elements, but by a variable summation of matrix elements in 3D localized neighborhoods. We subsequently present a new methodology for extracting optimized, highly discriminant features from these localized areas using adaptive Gaussian weighting functions. Genetic Algorithm (GA) optimization is used to produce a set of features whose classification worth is evaluated by discriminatory power and feature correlation considerations. We critically appraised the performance of our method and GLCM in pairwise classification of images from visually similar texture classes, captured from Markov Random Field (MRF) synthesized, natural, and biological origins. In these cross-validated classification trials, our method demonstrated significant benefits over GLCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
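
The core idea, a Gaussian-weighted summation of co-occurrence mass instead of a fixed global statistic, can be sketched for a single 2D case. The published method applies this in 3D multi-scale neighborhoods and tunes the Gaussian parameters by GA; here the parameters are fixed by hand and `graycomatrix` is from scikit-image:

```python
import numpy as np
from skimage.feature import graycomatrix

def gaussian_weighted_glcm_feature(image, center, sigma,
                                   distance=1, angle=0.0, levels=16):
    """Sum GLCM mass under a Gaussian placed at `center` in (i, j)
    co-occurrence space, instead of applying a fixed 2D weighting
    function of matrix elements (contrast, energy, ...)."""
    glcm = graycomatrix(image, [distance], [angle],
                        levels=levels, normed=True)[:, :, 0, 0]
    i, j = np.mgrid[0:levels, 0:levels]
    ci, cj = center
    w = np.exp(-((i - ci) ** 2 + (j - cj) ** 2) / (2.0 * sigma ** 2))
    return float(np.sum(w * glcm))

# Example on a small quantized image
img = np.random.default_rng(1).integers(0, 16, (64, 64)).astype(np.uint8)
print(gaussian_weighted_glcm_feature(img, center=(8, 8), sigma=2.0))
```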

Relevance: 60.00%

Abstract:

This thesis is concerned with the measurement of the characteristics of nonlinear systems by crosscorrelation, using pseudorandom input signals based on m-sequences. The systems are characterised by Volterra series, and analytical expressions relating the rth-order Volterra kernel to r-dimensional crosscorrelation measurements are derived. It is shown that the two-dimensional crosscorrelation measurements are related to the corresponding second-order kernel values by a set of equations which may be structured into a number of independent subsets. The m-sequence properties determine how the maximum order of the subsets for off-diagonal values is related to the upper bound of the arguments for nonzero kernel values. The upper bound of the arguments is used as a performance index, and the performance of antisymmetric pseudorandom binary, ternary and quinary signals is investigated. The performance indices obtained above are small in relation to the periods of the corresponding signals. To achieve higher performance with ternary signals, a method is proposed for combining the estimates of the second-order kernel values so that the effects of some of the undesirable nonzero values in the fourth-order autocorrelation function of the input signal are removed. The identification of the dynamics of two-input, single-output systems with multiplicative nonlinearity is investigated; it is shown that the characteristics of such a system may be determined by crosscorrelation experiments using phase-shifted versions of a common signal as inputs. The effects of nonlinearities on the estimates of system weighting functions obtained by crosscorrelation are also investigated. Results obtained by correlation testing of an industrial process are presented, and the differences between theoretical and experimental results are discussed for this case.
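
In the notation of Volterra series, the system output is

$$ y(t) = h_0 + \sum_{\tau} h_1(\tau)\,x(t-\tau) + \sum_{\tau_1,\tau_2} h_2(\tau_1,\tau_2)\,x(t-\tau_1)\,x(t-\tau_2) + \cdots, $$

and the two-dimensional crosscorrelation measurement is the time average

$$ \phi_{xxy}(\tau_1,\tau_2) = \overline{x(t-\tau_1)\,x(t-\tau_2)\,y(t)}. $$

For an ideally white input this is proportional to $h_2(\tau_1,\tau_2)$; for m-sequence-based inputs, the unwanted nonzero values of the fourth-order autocorrelation of the input couple the estimates together, which is what produces the structured subsets of equations analyzed in the thesis.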

Relevance: 60.00%

Abstract:

Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative-analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual-error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model while information from outliers is down-weighted or removed. Copper concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieves better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. The improved weighting function also showed better overall performance in model robustness and convergence speed than the four known weighting functions it was compared against.
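
The baseline segmented weighting used in weighted LS-SVM (due to Suykens and co-workers) illustrates the shape the paper starts from; the paper's improvement re-segments this function according to the measured spectral statistics. A sketch of the baseline:

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0, floor=1e-4):
    """Classic segmented weighting for weighted LS-SVM: full weight for
    residuals consistent with a normal distribution, linearly decaying
    weight in a transition band, and a small floor for outliers."""
    med = np.median(residuals)
    s_hat = 1.483 * np.median(np.abs(residuals - med))  # robust scale (MAD)
    z = np.abs(residuals / s_hat)
    w = np.ones_like(z)
    band = (z > c1) & (z <= c2)
    w[band] = (c2 - z[band]) / (c2 - c1)
    w[z > c2] = floor
    return w

e = np.array([0.1, -0.2, 0.05, 2.9, -8.0])
print(segmented_weights(e))  # the large residuals receive the floor weight
```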

Relevance: 60.00%

Abstract:

We analyze directional monotonicity of several mixture functions in the direction (1, 1, …, 1), called weak monotonicity. Our particular focus is on power weighting functions and the special cases of Lehmer and Gini means. We establish limits on the number of arguments of these means for which they are weakly monotone. These bounds significantly improve the earlier results and hence increase the range of applicability of Gini and Lehmer means. We also discuss the case of affine weighting functions and find the smallest constant which ensures weak monotonicity of such mixture functions.
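
The objects involved are as follows. A mixture function aggregates inputs with weights that depend on the inputs themselves,

$$ M_w(\mathbf{x}) = \frac{\sum_{i=1}^{n} w(x_i)\,x_i}{\sum_{i=1}^{n} w(x_i)}, $$

and weak monotonicity requires only non-decrease along the diagonal direction, $M_w(x_1 + a, \ldots, x_n + a) \ge M_w(x_1, \ldots, x_n)$ for all $a > 0$, which is weaker than monotonicity in each argument separately. The power weighting function $w(t) = t^{q-1}$ gives the Lehmer mean $L_q(\mathbf{x}) = \sum_i x_i^q / \sum_i x_i^{q-1}$, itself a special case of the Gini means.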

Relevance: 60.00%

Abstract:

This paper presents a μ-synthesis H∞ controller for regulating the switching signal of the inverter connected to a three-phase photovoltaic (PV) system. To facilitate the control design, the system is represented as a state-space realization with uncertainties. The control design involves selecting proper weighting functions and performing the synthesis. The controller order is reduced by the Hankel-norm method. Simulations are carried out to evaluate the characteristics of the controller under parametric uncertainties. It is found that the proposed controller is inherently stable, has a significantly small tracking error, and preserves nominal performance, robust stability and robust performance for the grid-connected three-phase PV system.
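
For readers unfamiliar with weighting-function selection in H∞/μ designs, a common first-order performance weight has the loop-shaping form below; the numerical values are placeholders, not those used in the paper. A sketch using the python-control package:

```python
import control as ct

def performance_weight(M=2.0, wb=10.0, A=1e-3):
    """First-order performance weighting function
    Wp(s) = (s/M + wb) / (s + wb*A), in the standard loop-shaping form:
    it bounds the sensitivity below 1/|Wp|, i.e. high-gain disturbance
    rejection up to bandwidth wb, peak M, steady-state error ~A.
    These numbers are placeholders, not the paper's values."""
    return ct.tf([1.0 / M, wb], [1.0, wb * A])

Wp = performance_weight()
Wu = ct.tf([0.1], [1.0])  # constant weight penalizing control effort
print(Wp)
# The weighted closed-loop maps [Wp*S; Wu*K*S] are then assembled into a
# generalized plant and passed to an H-infinity / mu-synthesis routine
# (e.g. control.hinfsyn).
```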

Relevance: 60.00%

Abstract:

This paper presents an H∞ controller synthesized via linear matrix inequalities (LMIs) for a current-source-converter-based superconducting magnetic energy storage (SMES) system connected to a node of a power system, where regulation of the grid current is the control objective. To facilitate the control design, the system is represented as a state-space realization with uncertainties. The control design involves selecting proper weighting functions and performing the LMI synthesis. The controller order is reduced by the Hankel-norm method. Simulations are carried out to evaluate the characteristics of the controller under parametric uncertainties. It is found that the proposed controller is inherently stable, has a significantly small tracking error, and preserves robust performance for the SMES.
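
The LMI machinery behind such a synthesis can be illustrated with its analysis counterpart, the bounded real lemma, which computes an H∞ norm as a semidefinite program. A sketch with cvxpy (an SDP-capable solver such as SCS must be installed; the synthesis LMIs in the paper add further variable changes on top of this):

```python
import numpy as np
import cvxpy as cp

def hinf_norm_lmi(A, B, C, D):
    """Bounded real lemma as an SDP: minimize gamma subject to
       [[A'P + PA, PB, C'], [B'P, -gamma*I, D'], [C, D, -gamma*I]] <= 0
    with P > 0. The optimal gamma is the H-infinity norm of (A, B, C, D)."""
    n, m = B.shape
    p = C.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    gam = cp.Variable(nonneg=True)
    lmi = cp.bmat([
        [A.T @ P + P @ A, P @ B,            C.T],
        [B.T @ P,         -gam * np.eye(m), D.T],
        [C,               D,                -gam * np.eye(p)],
    ])
    constraints = [P >> 1e-9 * np.eye(n), (lmi + lmi.T) / 2 << 0]
    cp.Problem(cp.Minimize(gam), constraints).solve()
    return gam.value

# First-order lag 1/(s + 1): H-infinity norm is 1 (at zero frequency)
print(hinf_norm_lmi(np.array([[-1.0]]), np.array([[1.0]]),
                    np.array([[1.0]]), np.array([[0.0]])))
```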

Relevance: 30.00%

Abstract:

We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron at $\sqrt{s} = 1.96$ TeV and collected by the CDF II detector. A sample of 328 events with a charged electron or muon and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, is selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of the neutrinos and reconstruct a top quark mass for each $(\phi_{\nu_1},\phi_{\nu_2})$ pair by minimizing a $\chi^2$ function under the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred-mass distributions (templates) are built from simulated t-tbar and background events and parameterized to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+3.4}_{-3.3}$(stat.)$\pm 3.1$(syst.) GeV/$c^2$.
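
The χ²-dependent weighting of scan points can be pictured with a toy version: weight each grid point's reconstructed mass by exp(−Δχ²/2) and average. The analysis's actual definition of the weights and of the per-event preferred mass is its own; names and numbers below are illustrative:

```python
import numpy as np

def preferred_mass(masses, chi2):
    """Combine top-mass solutions from a scan over the neutrino azimuths
    (phi_nu1, phi_nu2): weight each grid point's reconstructed mass by
    exp(-(chi2 - chi2_min)/2) and take the weighted mean."""
    w = np.exp(-0.5 * (chi2 - chi2.min()))
    return np.sum(w * masses) / np.sum(w)

# Toy scan: 24 x 24 grid over the two azimuthal angles
rng = np.random.default_rng(2)
masses = rng.normal(170.0, 4.0, (24, 24))   # reconstructed mass per grid point
chi2 = rng.gamma(2.0, 2.0, (24, 24))        # chi-squared per grid point
print(preferred_mass(masses, chi2))
```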