83 results for Robustness
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
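To fix ideas, the substitution can be sketched in generic covariance-structure notation (the symbols below are standard in this literature but are not taken from the paper itself, and the displayed $\Omega$ is only the familiar normal-theory example of a matrix built from second-order moments):

```latex
% s = vector of analyzed sample moments, \sigma(\theta) = model structure.
\[
  \sqrt{n}\,\bigl(s - \sigma(\theta_0)\bigr) \xrightarrow{d} N(0,\Gamma),
\]
% Standard errors and the chi-square goodness-of-fit statistic depend on a
% consistent estimate of \Gamma, which in general involves fourth-order
% moments of the data. Under asymptotic-robustness conditions, \Gamma can be
% replaced by a matrix \Omega of second-order (cross-product) moments only,
% e.g. the normal-theory expression
\[
  \Omega = 2\,D^{+}\,(\Sigma \otimes \Sigma)\,D^{+\prime},
\]
% where \Sigma is the population covariance matrix and D^{+} is the
% Moore–Penrose inverse of the duplication matrix.
```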
Abstract:
This paper analyses the robustness of Least-Squares Monte Carlo, a technique recently proposed by Longstaff and Schwartz (2001) for pricing American options. This method is based on least-squares regressions in which the explanatory variables are certain polynomial functions. We analyze the impact of different basis functions on option prices. Numerical results for American put options provide evidence that a) this approach is very robust to the choice of different alternative polynomials and b) few basis functions are required. However, these conclusions are not reached when analyzing more complex derivatives.
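A minimal sketch of the Longstaff-Schwartz algorithm with a plain polynomial basis (the parameter values and degree below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, n_steps, n_paths, degree=3, seed=0):
    """Price an American put by Least-Squares Monte Carlo (Longstaff-Schwartz).

    Regression basis: plain polynomials in the asset price; the robustness
    question studied above is how sensitive the price is to this choice."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths.
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((n_paths, 1), S0), S])

    cashflow = np.maximum(K - S[:, -1], 0.0)       # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cashflow *= np.exp(-r * dt)                # discount one step back
        itm = K - S[:, t] > 0                      # regress only on in-the-money paths
        if itm.sum() > degree + 1:
            coeffs = np.polyfit(S[itm, t], cashflow[itm], degree)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = K - S[itm, t]
            ex_now = exercise > continuation       # exercise where immediate payoff wins
            idx = np.where(itm)[0][ex_now]
            cashflow[idx] = exercise[ex_now]
    return np.exp(-r * dt) * cashflow.mean()

price = lsm_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                         n_steps=50, n_paths=20000)
```

Swapping the `np.polyfit` basis for Laguerre or Hermite polynomials is the kind of variation whose effect the paper measures.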
Abstract:
In the quest to completely describe entanglement in the general case of a finite number of parties sharing a physical system of finite-dimensional Hilbert space, an entanglement magnitude is introduced for its pure and mixed states: robustness. It corresponds to the minimal amount of mixing with locally prepared states which washes out all entanglement. It quantifies, in a sense, the endurance of entanglement against noise and jamming. Its properties are studied comprehensively. Analytical expressions for the robustness are given for pure states of two-party systems, and analytical bounds for mixed states of two-party systems. Specific results are obtained mainly for the qubit-qubit system (qubit denotes quantum bit). As by-products, local pseudomixtures are generalized, a lower bound for the relative volume of separable states is deduced, and arguments for considering convexity a necessary condition of any entanglement measure are put forward.
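The quantity described above can be sketched as follows (notation follows the later literature on robustness of entanglement rather than the paper's own symbols):

```latex
% Robustness of a state \rho: the minimal amount s of locally preparable
% (separable) noise \rho_s that washes out all entanglement.
\[
  R(\rho) \;=\; \min_{\rho_s \in \mathcal{S}}
  \min \Bigl\{ s \ge 0 \;:\; \tfrac{1}{1+s}\bigl(\rho + s\,\rho_s\bigr)
  \in \mathcal{S} \Bigr\},
\]
% where \mathcal{S} is the set of separable states. For a pure two-party
% state with Schmidt decomposition
% |\psi> = \sum_i a_i |i>|i>,  a_i >= 0,
% the closed form quoted in this line of work is
\[
  R\bigl(\lvert\psi\rangle\langle\psi\rvert\bigr)
  \;=\; \Bigl(\sum_i a_i\Bigr)^{2} - 1 .
\]
```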
Abstract:
working paper
Abstract:
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for the robustness and the power of the randomization test.
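The general mechanics of such a test can be sketched as follows; the simulated series, the mean-difference statistic, and the minimum phase length are illustrative assumptions, not the study's exact setup:

```python
import numpy as np
from itertools import combinations

def abab_randomization_test(data, change_points, min_len=3):
    """Randomization test for an ABAB single-case design.

    `change_points` is the observed triplet (i, j, k) splitting the series
    into phases A1 = data[:i], B1 = data[i:j], A2 = data[j:k], B2 = data[k:].
    The statistic is |mean(B phases) - mean(A phases)|; the reference
    distribution enumerates every admissible triplet of change points."""
    n = len(data)

    def stat(i, j, k):
        a = np.concatenate([data[:i], data[j:k]])
        b = np.concatenate([data[i:j], data[k:]])
        return abs(b.mean() - a.mean())

    observed = stat(*change_points)
    # All triplets leaving each of the four phases at least `min_len` points.
    triplets = [(i, j, k)
                for i, j, k in combinations(range(1, n), 3)
                if i >= min_len and j - i >= min_len
                and k - j >= min_len and n - k >= min_len]
    ref = np.array([stat(i, j, k) for i, j, k in triplets])
    return (ref >= observed).mean()   # p-value: share of divisions at least as extreme

rng = np.random.default_rng(0)
# Illustrative series with a clear treatment effect in the B phases.
series = np.concatenate([rng.normal(0, 1, 8), rng.normal(3, 1, 8),
                         rng.normal(0, 1, 8), rng.normal(3, 1, 8)])
p = abab_randomization_test(series, change_points=(8, 16, 24))
```

Autocorrelated data would be generated with an AR(1) process instead of independent draws, which is the manipulation behind the distorted Type I error rates reported above.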
Abstract:
The aim of this note is to complement some of the results appearing in the article “Publishing Performance in Economics: Spanish Rankings” by Dolado et al. (2003). In particular, we focus on three issues: the robustness of the results regardless of the time span considered, the evaluation of a researcher's contribution to the advancement of knowledge, and to what extent the choice of a particular database from which to download the data can affect the results. Differences are significant when we expand the time period considered. There are also small but significant differences if we combine datasets to derive the rankings.
Abstract:
This work complements some of the results appearing in the article “Publishing Performance in Economics: Spanish Rankings” by Dolado et al. Specifically, we focus on the robustness of the results regardless of the time span considered, the effect of the choice of a particular database on the final results, and the effects of changes in the unit of institutional measure (departments versus institutions as a whole). Differences are significant when we expand the time period considered. There are also significant but small differences if we combine datasets to derive the rankings. Finally, department rankings offer a more precise picture of the situation of Spanish academics, although results do not differ substantially from those obtained when overall institutions are considered.
Abstract:
This project presents the most common channel distribution models that a signal may encounter during transmission. It then introduces the concept of diversity in terrestrial wireless communications and transfers this scenario to satellite communications. To analyze the quality of links with diversity, a simulator was built in Matlab that models the basic structure of a communication system (transmitter, channel and receiver). By simulating communications between the different diversity systems, the quality of each link could be compared. The Alamouti scheme showed a robustness and a low error probability that make it the best choice when designing a diversity system for satellite communications. It uses channel diversity to exploit every bit of received signal in order to decode the transmitted message.
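The core of the Alamouti scheme can be sketched in a few lines; this is a generic 2-transmit, 1-receive antenna block in Python, not the project's Matlab simulator:

```python
import numpy as np

def alamouti_2x1(s1, s2, h1, h2, noise=(0.0, 0.0)):
    """One Alamouti block over a 2-transmit, 1-receive antenna channel.

    Slot 1 transmits (s1, s2); slot 2 transmits (-conj(s2), conj(s1)).
    The channel gains (h1, h2) are assumed constant over both slots."""
    n1, n2 = noise
    r1 = h1 * s1 + h2 * s2 + n1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2
    # Linear combining decouples the two symbols:
    # each estimate sees only its own symbol scaled by |h1|^2 + |h2|^2.
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Noiseless sanity check with QPSK symbols and an arbitrary complex channel.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
s1_hat, s2_hat = alamouti_2x1(s1, s2, h1=0.8 - 0.3j, h2=-0.2 + 0.9j)
```

The factor |h1|² + |h2|² in both estimates is exactly the two-branch diversity gain that gives the scheme its low error probability.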
Abstract:
This work presents a methodology for detecting and tracking facial features. The first step of the procedure detects faces using Adaboost with cascades of weak classifiers. The second step searches for the internal features of the face using CSR, detecting regions of interest. Once these features are captured, a tracking process based on the SIFT descriptor, which we have called pseudo-SIFT, is able to store information about the evolution of motion in the detected regions. In addition, a public dataset has been developed with the aim of sharing it with other research on detection, classification and tracking. Real experiments show the robustness of this work and its adaptability for future work.
Abstract:
As computer chip implementation technologies evolve to obtain more performance, those chips are built with smaller components, higher transistor density, and lower supply voltages. All these factors make the chips less robust and increase the probability of a transient fault. A transient fault may occur once and never recur the same way in a computer system's lifetime. There are distinct consequences when a transient fault occurs: the operating system might abort the execution if the change produced by the fault is detected through bad behavior of the application, but the biggest risk is that the fault produces an undetected data corruption that modifies the application's final result without warning (for example, a bit flip in some crucial data). With the objective of researching transient faults in a computer system's processor registers and memory, we have developed an extension of HP and AMD's joint full-system simulation environment, named COTSon. This extension allows the injection of faults that flip a single bit in the processor registers and memory of the simulated computer. The developed fault injection system makes it possible to: evaluate the effects of single-bit-flip transient faults on an application, analyze an application's robustness against single-bit-flip transient faults, and validate fault detection mechanisms and strategies.
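The injected fault model itself is simple to illustrate; this toy sketch flips one bit of an in-memory value (it is not COTSon's injection interface, which operates on the simulated machine's registers and memory):

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a 64-bit float, mimicking a single-bit transient
    fault in memory. XOR with a one-bit mask is an involution, so flipping
    the same bit twice restores the original value."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    (faulty,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return faulty

x = 1.0
faulty = flip_bit(x, 52)          # flip a low exponent bit of the IEEE 754 encoding
silent_corruption = (faulty != x)  # nothing crashes: the value is just wrong
```

A flip in a low mantissa bit yields a barely perturbed value, while a flip in the exponent or sign changes it drastically; both are "silent" unless some mechanism detects them.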
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to a multi-stable scenario for the basic derived dynamical systems, noise is an important feature of the model, taking into account finite-size effects and the robustness of the decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss the existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state representing the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme allowing for simulations performed on the Fokker-Planck equation which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
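The type of equation in question can be sketched in generic form (the notation below is assumed for illustration and is not taken from the paper):

```latex
% p(x,t): probability density of the two populations' firing rates x in a
% domain \Omega; F: drift derived from the rate dynamics; \beta > 0: noise
% strength coming from finite-size effects.
\[
  \partial_t p(x,t) + \nabla \cdot \bigl( F(x)\, p(x,t) \bigr)
  - \frac{\beta}{2}\, \Delta p(x,t) = 0 ,
\]
% with no-flux boundary conditions. The stationary state p_\infty solves
\[
  \nabla \cdot \Bigl( F(x)\, p_\infty(x)
  - \frac{\beta}{2}\, \nabla p_\infty(x) \Bigr) = 0 ,
  \qquad \int_\Omega p_\infty = 1, \quad p_\infty \ge 0 ,
\]
% and its peaks sit at the stable decision states of the underlying
% multi-stable rate dynamics.
```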
Abstract:
Computer chip implementation technologies evolving to obtain more performance are increasing the probability of transient faults. As this probability grows, and since on-chip solutions are expensive or tend to degrade processor performance, efforts to deal with these transient faults at higher levels (such as the operating system or even the application level) are increasing. Mostly, these efforts try to avoid silent data corruptions by using hardware-, software- and hybrid-based techniques that add redundancy to detect the errors generated by transient faults. This work presents our proposal to improve the robustness of applications with source-code-based transformations that add redundancy. Our proposal also takes into account the tradeoff between the improved robustness and the overhead generated by the added redundancy.
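A toy sketch of the general duplicate-and-compare idea behind such source-level redundancy (the decorator form and names are illustrative, not the authors' transformation rules):

```python
class TransientFaultDetected(Exception):
    """Raised when redundant executions of the same code disagree."""

def redundant(fn):
    """Source-level redundancy: execute fn twice and compare the results.
    A mismatch reveals a silent data corruption, at the cost of roughly
    doubling the work, which is the robustness/overhead tradeoff."""
    def wrapper(*args, **kwargs):
        first = fn(*args, **kwargs)
        second = fn(*args, **kwargs)
        if first != second:
            raise TransientFaultDetected(f"{first!r} != {second!r}")
        return first
    return wrapper

@redundant
def dot(xs, ys):
    # A pure computation: both redundant runs must agree.
    return sum(x * y for x, y in zip(xs, ys))

result = dot([1, 2, 3], [4, 5, 6])   # 32, computed and checked twice
```

Real source-code transformations of this kind duplicate statements and variables inside the function rather than whole calls, precisely to tune where the redundancy overhead is paid.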
Abstract:
COMPSs is a parallel programming environment developed by BSC-CNS. This project seeks to extend this environment in order to provide it with functionalities that were initially unsupported. This set of extensions mainly consists of the implementation of mechanisms that increase the flexibility, robustness and versatility of the system.
Abstract:
In this work, a watermark embedding and retrieval system has been implemented that allows this process to be carried out on a specific type of image (images in JPEG format). The correct implementation of the system made it possible to run watermark embedding and retrieval tests on JPEG images. The robustness of the system against compression-based manipulations intended to remove the watermark has been studied, and it has been verified that the implemented system offers more robustness for grayscale images than for color images.
Abstract:
This final-year project (TFC) stems from the need to build a J2EE e-commerce web application featuring scalability, robustness and reusability. The Struts framework was used.