83 results for Robustness
Abstract:
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-Spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected.
On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
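The forward Eulerian integration step described in this abstract can be sketched in a few lines. This is a minimal, generic illustration using a toy analytic velocity field; the paper's actual B-Spline parameterization and 3D US pipeline are far more elaborate:

```python
import numpy as np

def integrate_displacement(velocity, x0, t0, t1, n_steps=1000):
    """Forward Euler integration of a non-stationary velocity field v(x, t).

    velocity : callable (x, t) -> dx/dt, vectorized over points x
    x0       : initial positions, shape (n_points, n_dims)
    Returns the displacement x(t1) - x0 at the sampled points.
    """
    x = np.array(x0, dtype=float)
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)   # phi(t+dt) = phi(t) + dt * v(phi(t), t)
        t += dt
    return x - np.asarray(x0, dtype=float)

# toy non-stationary field: v(x, t) = a * t (translation accelerating in time)
a = np.array([1.0, 0.0])
v = lambda x, t: a * t * np.ones_like(x)
x0 = np.zeros((5, 2))
disp = integrate_displacement(v, x0, 0.0, 1.0)
# analytic displacement at t = 1 is a * t^2 / 2 = [0.5, 0.0]
```

In the paper's setting, the spatial derivatives of the resulting displacement field would then yield the local strain tensor.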
Abstract:
Wireless “MIMO” systems, employing multiple transmit and receive antennas, promise a significant increase of channel capacity, while orthogonal frequency-division multiplexing (OFDM) is attracting a good deal of attention due to its robustness to multipath fading. Thus, the combination of both techniques is an attractive proposition for radio transmission. The goal of this paper is the description and analysis of a novel pilot-aided estimator of multipath block-fading channels. Typical models leading to estimation algorithms assume the number of multipath components and delays to be constant (and often known), while their amplitudes are allowed to vary with time. Our estimator is focused instead on the more realistic assumption that the number of channel taps is also unknown and varies with time following a known probabilistic model. The estimation problem arising from these assumptions is solved using Random-Set Theory (RST), whereby one regards the multipath-channel response as a single set-valued random entity. Within this framework, Bayesian recursive equations determine the evolution with time of the channel estimator. Due to the lack of a closed form for the solution of the Bayesian equations, a (Rao–Blackwellized) particle filter (RBPF) implementation of the channel estimator is advocated. Since the resulting estimator exhibits a complexity which grows exponentially with the number of multipath components, a simplified version is also introduced. Simulation results describing the performance of our channel estimator demonstrate its effectiveness.
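As a rough illustration of the recursive Bayesian filtering underlying such channel estimators, the sketch below runs a basic bootstrap particle filter on a single random-walk tap amplitude. The model, noise levels, and filter here are illustrative stand-ins, deliberately much simpler than the paper's Rao-Blackwellized random-set estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy model: one channel-tap amplitude follows a random walk,
#   a_t = a_{t-1} + w_t,  w_t ~ N(0, q)
# observed through noisy pilot symbols:  y_t = a_t + v_t,  v_t ~ N(0, r)
q, r, T, N = 0.01, 0.1, 200, 500

truth = np.cumsum(rng.normal(0, np.sqrt(q), T))
obs = truth + rng.normal(0, np.sqrt(r), T)

particles = rng.normal(0, 1, N)
estimates = []
for y in obs:
    particles = particles + rng.normal(0, np.sqrt(q), N)  # propagate
    w = np.exp(-0.5 * (y - particles) ** 2 / r)           # likelihood weights
    w /= w.sum()
    estimates.append(np.dot(w, particles))                # posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]     # resample

rmse_filter = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
# the filtered estimate should beat the raw pilot observations
```

The Rao-Blackwellization in the paper additionally marginalizes part of the state analytically, so that particles are needed only for the set-valued (tap-presence) component.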
Abstract:
Using a new dataset on capital account openness, we investigate why equity return correlations changed over the last century. Based on a new, long-run dataset on capital account regulations in a group of 16 countries over the period 1890-2001, we show that correlations increase as financial markets are liberalized. These findings are robust to controlling for both the Forbes-Rigobon bias and global averages in equity return correlations. We test the robustness of our conclusions, and show that greater synchronization of fundamentals is not the main cause of increasing correlations. These results imply that the home bias puzzle may be smaller than traditionally claimed.
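The Forbes-Rigobon bias mentioned in this abstract refers to the mechanical rise of measured correlations in high-volatility periods. The standard heteroskedasticity adjustment (Forbes and Rigobon, 2002) can be coded directly; the numbers below are illustrative, not taken from the paper's dataset:

```python
import math

def forbes_rigobon_adjust(rho, delta):
    """Heteroskedasticity-adjusted correlation (Forbes and Rigobon, 2002).

    rho   : correlation measured in the high-volatility subsample
    delta : relative increase in the variance of the conditioning
            market's returns in that subsample
    """
    return rho / math.sqrt(1.0 + delta * (1.0 - rho ** 2))

# a correlation of 0.5 measured in a period when variance doubled (delta = 1)
adj = forbes_rigobon_adjust(0.5, 1.0)
# the adjusted correlation is lower, about 0.378
```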
Abstract:
The aim of this paper is to test formally the classical business cycle hypothesis, using data from industrialized countries for the time period since 1960. The hypothesis is characterized by the view that the cyclical structure in GDP is concentrated in the investment series: fixed investment has typically a long cycle, while the cycle in inventory investment is shorter. To check the robustness of our results, we subject the data for 15 OECD countries to a variety of detrending techniques. While the hypothesis is not confirmed uniformly for all countries, there is a considerably high number for which the data display the predicted pattern. None of the countries shows a pattern which can be interpreted as a clear rejection of the classical hypothesis.
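One detrending technique commonly used in this business-cycle literature is the Hodrick-Prescott filter (the abstract does not state which techniques the authors applied, so this is only a representative example). A minimal dense-matrix implementation:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum (y - tau)^2 + lam * sum (d2 tau)^2.
    Solves (I + lam * K'K) tau = y, with K the second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    K = np.zeros((n - 2, n))
    for i in range(n - 2):
        K[i, i:i + 3] = [1.0, -2.0, 1.0]
    tau = np.linalg.solve(np.eye(n) + lam * (K.T @ K), y)
    return tau, y - tau   # trend, cycle

# sanity check: a pure linear trend has zero second differences,
# so the filter returns it unchanged and the cyclical component is ~0
t = np.arange(40, dtype=float)
trend, cycle = hp_filter(2.0 + 0.5 * t)
```

In practice the cyclical component extracted this way (from GDP, fixed investment, and inventory investment) is what cycle-length comparisons of the kind described above are run on.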
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
This paper investigates the properties of an international real business cycle model with household production. We show that a model with disturbances to both market and household technologies reproduces the main regularities of the data and improves existing models in matching international consumption, investment and output correlations without unrealistic assumptions on the structure of international financial markets. Sensitivity analysis shows the robustness of the results to alternative specifications of the stochastic processes for the disturbances and to variations of unmeasured parameters within a reasonable range.
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness against small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and a general type of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
Abstract:
We analyze the effects of neutral and investment-specific technology shocks on hours and output. Long cycles in hours are captured in a variety of ways. Hours robustly fall in response to neutral shocks and robustly increase in response to investment-specific shocks. The percentage of the variance of hours (output) explained by neutral shocks is small (large); the opposite is true for investment-specific shocks. News shocks are uncorrelated with the estimated technology shocks.
Abstract:
What are the best voting systems in terms of utilitarianism? Or in terms of maximin, or maximax? We study these questions for the case of three alternatives and a class of structurally equivalent voting rules. We show that plurality, arguably the most widely used voting system, performs very poorly in terms of remarkable ideals of justice, such as utilitarianism or maximin, and yet is optimal in terms of maximax. Utilitarianism is best approached by a voting system converging to the Borda count, while the best way to achieve maximin is by means of a voting system converging to negative voting. We study the robustness of our results across different social cultures, measures of performance, and population sizes.
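A Monte-Carlo comparison of this kind can be sketched as follows. The impartial-culture setup with i.i.d. uniform utilities, the electorate size, and the number of simulated elections are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_voters=51, n_alts=3, n_elections=2000):
    """Compare plurality vs Borda on a utilitarian criterion: the
    winner's total utility divided by the maximum attainable total."""
    eff_plur, eff_borda = [], []
    for _ in range(n_elections):
        u = rng.random((n_voters, n_alts))   # voter-by-alternative utilities
        # ranks per voter: 0 = worst alternative, n_alts-1 = best
        ranks = np.argsort(np.argsort(u, axis=1), axis=1)
        plur = np.bincount(np.argmax(u, axis=1), minlength=n_alts).argmax()
        borda = ranks.sum(axis=0).argmax()
        total = u.sum(axis=0)
        eff_plur.append(total[plur] / total.max())
        eff_borda.append(total[borda] / total.max())
    return np.mean(eff_plur), np.mean(eff_borda)

eff_plur, eff_borda = simulate()
# Borda is expected to attain higher average utilitarian efficiency
```

Negative voting (approve all but the last-ranked alternative) could be scored the same way from the `ranks` array to reproduce the maximin comparison.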
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners were willing to overlook this for the benefit of tractability of the models. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids this problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of population size and the model parameters.
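The classical problem the abstract refers to, jointly estimating a binomial's population size and success probability, can be illustrated with a profile-likelihood grid search. The data and search range below are made up for illustration; note how the product N·p is pinned down by the sample mean even when N and p individually are hard to separate:

```python
import math

def binom_logpmf(k, n, p):
    """Log of the Binomial(n, p) pmf at k, via log-gamma."""
    if k > n:
        return float("-inf")
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

def fit_binomial(xs, n_max=500):
    """Joint MLE of (N, p) for x_i ~ Binomial(N, p), both unknown.
    For each candidate N, the maximizing p is mean(x) / N (profile MLE)."""
    xbar = sum(xs) / len(xs)
    best = (None, None, float("-inf"))
    for n in range(max(xs), n_max + 1):
        p = xbar / n
        ll = sum(binom_logpmf(x, n, p) for x in xs)
        if ll > best[2]:
            best = (n, p, ll)
    return best[0], best[1]

# hypothetical observed purchase counts; the likelihood surface is very
# flat in N -- the indeterminacy/non-robustness the paper points out
xs = [27, 31, 35, 29, 33, 30, 26, 34, 28, 32]
N_hat, p_hat = fit_binomial(xs)
```

The paper's heuristic avoids relying on this ill-conditioned joint MLE alone by bringing in the multinomial-logit structure, offer-set variety, and prior knowledge of arrival rates.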
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and also topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories derived from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation type events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique presented is not computationally expensive, so it seems well suited to implementation in an operational environment.
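For reference, the standard-refraction baseline that the paper's radiosonde-based trajectories replace is the effective-earth-radius ("4/3 Earth") beam-height model; a minimal version:

```python
import math

def beam_height(r_km, elev_deg, ant_height_km=0.0, ke=4.0 / 3.0):
    """Radar beam centre height (km) above the antenna level under the
    standard effective-earth-radius model. The paper instead computes
    beam trajectories from recent radiosonde profiles, which matters
    precisely when propagation is anomalous (anaprop)."""
    Re = ke * 6371.0                      # effective earth radius, km
    theta = math.radians(elev_deg)
    return (math.sqrt(r_km ** 2 + Re ** 2 + 2.0 * r_km * Re * math.sin(theta))
            - Re + ant_height_km)

h = beam_height(100.0, 0.5)   # 100 km range at 0.5 deg elevation, ~1.46 km
```

Comparing such beam heights against a terrain model is also the usual starting point for the geometric-optics beam-blockage estimation mentioned in step 3.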
Abstract:
This article designs what it calls a Credit-Risk Balance Sheet (the risk being that of default by customers), a tool which, in principle, can contribute to revealing, controlling and managing the bad debt risk arising from a company's commercial credit, whose amount can represent a significant proportion of both its current and total assets. To construct it, we start from the duality observed in any credit transaction of this nature, whose basic identity can be summed up as Credit = Risk. "Credit" is granted by a company to its customer, and can be ranked by quality (we suggest the credit scoring system), and "risk" can either be assumed (interiorised) by the company itself or transferred to third parties (exteriorised). What allows us to speak with confidence of a real Credit-Risk Balance Sheet, and gives the approach its methodological robustness, is that the dual vision of the credit transaction is not, as we demonstrate, merely a classificatory duality (a double risk-credit classification of reality) but rather a true causal relationship, that is, a risk-credit causal duality. Once said Credit-Risk Balance Sheet (which bears a certain structural similarity with the classic net asset balance sheet) has been built, and its methodological coherence demonstrated, its properties (static and dynamic) are studied. Analysis of the temporal evolution of the Credit-Risk Balance Sheet and of its applications will be the object of subsequent works.
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew normal distribution, and not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
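The kind of coverage degradation under asymmetric residuals reported here can be reproduced in a toy setting. The sketch below checks the empirical coverage of a nominal-95% interval for a mean under normal versus exponential (skewed) errors; this is only loosely analogous to the paper's nonlinear mixed-model setting, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def coverage(sampler, true_mean, n=10, reps=20000, z=1.96):
    """Empirical coverage of the nominal-95% z-interval for the mean."""
    hits = 0
    for _ in range(reps):
        x = sampler(n)
        half = z * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps

cov_normal = coverage(lambda n: rng.normal(0.0, 1.0, n), 0.0)
cov_skewed = coverage(lambda n: rng.exponential(1.0, n), 1.0)
# with n = 10, the skewed (exponential) case under-covers
# relative to the symmetric normal case
```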