54 results for Nonlinear maximum principle
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even when the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them, the need for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
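The two-phase procedure described above can be illustrated compactly. Below is a minimal sketch under stated assumptions: a Poisson model, a minimum Hellinger distance estimator as the initial MDE, and a simple hard-rejection weighting rule based on Pearson-type residuals of the initial fit. The thesis's adaptive weighting scheme is more refined, so this is an illustration of the recipe rather than the proposed estimator.

```python
# Minimal sketch of the two-phase robust estimation procedure for a Poisson model.
# Assumptions (not from the abstract): minimum Hellinger distance initial estimate,
# hard-rejection weights from Pearson-type residuals of the initial fit.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def hellinger_mde(x, lam_max=50.0):
    """Phase 1: minimum Hellinger distance estimate of the Poisson mean."""
    support = np.arange(x.max() + 1)
    emp = np.bincount(x, minlength=support.size) / x.size    # empirical pmf
    def disparity(lam):
        model = poisson.pmf(support, lam)
        return np.sum((np.sqrt(emp) - np.sqrt(model)) ** 2)  # squared Hellinger distance
    return minimize_scalar(disparity, bounds=(1e-6, lam_max), method="bounded").x

def weighted_mle(x, lam0, cutoff=2.5):
    """Phase 2: downweight cells flagged as outlying by the initial fit,
    then compute a weighted maximum likelihood estimate."""
    support = np.arange(x.max() + 1)
    emp = np.bincount(x, minlength=support.size) / x.size
    model = poisson.pmf(support, lam0)
    resid = (emp - model) / np.sqrt(model / x.size)          # Pearson-type residuals
    w_cell = np.where(np.abs(resid) > cutoff, 0.0, 1.0)      # hard-rejection weights
    w = w_cell[x]                                            # weight per observation
    return np.sum(w * x) / np.sum(w)                         # weighted Poisson MLE

rng = np.random.default_rng(0)
x = np.concatenate([rng.poisson(3.0, 95), np.full(5, 20)])   # 5 gross outliers
lam0 = hellinger_mde(x)
lam_wml = weighted_mle(x, lam0)
print(f"initial MDE: {lam0:.2f}, weighted MLE: {lam_wml:.2f}, plain mean: {x.mean():.2f}")
```

On contaminated samples such as the one generated here, the weighted MLE stays close to the robust initial estimate while the plain sample mean is pulled toward the outliers.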
Abstract:
The analysis of multi-modal and multi-sensor images is nowadays of paramount importance for Earth Observation (EO) applications. A variety of methods exist that aim at fusing the different sources of information to obtain a compact representation of such datasets. However, for change detection, existing methods are often unable to deal with heterogeneous image sources, and very few consider possible nonlinearities in the data. Additionally, the availability of labeled information is very limited in change detection applications. For these reasons, we present the use of a semi-supervised kernel-based feature extraction technique. It incorporates a manifold regularization that accounts for the geometric distribution of the data and jointly addresses the small-sample problem. An exhaustive example using Landsat 5 data illustrates the potential of the method for multi-sensor change detection.
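As a rough illustration of the ingredients involved (not the formulation of the paper), the sketch below combines an OPLS-style supervised separation term computed from the few labeled samples with a k-NN graph-Laplacian manifold regularizer built from all samples, and extracts kernel features by solving a regularized generalized eigenproblem; the RBF kernel, graph construction and objective are illustrative assumptions.

```python
# Illustrative semi-supervised kernel feature extraction with a graph-Laplacian
# (manifold) regularizer. The objective below is an assumption for illustration,
# not the formulation used in the paper.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def rbf_kernel(A, B, sigma):
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * sigma**2))

def knn_laplacian(X, k=10):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph."""
    d = cdist(X, X)
    W = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]          # k nearest neighbours (skip self)
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                           # symmetrize
    return np.diag(W.sum(axis=1)) - W

def ss_kernel_features(X, y, n_feats=2, sigma=1.0, mu=1.0, eps=1e-6):
    """X: all samples; y: class labels for labeled samples, -1 for unlabeled."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    L = knn_laplacian(X)
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, classes.size))                  # class indicators, zero rows if unlabeled
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    labeled = y >= 0
    Y[labeled] -= Y[labeled].mean(axis=0)            # center over the labeled subset
    A = K @ Y @ Y.T @ K                              # supervised separation term
    B = K @ L @ K + mu * K + eps * np.eye(n)         # manifold + Tikhonov regularizers
    w, V = eigh(A, B)                                # generalized eigenproblem
    alpha = V[:, ::-1][:, :n_feats]                  # leading eigenvectors
    return K @ alpha, alpha                          # projected features, coefficients
```

Unlabeled samples enter only through the kernel matrix and the graph Laplacian, which is what mitigates the small-sample problem of the labeled set.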
Abstract:
OBJECTIVE: Transthoracic echocardiography (TTE) has been used clinically to disobstruct the venous drainage cannula and to optimise placement of venous cannulae in the vena cava, but it has never been used to evaluate performance capabilities. Also, little progress has been made in venous cannula design to optimise venous return to the heart-lung machine. We designed a self-expandable Smartcanula (SC) and analysed its performance using echocardiography. METHODS: An epicardial echocardiography probe was placed over the SC or the control cannula (CC) and a Doppler image was obtained. Mean (Vm) and maximum (Vmax) velocities, flow and diameter were obtained. The pressure drop (ΔP CPB) between the central venous pressure and the inlet to the venous reservoir was also obtained. LDH and free Hb were also compared in 30 patients. Comparison between the two groups was made using Student's t-test, with statistical significance established when p<0.05. RESULTS: Age for the SC and CC groups was 61.6±17.6 years and 64.6±13.1 years, respectively. Weight was 70.3±11.6 kg and 72.8±14.4 kg, respectively. BSA was 1.80±0.2 m² and 1.82±0.2 m², respectively. CPB times were 114±53 min and 108±44 min, respectively. Cross-clamp time was 59±15 min and 76±29 min, respectively (p=NS). Free Hb was 568±142 U/l versus 549±271 U/l post-CPB for the SC and CC, respectively (p=NS). LDH was 335±73 mg/l versus 354±116 mg/l for the SC and CC, respectively (p=NS). Vm was 89±10 cm/s (SC) versus 63±3 cm/s (CC), and Vmax was 139±23 cm/s (SC) versus 93±11 cm/s (CC) (both p<0.01). ΔP CPB was 30±10 mmHg (SC) versus 43±13 mmHg (CC) (p<0.05). A Bland-Altman test showed good agreement between the two devices concerning flow-rate calculations between CPB and TTE (bias 300±700 ml standard deviation). CONCLUSIONS: This novel Smartcanula design, owing to its self-expanding principle, provides superior flow characteristics compared to the classic two-stage venous cannula used for adult CPB surgery. No detrimental effects were observed concerning blood damage. Echocardiography was effective in analysing venous cannula performance and velocity patterns.
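The reported flow-rate agreement between the heart-lung machine and TTE relies on a Bland-Altman analysis; the sketch below shows that computation on made-up paired measurements (the values are hypothetical, not the study's data).

```python
# Bland-Altman agreement between two flow measurements (hypothetical data).
import numpy as np

cpb_flow = np.array([4.2, 4.8, 3.9, 5.1, 4.5, 4.0])   # L/min, made-up CPB readings
tte_flow = np.array([4.5, 5.0, 4.3, 5.5, 4.9, 4.2])   # L/min, made-up TTE estimates

diff = tte_flow - cpb_flow
bias = diff.mean()                           # systematic offset between methods
sd = diff.std(ddof=1)                        # spread of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
print(f"bias = {bias:.2f} L/min, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f} L/min")
```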
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While in exploration seismology waveform tomographic imaging has become well established over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems.
One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.
Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
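A schematic sketch of the deconvolution step underlying such an iterative source-wavelet estimation is given below: the current synthetic impulse responses are deconvolved from the observed traces in the frequency domain and stacked over traces to update the wavelet, which then feeds the next waveform-inversion iteration. The water-level regularization and the trace stacking are standard, illustrative choices rather than the exact published scheme.

```python
# One wavelet-update step of an iterative deconvolution approach: least-squares
# spectral division of observed traces by the current synthetic impulse responses,
# stabilized by a water level and stacked over all traces. Details (water level,
# stacking) are illustrative assumptions, not the published algorithm.
import numpy as np

def update_wavelet(observed, synthetic, water_level=1e-2):
    """observed, synthetic: arrays of shape (n_traces, n_samples);
    synthetic = traces simulated with the current model and a delta source."""
    n = observed.shape[1]
    O = np.fft.rfft(observed, axis=1)
    S = np.fft.rfft(synthetic, axis=1)
    eps = water_level * np.abs(S).max()               # water-level stabilization
    # Multi-trace least-squares deconvolution, averaged over all traces
    W = np.sum(O * np.conj(S), axis=0) / (np.sum(np.abs(S) ** 2, axis=0) + eps**2)
    return np.fft.irfft(W, n=n)

# The estimate is refined by alternating: (1) simulate synthetics with the current
# model, (2) call update_wavelet, (3) run the next waveform-inversion iteration.
```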
Abstract:
Introduction: Glenoid bone volume and bone quality can render the fixation of a reversed shoulder arthroplasty (RSA) baseplate hazardous. A cadaveric study at our institution demonstrated that optimal baseplate fixation could be achieved with screws in three major columns. Our aim is to review our early rate of aseptic glenoid loosening in a series of baseplates fixed according to this principle. Methods: Between 2005 and 2008, 48 consecutive RSAs (Reversed Aequalis) were implanted in 48 patients with an average age of 74.4 years (range, 56 to 86 years). There were 37 women and 11 men. Twenty-seven primary RSAs were performed for cuff tear arthropathy, 3 after failed rotator cuff surgery, 6 for failed arthroplasties, 7 for acute fractures and 5 after failed ORIF. All baseplate fixations were done using a nonlocking posterior screw in the scapular spine, a nonlocking anterior screw in the glenoid body, a locking superior screw in the coracoid and a locking inferior screw in the pillar. All patients were reviewed with standardized radiographs. We reported the positions of the screws in relation to the scapular spine and the coracoid process in two different views. We defined screw positions as totally in, partially in, or out of the target. Finally, we reported aseptic glenoid loosening, which was defined as implant subsidence. Results: Four patients were lost to follow-up. Thus 44 shoulders could be reviewed after a mean follow-up of 16 months (range, 9 to 32 months). Thirty-seven (84%) scapular spine screws were either partially or totally in the spine; thus, 7 (16%) scapular spine screws were out of the target. No coracoid screw was out of the target. At the final follow-up control, we observed no glenoid loosening. Conclusion: Early glenoid loosening occurs before two years of follow-up and is most of the time related to technical problems and/or insufficient glenoid bone stock and bone quality. Our study demonstrates that baseplate fixation of an RSA according to the three-column principle is a reproducible technique and a valuable way to prevent early glenoid loosening.
Abstract:
Although hemoglobin (Hb) is mainly present in the cytoplasm of erythrocytes (red blood cells), lower concentrations of pure, cell-free Hb are released permanently into the circulation due to an inherent intravascular hemolytic disruption of erythrocytes. Previously it was shown that the interaction of Hb with bacterial endotoxins (lipopolysaccharides, LPS) results in a significant increase of the biological activity of LPS. There is clear evidence that the enhancement of the biological activity of LPS by Hb is connected with a disaggregation of LPS. These findings raise the question of whether the property of enhancing the biological activity of endotoxin, in most cases proven by the ability to increase cytokine (tumor necrosis factor alpha, interleukins) production in human mononuclear cells, is restricted to bacterial endotoxin or is a more general principle in nature. To elucidate this question, we investigated the interaction of various synthetic and natural virulence (pathogenicity) factors with hemoglobin of human or sheep origin. In addition to enterobacterial R-type LPS, a synthetic bacterial lipopeptide and synthetic phospholipid-like structures mimicking the lipid A portion of LPS were analysed. Furthermore, we also tested endotoxically inactive LPS and lipid A compounds, such as those from Chlamydia trachomatis. We found that the observations made for the endotoxically active form of LPS can be generalized to the other synthetic and natural virulence factors: in every case, the cytokine production they induce is increased by the addition of Hb. This biological property of Hb is connected with its physical property of converting the aggregate structures of the virulence factors into one with cubic symmetry, accompanied by a considerable reduction of the size and number of the original aggregates.
Abstract:
Nonlinear regression problems can often be reduced to linearity by transforming the response variable (e.g., using the Box-Cox family of transformations). The classic estimates of the parameter defining the transformation, as well as of the regression coefficients, are based on the maximum likelihood criterion, assuming homoscedastic normal errors for the transformed response. These estimates are nonrobust in the presence of outliers and can be inconsistent when the errors are nonnormal or heteroscedastic. This article proposes new robust estimates that are consistent and asymptotically normal for any unimodal and homoscedastic error distribution. For this purpose, a robust version of conditional expectation is introduced in which the prediction mean squared error is replaced with an M-scale. This concept is then used to develop a nonparametric criterion to estimate the transformation parameter as well as the regression coefficients. A finite-sample estimate of this criterion based on a robust version of smearing is also proposed. Monte Carlo experiments show that the new estimates compare favorably with the available competitors.
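A simplified sketch of the underlying idea follows: for each candidate Box-Cox parameter, fit a robust regression to the (Jacobian-normalized) transformed response and score the parameter by an M-scale of the residuals instead of their mean squared error. The grid search, Huber regression and bisquare M-scale below are illustrative stand-ins for the article's robust conditional expectation and smearing-based criterion.

```python
# Simplified sketch: choose the Box-Cox parameter by minimizing a robust M-scale
# of regression residuals rather than a (nonrobust) mean squared error. The Huber
# regression, bisquare M-scale and grid search are illustrative choices, not the
# estimator proposed in the article.
import numpy as np
from scipy.optimize import brentq
from sklearn.linear_model import HuberRegressor

def boxcox_norm(y, lam):
    """Jacobian-normalized Box-Cox transform (y must be positive), so residual
    scales are comparable across candidate values of lam."""
    gm = np.exp(np.mean(np.log(y)))                 # geometric mean of y
    if abs(lam) < 1e-8:
        return gm * np.log(y)
    return (y**lam - 1.0) / (lam * gm**(lam - 1.0))

def m_scale(r, c=1.547, b=0.5):
    """Bisquare M-scale: solve mean(rho(r/s)) = b for s."""
    def rho(u):
        u = np.clip(np.abs(u) / c, 0.0, 1.0)
        return 1.0 - (1.0 - u**2) ** 3
    f = lambda s: rho(r / s).mean() - b
    s_hi = np.max(np.abs(r)) + 1e-12
    while f(s_hi) > 0:                              # ensure a sign change for brentq
        s_hi *= 2.0
    return brentq(f, 1e-12, s_hi)

def robust_boxcox_fit(X, y, lambdas=np.linspace(-1.0, 2.0, 31)):
    """Grid search: pick the transformation parameter minimizing the M-scale
    of robust-regression residuals on the normalized transformed response."""
    best_lam, best_s, best_model = None, np.inf, None
    for lam in lambdas:
        z = boxcox_norm(y, lam)
        model = HuberRegressor(max_iter=500).fit(X, z)
        s = m_scale(z - model.predict(X))
        if s < best_s:
            best_lam, best_s, best_model = lam, s, model
    return best_lam, best_model

# Example usage: lam_hat, fit = robust_boxcox_fit(X, y)   # y must be positive
```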
Abstract:
In recent years there has been an explosive growth in the development of adaptive and data-driven methods. One of the efficient data-driven approaches is based on statistical learning theory (SLT) (Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error (to fit the available data with a model), but also to reduce the complexity of the model and thus the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVMs are finding new areas of application (www.kernel-machines.org). SVMs develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVMs are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVMs use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVMs for spatially distributed data is given in (Kanevski and Maignan 2004).
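A minimal illustration with scikit-learn (an assumed tool, not one referenced in the text): an RBF-kernel support vector regression fitted to noisy nonlinear data, reporting the fraction of training points retained as support vectors, since only those points define the resulting model.

```python
# Minimal SVM (support vector regression) illustration with scikit-learn.
# Data and hyperparameters are arbitrary; the point is that the fitted model
# is defined only by its support vectors.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 10, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.2, 200)        # noisy nonlinear target

model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5).fit(X, y)
frac_sv = model.support_.size / X.shape[0]             # fraction kept as support vectors
print(f"support vectors: {model.support_.size}/{X.shape[0]} ({frac_sv:.0%})")
print("prediction at x=5:", model.predict([[5.0]])[0])
```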
Abstract:
An epidemic model is formulated by a reaction-diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
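A minimal 1-D sketch of this kind of system is given below: susceptible-infected reaction terms with self- and cross-diffusion, discretized by an explicit cell-centered finite-volume scheme on a uniform grid. The reaction terms, constitutive laws and explicit time stepping are illustrative choices, not the paper's conservative/non-conservative formulations or its adaptive multiresolution strategy.

```python
# 1-D susceptible-infected (SI) model with self- and cross-diffusion, discretized
# by an explicit cell-centered finite-volume scheme on a uniform grid. Reaction
# terms, diffusion laws and time stepping are illustrative assumptions.
import numpy as np

nx, L = 200, 10.0
dx = L / nx
dt = 1e-4
beta, gamma = 2.0, 0.5            # infection and recovery rates
dS, dI, dc = 0.05, 0.01, 0.02     # self-diffusion (S, I) and cross-diffusion coefficients

def flux_divergence(u, d):
    """Rate of change of u due to face fluxes -d * grad(u), zero-flux boundaries."""
    grad = np.diff(u) / dx                        # gradients at interior faces
    flux = -d * grad                              # Fick-type face fluxes
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero-flux boundary faces
    return -np.diff(flux) / dx                    # net inflow per cell

x = np.linspace(dx / 2, L - dx / 2, nx)           # cell centers
S = np.ones(nx)
I = 0.1 * np.exp(-((x - L / 2) ** 2))             # localized initial infection

for _ in range(20000):
    reaction_S = -beta * S * I
    reaction_I = beta * S * I - gamma * I
    # S diffuses down its own gradient and, via cross-diffusion, away from high I
    S += dt * (flux_divergence(S, dS) + flux_divergence(I, dc) + reaction_S)
    I += dt * (flux_divergence(I, dI) + reaction_I)

print("total infected mass:", I.sum() * dx)
```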
Abstract:
BACKGROUND: A primary goal of clinical pharmacology is to understand the factors that determine the dose-effect relationship and to use this knowledge to individualize drug dose. METHODS: A principle-based criterion is proposed for deciding among alternative individualization methods. RESULTS: Safe and effective variability defines the maximum acceptable population variability in drug concentration around the population average. CONCLUSIONS: A decision on whether patient covariates alone are sufficient, or whether therapeutic drug monitoring in combination with target concentration intervention is needed, can be made by comparing the remaining population variability after a particular dosing method with the safe and effective variability.
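The decision rule amounts to a simple comparison; the sketch below illustrates it with hypothetical coefficients of variation (none of the numbers come from the abstract).

```python
# Sketch of the decision rule described above (all numbers are hypothetical):
# a dosing method suffices if the population variability in concentration that
# remains after applying it is within the safe-and-effective variability.
safe_effective_cv = 0.30          # hypothetical maximum acceptable CV of concentration
cv_covariate_dosing = 0.45        # hypothetical remaining CV with covariate-based dosing
cv_tci = 0.20                     # hypothetical remaining CV with TDM + target concentration intervention

for name, cv in [("covariates alone", cv_covariate_dosing),
                 ("TDM + target concentration intervention", cv_tci)]:
    verdict = "sufficient" if cv <= safe_effective_cv else "not sufficient"
    print(f"{name}: remaining CV = {cv:.0%} -> {verdict}")
```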