921 results for Generalized Variational Inequality
Abstract:
A rigorous asymptotic theory for Wald residuals in generalized linear models is not yet available. The authors provide matrix formulae of order O(n⁻¹), where n is the sample size, for the first two moments of these residuals. The formulae can be applied to many regression models widely used in practice. The authors suggest adjusted Wald residuals for these models with approximately zero mean and unit variance. The expressions were used to analyze a real dataset. Simulation results indicate that the adjusted Wald residuals are better approximated by the standard normal distribution than the unadjusted Wald residuals.
Abstract:
A twisted generalized Weyl algebra A of degree n depends on a base algebra R, n commuting automorphisms σᵢ of R, n central elements tᵢ of R, and on some additional scalar parameters. In a paper by Mazorchuk and Turowska, it is claimed that certain consistency conditions for the σᵢ and tᵢ are sufficient for the algebra to be nontrivial. However, in this paper we give an example which shows that this is false. We also correct the statement by finding a new set of consistency conditions and prove that the old and new conditions together are necessary and sufficient for the base algebra R to map injectively into A. In particular, they are sufficient for the algebra A to be nontrivial. We speculate that these consistency relations may play a role in other areas of mathematics, analogous to the role played by the Yang-Baxter equation in the theory of integrable systems.
Abstract:
The generalized finite element method (GFEM) is applied to a nonconventional hybrid-mixed stress formulation (HMSF) for plane analysis. In the HMSF, three approximation fields are involved: stresses and displacements in the domain, and displacement fields on the static boundary. The GFEM-HMSF shape functions are then generated by the product of a partition of unity associated with each field and polynomial enrichment functions. In principle, the enrichment can be conducted independently over each of the HMSF approximation fields. However, the stability and convergence of the resulting numerical method can be affected, mainly by spurious modes generated when enrichment is arbitrarily applied to the displacement fields. With the aim of efficiently exploring the enrichment possibilities, an extension to GFEM-HMSF of the conventional Zienkiewicz patch test is proposed as a necessary condition to ensure numerical stability. Finally, once the extended patch test is satisfied, some numerical analyses focusing on selective enrichment over distorted meshes formed by bilinear quadrilateral finite elements are presented, showing the performance of the GFEM-HMSF combination.
Abstract:
Background Support for the adverse effect of high income inequality on population health has come from studies that focus on larger areas, such as the US states, while studies at smaller geographical areas (eg, neighbourhoods) have found mixed results. Methods We used propensity score matching to examine the relationship between income inequality and mortality rates across 96 neighbourhoods (distritos) of the municipality of Sao Paulo, Brazil. Results Prior to matching, higher income inequality distritos (Gini >= 0.25) had slightly lower overall mortality rates (2.23 per 10 000, 95% CI -23.92 to 19.46) compared to lower income inequality areas (Gini <0.25). After propensity score matching, higher inequality was associated with a statistically significant higher mortality rate (41.58 per 10 000, 95% CI 8.85 to 73.3). Conclusion In Sao Paulo, the more egalitarian communities are among some of the poorest, with the worst health profiles. Propensity score matching was used to avoid inappropriate comparisons between the health status of unequal (but wealthy) neighbourhoods versus equal (but poor) neighbourhoods. Our methods suggest that, with proper accounting of heterogeneity between areas, income inequality is associated with worse population health in Sao Paulo.
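The matching step described above can be sketched as follows. This is a generic, minimal greedy 1:1 nearest-neighbour matching on precomputed propensity scores, not the study's actual estimation procedure; the district labels and scores below are hypothetical.

```python
# Illustrative greedy 1:1 nearest-neighbour matching on precomputed
# propensity scores. Generic sketch, NOT the study's procedure;
# district labels and scores are hypothetical.

def match(treated, controls):
    """Greedy 1:1 matching without replacement, by |score difference|."""
    matched = []
    available = dict(controls)  # control id -> propensity score
    for t_id, t_score in sorted(treated.items()):
        if not available:
            break
        # pick the control whose score is closest to this treated unit's
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        matched.append((t_id, c_id))
        del available[c_id]  # match without replacement
    return matched

# hypothetical scores: "treated" = high-inequality districts (Gini >= 0.25)
treated = {"A": 0.62, "B": 0.48}
controls = {"X": 0.60, "Y": 0.50, "Z": 0.10}
pairs = match(treated, controls)
```

Mortality rates are then compared only within the matched pairs, which is what avoids comparing wealthy-unequal districts directly against poor-equal ones.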
Abstract:
Background: Thalamotomies and pallidotomies were commonly performed before the deep brain stimulation (DBS) era. Although ablative procedures can lead to significant dystonia improvement, longer periods of analysis reveal disease progression and functional deterioration. Today, the same patients seek additional treatment possibilities. Methods: Four patients with generalized dystonia who previously had undergone bilateral pallidotomy came to our service seeking additional treatment because of dystonic symptom progression. Bilateral subthalamic nucleus DBS (B-STN-DBS) was the treatment of choice. The patients were evaluated with the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) and the Unified Dystonia Rating Scale (UDRS) before and 2 years after surgery. Results: All patients showed significant functional improvement, averaging 65.3% in BFMDRS (P = .014) and 69.2% in UDRS (P = .025). Conclusions: These results suggest that B-STN-DBS may be an interesting treatment option for generalized dystonia, even for patients who have already undergone bilateral pallidotomy. (c) 2012 Movement Disorder Society
Abstract:
Background The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties have not been extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power, and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by the usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. per-allele odds ratio) for variants acting multiplicatively, but slightly increases the power to detect variants with a dominant mode of action, while reducing the power to detect recessive variants. Although there were differences between the GOR and the usual approaches in terms of bias and type-I error rates, both simulation- and real-data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
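One common formulation of the GOR for ordered genotype data is the probability that a randomly chosen case sits in a higher risk-genotype category than a randomly chosen control, divided by the reverse probability. A minimal sketch of that formulation (function name and toy counts are illustrative, not from the paper):

```python
# Minimal sketch of one common definition of the generalized odds ratio
# (GOR) for ordered genotype counts. Illustrative only.

def generalized_odds_ratio(cases, controls):
    """cases, controls: counts per ordered genotype category, e.g. [aa, aA, AA]."""
    n_case, n_ctrl = sum(cases), sum(controls)
    p_higher = p_lower = 0.0
    for i, c in enumerate(cases):
        for j, k in enumerate(controls):
            p = (c / n_case) * (k / n_ctrl)
            if i > j:
                p_higher += p   # case in a higher category than control
            elif i < j:
                p_lower += p    # control in a higher category than case
    return p_higher / p_lower

# toy counts: cases enriched in the high-risk genotype
gor = generalized_odds_ratio([10, 20, 30], [30, 20, 10])
```

Because no dominant/recessive/multiplicative model is imposed on the genotype categories, the measure is genetic model-free in the sense used above.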
Abstract:
OBJECTIVE: To analyze cause-specific mortality rates according to the relative income hypothesis. METHODS: All 96 administrative areas of the city of São Paulo, southeastern Brazil, were divided into two groups based on the Gini coefficient of income inequality: high (>0.25) and low (<0.25). The propensity score matching method was applied to control for confounders associated with socioeconomic differences among areas. RESULTS: The difference between high and low income inequality areas was statistically significant for homicide (8.57 per 10,000; 95%CI 2.60;14.53); ischemic heart disease (5.47 per 10,000 [95%CI 0.76;10.17]); HIV/AIDS (3.58 per 10,000 [95%CI 0.58;6.57]); and respiratory diseases (3.56 per 10,000 [95%CI 0.18;6.94]). The ten most common causes of death accounted for 72.30% of the mortality difference. Infant mortality also had significantly higher age-adjusted rates in high inequality areas (2.80 per 10,000 [95%CI 0.86;4.74]), as did mortality among males (27.37 per 10,000 [95%CI 6.19;48.55]) and females (15.07 per 10,000 [95%CI 3.65;26.48]). CONCLUSIONS: The study results support the relative income hypothesis. After propensity score matching, cause-specific mortality rates were higher in more unequal areas. Studies on income inequality in smaller areas should properly account for heterogeneity in social and demographic characteristics.
Abstract:
[EN] We present in this paper a variational approach to accurately estimate, simultaneously and directly from PIV image sequences, the velocity field and its derivatives. Our method differs from other techniques presented in the literature in that the energy minimization used to estimate the particle motion depends on a second-order Taylor development of the flow. In this way, we are able to compute not only the motion vector field but also an accurate estimation of its derivatives. Hence, we avoid the use of numerical schemes to compute the derivatives from the estimated flow, which usually lead to numerical amplification of the inherent uncertainty in the estimated flow. The performance of our approach is illustrated with the estimation of the motion vector field and the vorticity on both synthetic and real PIV datasets.
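The distinguishing ingredient here is the second-order Taylor development of the flow inside the energy. Schematically, per component u_k of the velocity field, the local model is (a sketch in standard notation, not the paper's exact functional):

```latex
% second-order local model of the flow around a point x,
% per component u_k of the velocity field u
u_k(x + h) \;\approx\; u_k(x) \;+\; \nabla u_k(x)^{\top} h
  \;+\; \tfrac{1}{2}\, h^{\top}\, \nabla^{2} u_k(x)\, h
```

Minimizing the energy jointly over u_k, ∇u_k and ∇²u_k is what yields the derivatives directly, without post hoc finite differencing of the estimated flow.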
Abstract:
[EN] In this paper we study a variational problem derived from a computer vision application: video camera calibration with a smoothing constraint. By video camera calibration we mean estimating the location, orientation and lens zoom setting of the camera for each video frame, taking visible image features into account. To simplify the problem we assume that the camera is mounted on a tripod; in that case, for each frame captured at time t, the calibration is given by 3 parameters: (1) P(t) (PAN), the rotation about the tripod's vertical axis; (2) T(t) (TILT), the rotation about the tripod's horizontal axis; and (3) Z(t) (CAMERA ZOOM), the camera lens zoom setting. The calibration function t -> u(t) = (P(t), T(t), Z(t)) is obtained as a minimizer of an energy functional I[u]. In this paper we study the existence of minima of this energy functional as well as the solutions of the associated Euler-Lagrange equations.
Abstract:
[EN] In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized, using the projection on a cylinder, in terms of two coordinates representing the displacement and angle in a cylindrical coordinate system. The starting point for our method is a set of different views of a cylindrical surface, together with a precomputed disparity map estimation between pairs of images. The proposed variational technique is based on an energy minimization that balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points on the images and the expected location following the precomputed disparity map estimation between pairs of images. One interesting advantage of this approach is that we regularize the 3D surface by means of a bi-dimensional minimization problem. We show some experimental results for large stereo sequences.
Abstract:
[EN] In recent years we have developed several methods for 3D reconstruction. We first addressed the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then turned to the problem of reconstructing a 3D scene from multiple views (more than 2). The method for multiple-view reconstruction relies on the method for stereoscopic reconstruction: for every pair of consecutive images we estimate a disparity map, and then we apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization, a postprocessing step necessary for smoothing the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all these problems from an energy minimization approach: we investigate the associated Euler-Lagrange equation of the energy functional and approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.
Abstract:
The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling) in the ARPA-SIM operational configuration is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D–VAR set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D–VAR retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D–VAR are well correlated with the radiosonde measurements. Subsequently, the 1D–VAR technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli–Venezia–Giulia on 8 July 2004, and a heavy-precipitation case that occurred in the Emilia–Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature.
To improve the 1D–VAR technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up, applied to the case of 8 July 2004, shows a substantially neutral impact.
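In standard variational-assimilation notation, a 1D–VAR retrieval of this kind minimizes a cost function of the following form (a schematic reminder of the general method, not the specific operational set-up):

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x} - \mathbf{x}_b)^{\top} \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_b)
  \;+\; \tfrac{1}{2}\,\bigl(\mathbf{y} - H(\mathbf{x})\bigr)^{\top} \mathbf{R}^{-1} \bigl(\mathbf{y} - H(\mathbf{x})\bigr)
```

Here x is the temperature/humidity profile being retrieved, x_b the COSMO background, y the SEVIRI radiances, H the radiative transfer operator, and B and R the background and observation error covariances; the flow-dependent matrices assessed above replace a static B.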
Abstract:
Some fundamental biological processes, such as embryonic development, have been preserved during evolution and are common to species belonging to different phylogenetic positions, yet remain largely unknown. The understanding of the cell morphodynamics leading to the formation of organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. In this work we design a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been widely validated as a model organism for understanding vertebrate development, gene function and healing/repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins addressed to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either through the generalized form of the Hough transform or by identifying nuclei positions as local maxima after a smoothing preprocessing step. Membrane and nuclei shapes are reconstructed using PDE-based variational techniques such as the Subjective Surfaces and the Chan-Vese methods. Cell tracking is performed by combining the information previously detected on cell shapes and positions with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all cells tracked from the late sphere stage for at least 6 hours with less than 2% error. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, as well as of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
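The "smoothing + local maxima" variant of the nucleus-detection step can be sketched as follows on a toy 2D image. This is a generic illustration in pure Python; the actual pipeline works on 3D microscopy stacks, and the kernel and image here are hypothetical.

```python
# Sketch of nucleus detection by smoothing followed by local-maxima
# extraction, on a toy 2D image. Illustrative only.

K = ((1, 2, 1), (2, 4, 2), (1, 2, 1))  # 3x3 Gaussian-like kernel, sum 16

def smooth(img):
    """Convolve with K (zero padding at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += K[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc / 16.0
    return out

def local_maxima(img, threshold):
    """Interior pixels strictly brighter than their 8-neighbourhood."""
    h, w = len(img), len(img[0])
    peaks = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            if v > threshold and all(
                v > img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            ):
                peaks.append((y, x))
    return peaks

# toy image with two bright "nuclei"
img = [[0] * 8 for _ in range(5)]
img[2][2] = 9
img[2][6] = 9
peaks = local_maxima(smooth(img), threshold=0.5)
```

The smoothing suppresses isolated noisy pixels so that each nucleus contributes a single peak; the detected peak coordinates then seed the shape-reconstruction and tracking stages.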
Abstract:
This thesis deals with a class of nonlinear eigenvalue problems with variational structure in a real Hilbert space. The eigenvalue equation under consideration thus arises as the Euler-Lagrange equation of a continuously differentiable functional; in addition, the nonlinear part of the problem is assumed to be odd and definite. The main results in this abstract setting are criteria for the existence of spectrally characterized solutions, i.e., solutions whose eigenvalue coincides with a prescribed variational eigenvalue of an associated linear problem. The derivation of these criteria is based on a study of continuous families of self-adjoint eigenvalue problems and requires generalizations of spectral-theoretic concepts. Besides pure existence theorems, relations between spectral characterizations and the Ljusternik-Schnirelman levels of the functional are also discussed. We consider applications to semilinear differential equations (as well as integro-differential equations) of second order. This yields new information about the corresponding solution sets with regard to nodal properties. The methods developed are particularly well suited to one-dimensional and radially symmetric problems, while part of the results also holds without symmetry assumptions.
Abstract:
A group G has finite Prüfer rank (respectively, co-central rank) at most r if for every finitely generated subgroup H: H (respectively, H modulo its centre) can be generated by r elements. In this thesis, the known theorems on groups of finite Prüfer rank (X-groups, for short) are generalized, as far as possible, to the substantially larger class of groups of finite co-central rank (R-groups, for short). For locally nilpotent R-groups that are torsion-free or p-groups, it is shown that the central quotient group must be an X-group. It follows that hypercentrality and local nilpotency are identical conditions for R-groups. Analogously, R-groups are locally soluble if and only if they are hyperabelian. Central to the structure theory of hyperabelian R-groups is the fact that such groups possess an ascending normal series with abelian X-groups as factors. A Sylow theory for periodic hyperabelian R-groups is developed. Torsion-free hyperabelian R-groups are shown to be soluble. Furthermore, locally finite R-groups are almost hyperabelian. For R-groups, very large classes of groups coincide with the almost hyperabelian groups; to this end, the notion of a section cover is introduced and it is shown that R-groups with an almost hyperabelian section cover are almost hyperabelian.