918 results for Estimateur orthogonalement invariant
Abstract:
The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.
Abstract:
The infinitesimal transformations that leave invariant a two-covariant symmetric tensor are studied. The interest of these symmetry transformations lies in the fact that this class of tensors includes the energy-momentum and Ricci tensors. We find that in most cases the class of infinitesimal generators of these transformations is a finite-dimensional Lie algebra, but in some cases, which exhibit a higher degree of degeneracy, this class is infinite-dimensional and may fail to be a Lie algebra. As an application, we study the Ricci collineations of a type B warped spacetime.
Abstract:
This paper presents a new method to analyze time-invariant linear networks that allows for inconsistent initial conditions. The method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed, and the network can involve any kind of pure or controlled sources. In addition, the energy transfers that occur at t=0 are determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.
Abstract:
We study numerically the disappearance of normally hyperbolic invariant tori in quasiperiodic systems and identify a scenario for their breakdown. In this scenario, the breakdown happens because two invariant directions of the transversal dynamics come close to each other, losing their regularity, while the Lyapunov multipliers associated with the invariant directions remain more or less constant. We identify notable quantitative regularities in this scenario, namely that the minimum angle between the two invariant directions and the Lyapunov multipliers have a power-law dependence on the parameters. The exponents of the power laws appear to be universal.
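As an illustration of the kind of power-law dependence described above, the following minimal Python sketch recovers an exponent from synthetic (parameter, minimum-angle) data by least squares in log-log space; the breakdown value, the data and the exponent are invented for the example and are not taken from the study.

import numpy as np

# Hypothetical illustration: estimate the exponent b in angle ~ C * (eps_c - eps)**b
# from synthetic (eps, angle) pairs via a straight-line fit in log-log space.
eps_c = 1.0                                  # assumed breakdown value of the parameter
eps = np.linspace(0.5, 0.99, 25)             # parameter values approaching eps_c
angle = 0.8 * (eps_c - eps) ** 1.3           # synthetic "minimum angle" data
slope, intercept = np.polyfit(np.log(eps_c - eps), np.log(angle), 1)
print("estimated exponent:", slope)          # recovers ~1.3, the exponent of the power law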
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem: the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of non-flat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. We thus show the advantages of the proposed FastGAF method: it compares favorably against Demons, both in terms of registration speed and quality, and over the range of example applications it consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
Abstract:
Estimation of the spatial statistics of subsurface velocity heterogeneity from surface-based geophysical reflection survey data is a problem of significant interest in seismic and ground-penetrating radar (GPR) research. A method to effectively address this problem has recently been presented, but our knowledge regarding the resolution of the estimated parameters is still inadequate. Here we examine this issue using an analytical approach that is based on the realistic assumption that the subsurface velocity structure can be characterized as a band-limited scale-invariant medium. Importantly, our work confirms recent numerical findings that the inversion of seismic or GPR reflection data for the geostatistical properties of the probed subsurface region is sensitive to the aspect ratio of the velocity heterogeneity and to the decay of its power spectrum, but not to the individual values of the horizontal and vertical correlation lengths.
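For readers unfamiliar with band-limited scale-invariant media, the following sketch evaluates a generic self-affine (power-law) velocity power spectrum in which only the aspect ratio of the heterogeneity and the spectral decay exponent shape the result; the functional form and all parameter values are illustrative assumptions, not the paper's model.

import numpy as np

# Illustrative 2-D power-law spectrum: only the aspect ratio ax/az and the decay
# exponent nu control its shape, in line with the abstract's point that the
# individual correlation lengths are not resolved separately.
def power_spectrum(kx, kz, ax=10.0, az=1.0, nu=1.5):
    return (1.0 + (kx * ax) ** 2 + (kz * az) ** 2) ** (-nu)

kx = np.fft.fftfreq(256, d=0.5)   # horizontal wavenumbers (illustrative sampling)
kz = np.fft.fftfreq(256, d=0.5)   # vertical wavenumbers
KX, KZ = np.meshgrid(kx, kz)
P = power_spectrum(KX, KZ)        # band limitation here comes from the finite grid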
Abstract:
In this study we evaluate the angular effects that alter the spectral response of land cover across multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
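The abstract does not specify which kernel-based distance is used; a common choice for comparing pixel distributions is the maximum mean discrepancy (MMD). The following Python sketch computes a biased MMD estimate with an RBF kernel on toy multiband pixel samples; the kernel, bandwidth and data are assumptions for illustration only.

import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel values between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    # Biased estimate of the squared maximum mean discrepancy between two samples.
    k_xx = rbf_kernel(X, X, gamma).mean()
    k_yy = rbf_kernel(Y, Y, gamma).mean()
    k_xy = rbf_kernel(X, Y, gamma).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Toy example: 8-band pixel spectra from two acquisition angles (random placeholders).
rng = np.random.default_rng(0)
angle_a = rng.normal(0.0, 1.0, size=(200, 8))
angle_b = rng.normal(0.3, 1.0, size=(200, 8))   # shifted distribution at a second angle
print("MMD^2 estimate:", mmd2(angle_a, angle_b))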
Abstract:
Valpha14 invariant natural killer T (Valpha14i NKT) cells are a unique lineage of mouse T cells that share properties with both NK cells and memory T cells. Valpha14i NKT cells recognize CD1d-associated glycolipids via a semi-invariant T cell receptor (TCR) composed of an invariant Valpha14-Jalpha18 chain paired preferentially with a restricted set of TCRbeta chains. During development in the thymus, rare CD4+ CD8+ (DP) cortical thymocytes that successfully rearrange the semi-invariant TCR are directed to the Valpha14i NKT cell lineage via interactions with CD1d-associated endogenous glycolipids expressed by other DP thymocytes. As they mature, Valpha14i NKT lineage cells upregulate activation markers such as CD44 and subsequently express NK-related molecules such as NK1.1 and members of the Ly-49 inhibitory receptor family. The developmental program of Valpha14i NKT cells is critically regulated by a number of signaling cues that have little or no effect on conventional T cell development, such as the Fyn/SAP/SLAM pathway, NFkappaB and T-bet transcription factors, and the cytokine IL-15. The unique developmental requirements of Valpha14i NKT cells may represent a paradigm for other unconventional T cell subsets that are positively selected by agonist ligands expressed on hematopoietic cells.
Abstract:
The mature TCR is composed of a clonotypic heterodimer (alpha beta or gamma delta) associated with the invariant CD3 components (gamma, delta, epsilon and zeta). There is now considerable evidence that more immature forms of the TCR-CD3 complex (consisting of either CD3 alone or CD3 associated with a heterodimer of TCR beta and pre-T alpha) can be expressed at the cell surface on early thymocytes. These pre-TCR complexes are believed to be necessary for the ordered progression of early T cell development. We have analyzed in detail the expression of both the pre-TCR and CD3 complex at various stages of adult thymus development. Our data indicate that all CD3 components are already expressed at the mRNA level by the earliest identifiable (CD4lo) thymic precursor. In contrast, genes encoding the pre-TCR complex (pre-T alpha and fully rearranged TCR beta) are first expressed at the CD44loCD25+CD4-CD8- stage. Detectable surface expression of both CD3 and TCR beta are delayed relative to expression of the corresponding genes, suggesting the existence of other (as yet unidentified) components of the pre-TCR complex.
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, since they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We evaluate the statistical distribution of the sweep intensities with goodness-of-fit tests in order to find a model probability distribution that fits the data and allows an appropriate formulation of the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean intensities method are invariant to the intensity fluctuations and to the averaging related to the dwell time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach to calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; the uncertainty of these methods likewise depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such a fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector field ICPMS.
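To make the distinction between methods (1) and (2) concrete, the following Python sketch applies both formulas to a synthetic transient signal and then re-bins the sweeps to illustrate the invariance claimed for the ratio-of-the-mean intensities method; the count rates, the noise model and the true ratio are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
true_ratio = 0.05                                         # assumed Pb-206/U-238 intensity ratio
ablation = np.clip(rng.normal(5e4, 1e4, 300), 1e3, None)  # fluctuating signal, one value per sweep
u238 = rng.poisson(ablation).astype(float)                # counted U-238 sweep intensities
pb206 = rng.poisson(true_ratio * ablation).astype(float)  # counted Pb-206 sweep intensities

ratio_of_means = pb206.mean() / u238.mean()   # method (1): ratio-of-the-mean intensities
mean_of_ratios = (pb206 / u238).mean()        # method (2): mean-of-the-intensity-ratios

# Re-bin the sweeps (average adjacent pairs) and recompute method (1): the value is
# unchanged because the overall means are preserved, which is the invariance to
# averaging claimed for this method; method (2) generally shifts under such re-binning.
u238_binned = u238.reshape(-1, 2).mean(axis=1)
pb206_binned = pb206.reshape(-1, 2).mean(axis=1)
print(ratio_of_means, pb206_binned.mean() / u238_binned.mean(), mean_of_ratios)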
Abstract:
Study of probabilized Markovian Whittle models. The probabilized Markovian Whittle model is a first-order simultaneous autoregressive spatial field model that expresses each variable of the field simultaneously as a random weighted average of the adjacent variables of the field, damped by a multiplicative coefficient ρ and augmented with an error term (a spatially independent, homoscedastic Gaussian variable that cannot be measured directly). In our case, the weighted average is an arithmetic mean that is random because of two conditions: (a) two variables are adjacent (in the sense of a graph) with probability 1 − p if the distance separating them is below a certain threshold, and (b) there is no adjacency at distances above this threshold. These conditions define an adjacency model (or connectivity model) for the spatial field. A probabilized Markovian Whittle model with p = 0 yields a classical Whittle model, which is more familiar in geography, spatial econometrics, ecology, sociology, etc., and in which ρ is the autoregression coefficient. Our model is therefore a version of the classical Whittle models that is probabilized at the level of the field's connectivity, leading to an innovative description of spatial autocorrelation. We begin by describing our spatial model and showing the effects of the complexity introduced by the connectivity model on the pattern of variances and the spatial correlation of the field. We then study the problem of estimating the autoregression coefficient ρ, for which we first carry out an in-depth analysis of its information in the sense of Fisher and of Kullback-Leibler. We show that an efficient unbiased estimator of ρ has an efficiency that varies with the parameter p, generally in a non-monotonic way, and with the structure of the adjacency network. When the connectivity of the field is not observed, we show that a misspecification of the maximum likelihood estimator of ρ can bias it as a function of p. In this context we propose other ways of estimating ρ. Finally, we study the power of significance tests for ρ whose test statistics are classical variants of Moran's I (the Cliff-Ord test) and of the maximal Moran's I (following Kooijman's method). We observe how the power varies with the parameter p and the coefficient ρ, thereby demonstrating the duality of spatial autocorrelation between intensity and connectivity in the context of autoregressive models.
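Below is a minimal simulation sketch of the model described above, assuming randomly placed sites, equal (arithmetic-mean) weights for adjacent sites and illustrative values of ρ, p and the distance threshold; it only reproduces the generative mechanism, not the estimation or testing procedures of the thesis.

import numpy as np

rng = np.random.default_rng(0)
n, rho, p, threshold = 100, 0.4, 0.2, 1.5        # illustrative values, not from the thesis
coords = rng.uniform(0, 10, size=(n, 2))         # random site locations

# Probabilized adjacency: sites closer than the threshold are linked with probability 1 - p,
# and no link is allowed beyond the threshold.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
close = (dist < threshold) & (dist > 0)
A = close & (rng.random((n, n)) < 1 - p)
A = np.triu(A, 1)
A = A | A.T                                      # keep the adjacency symmetric

# Row-normalize to an arithmetic-mean weight matrix W (isolated sites keep a zero row).
deg = A.sum(axis=1, keepdims=True)
W = np.where(deg > 0, A / np.maximum(deg, 1), 0.0)

# Simultaneous autoregression: X = rho * W X + eps, i.e. X = (I - rho W)^(-1) eps.
eps = rng.normal(0.0, 1.0, n)
X = np.linalg.solve(np.eye(n) - rho * W, eps)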
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Lastly, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with only a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
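As a concrete illustration of the "net benefit" underlying decision curve analysis, the sketch below implements the standard threshold-probability formulation NB = TP/n − FP/n · p_t/(1 − p_t) for the treated; the toy risk model and data are invented, and the paper's "overall net benefit", which additionally combines the analogous quantity for the untreated, is not reproduced here.

import numpy as np

def net_benefit_treated(y_true, risk, p_t):
    # Standard decision-curve net benefit at threshold probability p_t,
    # where "treat" means predicted risk >= p_t.
    treat = risk >= p_t
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * p_t / (1.0 - p_t)

# Toy example with a hypothetical risk model.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
risk = np.clip(0.5 * y + rng.normal(0.25, 0.2, 500), 0, 1)
curve = [(pt, net_benefit_treated(y, risk, pt)) for pt in np.linspace(0.05, 0.5, 10)]
print(curve)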
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem as an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in many applications, as we illustrate with several examples. Nevertheless, we also present new techniques based on the analysis of unsatisfiable cores that allow unsatisfiability to be proved efficiently as well for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken both from the academic and the industrial world.
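A toy sketch of the underlying idea: instantiating one variable over a small finite domain so that each resulting constraint is linear and can be handed to an SMT solver for linear arithmetic. It assumes the z3 Python bindings and a hypothetical constraint, and omits the domain refinement and unsatisfiable-core techniques of the actual method.

from z3 import Int, Solver, And, sat

y, z = Int('y'), Int('z')
# Toy non-linear problem: x*y + z > 10 and x + y + z == 7 over bounded integers.
# Fixing x to a concrete value turns x*y into a linear term, so each instance is
# a pure linear-arithmetic query.
for x_val in range(0, 6):                     # hypothetical finite domain for x
    s = Solver()
    s.add(And(x_val * y + z > 10, x_val + y + z == 7,
              y >= 0, y <= 5, z >= 0, z <= 5))
    if s.check() == sat:
        print('satisfiable with x =', x_val, 'model:', s.model())
        break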
Abstract:
We study the families of periodic orbits of the spatial isosceles 3-body problem (for small enough values of the mass lying on the symmetry axis) coming via the analytic continuation method from periodic orbits of the circular Sitnikov problem. Using the first integral of the angular momentum, we reduce the dimension of the phase space of the problem by two units. Since periodic orbits of the reduced isosceles problem generate invariant two-dimensional tori of the nonreduced problem, the analytic continuation of periodic orbits of the (reduced) circular Sitnikov problem at this level becomes the continuation of invariant two-dimensional tori from the circular Sitnikov problem to the nonreduced isosceles problem, each torus filled with periodic or quasi-periodic orbits. These tori are not KAM tori but merely isotropic, since we are dealing with a three-degrees-of-freedom system. The continuation of periodic orbits is done in two different ways: the first goes directly from the reduced circular Sitnikov problem to the reduced isosceles problem, while the second uses two steps, first continuing the periodic orbits from the reduced circular Sitnikov problem to the reduced elliptic Sitnikov problem, and then continuing those periodic orbits of the reduced elliptic Sitnikov problem to the reduced isosceles problem. The continuation in one or two steps produces different results. This work is purely analytic and uses the variational equations in order to apply Poincaré's continuation method.
Abstract:
In this paper we study the relative equilibria and their stability for a system of three point particles moving under the action of a Lennard-Jones potential. A central configuration is a special position of the particles where the position and acceleration vectors of each particle are proportional, with the same constant of proportionality for all particles. Since the Lennard-Jones potential depends only on the mutual distances among the particles, it is invariant under rotations. In a rotating frame the orbits coming from central configurations become equilibrium points, the relative equilibria. Due to the form of the potential, the relative equilibria depend on the size of the system, that is, they depend strongly on the moment of inertia I. In this work we characterize the relative equilibria, find the bifurcation values of I at which the number of relative equilibria changes, and analyze the stability of the relative equilibria.
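A toy numerical sketch in the spirit of the above, using normalized units, three equal unit masses kept in an equilateral shape, and a crude scan of the effective potential (centrifugal term plus Lennard-Jones pair interactions) to locate one family of relative equilibria as the angular momentum, and hence the size, varies; this one-dimensional reduction and all numerical values are simplifying assumptions, not the paper's analysis.

import numpy as np

def lj_pair(r):
    # Normalized Lennard-Jones pair potential with its minimum of -1 at r = 1.
    return r**-12 - 2.0 * r**-6

def v_eff(r, c):
    # Effective potential of a rotating equilateral configuration of three unit masses:
    # centrifugal term c^2 / (2 I) with moment of inertia I = r^2, plus three pair terms.
    return c**2 / (2.0 * r**2) + 3.0 * lj_pair(r)

r = np.linspace(0.8, 3.0, 5000)
for c in (0.5, 1.0, 2.0):                 # illustrative angular-momentum values
    v = v_eff(r, c)
    i = np.argmin(v)                      # crude search for the local minimum
    print("c =", c, " equilibrium side length ~", r[i], " V_eff ~", v[i])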