915 results for Hasse invariant
Abstract:
In this paper we find quantities that are adiabatic invariants of any desired order for a general slowly time-dependent Hamiltonian. In a preceding paper, we chose a quantity that was initially an adiabatic invariant to first order and sought the conditions to be imposed on the Hamiltonian so that the quantum mechanical adiabatic theorem would hold to m-th order. [We found that this occurs when the first (m - 1) time derivatives of the Hamiltonian vanish at the initial and final instants.] Here we look for a quantity that is an adiabatic invariant to m-th order for any Hamiltonian that changes slowly in time and need not satisfy any special condition (its first time derivatives need not vanish initially and finally).
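For orientation, a textbook example (not taken from the paper) of the kind of quantity being generalized: for a harmonic oscillator with slowly varying frequency, the action E/ω is an adiabatic invariant to first order.

```latex
% Textbook example (not from the paper): a harmonic oscillator with a
% slowly varying frequency \omega(t),
\[
  H(t) = \frac{p^{2}}{2m} + \frac{1}{2}\, m\, \omega^{2}(t)\, q^{2},
  \qquad
  I = \frac{E(t)}{\omega(t)},
\]
% conserves the action I to first order in the slowness parameter
% \epsilon \sim \dot{\omega}/\omega^{2}; the paper constructs analogues
% of I that remain invariant to arbitrary order m.
```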
Abstract:
The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.
Abstract:
The infinitesimal transformations that leave a two-covariant symmetric tensor invariant are studied. The interest of these symmetry transformations lies in the fact that this class of tensors includes the energy-momentum and Ricci tensors. We find that in most cases the set of infinitesimal generators of these transformations is a finite-dimensional Lie algebra, but in some cases exhibiting a higher degree of degeneracy it is infinite-dimensional and may fail to be a Lie algebra. As an application, we study the Ricci collineations of a type B warped spacetime.
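In standard notation (not specific to this paper), the symmetry condition in question is the vanishing of a Lie derivative:

```latex
% An infinitesimal transformation generated by the vector field \xi^{a}
% leaves a two-covariant symmetric tensor T_{ab} invariant when
\[
  \mathcal{L}_{\xi} T_{ab}
    = \xi^{c} \nabla_{c} T_{ab}
    + T_{cb} \nabla_{a} \xi^{c}
    + T_{ac} \nabla_{b} \xi^{c}
    = 0 .
\]
% Taking T_{ab} = R_{ab} (the Ricci tensor) gives the Ricci
% collineations applied above to type B warped spacetimes.
```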
Abstract:
This paper presents a new method to analyze time-invariant linear networks that allows for inconsistent initial conditions. The method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed, and the network can involve any kind of pure or controlled sources. The energy transfers that occur at t=0 are also determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.
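As a toy illustration of connection energy (my example, not drawn from the paper): connecting two charged capacitors at t=0 is the classic inconsistent initial condition, and the energy dissipated at the connection instant has a closed form.

```python
# Toy illustration (not from the paper): two capacitors C1, C2 charged
# to V1, V2 are connected at t = 0. The inconsistent initial condition
# forces an impulsive current; charge is conserved, but some stored
# energy is lost at t = 0 -- a simple instance of "connection energy".

def connection_energy(C1, C2, V1, V2):
    V_final = (C1 * V1 + C2 * V2) / (C1 + C2)    # charge conservation
    E_before = 0.5 * C1 * V1**2 + 0.5 * C2 * V2**2
    E_after = 0.5 * (C1 + C2) * V_final**2
    return E_before - E_after                     # dissipated at t = 0

# Closed form: C1*C2*(V1 - V2)**2 / (2*(C1 + C2))
print(connection_energy(1e-6, 1e-6, 10.0, 0.0))   # 2.5e-05 J
```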
Abstract:
We study numerically the disappearance of normally hyperbolic invariant tori in quasiperiodic systems and identify a scenario for their breakdown. In this scenario, the breakdown happens because two invariant directions of the transversal dynamics come close to each other and lose their regularity, while the Lyapunov multipliers associated with the invariant directions remain roughly constant. We identify notable quantitative regularities in this scenario: the minimum angle between the two invariant directions and the Lyapunov multipliers exhibit power-law dependence on the parameters, and the exponents of the power laws appear to be universal.
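Such power laws are typically verified by a linear fit in log-log coordinates; a generic sketch on synthetic data (not the authors' code):

```python
# Generic power-law check, as one would apply to the minimum angle
# between the invariant directions versus the distance to breakdown.
# Synthetic data only; not the authors' code or parameters.
import numpy as np

eps = np.logspace(-4, -1, 20)            # distance to the breakdown value
alpha = 2.3 * eps**0.75                  # synthetic "minimum angle" data
beta, log_c = np.polyfit(np.log(eps), np.log(alpha), 1)
print(f"fitted exponent: {beta:.3f}")    # recovers the exponent 0.75
```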
Abstract:
OBJECTIVES: Etravirine (ETV) is a novel nonnucleoside reverse transcriptase inhibitor (NNRTI) with reduced cross-resistance to first-generation NNRTIs, which has been studied primarily in randomized clinical trials rather than in routine clinical settings. METHODS: ETV resistance-associated mutations (RAMs) were investigated by analysing 6072 genotypic tests. The antiviral activity of ETV was predicted using different interpretation systems: International AIDS Society-USA (IAS-USA), Stanford, Rega and Agence Nationale de Recherches sur le Sida et les hépatites virales (ANRS). RESULTS: The prevalence of ETV RAMs was higher in NNRTI-exposed patients [44.9%, 95% confidence interval (CI) 41.0-48.9%] than in treatment-naïve patients (9.6%, 95% CI 8.5-10.7%). ETV RAMs in treatment-naïve patients mainly represent polymorphisms, as prevalence estimates in genotypic tests for treatment-naïve patients with documented recent (<1 year) infection, who had acquired HIV before the introduction of NNRTIs, were almost identical (9.8%, 95% CI 3.3-21.4%). Discontinuation of NNRTI treatment led to a marked drop in the detection of ETV RAMs, from 51.7% (95% CI 40.8-62.6%) to 34.5% (95% CI 24.6-45.4%, P=0.032). Differences in prevalence among subtypes were found for V90I and V179T (P<0.001). Estimates of restricted virological response to ETV varied among algorithms in patients with exposure to efavirenz (EFV)/nevirapine (NVP), ranging from 3.8% (95% CI 2.5-5.6%) for ANRS to 56.2% (95% CI 52.2-60.1%) for Stanford. The predicted activity of ETV decreased as the sensitivity of potential optimized background regimens decreased. The presence of major IAS-USA mutations (L100I, K101E/H/P and Y181C/I/V) reduced the treatment response at week 24. CONCLUSIONS: Most ETV RAMs in drug-naïve patients are polymorphisms rather than transmitted RAMs. Uncertainty regarding predictions of antiviral activity for ETV in NNRTI-treated patients remains high. The lowest activity was predicted for patients harbouring extensive multidrug-resistant viruses, thus limiting ETV use in those who are most in need.
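The prevalence figures above are quoted with 95% confidence intervals; one standard way to compute such an interval for a proportion is the Wilson score interval (a generic sketch; the abstract does not state which method the study used, and the counts below are hypothetical):

```python
# Wilson score 95% CI for a binomial proportion, a standard choice for
# prevalence estimates like those quoted above. The abstract does not
# specify the interval method actually used; counts are hypothetical.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_ci(96, 1000))   # hypothetical counts giving ~9.6% prevalence
```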
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework treats the registration task as a weighted minimal surface problem: the data term and the regularization term are combined through multiplication in a single, parametrization-invariant, geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of non-flat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: the data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. We compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. This comparison shows the advantages of the proposed FastGAF method: it compares favorably against Demons, both in terms of registration speed and quality, and over the range of example applications it consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
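The splitting described above can be caricatured in a few lines: optimize a data term over one field, a regularity term over a second field, and tie the two together with an augmented Lagrangian. A runnable one-dimensional toy (my sketch, with a quadratic data term and a moving-average smoother standing in for the real GAF terms; not the authors' implementation):

```python
# ADMM-style toy of the splitting idea above: the data term is solved
# over u, the regularity term over v, and u = v is enforced via a
# (scaled) augmented Lagrangian. A quadratic data term and a 3-point
# moving average stand in for the real GAF terms.
import numpy as np

def split_toy(f, n_iters=200, mu=1.0):
    u = np.zeros_like(f)
    v = np.zeros_like(f)
    lam = np.zeros_like(f)                     # scaled multiplier field
    for _ in range(n_iters):
        u = (f + mu * (v - lam)) / (1 + mu)    # data step (closed form)
        w = u + lam
        v = (np.roll(w, 1) + w + np.roll(w, -1)) / 3.0   # regularity step
        lam = lam + u - v                      # dual update on u = v
    return u

f = np.sin(np.linspace(0, 2 * np.pi, 64))
print(split_toy(f)[:4])                        # smoothed fit to the data
```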
Abstract:
Estimation of the spatial statistics of subsurface velocity heterogeneity from surface-based geophysical reflection survey data is a problem of significant interest in seismic and ground-penetrating radar (GPR) research. A method to effectively address this problem has been recently presented, but our knowledge regarding the resolution of the estimated parameters is still inadequate. Here we examine this issue using an analytical approach that is based on the realistic assumption that the subsurface velocity structure can be characterized as a band-limited scale-invariant medium. Our work importantly confirms recent numerical findings that the inversion of seismic or GPR reflection data for the geostatistical properties of the probed subsurface region is sensitive to the aspect ratio of the velocity heterogeneity and to the decay of its power spectrum, but not to the individual values of the horizontal and vertical correlation lengths.
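A band-limited scale-invariant medium is commonly parametrized by a power-law (von Kármán-type) velocity power spectrum; in generic notation (not reproduced from the paper):

```latex
% A common parametrization of an anisotropic, band-limited
% scale-invariant velocity field: a von Karman-type power spectrum
% with horizontal and vertical correlation lengths a_x, a_z and a
% spectral decay exponent \nu,
\[
  P(k_x, k_z) \;\propto\;
  \bigl( 1 + k_x^{2} a_x^{2} + k_z^{2} a_z^{2} \bigr)^{-\nu}.
\]
% In these terms, the abstract's finding is that the aspect ratio
% a_x / a_z and the decay exponent \nu are resolvable from reflection
% data, while a_x and a_z individually are not.
```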
Abstract:
In this study we propose an evaluation of the angular effects that alter the spectral response of the land cover across multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
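A representative kernel-based distance between two pixel distributions is the maximum mean discrepancy (MMD); the abstract does not name its exact measure, so the sketch below is one standard choice, not necessarily the one used:

```python
# Biased maximum mean discrepancy (MMD^2) with an RBF kernel, a standard
# kernel-based distance between two samples of pixels. The abstract does
# not name its exact measure; this is a representative choice.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # pixels (8 bands) at one angle
Y = rng.normal(size=(200, 8)) + 0.3     # shifted distribution, new angle
print(mmd2(X, Y))                       # grows with the angular shift
```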
Abstract:
Valpha14 invariant natural killer T (Valpha14i NKT) cells are a unique lineage of mouse T cells that share properties with both NK cells and memory T cells. Valpha14i NKT cells recognize CD1d-associated glycolipids via a semi-invariant T cell receptor (TCR) composed of an invariant Valpha14-Jalpha18 chain paired preferentially with a restricted set of TCRbeta chains. During development in the thymus, rare CD4+ CD8+ (DP) cortical thymocytes that successfully rearrange the semi-invariant TCR are directed to the Valpha14i NKT cell lineage via interactions with CD1d-associated endogenous glycolipids expressed by other DP thymocytes. As they mature, Valpha14i NKT lineage cells upregulate activation markers such as CD44 and subsequently express NK-related molecules such as NK1.1 and members of the Ly-49 inhibitory receptor family. The developmental program of Valpha14i NKT cells is critically regulated by a number of signaling cues that have little or no effect on conventional T cell development, such as the Fyn/SAP/SLAM pathway, NFkappaB and T-bet transcription factors, and the cytokine IL-15. The unique developmental requirements of Valpha14i NKT cells may represent a paradigm for other unconventional T cell subsets that are positively selected by agonist ligands expressed on hematopoietic cells.
Abstract:
The mature TCR is composed of a clonotypic heterodimer (alpha beta or gamma delta) associated with the invariant CD3 components (gamma, delta, epsilon and zeta). There is now considerable evidence that more immature forms of the TCR-CD3 complex (consisting of either CD3 alone or CD3 associated with a heterodimer of TCR beta and pre-T alpha) can be expressed at the cell surface on early thymocytes. These pre-TCR complexes are believed to be necessary for the ordered progression of early T cell development. We have analyzed in detail the expression of both the pre-TCR and CD3 complex at various stages of adult thymus development. Our data indicate that all CD3 components are already expressed at the mRNA level by the earliest identifiable (CD4lo) thymic precursor. In contrast, genes encoding the pre-TCR complex (pre-T alpha and fully rearranged TCR beta) are first expressed at the CD44loCD25+CD4-CD8- stage. Detectable surface expression of both CD3 and TCR beta are delayed relative to expression of the corresponding genes, suggesting the existence of other (as yet unidentified) components of the pre-TCR complex.
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in the Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, since they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We evaluate the statistical distribution of the sweep intensities using goodness-of-fit tests in order to find a model probability distribution that fits the data, so that an appropriate formulation for the standard deviation can be applied. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio; hence the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and to the averaging related to dwell time selection and off-line data transformation (averaging of several sweeps), and we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; the uncertainty of these methods likewise depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector field ICPMS.
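The difference between methods (1) and (2) is easy to reproduce on synthetic sweep data (illustrative numbers only, not the paper's data): noise in the denominator biases the mean of the per-sweep ratios, while the ratio of the mean intensities is unaffected.

```python
# Methods (1) and (2) above on synthetic sweep intensities (illustrative
# only): with zero-mean noise on both isotopes, the ratio of the means
# stays accurate, while the mean of the per-sweep ratios is biased by
# fluctuations in the denominator (Jensen's inequality).
import numpy as np

rng = np.random.default_rng(0)
n_sweeps = 100_000
u_true = rng.uniform(5e5, 1.5e6, n_sweeps)           # fluctuating signal
u238 = u_true + rng.normal(0, 1e5, n_sweeps)         # noisy denominator
pb206 = 0.1 * u_true + rng.normal(0, 1e4, n_sweeps)  # true ratio 0.1

print(pb206.mean() / u238.mean())   # method (1): ~0.100
print((pb206 / u238).mean())        # method (2): biased upward
```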
Abstract:
ABSTRACT: BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
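The quantity underlying a decision curve has a compact standard form: at risk threshold pt, the net benefit of treating everyone whose predicted risk exceeds pt is TP/n - (FP/n) * pt/(1 - pt). A minimal sketch (the standard DCA formula, not code from the paper):

```python
# Standard net benefit at risk threshold pt -- the quantity a decision
# curve plots against pt. Standard DCA formula, not the paper's code;
# the "overall net benefit" discussed above additionally combines this
# with the analogous net benefit for the untreated.
import numpy as np

def net_benefit_treated(y, p, pt):
    n = len(y)
    treat = p >= pt
    tp = np.sum(treat & (y == 1))      # treated and truly diseased
    fp = np.sum(treat & (y == 0))      # treated but healthy
    return tp / n - (fp / n) * pt / (1 - pt)

y = np.array([1, 0, 1, 1, 0, 0, 0, 1])                   # toy outcomes
p = np.array([0.9, 0.2, 0.6, 0.4, 0.3, 0.1, 0.7, 0.8])   # predicted risks
print([round(net_benefit_treated(y, p, t), 3) for t in (0.1, 0.3, 0.5)])
```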
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem considering only linear arithmetic. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in many applications, as we illustrate with several examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow one to efficiently prove unsatisfiability for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both the academic and the industrial world.
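A toy version of the linearization idea (my illustration; the paper's encoding is more elaborate): a non-linear monomial x*y becomes linear once one factor is case-split over a finite candidate domain, after which a linear-arithmetic SMT solver can search for a satisfying assignment.

```python
# Toy linearization of a non-linear constraint for a linear-arithmetic
# SMT solver (illustrative only; the paper's encoding is more elaborate).
# The product x*y is replaced by m, with x case-split over [-5, 5].
# Requires the z3-solver package.
from z3 import Int, Solver, Or, And, sat

x, y, m = Int('x'), Int('y'), Int('m')
s = Solver()
s.add(Or([And(x == v, m == v * y) for v in range(-5, 6)]))  # m = x*y, linearized
s.add(m + 3 * y == 7, x - y >= 1)     # originally non-linear: x*y + 3y = 7
if s.check() == sat:
    print(s.model())                  # e.g. x = 4, y = 1, m = 4
```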
Abstract:
We study the families of periodic orbits of the spatial isosceles 3-body problem (for small enough values of the mass lying on the symmetry axis) coming via the analytic continuation method from periodic orbits of the circular Sitnikov problem. Using the first integral of the angular momentum, we reduce the dimension of the phase space of the problem by two units. Since periodic orbits of the reduced isosceles problem generate invariant two-dimensional tori of the nonreduced problem, the analytic continuation of periodic orbits of the (reduced) circular Sitnikov problem at this level becomes the continuation of invariant two-dimensional tori from the circular Sitnikov problem to the nonreduced isosceles problem, each torus filled with periodic or quasi-periodic orbits. These tori are not KAM tori but merely isotropic, since we are dealing with a three-degrees-of-freedom system. The continuation of periodic orbits is done in two different ways: the first goes directly from the reduced circular Sitnikov problem to the reduced isosceles problem; the second uses two steps, first continuing the periodic orbits from the reduced circular Sitnikov problem to the reduced elliptic Sitnikov problem, and then continuing those periodic orbits of the reduced elliptic Sitnikov problem to the reduced isosceles problem. The continuation in one or two steps produces different results. This work is purely analytic and uses the variational equations in order to apply Poincaré's continuation method.
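The continuation criterion invoked here is classical; stated in generic form (not the paper's notation):

```latex
% Poincare's continuation criterion: along a T-periodic orbit
% \varphi(t) of \dot{x} = f(x, \mu), integrate the variational
% equations
\[
  \dot{M}(t) = Df\bigl(\varphi(t)\bigr)\, M(t), \qquad M(0) = I,
\]
% to obtain the monodromy matrix M(T). If, after discarding the
% trivial multipliers forced by autonomy and by the first integrals
% (here, the angular momentum), no remaining eigenvalue of M(T)
% equals 1, the periodic orbit persists for nearby parameter values
% \mu (here, the small mass on the symmetry axis).
```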