37 results for DEPENDENT QUANTUM PROBLEMS
Abstract:
Within an effective field theory framework, we obtain an expression, with O(1/m^2) accuracy, for the energies of the gluonic excitations between heavy quarks, which holds beyond perturbation theory. For the singlet heavy quark-antiquark energy, in particular, we also obtain an expression in terms of Wilson loops. This provides, twenty years after the seminal work of Eichten and Feinberg, the first complete expression for the heavy quarkonium potential up to O(1/m^2) for pure gluodynamics. Several errors present in the previous literature (including in the work of Eichten and Feinberg) have been corrected. We also briefly discuss the power counting of NRQCD in the nonperturbative regime.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when the training set is sufficiently large.
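The decoding step described above (assigning the label of the class whose code word is closest to the binary outputs) can be sketched as follows. The one-vs-all coding matrix and Hamming decoding are illustrative choices, not the problem-dependent subclass design proposed in the paper:

```python
import numpy as np

# Hypothetical 4-class problem with a one-vs-all ECOC coding matrix:
# each row is a class code word, each column a binary problem.
code_matrix = np.array([
    [+1, -1, -1, -1],
    [-1, +1, -1, -1],
    [-1, -1, +1, -1],
    [-1, -1, -1, +1],
])

def ecoc_decode(binary_outputs, code_matrix):
    """Assign the label of the class whose code word is closest
    (in Hamming distance) to the vector of binary classifier outputs."""
    binary_outputs = np.asarray(binary_outputs)
    # Hamming distance between the output vector and each code word
    distances = np.sum(code_matrix != binary_outputs, axis=1)
    return int(np.argmin(distances))

# Example: the binary classifiers output [-1, -1, +1, -1] -> class 2
print(ecoc_decode([-1, -1, +1, -1], code_matrix))  # -> 2
```

With a longer, redundant code matrix the same decoder corrects individual binary-classifier errors, which is the error-correcting property the ECOC framework exploits.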
Abstract:
Rigorous quantum dynamics calculations of reaction rates and initial state-selected reaction probabilities of polyatomic reactions can be efficiently performed within the quantum transition state concept, employing flux correlation functions and wave packet propagation utilizing the multi-configurational time-dependent Hartree approach. Here, analytical formulas and a numerical scheme extending this approach to the calculation of state-to-state reaction probabilities are presented. The formulas derived facilitate the use of three different dividing surfaces: two dividing surfaces located in the product and reactant asymptotic regions provide full state resolution, while a third dividing surface placed in the transition state region can be used to define an additional flux operator. The eigenstates of the corresponding thermal flux operator then correspond to vibrational states of the activated complex. By transforming these states to reactant and product coordinates and propagating them into the respective asymptotic regions, the full scattering matrix can be obtained. To illustrate the new approach, test calculations are presented for the D + H2(ν, j) → HD(ν′, j′) + H reaction at J = 0.
Abstract:
We study the details of electronic transport related to the atomistic structure of silicon quantum dots embedded in a silicon dioxide matrix using ab initio calculations of the density of states. Several structural and composition features of quantum dots (QDs), such as diameter and amorphization level, are studied and correlated with transport under the transfer Hamiltonian formalism. The current is strongly dependent on the QD density of states and on the conduction gap, both of which depend on the dot diameter. In particular, as size increases, the available states inside the QD increase, while the QD band gap decreases due to relaxation of quantum confinement. Both effects contribute to increasing the current with the dot size. In addition, the valence band offset between the band edges of the QD and the silica (and, to a lesser extent, the conduction band offset) increases with the QD diameter up to the theoretical value corresponding to planar heterostructures, thus decreasing the tunneling transmission probability and hence the total current. We discuss the influence of these parameters on electron and hole transport, evidencing a correlation between the electron (hole) barrier value and the electron (hole) current, and obtaining a general enhancement of the electron (hole) transport for larger (smaller) QDs. Finally, we show that crystalline and amorphous structures exhibit enhanced probability of hole and electron current, respectively.
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process is positive definite, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less well known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
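The first-passage problem mentioned above can be explored numerically. The following is a minimal Monte Carlo sketch for the Feller (square-root) process dX = k(θ − X) dt + σ√X dW; the parameter values, threshold, and Euler discretization are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_time(x0, level, k=1.0, theta=1.0, sigma=0.5,
                       dt=1e-2, t_max=20.0):
    """Euler scheme for dX = k*(theta - X) dt + sigma*sqrt(X) dW.
    Truncating X at zero inside the square root keeps the diffusion
    term well-defined near the origin. Returns the first time the
    path falls below `level`, or t_max if it never does."""
    x, t = x0, 0.0
    while t < t_max:
        if x <= level:
            return t
        dw = rng.normal(0.0, np.sqrt(dt))
        x += k * (theta - x) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
        t += dt
    return t_max

# Monte Carlo estimate of the mean first-passage time
# from x0 = 2.0 down to the level 0.5
times = [first_passage_time(2.0, 0.5) for _ in range(100)]
print(np.mean(times))
```

Paths are only checked against the level at grid points, so this crude estimate is biased upward; it is meant only to make the first-passage question concrete.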
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is taken as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution thus devised is endowed with a component corresponding to the well-known regularized solution of Tikhonov (1977).
Abstract:
N = 1 designs imply repeated recordings of the behaviour of the same experimental unit, and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the evidence available, there does not seem to be an optimal technique for single-case data analysis.
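A randomization test for a single-case AB design can be sketched as follows, with the intervention point as the randomized element: the observed phase difference is compared against the differences obtained under all admissible intervention points. The data series and the minimum phase length are illustrative assumptions:

```python
import numpy as np

# Illustrative single-case series: phase A (baseline), then phase B
data = np.array([3, 4, 2, 5, 4, 3, 8, 7, 9, 8, 10, 9], dtype=float)
observed_start = 6          # intervention (phase B) starts at index 6
min_phase = 3               # require at least 3 observations per phase

def ab_statistic(series, start):
    """Absolute difference between phase B and phase A means."""
    return abs(series[start:].mean() - series[:start].mean())

obs = ab_statistic(data, observed_start)

# Reference distribution over all admissible intervention points
starts = range(min_phase, len(data) - min_phase + 1)
ref = [ab_statistic(data, s) for s in starts]

# p-value: proportion of admissible start points whose statistic is
# at least as extreme as the observed one
p_value = sum(r >= obs for r in ref) / len(ref)
print(p_value)
```

Note that the validity of the p-value rests on the intervention point having actually been randomized, and, as the abstract cautions, serial dependence in the data can still distort the test's error rates.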
Abstract:
We study a simple model of assigning indivisible objects (e.g., houses, jobs, offices, etc.) to agents. Each agent receives at most one object and monetary compensations are not possible. We completely describe all rules satisfying efficiency and resource-monotonicity. The characterized rules assign the objects in a sequence of steps such that at each step there is either a dictator or two agents "trade" objects from their hierarchically specified "endowments."
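The simplest member of the characterized family of rules is a serial dictatorship, in which agents pick in a fixed hierarchy and each takes the best remaining object. A minimal sketch (agent names, preferences, and objects are illustrative):

```python
def serial_dictatorship(hierarchy, preferences, objects):
    """hierarchy: ordered list of agents; preferences: agent -> list of
    objects ranked best-first. Each agent receives at most one object.
    Returns a dict mapping each agent to their assigned object."""
    remaining = set(objects)
    assignment = {}
    for agent in hierarchy:
        for obj in preferences[agent]:
            if obj in remaining:
                assignment[agent] = obj
                remaining.discard(obj)
                break
    return assignment

prefs = {
    "ann": ["house", "office", "job"],
    "bob": ["house", "job", "office"],
    "cay": ["job", "house", "office"],
}
# ann takes "house"; bob falls back to "job"; cay gets "office"
print(serial_dictatorship(["ann", "bob", "cay"], prefs,
                          ["house", "job", "office"]))
```

The rules characterized in the paper are more general, allowing a pair of agents to "trade" objects from specified endowments at some steps rather than a single dictator choosing.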
Abstract:
The decisions of many individuals and social groups, taken according to well-defined objectives, are causing serious social and environmental problems, in spite of following the dictates of economic rationality. There are many examples of serious problems for which there are not yet appropriate solutions, such as the management of scarce natural resources including aquifer water, or the distribution of space among incompatible uses. In order to solve these problems, the paper first characterizes the resources and goods involved from an economic perspective. Then, for each case, the paper notes that there is a serious divergence between individual and collective interests and, where possible, designs a procedure for solving the conflict of interests. With this procedure, the real opportunities for the application of economic theory are shown, especially the theory of collective goods and externalities. The limitations of conventional economic analysis are shown and the opportunity to correct the shortfalls is examined. Many environmental problems, such as climate change, have an impact on different generations that do not participate in present decisions. The paper shows that for these cases, the solutions suggested by economic theory are not valid. Furthermore, conventional methods of economic valuation (which usually help decision-makers) are unable to account for the existence of different generations and tend to overlook long-term impacts. The paper analyzes how economic valuation methods could account for the costs and benefits enjoyed by present and future generations. The paper studies an appropriate consideration of preferences for future consumption and the incorporation of sustainability as a requirement in social decisions, which implies not only more efficiency but also a fairer distribution between generations than the one implied by conventional economic analysis.
Abstract:
Planar polynomial vector fields which admit invariant algebraic curves, Darboux integrating factors or Darboux first integrals are of special interest. In the present paper we solve the inverse problem for invariant algebraic curves with a given multiplicity and for integrating factors, under generic assumptions regarding the (multiple) invariant algebraic curves involved. In particular, we prove, in this generic scenario, that the existence of a Darboux integrating factor implies Darboux integrability. Furthermore, we construct examples where the genericity assumption does not hold and show that the situation is different in such cases.