949 results for Function theory
Abstract:
We describe an ab initio nonperturbative time-dependent R-matrix theory for ultrafast atomic processes. This theory enables investigations of the interaction of few-femtosecond and attosecond laser pulses with complex multielectron atoms and atomic ions. A derivation and analysis of the basic equations are given, which propagate the atomic wave function in the presence of the laser field forward in time in the internal and external R-matrix regions. To verify the accuracy of the approach, we investigate two-photon ionization of Ne irradiated by an intense laser pulse and compare the present results with those obtained using the R-matrix Floquet method and an alternative time-dependent method. We also demonstrate the capability of the current approach by applying it to the study of two-dimensional momentum distributions of electrons ejected from Ne by a sequence of 2-as light pulses in the presence of a 780 nm laser field.
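For orientation, the following is a schematic of the time-dependent Schrödinger equation such a theory propagates in both R-matrix regions; the length-gauge dipole coupling and atomic units are assumptions made here for illustration, not details quoted from the paper.

```latex
% Schematic TDSE for an N-electron atom in a laser field E(t)
% (length gauge, atomic units assumed):
\[
  i\,\frac{\partial \Psi(\mathbf{X},t)}{\partial t}
  = \Bigl[\, H_{\mathrm{atom}} + \mathbf{E}(t)\cdot\sum_{j=1}^{N}\mathbf{r}_j \Bigr]
    \Psi(\mathbf{X},t),
\]
% where X collects the coordinates of all N electrons.
```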
Abstract:
For a digital echo canceller it is desirable to reduce the adaptation time, during which the transmission of useful data is not possible. LMS is a suboptimal algorithm in this case, as the signals involved are statistically non-Gaussian. Walach and Widrow (IEEE Trans. Inform. Theory 30 (2) (March 1984) 275-283) investigated the use of a power of 4, while other research established algorithms with arbitrary integer (Pei and Tseng, IEEE J. Selected Areas Commun. 12 (9) (December 1994) 1540-1547) or non-quadratic power (Shah and Cowan, IEE Proc.-Vis. Image Signal Process. 142 (3) (June 1995) 187-191). This paper suggests that continuous and automatic adaptation of the error exponent gives a more satisfactory result. The family of cost function adaptation (CFA) stochastic gradient algorithms proposed allows an increase in convergence rate and an improvement in residual error. As a special case, the staircase CFA algorithm is presented first; the smooth CFA algorithm is then developed. Implementation details are also discussed. Simulation results are provided to show the properties of the proposed family of algorithms. (C) 2000 Elsevier Science B.V. All rights reserved.
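As a rough illustration of the idea, here is a minimal sketch of an LMS-type adaptive filter whose cost |e|^p has its exponent lowered in discrete steps during adaptation, in the spirit of the staircase CFA variant; the tap count, step size, exponent schedule and toy echo path are invented for illustration and are not taken from the paper.

```python
import numpy as np

def cfa_lms(x, d, n_taps=8, mu=0.01, p_start=4.0, p_end=2.0, p_steps=4):
    """Sketch of a staircase CFA adaptive filter minimizing J = |e|^p.

    The error exponent p is stepped down from p_start to p_end in
    p_steps equal segments (an illustrative schedule, not the paper's).
    The update uses the stochastic gradient of |e|^p, which is
    p * |e|**(p-1) * sign(e) times the input vector.
    """
    w = np.zeros(n_taps)
    n = len(x)
    schedule = np.repeat(np.linspace(p_start, p_end, p_steps),
                         n // p_steps + 1)[:n]
    e_hist = np.zeros(n)
    for k in range(n_taps - 1, n):
        u = x[k - n_taps + 1:k + 1][::-1]   # tap-delay-line input vector
        e = d[k] - w @ u                    # a priori error
        p = schedule[k]
        w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * u
        e_hist[k] = e
    return w, e_hist

# Toy identification run: unknown 8-tap echo path, white input.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = 0.5 * rng.standard_normal(8)
d = np.convolve(x, h)[:5000] + 1e-3 * rng.standard_normal(5000)
w, e = cfa_lms(x, d)
```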
Abstract:
Several studies have reported imitative deficits in autism spectrum disorder (ASD). However, it is still debated whether imitative deficits are specific to ASD or shared with clinical groups with similar mental impairment and motor difficulties. We investigated whether imitative tasks can be used to discriminate children with ASD from typically developing (TD) children and children with general developmental delay (GDD). We applied discriminant function analyses to the performance of these groups on three imitation tasks and on tests of dexterity, motor planning, verbal skills, and theory of mind (ToM). The analyses revealed two significant dimensions. The first represented impairment of dexterity and verbal ability and discriminated TD from GDD children. Once these differences were accounted for, differences in ToM and the three imitation tasks accounted for a significant proportion of the remaining intergroup variance and discriminated the ASD group from the other groups. Further analyses revealed that inclusion of the imitative tasks increased the specificity and sensitivity of ASD classification and that the imitative tasks considered alone were able to reliably discriminate the ASD, TD and GDD groups. The results suggest that imitation and theory-of-mind impairments in autism may stem from a common domain of origin, separate from general cognitive and motor skills.
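To make the analysis concrete, below is a minimal sketch of a discriminant function analysis of this kind using scikit-learn; the data are synthetic and the seven feature columns merely stand in for the dexterity, motor-planning, verbal, ToM and imitation scores.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical feature matrix: one row per child, columns standing in for
# dexterity, motor planning, verbal skills, ToM and three imitation tasks.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 7))
y = np.repeat(["ASD", "TD", "GDD"], 30)   # group labels

# With three groups there are at most two discriminant dimensions,
# matching the two significant dimensions reported in the abstract.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
scores = lda.transform(X)                 # children projected onto the two dimensions
print(lda.explained_variance_ratio_)      # between-group variance per dimension
```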
Abstract:
This paper reviews Alfred Marshall's attempts to reconcile increasing returns and competition, from the early economic writings to the later editions of his Principles. It is shown that while Marshall's final solution to the problem involved naming external economies as the cause of increasing returns in a regime of competition, both the life cycle of the firm and internal economies remained necessary to his argument. Their function was to give some operational content to the elusive concept of external economies.
Abstract:
A theory of strongly interacting Fermi systems of a few particles is developed. At high excitation energies (a few times the single-particle level spacing) these systems are characterized by an extreme degree of complexity due to strong mixing of the shell-model-based many-particle basis states by the residual two-body interaction. This regime can be described as many-body quantum chaos. Practically, it occurs when the excitation energy of the system is greater than a few single-particle level spacings near the Fermi energy. Physical examples of such systems are compound nuclei, heavy open-shell atoms (e.g. rare earths) and multicharged ions, molecules, clusters and quantum dots in solids. The main quantity of the theory is the strength function, which describes the spreading of the eigenstates over many-particle basis states (determinants) constructed using the shell-model orbital basis. A nonlinear equation for the strength function is derived, which enables one to describe the eigenstates without diagonalization of the Hamiltonian matrix. We show how to use this approach to calculate mean orbital occupation numbers and matrix elements between chaotic eigenstates, and introduce typical statistical variables such as temperature in an isolated microscopic Fermi system of a few particles.
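As context for the strength function, the following sketch gives its standard definition and the Breit-Wigner (golden-rule) shape it is commonly found to take; the paper's nonlinear equation generalizes beyond this simple limit, and the notation here is assumed rather than quoted.

```latex
% Strength function of a basis state |i> over exact eigenstates |k>:
%   F_i(E) = \sum_k |\langle k | i \rangle|^2 \, \delta(E - E_k).
% In the simplest golden-rule regime it takes the Breit--Wigner form
\[
  F_i(E) \simeq \frac{1}{2\pi}\,
    \frac{\Gamma_i}{(E - E_i)^2 + \Gamma_i^2/4},
  \qquad
  \Gamma_i \simeq 2\pi\,\overline{|H_{ik}|^2}\,\rho_f,
\]
% with \rho_f the density of final basis states coupled to |i>.
```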
Abstract:
The key questions of uniqueness and existence in time-dependent density-functional theory are usually formulated only for potentials and densities that are analytic in time. Simple examples, standard in quantum mechanics, lead, however, to nonanalyticities. We reformulate these questions in terms of a nonlinear Schrödinger equation with a potential that depends nonlocally on the wave function.
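Schematically, the reformulation amounts to an equation of the following form; the notation is assumed here for illustration and is not quoted from the paper.

```latex
\[
  i\,\partial_t \psi(\mathbf{r},t)
  = \Bigl( -\tfrac{1}{2}\nabla^2 + v[\psi](\mathbf{r},t) \Bigr)\psi(\mathbf{r},t),
\]
% where the potential v[\psi] depends nonlocally (in space and time) on
% the wave function itself, making the equation nonlinear.
```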
Abstract:
The purpose of this study is to survey the use of networks and network-based methods in systems biology. This study starts with an introduction to graph theory and basic measures that allow one to quantify structural properties of networks. Then, the authors present important network classes and gene networks as well as methods for their analysis. In the last part of this study, the authors review approaches that aim at analysing the functional organisation of gene networks and the use of networks in medicine. In addition, the authors advocate networks as a systematic approach to general problems in systems biology, because networks are capable of assuming multiple roles that are beneficial in connecting experimental data with a functional interpretation in biological terms.
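For a flavour of the basic structural measures such a survey introduces, here is a small sketch using the networkx library on a random graph; the graph and its parameters are illustrative only.

```python
import networkx as nx

# Toy example of common structural measures, computed on a small
# Erdos-Renyi random graph (not data from the study).
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=1)
print(nx.density(G))                           # edge density
print(nx.average_clustering(G))                # mean clustering coefficient
degrees = [d for _, d in G.degree()]           # degree sequence
print(sum(degrees) / len(degrees))             # average degree
if nx.is_connected(G):
    print(nx.average_shortest_path_length(G))  # characteristic path length
```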
Abstract:
Belief revision characterizes the process of revising an agent's beliefs when receiving new evidence. In the field of artificial intelligence, revision strategies have been extensively studied in the context of logic-based formalisms and probability kinematics. However, so far there is not much literature on this topic in evidence theory. In contrast, the combination rules proposed so far in the theory of evidence, especially Dempster's rule, are symmetric. They rely on the basic assumption that the pieces of evidence being combined are on a par, i.e. play the same role. When one source of evidence is less reliable than another, it is possible to discount it, and a symmetric combination operation is still used. In the case of revision, the idea is to let the prior knowledge of an agent be altered by some input information. The change problem is thus intrinsically asymmetric: assuming the input information is reliable, it should be retained, whilst the prior information should be changed minimally to that effect. To deal with this issue, this paper defines the notion of revision for the theory of evidence in such a way as to bring together the probabilistic and logical views. Several previously proposed revision rules are reviewed, and we advocate one of them as better corresponding to the idea of revision. It is extended to cope with inconsistency between prior and input information. It reduces to Dempster's rule of combination, just as revision in the sense of Alchourrón, Gärdenfors, and Makinson (AGM) reduces to expansion, when the input is strongly consistent with the prior belief function. Properties of this revision rule are also investigated, and it is shown to generalize Jeffrey's rule of updating, Dempster's rule of conditioning, and a form of AGM revision.
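For reference, Dempster's rule of combination mentioned above is well defined and easy to state in code; the following sketch combines two mass functions over a toy frame of discernment (the example frame and masses are invented).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Mass functions are dicts mapping frozenset focal elements to masses
    summing to 1. Raises if the two pieces of evidence are totally
    conflicting (K = 1), where the rule is undefined.
    """
    combined = {}
    conflict = 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two mass functions on the toy frame {rain, sun}:
m1 = {frozenset({"rain"}): 0.7, frozenset({"rain", "sun"}): 0.3}
m2 = {frozenset({"sun"}): 0.4, frozenset({"rain", "sun"}): 0.6}
print(dempster_combine(m1, m2))
```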
Abstract:
This paper and its companion paper describe the comparison between a one-dimensional theoretical model of a hydrogen discharge in a magnetic multipole plasma source and experimental measurements of the plasma parameters. The discharge chamber, described here, has been designed to produce significant densities of H- ions by incorporating a weak transverse field through the discharge to obtain electron cooling so as to maximize H- production. Langmuir probes are used to monitor the plasma, determining the ion density, the electron density and temperature, and the plasma potential. The negative-ion density is measured by photo-detachment of the extra electron using an intense laser beam. The model, described in the companion paper, uses the presented source geometry to calculate these plasma quantities as a function of the major arc parameters, namely the arc current and voltage and the gas pressure. Good agreement is obtained between theory and experiment as a function of position and arc parameters.
Abstract:
We study the charge transfer between colliding ions, atoms, or molecules within time-dependent density functional theory. Two particular cases are presented: the collision between a proton and a helium atom, and between a gold atom and a butane molecule. In the first case, proton kinetic energies between 16 keV and 1.2 MeV are considered, with impact parameters between 0.31 and 1.9 angstrom. The partial transfer of charge is monitored in time. The total cross section is obtained as a function of the proton kinetic energy. In the second case, we analyze one trajectory and discuss spin-dependent charge transfer between the different fragments.
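The total cross section referred to is presumably assembled from the impact-parameter dependence of the transfer probability in the standard way; the following formula is that textbook construction, not an expression quoted from the paper.

```latex
% P(b, E) is the charge-transfer probability at impact parameter b and
% proton kinetic energy E, extracted from the time-propagated density:
\[
  \sigma(E) = 2\pi \int_0^{\infty} P(b, E)\, b \,\mathrm{d}b .
\]
```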
Abstract:
Quantitative scaling relationships among body mass, temperature and metabolic rate of organisms are still controversial, and resolution may be further complicated by the use of different, and possibly inappropriate, approaches to statistical analysis. We propose the application of a modelling strategy based on Akaike's information criterion and non-linear model fitting (nlm). Accordingly, we collated and modelled available intraspecific data on the individual standard metabolic rate of Antarctic microarthropods as a function of body mass (M), temperature (T), species identity (S) and the high-rank taxa to which species belong (G), and tested predictions from metabolic scaling theory (mass-metabolism allometric exponent b = 0.75, activation energy range 0.2-1.2 eV). We also performed allometric analysis based on logarithmic transformations (lm). Conclusions from the lm and nlm approaches were different. The best-supported models from lm incorporated T, M and S, and the estimate of the allometric scaling exponent linking body mass and metabolic rate was 0.696 +/- 0.105 (mean +/- 95% CI). In contrast, the four best-supported nlm models suggested that both the scaling exponent and the activation energy vary significantly across the high-rank taxa (Collembola, Cryptostigmata, Mesostigmata and Prostigmata) to which species belong, with mean values of b ranging from about 0.6 to 0.8. We therefore reached two conclusions: (1) published analyses of arthropod metabolism based on logarithmic data may be biased by data transformation; (2) non-linear models applied to Antarctic microarthropod metabolic rate suggest that intraspecific scaling of standard metabolic rate is highly variable and can be characterised by scaling exponents that vary greatly within taxa, which may have biased previous interspecific comparisons that neglected intraspecific variability.
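Below is a minimal sketch of the kind of non-linear fit involved, assuming the standard metabolic-theory model B = b0 M^b exp(-E/kT) (written here in a temperature-centred form for numerical stability) and synthetic data; none of the numbers come from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5   # Boltzmann constant (eV/K)
T0 = 278.15      # reference temperature (K) for the centred Arrhenius term

def mte_model(X, b0, b, E):
    """B = b0 * M**b * exp(-(E/k)(1/T - 1/T0)); centring on T0 decouples
    the normalisation b0 from the activation energy E during fitting."""
    M, T = X
    return b0 * M**b * np.exp(-(E / K_B) * (1.0 / T - 1.0 / T0))

# Synthetic data loosely in the spirit of microarthropods (mass in g, T in K).
rng = np.random.default_rng(42)
M = rng.uniform(1e-6, 1e-4, 200)
T = rng.uniform(268.0, 288.0, 200)
B = mte_model((M, T), 1.0, 0.75, 0.65) * rng.lognormal(0.0, 0.1, 200)

popt, _ = curve_fit(mte_model, (M, T), B, p0=(1.0, 0.7, 0.5))
print("b = %.3f, E = %.3f eV" % (popt[1], popt[2]))
```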
Abstract:
We introduce a time-dependent R-matrix theory generalized to describe double-ionization processes. The method is used to investigate two-photon double ionization of He by intense XUV laser radiation. We combine a detailed B-spline-based wave-function description in an extended inner region with a single-electron outer region containing channels representing both single ionization and double ionization. A comparison of wave-function densities for different box sizes demonstrates that the flow between the two regions is described with excellent accuracy. The obtained two-photon double-ionization cross sections are in excellent agreement with other available cross sections. Compared to calculations fully contained within a finite inner region, the present calculations can be propagated beyond the time it takes the slowest electron to reach the boundary.
Abstract:
Recent experiments on Au break junctions [Phys. Rev. Lett. 88 (2002) 216803] have characterized the nonlinear conductance of stretched short Au nanowires. They reveal, in the voltage range 10-20 meV, the signatures of dissipation effects, likely due to phonons in the nanowire, reducing the conductance below the quantized value of 2e(2)/h. We present here a theory, based on a model tight-binding Hamiltonian and on non-equilibrium Green's function techniques, which accounts for the main features of the experiment. The theory helps in revealing details of the experiment which need to be addressed with a more realistic, less idealized theoretical framework. (C) 2004 Elsevier B.V. All rights reserved.
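The quantized value quoted is the Landauer conductance of a single spin-degenerate channel; in this standard picture (stated here for orientation, not reproduced from the paper), phonon-induced backscattering lowers the transmission below unity.

```latex
\[
  G = \frac{2e^2}{h}\, T(E_F) \;<\; \frac{2e^2}{h},
\]
% where T(E_F) is the transmission of the single open channel at the
% Fermi energy, which NEGF techniques of the kind used in the paper
% can evaluate in the presence of electron-phonon coupling.
```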
Abstract:
We have excited mid-infrared surface plasmons in two YBCO thin films of contrasting properties using attenuated total reflection of light and found that the imaginary part of the dielectric function decreases linearly with reduction in temperature. This result is in contrast with the commonly reported conclusion of infrared normal-reflectance studies. If sustained, it may clarify the problem of understanding the normal-state properties of YBCO and the other cuprates. The dielectric function of the films, epsilon = epsilon(1) + i epsilon(2), was determined between room temperature and 80 K: epsilon(1) was found to be only slightly temperature dependent but somewhat sample dependent, probably as a result of surface and grain-boundary contamination. The imaginary part, epsilon(2) (and the real part of the conductivity, sigma(1)), decreased linearly with reduction in temperature in both films. The results obtained were: for film 1, epsilon(1) = -14.05 - 0.0024T and epsilon(2) = 4.11 + 0.086T; for film 2, epsilon(1) = -24.09 + 0.0013T and epsilon(2) = 7.66 + 0.067T, where T is the temperature in kelvin. An understanding of the results is offered in terms of temperature-dependent intrinsic intragrain inelastic scattering and temperature-independent contributions: elastic and inelastic grain-boundary scattering and optical interband (or localised-charge) absorption. The relative contribution of each is estimated. A key conclusion is that the interband (or localised-charge) absorption is only about 10%. Most importantly, the intrinsic scattering rate, 1/tau, decreases linearly with fall in temperature, T, in a regime where current theory predicts dependence on frequency, omega, to dominate. The coupling constant, lambda, between the charge carriers and the thermal excitations has a value of 1.7, some fivefold greater than the far-infrared value. These results imply a need to restate the phenomenology of the normal state of high-temperature superconductors and to seek a corresponding theoretical understanding.
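Evaluating the quoted fits at the end points of the measured range makes the reported linear decrease of epsilon(2) concrete; the short sketch below simply substitutes temperatures into the two published fit lines.

```python
# Evaluate the quoted linear fits at two temperatures to illustrate the
# reported decrease of epsilon_2 on cooling (coefficients from the abstract).
def eps_film1(T):
    return (-14.05 - 0.0024 * T) + 1j * (4.11 + 0.086 * T)

def eps_film2(T):
    return (-24.09 + 0.0013 * T) + 1j * (7.66 + 0.067 * T)

for T in (300.0, 80.0):   # room temperature vs. lowest measured temperature
    print(T, eps_film1(T).imag, eps_film2(T).imag)
# epsilon_2 of film 1 falls from ~29.9 at 300 K to ~11.0 at 80 K.
```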
Abstract:
Knowledge is an important component in many intelligent systems. Since items of knowledge in a knowledge base can be conflicting, especially if there are multiple sources contributing to the knowledge in this base, significant research efforts have been made on developing inconsistency measures for knowledge bases and on developing merging approaches. Most of these efforts start with flat knowledge bases. However, in many real-world applications, items of knowledge are not perceived with equal importance; rather, weights (which can be used to indicate importance or priority) are associated with items of knowledge. Therefore, measuring the inconsistency of a knowledge base with weighted formulae, as well as merging such bases, is an important but difficult task. In this paper, we derive a numerical characteristic function from each knowledge base with weighted formulae, based on the Dempster-Shafer theory of evidence. Using these functions, we are able to measure the inconsistency of the knowledge base in a convenient and rational way, and to merge multiple knowledge bases with weighted formulae, even if the knowledge in these bases is inconsistent. Furthermore, by examining whether multiple knowledge bases are dependent or independent, they can be combined in different ways using their characteristic functions, which cannot be handled (or at least has never been considered) in classic knowledge-base merging approaches in the literature.
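The paper's exact characteristic-function construction is not reproduced here, but the following sketch conveys the general idea in code: each weighted formula is encoded as a Dempster-Shafer mass function over the worlds satisfying it, and the Dempster conflict of their combination serves as an inconsistency score. The frame, formulas and weights are all invented for illustration.

```python
from itertools import product

WORLDS = frozenset(range(4))   # hypothetical frame: 4 possible worlds

def formula_to_mass(models, weight):
    """Mass `weight` on the formula's models, remainder on the whole frame."""
    m = {frozenset(models): weight}
    if weight < 1.0:
        m[WORLDS] = m.get(WORLDS, 0.0) + (1.0 - weight)
    return m

def conflict(m1, m2):
    """Dempster conflict K: total mass on empty intersections."""
    return sum(p * q for (a, p), (b, q) in product(m1.items(), m2.items())
               if not (a & b))

# Two weighted formulas with disjoint models are highly conflicting:
m_phi = formula_to_mass({0, 1}, 0.9)    # formula phi, weight 0.9
m_psi = formula_to_mass({2, 3}, 0.8)    # formula psi, weight 0.8
print(conflict(m_phi, m_psi))           # 0.72 -> substantial inconsistency
```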