Abstract:
We present the finite-element method in its application to solving quantum-mechanical problems for diatomic molecules. Results for Hartree-Fock calculations of H_2 and Hartree-Fock-Slater calculations for molecules like N_2 and CO are presented. The accuracy achieved with fewer than 5000 grid points for the total energies of these systems is 10^-8 a.u., which is about two orders of magnitude better than the accuracy of any other available method.
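As a concrete illustration of the finite-element idea (though not of the paper's two-dimensional molecular code), the sketch below discretizes a one-dimensional Schrödinger problem with hat-function elements, reducing it to the generalized eigenvalue problem Hc = EMc; the box size, grid and harmonic test potential are invented for the example.

```python
# Minimal 1-D finite-element Schrödinger solver with linear ("hat") elements,
# in atomic units; a toy sketch, not the paper's 2-D diatomic calculation.
import numpy as np
from scipy.linalg import eigh

n, L = 400, 20.0                       # interior nodes, box [-L/2, L/2]
h = L / (n + 1)
x = -L / 2 + h * np.arange(1, n + 1)   # interior node positions

# Element matrices: stiffness K (kinetic term) and consistent mass M.
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
M = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6.0

V = 0.5 * x**2                         # harmonic test potential
H = 0.5 * K + np.diag(h * V)           # kinetic + mass-lumped potential term

# Generalized eigenvalue problem H c = E M c; lowest levels ~ 0.5, 1.5, 2.5.
print(eigh(H, M, eigvals_only=True)[:4])
```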
Abstract:
The extension of the Periodic Table into the range of unknown atomic numbers above one hundred requires relativistic calculations. Their results are used to indicate probable values for X-ray transition lines, which will be useful for identifying the atomic species formed during collisions between accelerated ions and the target. If the half-lives of the isotopes are long, then the chemistry of these new species becomes an important question, which is reviewed here for E110, E111 and E112. The possible structural chemistry of the elements E108 to E112 is suggested. Finally, the effects of solvation on ions of the actinide and superheavy elements have been studied.
Abstract:
We present a new scheme to solve the time-dependent Dirac-Fock-Slater equation (TDDFS) for heavy many-electron ion-atom collision systems. Up to now, time-independent self-consistent molecular orbitals have been used to expand the time-dependent wavefunction, and rather complicated potential-coupling matrix elements have been neglected. Our idea is to minimize the potential coupling by using the time-dependent electronic density to generate the molecular basis functions. We present first results for 16 MeV S^{16+} on Ar.
Abstract:
The rejection of the European Constitution marks an important crystallization point for debate about the European Union (EU) and the integration process. The European Constitution was envisaged as the founding document of a renewed and enlarged European Union, and it was therefore assumed that it would find wide public support. Its rejection was not anticipated. The negative referenda in France and the Netherlands therefore led to a controversial debate about the more fundamental meaning and the consequences of the rejection, both for the immediate state of affairs and for the further integration process. The rejection of the Constitution and the controversy about its correct interpretation thus present an intriguing puzzle for political analysis. Although the treaty rejection was taken up widely in the field of European Studies, existing analyses have focused predominantly on explaining why the current situation occurred. Underlying these approaches is the premise that, by establishing the reasons for the rejection, it is possible to derive the ‘true’ meaning of the event for the EU integration process. In my paper I rely on an alternative, discourse-theoretical approach which aims to overcome the positivist perspective dominating the existing analyses. I argue that the meaning of the event ‘treaty rejection’ is not fixed or inherent to it but discursively constructed. The critical assessment of this concrete meaning-production is highly relevant, as the specific meaning attributed to the treaty rejection effectively constrains the scope of supposedly ‘reasonable’ options for action, both in the concrete situation and in the further European integration process more generally. I will argue that the overall framing suggests a fundamentally technocratic approach to governance on the part of the Commission. Political struggle and public deliberation are no longer foreseen, as the concrete solutions to the citizens’ general concerns are designed by supposedly apolitical experts. Through the communicative diffusion and the active implementation of this particular model of governance, the Commission shapes the future integration process more substantially than is obvious from its seemingly limited, immediate problem-solving orientation of overcoming the ‘constitutional crisis’. As the European Commission is a central actor in the discourse production, my analysis focuses on the specific interpretation of the situation put forward by the Commission. In order to work out the Commission’s particular take on the event, I conducted a frame analysis (following Benford and Snow) on a body of key sources produced in the context of coping with the treaty rejection.
Abstract:
We investigate solution sets of a special kind of linear inequality system. In particular, we derive characterizations of these sets in terms of minimal solution sets. The inequalities studied emerge as information inequalities in the context of Bayesian networks. This allows us to deduce properties of Bayesian networks that are important for causal inference.
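To make the kind of reasoning concrete, the sketch below checks via linear programming whether one homogeneous linear inequality is implied by a system of others; the matrices are invented for the illustration and are not the information inequalities treated in the paper.

```python
# Check whether the inequality c.x >= 0 is implied by the homogeneous system
# A x >= 0, using linear programming; A and c are invented for the example.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -1.0, 0.0],        # x1 >= x2
              [0.0,  1.0, -1.0]])      # x2 >= x3
c = np.array([1.0, 0.0, -1.0])         # candidate inequality: x1 >= x3

# Minimise c.x over the cone {A x >= 0} intersected with a box; because the
# cone is invariant under scaling, a non-negative minimum proves implication.
res = linprog(c, A_ub=-A, b_ub=np.zeros(len(A)), bounds=[(-1.0, 1.0)] * 3)
print("implied" if res.fun >= -1e-9 else "not implied")   # -> implied
```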
Abstract:
There has been increasing debate on the prospect of biofuels becoming the best alternative for addressing the problems of CO2 emissions and escalating fuel prices, but the question is whether this assertion is true and whether it comes without a cost. This paper seeks to find out whether this much-praised alternative is a better option, or simply another way for developed countries to obtain cheap land, labour and raw materials for biofuel production. It focuses mainly on the effects that growing biofuel production has on food security, people's livelihoods and the environment, and on the land conflicts developing as a result of land grabbing for biofuel production in developing countries.
Abstract:
This study was conducted in 2010 in the Eastern Nuba Mountains, Sudan, to investigate ethnobotanical food and non-food uses of 16 wild edible fruit-producing trees. Quantitative and qualitative information was collected from 105 individuals distributed over 7 villages using a semi-structured questionnaire. Data were also gathered using a number of rapid rural appraisal techniques, including key informant interviews, group discussions, secondary data sources and direct observations. The data were analysed using the fidelity level and informant consensus factor methods to reveal the cultural importance of species and use categories. Timber uses were found to be of greater community importance than food uses, especially during periods when cultivated food is abundant. Balanites aegyptiaca, Ziziphus spina-christi and Tamarindus indica fruits were reported as the most preferred and the most marketable in most of the study sites. Harvesting for timber-based uses, together with agricultural expansion and overgrazing, constituted the principal threats to wild edible food-producing trees in the area. The intermittent armed conflict in the area makes it crucial to conserve wild food trees, which usually play a significant role in securing the food supply during emergencies, especially in times of famine and war. Raising the population's awareness of the importance of wild food trees, and securing alternative income sources other than wood products, is necessary in any rural development programme aiming to secure food and sustain its resources in the area.
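For reference, the two indices named above have standard definitions: fidelity level FL = Ip/Iu × 100 and informant consensus factor ICF = (Nur − Nt)/(Nur − 1). The sketch below computes both with hypothetical counts; the actual survey figures are in the paper.

```python
# Standard ethnobotanical indices used in the study, with hypothetical counts.

def fidelity_level(ip: int, iu: int) -> float:
    # FL = Ip / Iu * 100: Ip informants citing the species for one use
    # category, Iu informants citing the species for any use.
    return 100.0 * ip / iu

def informant_consensus_factor(nur: int, nt: int) -> float:
    # ICF = (Nur - Nt) / (Nur - 1): Nur use reports in a category,
    # Nt taxa used in that category; values near 1 indicate high consensus.
    return (nur - nt) / (nur - 1)

# Hypothetical figures: 40 of 55 informants cite a species for food,
# and one use category has 120 use reports spread over 16 species.
print(fidelity_level(40, 55))               # ~72.7 %
print(informant_consensus_factor(120, 16))  # ~0.87
```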
Abstract:
Different theoretical models have tried to investigate the feasibility of recurrent neural mechanisms for achieving direction selectivity in the visual cortex. The mathematical analysis of such models has so far been restricted to purely linear networks. We present an exact analytical solution of the nonlinear dynamics of a class of direction-selective recurrent neural models with threshold nonlinearity. Our mathematical analysis shows that such networks have form-stable, stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction-selective cortical neurons. Our analysis also shows that the stability of such solutions can break down, giving rise to a different class of solutions ("lurching activity waves") characterized by a specific spatio-temporal periodicity. These solutions cannot arise in models of direction selectivity with purely linear spatio-temporal filtering.
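A toy simulation of the model class is sketched below: a ring of rate neurons with a threshold-linear nonlinearity and an asymmetric recurrent kernel, driven by a moving input bump. All parameters are invented for the demonstration; the paper's contribution is the exact analytical treatment, which a simulation like this cannot substitute for.

```python
# Toy rate network on a ring: threshold-linear units, asymmetric recurrent
# kernel, moving input bump; all parameters are invented for the demo.
import numpy as np

n, dt, tau, v = 200, 0.1, 1.0, 0.05     # units, time step, time constant, speed
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

d = x[:, None] - x[None, :]             # pairwise angular differences
W = 0.3 * (np.cos(d) + np.sin(d)) * (2.0 * np.pi / n)   # asymmetric kernel

u = np.zeros(n)
steps = 3000
for step in range(steps):
    stim = 2.0 * np.exp(np.cos(x - v * step * dt) - 1.0)  # moving bump input
    r = np.maximum(u, 0.0)                                # threshold nonlinearity
    u += dt / tau * (-u + W @ r + stim)                   # Euler update

print("activity peak:", x[np.argmax(u)])
print("stimulus peak:", (v * steps * dt) % (2.0 * np.pi))
```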
Abstract:
Uniformly distributed ZnO nanorods, 70-100 nm in diameter and 1-2 μm long, have been successfully grown at low temperatures on GaN using the inexpensive aqueous solution method. The formation of the ZnO nanorods and their growth are controlled by reactant concentration, temperature and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals growing along the c-axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: as the growth temperature was changed from 60 °C to 150 °C, the morphology of the ZnO nanorods changed from a sharp tip (needle shape) to a flat tip (rod shape). These kinds of structures are useful in laser and field emission applications.
Abstract:
Uniformly distributed ZnO nanorods, 80-120 nm in diameter and 1-2 μm long, have been successfully grown at low temperatures on GaN using the inexpensive aqueous solution method. The formation of the ZnO nanorods and their growth are controlled by reactant concentration, temperature and pH. No catalyst is required. XRD studies show that the ZnO nanorods are single crystals growing along the c-axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in aqueous solution is proposed. The dependence of the ZnO nanorods on the growth parameters was also investigated: as the growth temperature was changed from 60 °C to 150 °C, the morphology of the ZnO nanorods changed from a sharp tip with a high aspect ratio to a flat tip with a smaller aspect ratio. These kinds of structures are useful in laser and field emission applications.
Abstract:
Many online services access a large number of autonomous data sources and at the same time need to meet diverse user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. In the presence of an increasing number of proprietary business processes, heterogeneous data standards and diverse user requirements, it is critical that the services be implemented using adaptable, extensible and scalable technology. The COntext INterchange (COIN) approach, inspired by the similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is at worst quadratic in the number of distinctions of the semantic aspect with the most distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in traditional hard-wired middleware approaches, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require between 20,000 and 100 million. COIN achieves this scalability by automatically composing all the needed conversions from a small number of declaratively defined sub-conversions.
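The scaling comparison can be illustrated with back-of-envelope arithmetic, sketched below; the counts of sources, receivers and modifiers are hypothetical and mirror only the structure of the argument, not the paper's actual scenario.

```python
# Back-of-envelope conversion counts; source/receiver/modifier numbers are
# hypothetical and only mirror the structure of the scaling argument.

def hardwired(n_sources: int, n_receivers: int) -> int:
    # Traditional middleware: one hand-written conversion program per
    # (source, receiver) pair.
    return n_sources * n_receivers

def coin(n_modifiers: int) -> int:
    # COIN: roughly one declaratively defined sub-conversion per modifier;
    # composite conversions are composed automatically at run time.
    return n_modifiers

print(hardwired(150, 150))   # 22,500 programs to write and maintain
print(coin(5))               # a handful of declarative sub-conversions
```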
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly, uniformly convergent reduced-basis approximations — Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit the affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and the associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, we build on the technique to develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are three-fold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that, furthermore, shrinks as the experimental error is decreased.
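A minimal sketch of components (i) and (iii) for a toy affinely parametrized system A(μ) = A0 + μA1 follows; the a posteriori error bounds of component (ii) are omitted, and the sizes, snapshot parameters and output functional are invented for the illustration.

```python
# Reduced-basis sketch for a toy affine system A(mu) = A0 + mu*A1; sizes,
# snapshot parameters and the output functional are invented (no error bounds).
import numpy as np

n = 200
A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # "diffusion" block
A1 = np.diag(np.linspace(0.0, 1.0, n))                   # "reaction" block
f = np.ones(n)                                           # load; also the output

# Offline: snapshots at a few parameter points, orthonormalised into W_N.
snapshots = [np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 1.0, 10.0)]
W, _ = np.linalg.qr(np.array(snapshots).T)

# Project the affine terms once; this is the offline/online decoupling.
A0N, A1N, fN = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f

def online_output(mu: float) -> float:
    # Online stage: cost depends only on N = 3, never on n.
    return float(fN @ np.linalg.solve(A0N + mu * A1N, fN))

print(online_output(3.0))                                # reduced-basis output
print(f @ np.linalg.solve(A0 + 3.0 * A1, f))             # "truth" for comparison
```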
Abstract:
We study the preconditioning of the symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the proposed preconditioning method.
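The general shape of such a scheme can be sketched as follows: a sparse saddle-point system solved by MINRES with a block-diagonal preconditioner built from the (1,1) block and an approximate Schur complement. This is a generic textbook construction on random data, not the specific preconditioner or NETLIB problems studied in the paper.

```python
# Block-preconditioned MINRES on a random sparse saddle-point (KKT) system;
# a generic textbook construction, not the paper's specific preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 30, 80                                     # constraints, variables
A = sp.random(m, n, density=0.2, random_state=0, format="csr")
D = sp.diags(rng.uniform(0.5, 2.0, n))            # positive (1,1) block

K = sp.bmat([[D, A.T], [A, None]], format="csr")  # symmetric indefinite matrix
rhs = rng.standard_normal(n + m)

# Block-diagonal preconditioner diag(D, A D^-1 A^T), applied through direct
# solves; assumes A has full row rank so the Schur complement is invertible.
S = (A @ sp.diags(1.0 / D.diagonal()) @ A.T).tocsc()
solve_D, solve_S = spla.factorized(D.tocsc()), spla.factorized(S)
P = spla.LinearOperator(
    (n + m, n + m),
    matvec=lambda v: np.concatenate([solve_D(v[:n]), solve_S(v[n:])]))

x, info = spla.minres(K, rhs, M=P)
print("info:", info, "residual:", np.linalg.norm(K @ x - rhs))
```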
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models: an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters given the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
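A generative sketch of the two-stage idea, with invented parameters: stage one draws which parts are present (independent binomial incidences), and stage two distributes the unit over the present parts via a logistic-normal draw. The paper's actual models add the conditional and hierarchical structure that this toy omits.

```python
# Generative sketch of the two-stage idea: binomial incidence of zeros, then
# a logistic-normal split of the unit over the non-zero parts. Parameters
# are invented; the paper's models add conditional/hierarchical structure.
import numpy as np

rng = np.random.default_rng(1)

def sample_composition(p_present, mu, sigma):
    present = rng.random(len(p_present)) < p_present   # stage 1: incidence
    comp = np.zeros(len(p_present))
    if not present.any():
        return comp                                    # degenerate: all zero
    z = rng.normal(mu[present], sigma[present])        # stage 2: log scale
    w = np.exp(z - z.max())                            # stabilised exponentials
    comp[present] = w / w.sum()                        # logistic normalisation
    return comp

p = np.array([0.9, 0.7, 0.4, 0.95])                    # presence probabilities
print(sample_composition(p, np.zeros(4), np.ones(4)))  # sums to 1, may have 0s
```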
Abstract:
A new method for the automated selection of colour features is described. The algorithm consists of two stages of processing. In the first, a complete set of colour features is calculated for every object of interest in an image. In the second stage, each object is mapped into several n-dimensional feature spaces in order to select the feature set with the smallest number of variables able to discriminate the remaining objects. The discrimination power of each candidate subset of features is evaluated by means of decision trees composed of linear discrimination functions. This method can provide valuable help in outdoor scene analysis, where no single colour space has been demonstrated to be the most suitable. Experimental results on recognizing objects in outdoor scenes are reported.
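The second-stage search can be sketched as follows: enumerate feature subsets in order of increasing size and return the first whose linear discriminant classifies the training objects perfectly. Scikit-learn's LDA stands in for the paper's decision trees of linear discrimination functions, and the colour features are synthetic.

```python
# Smallest feature subset whose linear discriminant separates the objects;
# LDA stands in for the paper's trees of linear discrimination functions,
# and the "colour features" are synthetic.
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))            # 200 objects, 6 colour features
s = X[:, 1] + 2.0 * X[:, 4]                  # labels depend on features 1 and 4
X, y = X[np.abs(s) > 1.5], (s[np.abs(s) > 1.5] > 0).astype(int)

def smallest_discriminating_subset(X, y):
    for k in range(1, X.shape[1] + 1):       # try the smallest subsets first
        for subset in combinations(range(X.shape[1]), k):
            cols = list(subset)
            clf = LinearDiscriminantAnalysis().fit(X[:, cols], y)
            if clf.score(X[:, cols], y) == 1.0:   # perfect discrimination
                return subset
    return None

print("selected features:", smallest_discriminating_subset(X, y))
```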