929 results for Lattice theory - Computer programs
Abstract:
In this article we study the effects of adsorbed-phase compression, lattice structure, and pore size distribution (PSD) on the analysis of adsorption in microporous activated carbon. The lattice gas approach of Ono-Kondo is modified to account for these effects. Nitrogen adsorption data at 77 K on a number of activated carbon samples are analyzed to investigate the pore filling pressure versus pore width, the packing effect, and the compression of the adsorbed phase. The PSDs obtained from this analysis are found to be comparable to those obtained by the DFT method. The discrete nature of the PSDs derived from the modified lattice gas theory is due to the inherent assumption of discrete layers of molecules. Nevertheless, it does provide interesting information on the evolution of micropores during the activation process.
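To make the lattice gas balance concrete, here is a minimal sketch of the simplest monolayer Ono-Kondo isotherm, solved for fractional surface coverage at a given reduced bulk density. The single-layer form, the coordination number z, and the energy parameters are illustrative assumptions; the paper's modified multilayer model with compression and PSD effects is more involved.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative monolayer Ono-Kondo balance (not the paper's modified
# multilayer version): solve for the adsorbed-phase coverage x_ads.
def ono_kondo_coverage(x_bulk, eps_ff_kT=-1.0, eps_sf_kT=-5.0, z=4):
    """Root of  ln[x_a(1-x_b)/(x_b(1-x_a))]
                + z*(eps_ff/kT)*(x_a - x_b) + eps_sf/kT = 0."""
    def balance(x_ads):
        return (np.log(x_ads * (1 - x_bulk) / (x_bulk * (1 - x_ads)))
                + z * eps_ff_kT * (x_ads - x_bulk) + eps_sf_kT)
    # balance -> -inf as x_ads -> 0 and +inf as x_ads -> 1, so a
    # bracketing root-finder is safe.
    return brentq(balance, 1e-12, 1 - 1e-12)

# Example: coverage at a few reduced bulk densities
for xb in (1e-5, 1e-4, 1e-3):
    print(xb, ono_kondo_coverage(xb))
```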
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Quickly obtaining the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation yielding the lowest weighted-average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation based on the object-relational model of data. Complexity was measured using three Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries.
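As a concrete illustration of the complexity measure, the sketch below computes the three Halstead metrics named above from operator/operand token counts. The toy SQL tokenization is a simplifying assumption; the paper's counting rules for queries may differ.

```python
import math

# Halstead metrics from token counts: n1/n2 are distinct operators/
# operands, N1/N2 are total occurrences.
def halstead(operators, operands):
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    length = N1 + N2                      # Halstead program length
    volume = length * math.log2(n1 + n2)  # intermediate, used by effort
    difficulty = (n1 / 2) * (N2 / n2)     # Halstead difficulty
    effort = difficulty * volume          # Halstead effort
    return length, difficulty, effort

# Example: a toy query  SELECT name FROM emp WHERE dept = 'R&D'
ops = ["SELECT", "FROM", "WHERE", "="]
opnds = ["name", "emp", "dept", "'R&D'"]
print(halstead(ops, opnds))
```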
Abstract:
A new approach is developed to analyze the thermodynamic properties of a sub-critical fluid adsorbed in a slit pore of activated carbon. The approach is based on a representation in which an adsorbed fluid forms an ordered structure close to a smoothed solid surface. This ordered structure is modelled as a collection of parallel molecular layers. Such a structure allows us to express the Helmholtz free energy of a molecular layer as the sum of the intrinsic Helmholtz free energy specific to that layer and the potential energy of interaction of that layer with all other layers and the solid surface. The intrinsic Helmholtz free energy of a molecular layer is a function (at given temperature) of its two-dimensional density and can be readily obtained from bulk-phase properties, while the interlayer potential energy of interaction is determined using the 10-4 Lennard-Jones potential. The positions of all layers close to the graphite surface or in a slit pore are taken to correspond to the minimum of the potential energy of the system. This model has led to accurate predictions of nitrogen and argon adsorption on carbon black at their normal boiling points. In the case of adsorption in slit pores, local isotherms are determined from the minimization of the grand potential. The model provides a reasonable description of the 0-1 monolayer transition, phase transition and packing effect. The adsorption of nitrogen at 77.35 K and argon at 87.29 K on activated carbons is analyzed to illustrate the potential of this theory, and the derived pore-size distribution compares favourably with that obtained by density functional theory (DFT). The model is less time-consuming than methods such as DFT and Monte Carlo simulation, and, most importantly, it can be readily extended to the adsorption of mixtures and to capillary condensation phenomena.
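A minimal sketch of the interlayer machinery described above: the 10-4 Lennard-Jones wall potential is superposed for the two walls of a slit pore, and a layer position is located at the potential minimum. The parameter values (and the surface-density prefactor folded into eps_sf) are illustrative, not the paper's fitted nitrogen/graphite values.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 10-4 Lennard-Jones solid-fluid potential for one wall; eps_sf here
# absorbs the usual 2*pi*rho_s*sigma^2 prefactor (illustrative units).
def lj_10_4(z, eps_sf=1.0, sigma_sf=0.35):
    """Energy of a molecule at distance z (nm) from one pore wall."""
    return 2 * np.pi * eps_sf * (0.4 * (sigma_sf / z) ** 10
                                 - (sigma_sf / z) ** 4)

def slit_pore_potential(z, width):
    """Superpose the two walls of a slit pore of the given width."""
    return lj_10_4(z) + lj_10_4(width - z)

# First-layer position: potential minimum in a 1.2 nm slit pore
res = minimize_scalar(slit_pore_potential, bounds=(0.2, 0.6),
                      args=(1.2,), method="bounded")
print("layer position:", res.x, "energy:", res.fun)
```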
Abstract:
The application of nonlocal density functional theory (NLDFT) to determine the pore size distribution (PSD) of activated carbons using a nongraphitized carbon black, instead of graphitized thermal carbon black, as a reference system is explored. We show that in this case nitrogen and argon adsorption isotherms in activated carbons are precisely correlated by the theory, and such an excellent correlation would never be possible if the pore wall surface were assumed to be identical to that of graphitized carbon black. This suggests that the pore wall surfaces of activated carbon are closer to those of amorphous solids because of defects in the crystalline lattice, finite pore length, the presence of active centers, etc. Application of the NLDFT adapted to amorphous solids resulted in a quantitative description of N2 and Ar adsorption isotherms on nongraphitized carbon black BP280 at their respective boiling points. In the present paper we determined solid-fluid potentials from experimental adsorption isotherms on nongraphitized carbon black and subsequently used those potentials to model adsorption in slit pores and generate a corresponding set of local isotherms, which we used to determine the PSD functions of different activated carbons.
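The final PSD step is an inversion of the adsorption integral equation: the measured isotherm is fitted as a non-negative combination of the kernel of local isotherms. The sketch below shows that step with a toy Langmuir-style kernel and synthetic data standing in for real NLDFT kernel columns and measurements.

```python
import numpy as np
from scipy.optimize import nnls

# PSD inversion sketch: fit the measured isotherm as a non-negative
# combination of local isotherms. Real kernel columns would each come
# from an NLDFT calculation for one slit width; this kernel is a toy.
pressures = np.logspace(-6, 0, 50)           # relative pressures P/P0
widths = np.linspace(0.4, 4.0, 30)           # slit widths (nm)

# kernel[i, j] = local isotherm for width[j] at pressure[i]; wider
# pores fill at higher pressure, as in the physical kernels.
kernel = np.array([[p / (p + 1e-3 * w ** 2)
                    for w in widths] for p in pressures])

measured = kernel @ np.exp(-(widths - 1.2) ** 2)   # synthetic data
psd, residual = nnls(kernel, measured)             # PSD weights >= 0
print("modal pore width:", widths[np.argmax(psd)])
```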
Abstract:
What is the minimal size quantum circuit required to exactly implement a specified n-qubit unitary operation, U, without the use of ancilla qubits? We show that a lower bound on the minimal size is provided by the length of the minimal geodesic between U and the identity, I, where length is defined by a suitable Finsler metric on the manifold SU(2^n). The geodesic curves on these manifolds have the striking property that once an initial position and velocity are set, the remainder of the geodesic is completely determined by a second-order differential equation known as the geodesic equation. This is in contrast with the usual case in circuit design, either classical or quantum, where being given part of an optimal circuit does not obviously assist in the design of the rest of the circuit. Geodesic analysis thus offers a potentially powerful approach to the problem of proving quantum circuit lower bounds. In this paper we construct several Finsler metrics whose minimal length geodesics provide lower bounds on quantum circuit size. For each Finsler metric we give a procedure to compute the corresponding geodesic equation. We also construct a large class of solutions to the geodesic equation, which we call Pauli geodesics, since they arise from isometries generated by the Pauli group. For any unitary U diagonal in the computational basis, we show that: (a) provided the minimal length geodesic is unique, it must be a Pauli geodesic; (b) finding the length of the minimal Pauli geodesic passing from I to U is equivalent to solving an exponential-size instance of the closest vector problem (CVP) in a lattice; and (c) all but a doubly exponentially small fraction of such unitaries have minimal Pauli geodesics of exponential length.
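The flavour of such Finsler metrics can be illustrated by expanding a Hamiltonian in the Pauli basis and charging a penalty for many-body terms, as in the sketch below. The penalty weighting and the cost function are an illustrative choice in the spirit of these metrics, not the paper's exact construction.

```python
import numpy as np
from functools import reduce
from itertools import product

# Expand an n-qubit Hamiltonian in the Pauli basis and sum a
# penalty-weighted cost over the coefficients (illustrative metric).
I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_cost(H, n, penalty=4.0):
    cost = 0.0
    for labels in product("IXYZ", repeat=n):
        P = reduce(np.kron, (PAULIS[s] for s in labels))
        coeff = np.trace(P @ H).real / 2 ** n   # Pauli coefficient
        weight = sum(s != "I" for s in labels)  # qubits acted on
        cost += (penalty if weight > 2 else 1.0) * abs(coeff)
    return cost

# Example: cost of the 2-qubit Hamiltonian H = X(x)X + 0.5 Z(x)I
H = np.kron(X, X) + 0.5 * np.kron(Z, I)
print(pauli_cost(H, n=2))
```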
Abstract:
High-level language program compilation strategies can be proven correct by modelling the process as a series of refinement steps from source code to a machine-level description. We show how this can be done for programs containing recursively defined procedures in the well-established predicate transformer semantics for refinement. To do so, the formalism is extended with an abstraction of the way stack frames are created at run time for procedure parameters and variables.
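A toy model of the stack-frame abstraction, assuming hypothetical names (State, call): a procedure call is treated as "push a frame for parameters and locals, run the refined body, pop the frame", which is enough to execute recursively defined procedures.

```python
from dataclasses import dataclass, field

# Illustrative run-time model, not the paper's formalism: state is a
# global store plus an explicit stack of frames.
@dataclass
class State:
    globals_: dict
    stack: list = field(default_factory=list)

def call(state, params, body):
    state.stack.append(dict(params))   # create frame for params/locals
    body(state, state.stack[-1])       # run the refined procedure body
    state.stack.pop()                  # destroy frame on return

def fact_body(state, frame):           # recursive factorial as the body
    n = frame["n"]
    if n <= 1:
        state.globals_["result"] = 1
    else:
        call(state, {"n": n - 1}, fact_body)
        state.globals_["result"] *= n

s = State(globals_={"result": 0})
call(s, {"n": 5}, fact_body)
print(s.globals_["result"])            # 120
```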
Abstract:
Defeasible reasoning is a simple but efficient approach to nonmonotonic reasoning that has recently attracted considerable interest and found various applications. Defeasible logic and its variants are an important family of defeasible reasoning methods. So far no relationship has been established between defeasible logic and mainstream nonmonotonic reasoning approaches. In this paper we establish close links to known semantics of logic programs. In particular, we give a translation of a defeasible theory D into a meta-program P(D). We show that under a condition of decisiveness, the defeasible consequences of D correspond exactly to the sceptical conclusions of P(D) under the stable model semantics. Without decisiveness, the result holds in only one direction (all defeasible consequences of D are included in all stable models of P(D)). If we want a complete embedding for the general case, we need to use the Kunen semantics of P(D) instead.
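To give the flavour of the translation from D into P(D), the sketch below emits meta-program clauses for a single defeasible rule: its head is defeasibly provable when its body is and no attacking rule fires. The predicate names (fires, defeasibly) and the attack handling are illustrative simplifications of the paper's translation.

```python
# Render one defeasible rule as logic-program clause strings.
def translate_rule(rule_id, head, body, attackers):
    clauses = []
    body_atoms = ", ".join(f"defeasibly({b})" for b in body) or "true"
    blocks = ", ".join(f"not fires({a})" for a in attackers)
    guard = f", {blocks}" if blocks else ""
    clauses.append(f"fires({rule_id}) :- {body_atoms}{guard}.")
    clauses.append(f"defeasibly({head}) :- fires({rule_id}).")
    return clauses

# Example: r1: bird(tweety) => flies(tweety), attacked by rule r2
for c in translate_rule("r1", "flies(tweety)",
                        ["bird(tweety)"], ["r2"]):
    print(c)
# fires(r1) :- defeasibly(bird(tweety)), not fires(r2).
# defeasibly(flies(tweety)) :- fires(r1).
```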
Abstract:
A numerical method is introduced to determine the nuclear magnetic resonance frequency of a donor (P-31) doped inside a silicon substrate under the influence of an applied electric field. This phosphorus donor has been suggested for operation as a qubit in the realization of a solid-state scalable quantum computer. The operation of the qubit is achieved by a combination of the rotation of the phosphorus nuclear spin through a globally applied magnetic field and the selection of the phosphorus nucleus through a locally applied electric field. To realize the selection function, it is necessary to know the relationship between the applied electric field and the change in the nuclear magnetic resonance frequency of phosphorus. In this study, based on the wave functions obtained by effective-mass theory, we introduce an empirical correction factor to the wave functions at the donor nucleus. Using the corrected wave functions, we formulate a first-order perturbation theory for the perturbed system under the influence of an electric field. To calculate the potential distributions inside the silicon and silicon dioxide layers due to the applied electric field, we use multilayered Green's functions and solve an integral equation by the moment method. This enables us to consider more realistic, arbitrarily shaped, three-dimensional qubit structures. With the calculated potential distributions, we have investigated the effects of the thicknesses of the silicon and silicon dioxide layers, the relative position of the donor, and the applied electric field on the nuclear magnetic resonance frequency of the donor.
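The selection mechanism can be summarized numerically: the field pulls the donor electron away from the nucleus, reducing the contact hyperfine coupling A (proportional to the electron density at the nucleus) and with it the nuclear resonance frequency. The quadratic-response form and the eta2 value below are illustrative assumptions, not the paper's computed results.

```python
# Sketch of the field dependence of the Si:P hyperfine coupling.
A0 = 117.53e6                    # zero-field Si:P hyperfine coupling, Hz
eta2 = -2e-3                     # assumed quadratic Stark parameter, (um/V)^2

def hyperfine(E):
    """A(E) for field E in V/um, assuming a quadratic Stark response:
    A(E) = A0 * (1 + eta2 * E**2)."""
    return A0 * (1 + eta2 * E ** 2)

for E in (0.0, 0.5, 1.0):        # fields in V/um
    print(f"E = {E} V/um -> A = {hyperfine(E) / 1e6:.3f} MHz")
```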
Abstract:
This paper describes a logic of progress for concurrent programs. The logic is based on that of UNITY, molded to fit a sequential programming model. Integration of the two is achieved by using auxiliary variables in a systematic way that incorporates program counters into the program text. The rules for progress in UNITY are then modified to suit this new system. This modification is, however, subtle enough that the theory of Owicki and Gries can be used without change.
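A toy illustration of the auxiliary-variable idea: making the program counter pc an explicit state variable lets a UNITY-style "p leads-to q" progress property be checked over the program's transitions. The two-statement program and the bounded check are illustrative, not the paper's proof system.

```python
# State is (pc, x); the program counter is an ordinary state variable.
def step(state):
    pc, x = state
    if pc == 0: return (1, x + 1)    # S0: x := x + 1
    if pc == 1: return (2, x * 2)    # S1: x := 2 * x
    return state                      # pc == 2: terminated

def leads_to(init, p, q, bound=10):
    """Check that once a p-state occurs, a q-state follows within
    `bound` steps along this (deterministic) execution."""
    s, seen_p = init, False
    for _ in range(bound):
        if p(s): seen_p = True
        if seen_p and q(s): return True
        s = step(s)
    return not seen_p   # vacuously true if p never held

# "pc = 0 leads-to pc = 2": the program always terminates from S0.
print(leads_to((0, 3), p=lambda s: s[0] == 0, q=lambda s: s[0] == 2))
```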
Abstract:
The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity (i.e., the transition from static to dynamic frictional behaviour) must be accurately simulated by the numerical approach. This is achieved using a half-time-step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics.
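A sketch of the half-time-step integration idea: the velocity-Verlet update is split into two half steps, with the friction states of touching pairs resolved in between. The one-dimensional oscillator and the crude stick condition below are placeholders for the paper's nonlinear frictional system.

```python
import numpy as np

# Velocity-Verlet split into half steps so friction can be resolved
# mid-step; friction_solve stands in for the nonlinear pair solve.
def half_step_verlet(x, v, force, friction_solve, m=1.0, dt=1e-3):
    a = force(x) / m
    v_half = v + 0.5 * dt * a            # first half step
    v_half = friction_solve(x, v_half)   # resolve static friction states
    x_new = x + dt * v_half
    a_new = force(x_new) / m
    v_new = v_half + 0.5 * dt * a_new    # second half step
    return x_new, v_new

# Example: 1-D oscillator with a crude "stick" condition that zeroes
# velocities below a threshold (illustrative only).
force = lambda x: -x
stick = lambda x, v: np.where(np.abs(v) < 1e-6, 0.0, v)
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = half_step_verlet(x, v, force, stick)
print(x, v)
```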
Abstract:
In component-based software engineering, programs are constructed from pre-defined software library modules. However, if the library's subroutines do not exactly match the programmer's requirements, the subroutines' code must be adapted accordingly. For this process to be acceptable in safety- or mission-critical applications, where all code must be proven correct, it must be possible to verify the correctness of the adaptations themselves. In this paper we show how refinement theory can be used to model typical adaptation steps and to define the conditions that must be proven in order to verify that a library subroutine has been adapted correctly.
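A minimal sketch of one adaptation-correctness condition from refinement theory: the adapted routine must accept at least the original precondition and guarantee at least the original postcondition (weaken the precondition, strengthen the postcondition). Exhaustive checking over a small integer domain stands in for a formal proof; all names are illustrative.

```python
# Check a pre/post refinement condition by enumeration over a small
# domain (a stand-in for a proof obligation, not a proof).
DOMAIN = range(-50, 51)

def refines(pre_abs, post_abs, pre_conc, post_conc):
    # weaken pre: every original-precondition state is accepted
    ok_pre = all(pre_conc(x) for x in DOMAIN if pre_abs(x))
    # strengthen post: every concrete outcome satisfies the original post
    ok_post = all(post_abs(x, y) for x in DOMAIN if pre_abs(x)
                  for y in DOMAIN if post_conc(x, y))
    return ok_pre and ok_post

# Library spec: for x >= 1, return some y with y >= x.
# Adapted routine: for x >= 0, return exactly y == x + 1.
print(refines(pre_abs=lambda x: x >= 1,
              post_abs=lambda x, y: y >= x,
              pre_conc=lambda x: x >= 0,
              post_conc=lambda x, y: y == x + 1))   # True: valid adaptation
```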
Abstract:
Research in verification and validation (V&V) for concurrent programs can be guided by practitioner information. The survey presented in this paper therefore collected state-of-practice information on V&V technology for concurrency from 35 respondents. The results can help refine existing V&V technology by providing a better understanding of the context of its usage, and responses to questions regarding the motivation for selecting V&V technologies can help refine a systematic approach to V&V technology selection.