914 results for Canonical number systems
Abstract:
We investigate quantum many-body systems where all low-energy states are entangled. As a tool for quantifying such systems, we introduce the concept of the entanglement gap, which is the difference between the ground-state energy and the minimum energy that a separable (unentangled) state can attain. If the energy of the system lies within the entanglement gap, the state of the system is guaranteed to be entangled. We find Hamiltonians that have the largest possible entanglement gap; for a system consisting of two interacting spin-1/2 subsystems, the Heisenberg antiferromagnet is one such example. We also introduce a related concept, the entanglement-gap temperature: the temperature below which the thermal state is certainly entangled, as witnessed by its energy. We give an example of a bipartite Hamiltonian with an arbitrarily high entanglement-gap temperature for a fixed total energy range. For bipartite spin lattices we prove a theorem demonstrating that the entanglement gap necessarily decreases as the coordination number is increased. We investigate frustrated lattices and quantum phase transitions as physical phenomena that affect the entanglement gap.
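The quantity being defined can be stated compactly; the notation below is illustrative, not taken from the paper.

```latex
% Illustrative notation (not the paper's): E_0 is the ground-state energy of H
% and the minimum runs over product (separable) states of the two subsystems.
\[
  E_{\mathrm{sep}} \;=\; \min_{|\psi_A\rangle \otimes |\psi_B\rangle}
      \langle \psi_A \psi_B | H | \psi_A \psi_B \rangle ,
  \qquad
  G \;=\; E_{\mathrm{sep}} - E_0 .
\]
% Any state whose energy lies below E_sep (i.e. inside the gap G) must be
% entangled, which is what makes the energy an entanglement witness.
```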
Abstract:
Proposed by M. Stutzer (1996), canonical valuation is a new method for valuing derivative securities under the risk-neutral framework. It is non-parametric, simple to apply, and, unlike many alternative approaches, does not require any option data. Although canonical valuation has great potential, its applicability in realistic scenarios has not yet been widely tested. This article documents the ability of canonical valuation to price derivatives in a number of settings. In a constant-volatility world, canonical estimates of option prices struggle to match a Black-Scholes estimate based on historical volatility. However, in a more realistic stochastic-volatility setting, canonical valuation outperforms the Black-Scholes model. As the volatility-generating process moves further away from the constant-volatility world, the relative performance edge of canonical valuation becomes more evident. Overall, the results suggest that canonical valuation is a useful technique for valuing derivatives. (C) 2005 Wiley Periodicals, Inc.
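As a rough illustration of the method's mechanics (not code from the article; function and parameter names are my own), canonical valuation exponentially tilts the empirical distribution of historical gross returns so that the discounted expected gross return equals one, then prices the option under the tilted weights:

```python
# Hedged sketch of canonical (maximum-entropy) valuation in the spirit of
# Stutzer (1996); the call payoff and all names are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def canonical_call_price(gross_returns, s0, strike, rf_gross):
    """Price a European call from historical gross returns over the option horizon.

    gross_returns : array of historical gross returns S_T / S_0
    s0            : current underlying price
    strike        : option strike
    rf_gross      : gross risk-free return over the same horizon
    """
    r = np.asarray(gross_returns, dtype=float)

    # Exponential tilting: choose gamma so the discounted expected gross return
    # under the tilted weights equals 1 (the martingale restriction).
    def objective(gamma):
        return np.mean(np.exp(gamma * (r / rf_gross - 1.0)))

    gamma_star = minimize_scalar(objective, bounds=(-50.0, 50.0), method='bounded').x

    weights = np.exp(gamma_star * r / rf_gross)
    weights /= weights.sum()                      # canonical risk-neutral weights

    payoffs = np.maximum(r * s0 - strike, 0.0)    # call payoff at expiry
    return np.sum(weights * payoffs) / rf_gross   # discounted risk-neutral mean

# Illustrative usage with fabricated lognormal returns (not data from the article):
rng = np.random.default_rng(0)
sample = np.exp(rng.normal(0.05, 0.2, size=5000))
print(canonical_call_price(sample, s0=100.0, strike=100.0, rf_gross=1.03))
```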
Abstract:
Enterprise systems interoperability (ESI) is currently an important topic for business. This is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question: what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that users will lack important implementation information because of representational deficiencies; that ontological redundancy unnecessarily increases the complexity of the specification; and that, because of instances of ontological excess, users of the specification will have to bring in extra-model knowledge to understand its constructs.
Abstract:
Functional-structural plant models that include detailed mechanistic representation of underlying physiological processes can be expensive to construct and the resulting models can also be extremely complicated. On the other hand, purely empirical models are not able to simulate plant adaptability and response to different conditions. In this paper, we present an intermediate approach to modelling plant function that can simulate plant response without requiring detailed knowledge of underlying physiology. Plant function is modelled using a 'canonical' modelling approach, which uses compartment models with flux functions of a standard mathematical form, while plant structure is modelled using L-systems. Two modelling examples are used to demonstrate that canonical modelling can be used in conjunction with L-systems to create functional-structural plant models where function is represented either in an accurate and descriptive way, or in a more mechanistic and explanatory way. We conclude that canonical modelling provides a useful, flexible and relatively simple approach to modelling plant function at an intermediate level of abstraction.
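A minimal sketch of what such a 'canonical' compartment formulation can look like, assuming power-law flux functions in the S-system/GMA tradition; the exact functional form used in the paper may differ.

```latex
% Hedged sketch: a generic compartment model with fluxes of a fixed functional
% form (here power-law); the specific form used in the paper may differ.
\[
  \frac{dX_i}{dt} \;=\; \sum_{j} \mu_{ij}\, v_j ,
  \qquad
  v_j \;=\; \alpha_j \prod_{k} X_k^{\,g_{jk}} ,
\]
% X_i   : amount of substance in compartment i (e.g. carbon in an organ)
% v_j   : flux j between compartments, with mu_ij = +1, -1 or 0 its stoichiometry
% alpha_j, g_jk : parameters estimated from data rather than derived from
%                 detailed physiology, which is what keeps the model at an
%                 intermediate level of abstraction.
```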
Abstract:
The purpose of this study was to systematically investigate the effect of lipid chain length and the number of lipid chains present on lipopeptides on their ability to be incorporated within liposomes. The peptide KAVYNFATM was synthesized and conjugated to lipoamino acids having acyl chain lengths of C-8, C-12 and C-16. The C-12 construct was also prepared in the monomeric, dimeric and trimeric form. Liposomes were prepared by two techniques: hydration of dried lipid films (Bangham method) and hydration of freeze-dried monophase systems. Encapsulation of lipopeptide within liposomes prepared by hydration of dried lipid films was incomplete in all cases, ranging from an entrapment efficiency of 70% for monomeric lipoamino acids at a 5% (w/w) loading to less than 20% for di- and trimeric forms at loadings of 20% (w/w). The incomplete entrapment of lipopeptides within liposomes appeared to be a result of the different solubilities of the lipopeptide and the phospholipids in the solvent used for the preparation of the lipid film. In contrast, encapsulation of lipopeptide within liposomes prepared by hydration of freeze-dried monophase systems was high, even up to a loading of 20% (w/w), and was much less affected by the acyl chain length and number than when liposomes were prepared by hydration of dried lipid films. Freeze-drying of monophase systems is better at maintaining a molecular dispersion of the lipopeptide within the solid phospholipid matrix compared to preparation of a lipid film by evaporation, particularly if the solubility of the lipopeptide in solvents is markedly different from that of the polar lipids used for liposome preparation. Consequently, upon hydration, the lipopeptide is more efficiently intercalated within the phospholipid bilayers. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density when the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) shows interesting behavior at temperatures in the vicinity of, and above, the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressure the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is less than that for perfect pores, and this is due to the packing effect in those imperfect pores. However, at extremely high pressure molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules. (c) 2005 American Institute of Physics.
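The quantity whose sign is discussed here is the Gibbs surface excess; a standard definition (notation mine, and the paper's normalisation may differ) is:

```latex
% Hedged sketch: the usual Gibbs surface-excess definition.
\[
  \Gamma_{\mathrm{ex}} \;=\; \frac{\langle N \rangle - \rho_{b}\,V_{\mathrm{acc}}}{A},
\]
% <N>      : mean number of adsorbed molecules from the GCMC simulation
% rho_b    : bulk gas density at the same temperature and chemical potential
% V_acc, A : accessible pore (or surface) volume and surface area
% Gamma_ex can turn negative at very high pressure when confinement packs
% molecules less densely than the bulk fluid under the same conditions.
```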
Abstract:
Pharmacogenomics promotes an understanding of the genetic basis for differences in the efficacy or toxicity of drugs in different individuals. Implementation of the outcomes of pharmacogenomic research into clinical practice presents a number of difficulties for healthcare. This paper aims to highlight one of the unique ethical challenges that pharmacogenomics presents for the utilisation of cost-effectiveness analysis by public health systems. This paper contends that pharmacogenomics challenges fundamental principles which underlie most systems for deciding which drugs should be publicly subsidised. Pharmacogenomics brings into focus the conflict between equality and utility in the context of using cost-effectiveness analysis to aid the distribution of a limited national drug budget.
Abstract:
Grand canonical Monte Carlo (GCMC) simulation was used for the systematic investigation of supercritical methane adsorption at 273 K on an open graphite surface and in slitlike micropores of different sizes. For both adsorption systems considered, the calculated excess adsorption isotherms exhibit a maximum. The effect of the pore size on the maximum surface excess and the isosteric enthalpy of adsorption for methane storage at 273 K is discussed. A detailed microscopic picture of methane densification near the homogeneous graphite wall and in slitlike pores at 273 K is presented with selected local density profiles and snapshots. Finally, reliable pore size distributions in the micropore range are calculated for two pitch-based microporous activated carbon fibers from the local excess adsorption isotherms obtained via the GCMC simulation. This systematic GCMC study of supercritical methane adsorption, both on an open graphite surface and in slitlike micropores, complements recent investigations performed at slightly different temperatures, and usually over a lower pressure range, by other advanced methods based on statistical thermodynamics.
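For orientation, the GCMC trial moves behind such isotherms use the standard Metropolis acceptance probabilities (textbook forms in the style of Frenkel and Smit, not the authors' code); a minimal sketch:

```python
# Hedged sketch of the grand canonical Monte Carlo acceptance rules.
import numpy as np

def accept_insertion(beta, mu, dU, N, V, Lambda):
    """Acceptance probability for inserting one particle.

    dU = U(N+1) - U(N); Lambda is the thermal de Broglie wavelength.
    """
    return min(1.0, V / (Lambda**3 * (N + 1)) * np.exp(beta * (mu - dU)))

def accept_deletion(beta, mu, dU, N, V, Lambda):
    """Acceptance probability for deleting one particle.

    dU = U(N-1) - U(N).
    """
    return min(1.0, Lambda**3 * N / V * np.exp(-beta * (mu + dU)))

# A GCMC sweep alternates displacement, insertion and deletion trial moves;
# the mean particle number <N> at fixed (mu, V, T) then gives the adsorption
# isotherm, and the excess isotherm follows by subtracting the bulk-gas term.
```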
Abstract:
We show how to efficiently simulate a quantum many-body system with tree structure when its entanglement (Schmidt number) is small for any bipartite split along an edge of the tree. As an application, we show that any one-way quantum computation on a tree graph can be efficiently simulated with a classical computer.
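The 'entanglement (Schmidt number)' criterion refers to the Schmidt decomposition across each tree edge; a brief sketch in notation of my own choosing:

```latex
% Illustrative notation (not the paper's): the Schmidt decomposition of the
% state across one bipartite split A|B along a tree edge.
\[
  |\psi\rangle \;=\; \sum_{i=1}^{\chi} \lambda_i\, |a_i\rangle_A \otimes |b_i\rangle_B,
  \qquad \lambda_i > 0, \quad \sum_{i=1}^{\chi} \lambda_i^2 = 1 .
\]
% chi is the Schmidt number (bond dimension).  If chi stays small for every
% edge of the tree, the state admits a tree tensor-network description whose
% storage and update costs are polynomial in chi, which is what enables the
% efficient classical simulation.
```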
Abstract:
The Thames Estuary, UK, and the Brisbane River, Australia, are comparable in size and catchment area. Both are representative of the large and growing number of the world's estuaries associated with major cities. Principal differences between the two systems relate to climate and human population pressures. In order to assess the potential phytotoxic impact of herbicide residues in the estuaries, surface waters were analysed with a PAM fluorometry-based bioassay that employs the photosynthetic efficiency (photosystem II quantum yield) of laboratory-cultured microalgae as an endpoint measure of phytotoxicity. In addition, surface waters were chemically analysed for a limited number of herbicides. Diuron, atrazine and simazine were detected in both systems at comparable concentrations. In contrast, bioassay results revealed that whilst the detected herbicides accounted for the observed phytotoxicity of Brisbane River extracts with great accuracy, they consistently explained only around 50% of the phytotoxicity induced by Thames Estuary extracts. The unaccounted-for phytotoxicity in Thames surface waters is indicative of unidentified phytotoxins. The greatest phytotoxic response was measured at Charing Cross, Thames Estuary, and corresponded to a diuron-equivalent concentration of 180 ng L-1. The study employs relative potencies (REP) of PSII-impacting herbicides and demonstrates that chemical analysis alone is prone to omission of valuable information. The results of the study provide support for the incorporation of bioassays into routine monitoring programs, where bioassay data may be used to predict and verify chemical contamination data, alert to unidentified compounds and provide the user with information regarding the cumulative toxicity of complex mixtures. (c) 2005 Elsevier B.V. All rights reserved.
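The relative-potency calculation mentioned at the end follows the usual toxic-equivalent style; the symbols below are mine, not the paper's:

```latex
% Hedged sketch: diuron-equivalent concentration from chemical analysis.
\[
  \mathrm{DEq} \;=\; \sum_i \mathrm{REP}_i \, c_i ,
\]
% c_i   : measured concentration of PSII-inhibiting herbicide i
% REP_i : its potency relative to diuron (REP_diuron = 1)
% Comparing this chemically derived DEq with the diuron-equivalent
% concentration inferred from the bioassay exposes phytotoxicity that the
% identified herbicides cannot explain.
```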
Abstract:
Two experiments were conducted to test the hypothesis that toddlers have access to an analog-magnitude number representation that supports numerical reasoning about relatively large numbers. Three-year-olds were presented with subtraction problems in which the initial set size and the proportions subtracted were systematically varied. Two sets of cookies were presented and then covered. The experimenter visibly subtracted cookies from the hidden sets, and the children were asked to choose which of the resulting sets had more. In Experiment 1, performance was above chance when high proportions of objects (3 versus 6) were subtracted from large sets (of 9); for the subset of older participants (older than 3 years, 5 months; n = 15), performance was also above chance when high proportions (10 versus 20) were subtracted from the very large sets (of 30). In Experiment 2, which was conducted exclusively with older 3-year-olds and incorporated an important methodological control, the pattern of results for the subtraction tasks was replicated. In both experiments, success on the tasks was not related to counting ability. The results of these experiments support the hypothesis that young children have access to an analog-magnitude system for representing large approximate quantities, as performance on these subtraction tasks showed a Weber's Law signature and was independent of conventional number knowledge.
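By a "Weber's Law signature" the authors mean ratio dependence; a hedged paraphrase of that claim in symbols:

```latex
% Hedged paraphrase (my notation): discriminability of two approximate
% quantities n1 > n2 depends on their ratio rather than their difference,
\[
  \mathrm{P}(\mathrm{correct}) \;=\; f\!\left(\frac{n_1}{n_2}\right),
\]
% so subtracting a high proportion of a set (which leaves a large ratio
% between the remaining sets) is easier than subtracting a low proportion,
% largely independently of the absolute set sizes.
```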
Abstract:
Despite the number of computer-assisted methods described for the derivation of steady-state equations of enzyme systems, most of them are restricted to strict steady-state conditions or are not able to solve complex reaction mechanisms. Moreover, many of them are based on computer programs that are either not readily available or subject to limitations. We present here a computer program called WinStes, which derives equations both for strict steady-state systems and for those with the assumption of rapid equilibrium, for branched or unbranched mechanisms containing both reversible and irreversible conversion steps. It solves reaction mechanisms involving up to 255 enzyme species, connected by up to 255 conversion steps. The program provides all the advantages of Windows programs, such as a user-friendly graphical interface, and has a short computation time. WinStes is available free of charge on request from the authors. (c) 2006 Elsevier Inc. All rights reserved.
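To give a flavour of the kind of derivation such tools automate, here is a hedged sketch that uses SymPy (not WinStes) on the simplest one-substrate mechanism; names and the example mechanism are illustrative:

```python
# Hedged sketch: symbolic derivation of a steady-state rate equation for
# E + S <=> ES -> E + P, using SymPy rather than WinStes.
import sympy as sp

k1, km1, k2, S, E0 = sp.symbols('k1 k_m1 k2 S E0', positive=True)
E, ES = sp.symbols('E ES', positive=True)

# Steady-state condition d[ES]/dt = 0 plus enzyme conservation E0 = E + ES.
eqs = [
    sp.Eq(k1 * E * S - (km1 + k2) * ES, 0),
    sp.Eq(E + ES - E0, 0),
]
sol = sp.solve(eqs, [E, ES], dict=True)[0]

# The initial rate v = k2*[ES] collapses to the familiar Michaelis-Menten form
# Vmax*S/(Km + S) with Vmax = k2*E0 and Km = (k_m1 + k2)/k1.
v = sp.simplify(k2 * sol[ES])
print(v)
```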
Abstract:
Starting with a UML specification that captures the underlying functionality of some given Java-based concurrent system, we describe a systematic way to construct, from this specification, test sequences for validating an implementation of the system. The approach is to first extend the specification to create UML state machines that directly address those aspects of the system we wish to test. To be specific, the extended UML state machines can capture state information about the number of waiting threads or the number of threads blocked on a given object. Using the SAL model checker we can generate from the extended UML state machines sequences that cover all the various possibilities of events and states. These sequences can then be directly transformed into test sequences suitable for input into a testing tool such as ConAn. As an illustration, the methodology is applied to generate sequences for testing a Java implementation of the producer-consumer system. © 2005 IEEE
Abstract:
Location information is commonly used in context-aware applications and pervasive systems. These applications and systems may require knowledge of the location of users, devices and services. This paper presents a location management system able to gather, process and manage location information from a variety of physical and virtual location sensors. The system scales to the complexity of context-aware applications, to a variety of types and a large number of location sensors and clients, and to the geographical size of the system. The proposed location management system provides conflict resolution of location information and mechanisms to ensure privacy.
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility allows long-range entanglement to be established efficiently, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
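As a hedged illustration of the kind of numerical integration described towards the end of the abstract, the sketch below uses QuTiP (an assumed toolchain; the thesis may use different software, parameters and loss models) to integrate a Lindblad master equation for a single spin coupled to a lossy resonator mode:

```python
# Hedged sketch: Lindblad master-equation integration for a spin coupled to a
# resonator (Jaynes-Cummings-like), using QuTiP.  All parameters illustrative.
import numpy as np
import qutip as qt

n_fock = 5                        # photon Fock-space truncation
wc, ws, g = 6.0, 6.0, 0.02        # resonator/spin frequencies and coupling (example units)
kappa, gamma = 1e-3, 1e-4         # photon loss and spin dephasing rates

a  = qt.tensor(qt.destroy(n_fock), qt.qeye(2))   # resonator annihilation operator
sm = qt.tensor(qt.qeye(n_fock), qt.destroy(2))   # spin lowering operator

H = wc * a.dag() * a + ws * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())

c_ops = [np.sqrt(kappa) * a,                     # resonator photon loss
         np.sqrt(gamma) * sm.dag() * sm]         # spin pure dephasing (illustrative form)

psi0 = qt.tensor(qt.basis(n_fock, 1), qt.basis(2, 0))   # one photon, spin in ground state
tlist = np.linspace(0, 200, 400)

result = qt.mesolve(H, psi0, tlist, c_ops,
                    e_ops=[a.dag() * a, sm.dag() * sm])  # photon / spin populations
print(result.expect[0][-1], result.expect[1][-1])
```

Tuning the resonator into and out of resonance with the spin transition, as the abstract describes, would correspond to making wc time dependent in such a simulation.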