34 results for Canonical number systems
Abstract:
This paper is concerned with methods for refinement of specifications written using a combination of Object-Z and CSP. Such a combination has proved to be a suitable vehicle for specifying complex systems which involve state and behaviour, and several proposals exist for integrating these two languages. The basis of the integration in this paper is a semantics of Object-Z classes that is identical to that of CSP processes. This allows classes specified in Object-Z to be combined using CSP operators. It has been shown that this semantic model allows state-based refinement relations to be used on the Object-Z components in an integrated Object-Z/CSP specification. However, the current refinement methodology does not allow the structure of a specification to be changed in a refinement, whereas a full methodology would, for example, allow concurrency to be introduced during the development life-cycle. In this paper, we tackle these concerns and discuss refinements of specifications written using Object-Z and CSP where we change the structure of the specification when performing the refinement. In particular, we develop a set of structural simulation rules which allow single components to be refined to more complex specifications involving CSP operators. The soundness of these rules is verified against the common semantic model, and they are illustrated via a number of examples.
Abstract:
A 4-cycle system of order n, denoted by 4CS(n), exists if and only if n ≡ 1 (mod 8). There are four configurations which can be formed by two 4-cycles in a 4CS(n). Formulas connecting the number of occurrences of each such configuration in a 4CS(n) are given. The number of occurrences of each configuration is determined completely by the number d of occurrences of the configuration D consisting of two 4-cycles sharing a common diagonal. It is shown that for every n ≡ 1 (mod 8) there exists a 4CS(n) which avoids the configuration D, i.e. for which d=0. The exact upper bound for d in a 4CS(n) is also determined.
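The counting behind the abstract's existence condition can be sketched in a few lines. The function names below are ours, and the sufficiency of n ≡ 1 (mod 8) is the paper's result, not derived here; only the edge count of K_n is checked.

```python
def admits_4cs(n: int) -> bool:
    """Existence condition quoted above: a 4CS(n) exists iff n ≡ 1 (mod 8)."""
    return n % 8 == 1

def num_4cycles(n: int) -> int:
    """A 4CS(n) partitions the n(n-1)/2 edges of K_n into 4-cycles
    (4 edges each), so it contains exactly n(n-1)/8 of them."""
    assert admits_4cs(n)
    return n * (n - 1) // 8

print([n for n in range(1, 50) if admits_4cs(n)])   # [1, 9, 17, 25, 33, 41, 49]
print(num_4cycles(9))                               # 9
```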
Abstract:
Recently, a 3D phantom that can provide a comprehensive and accurate measurement of the geometric distortion in MRI has been developed. Using this phantom, a full assessment of the geometric distortion in a number of clinical MRI systems (GE and Siemens) has been carried out and detailed results are presented in this paper. As expected, the main source of geometric distortion in modern superconducting MRI systems arises from the gradient field nonlinearity. Significantly large distortions with maximum absolute geometric errors ranging between 10 and 25 mm within a volume of 240 × 240 × 240 mm³ were observed when imaging with the new generation of gradient systems that employs shorter coils. By comparison, the geometric distortion was much less in the older-generation gradient systems. With the vendor's correction method, the geometric distortion measured was significantly reduced but only within the plane in which these 2D correction methods were applied. Distortion along the axis normal to the plane was, as expected, virtually unchanged. Two-dimensional correction methods are a convenient approach and in principle they are the only methods that can be applied to correct geometric distortion in a single slice or in multiple noncontiguous slices. However, these methods only provide an incomplete solution to the problem and their value can be significantly reduced if the distortion along the normal of the correction plane is not small. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
Aims: To determine the prevalence and concentration of Escherichia coli O157 shed in faeces at slaughter by beef cattle from different production systems. Methods and Results: Faecal samples were collected from grass-fed (pasture) and lot-fed (feedlot) cattle at slaughter and tested for the presence of E. coli O157 using automated immunomagnetic separation (AIMS). Escherichia coli O157 was enumerated in positive samples using the most probable number (MPN) technique and AIMS, and total E. coli were enumerated using Petrifilm. A total of 310 faecal samples were tested (155 from each group). The geometric mean count of total E. coli was 5 × 10⁵ and 2.5 × 10⁵ CFU g⁻¹ for lot- and grass-fed cattle, respectively. Escherichia coli O157 was isolated from 13% of faeces with no significant difference between grass-fed (10%) and lot-fed cattle (15%). The numbers of E. coli O157 in cattle faeces varied from undetectable (
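The most probable number (MPN) technique mentioned above is a maximum-likelihood estimate from a dilution series. A minimal sketch, with a hypothetical 3 × 3 dilution design rather than the study's data:

```python
from math import exp

def mpn_estimate(volumes, tubes, positives, lo=1e-9, hi=1e9, iters=200):
    """Maximum-likelihood most-probable-number estimate.

    Solves sum_j p_j*v_j / (1 - exp(-lam*v_j)) = sum_j n_j*v_j for the
    concentration lam by bisection (the left-hand side decreases in lam).
    """
    total = sum(n * v for n, v in zip(tubes, volumes))

    def f(lam):
        return sum(p * v / (1.0 - exp(-lam * v))
                   for p, v in zip(positives, volumes)) - total

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# hypothetical series: 1, 0.1 and 0.01 g of sample per tube,
# three tubes per dilution, with 3/2/0 tubes positive
est = mpn_estimate([1.0, 0.1, 0.01], [3, 3, 3], [3, 2, 0])
print(round(est, 1))   # estimated organisms per gram
```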
Abstract:
The recent summary report of a Department of Energy Workshop on Plant Systems Biology (P.V. Minorsky [2003] Plant Physiol 132: 404-409) offered welcome advocacy for systems analysis as essential in understanding plant development, growth, and production. The goal of the Workshop was to consider methods for relating the results of molecular research to real-world challenges in plant production for increased food supplies, alternative energy sources, and environmental improvement. The rather surprising feature of this report, however, was that the Workshop largely overlooked the rich history of plant systems analysis extending over nearly 40 years (Sinclair and Seligman, 1996) that has considered exactly those challenges targeted by the Workshop. Past systems research has explored and incorporated biochemical and physiological knowledge into plant simulation models from a number of perspectives. The research has resulted in considerable understanding and insight about how to simulate plant systems and the relative contribution of various factors in influencing plant production. These past activities have contributed directly to research focused on solving the problems of increasing biomass production and crop yields. These modeling approaches are also now providing an avenue to enhance integration of molecular genetic technologies in plant improvement (Hammer et al., 2002).
Abstract:
We investigate quantum many-body systems where all low-energy states are entangled. As a tool for quantifying such systems, we introduce the concept of the entanglement gap, which is the difference between the ground-state energy and the minimum energy that a separable (unentangled) state may attain. If the energy of the system lies within the entanglement gap, the state of the system is guaranteed to be entangled. We find Hamiltonians that have the largest possible entanglement gap; for a system consisting of two interacting spin-1/2 subsystems, the Heisenberg antiferromagnet is one such example. We also introduce a related concept, the entanglement-gap temperature: the temperature below which the thermal state is certainly entangled, as witnessed by its energy. We give an example of a bipartite Hamiltonian with an arbitrarily high entanglement-gap temperature for fixed total energy range. For bipartite spin lattices we prove a theorem demonstrating that the entanglement gap necessarily decreases as the coordination number is increased. We investigate frustrated lattices and quantum phase transitions as physical phenomena that affect the entanglement gap.
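The two-spin Heisenberg example in the abstract can be checked directly: the singlet ground state sits at -3/4, while no product state can get below -1/4, giving an entanglement gap of 1/2. A small numerical confirmation (the grid scan over product states is our illustration, not the paper's method):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Heisenberg antiferromagnet on two spin-1/2 subsystems: H = S1·S2
H = 0.25 * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

# ground-state energy: the singlet, at -3/4
e_ground = np.linalg.eigvalsh(H).min()

def bloch_state(theta, phi):
    """Spin-1/2 state pointing along the Bloch direction (theta, phi)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# minimum energy over product (separable) states by a grid scan;
# analytically <H> = (1/4) a·b over Bloch vectors a, b, minimised at -1/4
angles = np.linspace(0, np.pi, 9)
e_sep = min(
    (np.kron(bloch_state(ta, 0), bloch_state(tb, pb)).conj()
     @ H @ np.kron(bloch_state(ta, 0), bloch_state(tb, pb))).real
    for ta in angles for tb in angles for pb in np.linspace(0, 2 * np.pi, 9))

print(round(e_ground, 6), round(e_sep, 6))          # -0.75 -0.25
print("entanglement gap:", round(e_sep - e_ground, 6))   # entanglement gap: 0.5
```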
Abstract:
Proposed by M. Stutzer (1996), canonical valuation is a new method for valuing derivative securities under the risk-neutral framework. It is non-parametric, simple to apply, and, unlike many alternative approaches, does not require any option data. Although canonical valuation has great potential, its applicability in realistic scenarios has not yet been widely tested. This article documents the ability of canonical valuation to price derivatives in a number of settings. In a constant-volatility world, canonical estimates of option prices struggle to match a Black-Scholes estimate based on historical volatility. However, in a more realistic stochastic-volatility setting, canonical valuation outperforms the Black-Scholes model. As the volatility generating process becomes further removed from the constant-volatility world, the relative performance edge of canonical valuation is more evident. In general, the results are encouraging that canonical valuation is a useful technique for valuing derivatives. (C) 2005 Wiley Periodicals, Inc.
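Canonical valuation reweights historical returns into maximum-entropy risk-neutral probabilities of Gibbs form and prices the payoff under them. A minimal sketch under our own assumptions: the function name, the bisection bracket, and the synthetic lognormal return sample are all illustrative, not the article's data or code.

```python
import numpy as np

def canonical_call_price(returns, s0, strike, rf, bisect_iters=200):
    """Minimal sketch of canonical valuation (Stutzer, 1996).

    `returns` are historical gross returns over the option's life.
    Maximum-entropy risk-neutral weights take the Gibbs form
    pi_i ∝ exp(gamma * R_i); gamma is tuned so the weighted mean
    gross return equals the risk-free gross return `rf`.
    """
    r = np.asarray(returns, dtype=float)

    def tilted(gamma):
        w = np.exp(gamma * (r - r.mean()))   # centred for numerical stability
        w /= w.sum()
        return w @ r, w

    lo, hi = -50.0, 50.0                     # assumed bracket for gamma
    for _ in range(bisect_iters):
        mid = 0.5 * (lo + hi)
        mean, _ = tilted(mid)
        if mean < rf:                        # tilted mean increases in gamma
            lo = mid
        else:
            hi = mid
    _, w = tilted(0.5 * (lo + hi))
    payoffs = np.maximum(s0 * r - strike, 0.0)
    return (w @ payoffs) / rf                # discounted risk-neutral mean

# hypothetical lognormal historical returns, at-the-money call
rng = np.random.default_rng(0)
hist = np.exp(rng.normal(0.05, 0.2, 5000))
price = canonical_call_price(hist, s0=100.0, strike=100.0, rf=1.02)
print(round(price, 2))
```

With these synthetic constant-volatility returns the canonical price lands near the Black-Scholes value for the same volatility, consistent with the article's constant-volatility benchmark.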
Abstract:
Enterprise systems interoperability (ESI) is currently an important topic for business. This situation is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular, the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question, what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that users will lack important implementation information because of representational deficiencies; due to ontological redundancy, the complexity of the specification is unnecessarily increased; and, users of the specification will have to bring in extra-model knowledge to understand constructs in the specification due to instances of ontological excess.
Abstract:
Functional-structural plant models that include detailed mechanistic representation of underlying physiological processes can be expensive to construct and the resulting models can also be extremely complicated. On the other hand, purely empirical models are not able to simulate plant adaptability and response to different conditions. In this paper, we present an intermediate approach to modelling plant function that can simulate plant response without requiring detailed knowledge of underlying physiology. Plant function is modelled using a 'canonical' modelling approach, which uses compartment models with flux functions of a standard mathematical form, while plant structure is modelled using L-systems. Two modelling examples are used to demonstrate that canonical modelling can be used in conjunction with L-systems to create functional-structural plant models where function is represented either in an accurate and descriptive way, or in a more mechanistic and explanatory way. We conclude that canonical modelling provides a useful, flexible and relatively simple approach to modelling plant function at an intermediate level of abstraction.
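A canonical compartment model of the kind described can be sketched with flux functions of a fixed power-law form. The two compartments (shoot, root) and all parameter values below are illustrative assumptions, not taken from the paper, and the L-system structural side is omitted.

```python
# 'Canonical' modelling: every flux has the same standard mathematical
# form (here a power law, alpha * source**g), so the model is specified
# entirely by its compartment graph and flux parameters.

def canonical_flux(source, alpha, g):
    """Standard-form flux out of a compartment: alpha * source**g."""
    return alpha * source ** g

def step(shoot, root, dt=0.01):
    assim = 0.5                                   # constant assimilation into shoot
    to_root = canonical_flux(shoot, 0.3, 1.0)     # shoot -> root transport
    resp_s = canonical_flux(shoot, 0.05, 1.0)     # shoot respiration loss
    resp_r = canonical_flux(root, 0.05, 1.0)      # root respiration loss
    shoot += dt * (assim - to_root - resp_s)
    root += dt * (to_root - resp_r)
    return shoot, root

shoot, root = 1.0, 0.5
for _ in range(40000):                            # forward Euler to t = 400
    shoot, root = step(shoot, root)

# steady state: 0.5 = 0.35*shoot -> shoot = 10/7; 0.3*shoot = 0.05*root
print(round(shoot, 3), round(root, 3))            # 1.429 8.571
```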
Abstract:
The purpose of this study was to systematically investigate the effect of lipid chain length and number of lipid chains present on lipopeptides on their ability to be incorporated within liposomes. The peptide KAVYNFATM was synthesized and conjugated to lipoamino acids having acyl chain lengths of C-8, C-12 and C-16. The C-12 construct was also prepared in the monomeric, dimeric and trimeric form. Liposomes were prepared by two techniques: hydration of dried lipid films (Bangham method) and hydration of freeze-dried monophase systems. Encapsulation of lipopeptide within liposomes prepared by hydration of dried lipid films was incomplete in all cases ranging from an entrapment efficiency of 70% for monomeric lipoamino acids at a 5% (w/w) loading to less than 20% for di- and trimeric forms at loadings of 20% (w/w). The incomplete entrapment of lipopeptides within liposomes appeared to be a result of the different solubilities of the lipopeptide and the phospholipids in the solvent used for the preparation of the lipid film. In contrast, encapsulation of lipopeptide within liposomes prepared by hydration of freeze-dried monophase systems was high, even up to a loading of 20% (w/w) and was much less affected by the acyl chain length and number than when liposomes were prepared by hydration of dried lipid films. Freeze drying of monophase systems is better at maintaining a molecular dispersion of the lipopeptide within the solid phospholipid matrix compared to preparation of lipid film by evaporation, particularly if the solubility of the lipopeptide in solvents is markedly different from that of the polar lipids used for liposome preparation. Consequently, upon hydration, the lipopeptide is more efficiently intercalated within the phospholipid bilayers. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density when the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) shows interesting behavior at temperatures in the vicinity of and above the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressure the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is less than that for perfect pores, and this is due to the packing effect in those imperfect pores. However, at extremely high pressure molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules. (c) 2005 American Institute of Physics.
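The grand canonical Monte Carlo machinery used here alternates particle insertion and deletion moves accepted with Metropolis probabilities. The sketch below uses an ideal (non-interacting) gas so the answer is checkable; an interacting model such as the paper's argon system would multiply each acceptance ratio by a Boltzmann factor exp(-βΔU). All parameters are illustrative.

```python
import random

random.seed(1)
z, V = 2.0, 50.0          # activity and volume (arbitrary units)
N = 0
total, count = 0.0, 0
for sweep in range(200000):
    if random.random() < 0.5:
        # insertion move: acceptance min(1, zV/(N+1)) for an ideal gas
        if random.random() < z * V / (N + 1):
            N += 1
    elif N > 0 and random.random() < N / (z * V):
        # deletion move: acceptance min(1, N/(zV)); rejected when N = 0
        N -= 1
    if sweep >= 50000:    # discard burn-in before averaging
        total += N
        count += 1

mean_N = total / count
print(round(mean_N, 1))   # ≈ z*V = 100 for the ideal gas
```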
Abstract:
Pharmacogenomics promotes an understanding of the genetic basis for differences in efficacy or toxicity of drugs in different individuals. Implementation of the outcomes of pharmacogenomic research into clinical practice presents a number of difficulties for healthcare. This paper aims to highlight one of the unique ethical challenges which pharmacogenomics presents for the utilisation of cost-effectiveness analysis by public health systems. This paper contends that pharmacogenomics provides a challenge to fundamental principles which underlie most systems for deciding which drugs should be publicly subsidised. Pharmacogenomics brings into focus the conflict between equality and utility in the context of using cost-effectiveness analysis to aid distribution of a limited national drug budget.
Abstract:
Grand canonical Monte Carlo (GCMC) simulation was used for the systematic investigation of supercritical methane adsorption at 273 K on an open graphite surface and in slitlike micropores of different sizes. For both adsorption systems considered, the calculated excess adsorption isotherms exhibit a maximum. The effect of the pore size on the maximum surface excess and isosteric enthalpy of adsorption for methane storage at 273 K is discussed. A detailed microscopic picture of methane densification near the homogeneous graphite wall and in slitlike pores at 273 K is presented with selected local density profiles and snapshots. Finally, reliable pore size distributions in the micropore range are calculated for two pitch-based microporous activated carbon fibers from the local excess adsorption isotherms obtained via GCMC simulation. The current systematic study of supercritical methane adsorption, both on an open graphite surface and in slitlike micropores, performed by GCMC summarizes recent investigations performed at slightly different temperatures and usually over a lower pressure range by advanced methods based on statistical thermodynamics.
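The excess adsorption reported in these isotherms is the amount in the pore beyond what the same volume would hold at bulk density, which is why isotherms can peak and even turn negative at very high pressure. A toy illustration; the numbers are hypothetical, not simulation output.

```python
# Surface excess: n_excess = <N>_pore - rho_bulk * V_pore.
# At low pressure the pore is denser than the bulk (positive excess);
# at very high bulk density the bulk term can dominate (negative excess).

def excess(n_pore_avg, rho_bulk, v_pore):
    """Excess amount adsorbed relative to bulk gas filling the pore volume."""
    return n_pore_avg - rho_bulk * v_pore

v = 10.0                        # accessible pore volume (arbitrary units)
print(excess(60.0, 5.0, v))     # moderate pressure: 60 - 50 = 10.0 (positive)
print(excess(58.0, 6.0, v))     # extreme pressure: 58 - 60 = -2.0 (negative)
```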
Abstract:
We show how to efficiently simulate a quantum many-body system with tree structure when its entanglement (Schmidt number) is small for any bipartite split along an edge of the tree. As an application, we show that any one-way quantum computation on a tree graph can be efficiently simulated with a classical computer.
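The simulability criterion rests on the Schmidt decomposition across each edge: reshape the state into a matrix for the bipartite split and count nonzero singular values. A small sketch with a hypothetical low-entanglement state (a sum of two product terms, so its Schmidt number across the cut is 2):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical state on a 4-dim x 8-dim bipartition, built from
# two product terms -> Schmidt rank 2 across this cut
a1, a2 = rng.normal(size=4), rng.normal(size=8)
b1, b2 = rng.normal(size=4), rng.normal(size=8)
psi = np.kron(a1, a2) + 0.5 * np.kron(b1, b2)
psi /= np.linalg.norm(psi)

# Schmidt decomposition = SVD of the state reshaped into a matrix
# (rows index the left block, columns the right block)
M = psi.reshape(4, 8)
s = np.linalg.svd(M, compute_uv=False)
schmidt_number = int(np.sum(s > 1e-12))
print(schmidt_number)   # 2
```

A simulation of the kind the abstract describes keeps only these few singular values (and vectors) per edge, which is efficient precisely when the Schmidt number stays small.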
Abstract:
The Thames Estuary, UK, and the Brisbane River, Australia, are comparable in size and catchment area. Both are representative of the large and growing number of the world's estuaries associated with major cities. Principal differences between the two systems relate to climate and human population pressures. In order to assess the potential phytotoxic impact of herbicide residues in the estuaries, surface waters were analysed with a PAM fluorometry-based bioassay that employs the photosynthetic efficiency (photosystem II quantum yield) of laboratory-cultured microalgae as an endpoint measure of phytotoxicity. In addition, surface waters were chemically analysed for a limited number of herbicides. Diuron, atrazine and simazine were detected in both systems at comparable concentrations. In contrast, bioassay results revealed that whilst detected herbicides accounted for the observed phytotoxicity of Brisbane River extracts with great accuracy, they consistently explained only around 50% of the phytotoxicity induced by Thames Estuary extracts. Unaccounted-for phytotoxicity in Thames surface waters is indicative of unidentified phytotoxins. The greatest phytotoxic response was measured at Charing Cross, Thames Estuary, and corresponded to a diuron equivalent concentration of 180 ng L⁻¹. The study employs relative potencies (REP) of PSII-impacting herbicides and demonstrates that chemical analysis alone is prone to omission of valuable information. Results of the study provide support for the incorporation of bioassays into routine monitoring programs where bioassay data may be used to predict and verify chemical contamination data, alert to unidentified compounds and provide the user with information regarding cumulative toxicity of complex mixtures. (c) 2005 Elsevier B.V. All rights reserved.
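The diuron equivalent concentration used above weights each detected PSII herbicide by its relative potency (REP) with respect to diuron and sums the contributions. The REP values and concentrations below are hypothetical, for illustration only.

```python
# hypothetical relative potencies with respect to diuron (REP_diuron = 1)
rep = {"diuron": 1.0, "atrazine": 0.2, "simazine": 0.1}

def diuron_equivalents(conc_ng_per_l):
    """DEQ (ng L^-1) = sum_i conc_i * REP_i over detected herbicides."""
    return sum(c * rep[h] for h, c in conc_ng_per_l.items())

# hypothetical surface-water sample (ng L^-1)
sample = {"diuron": 50.0, "atrazine": 100.0, "simazine": 80.0}
print(diuron_equivalents(sample))   # 50 + 20 + 8 = 78.0
```

A bioassay-derived DEQ larger than this chemical DEQ, as reported for the Thames extracts, points to phytotoxins the chemical analysis did not target.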