922 results for Computer arithmetic and logic units.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of the autonomic and parallel computing paradigms. Two approaches, based on intelligent cores and intelligent agents, are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.
Abstract:
The work reported in this paper is motivated by the need to investigate general methods for pattern transformation. A formal definition of pattern transformation is provided and four special cases, namely elementary and geometric transformations based on repositioning all or some of the agents in the pattern, are introduced. The need for a mathematical tool and for simulations to visualize the behavior of a transformation method is highlighted. A mathematical method based on the Moebius transformation is proposed. The transformation method involves discretization of events for planning the paths of individual robots in a pattern. Simulations on a particle physics simulator are used to validate the feasibility of the proposed method.
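As an illustration of the kind of map involved (a sketch under assumed conventions, not the authors' implementation): a Moebius transformation f(z) = (a*z + b)/(c*z + d), with a*d - b*c != 0, acts on agent positions treated as complex numbers, and discretizing the map's parameters gives intermediate patterns that can serve as waypoints for individual robots. The function names and the parameter-blending scheme below are hypothetical.

def moebius(z, a, b, c, d):
    # Image of the complex point z under f(z) = (a*z + b) / (c*z + d).
    return (a * z + b) / (c * z + d)

def transform_pattern(points, a, b, c, d, steps=10):
    # Blend the identity map towards (a, b, c, d), recording the whole pattern
    # at each step; each agent's waypoints are the successive images of its
    # starting position.
    frames = []
    for k in range(steps + 1):
        t = k / steps
        at, bt = 1 + t * (a - 1), t * b
        ct, dt = t * c, 1 + t * (d - 1)
        frames.append([moebius(z, at, bt, ct, dt) for z in points])
    return frames

# Example: four agents on a square, mapped by a rotating-and-shrinking Moebius map.
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
waypoints = transform_pattern(square, a=0.5j, b=0, c=0, d=1)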
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity which lie at the extremes of the number line, and a number nullity that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
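As a rough illustration of how such a total arithmetic behaves (a toy sketch based on the rules stated in the abstract, not the paper's formal axioms): division never fails, positive and negative operands divided by zero give the two infinities, and 0/0 gives nullity. NULLITY below is a hypothetical sentinel standing in for the paper's "no information" value; it is deliberately not a floating-point NaN, which carries a sign and awkward comparison behaviour.

POS_INF, NEG_INF = float("inf"), float("-inf")
NULLITY = object()   # stands in for 0/0: off the number line, "no information"

def trans_div(x, y):
    # Total division in the spirit of transreal arithmetic: every input has a result.
    if x is NULLITY or y is NULLITY:
        return NULLITY                     # nullity absorbs every operation
    if y == 0:
        if x > 0:
            return POS_INF                 # positive / 0 = +infinity
        if x < 0:
            return NEG_INF                 # negative / 0 = -infinity
        return NULLITY                     # 0 / 0 = nullity
    if y in (POS_INF, NEG_INF):
        # finite / infinite = 0; infinite / infinite = nullity, not 1
        return NULLITY if x in (POS_INF, NEG_INF) else 0.0
    return x / y                           # ordinary real division otherwise

print(trans_div(1, 0), trans_div(-1, 0), trans_div(0, 0) is NULLITY)   # inf -inf True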
Abstract:
This paper offers general guidelines for the development of effective visual languages. That is, languages for constructing diagrams that can be easily and readily interpreted and manipulated by the human reader. We use these guidelines first to examine classical AND/OR trees as a representation of logical proofs, and second to design and evaluate a visual language for representing proofs in LofA: a Logic of Dependability Arguments, for which we provide a brief motivation and overview.
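For readers unfamiliar with the structure being visualised, here is a minimal sketch (in generic terms, not the paper's LofA notation) of the data behind an AND/OR proof tree: an AND node is established only when all of its children are, an OR node when at least one is. The node labels in the example are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    kind: str = "LEAF"            # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)
    established: bool = False     # only meaningful for leaves

def holds(node: Node) -> bool:
    # Evaluate whether the claim at this node is established.
    if node.kind == "AND":
        return all(holds(c) for c in node.children)
    if node.kind == "OR":
        return any(holds(c) for c in node.children)
    return node.established

# A two-level example: the goal needs either sub-argument to succeed.
goal = Node("system is dependable", "OR", [
    Node("tested", "AND", [Node("unit tests pass", established=True),
                           Node("integration tests pass", established=True)]),
    Node("formally verified", established=False),
])
print(holds(goal))   # True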
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both of them are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces and contours should be sampled more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, and the greater the number of points automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects.
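The uniform-viewing-direction condition can be made concrete with a standard construction (a hedged aside; the abstract does not say how its viewpoints were chosen beyond being uniformly distributed): the Fibonacci, or golden-spiral, lattice spreads n directions roughly evenly over the viewing sphere.

import math

def fibonacci_sphere(n=10):
    # Return n approximately uniformly spread unit vectors (viewing directions).
    golden = math.pi * (3.0 - math.sqrt(5.0))       # golden angle in radians
    dirs = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n               # z spread evenly in (-1, 1)
        r = math.sqrt(1.0 - z * z)
        theta = golden * i
        dirs.append((r * math.cos(theta), r * math.sin(theta), z))
    return dirs

cameras = fibonacci_sphere(10)   # e.g. the roughly 10 viewpoints mentioned above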
Abstract:
The alignment of model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (A beta) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by subsequent registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks is used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subject to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates fibrils are aligned parallel to the surface, with phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.
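For context on the order-parameter step (the abstract does not state the exact estimator used, so this is the standard fibre-diffraction form rather than necessarily the authors' own): the azimuthal intensity profile I(β) of the 0.48 nm reflection yields the Hermans orientation parameter

\[
\langle P_2 \rangle = \frac{3\langle \cos^2\beta \rangle - 1}{2},
\qquad
\langle \cos^2\beta \rangle = \frac{\int_0^{\pi/2} I(\beta)\,\cos^2\beta\,\sin\beta\,d\beta}{\int_0^{\pi/2} I(\beta)\,\sin\beta\,d\beta},
\]

where β is the angle to the alignment axis; ⟨P2⟩ equals 1 for perfect alignment and 0 for an isotropic sample.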
Abstract:
Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient calculation of accurate time correlation functions on the fly during computer simulations. The multiple-tau correlator is efficacious in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it for use in the calculation of mean-square particle displacements and dynamic structure factors. The method described here, in hardware implementation, is routinely used in light scattering experiments but has not yet found widespread use in computer simulations.
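A hedged sketch of the idea (generic parameter names; the paper itself supplies the error analysis and the extensions to mean-square displacements and dynamic structure factors): each level of the correlator keeps only p recent samples, every m samples are block-averaged and pushed one level up, so accessible lag times grow geometrically while memory and cost per sample stay bounded.

class MultipleTauCorrelator:
    # Simplified multiple-tau autocorrelator; a production version would skip
    # the first p/m channels on levels above 0, which here merely duplicate
    # (with extra smoothing) lags already covered by the finer level below.
    def __init__(self, levels=8, p=16, m=2):
        self.levels, self.p, self.m = levels, p, m
        self.buf   = [[] for _ in range(levels)]           # recent coarse-grained samples
        self.acc   = [[0.0] * p for _ in range(levels)]    # accumulated x(t) * x(t - lag)
        self.count = [[0] * p for _ in range(levels)]      # number of products per lag
        self.carry = [[] for _ in range(levels)]           # samples awaiting block-averaging

    def add(self, x, level=0):
        if level >= self.levels:
            return
        buf = self.buf[level]
        buf.append(x)
        if len(buf) > self.p:
            buf.pop(0)
        for lag in range(len(buf)):                        # correlate newest sample with history
            self.acc[level][lag] += buf[-1] * buf[-1 - lag]
            self.count[level][lag] += 1
        carry = self.carry[level]
        carry.append(x)
        if len(carry) == self.m:                           # block-average and push one level up
            self.add(sum(carry) / self.m, level + 1)
            carry.clear()

    def result(self):
        # (lag time, C(lag)) pairs; the spacing between lags grows by m per level.
        out = []
        for level in range(self.levels):
            dt = self.m ** level
            for lag in range(self.p):
                if self.count[level][lag]:
                    out.append((lag * dt, self.acc[level][lag] / self.count[level][lag]))
        return out

import random
corr = MultipleTauCorrelator()
for _ in range(10000):
    corr.add(random.gauss(0.0, 1.0))      # feed samples on the fly during a simulation
lags_and_values = corr.result()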
Abstract:
We have examined the gut bacterial metabolism of pomegranate by-product (POMx) and major pomegranate polyphenols, punicalagins, using pH-controlled, stirred, batch culture fermentation systems reflective of the distal region of the human large intestine. Incubation of POMx or punicalagins with faecal bacteria resulted in formation of the dibenzopyranone-type urolithins. The time course profile confirmed the tetrahydroxylated urolithin D as the first product of microbial transformation, followed by compounds with decreasing number of phenolic hydroxy groups: the trihydroxy analogue urolithin C and dihydroxylated urolithin A. POMx exposure enhanced the growth of total bacteria, Bifidobacterium spp. and Lactobacillus spp., without influencing the Clostridium coccoides–Eubacterium rectale group and the C. histolyticum group. In addition, POMx increased concentrations of short chain fatty acids (SCFA) viz. acetate, propionate and butyrate in the fermentation medium. Punicalagins did not affect the growth of bacteria or production of SCFA. The results suggest that POMx oligomers, composed of gallic acid, ellagic acid and glucose units, may account for the enhanced growth of probiotic bacteria.
Abstract:
The aim of this book is to provide an introduction to microprocessor systems, their operation and design. It covers those topics needed by engineers and computer scientists who are interested in applying microprocessors in practical situations, namely computer hardware including logic and interfacing; software, in particular high-level and assembly language programming; and the design and testing of such systems. The fundamental principles of microprocessor systems are described and illustrated with reference to two microprocessors, the 32-bit MC68020 from Motorola and a single-chip microcomputer, the 8051 from Intel; in addition, interfacing to the general-purpose STE bus is described. The details of the processors and the bus are concentrated in three chapters, thus allowing the presentation of the material to be independent of the microprocessors if that is desired, and permitting the specific details to be found easily.
Abstract:
This paper reports on a study of computer-mediated communication within the context of a distance MA in TEFL programme which used an e-mail discussion list and then a discussion board. The study focused on the computer/Internet access and skills of the target population and their CMC needs and wants. Data were collected from 63 questionnaires and 6 in-depth interviews with students. Findings indicate that computer use and access to the Internet are widespread within the target population. In addition, most respondents indicated some competence in Internet use. No single factor emerged as an overriding inhibiting factor for lack of personal use. There was limited use of the CMC tools provided on the course for student–student interaction, mainly attributable to time constraints. However, most respondents said that they would like more CMC interaction with tutors. The main factor which would contribute to greater Internet use was training. The paper concludes with recommendations and suggestions for learner training in this area.
Abstract:
We review the proposal of the International Committee for Weights and Measures (Comité International des Poids et Mesures, CIPM), currently being considered by the General Conference on Weights and Measures (Conférence Générale des Poids et Mesures, CGPM), to revise the International System of Units (Le Système International d'Unités, SI). The proposal includes new definitions for four of the seven base units of the SI, and a new form of words to present the definitions of all the units. The objective of the proposed changes is to adopt definitions referenced to constants of nature, taken in the widest sense, so that the definitions may be based on what are believed to be true invariants. In particular, whereas in the current SI the kilogram, ampere, kelvin and mole are linked to exact numerical values of the mass of the international prototype of the kilogram, the magnetic constant (permeability of vacuum), the triple-point temperature of water and the molar mass of carbon-12, respectively, in the new SI these units are linked to exact numerical values of the Planck constant, the elementary charge, the Boltzmann constant and the Avogadro constant, respectively. The new wording expresses the definitions in a simple and unambiguous manner without the need for the distinction between base and derived units. The importance of relations among the fundamental constants to the definitions, and the importance of establishing a mise en pratique for the realization of each definition, are also discussed.
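For reference, the exact numerical values eventually fixed for these four constants in the 2019 revision of the SI are listed below; the abstract describes the earlier proposal, so they are quoted here only to illustrate what "exact numerical values" means in practice.

PLANCK_CONSTANT    = 6.62607015e-34    # h,   in J s      -> fixes the kilogram
ELEMENTARY_CHARGE  = 1.602176634e-19   # e,   in C        -> fixes the ampere
BOLTZMANN_CONSTANT = 1.380649e-23      # k,   in J / K    -> fixes the kelvin
AVOGADRO_CONSTANT  = 6.02214076e23     # N_A, in 1 / mol  -> fixes the mole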
Abstract:
Aim: To determine the prevalence and nature of prescribing errors in general practice, to explore the causes, and to identify defences against error. Methods: 1) Systematic reviews; 2) Retrospective review of unique medication items prescribed over a 12 month period to a 2% sample of patients from 15 general practices in England; 3) Interviews with 34 prescribers regarding 70 potential errors; 15 root cause analyses; and six focus groups involving 46 primary health care team members. Results: The study involved examination of 6,048 unique prescription items for 1,777 patients. Prescribing or monitoring errors were detected for one in eight patients, involving around one in 20 of all prescription items. The vast majority of the errors were of mild to moderate severity, with one in 550 items being associated with a severe error. The following factors were associated with increased risk of prescribing or monitoring errors: male gender, age less than 15 years or greater than 64 years, number of unique medication items prescribed, and being prescribed preparations in the following therapeutic areas: cardiovascular, infections, malignant disease and immunosuppression, musculoskeletal, eye, ENT and skin. Prescribing or monitoring errors were not associated with the grade of GP or whether prescriptions were issued as acute or repeat items. A wide range of underlying causes of error were identified relating to the prescriber, the patient, the team, the working environment, the task, the computer system and the primary/secondary care interface. Many defences against error were also identified, including strategies employed by individual prescribers and primary care teams, and making best use of health information technology. Conclusion: Prescribing errors in general practices are common, although severe errors are unusual. Many factors increase the risk of error. Strategies for reducing the prevalence of error should focus on GP training, continuing professional development for GPs, clinical governance, effective use of clinical computer systems, and improving safety systems within general practices and at the interface with secondary care.