773 results for 100600 COMPUTER HARDWARE
Abstract:
Ubiquitous healthcare is an emerging area of technology that uses a large number of environmental and patient sensors and actuators to monitor and improve patients' physical and mental condition. Tiny sensors gather data on almost any physiological characteristic that can be used to diagnose health problems. This technology faces some challenging ethical questions, ranging from the small-scale individual issues of trust and efficacy to the societal issues of health and longevity gaps related to economic status. It presents particular problems in combining developing computer/information/media ethics with established medical ethics. This article describes a practice-based ethics approach, considering in particular the areas of privacy, agency, equity and liability. It raises questions that practitioners will be forced to face as they develop ubiquitous healthcare systems. Medicine is a controlled profession whose practice is commonly restricted by government-appointed authorities, whereas computer software and hardware development is notoriously lacking in such regimes.
Abstract:
This paper describes a prototype grid infrastructure, called the eMinerals minigrid, for molecular simulation scientists, which is based on an integration of shared compute and data resources. We describe the key components, namely the use of Condor pools, Linux/Unix clusters with PBS and IBM's LoadLeveler job handling tools, the use of Globus for security handling, the use of Condor-G tools for wrapping Globus job-submission commands, Condor's DAGMan tool for handling workflow, the Storage Resource Broker for handling data, and the CCLRC DataPortal and associated tools for both archiving data with metadata and making data available to other workers.
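As a hedged illustration of how such a workflow might be wired together, the following Python sketch writes Condor-G submit descriptions and a DAGMan file for a three-stage job chain (fetch input, run the simulation, archive results). The host names, executable names and stage names are invented for illustration and are not taken from the eMinerals setup.

# Sketch: generate a three-stage Condor DAGMan workflow of the kind
# described above, wrapping Globus job submission via Condor-G.
# All host/executable/stage names below are illustrative assumptions.

SUBMIT_TEMPLATE = """universe      = grid
grid_resource = gt2 {host}/jobmanager-pbs
executable    = {exe}
output        = {name}.out
error         = {name}.err
log           = {name}.log
queue
"""

def write_stage(name, host, exe):
    """Write a Condor-G submit description for one workflow stage."""
    with open(f"{name}.sub", "w") as f:
        f.write(SUBMIT_TEMPLATE.format(name=name, host=host, exe=exe))

stages = [("fetch", "grid.example.org", "fetch_input"),
          ("simulate", "cluster.example.org", "md_run"),
          ("archive", "srb.example.org", "srb_put")]

for name, host, exe in stages:
    write_stage(name, host, exe)

# DAGMan file: run the stages strictly in sequence.
with open("eminerals.dag", "w") as f:
    for name, _, _ in stages:
        f.write(f"JOB {name} {name}.sub\n")
    f.write("PARENT fetch CHILD simulate\n")
    f.write("PARENT simulate CHILD archive\n")

# Submit with: condor_submit_dag eminerals.dag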
Abstract:
Population size estimation with discrete or nonparametric mixture models is considered, and reliable methods for constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components up to the global nonparametric maximum likelihood bound using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, along with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special-purpose software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems with using the mixture model-based estimators are highlighted.
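For concreteness, here is a short Python sketch of the two classical estimators mentioned above, computed from the number of observed units n and the frequency counts f1 (units seen exactly once) and f2 (units seen exactly twice). The example counts are invented, not taken from the paper's data sets.

# Chao's lower-bound estimator and Zelterman's estimator for the
# size of a partially observed population.
import math

def chao_estimate(n, f1, f2):
    """Chao's lower bound: N = n + f1^2 / (2*f2)."""
    return n + f1 * f1 / (2.0 * f2)

def zelterman_estimate(n, f1, f2):
    """Zelterman's estimator: Poisson rate lambda = 2*f2/f1,
    then N = n / (1 - exp(-lambda))."""
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

# Illustrative counts: 100 units seen once, 40 seen twice, 162 observed.
n, f1, f2 = 162, 100, 40
print(f"Chao:      {chao_estimate(n, f1, f2):.1f}")
print(f"Zelterman: {zelterman_estimate(n, f1, f2):.1f}")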
Abstract:
We describe a compositional framework, together with its supporting toolset, for hardware/software co-design. Our framework integrates a formal approach within a traditional design flow. The formal approach is based on Interval Temporal Logic and its executable subset, Tempura. Refinement is the key element in our framework, because it derives the software and hardware parts of the implementation from a single formal specification of the system, while preserving all properties of the system specification. During refinement, simulation is used to choose the appropriate refinement rules, which are applied automatically in the HOL system. The framework is illustrated with two case studies. The work presented is part of a UK collaborative research project between the Software Technology Research Laboratory at De Montfort University and the Oxford University Computing Laboratory.
Abstract:
We describe a high-level design method to synthesize multi-phase regular arrays. The method is based on deriving component designs using classical regular (or systolic) array synthesis techniques and composing these separately evolved component designs into a unified global design. Similarity transformations are applied to component designs in the composition stage in order to align data flow between the phases of the computations. Three transformations are considered: rotation, reflection and translation. The technique is aimed at the design of hardware components for high-throughput embedded systems applications, and we demonstrate this by deriving a multi-phase regular array for the 2-D DCT algorithm, which is widely used in many video communications applications.
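Since the 2-D DCT is separable, its multi-phase structure can be illustrated directly: a row pass of 1-D DCTs followed by a column pass, which is the kind of phase-to-phase data flow the composed regular array implements in hardware. The following Python sketch (using NumPy/SciPy rather than any array synthesis tool) illustrates the two-phase decomposition only, not the synthesis method itself.

# Two-phase (separable) computation of the 2-D DCT-II.
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2-D DCT-II via two 1-D phases: a row pass, then a column pass."""
    phase1 = dct(block, type=2, axis=1, norm="ortho")   # phase 1: rows
    phase2 = dct(phase1, type=2, axis=0, norm="ortho")  # phase 2: columns
    return phase2

block = np.random.rand(8, 8)   # typical 8x8 video coding block
coeffs = dct2(block)

# Separability check: the phase order does not matter.
alt = dct(dct(block, type=2, axis=0, norm="ortho"),
          type=2, axis=1, norm="ortho")
assert np.allclose(coeffs, alt)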
Abstract:
The conformation of a model peptide AAKLVFF, based on a fragment of the amyloid beta peptide A beta 16-20, KLVFF, is investigated in methanol and water via solution NMR experiments and molecular dynamics computer simulations. In previous work, we have shown that AAKLVFF forms peptide nanotubes in methanol and twisted fibrils in water. Chemical shift measurements were used to investigate the solubility of the peptide as a function of concentration in methanol and water. This enabled the determination of critical aggregation concentrations. The solubility was lower in water. In dilute solution in methanol, diffusion coefficients revealed the presence of intermediate aggregates; in concentrated solution these coexist with NMR-silent larger aggregates, presumed to be beta-sheets. In water, diffusion coefficients did not change appreciably with concentration, indicating the presence mainly of monomers, coexisting with larger aggregates in more concentrated solution. Concentration-dependent chemical shift measurements indicated a folded conformation for the monomers/intermediate aggregates in dilute methanol, with unfolding at higher concentration. In water, an antiparallel arrangement of strands was indicated by certain ROESY peak correlations. The temperature-dependent solubility of AAKLVFF in methanol was well described by a van't Hoff analysis, providing a solubilization enthalpy and entropy. This pointed to the importance of solvophobic interactions in the self-assembly process. Molecular dynamics simulations constrained by NOE values from NMR suggested disordered reverse turn structures for the monomer, with an antiparallel twisted conformation for dimers. To model the beta-sheet structures formed at higher concentration, possible arrangements of strands into beta-sheets with parallel and antiparallel configurations and different stacking sequences were used as the basis for MD simulations; two particular arrangements of antiparallel beta-sheets were found to be stable, one being linear and twisted and the other twisted in two directions. These structures were used to simulate circular dichroism spectra. The roles of aromatic stacking interactions and charge transfer effects were also examined. Simulated spectra were found to be similar to those observed experimentally (in water or methanol), which show a maximum at 215 or 218 nm due to pi-pi* interactions, when allowance is made for a 15-18 nm red-shift that may be due to light scattering effects.
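The van't Hoff analysis mentioned above rests on the linear relation ln x = -dH/(R*T) + dS/R, so the solubilization enthalpy and entropy follow from a straight-line fit of ln(solubility) against 1/T. A minimal Python sketch, with invented illustrative data rather than the paper's measurements:

# van't Hoff fit: slope = -dH/R, intercept = dS/R.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

T = np.array([278.0, 288.0, 298.0, 308.0])      # temperatures, K (illustrative)
x = np.array([0.8e-3, 1.4e-3, 2.3e-3, 3.6e-3])  # mole-fraction solubility (illustrative)

slope, intercept = np.polyfit(1.0 / T, np.log(x), 1)
dH = -slope * R       # solubilization enthalpy, J/mol
dS = intercept * R    # solubilization entropy, J/(mol K)
print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")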
Abstract:
This paper presents a novel design of a virtual dental training system (hapTEL) using haptic technology. The system allows dental students to learn and practice procedures such as dental drilling, caries removal and cavity preparation for tooth restoration. This paper focuses on the hardware design, development and evaluation aspects in relation to the dental training and educational requirements. Detailed discussions of how the system offers dental students a natural operational position are documented. An innovative design for measuring and connecting the dental tools to the haptic device is also shown. Evaluation of the impact on teaching and learning is discussed.
Abstract:
The binding of NO to iron is involved in the biological function of many heme proteins. In contrast to ligands like CO and O-2, which only bind to ferrous (Fe-II) iron, NO binds to both ferrous and ferric (Fe-III) iron. In a particular protein, the natural oxidation state can therefore be expected to be tailored to the required function. Herein, we present an ab initio potential-energy surface for ferric iron interacting with NO. This potential-energy surface exhibits three minima, corresponding to eta(1)-NO coordination (the global minimum), eta(1)-ON coordination and eta(2) coordination. This contrasts with the potential-energy surface for Fe-II-NO, which exhibits only two minima (the eta(2) coordination mode for Fe-II is a transition state, not a minimum). In addition, the binding energies of NO are substantially larger for Fe-III than for Fe-II. We have performed molecular dynamics simulations for NO bound to ferric myoglobin (Mb(III)) and compare these with results obtained for Mb(II). Over the duration of our simulations (1.5 ns), all three binding modes are found to be stable at 200 K and transiently stable at 300 K, with eventual transformation to the eta(1)-NO global-minimum conformation. We discuss the implications of these results for studies of rebinding processes in myoglobin.
Abstract:
Myoglobin has been studied in considerable detail using different experimental and computational techniques over the past decades. Recent developments in time-resolved spectroscopy have provided experimental data amenable to detailed atomistic simulations. The main theme of the present review is results from computer simulations on the structures, energetics and dynamics of ligands (CO, NO) interacting with myoglobin. Modern computational methods, including free energy simulations, mixed quantum mechanics/molecular mechanics simulations, and reactive molecular dynamics simulations, provide insight into ligand dynamics in confined spaces, complementary to experiment. Applications of these methods to calculating and understanding experimental observations for myoglobin interacting with CO and NO are presented and discussed.
Abstract:
Routine computer tasks are often difficult for older adult computer users to learn and remember. People tend to learn new tasks by relating new concepts to existing knowledge. However, even for 'basic' computer tasks there is little, if any, existing knowledge on which older adults can base their learning. This paper investigates a custom file management interface that was designed to aid discovery and learnability by providing interface objects that are familiar to the user. A study was conducted which examined the differences between older and younger computer users when undertaking routine file management tasks using the standard Windows desktop as compared with the custom interface. Results showed that older adult computer users requested help more than ten times as often as younger users when using a standard Windows/mouse configuration, made more mistakes and also required significantly more confirmations than younger users. The custom interface showed improvements over the standard Windows/mouse configuration, with fewer confirmations and less help being required. Hence, there is potential for an interface that closely mimics the real world to improve computer accessibility for older adults, aiding self-discovery and learnability.
Abstract:
The work reported in this paper is motivated by the need to handle single-node failures for parallel summation algorithms in computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single-node failure scenarios are considered. The Message Passing Interface is employed for implementing the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure handling approaches.
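A minimal mpi4py sketch of the underlying idea follows: one agent per sub-task of the summation, with the partial result handed to a neighbouring node when a failure is predicted. The failure prediction is faked with an environment variable, and the paper's actual agent framework and prediction mechanism are not shown.

# Agent-style parallel summation that survives one predicted node
# failure. Run with e.g.: mpiexec -n 4 python agent_sum.py
# Set PREDICTED_FAIL_RANK=2 to simulate a predicted failure on rank 2.
import os
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000
chunk = np.arange(rank, N, size, dtype=np.int64)  # this agent's sub-task
partial = int(chunk.sum())

failing = int(os.environ.get("PREDICTED_FAIL_RANK", -1))

if failing >= 0 and size > 1:  # assumes at least two nodes
    if rank == failing:
        # Agent migrates its partial result to the next node and
        # contributes nothing further from the failing node.
        comm.send(partial, dest=(rank + 1) % size, tag=99)
        partial = 0
    elif rank == (failing + 1) % size:
        # Neighbouring agent absorbs the failing node's partial sum.
        partial += comm.recv(source=failing, tag=99)

total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    assert total == N * (N - 1) // 2  # sum of 0..N-1
    print("sum =", total)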
Abstract:
Processor virtualization for process migration in distributed parallel computing systems has formed a significant component of research on load balancing. In contrast, the potential of processor virtualization for fault tolerance has been addressed only minimally. The work reported in this paper extends the concepts of processor virtualization towards ‘intelligent cores’ as a means to achieve fault tolerance in distributed parallel computing systems. Intelligent cores are an abstraction of the hardware processing cores, with the incorporation of cognitive capabilities, on which parallel tasks can be executed and migrated. When a processing core executing a task is predicted to fail, the task being executed is proactively transferred onto another core. A parallel reduction algorithm incorporating concepts of intelligent cores is implemented on a computer cluster using Adaptive MPI and Charm++. Preliminary results confirm the feasibility of the approach.
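As a loose, purely illustrative toy model of the 'intelligent cores' idea (not the paper's Adaptive MPI/Charm++ implementation), the following Python sketch gives each core a predicted failure risk and proactively migrates the task off a core whose risk crosses a threshold:

# Toy model: cores with cognitive failure prediction and proactive
# task migration. All names and the risk model are invented.
import random

class Core:
    def __init__(self, cid):
        self.cid = cid
        self.risk = 0.0  # predicted failure risk in [0, 1]

    def sense(self):
        """Cognitive capability: update this core's failure prediction."""
        self.risk = min(1.0, self.risk + random.uniform(0.0, 0.2))
        return self.risk

THRESHOLD = 0.7
cores = [Core(i) for i in range(4)]
host = cores[0]  # core currently executing the task

for step in range(10):
    if host.sense() > THRESHOLD:
        # Proactive migration: move the task to the least-at-risk core.
        target = min((c for c in cores if c is not host),
                     key=lambda c: c.risk)
        print(f"step {step}: migrating task from core {host.cid} "
              f"(risk {host.risk:.2f}) to core {target.cid}")
        host = target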
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by implementing a parallel reduction algorithm on a computer cluster using the Message Passing Interface.