997 results for Membrane Computing
Abstract:
Immobilized liposome chromatography (ILC), whose stationary phase has been regarded as a biomembrane-mimicking system, was used to separate and analyze compounds interacting with liposome membranes in Danggui Buxue decoction, a combined prescription of traditional Chinese medicines (CPTCMs), and in its component herbs Radix Astragali and Radix Angelicae Sinensis. More than 10 main peaks in the extract of Danggui Buxue decoction were resolved on the ILC column, suggesting that more than 10 components of the prescription have significant retention on the column. Ligustilide, astragaloside IV and formononetin, three main bioactive ingredients in Danggui Buxue decoction, were found to have relatively strong retention on the ILC column, while ferulic acid, another bioactive ingredient in the prescription, showed relatively weak retention. The effects of eluent pH and of the amount of immobilized phosphatidylcholine (PC) on the separation of interacting compounds in the extract of Danggui Buxue decoction were also investigated; both factors strongly affected the retention of some of these compounds. In addition, the fractions partitioned with different solvents from the water extract of this combined prescription were evaluated with the same ILC column system.
Abstract:
Cloud computing is the technology prescription that will help the UK’s National Health Service (NHS) beat the budget constraints imposed as a consequence of the credit crunch. The internet-based shared data and services resource will revolutionise the management of medical records and patient information while saving the NHS millions of pounds.
Abstract:
Simon, B., Hanks, B., Murphy, L., Fitzgerald, S., McCauley, R., Thomas, L., and Zander, C. 2008. Saying isn't necessarily believing: influencing self-theories in computing. In Proceedings of the Fourth International Workshop on Computing Education Research (Sydney, Australia, September 6-7, 2008). ICER '08. ACM, New York, NY, 173-184.
Abstract:
Eckerdal, A., McCartney, R., Moström, J. E., Sanders, K., Thomas, L., and Zander, C. 2007. From Limen to Lumen: computing students in liminal spaces. In Proceedings of the Third International Workshop on Computing Education Research (Atlanta, Georgia, USA, September 15-16, 2007). ICER '07. ACM, New York, NY, 123-132.
Abstract:
STUDY QUESTION. Are significant abnormalities in outward (K+) conductance and resting membrane potential (Vm) present in the spermatozoa of patients undertaking IVF and ICSI and, if so, what is their functional effect on fertilization success?
SUMMARY ANSWER. Negligible outward conductance (≈5% of patients) or an enhanced inward conductance (≈4% of patients), both of which caused depolarization of Vm, were associated with a low rate of fertilization following IVF.
WHAT IS KNOWN ALREADY. Sperm-specific potassium channel knockout mice are infertile with defects in sperm function, suggesting that these channels are essential for fertility. These observations suggest that malfunction of K+ channels in human spermatozoa might contribute significantly to the occurrence of subfertility in men. However, remarkably little is known of the nature of K+ channels in human spermatozoa or of the incidence and functional consequences of K+ channel defects.
STUDY DESIGN, SIZE AND DURATION. Spermatozoa were obtained from healthy volunteer research donors and from subfertile IVF and ICSI patients attending a hospital assisted reproductive techniques clinic between May 2013 and December 2015. In total, 40 IVF patients, 41 ICSI patients and 26 normozoospermic donors took part in the study.
PARTICIPANTS/MATERIALS, SETTING, METHODS. Samples were examined using electrophysiology (whole-cell patch clamping). Where abnormal electrophysiological characteristics were identified, spermatozoa were further examined for Ca2+ influx induced by progesterone and for penetration into viscous media, if sufficient sample was available. Full exome sequencing was performed to specifically evaluate the potassium calcium-activated channel subfamily M α 1 (KCNMA1), potassium calcium-activated channel subfamily U member 1 (KCNU1) and leucine-rich repeat containing 52 (LRRC52) genes, and others associated with K+ signalling. In IVF patients, comparison with fertilization rates was made to assess the functional significance of the electrophysiological abnormalities.
MAIN RESULTS AND THE ROLE OF CHANCE. Patch clamp electrophysiology was used to assess outward (K+) conductance and resting membrane potential (Vm), and signalling/motility assays were used to assess functional characteristics of sperm from IVF and ICSI patient samples. The mean Vm and outward membrane conductance in sperm from IVF and ICSI patients were not significantly different from those of control (donor) sperm prepared under the same conditions, but variation between individuals was significantly greater (P < 0.02), with a large number of outliers (>25%). In particular, in ≈10% of patients (7/81) we observed either a negligible outward conductance (4 patients) or an enhanced inward current (3 patients), both of which caused depolarization of Vm. Analysis of clinical data from the IVF patients showed a significant association of depolarized Vm (≥0 mV) with low fertilization rate (P = 0.012). Spermatozoa with electrophysiological abnormalities (conductance and Vm) responded normally to progesterone, with elevation of [Ca2+]i and penetration of viscous medium, indicating retention of cation channel of sperm (CatSper) channel function.
LIMITATIONS, REASONS FOR CAUTION. For practical, technical, ethical and logistical reasons, we could not obtain sufficient additional semen samples from men with conductance abnormalities to establish the cause of the conductance defects. Full exome sequencing was available for only two men with conductance defects.
WIDER IMPLICATIONS OF THE FINDINGS. These data add significantly to the understanding of the role of ion channels in human sperm function and of their impact on male fertility. Impaired potassium channel conductance (Gm) and/or Vm regulation is both common and complex in human spermatozoa and, importantly, is associated with impaired fertilization capacity when the Vm of cells is completely depolarized.
Abstract:
The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed-memory computers. Most systems provide a shared memory that is coherent in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared memory in which the coherence constraints are substantially relaxed. In the course of developing the model, I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior. Thus, one can trade off the ease of programming with coherent memory for improved performance with non-coherent memory. As an example, I present a program to solve a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation-solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations, the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem, while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
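To make the asynchronous iterative idea concrete, here is a minimal sketch in Python (Mermera's actual API is not shown; the array layout, worker partitioning and iteration count are illustrative assumptions). Each worker repeatedly updates its own unknowns of a diagonally dominant linear system while reading possibly stale values written by the other workers, which is exactly the access pattern that tolerates non-coherent memory.

```python
import threading
import numpy as np

def make_system(n, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    A += n * np.eye(n)  # strictly diagonally dominant, so Jacobi-style iteration converges
    b = rng.random(n)
    return A, b

def worker(A, b, x, rows, iters):
    for _ in range(iters):
        for i in rows:
            # Reads of x may see stale values written by other workers;
            # asynchronous iterative methods tolerate exactly this.
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]

n, n_workers, iters = 1000, 4, 50
A, b = make_system(n)
x = np.zeros(n)  # shared state, deliberately unsynchronized
chunks = np.array_split(np.arange(n), n_workers)
threads = [threading.Thread(target=worker, args=(A, b, x, rows, iters))
           for rows in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("residual norm:", np.linalg.norm(A @ x - b))
```

On a coherent machine this is ordinary multithreaded Jacobi iteration; the point of Mermera-style memories is that the same pattern remains correct even when reads of x are allowed to return stale values.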
Abstract:
Programmers of parallel processes that communicate through shared globally distributed data structures (DDS) face a difficult choice. Either they must explicitly program DDS management, by partitioning or replicating it over multiple distributed memory modules, or they must be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS's distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared-memory behaviors to coexist between the above two extremes. Our approach allows us to define known noncoherent memories in a new, simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming with multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program to solve linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, that is, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
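As one concrete illustration of the middleware ideas in level (2), here is a hedged sketch (the log format and function names are invented for illustration, not the project's interfaces): it learns first-order document transition counts from an access log and nominates a likely next document for speculative prefetching.

```python
from collections import Counter, defaultdict

# First-order transition counts between documents, learned from an access log.
transitions = defaultdict(Counter)

def observe(access_log):
    """Record which document tends to follow which in the log."""
    for prev, nxt in zip(access_log, access_log[1:]):
        transitions[prev][nxt] += 1

def prefetch_candidate(current, min_count=2):
    """Most frequent successor of `current`, if it has been seen often enough."""
    if not transitions[current]:
        return None
    doc, count = transitions[current].most_common(1)[0]
    return doc if count >= min_count else None

observe(["/index", "/docs", "/api", "/index", "/docs", "/api"])
print(prefetch_candidate("/docs"))  # -> /api
```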
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error-detection mechanism, which does not distinguish between different types of errors and thus does not suffice for heterogeneous wired/wireless environments; (ii) its error recovery, which is not responsive to the distinctive characteristics of wireless networks, such as transient or burst errors due to handoffs and fading channels; and (iii) its protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has so far not attracted the attention it requires: the need for new performance metrics appropriate for evaluating the impact of protocol strategies on battery-powered devices.
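The closing point about metrics can be made concrete with a small sketch (the function names and example numbers are illustrative assumptions, not from the paper): goodput counts only successfully delivered, non-retransmitted data, while energy per useful bit exposes the cost of wasteful retransmission on a battery-powered device.

```python
def goodput(useful_bytes, elapsed_s):
    """Application-level throughput in bits/s, excluding retransmitted data."""
    return 8 * useful_bytes / elapsed_s

def energy_per_useful_bit(tx_joules, rx_joules, useful_bytes):
    """Energy cost per successfully delivered bit; lower is better on batteries."""
    return (tx_joules + rx_joules) / (8 * useful_bytes)

# Example: a transfer that retransmits heavily can waste energy even
# when its throughput looks acceptable.
print(goodput(useful_bytes=5_000_000, elapsed_s=40))                # 1e6 bits/s
print(energy_per_useful_bit(tx_joules=12.0, rx_joules=3.5,
                            useful_bytes=5_000_000))                # ~3.9e-7 J/bit
```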
Abstract:
A difficulty in lung image registration is accounting for changes in the size of the lungs due to inspiration. We propose two methods for computing a uniform scale parameter for use in lung image registration that account for size change. A scaled rigid-body transformation allows analysis of corresponding lung CT scans taken at different times and can serve as a good low-order transformation to initialize non-rigid registration approaches. Two different features are used to compute the scale parameter. The first method uses lung surfaces. The second uses lung volumes. Both approaches are computationally inexpensive and improve the alignment of lung images over rigid registration. The two methods produce different scale parameters and may highlight different functional information about the lungs.
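A hedged sketch of how such a uniform scale parameter could be computed follows. The volume-based estimate uses the cube root of the segmented-lung volume ratio; the surface-based variant shown here (square root of the surface-area ratio) is an assumption for illustration and not necessarily the paper's exact feature computation.

```python
def scale_from_volumes(vol_fixed_mm3: float, vol_moving_mm3: float) -> float:
    # Isotropic scale s such that s^3 * V_moving = V_fixed.
    return (vol_fixed_mm3 / vol_moving_mm3) ** (1.0 / 3.0)

def scale_from_surfaces(area_fixed_mm2: float, area_moving_mm2: float) -> float:
    # Isotropic scale s such that s^2 * A_moving = A_fixed.
    return (area_fixed_mm2 / area_moving_mm2) ** 0.5

# Example: inspiration roughly doubles lung volume.
print(scale_from_volumes(6.0e6, 3.0e6))   # ~1.26
print(scale_from_surfaces(9.5e4, 6.0e4))  # ~1.26 if shape is roughly preserved
```

The two estimates agree only when the lungs scale isotropically; their disagreement is one way the two methods can highlight different functional information, as the abstract notes.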
Abstract:
Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer-vision-based strategy in which features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated into pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
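A minimal sketch of the kind of feature-to-pointer mapping described above (the class and parameter names are invented for illustration; the paper's actual mapping is more elaborate): small displacements of a tracked facial feature are amplified by a gain and clamped to the screen bounds, so very limited physical movement can still reach every screen region.

```python
class FeatureToPointer:
    def __init__(self, screen_w, screen_h, gain=8.0):
        self.w, self.h, self.gain = screen_w, screen_h, gain
        self.px, self.py = screen_w / 2, screen_h / 2  # pointer starts centered
        self.last = None                               # last feature position

    def update(self, fx, fy):
        """fx, fy: tracked feature position in camera coordinates."""
        if self.last is not None:
            dx, dy = fx - self.last[0], fy - self.last[1]
            # Amplify the relative motion, then clamp to the screen.
            self.px = min(max(self.px + self.gain * dx, 0), self.w - 1)
            self.py = min(max(self.py + self.gain * dy, 0), self.h - 1)
        self.last = (fx, fy)
        return self.px, self.py

m = FeatureToPointer(1920, 1080)
m.update(100.0, 80.0)           # first frame sets the reference position
print(m.update(102.5, 80.5))    # small feature motion -> amplified pointer move
```

A relative (velocity-style) mapping like this is one common design choice; an absolute mapping would tie each feature position to a fixed screen location and could not cover the whole screen under limited movement.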
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many such situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Even though explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them; we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow, and we present algorithms to compute such relaxations in times compatible with interactivity, achieved by making use of different types of compiled representations interchangeably. We propose to generalise the concept of prime implicates to constraint problems via the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, orthogonally to existing compilation techniques that represent large sets of solutions.
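The flavour of a representative set of relaxations can be shown with a toy sketch (the catalogue, requirement names and brute-force enumeration are illustrative assumptions; the thesis develops far more efficient, compilation-based algorithms): for three jointly unsatisfiable requirements, the maximal satisfiable subsets show the user at least one way to satisfy each requirement.

```python
from itertools import combinations

# A tiny product catalogue and three user requirements over it.
catalogue = [
    {"color": "red",  "size": "S", "price": 10},
    {"color": "red",  "size": "L", "price": 25},
    {"color": "blue", "size": "L", "price": 15},
]
requirements = {
    "cheap": lambda p: p["price"] <= 12,
    "large": lambda p: p["size"] == "L",
    "blue":  lambda p: p["color"] == "blue",
}

def satisfiable(names):
    """True if some product meets every named requirement."""
    return any(all(requirements[n](p) for n in names) for p in catalogue)

def maximal_relaxations():
    """Enumerate subsets largest-first, keeping maximal satisfiable ones."""
    names = list(requirements)
    found = []
    for k in range(len(names), 0, -1):
        for subset in combinations(names, k):
            if satisfiable(subset) and not any(set(subset) <= set(f) for f in found):
                found.append(subset)
    return found

print(maximal_relaxations())  # -> [('large', 'blue'), ('cheap',)]
```

Here "large" and "blue" can be satisfied together only by giving up "cheap", and "cheap" only by giving up the other two, which is precisely the information an interactive configurator needs to present.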
Abstract:
Science Foundation Ireland (CSET - Centre for Science, Engineering and Technology, Grant No. 07/CE/11147)
Abstract:
The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application-specific: they were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and capable of executing a range of high-performance applications. This thesis coins the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers face a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems to meet their future objectives. The methodology is a fusion of two independent yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols to help handheld devices effectively manage and utilise these reconfigurable hardware resources within their environment. The thesis asserts that the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.
Abstract:
The work in this thesis concerns the advanced development of polymeric membranes of two types, pervaporation and lateral-flow, the former produced by a solution-casting method and the latter by phase separation. All membranes were produced from casting lacquers. Early research centred on the development of viable membranes. This led to a supported polymer-blend pervaporation membrane whose selective layer was a plasticized 4:1 mass-ratio sodium alginate:poly(vinyl alcohol) polymer blend. Using this membrane, pervaporation separation of ethanol/water mixtures was carefully monitored as a function of film thickness and time. Contrary to literature expectations, these films showed increased selectivity and decreased flux as film thickness was reduced. It is argued that the morphology and structure of the polymer blend change with thickness and that these changes define membrane efficiency. Mixed-matrix membranes were developed by incorporating spherical, discrete, size-monodisperse mesoporous silica particles of 1.8-2 μm diameter, with pore diameters of ~1.8 nm, into a poly(vinyl alcohol) [PVA] matrix. Inclusion of silica benefitted pervaporation performance for the dehydration of ethanol, improving flux and selectivity in all but the highest-silica-content samples. Early lateral-flow membrane research produced a membrane from a basic lacquer composition required for phase inversion: polymer, solvent and non-solvent. Results showed that bringing lacquers to the cloud point benefits both the pore structure and the skin layers of the membranes. Further work showed that incorporating ethanol as a mesosolvent into the lacquer effectively enhances membrane pore structure, resulting in improved lateral flow rates in the final membranes. This project details the formation mechanics of pervaporation and lateral-flow membranes and how these can be controlled. The principal methods of control can be applied to the formation of any other flat-sheet polymer membranes, opening many avenues for future membrane research and industrial application.