997 results for Membrane Computing
Abstract:
Anions such as Cl(-) and HCO3(-) are well known to play an important role in glucose-stimulated insulin secretion (GSIS). In this study, we demonstrate that glucose-induced Cl(-) efflux from β-cells is mediated by the Ca(2+)-activated Cl(-) channel anoctamin 1 (Ano1). Ano1 expression in rat β-cells is demonstrated by reverse transcriptase-polymerase chain reaction, western blotting, and immunohistochemistry. Typical Ano1 currents are observed in whole-cell and inside-out patches in the presence of intracellular Ca(2+): at 1 μM, the Cl(-) current is outwardly rectifying, and at 2 μM, it becomes almost linear. The relative permeabilities of monovalent anions are NO3(-) (1.83 ± 0.10) > Br(-) (1.42 ± 0.07) > Cl(-) (1.0). A linear single-channel current-voltage relationship shows a conductance of 8.37 pS. These currents are nearly abolished by blocking Ano1 antibodies or by the inhibitors 2-(5-ethyl-4-hydroxy-6-methylpyrimidin-2-ylthio)-N-(4-(4-methoxyphenyl)thiazol-2-yl)acetamide (T-AO1) and tannic acid (TA). These inhibitors induce a strong decrease in the action potential rate stimulated by 16.7 mM glucose (at least 87% in dispersed cells) and, with T-AO1, a partial membrane repolarization. They abolish or strongly inhibit the GSIS increment at 8.3 mM and at 16.7 mM glucose. Blocking Ano1 antibodies also abolish the GSIS increment at 16.7 mM glucose. Combined treatment with bumetanide and acetazolamide in low-Cl(-) and low-HCO3(-) media provokes a 65% reduction in action potential (AP) amplitude and a 15-mV AP peak repolarization. Although the mechanism triggering Ano1 opening remains to be established, the present data demonstrate that Ano1 is required to sustain glucose-stimulated membrane potential oscillations and insulin secretion.
Abstract:
Studies [Zhou, D., Chen, L.-M., Hernandez, L., Shears, S.B. and Galán, J.E. (2001) A Salmonella inositol polyphosphatase acts in conjunction with other bacterial effectors to promote host-cell actin cytoskeleton rearrangements and bacterial internalization. Mol. Microbiol. 39, 248-259] with engineered Salmonella mutants showed that deletion of SopE attenuated the pathogen's ability to deplete host-cell InsP5 and remodel the cytoskeleton. We pursued these observations: in SopE-transfected host cells, membrane ruffling was induced, but SopE did not dephosphorylate InsP5, nor did it recruit PTEN (a cytosolic InsP5 phosphatase) for this task. However, PTEN strengthened SopE-mediated membrane ruffling. We conclude that SopE promotes host-cell InsP5 hydrolysis only with the assistance of other Salmonella proteins. Our demonstration that Salmonella-mediated cytoskeletal modifications are independent of inositol phosphates will focus future studies on elucidating alternative pathogenic consequences of InsP5 metabolism, including ion channel conductance and apoptosis.
Abstract:
Review of: Rosalind W. Picard, Affective Computing
Abstract:
We report on practical experience using the Oxford BSP Library to parallelize a large electromagnetic code, the British Aerospace finite-difference time-domain code EMMA T:FD3D. The Oxford BSP Library is one of the first realizations of the Bulk Synchronous Parallel computational model to be targeted at numerically intensive scientific (typically Fortran) computing. The BAe EMMA code is one of the first large-scale applications to be parallelized using this library, and it is an important demonstration of the cost-effectiveness of the BSP approach. We illustrate how BSP cost-modelling techniques can be used to predict and optimize performance for single-source programs across different parallel platforms. We provide predicted and observed performance figures for an industrial-strength, single-source parallel code for a variety of real parallel architectures: shared-memory multiprocessors, workstation clusters and massively parallel platforms.
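For context on the cost-modelling claim: the BSP model charges each superstep the maximum local work, plus g times the maximum communication volume h, plus the barrier cost l, where g and l are benchmarked machine parameters. A minimal Python sketch of this prediction follows; the per-superstep profile and all machine numbers are purely illustrative, and this is not the Oxford BSP Library API.

    # Sketch of BSP cost prediction. One superstep costs
    #   max_i w_i + g * max_i h_i + l
    # where w_i is the local work and h_i the words communicated by
    # processor i; g and l are measured per machine.

    def bsp_cost(supersteps, g, l):
        """supersteps: list of (w, h) pairs, one per superstep, where w
        and h list per-processor work and communication volumes."""
        return sum(max(w) + g * max(h) + l for w, h in supersteps)

    # Hypothetical profile of one FD-TD iteration on 4 processors:
    # balanced stencil work and a 1000-word halo exchange each.
    step = ([5.0e6] * 4, [1000] * 4)
    print(bsp_cost([step] * 100, g=1.0, l=2000))    # shared-memory-like machine
    print(bsp_cost([step] * 100, g=8.0, l=50000))   # cluster-like machine

Because g and l differ across platforms, the same single-source work profile yields a different prediction per machine, which is how such models guide platform-specific optimization.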
Abstract:
Social network analysts have tried to capture the idea of social role explicitly by proposing a framework that gives precise conditions under which actors in a group are playing equivalent roles. They term these methods positional analysis techniques. The most general definition is regular equivalence, which captures the idea that equivalent actors are related in a similar way to equivalent alters. Regular equivalence gives rise to a whole class of partitions on a network. Given a network, we have two different computational problems. The first is how to find a particular regular equivalence. An algorithm exists to find the largest regular partition, but there are no efficient algorithms to test whether there is a regular k-partition, that is, a partition into k groups that is regular. In addition, when dealing with real data, it is unlikely that any regular partitions exist. To overcome this problem, relaxations of regular equivalence have been proposed, along with optimisation techniques to find nearly regular partitions. In this paper we review the algorithms that have been developed to find particular regular equivalences and look at some of the recent theoretical results which give insight into the complexity of finding regular partitions.
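To make the first computational problem concrete, the maximal regular partition can be found by iterative refinement, in the spirit of the CATREGE algorithm: start with everyone in one class and repeatedly split classes whose members see different sets of neighbour classes. Below is a minimal Python sketch for directed networks (the representation and names are illustrative, not any published implementation). Note that for a connected undirected graph the maximal regular partition is trivially a single class, which is one reason the relaxations mentioned above matter in practice.

    def maximal_regular_partition(out_adj):
        """out_adj: dict mapping each node to the set of its successors.
        Classes only ever split (a node's own class is part of its
        signature), so the fixed point is the maximal regular partition."""
        nodes = list(out_adj)
        in_adj = {v: set() for v in nodes}
        for u, succs in out_adj.items():
            for v in succs:
                in_adj[v].add(u)
        label = {v: 0 for v in nodes}            # start with one class
        while True:
            # signature: own class plus the sets of classes seen among
            # out-neighbours and in-neighbours
            sig = {v: (label[v],
                       frozenset(label[u] for u in out_adj[v]),
                       frozenset(label[u] for u in in_adj[v]))
                   for v in nodes}
            ids, new_label = {}, {}
            for v in nodes:                      # renumber signatures
                new_label[v] = ids.setdefault(sig[v], len(ids))
            if len(ids) == len(set(label.values())):
                return new_label                 # no class split: done
            label = new_label

    # Toy hierarchy: one boss, two managers, three workers.
    h = {'boss': {'m1', 'm2'}, 'm1': {'w1', 'w2'}, 'm2': {'w3'},
         'w1': set(), 'w2': set(), 'w3': set()}
    print(maximal_regular_partition(h))  # boss, managers, workers separate

Refinement of this kind yields only the single maximal partition; deciding whether a regular k-partition exists for a given k is a separate and, as noted above, much harder problem.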
Abstract:
In this article, suggestions are made for introducing an individual element into formative assessment of the ability to use computer software for statistics.
Abstract:
Computer equipment, once viewed as leading edge, is quickly condemned as obsolete and banished to basement store rooms or rubbish bins. The magpie instincts of some of the academics and technicians at the University of Greenwich, London, preserved some such relics in cluttered offices and garages to the dismay of colleagues and partners. When the University moved into its new campus in the historic buildings of the Old Royal Naval College in the centre of Greenwich, corridor space in King William Court provided an opportunity to display some of this equipment so that students could see these objects and gain a more vivid appreciation of their subject's history.
Abstract:
This paper addresses some controversial issues relating to two main questions. Firstly, we discuss 'man-in-the-loop' issues in SAACS. Some people advocate that man must always be in the loop so that man's decisions can override autonomic components; in this case, the system has two subsystems: man and machine. Can we, however, have a fully autonomic machine, with no man in sight, even for short periods of time? What kinds of systems require man always to be in the loop? What is the optimum balance of self- to human control? How do we determine the optimum? How far can we go in describing self-behaviour? How does a SAACS system handle unexpected behaviour? Secondly, what are the challenges and obstacles in testing SAACS in the context of the self/human dilemma? Are there any lessons to be learned from other programmes, e.g. Star Wars, aviation and space exploration? What role do human factors and behavioural models play in interacting with SAACS?
Abstract:
A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time-consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array-partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
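For a flavour of why the partitioning scheme matters here: an autocorrelation R(k) = Σ_t h[t]·h[t+k] of an n-sample impulse response costs about n−k operations at lag k, so a block split of the lag range is load-imbalanced while a cyclic split balances it naturally. The minimal Python sketch below contrasts the two schemes; it is illustrative only and stands in for the MPI/OpenMP code described in the paper.

    import numpy as np
    from multiprocessing import Pool

    def _lags(args):
        # one task: compute R(k) = sum_t h[t] * h[t+k] for its lags
        h, lags = args
        return [(k, float(np.dot(h[:h.size - k], h[k:]))) for k in lags]

    def parallel_autocorr(h, nworkers=4, scheme="cyclic"):
        n = h.size
        if scheme == "block":    # contiguous chunk of lags per worker
            chunks = [list(range(w * n // nworkers, (w + 1) * n // nworkers))
                      for w in range(nworkers)]
        else:                    # cyclic: lag k goes to worker k % nworkers
            chunks = [list(range(w, n, nworkers)) for w in range(nworkers)]
        with Pool(nworkers) as pool:
            parts = pool.map(_lags, [(h, c) for c in chunks])
        R = np.empty(n)          # reassemble the full autocorrelation
        for part in parts:
            for k, val in part:
                R[k] = val
        return R

    if __name__ == '__main__':
        h = np.random.rand(4096)
        print(parallel_autocorr(h, scheme='cyclic')[:4])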
Abstract:
This panel paper sets out to discuss what self-adaptation means, and to explore the extent to which current autonomic systems exhibit truly self-adaptive behaviour. Many of the currently cited examples are clearly adaptive, but debate remains as to what extent they are simply following prescribed adaptation rules within preset bounds, and to what extent they have the ability to truly learn new behaviour. Is there a standard test that can be applied to differentiate? Is adaptive behaviour sufficient anyway? Other autonomic computing issues are also discussed.
Abstract:
Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high, and as a result parallel algorithms on high-performance machines become one way forward. In this study we partition the matching search, which accounts for the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
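The matching search decomposes naturally because each range block's search over the domain pool is independent of every other. Below is a much-simplified Python sketch of that task shape; the study itself distributes the tasks via DCOM and .NET Remoting, and all names here are illustrative.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def match_ranges(task):
        # one task: for each assigned range block, scan all domain
        # blocks for the best affine match r ~ s*d + o (contrast s,
        # brightness o), the dominant cost in fractal compression
        ranges, domains = task
        results = []
        for ri, r in ranges:
            best = None
            for di, d in enumerate(domains):
                s, o = np.polyfit(d.ravel(), r.ravel(), 1)  # least squares
                err = float(np.mean((s * d + o - r) ** 2))
                if best is None or err < best[0]:
                    best = (err, di, s, o)
            results.append((ri, best))
        return results

    def parallel_match(ranges, domains, nworkers=4):
        # cyclic partition of the range blocks across the workers;
        # ranges is a list of (index, block) pairs
        chunks = [ranges[i::nworkers] for i in range(nworkers)]
        with ProcessPoolExecutor(nworkers) as ex:
            parts = ex.map(match_ranges, [(c, domains) for c in chunks])
        return [item for part in parts for item in part]

Each task ships only its range blocks plus the shared domain pool and returns a handful of coefficients, so communication is light relative to computation, which is consistent with the high speedup reported on loosely coupled PCs.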
Abstract:
This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language, AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies, including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables the different technologies to interoperate seamlessly, each performing different aspects of self-management within a single application. The various technologies are implemented as object components, and self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programmer interface, terminology and semantics; moreover, only a single point of embedding is required.
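The abstract does not reproduce AGILE's syntax, so the following is only a hypothetical miniature of the binding idea, in Python: components register under names, and a policy, replaceable at run time, maps observed conditions to components. None of these names reflect AGILE's actual semantics.

    class TrendAnalyser:
        def handle(self, metric):
            return "trend analysis of " + metric

    class UtilityFunction:
        def handle(self, metric):
            return "utility-based adaptation for " + metric

    class PolicyEngine:
        def __init__(self):
            self.components = {}   # name -> self-management component
            self.rules = []        # ordered (predicate, name) pairs

        def register(self, name, component):
            self.components[name] = component

        def load_policy(self, rules):
            self.rules = rules     # the run-time re-configuration point

        def dispatch(self, metric, value):
            # first matching rule decides which component handles the event
            for predicate, name in self.rules:
                if predicate(value):
                    return self.components[name].handle(metric)
            return "no rule matched: default behaviour"

    engine = PolicyEngine()
    engine.register("trend", TrendAnalyser())
    engine.register("utility", UtilityFunction())
    engine.load_policy([(lambda v: v > 0.9, "utility"),
                        (lambda v: v > 0.5, "trend")])
    print(engine.dispatch("cpu_load", 0.7))   # -> trend analysis of cpu_load

Swapping in a new rule list via load_policy rebinds behaviour at run time without touching the components, which is the kind of dynamic composability the paper describes.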
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between asset movements and buy/sell activity in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluations of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many other financial activities apart from the buying and selling of shares. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option; both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined, and numerical tests are used to demonstrate its effectiveness for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
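As a concrete illustration of why the temporal axis distributes well: a numerical Laplace inversion evaluates each time point independently from samples of the transform. The Python sketch below uses the Gaver-Stehfest inversion formula on a known test transform purely to show the distribution pattern; the chapter's actual inversion method and its two-level temporal mesh are not reproduced here.

    import math
    from multiprocessing import Pool

    def stehfest_weights(N):
        # Gaver-Stehfest weights V_k (N must be even)
        M = N // 2
        V = []
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, M) + 1):
                s += (j ** M * math.factorial(2 * j) /
                      (math.factorial(M - j) * math.factorial(j) *
                       math.factorial(j - 1) * math.factorial(k - j) *
                       math.factorial(2 * j - k)))
            V.append((-1) ** (k + M) * s)
        return V

    def invert_at(t, F=lambda s: 1.0 / (s + 1.0), N=12):
        # f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t); each t needs
        # only transform samples, so time points parallelize trivially
        ln2 = math.log(2.0)
        V = stehfest_weights(N)
        return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t)
                               for k in range(1, N + 1))

    if __name__ == '__main__':
        times = [0.5, 1.0, 2.0, 4.0]
        with Pool(4) as pool:              # distribute the time axis
            vals = pool.map(invert_at, times)
        for t, v in zip(times, vals):      # exact inverse is exp(-t)
            print(t, v, math.exp(-t))

In an option-pricing setting each worker would, analogously, evaluate the transformed pricing equation at its own transform parameters and assemble option values at its assigned times, with no communication between time points.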