88 results for Lysosomotropic Agents
Abstract:
We describe a parallel model of computation, which we call 'computing with bio-agents', based on the idea that the motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometry of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on this basis, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they can compute any Boolean function in parallel. We also argue that, under appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
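The universality claim rests on routing agents through geometric junctions that act as logic gates, with an agent emerging on a channel encoding logical truth. A minimal sketch of that idea (hypothetical gate network and names, not the paper's actual construction) is to propagate agent presence through a list of AND/OR/NOT junctions:

```python
# Sketch (hypothetical, not the paper's construction): each geometric
# junction is modelled as a gate; an agent present on a channel means "true".
# Agent presence propagates through the junctions in order.

def run_network(gates, inputs):
    """gates: list of (output_channel, op, *input_channels).
    inputs: dict mapping input channels to agent presence (bool)."""
    channels = dict(inputs)
    for name, op, *srcs in gates:
        vals = [channels[s] for s in srcs]
        if op == "AND":
            channels[name] = all(vals)      # agents must arrive on all inputs
        elif op == "OR":
            channels[name] = any(vals)      # an agent on any input suffices
        else:  # "NOT": junction emits an agent only when none arrives
            channels[name] = not vals[0]
    return channels

# XOR built from the universal basis {AND, OR, NOT}:
# a XOR b = (a OR b) AND NOT (a AND b)
net = [("or1", "OR", "a", "b"),
       ("and1", "AND", "a", "b"),
       ("n1", "NOT", "and1"),
       ("out", "AND", "or1", "n1")]
print(run_network(net, {"a": True, "b": False})["out"])  # True
```

Since {AND, OR, NOT} is a universal basis, any Boolean function can be laid out this way, which is the sense of "computationally universal" used in the abstract.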
Abstract:
Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but these are still unaware of important game events or environmental conditions. This paper presents an agent design we have developed, which combines cellular automata for environmental modeling with influence maps for agent decision-making. The agents were implemented in a 3D game environment we have developed, the EmerGEnT system, and tuned through three experiments. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g., rain or fire) while pursuing a goal.
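The two techniques named in the abstract fit together naturally: a cellular automaton updates the environment (e.g., fire spreading), and an influence map scores candidate cells by combining goal attraction with environmental danger. The sketch below is a hypothetical minimal version (toy grid, invented weights; not the EmerGEnT implementation):

```python
# Sketch (hypothetical weights and grid size, not the EmerGEnT system):
# a CA spreads fire over a grid; an influence map blends goal attraction
# with fire repulsion to choose the agent's next step.

GRID = 8

def neighbours(x, y):
    return [(nx, ny) for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
            if 0 <= nx < GRID and 0 <= ny < GRID]

def step_fire(fire):
    """CA rule: a cell ignites if any 4-neighbour is burning."""
    new = [row[:] for row in fire]
    for y in range(GRID):
        for x in range(GRID):
            if any(fire[ny][nx] for nx, ny in neighbours(x, y)):
                new[y][x] = True
    return new

def influence(cell, goal, fire):
    """Higher is better: goal pulls the agent, fire pushes it away."""
    x, y = cell
    score = -(abs(x - goal[0]) + abs(y - goal[1]))        # goal attraction
    for fy in range(GRID):
        for fx in range(GRID):
            if fire[fy][fx]:
                score -= 4.0 / (1 + abs(x - fx) + abs(y - fy))  # danger
    return score

def next_move(agent, goal, fire):
    """Greedy step: move to the neighbouring cell with the best influence."""
    return max(neighbours(*agent), key=lambda c: influence(c, goal, fire))
```

In use, the game loop alternates `step_fire` with `next_move`, so the agent detours around spreading fire while still heading for its goal.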
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users employ automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration, and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
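A statistic like the 18%-unique-terms finding can be computed directly from a query log. The sketch below (toy queries, not the Excite/AltaVista data; one plausible reading of "unique" as terms occurring exactly once) shows the kind of term-level analysis involved:

```python
# Sketch (toy data, not the study's logs): term-level statistics over a
# query log, treating "unique" terms as those that occur exactly once.

from collections import Counter

def term_stats(queries):
    """Return (total term occurrences, distinct terms, fraction unique)."""
    terms = [t for q in queries for t in q.lower().split()]
    counts = Counter(terms)
    unique = sum(1 for t in counts if counts[t] == 1)
    return len(terms), len(counts), unique / len(counts)

queries = ["cheap flights", "cheap hotels", "weather paris", "cheap flights"]
total, distinct, frac = term_stats(queries)
# total=8 occurrences, distinct=5 terms, 3 of which occur once -> frac=0.6
```

Scaled to roughly 900,000 queries, the same counting approach yields the variety-of-information figures the study reports.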
Abstract:
Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Most games use agents that have a basic awareness of the player and other agents, but are still unaware of important game events or environmental conditions. This article describes an agent design that combines cellular automata for environmental modeling with influence maps for agent decision-making. The result is simple, flexible game agents that are able to respond to dynamic changes to the environment (e.g., rain or fire) while pursuing a goal.
Abstract:
Deep Raman spectroscopy has been utilized for the standoff detection of concealed chemical threat agents from a distance of 15 meters under real-life background illumination conditions. By using combined time- and space-resolved measurements, various explosive precursors hidden in opaque plastic containers were identified non-invasively. Our results confirm that combined time- and space-resolved Raman spectroscopy yields higher selectivity towards the sub-layer over the surface layer, as well as enhanced rejection of fluorescence from the container surface, when compared to standoff spatially offset Raman spectroscopy. Raman spectra with minimal interference from the packaging material and good signal-to-noise ratio were acquired within 5 seconds of measurement time. A new combined time- and space-resolved Raman spectrometer has been designed with nanosecond laser excitation and gated detection, making it of lower cost and complexity than picosecond-based laboratory systems.
Abstract:
Background: Catheter ablation for atrial fibrillation (AF) is more efficacious than antiarrhythmic therapy. Post-ablation recurrences reduce ablation effectiveness and are partly caused by lesion discontinuity in the fibrotic linear ablation lesions. The anti-fibrotic role of statins in reducing AF is being assessed in current trials. By reducing the chronic pathological fibrosis that occurs in AF, they may reduce AF. However, if statins also affect the acute therapeutic fibrosis of an ablation, this could exacerbate lesion discontinuity and AF recurrence. We tested the hypothesis that statins attenuate ablation lesion continuity in a recognised pig atrial linear ablation model. Aims: To assess whether atorvastatin diminishes the bi-directional conduction block produced by a linear atrial ablation lesion. Methods: Sixteen pigs were randomised to statin (n=8) or placebo (n=8), with drug pre-treatment for 3 days and a further 4 weeks. At the initial electrophysiological study (EPS1), 3D right atrium (RA) mapping and a vertical linear ablation lesion in the posterior RA with bidirectional conduction block were completed (Gepstein Circ 1999). Follow-up electrophysiological assessment (EPS2) at 28 days assessed maintenance of bidirectional conduction block. Results: Data from 15/16 (statin=7) pigs were analysed. Mean lesion length was 3.7 ± 0.8 cm, with a mean of 17.9 ± 5.7 lesion applications. Bi-directional conduction block was confirmed in 15/15 pigs (100%) at EPS1 and EPS2. Conclusions: Atorvastatin did not affect ablation lesion continuity in this pig atrial linear ablation model. If patients are on long-term statins for AF reduction, peri-ablation cessation is probably not necessary.
Abstract:
The ubiquitin (Ub)-proteasome pathway is the major nonlysosomal pathway of proteolysis in human cells and accounts for the degradation of most short-lived, misfolded or damaged proteins. This pathway is important in the regulation of a number of key biological regulatory mechanisms. Proteins are usually targeted for proteasome-mediated degradation by polyubiquitinylation, the covalent addition of multiple units of the 76 amino acid protein Ub, which are ligated to ε-amino groups of lysine residues in the substrate. Polyubiquitinylated proteins are degraded by the 26S proteasome, a large, ATP-dependent multicatalytic protease complex, which also regenerates monomeric Ub. The targets of this pathway include key regulators of cell proliferation and cell death. An alternative form of the proteasome, termed the immunoproteasome, also has important functions in the generation of peptides for presentation by MHC class I molecules. In recent years there has been a great deal of interest in the possibility that proteasome inhibitors, through elevation of the levels of proteasome targets, might prove useful as a novel class of anti-cancer drugs. Here we review the progress made to date in this area and highlight the potential advantages and weaknesses of this approach.
Abstract:
This article addresses in depth the question of whether section 420A of the Corporations Act 2001 (Cth) imposes ‘strict liability’ upon a controller for the failure of an agent or expert to take reasonable care. The weight of existing authority appears to suggest that controllers are liable under s 420A for the carelessness of their agents or expert advisers. However, a closer analysis of the text of the provision and relevant Australian and UK case law demonstrates that this aspect of the statutory construction of s 420A remains very much an open question. This article ultimately contends for a construction of s 420A which requires a controller to adequately supervise and scrutinise, but which does not render a blameless controller strictly liable for all careless acts and omissions of agents and expert advisers.