923 results for XYZ compliant parallel mechanism
Abstract:
Aims/hypothesis: Recent evidence suggests that a particular gut microbial community may favour the occurrence of metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoring the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Methods: Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Results: Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria, including bifidobacteria, a dominant member of the intestinal microbiota that is seen as physiologically beneficial. As expected, HF-OFS-fed mice had fully restored quantities of bifidobacteria. HF feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia correlated significantly and negatively with Bifidobacterium spp., whereas no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated mice, Bifidobacterium spp. correlated significantly and positively with improved glucose tolerance, glucose-induced insulin secretion and a normalised inflammatory tone (decreased endotoxaemia and lower plasma and adipose tissue proinflammatory cytokines). Conclusions/interpretation: Together, these findings suggest that the gut microbiota contribute to the pathophysiological regulation of endotoxaemia and set the tone of inflammation for the occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying the gut microbiota in favour of bifidobacteria to prevent the deleterious effects of HF-diet-induced metabolic diseases.
Abstract:
Background: Cannabinoids from cannabis (Cannabis sativa) are anti-inflammatory and have inhibitory effects on the proliferation of a number of tumorigenic cell lines, some of which are mediated via cannabinoid receptors. Cannabinoid (CB) receptors are present in human skin, and anandamide, an endogenous CB receptor ligand, inhibits epidermal keratinocyte differentiation. Psoriasis is an inflammatory disease also characterised in part by epidermal keratinocyte hyper-proliferation. Objective: We investigated the plant cannabinoids Delta-9 tetrahydrocannabinol, cannabidiol, cannabinol and cannabigerol for their ability to inhibit the proliferation of a hyper-proliferating human keratinocyte cell line and for any involvement of cannabinoid receptors. Methods: A keratinocyte proliferation assay was used to assess the effect of cannabinoid treatment. Cell integrity and metabolic competence were confirmed using lactate dehydrogenase and adenosine triphosphate assays. To determine the involvement of the receptors, specific agonists and antagonists were used in conjunction with some phytocannabinoids. Western blot and RT-PCR analyses confirmed the presence of CB1 and CB2 receptors. Results: All the cannabinoids tested inhibited keratinocyte proliferation in a concentration-dependent manner. The selective CB2 receptor agonists JWH015 and BML190 elicited only partial inhibition, and the non-selective CB agonist HU210 produced a concentration-dependent response; the activity of these agonists was not blocked by either CB1 or CB2 antagonists. Conclusion: The results indicate that while CB receptors may have a circumstantial role in keratinocyte proliferation, they do not contribute significantly to this process. Our results show that cannabinoids inhibit keratinocyte proliferation, and therefore support a potential role for cannabinoids in the treatment of psoriasis. (c) 2006 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.
Abstract:
Recently we described an HPMA copolymer conjugate carrying both the aromatase inhibitor aminoglutethimide (AGM) and doxorubicin (Dox) as combination therapy. This showed markedly enhanced in vitro cytotoxicity compared to HPMA copolymer-Dox (FCE28068), a conjugate that demonstrated activity in chemotherapy-refractory breast cancer patients during early clinical trials. To better understand the superior activity of HPMA copolymer-Dox-AGM, experiments were undertaken here using MCF-7 and MCF-7ca (aromatase-transfected) breast cancer cell lines to: further probe the synergistic cytotoxic effects of AGM and Dox in free and conjugated form; compare the endocytic properties of HPMA copolymer-Dox-AGM and HPMA copolymer-Dox (binding, rate and mechanism of cellular uptake) and the rate of drug liberation by lysosomal thiol-dependent proteases (i.e. conjugate activation); and, using immunocytochemistry, compare their molecular mechanisms of action. It was clearly shown that attachment of both drugs to the same polymer backbone was a requirement for enhanced cytotoxicity. FACS studies indicated that both conjugates have a similar pattern of cell binding and endocytic uptake (at least partially via a cholesterol-dependent pathway); however, the pattern of enzyme-mediated drug liberation was distinctly different. Dox release from PK1 was linear with time, whereas the release of both Dox and AGM from HPMA copolymer-Dox-AGM was not, and the initial rate of AGM release was much faster than that seen for the anthracycline. Immunocytochemistry showed that both conjugates decreased the expression of Ki67. However, this effect was more marked for HPMA copolymer-Dox-AGM and, moreover, only this conjugate decreased the expression of the anti-apoptotic protein bcl-2. In conclusion, the superior in vitro activity of HPMA copolymer-Dox-AGM cannot be attributed to differences in endocytic uptake, and it seems likely that the synergistic effect of Dox and AGM is due to the kinetics of intracellular drug liberation, which leads to enhanced activity. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
A novel two-step paradigm was used to investigate the parallel programming of consecutive, stimulus-elicited ('reflexive') and endogenous ('voluntary') saccades. The mean latency of voluntary saccades, made following the first reflexive saccades in two-step conditions, was significantly reduced compared to that of voluntary saccades made in the single-step control trials. The latency of the first reflexive saccades was modulated by the requirement to make a second saccade: first saccade latency increased when a second voluntary saccade was required in the opposite direction to the first saccade, and decreased when a second saccade was required in the same direction as the first reflexive saccade. A second experiment confirmed the basic effect and also showed that a second reflexive saccade may be programmed in parallel with a first voluntary saccade. The results support the view that voluntary and reflexive saccades can be programmed in parallel on a common motor map. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Carruthers' "mindreading is prior" model postulates one unitary mindreading mechanism working identically for self and other. While we agree about shared mindreading mechanisms, there is also evidence from neuroimaging and from mentalizing about dissimilar others that suggests factors that differentially affect self- versus other-mentalizing. Such dissociations suggest greater complexity than the "mindreading is prior" model allows.
Abstract:
The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including object orientation and platform independence, as well as built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high-performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.
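As an illustration of the built-in thread support mentioned above, the following minimal Java sketch (not drawn from the paper, and independent of MPJ Express or Tycho) splits a summation across a fixed thread pool using the standard java.util.concurrent API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Minimal illustration of Java's built-in threading: the work is split across
    // a fixed thread pool and the partial results are combined.
    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            int[] data = new int[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i % 100;

            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            int chunk = data.length / workers;

            List<Future<Long>> partials = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int from = w * chunk;
                final int to = (w == workers - 1) ? data.length : from + chunk;
                // Each task sums its own slice of the array on a separate thread.
                partials.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }));
            }

            long total = 0;
            for (Future<Long> f : partials) total += f.get(); // combine partial sums
            pool.shutdown();
            System.out.println("sum = " + total);
        }
    }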
Abstract:
Improving the admittance of robotic joints is a key issue in making rehabilitation robots safe. This paper describes the design of a Redundant Drive Joint (RD-Joint) which allows greater flexibility in the design of robotic mechanisms. The design strategy of the RD-Joint employs a systematic approach which consists of 1) adopting a redundant joint mechanism with internal kinematic redundancy to reduce effective joint inertia, and 2) adopting an adjustable admittance mechanism with a novel Cross-link Reduction Mechanism and mechanical springs and dampers as a passive second actuator. First, the basic concepts used to construct the redundant drive joint mechanism are explained, in particular the method that allows a reduction in effective inertia at the output joint. The basic structure of the RD-Joint is introduced based on the idea of reduced inertia, along with a method to include effective stiffness and damping. Then, the basic design of the adjustable admittance mechanism is described. Finally, a prototype of the RD-Joint is described and its expected characteristics are discussed.
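For context only (a textbook formulation, not taken from the paper), joint admittance is commonly characterised as the transfer function from applied torque to joint velocity of a second-order inertia-damper-spring system; the adjustable springs and dampers of the RD-Joint then correspond to tuning the damping and stiffness terms:

    % Illustrative second-order admittance model (assumed standard form, not from the paper)
    % tau(s): applied torque, \dot{\theta}(s): joint velocity
    Y(s) \;=\; \frac{\dot{\theta}(s)}{\tau(s)} \;=\; \frac{s}{J_{\mathrm{eff}}\, s^{2} + B\, s + K}
    % J_eff: effective joint inertia (reduced by the redundant drive),
    % B, K: adjustable damping and stiffness of the passive admittance mechanism.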
Abstract:
The design space of emerging heterogeneous multi-core architectures with reconfigurable elements makes it feasible to design mixed fine-grained and coarse-grained parallel architectures. This paper presents a hierarchical composite array design which extends the current design space of regular array design by combining a sequence of transformations. This technique is applied to derive a new design of a pipelined parallel regular array with different dataflow between phases of computation.
Abstract:
This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced hashtables into video processing and completed a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The paper discusses how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
Abstract:
This paper presents a parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MV) are generated from the first-pass LHMEA and are used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced hashtables into video processing and completed a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The paper discusses how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
Abstract:
This paper presents an improved parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. Motion Vectors (MV) are generated from the first-pass LHMEA and used as predictors for second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced hashtables into video processing and completed a parallel implementation. The hashtable structure of LHMEA is improved compared to the original TPA and LHMEA. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The implementation contains spatial and temporal approaches. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
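To make the two-pass idea concrete, the sketch below shows a generic second-pass hexagonal refinement around a predicted motion vector, such as one supplied by a first-pass predictor like LHMEA. The class name, the sum-of-absolute-differences cost and the search offsets are illustrative assumptions rather than the paper's exact method:

    // Illustrative sketch only: hexagonal-search refinement of a predicted motion
    // vector. The geometry and SAD cost are generic, not the paper's exact TPA/LHMEA code.
    public class HexSearchSketch {
        // Large hexagon offsets for the coarse stage, small cross for the final step.
        static final int[][] LARGE_HEX = {{-2,0},{-1,2},{1,2},{2,0},{1,-2},{-1,-2}};
        static final int[][] SMALL_STEP = {{-1,0},{1,0},{0,-1},{0,1}};

        // Sum of absolute differences between the n x n block at (bx, by) in the
        // current frame and the block displaced by (dx, dy) in the reference frame.
        static int sad(int[][] cur, int[][] ref, int bx, int by, int dx, int dy, int n) {
            int cost = 0;
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++) {
                    int ry = by + y + dy, rx = bx + x + dx;
                    if (ry < 0 || rx < 0 || ry >= ref.length || rx >= ref[0].length)
                        return Integer.MAX_VALUE; // candidate falls outside the reference frame
                    cost += Math.abs(cur[by + y][bx + x] - ref[ry][rx]);
                }
            return cost;
        }

        // Refine the predicted motion vector (px, py) for the n x n block at (bx, by),
        // assuming the block itself lies inside the current frame.
        static int[] refine(int[][] cur, int[][] ref, int bx, int by, int px, int py, int n) {
            int bestX = px, bestY = py;
            int bestCost = sad(cur, ref, bx, by, bestX, bestY, n);
            boolean moved = true;
            while (moved) {                        // coarse stage: walk the large hexagon
                moved = false;
                for (int[] o : LARGE_HEX) {
                    int c = sad(cur, ref, bx, by, bestX + o[0], bestY + o[1], n);
                    if (c < bestCost) { bestCost = c; bestX += o[0]; bestY += o[1]; moved = true; break; }
                }
            }
            for (int[] o : SMALL_STEP) {           // fine stage: one small refinement pass
                int c = sad(cur, ref, bx, by, bestX + o[0], bestY + o[1], n);
                if (c < bestCost) { bestCost = c; bestX += o[0]; bestY += o[1]; }
            }
            return new int[]{bestX, bestY};        // refined motion vector
        }
    }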
Abstract:
A Neural Mass model is coupled with a novel method to generate realistic Phase reset ERPs. The power spectra of these synthetic ERPs are compared with the spectra of real ERPs and synthetic ERPs generated via the Additive model. Real ERP spectra show similarities with synthetic Phase reset ERPs and synthetic Additive ERPs.
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model. The chosen illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to the stratified Monte Carlo integration method for parallel solution of the rendering equation. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation error (systematic error), which fixes the number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we solve multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows easy parallel realisation and leads to improved convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
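For reference, the rendering equation over the hemisphere and the Neumann series expansion referred to above can be written in their standard form (standard notation, not copied from the paper):

    % Rendering equation over the hemisphere \Omega at a surface point x:
    L(x, \omega_o) \;=\; L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, \cos\theta_i \,\mathrm{d}\omega_i
    % Writing this as L = L_e + T L, with T the integral (transport) operator,
    % the Neumann series represents the solution as an infinite sum of integrals:
    L \;=\; \sum_{k=0}^{\infty} T^{k} L_e \;\approx\; \sum_{k=0}^{N} T^{k} L_e
    % Truncating at N introduces the systematic (truncation) error and fixes the
    % number of iterations of the Monte Carlo scheme.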
Abstract:
Sampling a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First, we apply the symmetry property to partition the hemisphere and sphere. The domain of solid angle subtended by a hemisphere is divided into a number of equal sub-domains. Each sub-domain represents the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square. Similarly to Arvo's algorithm for sampling an arbitrary spherical triangle, the suggested algorithms accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the whole hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
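As a simple point of comparison (a minimal sketch under standard assumptions; it uses the classical mapping of the unit square onto the hemisphere rather than the paper's orthogonal-spherical-triangle algorithms), uniform and stratified sampling of the unit hemisphere can be written in Java as follows:

    import java.util.Random;

    // Illustrative only: classical uniform sampling of the unit hemisphere by mapping
    // the unit square [0,1)^2 onto directions; not the paper's triangle-based algorithms.
    public class HemisphereSampling {
        // Map (u, v) in the unit square to a unit vector on the upper hemisphere (z >= 0),
        // uniformly with respect to solid angle (cos(theta) is uniform in [0,1)).
        static double[] sampleHemisphere(double u, double v) {
            double z = u;
            double r = Math.sqrt(Math.max(0.0, 1.0 - z * z));
            double phi = 2.0 * Math.PI * v;
            return new double[]{ r * Math.cos(phi), r * Math.sin(phi), z };
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            int n = 4; // n x n stratified grid over the unit square
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    // Stratified sampling: one jittered sample per cell of the unit square.
                    double u = (i + rng.nextDouble()) / n;
                    double v = (j + rng.nextDouble()) / n;
                    double[] d = sampleHemisphere(u, v);
                    System.out.printf("%.4f %.4f %.4f%n", d[0], d[1], d[2]);
                }
            }
        }
    }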