996 results for Inverse modelling
Abstract:
Many G protein-coupled receptors have been shown to exist as oligomers, but the oligomerization state and its effects on receptor function are unclear. For some G protein-coupled receptors, different radioligands provide different maximal binding capacities in ligand binding assays. Here we have developed mathematical models for co-expressed dimeric and tetrameric species of receptors. We have considered models in which the dimers and tetramers are in equilibrium and in which they do not interconvert, and we have also considered the potential influence of ligands on the degree of oligomerization. By analogy with agonist efficacy, we have considered ligands that promote, inhibit or have no effect on oligomerization. Cell surface receptor expression and the intrinsic capacity of receptors to oligomerize are quantitative parameters of the equations. The models can account for differences in the maximal binding capacities of radioligands in different preparations of receptors and provide a conceptual framework for simulation and data fitting in complex oligomeric receptor situations.
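The dimer-tetramer equilibrium described in this abstract can be illustrated with a minimal mass-action sketch. This is not the paper's actual model; the function name, the association constant `k_assoc` and the 2 D ⇌ T scheme are assumptions chosen for illustration only:

```python
import math

def dimer_tetramer_fractions(r_total, k_assoc):
    """Mass-action equilibrium 2 D <-> T with K = [T]/[D]^2.

    r_total : total receptor in dimer equivalents, r_total = [D] + 2[T].
    Returns (fraction of receptor in dimers, fraction in tetramers).
    Solving 2K*d^2 + d - r_total = 0 for the free dimer concentration d.
    """
    if k_assoc == 0.0:
        return 1.0, 0.0  # no oligomerization beyond dimers
    d = (-1.0 + math.sqrt(1.0 + 8.0 * k_assoc * r_total)) / (4.0 * k_assoc)
    t = k_assoc * d * d
    return d / r_total, 2.0 * t / r_total
```

Raising `r_total` shifts the balance toward tetramers, which mirrors the abstract's point that cell surface expression is a quantitative parameter of the models.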
Abstract:
1 Mechanisms of inverse agonist action at the D-2(short) dopamine receptor have been examined. 2 Discrimination of G-protein-coupled and -uncoupled forms of the receptor by inverse agonists was examined in competition ligand-binding studies versus the agonist [H-3]NPA at a concentration labelling both G-protein-coupled and -uncoupled receptors. 3 Competition of inverse agonists versus [H-3]NPA gave data that were fitted best by a two-binding-site model in the absence of GTP but by a one-binding-site model in the presence of GTP. K-i values were derived from the competition data for binding of the inverse agonists to G-protein-uncoupled and -coupled receptors. K-coupled and K-uncoupled were statistically different for the set of compounds tested (ANOVA), but the individual values were different in a post hoc test only for (+)-butaclamol. 4 These observations were supported by simulations of these competition experiments according to the extended ternary complex model. 5 Inverse agonist efficacy of the ligands was assessed from their ability to reduce agonist-independent [S-35]GTPγS binding to varying degrees in concentration-response curves. Inverse agonism by (+)-butaclamol and spiperone occurred at higher potency when GDP was added to assays, whereas the potency of (-)-sulpiride was unaffected. 6 These data show that some inverse agonists ((+)-butaclamol, spiperone) achieve inverse agonism by stabilising the uncoupled form of the receptor at the expense of the coupled form. For the other compounds tested, we were unable to define the mechanism.
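The two-binding-site competition model in point 3 can be sketched with a simple one-to-one competition curve for a mixed receptor population. Parameter names and values are illustrative assumptions, not taken from the paper:

```python
def two_site_competition(conc, frac_coupled, ki_coupled, ki_uncoupled):
    """Fraction of specific radioligand binding remaining at competitor
    concentration `conc` (M), for a mixture of G-protein-coupled
    (high-affinity) and uncoupled (low-affinity) sites.

    A minimal sketch assuming simple competitive binding at each site;
    it ignores radioligand concentration corrections (Cheng-Prusoff).
    """
    coupled = frac_coupled / (1.0 + conc / ki_coupled)
    uncoupled = (1.0 - frac_coupled) / (1.0 + conc / ki_uncoupled)
    return coupled + uncoupled
```

Setting `frac_coupled` to zero collapses the curve to a one-site model, mimicking the effect of adding GTP, which uncouples receptors from G protein.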
Abstract:
Unlike nuclear localization signals, there is no obvious consensus sequence for the targeting of proteins to the nucleolus. The nucleolus is a dynamic subnuclear structure that is crucial to the normal operation of the eukaryotic cell. Studying nucleolar trafficking signals is problematic, as many nucleolar retention signals (NoRSs) are part of classical nuclear localization signals (NLSs). In addition, there is no known consensus signal with which to inform a study. The avian infectious bronchitis virus (IBV) coronavirus nucleocapsid (N) protein localizes to the cytoplasm and the nucleolus. Mutagenesis was used to delineate a novel eight-amino-acid motif that was necessary and sufficient for nucleolar retention of N protein and for colocalization with nucleolin and fibrillarin. Additionally, a classical nuclear export signal (NES) functioned to direct N protein to the cytoplasm. Comparison of the coronavirus NoRSs with known cellular and other viral NoRSs revealed that these motifs have conserved arginine residues. Molecular modelling, using the solution structure of the severe acute respiratory syndrome (SARS) coronavirus N protein, revealed that this motif is available for interaction with cellular factors which may mediate nucleolar localization. We hypothesise that the N protein uses these signals to traffic to and from the nucleolus and the cytoplasm.
Abstract:
A recent report in Consciousness and Cognition provided evidence from a study of the rubber hand illusion (RHI) that supports the multisensory principle of inverse effectiveness (PoIE). I describe two methods of assessing the principle of inverse effectiveness ('a priori' and 'post-hoc'), and discuss how the post-hoc method is affected by the statistical artefact of 'regression towards the mean'. I identify several cases where this artefact may have affected particular conclusions about the PoIE, and relate these to the historical origins of 'regression towards the mean'. Although the conclusions of the recent report may not have been grossly affected, some of the inferential statistics were almost certainly biased by the methods used. I conclude that, unless such artefacts are fully dealt with in the future, and unless the statistical methods for assessing the PoIE evolve, strong evidence in support of the PoIE will remain lacking.
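The regression-towards-the-mean artefact behind the post-hoc method can be demonstrated with a short simulation: splitting subjects on a noisy baseline measure makes the 'low' group appear to improve and the 'high' group appear to worsen on retest, even though the underlying trait never changes. This is a generic illustration, not the report's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_trait = rng.normal(0.0, 1.0, n)         # stable underlying trait
pre = true_trait + rng.normal(0.0, 1.0, n)   # noisy baseline measurement
post = true_trait + rng.normal(0.0, 1.0, n)  # independent noisy retest

low = pre < np.median(pre)                   # post-hoc split on the noisy baseline
gain_low = float(np.mean(post[low] - pre[low]))
gain_high = float(np.mean(post[~low] - pre[~low]))
# gain_low comes out clearly positive and gain_high clearly negative,
# purely by artefact: nothing changed between the two measurements.
```

Any analysis that correlates baseline scores with change scores inherits this bias unless it is explicitly modelled.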
Abstract:
Inverse problems for dynamical-system models of cognitive processes comprise the determination of synaptic weight matrices for neural networks or kernel functions for neural/dynamic field models. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
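Tikhonov regularization of an ill-posed linear inverse problem can be sketched as follows. The Hilbert matrix here is a generic ill-conditioned stand-in for a discretised kernel-construction problem, not the paper's actual Amari-equation setup, and all names are illustrative:

```python
import numpy as np

def tikhonov_solve(K, y, lam):
    """Regularised least squares: argmin_w ||K w - y||^2 + lam ||w||^2,
    solved via the normal equations (K^T K + lam I) w = K^T y."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Severely ill-conditioned toy kernel (an 8x8 Hilbert matrix).
n = 8
K = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
y = K @ np.ones(n)                # data generated by a known "true" w
w = tikhonov_solve(K, y, 1e-8)    # small lam stabilises the inversion
```

Without the `lam * np.eye(n)` term the normal equations amplify rounding noise by the squared condition number; the penalty trades a small residual increase for a bounded, stable solution.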
Abstract:
Finding the smallest eigenvalue of a given square matrix A of order n is a computationally very intensive problem. The most popular method for this problem is the Inverse Power Method, which uses an LU-decomposition and forward and backward solves of the factored system at every iteration step. An alternative is the Resolvent Monte Carlo method, which represents the resolvent matrix [I - qA]^(-m) as a series and then performs Monte Carlo iterations (random walks) on the elements of the matrix. This leads to great savings in computation, but the method has many restrictions and very slow convergence. In this paper we propose a method that includes a fast Monte Carlo procedure for finding the inverse matrix, a refinement procedure to improve the approximation of the inverse if necessary, and Monte Carlo power iterations to compute the smallest eigenvalue. We provide not only theoretical estimates of accuracy and convergence but also results from numerical tests performed on a number of test matrices.
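For reference, the baseline Inverse Power Method mentioned above can be sketched as follows. This is a plain deterministic version; the paper's Monte Carlo inverse and refinement steps are not reproduced here:

```python
import numpy as np

def smallest_eigenvalue(A, tol=1e-12, max_iter=1000):
    """Inverse power iteration: repeatedly solve A x_{k+1} = x_k and
    normalise. The Rayleigh quotient then converges to the eigenvalue of
    A closest to zero (for symmetric A, the smallest in magnitude).

    A serious implementation would LU-factorise A once and reuse the
    factors for the forward/backward solves at every step; this sketch
    calls a dense solver each iteration for clarity.
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = 0.0
    for _ in range(max_iter):
        x = np.linalg.solve(A, x)
        x /= np.linalg.norm(x)
        lam_new = x @ A @ x  # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam
```

The per-iteration cost is dominated by the solve, which is exactly the expense the resolvent Monte Carlo approach tries to avoid.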
Abstract:
Design for low power in FPGAs is rather limited, since the technology factors affecting power are either fixed or constrained within an FPGA family. This paper investigates opportunities for power savings in a pipelined 2D IDCT design at the architecture and logic levels. We report power savings of over 25%, achieved in FPGA circuits through a clock-gating implementation of optimizations made at the algorithmic level.
Abstract:
An enterprise is viewed as a complex system which can be engineered to accomplish organisational objectives. Systems analysis and modelling enable the planning and development of the enterprise and its IT systems. Many IT systems design methods focus on functional and non-functional requirements of the IT systems; most are capable of addressing one but leave out the other. Analysing and modelling both business and IT systems may often have to call on techniques from various suites of methods, which may rest on different philosophical and methodological underpinnings, so coherence and consistency between the analyses are hard to ensure. This paper introduces the Problem Articulation Method (PAM), which facilitates the design of an enterprise system infrastructure on which an IT system is built. Outcomes of this analysis represent requirements which can be further used for planning and designing a technical system. As a case study, a finance system for e-procurement, Agresso, is used to illustrate the applicability of PAM in modelling complex systems.
Abstract:
Supplier selection has a great impact on supply chain management. The quality of supplier selection also affects the profitability of organisations which work in the supply chain. As suppliers can provide a variety of services, and customers demand ever higher quality of service provision, organisations face the challenge of choosing the right supplier for the right needs. Existing methods for supplier selection, such as data envelopment analysis (DEA) and the analytic hierarchy process (AHP), can automatically rank competing suppliers and decide the winning supplier(s). However, these methods are not capable of determining the right selection criteria, which should be derived from the business strategy. The ontology model described in this paper integrates the strengths of DEA and AHP with new mechanisms which ensure that the right supplier is selected by the right criteria for the right customer needs.
Abstract:
To enhance the throughput of ad hoc networks, dual-hop relay-enabled transmission schemes have recently been proposed. Since throughput in ad hoc networks is normally tied to energy consumption, it is important to examine the impact of relay-enabled transmission on energy consumption. In this paper, we present an analytical energy consumption model for dual-hop relay-enabled medium access control (MAC) protocols. Based on the recently reported relay-enabled distributed coordination function (rDCF), we show the efficacy of the proposed analytical model. This generalized model can be used to predict energy consumption in saturated relay-enabled ad hoc networks via energy decomposition, which is helpful in designing MAC protocols for cooperative communications. We show that using a relay results not only in better throughput but also in better energy efficiency.