972 results for Fisher information matrix
Abstract:
Finding motifs that elucidate the rules governing peptide binding to medically important receptors is important for screening targets for drugs and vaccines. This paper focuses on the elucidation of peptide binding to the I-A(g7) molecule of the non-obese diabetic (NOD) mouse, an animal model for insulin-dependent diabetes mellitus (IDDM). A number of motifs describing peptide binding to I-A(g7) have been proposed. These motifs result from independent experimental studies carried out on small data sets. Testing with multiple data sets showed that each motif at best describes only a subset of the solution space, and the motifs therefore lack generalization ability. This study seeks a motif with higher generalization ability, so that it can predict binders in all I-A(g7) data sets with high accuracy. A binding score matrix representing the peptide binding motif to I-A(g7) was derived using a genetic algorithm (GA). The evolved score matrix significantly outperformed previously reported
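The evolved score matrix itself is not reproduced in this abstract. As a rough, hypothetical sketch of how a position-specific binding score matrix of this kind is typically applied, the Python fragment below scores sliding 9-mer cores of a candidate peptide against a placeholder matrix; the matrix values, threshold-free ranking, and example peptide are all invented for illustration and are not the published I-A(g7) motif.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
CORE_LENGTH = 9  # MHC class II binding cores are commonly modelled as 9-mers

# Placeholder score matrix: rows = 20 amino acids, columns = core positions.
# A GA-evolved matrix would replace these random values.
rng = np.random.default_rng(0)
score_matrix = rng.normal(size=(len(AMINO_ACIDS), CORE_LENGTH))

def score_core(core: str) -> float:
    """Sum the position-specific scores of a 9-residue binding core."""
    return sum(score_matrix[AMINO_ACIDS.index(aa), pos] for pos, aa in enumerate(core))

def best_core(peptide: str):
    """Slide a 9-residue window along the peptide and return the top-scoring core."""
    cores = [peptide[i:i + CORE_LENGTH] for i in range(len(peptide) - CORE_LENGTH + 1)]
    scored = [(c, score_core(c)) for c in cores]
    return max(scored, key=lambda cs: cs[1])

print(best_core("GDSIEAQGGKLVFSA"))  # hypothetical peptide
```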
Abstract:
Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
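The serial diagonal matrices and probabilistic switch formalism of the paper are not given in the abstract; the sketch below only illustrates the general idea of modelling multistep information flow with a transmission matrix. The small network, its probabilities, and the "information-rich node" proxy are assumptions made for the example, not the paper's definitions.

```python
import numpy as np

# Probability that a message held by person i is passed to person j in one step.
# Rows are senders, columns are receivers; values are illustrative only.
T = np.array([
    [0.0, 0.6, 0.3, 0.0],
    [0.2, 0.0, 0.5, 0.2],
    [0.1, 0.4, 0.0, 0.4],
    [0.0, 0.3, 0.5, 0.0],
])

# Information initially held only by person 0.
p0 = np.array([1.0, 0.0, 0.0, 0.0])

# Expected number of copies of the message reaching each person after exactly
# k relays, under a linear (branching) transmission model.
for k in range(1, 4):
    exposure = p0 @ np.linalg.matrix_power(T, k)
    print(f"after {k} step(s):", np.round(exposure, 3))

# One crude proxy for an "information-rich" node: total expected inflow over
# the first few steps (this is not the paper's formal definition).
inflow = sum(p0 @ np.linalg.matrix_power(T, k) for k in range(1, 4))
print("candidate information-rich node:", int(np.argmax(inflow)))
```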
Abstract:
Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established, using a method for the determination of the dimensionality of the effect space from a multivariate general linear model introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well as or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
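Amemiya's (1985) procedure is not spelled out in the abstract. As a simplified stand-in, the sketch below only illustrates the underlying question, namely how many effective dimensions a genetic covariance matrix G supports, by inspecting the eigenvalue spectrum of a fabricated G built from two true genetic factors; it is not the method compared in the study, and the matrix and variance threshold are assumptions.

```python
import numpy as np

# A fabricated 8-trait genetic covariance matrix built from two true
# underlying dimensions plus a small amount of noise.
rng = np.random.default_rng(1)
loadings = rng.normal(size=(8, 2))          # 8 traits, 2 genetic factors
G = loadings @ loadings.T + 0.05 * np.eye(8)

eigvals = np.linalg.eigvalsh(G)[::-1]       # sorted, largest first
explained = eigvals / eigvals.sum()

print("proportion of genetic variance per eigenvector:")
print(np.round(explained, 3))

# Crude effective dimensionality: eigenvectors needed to reach 95% of the trace.
k = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
print("effective dimensions (95% of variance):", k)
```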
Abstract:
New tools derived from advances in molecular biology have not been widely adopted in plant breeding because of the inability to connect information at gene level to the phenotype in a manner that is useful for selection. We explore whether a crop growth and development modelling framework can link phenotype complexity to underlying genetic systems in a way that strengthens molecular breeding strategies. We use gene-to-phenotype simulation studies on sorghum to consider the value to marker-assisted selection of intrinsically stable QTLs that might be generated by physiological dissection of complex traits. The consequences for grain yield of genetic variation in four key adaptive traits – phenology, osmotic adjustment, transpiration efficiency, and staygreen – were simulated for a diverse set of environments by placing the known extent of genetic variation in the context of the physiological determinants framework of a crop growth and development model. It was assumed that the three to five genes associated with each trait had two alleles per locus acting in an additive manner. The effects on average simulated yield, generated by differing combinations of positive alleles for the traits incorporated, varied with environment type. The full matrix of simulated phenotypes, which consisted of 547 location-season combinations and 4235 genotypic expression states, was analysed for genetic and environmental effects. The analysis was conducted in stages with gradually increased understanding of gene-to-phenotype relationships, which would arise from physiological dissection and modelling. It was found that environmental characterisation and physiological knowledge helped to explain and unravel gene and environment context dependencies. We simulated a marker-assisted selection (MAS) breeding strategy based on the analyses of gene effects. When marker scores were allocated based on the contribution of gene effects to yield in a single environment, there was a wide divergence in the rate of yield gain over all environments with breeding cycle, depending on the environment chosen for the QTL analysis. It was suggested that knowledge resulting from trait physiology and modelling would overcome this dependency by identifying stable QTLs. The improved predictive power would increase the utility of the QTLs in MAS. Developing and implementing this gene-to-phenotype capability in crop improvement requires enhanced attention to phenotyping, ecophysiological modelling, and validation studies to test the stability of candidate QTLs.
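The crop growth and development model used in the study is not reproduced here. The following minimal sketch, assuming fixed additive allele effects for two contrived environment types, only illustrates the stated assumption of two alleles per locus acting additively and the enumeration of genotypic combinations; all loci, effects, base yields, and environment labels are hypothetical.

```python
from itertools import product
import numpy as np

# Hypothetical additive effects (t/ha) of the "+" allele at each of four loci,
# for two contrived environment types; the study generated such effects with a
# crop growth model rather than fixed coefficients.
effects = {
    "wet": np.array([0.10, 0.05, 0.08, 0.02]),
    "dry": np.array([0.02, 0.12, -0.03, 0.09]),
}
base_yield = {"wet": 6.0, "dry": 3.5}

genotypes = list(product([0, 1], repeat=4))   # 0/1 = absence/presence of "+" allele

for env, eff in effects.items():
    yields = {g: base_yield[env] + np.dot(g, eff) for g in genotypes}
    best = max(yields, key=yields.get)
    print(env, "best genotype:", best, "yield:", round(float(yields[best]), 2))
```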
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which combines the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into/out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
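The multi-qubit simulations described above are far richer than can be shown here; as a generic illustration of what "numerically integrating the master equation for the system density matrix" involves, the sketch below integrates a single driven qubit with pure dephasing using a plain Runge-Kutta step. The Hamiltonian, dephasing rate, and time grid are arbitrary placeholders, not the parameters used in the work.

```python
import numpy as np

# Minimal Lindblad master-equation integration for a single driven qubit with
# pure dephasing: drho/dt = -i[H, rho] + L rho L† - {L†L, rho}/2, with
# L = sqrt(gamma) * sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2 * np.pi * 1.0      # Rabi frequency (arbitrary units)
gamma = 0.05                 # dephasing rate
H = 0.5 * omega * sx
L = np.sqrt(gamma) * sz

def drho(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0>
dt, steps = 1e-3, 1000
for _ in range(steps):                             # 4th-order Runge-Kutta
    k1 = drho(rho)
    k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2)
    k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("excited-state population after the pulse:", rho[1, 1].real)
```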
Abstract:
We analyse the dynamics of a number of second order on-line learning algorithms training multi-layer neural networks, using the methods of statistical mechanics. We first consider on-line Newton's method, which is known to provide optimal asymptotic performance. We determine the asymptotic generalization error decay for a soft committee machine, which is shown to compare favourably with the result for standard gradient descent. Matrix momentum provides a practical approximation to this method by allowing an efficient inversion of the Hessian. We consider an idealized matrix momentum algorithm which requires access to the Hessian and find close correspondence with the dynamics of on-line Newton's method. In practice, the Hessian will not be known on-line and we therefore consider matrix momentum using a single example approximation to the Hessian. In this case good asymptotic performance may still be achieved, but the algorithm is now sensitive to parameter choice because of noise in the Hessian estimate. On-line Newton's method is not appropriate during the transient learning phase, since a suboptimal unstable fixed point of the gradient descent dynamics becomes stable for this algorithm. A principled alternative is to use Amari's natural gradient learning algorithm and we show how this method provides a significant reduction in learning time when compared to gradient descent, while retaining the asymptotic performance of on-line Newton's method.
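As a toy illustration of the natural gradient idea referred to above, the sketch below compares plain stochastic gradient descent with a natural-gradient update for on-line logistic regression, where the Fisher information matrix is estimated on-line from single examples (echoing the single-example approximation discussed in the abstract). The model, learning rate, and averaging scheme are chosen for the example and are not the soft committee machine analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 5
w_true = rng.normal(size=d)
w_gd, w_ng = np.zeros(d), np.zeros(d)
F = np.eye(d)                 # running Fisher estimate, initialised to identity
eta, rho = 0.1, 0.001         # learning rate and Fisher averaging rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    x = rng.normal(size=d)
    y = float(rng.random() < sigmoid(x @ w_true))      # Bernoulli label

    # Plain SGD on the negative log-likelihood.
    w_gd -= eta * (sigmoid(x @ w_gd) - y) * x

    # Natural gradient: precondition the same gradient with the inverse of the
    # running Fisher information estimate.
    p = sigmoid(x @ w_ng)
    F = (1 - rho) * F + rho * p * (1 - p) * np.outer(x, x)
    w_ng -= eta * np.linalg.solve(F, (p - y) * x)

print("SGD              |w - w*|:", round(float(np.linalg.norm(w_gd - w_true)), 3))
print("natural gradient |w - w*|:", round(float(np.linalg.norm(w_ng - w_true)), 3))
```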
Abstract:
The up-regulation and trafficking of tissue transglutaminase (TG2) by tubular epithelial cells (TEC) has been implicated in the development of kidney scarring. TG2 catalyses the crosslinking of proteins via the formation of highly stable ε(γ-glutamyl)lysine bonds. We have proposed that TG2 may contribute to kidney scarring by accelerating extracellular matrix (ECM) deposition and by stabilising the ECM against proteolytic decay. To investigate this, we have studied ECM metabolism in Opossum kidney (OK) TEC induced to over-express TG2 by stable transfection and in tubular cells isolated from TG2 knockout mice. Increasing the expression of TG2 led to increased extracellular TG2 activity (p < 0.05), elevated ε(γ-glutamyl)lysine crosslinking in the ECM and higher levels of ECM collagen per cell by ³H-proline labelling. Immunofluorescence demonstrated that this was attributable to increased collagen III and IV levels. Higher TG2 levels were associated with an accelerated collagen deposition rate and a reduced ECM breakdown by matrix metalloproteinases (MMPs). In contrast, a lack of TG2 was associated with reduced ε(γ-glutamyl)lysine crosslinking in the ECM, causing reduced ECM collagen levels and lower ECM per cell. We report that TG2 contributes to ECM accumulation primarily by accelerating collagen deposition, but also by altering the susceptibility of the tubular ECM to decay. These findings support a role for TG2 in the expansion of the ECM associated with kidney scarring.
Abstract:
Diabetic nephropathy affects 30-40% of diabetics, leading to end-stage kidney failure through progressive scarring and fibrosis. Previous evidence suggests that tissue transglutaminase (tTg) and its protein cross-link product epsilon(gamma-glutamyl)lysine contribute to the expanding renal tubulointerstitial and glomerular basement membranes in this disease. Using an in vitro cell culture model of renal proximal tubular epithelial cells, we determined the link between elevated glucose levels and changes in expression and activity of tTg and then, by using a highly specific site-directed inhibitor of tTg (1,3-dimethyl-2[(oxopropyl)thio]imidazolium), determined the contribution of tTg to glucose-induced matrix accumulation. Exposure of cells to 36 mM glucose over 96 h caused an mRNA-dependent increase in tTg activity with a 25% increase in extracellular matrix (ECM)-associated tTg and a 150% increase in ECM epsilon(gamma-glutamyl)lysine cross-linking. This was paralleled by an elevation in total deposited ECM resulting from higher levels of deposited collagen and fibronectin. These were associated with raised mRNA for collagens III and IV and fibronectin. The specific site-directed inhibitor of tTg normalized both tTg activity and ECM-associated epsilon(gamma-glutamyl)lysine. Levels of ECM per cell returned to near control levels with non-transcriptional reductions in deposited collagen and fibronectin. No changes in transforming growth factor beta1 (expression or biological activity) occurred that could account for our observations, whereas incubation of tTg with collagen III indicated that cross-linking could directly increase the rate of collagen fibril/gel formation. We conclude that tTg inhibition reduces glucose-induced deposition of ECM proteins independently of changes in ECM and transforming growth factor beta1 synthesis, thus opening up its possible application in the treatment of other fibrotic and scarring diseases where tTg has been implicated.
Abstract:
Background: Currently, no review has been completed regarding the information-gathering process for the provision of medicines for self-medication in community pharmacies in developing countries. Objective: To review the rate of information gathering and the types of information gathered when patients present for self-medication requests. Methods: Six databases were searched for studies that described the rate of information gathering and/or the types of information gathered in the provision of medicines for self-medication in community pharmacies in developing countries. The types of information reported were classified as: signs and symptoms, patient identity, action taken, medications, medical history, and others. Results: Twenty-two studies met the inclusion criteria. Variations in the study populations, types of scenarios, research methods, and data reporting were observed. The reported rate of information gathering varied from 18% to 97%, depending on the research methods used. Information on signs and symptoms and patient identity was more frequently reported to be gathered compared with information on action taken, medications, and medical history. Conclusion: Evidence showed that the information-gathering process for the provision of medicines for self-medication via community pharmacies in developing countries is inconsistent. There is a need to determine the barriers to appropriate information-gathering practice as well as to develop strategies to implement effective information-gathering processes. It is also recommended that international and national pharmacy organizations, including pharmacy academics and pharmacy researchers, develop a consensus on the types of information that should be reported in the original studies. This will facilitate comparison across studies so that areas that need improvement can be identified. © 2013 Elsevier Inc.
Abstract:
The morphology, chemical composition, and mechanical properties in the surface region of α-irradiated polytetrafluoroethylene (PTFE) have been examined and compared to unirradiated specimens. Samples were irradiated with 5.5 MeV ⁴He²⁺ ions from a tandem accelerator to doses between 1 × 10⁶ and 5 × 10¹⁰ Rad. Static time-of-flight secondary ion mass spectrometry (ToF-SIMS), using a 20 keV C₆₀⁺ source, was employed to probe chemical changes as a function of α dose. Chemical images and high resolution spectra were collected and analyzed to reveal the effects of α-particle radiation on the chemical structure. Residual gas analysis (RGA) was utilized to monitor the evolution of volatile species during vacuum irradiation of the samples. Scanning electron microscopy (SEM) was used to observe the morphological variation of samples with increasing α-particle dose, and nanoindentation was used to determine the hardness and elastic modulus as a function of α dose. The data show that PTFE nominally retains its innate chemical structure and morphology at α doses <10⁹ Rad. At α doses ≥10⁹ Rad the polymer matrix experiences increased chemical degradation and morphological roughening, which are accompanied by increased hardness and declining elasticity. At α doses >10¹⁰ Rad the polymer matrix suffers severe chemical degradation and material loss. Chemical degradation is observed in ToF-SIMS by detection of ions that are indicative of fragmentation, unsaturation, and functionalization of molecules in the PTFE matrix. The mass spectra also expose subtle trends of crosslinking within the α-irradiated polymer matrix. ToF-SIMS images support the assertion that chemical degradation is the result of α-particle irradiation and show morphological roughening of the sample with increased α dose. High resolution SEM images more clearly illustrate the morphological roughening and the mass loss that accompany high doses of α particles. RGA confirms that the outcome of chemical degradation in the PTFE matrix with continuing irradiation is the evolution of volatile species, resulting in morphological roughening and mass loss. Finally, we reveal and discuss relationships between chemical structure and mechanical properties such as hardness and elastic modulus.
Abstract:
Descriptions of vegetation communities are often based on vague semantic terms describing species presence and dominance. For this reason, some researchers advocate the use of fuzzy sets in the statistical classification of plant species data into communities. In this study, spatially referenced vegetation abundance values collected from Greek phrygana were analysed by ordination (DECORANA), and classified on the resulting axes using fuzzy c-means to yield a point data-set representing local memberships in characteristic plant communities. The fuzzy clusters matched vegetation communities noted in the field, which tended to grade into one another, rather than occupying discrete patches. The fuzzy set representation of the community exploited the strengths of detrended correspondence analysis while retaining richer information than a TWINSPAN classification of the same data. Thus, in the absence of phytosociological benchmarks, meaningful and manageable habitat information could be derived from complex, multivariate species data. We also analysed the influence of the reliability of different surveyors' field observations by multiple sampling at a selected sample location. We show that the impact of surveyor error was more severe in the Boolean than the fuzzy classification. © 2007 Springer.
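A minimal sketch of the classification step, assuming two synthetic DECORANA-style ordination axes rather than the Greek phrygana data: a plain fuzzy c-means implementation that returns cluster centres and graded memberships. The cluster count, fuzzifier, and generated points are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "ordination scores" for quadrats on two DECORANA axes: two loose
# clouds that grade into each other, standing in for the field data.
axis_scores = np.vstack([
    rng.normal([0.5, 0.5], 0.3, size=(30, 2)),
    rng.normal([2.0, 1.5], 0.3, size=(30, 2)),
])

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)            # random initial memberships
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

centres, U = fuzzy_c_means(axis_scores)
print("cluster centres:\n", np.round(centres, 2))
print("memberships of first 3 quadrats:\n", np.round(U[:3], 2))
```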
Abstract:
The paper describes possibilities for investigating 43 varieties of file formats (objects), joined into 10 groups; 89 information attacks, joined into 33 groups; and 73 compression methods, joined into 10 groups. Experimental, expert, possible, and real relations between the groups of attacks, methods, and objects are determined by means of matrix transformations, and the respective maximum and potential sets are defined. Finally, assessments and conclusions for future investigation are proposed.
Abstract:
This work presents a graph-theoretical method for determining the degree of fault tolerance of computer network interconnections and nodes. Experimental results obtained from simulations of this method in a distributed computing network environment are also presented.
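The paper's specific graph-theoretical method is not described in the abstract; as a generic illustration of fault-tolerance measures for interconnection graphs, the sketch below computes node and edge connectivity (the minimum number of node or link failures that can disconnect the network) for a hypothetical topology, using the networkx library.

```python
import networkx as nx

# A small, hypothetical interconnection topology: a 3x3 grid of nodes with one
# extra redundant diagonal link.
G = nx.grid_2d_graph(3, 3)
G.add_edge((0, 0), (1, 1))

# Minimum number of node / link failures that can disconnect the network,
# one common generic measure of interconnection fault tolerance.
print("node connectivity:", nx.node_connectivity(G))
print("edge connectivity:", nx.edge_connectivity(G))

# Which single-node failures are most damaging? Articulation points, if any.
print("articulation points:", list(nx.articulation_points(G)))
```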