959 results for In-memory databases
Abstract:
The field of molecule-based magnets is a relatively new branch of chemistry, which involves the design and study of molecular compounds that exhibit a spontaneous magnetic ordering below a critical temperature, Tc. One major goal involves the design of materials with tuneable Tc's for specific applications in memory storage devices. Molecule-based magnets with high magnetic ordering temperatures have recently been obtained from bimetallic and mixed-valence transition metal μ-cyanide complexes of the Prussian blue family. Since the μ-cyanide linkages permit an interaction between paramagnetic metal ions, cyanometalate building blocks have found useful applications in the field of molecule-based magnets. Our work involves the use of octacyanometalate building blocks for the self-assembly of two new classes of magnetic materials, namely high-spin molecular clusters which exhibit both ferromagnetic intra- and intercluster coupling, and specific extended network topologies which show long-range ferromagnetic ordering.
Advantages and controversies of depot antipsychotics in the treatment of patients with schizophrenia
Abstract:
BACKGROUND The objective of this article is to give an overview of the advantages and disadvantages of the use of depot antipsychotics in the treatment of schizophrenia. The focus is on efficacy, tolerability, relapse prevention, patient compliance and satisfaction compared to oral administration forms. MATERIAL AND METHODS A literature search was conducted in medical databases. The results of meta-analyses, randomized controlled trials and systematic reviews from the years 1999-2014 were included. RESULTS AND DISCUSSION Depot antipsychotics ensure constant blood levels and continuous medication delivery. Their efficacy and tolerability are comparable to those of oral administration forms. Owing to improved medication compliance, a reduction of relapse and hospitalization rates can be achieved. This is a key focus for improving outcomes and reducing costs in the treatment of schizophrenia.
Abstract:
A large body of research has demonstrated that participants preferentially look back to the encoding location when retrieving visual information from memory. However, the role of this 'looking back to nothing' is still debated. The goal of the present study was to extend this line of research by examining whether an important area in the cortical representation of the oculomotor system, the frontal eye field (FEF), is involved in memory retrieval. To interfere with the activity of the FEF, we used inhibitory continuous theta burst stimulation (cTBS). Participants first encoded a complex scene; then, just after cTBS over the right FEF or sham stimulation, they performed a short-term (immediately after encoding) or long-term (after 24 h) recall task. cTBS did not affect overall performance, but stimulation and statement type (object vs. location) interacted: cTBS over the right FEF tended to impair object recall sensitivity, whereas it had no effect on location recall sensitivity. These findings suggest that the FEF is involved in retrieving object information from scene memory, supporting the hypothesis that the oculomotor system contributes to memory recall.
Abstract:
It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
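To make the divide-and-conquer case concrete, here is a minimal sketch assuming a mergesort-like predicate whose type and mode information guarantees that head unification succeeds; it illustrates the idea of a lower-bound cost recurrence and is not the paper's actual analysis.

```python
# Hypothetical illustration: a divide-and-conquer predicate induces the
# cost recurrence C(n) >= C(floor(n/2)) + C(ceil(n/2)) + n
# (two recursive calls plus a linear combine step). Solving it numerically
# gives a nontrivial lower bound instead of the trivial bound of 0.

def lower_bound_cost(n: int) -> int:
    """Lower bound on the cost of a divide-and-conquer predicate that is
    guaranteed (by type/mode information) to succeed on input of size n."""
    if n <= 1:
        return 1  # base clause: at least one resolution step
    half = n // 2
    return lower_bound_cost(half) + lower_bound_cost(n - half) + n

if __name__ == "__main__":
    for size in (8, 64, 512):
        print(size, lower_bound_cost(size))  # grows on the order of n*log(n)
```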
Abstract:
Owing to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speeds are increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to data processing modeling. As a starting point, we have used a data model adapted to the semantics of multidimensional databases, or data warehouses (DW). We have also taken an algorithm that provides all the possible ways to automatically cross multidimensional model data. Using these, we propose use case diagrams and descriptions that can be considered patterns representing DSS functionality with regard to the processing of the DW data on which the DSS are based. We highlight the reusability and automation benefits that can thus be achieved, and we believe this study can serve as a guide in the development of DSS.
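One reading of "all the possible ways to automatically cross multidimensional model data" is the enumeration of every combination of DW dimensions, each combination defining one candidate aggregation (cuboid) that a DSS use case may query. The following sketch illustrates that reading; the dimension names are invented for the example and do not come from the paper.

```python
# Hedged sketch: enumerate every subset of the data warehouse dimensions.
# Each subset is one way of crossing the multidimensional data.
from itertools import combinations

def candidate_crossings(dimensions):
    """Yield every combination of dimensions, i.e. each cuboid of the cube."""
    for r in range(len(dimensions) + 1):
        for combo in combinations(dimensions, r):
            yield combo

if __name__ == "__main__":
    for cuboid in candidate_crossings(["time", "product", "region"]):
        print(cuboid or ("<grand total>",))  # empty subset = grand total
```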
Abstract:
Machine learning and scientometrics are the scientific disciplines covered in this dissertation. Machine learning deals with the construction and study of algorithms that can learn from data, whereas scientometrics is mainly concerned with the analysis of science from a quantitative perspective. Nowadays, advances in machine learning provide the mathematical and statistical tools for properly working with the vast amount of scientometric data stored in bibliographic databases. In this context, the use of novel machine learning methods in scientometrics applications is the focus of this dissertation, which proposes new machine learning contributions that shed light on the scientometrics area. These contributions are divided into three parts. (i) Several supervised cost-(in)sensitive models are learned to predict the scientific success of articles and researchers. Cost-sensitive models are not interested in maximizing classification accuracy, but in minimizing the expected total cost derived from mistakes in the classification process. In this context, publishers of scientific journals could have a tool capable of predicting the future citation count of an article before it is published, whereas promotion committees could predict the annual increase of the h-index of researchers within the first few years. These predictive models would pave the way for new assessment systems. (ii) Several probabilistic graphical models are learned to exploit and discover new relationships among the vast number of existing bibliometric indices. In this context, the scientific community could measure how some indices influence others in probabilistic terms and perform evidence propagation and abductive inference to answer bibliometric questions. The scientific community could also uncover which bibliometric indices have the highest predictive power. This is a multi-output regression problem in which the role of each variable, predictor or response, is unknown beforehand. The resulting indices could be very useful for prediction purposes, that is, when their values are known, knowledge of any other index value provides no information for predicting the remaining bibliometric indices. (iii) A scientometric study of Spanish computer science research is performed under the publish-or-perish culture. This study is based on a cluster analysis methodology that characterizes research activity in terms of productivity, visibility, quality, prestige and international collaboration. The study also analyzes the effects of collaboration on productivity and visibility under different circumstances.
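To make the cost-sensitive idea concrete, the following sketch picks the class that minimizes expected cost rather than the most probable class; the two-class setting and the misclassification cost matrix are invented for illustration, not taken from the dissertation.

```python
# COST[i][j]: cost of predicting class j when the true class is i.
# Hypothetical values: overlooking a future high-impact article (class 1)
# is taken to be five times worse than a false alarm.
COST = [
    [0.0, 1.0],  # true class 0: low-impact article
    [5.0, 0.0],  # true class 1: high-impact article
]

def cost_sensitive_decision(posterior):
    """posterior[i] = P(true class = i | features); return the class j
    minimizing the expected cost sum_i posterior[i] * COST[i][j]."""
    n_classes = len(COST[0])
    expected = [sum(posterior[i] * COST[i][j] for i in range(len(COST)))
                for j in range(n_classes)]
    return min(range(n_classes), key=expected.__getitem__)

if __name__ == "__main__":
    # Accuracy-maximizing would pick class 0 here (P = 0.7), but the
    # expected costs are 0.3*5.0 = 1.5 vs. 0.7*1.0 = 0.7, so class 1 wins.
    print(cost_sensitive_decision([0.7, 0.3]))  # -> 1
```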
Abstract:
One of the most pressing needs in cloud computing and big data is that of having scalable and highly available databases. One way to address this need is to leverage the scalable replication techniques developed in the last decade, which increase both the availability and the scalability of databases. Many replication protocols have been proposed during this period; the main research challenge was how to scale under the eager replication model, the one that provides consistency across replicas. This thesis provides an in-depth study of three eager database replication systems based on relational systems (Middle-R, C-JDBC and MySQL Cluster) and three systems based on In-Memory Data Grids (JBoss Data Grid, Oracle Coherence and Terracotta Ehcache). The thesis explores these systems in terms of their architecture, replication protocols, fault tolerance and various other functionalities. It also provides an experimental analysis of these systems using state-of-the-art benchmarks: TPC-C and TPC-W for the relational systems, and the Yahoo! Cloud Serving Benchmark for the In-Memory Data Grids. The thesis also discusses three graph databases (Neo4j, Titan and Sparksee) in terms of their architecture and transactional capabilities, and highlights the weaker transactional consistency these systems provide. It discusses an implementation of snapshot isolation in the Neo4j graph database to provide stronger isolation guarantees for transactions.
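As an illustration of the stronger guarantee the thesis aims for, here is a simplified first-committer-wins validation rule for snapshot isolation; it is a generic sketch and makes no claim about Neo4j's actual internals.

```python
# Simplified snapshot-isolation validator: a transaction that read from
# snapshot S may commit only if no transaction that committed after S
# wrote to a key the committing transaction also writes.

class SIValidator:
    def __init__(self):
        self.commit_log = []  # (commit_timestamp, frozenset of written keys)
        self.clock = 0

    def begin(self):
        return self.clock  # snapshot timestamp handed to the transaction

    def try_commit(self, snapshot_ts, write_set):
        """Commit if no concurrent committed writer overlaps write_set."""
        for ts, keys in self.commit_log:
            if ts > snapshot_ts and keys & set(write_set):
                return False  # write-write conflict: abort
        self.clock += 1
        self.commit_log.append((self.clock, frozenset(write_set)))
        return True

if __name__ == "__main__":
    v = SIValidator()
    s1, s2 = v.begin(), v.begin()  # two concurrent transactions
    print(v.try_commit(s1, {"node:42"}))  # True: first committer wins
    print(v.try_commit(s2, {"node:42"}))  # False: conflicting write, abort
```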
Abstract:
A rapidly growing area of genome research is the generation of expressed sequence tags (ESTs) in which large numbers of randomly selected cDNA clones are partially sequenced. The collection of ESTs reflects the level and complexity of gene expression in the sampled tissue. To date, the majority of plant ESTs are from nonwoody plants such as Arabidopsis, Brassica, maize, and rice. Here, we present a large-scale production of ESTs from the wood-forming tissues of two poplars, Populus tremula L. × tremuloides Michx. and Populus trichocarpa ‘Trichobel.’ The 5,692 ESTs analyzed represented a total of 3,719 unique transcripts for the two cDNA libraries. Putative functions could be assigned to 2,245 of these transcripts that corresponded to 820 protein functions. Of specific interest to forest biotechnology are the 4% of ESTs involved in various processes of cell wall formation, such as lignin and cellulose synthesis, 5% similar to developmental regulators and members of known signal transduction pathways, and 2% involved in hormone biosynthesis. An additional 12% of the ESTs showed no significant similarity to any other DNA or protein sequences in existing databases. The absence of these sequences from public databases may indicate a specific role for these proteins in wood formation. The cDNA libraries and the accompanying database are valuable resources for forest research directed toward understanding the genetic control of wood formation and future endeavors to modify wood and fiber properties for industrial use.
Abstract:
In an attempt to improve behavioral memory, we devised a strategy to amplify the signal-to-noise ratio of the cAMP pathway, which plays a central role in hippocampal synaptic plasticity and behavioral memory. Multiple high-frequency trains of electrical stimulation induce long-lasting long-term potentiation, a form of synaptic strengthening in hippocampus that is greater in both magnitude and persistence than the short-lasting long-term potentiation generated by a single tetanic train. Studies using pharmacological inhibitors and genetic manipulations have shown that this difference in response depends on the activity of cAMP-dependent protein kinase A. Genetic studies have also indicated that protein kinase A and one of its target transcription factors, cAMP response element binding protein, are important in memory in vivo. These findings suggested that amplification of signals through the cAMP pathway might lower the threshold for generating long-lasting long-term potentiation and increase behavioral memory. We therefore examined the biochemical, physiological, and behavioral effects in mice of partial inhibition of a hippocampal cAMP phosphodiesterase. Concentrations of a type IV-specific phosphodiesterase inhibitor, rolipram, which had no significant effect on basal cAMP concentration, increased the cAMP response of hippocampal slices to stimulation with forskolin and induced persistent long-term potentiation in CA1 after a single tetanic train. In both young and aged mice, rolipram treatment before training increased long- but not short-term retention in freezing to context, a hippocampus-dependent memory task.
Abstract:
Large quantities of DNA sequence information about plant genes are rapidly accumulating in public databases, but to progress from DNA sequence to biological function a mutant allele for each of the genes ideally should be available. Here we describe a gene trap construct that allowed us to disrupt transcribed genes with a high efficiency in Arabidopsis thaliana. In the T-DNA vector used, the expression of a bacterial reporter gene coding for neomycin phosphotransferase II (nptII) depends on the in vivo generation of a translation fusion upon T-DNA integration into the Arabidopsis genome. Analysis of 20 selected transgenic lines showed that 12 lines are T-DNA insertion mutants. The disrupted genes analyzed encoded ribosomal proteins (three lines), aspartate tRNA synthase, DNA ligase, a basic-domain leucine zipper DNA binding protein, an ATP-binding cassette transporter, and five proteins of unknown function. Four tagged genes were new for Arabidopsis. The results presented here suggest that the efficiency of gene trapping using nptII as a reporter gene can be as high as 80%, opening novel perspectives for systematic gene tagging in A. thaliana.
Abstract:
C2-α-Mannosyltryptophan was discovered in human RNase 2, an enzyme that occurs in eosinophils and is involved in host defense. It represents a novel way of attaching carbohydrate to a protein, in addition to the well-known N- and O-glycosylations. The reaction is specific: in RNase 2, Trp-7, but never Trp-10, is modified. In this article, we address which structural features provide the specificity of the reaction. Expression of chimeras of RNase 2 and nonglycosylated RNase 4 and of deletion mutants in HEK293 cells identified residues 1–13 as sufficient for C-mannosylation. Site-directed mutagenesis revealed the sequence Trp-x-x-Trp, in which the first Trp becomes mannosylated, as the specificity determinant. The Trp residue at position +3 can be replaced by Phe, which reduces the efficiency of the reaction threefold. Interpretation of the data in the context of the three-dimensional structure of RNase 2 strongly suggests that the primary, rather than the tertiary, structure forms the determinant. The sequence motif occurs in 336 mammalian proteins currently present in protein databases. Two of these proteins were analyzed protein-chemically, which revealed partial C-glycosylation of recombinant human interleukin 12. The frequent occurrence of the protein recognition motif suggests that C-glycosides could be part of the structure of more proteins than assumed so far.
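The motif search itself is straightforward to reproduce; the sketch below scans a protein sequence for W-x-x-W (the first Trp being the candidate mannosylation site). The example peptide is chosen to reproduce the Trp-7/Trp-10 spacing described above and is not claimed to be the exact RNase 2 sequence.

```python
# Scan a one-letter-code protein sequence for the Trp-x-x-Trp motif.
import re

MOTIF = re.compile(r"(?=(W..W))")  # lookahead so overlapping hits are found

def find_wxxw(sequence: str):
    """Return 1-based positions of the first Trp of each W-x-x-W match."""
    return [m.start() + 1 for m in MOTIF.finditer(sequence)]

if __name__ == "__main__":
    print(find_wxxw("KPPQFTWAQWFETQ"))  # -> [7], i.e. Trp-7 as described
```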
Abstract:
In humans, declarative or explicit memory is supported by the hippocampus and related structures of the medial temporal lobe working in concert with the cerebral cortex. This paper reviews our progress in developing an animal model for studies of cortical–hippocampal interactions in memory processing. Our findings support the view that the cortex maintains various forms of memory representation and that hippocampal structures extend the persistence and mediate the organization of these codings. Specifically, the parahippocampal region, through direct and reciprocal interconnections with the cortex, is sufficient to support the convergence and extended persistence of cortical codings. The hippocampus itself is critical to the organization of cortical representations in terms of relationships among items in memory and to the flexible memory expression that is the hallmark of declarative memory.
Abstract:
Age-associated memory impairment occurs frequently in primates. Based on the established importance of both the perforant path and N-methyl-D-aspartate (NMDA) receptors in memory formation, we investigated the glutamate receptor distribution and immunofluorescence intensity within the dentate gyrus of juvenile, adult, and aged macaque monkeys with the combined use of subunit-specific antibodies and quantitative confocal laser scanning microscopy. Here we demonstrate that aged monkeys, compared to adult monkeys, exhibit a 30.6% decrease in the ratio of NMDA receptor subunit 1 (NMDAR1) immunofluorescence intensity within the distal dendrites of the dentate gyrus granule cells, which receive the perforant path input from the entorhinal cortex, relative to the proximal dendrites, which receive an intrinsic excitatory input from the dentate hilus. The intradendritic alteration in NMDAR1 immunofluorescence occurs without a similar alteration of non-NMDA receptor subunits. Further analyses using synaptophysin as a reflection of total synaptic density and microtubule-associated protein 2 as a dendritic structural marker demonstrated no significant difference in staining intensity or area across the molecular layer in aged animals compared to the younger animals. These findings suggest that, in aged monkeys, a circuit-specific alteration in the intradendritic concentration of NMDAR1 occurs without concomitant gross structural changes in dendritic morphology or a significant change in the total synaptic density across the molecular layer. This alteration in the NMDA receptor-mediated input to the hippocampus from the entorhinal cortex may represent a molecular/cellular substrate for age-associated memory impairments.
Abstract:
A dissociation between human neural systems that participate in the encoding and later recognition of new memories for faces was demonstrated by measuring memory task-related changes in regional cerebral blood flow with positron emission tomography. There was almost no overlap between the brain structures associated with these memory functions. A region in the right hippocampus and adjacent cortex was activated during memory encoding but not during recognition. The most striking finding in neocortex was the lateralization of prefrontal participation. Encoding activated left prefrontal cortex, whereas recognition activated right prefrontal cortex. These results indicate that the hippocampus and adjacent cortex participate in memory function primarily at the time of new memory encoding. Moreover, face recognition is not mediated simply by recapitulation of operations performed at the time of encoding but, rather, involves anatomically dissociable operations.
Abstract:
Currently there is an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in biological databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB or GenBank), whose main drawback is the cost of updating, which easily renders them obsolete. Nevertheless, these databases are the main tool for enterprises that need to update their internal information, for example when a plant-breeding enterprise needs to enrich its genetic information (an internal structured database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement the internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information, combining traditional database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantaneously compare internal data with external data from competitors, allowing quick strategic decisions based on richer data.
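A minimal sketch of the integration idea, assuming invented table, field and function names: an internal structured lookup is combined with an answer produced by an external QA component, so a decision maker can compare both side by side.

```python
# Hedged sketch: enrich an internal (structured) query result with an
# external answer from a Question Answering component over Web sources.
import sqlite3

def external_qa(question: str) -> str:
    """Stand-in for the QA system mining unstructured Web data."""
    return "gene Xyz1 reported as linked to drought tolerance"  # mocked

def enriched_view(trait: str) -> dict:
    conn = sqlite3.connect(":memory:")  # toy internal database
    conn.execute("CREATE TABLE genes (name TEXT, trait TEXT)")
    conn.execute("INSERT INTO genes VALUES ('Abc2', 'drought tolerance')")
    internal = conn.execute(
        "SELECT name FROM genes WHERE trait = ?", (trait,)).fetchall()
    return {
        "internal": [row[0] for row in internal],
        "external": external_qa(f"Which genes relate to {trait}?"),
    }

if __name__ == "__main__":
    print(enriched_view("drought tolerance"))
```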