795 results for access to knowledge
Abstract:
Presentation at the 4th CLACAI Regional Conference, "Reaffirming the Cairo Legacy: Legal and Safe Abortion." Lima, August 21 and 22, 2014.
Abstract:
This paper examines the factors that prevent slum children aged 5 to 14 from gaining access to schooling, in light of worsening urban poverty and a sizable increase in rural-to-urban migration. Bias against social disadvantage in terms of gender and caste is not clearly manifested in schooling, but migrant children are less likely to attend school. I argue that the lack of preparation for schooling at preschool ages, together with school admission procedures, is the main obstacle for migrant children. The most important implications for universal elementary education in urban India are raising parental awareness and simplifying admission procedures.
Abstract:
Countries classified as least developed countries (LDCs) were granted duty-free, quota-free (DFQF) access to the Japanese market. This study examines the impact of that access and finds that, in general, it did not benefit the LDCs. The construction of concordance tables for Japan's 9-digit tariff-line codes enables analysis at the tariff-line level, which overcomes a possible aggregation bias, and the exogenous nature of DFQF access mitigates the endogeneity problem. Various estimation models, including the triple-difference estimator, show that in general the LDCs did not benefit from DFQF access to the Japanese market. The total value of imports from LDCs has been increasing, but imports in tariff lines granted both zero tariffs and substantial preference margins over non-LDC countries did not expand. These findings suggest that for LDCs the tariff barrier is a relatively small obstacle: trade is affected more strongly by other factors, such as infrastructure, nontariff barriers, geographic distance, and cultural differences.
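As a rough illustration of the estimation strategy named above, the sketch below fits a triple-difference (DDD) regression by ordinary least squares on synthetic data. All variable names, magnitudes, and the simulated effect size are invented for the example; the paper's actual specification is richer.

```python
import numpy as np

# Illustrative triple-difference (DDD) estimator on synthetic trade data.
# Hypothetical indicators (not the paper's actual variables):
#   ldc   = 1 if the exporter is an LDC,
#   treat = 1 if the tariff line received DFQF access,
#   post  = 1 if the observation is after the scheme took effect.
rng = np.random.default_rng(0)
n = 8000
ldc = rng.integers(0, 2, n)
treat = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)

true_ddd = 0.05  # assumed "true" effect built into the simulation
log_imports = (
    1.0 + 0.3 * ldc + 0.2 * treat + 0.4 * post
    + 0.1 * ldc * treat + 0.1 * ldc * post + 0.1 * treat * post
    + true_ddd * ldc * treat * post
    + rng.normal(0, 0.5, n)
)

# Design matrix: intercept, main effects, pairwise interactions, triple term.
X = np.column_stack([
    np.ones(n), ldc, treat, post,
    ldc * treat, ldc * post, treat * post,
    ldc * treat * post,
])
beta, *_ = np.linalg.lstsq(X, log_imports, rcond=None)
print(f"estimated DDD coefficient: {beta[-1]:.3f}")  # last column is the triple term
```

The coefficient on the triple interaction is the DDD estimate of the DFQF effect; with these simulated data it should recover a value near the built-in 0.05.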
Abstract:
This paper introduces the institutional repository (IR) as a powerful tool for helping an institution's researchers archive their research findings and disseminate them freely to the scholarly community on the Internet. An IR can greatly improve access to an institution's research output. Operating an IR also requires various interactions with researchers, which enable the library to gain a solid understanding of research needs and expectations. Through such interaction, the relationship and mutual trust between researchers and the library are strengthened. The experiences of the Institute of Developing Economies (IDE) library may be useful to other special libraries.
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.
• A method for classifying sensor data time series and determining the type of data using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
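A minimal sketch of the mapping idea described above: relating the fields of raw streaming tuples to ontology properties, so that consumers work at a conceptual level rather than against the raw schema. The vocabulary prefixes and field names are hypothetical, and the mapping is far simpler than R2RML.

```python
from typing import Iterator

# Hypothetical mapping from raw sensor-tuple fields to ontology properties.
# The ssn:/ex: terms are illustrative stand-ins, not the thesis's vocabulary.
MAPPING = {
    "sensor_id": "ssn:observedBy",
    "temp_c": "ssn:hasValue",
    "ts": "ex:resultTime",
}

def tuples_to_triples(stream: Iterator[dict]) -> Iterator[tuple]:
    """Translate each raw tuple into (subject, predicate, object) triples."""
    for i, row in enumerate(stream):
        subject = f"ex:obs/{i}"  # mint one observation node per tuple
        for field, prop in MAPPING.items():
            if field in row:
                yield (subject, prop, row[field])

raw = [{"sensor_id": "s1", "temp_c": 21.5, "ts": "2014-01-01T00:00:00Z"}]
for triple in tuples_to_triples(iter(raw)):
    print(triple)
```

A query rewriter would work in the opposite direction, using the same mapping to translate conceptual (SPARQLStream-style) queries into operations over the raw stream instead of materializing triples.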
Abstract:
The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. Audiovisual data is omnipresent on today's Web, yet different interaction interfaces and, especially, diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized, and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrator applications are also presented in this article.
Abstract:
Protein folding occurs on a time scale ranging from milliseconds to minutes for a majority of proteins. Computer simulation of protein folding, from a random configuration to the native structure, is nontrivial owing to the large disparity between the simulation and folding time scales. As an effort to overcome this limitation, simple models with idealized protein subdomains, e.g., the diffusion–collision model of Karplus and Weaver, have gained some popularity. We present here new results for the folding of a four-helix bundle within the framework of the diffusion–collision model. Even with such simplifying assumptions, a direct application of standard Brownian dynamics methods would consume 10,000 processor-years on current supercomputers. We circumvent this difficulty with a specialized Brownian dynamics simulation method. The method features the calculation of the mean passage time of an event from the flux overpopulation method and the sampling of events that lead to productive collisions even if their probability is extremely small (because of large free-energy barriers that separate them from the higher probability events). Using these developments, we demonstrate that a coarse-grained model of the four-helix bundle can be simulated in several days on current supercomputers. Furthermore, such simulations yield folding times that are in the range of time scales observed in experiments.
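The Brownian dynamics machinery that such simulations build on can be illustrated with a textbook overdamped (Euler-Maruyama) propagator in a one-dimensional harmonic well. This is not the paper's flux-overpopulation scheme; all parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Generic overdamped Brownian dynamics step (Euler-Maruyama) in a 1-D
# harmonic potential U(x) = k x^2 / 2. D, dt, kT, and k are arbitrary.
rng = np.random.default_rng(1)
D, dt, kT, k = 1.0, 1e-3, 1.0, 1.0  # diffusion const, time step, temperature, spring const

def force(x):
    return -k * x  # harmonic restoring force, F = -dU/dx

def bd_step(x):
    # x(t+dt) = x(t) + (D/kT) F(x) dt + sqrt(2 D dt) * Gaussian noise
    return x + (D / kT) * force(x) * dt + np.sqrt(2 * D * dt) * rng.normal()

x = 2.0  # start away from the minimum
for _ in range(20_000):
    x = bd_step(x)
# After many steps the trajectory samples the Boltzmann distribution,
# whose variance for this harmonic well is kT/k.
```

The disparity the abstract describes follows directly from this scheme: with time steps this small, reaching millisecond folding times requires an enormous number of steps, which is why rare-event sampling tricks are needed.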
Abstract:
Defined model systems consisting of physiologically spaced arrays of H3/H4 tetramer⋅5S rDNA complexes have been assembled in vitro from pure components. Analytical hydrodynamic and electrophoretic studies have revealed that the structural features of H3/H4 tetramer arrays closely resemble those of naked DNA. The reptation in agarose gels of H3/H4 tetramer arrays is essentially indistinguishable from naked DNA, the gel-free mobility of H3/H4 tetramer arrays relative to naked DNA is reduced by only 6% compared with 20% for nucleosomal arrays, and H3/H4 tetramer arrays are incapable of folding under ionic conditions where nucleosomal arrays are extensively folded. We further show that the cognate binding sites for transcription factor TFIIIA are significantly more accessible when the rDNA is complexed with H3/H4 tetramers than with histone octamers. These results suggest that the processes of DNA replication and transcription have evolved to exploit the unique structural properties of H3/H4 tetramer arrays.
Abstract:
The recent ability to sequence whole genomes allows ready access to all genetic material. The approaches outlined here allow automated analysis of sequence for the synthesis of optimal primers in an automated multiplex oligonucleotide synthesizer (AMOS). The efficiency is such that all ORFs for an organism can be amplified by PCR. The resulting amplicons can be used directly in the construction of DNA arrays or can be cloned for a large variety of functional analyses. These tools allow a replacement of single-gene analysis with a highly efficient whole-genome analysis.
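One small step in automated primer design, scoring a candidate primer by GC content and the Wallace-rule melting temperature Tm = 2(A+T) + 4(G+C), can be sketched as follows. Real pipelines such as the AMOS workflow described above use more sophisticated thermodynamic models; the example sequence here is illustrative.

```python
# Toy primer scoring: GC fraction and Wallace-rule melting temperature.
# The Wallace rule (Tm = 2*(A+T) + 4*(G+C), in degrees C) is a standard
# rough estimate valid only for short oligonucleotides.
def primer_stats(seq: str) -> tuple[float, int]:
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T")
    tm = 2 * at + 4 * gc
    return gc / len(seq), tm

gc_frac, tm = primer_stats("ATGGCTAGCTAGGACGTT")  # hypothetical 18-mer
print(f"GC fraction {gc_frac:.2f}, Tm ~ {tm} C")
```

In a whole-genome pipeline, statistics like these would be computed for candidate primers flanking every ORF, and pairs with matched melting temperatures would be selected for multiplex synthesis.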
Abstract:
For proteins to enter the secretory pathway, the membrane attachment site (M-site) on ribosomes must bind cotranslationally to the Sec61 complex present in the endoplasmic reticulum membrane. The signal recognition particle (SRP) and its receptor (SR) are required for targeting, and the nascent polypeptide associated complex (NAC) prevents inappropriate targeting of nonsecretory nascent chains. In the absence of NAC, any ribosome, regardless of the polypeptide being synthesized, binds to the endoplasmic reticulum membrane, and even nonsecretory proteins are translocated across the endoplasmic reticulum membrane. By occupying the M-site, NAC prevents all ribosome binding unless a signal peptide and SRP are present. The mechanism by which SRP overcomes the NAC block is unknown. We show that signal peptide-bound SRP occupies the M-site and therefore keeps it free of NAC. To expose the M-site and permit ribosome binding, SR can pull SRP away from the M-site without prior release of SRP from the signal peptide.
Abstract:
WormBase (http://www.wormbase.org) is a web-based resource for the Caenorhabditis elegans genome and its biology. It builds upon the existing ACeDB database of the C. elegans genome by providing data curation services, a significantly expanded range of subject areas and a user-friendly front end.
Abstract:
The Mouse Tumor Biology (MTB) Database serves as a curated, integrated resource for information about tumor genetics and pathology in genetically defined strains of mice (i.e., inbred, transgenic and targeted mutation strains). Sources of information for the database include the published scientific literature and direct data submissions by the scientific community. Researchers access MTB using Web-based query forms and can use the database to answer such questions as ‘What tumors have been reported in transgenic mice created on a C57BL/6J background?’, ‘What tumors in mice are associated with mutations in the Trp53 gene?’ and ‘What pathology images are available for tumors of the mammary gland regardless of genetic background?’. MTB has been available on the Web since 1998 from the Mouse Genome Informatics web site (http://www.informatics.jax.org). We have recently implemented a number of enhancements to MTB including new query options, redesigned query forms and results pages for pathology and genetic data, and the addition of an electronic data submission and annotation tool for pathology data.
Abstract:
High-throughput genome (HTG) and expressed sequence tag (EST) sequences are currently the most abundant nucleotide sequence classes in the public databases. Their large volume, high degree of fragmentation, and lack of gene-structure annotations prevent efficient and effective searches of HTG and EST data for protein sequence homologies by standard search methods. Here, we briefly describe three newly developed resources that should make discovery of interesting genes in these sequence classes easier in the future, especially for biologists without access to a powerful local bioinformatics environment. trEST and trGEN are regularly regenerated databases of hypothetical protein sequences predicted from EST and HTG sequences, respectively. Hits is a web-based data retrieval and analysis system providing access to precomputed matches between protein sequences (including sequences from trEST and trGEN) and patterns and profiles from Prosite and Pfam. All three resources can be accessed via the Hits home page (http://hits.isb-sib.ch).
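To give a flavor of how pattern resources like Prosite can be matched against sequences, the sketch below converts a subset of Prosite pattern syntax into a Python regular expression. It handles only the common constructs and is an illustration, not the matching engine the Hits system actually uses.

```python
import re

# Convert a subset of Prosite pattern syntax to a Python regex:
#   x        -> .           (any residue)
#   x(2,4)   -> .{2,4}      (repetition)
#   [LIVM]   -> [LIVM]      (alternatives)
#   {P}      -> [^P]        (exclusion)
# Elements are separated by '-'; a trailing '.' terminates the pattern.
def prosite_to_regex(pattern: str) -> str:
    out = []
    for elem in pattern.rstrip(".").split("-"):
        rep = ""
        m = re.match(r"^(.*?)\((\d+)(?:,(\d+))?\)$", elem)
        if m:  # split off a repetition count such as (3) or (2,4)
            elem = m.group(1)
            rep = "{" + m.group(2) + ("," + m.group(3) if m.group(3) else "") + "}"
        if elem == "x":
            out.append("." + rep)
        elif elem.startswith("[") or elem.isalpha():
            out.append(elem + rep)
        elif elem.startswith("{"):
            out.append("[^" + elem[1:-1] + "]" + rep)
    return "".join(out)

# Classic C2H2 zinc-finger pattern, a standard Prosite example:
rx = prosite_to_regex("C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H")
print(rx)
```

Systems like Hits precompute such matches across entire databases (including trEST and trGEN entries) so that users retrieve hits instead of scanning sequences themselves.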
Abstract:
Conflicts can occur between the principle of freedom of information treasured by librarians and ethical standards of scientific research concerning the propriety of using data derived from immoral or dishonorable experimentation. A prime example of this conflict was brought to the attention of the medical and library communities in 1995, when articles appeared claiming that the subjects of the illustrations in the classic anatomy atlas, Eduard Pernkopf's Topographische Anatomie des Menschen, were victims of the Nazi Holocaust. While few have disputed the accuracy or the artistic and educational value of the Pernkopf atlas, some have argued that the use of such subjects violates standards of medical ethics involving inhuman and degrading treatment of subjects or disrespect for a human corpse, and efforts were made to remove the book from medical libraries. In this article, the history of the Pernkopf atlas and the controversy surrounding it are reviewed. The results of a survey of academic medical libraries concerning their treatment of the Pernkopf atlas are reported, and the ethical implications of these issues as they affect the responsibilities of librarians are discussed.