927 results for Web-Access
Abstract:
Access to the right information at the right time is a challenge facing health professionals across the globe. HEART Online (www.heartonline.org.au) is a website designed to support the delivery of evidence-based care for the prevention and rehabilitation of heart disease. It was developed by the Queensland Government and the National Heart Foundation of Australia and launched in May 2013.
Abstract:
What is Universal Access-NY? Universal Access-NY is a complete online planning toolkit, www.UniversalAccessNY.org, where a One-Stop Delivery System can assess its practices and develop work plans to improve physical and programmatic accessibility for all One-Stop customers. This web site and manual were developed by Cornell University’s Employment and Disability Institute, through the support and guidance of the New York State Department of Labor, with funding from two U.S. Department of Labor Work Incentive Grants (WIG 1 and 2). The web site was designed for use in a collaborative manner, bringing together One-Stop personnel, agency partners, business leaders and customers with disabilities. Universal Access-NY supports continuous improvement, with features that encourage multiple uses and incremental systems change.
Abstract:
Background Psychotic-like experiences (PLEs) are subclinical delusional ideas and perceptual disturbances that have been associated with a range of adverse mental health outcomes. This study reports a qualitative and quantitative analysis of the acceptability, usability and short-term outcomes of Get Real, a web program for PLEs in young people. Methods Participants were twelve respondents to an online survey, who reported at least one PLE in the previous 3 months and were currently distressed. Ratings of the program were collected after participants trialled it for a month. Individual semi-structured interviews then elicited qualitative feedback, which was analyzed using Consensual Qualitative Research (CQR) methodology. PLEs and distress were reassessed at 3 months post-baseline. Results User ratings supported the program's acceptability, usability and perceived utility. Significant reductions in the number, frequency and severity of PLE-related distress were found at the 3-month follow-up. The CQR analysis identified four qualitative domains: initial and current understandings of PLEs, responses to the program, and context of its use. Initial understanding involved emotional reactions, avoidance or minimization, limited coping skills and non-psychotic attributions. After using the program, participants saw PLEs as normal and common, had greater self-awareness and understanding of stress, and reported increased capacity to cope and accept experiences. Positive responses to the program focused on its normalization of PLEs, the usefulness of its strategies, self-monitoring of mood, and information putting PLEs into perspective. Some respondents wanted more specific and individualized information, thought the program would be more useful for other audiences, or doubted its effectiveness. The program was mostly used in low-stress situations. Conclusions The current study provided initial support for the acceptability, utility and positive short-term outcomes of Get Real.
The program now requires efficacy testing in randomized controlled trials.
Abstract:
Australian preschool teachers’ use of Web-searching in their classroom practice was examined (N = 131). Availability of Internet-enabled digital technology and the contribution of teacher demographic characteristics, comfort with digital technologies and beliefs about their use were assessed. Internet-enabled technologies were available in 53% (n = 69) of classrooms. Within these classrooms, teacher age and beliefs predicted Web-searching practice. Although comfortable with digital access of knowledge in their everyday life, teachers reported less comfort with Web-searching in the context of their classroom practice. The findings identify the provision of Internet-enabled technologies and professional development as actions to support effective and confident inclusion of Web-searching in classrooms. Such actions are necessary to align with national policy documents that define acquisition of digital literacies as a goal and assert digital access to knowledge as an issue of equity.
Abstract:
Many teachers working in remote and regional areas have limited access to collegial support networks. This research aimed to examine the existing strategies being undertaken by the Department of Education in Western Australia to provide professional learning to teachers in regional and remote areas. It was important to establish perceptions of teachers’ access to professional learning from those working at the coalface in geographically dispersed areas. Consequently, the possible opportunity for improving the amount and variety of professional learning through the application of both synchronous and asynchronous technologies was proposed. The study was guided by the primary research question: “In what ways might technology be used to support professional development of regional and remote teachers in Western Australia?” Generating descriptions of the current practice of professional learning, along with teacher perceptions, was central to this research endeavour. The study relied on a mixed-methods research approach in order to attend to the research question. The data were collected in phases, in what is referred to as an explanatory mixed-methods design. Quantitative data were collected from 104 participants to provide a general picture of the research problem. To further refine this general picture, qualitative data were collected through interviews and e-interviews of 10 teachers. Participants in the study included graduate teachers, teachers who had taught for more than two years, senior teachers and Level Three teachers from seven teaching districts within Western Australia. An investigation into current practice was included in this phase, and technologies available to support a professional learning community over distance were documented.
The final phase incorporated the formulation of a conceptual framework, in which a model was developed to facilitate the successful implementation of a professional learning community through the application of synchronous and asynchronous technologies. The study identified that the travel time required to access professional development is significant and impacts on teachers’ personal time. Relief teachers are in limited supply in these isolated areas, which restricts opportunities to access professional development. Teachers face inequities in terms of promotion, because professional development is explicitly linked to promotional opportunities. Importantly, it was found that professional learning communities are valued, but are often limited by the small staff numbers at the geographic locality of the school. Teachers preferred to undertake professional learning in the local context of their district, school or classroom, and this professional learning must be driven by the needs of the individual teacher, in line with school priorities. Teachers reported they were confident in using technology and accessing professional development online if required; however, much uncertainty surrounded the use of Web 2.0 technologies for this purpose. The recommendations made from the study are intended to identify how a professional learning community might be enhanced through synchronous and asynchronous technologies.
Abstract:
Natural history collections are an invaluable resource housing a wealth of knowledge, with a long tradition of contributing to a wide range of fields such as taxonomy, quarantine, conservation and climate change. It is recognized, however, that such physical collections are often heavily underutilized as a result of practical issues of accessibility (Smith and Blagoderov 2012). The digitization of these collections is a step towards removing these access issues, but other hurdles must be addressed before we truly unlock the potential of this knowledge.
Abstract:
Background: The Internet has recently made possible the free global availability of scientific journal articles. Open Access (OA) can occur either via OA scientific journals, or via authors posting manuscripts of articles published in subscription journals in open web repositories. So far there have been few systematic studies showing how extensive OA is, in particular studies covering all fields of science. Methodology/Principal Findings: The proportion of peer-reviewed scholarly journal articles which are available openly in full text on the web was studied using a random sample of 1837 titles and a web search engine. Of articles published in 2008, 8.5% were freely available at the publishers’ sites. For an additional 11.9%, free manuscript versions could be found using search engines, making the overall OA percentage 20.4%. Chemistry (13%) had the lowest overall share of OA, Earth Sciences (33%) the highest. In medicine, biochemistry and chemistry, publishing in OA journals was more common. In all other fields, author-posted manuscript copies dominated the picture. Conclusions/Significance: The results show that OA already has a significant positive impact on the availability of the scientific journal literature and that there are big differences between scientific disciplines in the uptake. Due to the lack of awareness of OA publishing among scientists in most fields outside physics, the results should be of general interest to all scholars. The results should also interest academic publishers, who need to take into account OA in their business strategies and copyright policies, as well as research funders, who, like the NIH, are starting to require OA availability of results from research projects they fund. The method and search tools developed also offer a good basis for more in-depth studies as well as longitudinal studies.
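The headline figure is the sum of the two disjoint OA routes (gold: free at the publisher's site; green: a self-archived copy found by a search engine). A minimal sketch of the sample bookkeeping, with a hypothetical function name and a toy sample rather than the study's actual data:

```python
from collections import Counter

def oa_shares(labels):
    """Estimate gold, green, and overall OA percentages from a random
    sample of articles.

    Each label is 'gold' (free at the publisher's site), 'green' (a free
    manuscript copy found with a search engine), or 'closed'. The two OA
    categories are disjoint, so the overall share is their sum.
    """
    counts = Counter(labels)
    n = len(labels)
    gold = 100 * counts["gold"] / n
    green = 100 * counts["green"] / n
    return round(gold, 1), round(green, 1), round(gold + green, 1)

# A toy sample of 40 articles in roughly the reported proportions
sample = ["gold"] * 3 + ["green"] * 5 + ["closed"] * 32
print(oa_shares(sample))  # (7.5, 12.5, 20.0)
```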
Abstract:
Introduction. We estimate the total yearly volume of peer-reviewed scientific journal articles published world-wide, as well as the share of these articles available openly on the Web, either directly or as copies in e-print repositories. Method. We rely on data from two commercial databases (ISI and Ulrich's Periodicals Directory), supplemented by sampling and Google searches. Analysis. A central issue is the finding that ISI-indexed journals publish far more articles per year (111) than non-ISI-indexed journals (26), which means that the total figure we obtain is much lower than many earlier estimates. Our method of analysing the number of repository copies (green open access) differs from several earlier studies, which counted the copies in identified repositories: we start from a random sample of articles and then test whether copies can be found by a Web search engine. Results. We estimate that in 2006 the total number of articles published was approximately 1,350,000. Of this number, 4.6% became immediately openly available, and an additional 3.5% after an embargo period of, typically, one year. Furthermore, usable copies of 11.3% could be found in subject-specific or institutional repositories or on the home pages of the authors. Conclusions. We believe our results are the most reliable so far published and, therefore, should be useful in the ongoing debate about Open Access among both academics and science policy makers. The method is replicable and also lends itself to longitudinal studies in the future.
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of the data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes, so that the bulk data handling time spent during IPC can be reduced. We realize NABC by implementing a set of system calls and a user library; the end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show an impressive gain of 12% to 21% in throughput for static file serving, and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving, for a server using the NABC APIs over one using the UNIX APIs.
Abstract:
Many of the research institutions and universities across the world are facilitating open access (OA) to their intellectual outputs through their respective OA institutional repositories (IRs) or through centralized subject-based repositories. The registry of open access repositories (ROAR) lists more than 2850 such repositories across the world. Awareness of the benefits of OA to scholarly literature and OA publishing is picking up in India, too. As per the ROAR statistics, to date there are more than 90 OA repositories in the country. India is doing particularly well in publishing open-access journals (OAJs). As per the directory of open-access journals (DOAJ), India, with 390 OAJs, is to date ranked 5th in the world in terms of the number of OAJs published. Much of the research done in India is reported in journals published from India. These journals have limited readership, and many of them are not indexed by Web of Science, Scopus or other leading international abstracting and indexing databases. Consequently, research done in the country remains hidden not only from fellow countrymen, but also from the international community. This situation can be easily overcome if all researchers facilitate OA to their publications. One of the easiest ways to facilitate OA to scientific literature is through institutional repositories. If every research institution and university in India sets up an open-access IR and ensures that copies of the final accepted versions of all research publications are uploaded to it, the research done in India will get far better visibility. The federation of metadata from all the distributed, interoperable OA repositories in the country will serve as a window onto the research done across the country. Federation of metadata from the distributed OAI-compliant repositories can be easily achieved by setting up harvesting software such as the PKP Harvester.
In this paper, we share our experience in setting up a prototype metadata harvesting service using the PKP harvesting software for the OAI-compliant repositories in India.
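The federation described rests on the OAI-PMH protocol: the harvester issues a ListRecords request to each registered repository and extracts the Dublin Core metadata from the response. A minimal sketch of that exchange, with a hypothetical repository address (the PKP Harvester performs the same request/parse cycle at scale):

```python
import urllib.parse
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build the OAI-PMH ListRecords request a harvester issues
    against a registered repository's base URL."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    return f"{base_url}?{query}"

def harvested_titles(xml_text):
    """Pull the Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(f"{{{DC_NS}}}title")]

# Hypothetical repository endpoint, for illustration only
print(list_records_url("https://example.org/oai"))
# https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```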
Abstract:
Background: Haemophilus influenzae (H. influenzae) is the causative agent of pneumonia, bacteraemia and meningitis, and is responsible for a large number of deaths in both developed and developing countries. Even though the first bacterial genome to be sequenced was that of H. influenzae, there is no database dedicated exclusively to H. influenzae. This prompted us to develop the Haemophilus influenzae Genome Database (HIGDB). Methods: All HIGDB data are stored and managed in a MySQL database. The HIGDB is hosted on a Solaris server and developed using PERL modules; Ajax and JavaScript are used for the interface development. Results: The HIGDB contains detailed information on 42,741 proteins and 18,077 genes, including 10 whole-genome sequences, as well as 284 three-dimensional structures of H. influenzae proteins. In addition, the database provides "Motif search" and "GBrowse". The HIGDB is freely accessible through the URL: http://bioserver1.physics.iisc.ernet.in/HIGDB/. Discussion: The HIGDB will be a single point of access for bacteriological, clinical, genomic and proteomic information on H. influenzae. The database can also be used to identify DNA motifs within H. influenzae genomes and to compare gene or protein sequences of a particular strain with those of other strains of H. influenzae. (C) 2014 Elsevier Ltd. All rights reserved.
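The abstract does not say how "Motif search" is implemented; as one plausible sketch of the underlying idea, a DNA motif written with IUPAC ambiguity codes can be compiled into a regular expression and scanned against a genome sequence (the function name and toy sequence below are illustrative):

```python
import re

# Subset of the IUPAC nucleotide ambiguity codes, mapped to
# regular-expression character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "S": "[CG]",
         "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def find_motif(sequence, motif):
    """Return the 0-based start positions of every (possibly
    overlapping) occurrence of an IUPAC motif in a DNA sequence."""
    pattern = "".join(IUPAC[base] for base in motif.upper())
    # A lookahead lets overlapping matches be reported as well.
    return [m.start() for m in re.finditer(f"(?={pattern})", sequence.upper())]

# Scan a toy sequence for the ambiguous motif TTGACW (W = A or T)
print(find_motif("ttgacaCCTTGACT", "TTGACW"))  # [0, 8]
```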
Abstract:
An online computing server, Online_DPI (where DPI denotes the diffraction precision index), has been created to calculate the 'Cruickshank DPI' value for a given three-dimensional protein or macromolecular structure. It also estimates the atomic coordinate error for all the atoms available in the structure. It is an easy-to-use web server that enables users to visualize the computed values dynamically on the client machine. Users can provide the Protein Data Bank (PDB) identification code or upload the three-dimensional atomic coordinates from the client machine. The computed DPI value for the structure and the atomic coordinate errors for all the atoms are included in the revised PDB file. Further, users can graphically view the atomic coordinate error along with 'temperature factors' (i.e. atomic displacement parameters). In addition, the computing engine is interfaced with an up-to-date local copy of the Protein Data Bank. New entries are updated every week, and thus users can access all the structures available in the Protein Data Bank. The computing engine is freely accessible online at http://cluster.physics.iisc.ernet.in/dpi/.
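The 'temperature factors' the server plots alongside the coordinate error occupy fixed columns of the PDB format (columns 61-66 of ATOM and HETATM records). A minimal sketch of extracting them, with a hypothetical function name; the server's own parsing is of course not shown in the abstract:

```python
def temperature_factors(pdb_text):
    """Extract the B-factors ('temperature factors') from PDB-format
    coordinate text. In the fixed-column PDB format they occupy
    columns 61-66 of ATOM and HETATM records."""
    values = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            values.append(float(line[60:66]))  # 0-based slice of cols 61-66
    return values

# One correctly aligned ATOM record, for illustration
record = ("ATOM      1  N   MET A   1      "
          "11.104  13.207  10.567  1.00 20.50")
print(temperature_factors(record))  # [20.5]
```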
Abstract:
Printed by the Diputación Foral de Álava, D.L. VI-430/99.
Abstract:
Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. Results: We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval, but simultaneously provides protection against resource-intensive queries. Conclusions: We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData services can be recovered without any need for prior knowledge of the content or structure of the SPARQL endpoints.
We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
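For comparison, the "traditional SPARQL" route the services are measured against amounts to building a query request by hand and flattening the standard SPARQL 1.1 JSON result set. A minimal sketch; the endpoint URL and the `format` query parameter are illustrative conventions, not OpenLifeData specifics:

```python
import json
import urllib.parse

def sparql_get_url(endpoint, query):
    """Build a GET request URL for a SPARQL endpoint.

    The 'format=json' parameter is a common endpoint convention for
    requesting JSON results (an Accept header is the standard route);
    it is an assumption here, not an OpenLifeData detail.
    """
    params = urllib.parse.urlencode({"query": query, "format": "json"})
    return f"{endpoint}?{params}"

def flatten_bindings(result_json):
    """Flatten a SPARQL 1.1 JSON result set into variable -> value dicts."""
    data = json.loads(result_json)
    return [{var: cell["value"] for var, cell in row.items()}
            for row in data["results"]["bindings"]]

# Hypothetical endpoint, for illustration only
url = sparql_get_url("https://example.org/sparql",
                     "SELECT ?s WHERE { ?s ?p ?o } LIMIT 1")
```

Hand-constructing such queries is exactly the expertise burden the abstract notes; the SADI services wrap this step so clients never see the endpoint's structure.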
Abstract:
Nowadays the use of web applications is routine, not only for companies but for anyone interested in them, and this market has grown hugely since the introduction of the Internet into our daily lives. Everyone has experienced the moment when you have to choose an access service and do not know which one to select. That is where this web application comes into action: it provides a useful interface for choosing between access services, as well as an analysis tool for the different access technologies on the market. Written in Java, the web application is as simple as it can be, offering a complete interface that meets the needs of everyone, from people at home to the largest company.