916 results for web tool
Abstract:
Background: Understanding channel structures that lead to active sites or traverse the molecule is important in the study of molecular functions such as ion, ligand, and small molecule transport. Efficient methods for extracting, storing, and analyzing protein channels are required to support such studies. Further, there is a need for an integrated framework that supports computation of the channels, interactive exploration of their structure, and detailed visual analysis of their properties. Results: We describe a method for molecular channel extraction based on the alpha complex representation. The method computes geometrically feasible channels, stores both the volume occupied by the channel and its centerline in a unified representation, and reports significant channels. The representation also supports efficient computation of channel profiles that help in understanding channel properties. We describe methods for effective visualization of the channels and their profiles. These methods and the visual analysis framework are implemented in a software tool, CHEXVIS. We apply the method to a number of known channel-containing proteins to extract pore features. Results from these experiments on several proteins show that CHEXVIS performance is comparable to, and in some cases better than, that of existing channel extraction techniques. Using several case studies, we demonstrate how CHEXVIS can be used to study channels, extract their properties, and gain insights into molecular function. Conclusion: CHEXVIS supports the visual exploration of multiple channels together with their geometric and physico-chemical properties, thereby enabling an understanding of the basic biology of transport through protein channels. The CHEXVIS web server is freely available at http://vgl.serc.iisc.ernet.in/chexvis/. The web server is supported on all modern browsers with the latest Java plug-in.
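The channel profile mentioned in this abstract is essentially a radius-along-the-centreline curve. As a rough illustration (not CHEXVIS code; the sphere-list representation and the function below are assumptions made for the example), such a profile and its bottleneck could be computed once a channel is stored as a sequence of spheres along its centreline:

```python
# Illustrative sketch only (not CHEXVIS code): given a channel centreline
# sampled as spheres (centre, radius), compute a simple radius profile and
# report the bottleneck, the kind of per-channel summary such a tool exposes.
import math

def channel_profile(samples):
    """samples: list of ((x, y, z), radius) along the centreline."""
    profile = []          # (arc length from the channel mouth, radius)
    arc_length = 0.0
    prev = None
    for centre, radius in samples:
        if prev is not None:
            arc_length += math.dist(prev, centre)
        profile.append((arc_length, radius))
        prev = centre
    bottleneck = min(profile, key=lambda p: p[1])
    return profile, bottleneck

if __name__ == "__main__":
    toy = [((0, 0, 0), 3.0), ((0, 0, 2), 1.4), ((0, 0, 4), 2.2)]  # made-up data
    profile, (where, r_min) = channel_profile(toy)
    print(f"bottleneck radius {r_min:.1f} at {where:.1f} along the channel")
```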
Abstract:
John Latham, International Projects Co-ordinator at Lancaster and Morecambe College (LMC), got involved with the project Serious Computer Games as a Teaching Tool (SCOGATT) after using the game EnerCities with his students. The web-based platform at www.scogatt.eu serves as a one-stop toolkit for vocational teachers and trainers who want to use serious computer games (SCG) in their teaching environments but might need a helping hand. There they can find a compendium of serious games, SCOGATT pilot results, teacher reports, and the exemplar game EnerCities.
Abstract:
Nowadays the use of web applications is routine not only for companies but also for anyone interested in them. This market has grown enormously since the Internet became part of our daily lives. Everyone has experienced the moment when you have to choose an access service and do not know which one to select. That is where this web application comes into action. It provides a useful interface for choosing between access services, as well as an analysis tool for the different access technologies on the market. Written in Java, this web application is as simple as it can be, offering a complete interface that meets the needs of everyone, from people at home to the largest company.
Abstract:
In the field of continuing education in health, various initiatives can be cited that aim to train professionals using Information and Communication Technologies (ICTs). However, little is yet known about the use of the web by health professionals as a formal learning strategy, and even less when informal learning is considered. Actions in the field of education that use, and above all teach the use of, technology as a learning tool are still carried out largely by intuition, through trial and error, given how quickly the technology itself has evolved. The general objective of this research is therefore to understand the profile, perceptions, and social representations of web-based learning among physicians, nurses, and dentists, and the possible influence of this use on their daily professional practice. To reach this objective, a qualitative-quantitative methodology was employed, using an online questionnaire with closed and open questions answered by 277 students of the Specialization Course in Family Health offered by the Universidade do Estado do Rio de Janeiro (UERJ) unit of the Universidade Aberta do Sistema Único de Saúde (UNA-SUS). The closed questions were analyzed with descriptive statistics and non-parametric bivariate tests. The open questions were analyzed in light of social representations theory, using content analysis and free-evocation techniques. The research results were presented as three conference papers and four articles submitted for publication in high-quality academic journals. Based on the results, a key concern is that the subjects' use of the internet appears to be justified by, and restricted to, the simple consumption of information, to the detriment of the educational possibilities of cyberculture. It is believed that actions are needed to support a more reflective practice in order to reverse this potentially limited use of the potential of ICTs.
Abstract:
The primary objective of this project, “the Assessment of Existing Information on Atlantic Coastal Fish Habitat”, is to inform conservation planning for the Atlantic Coastal Fish Habitat Partnership (ACFHP). ACFHP is recognized as a Partnership by the National Fish Habitat Action Plan (NFHAP), whose overall mission is to protect, restore, and enhance the nation’s fish and aquatic communities through partnerships that foster fish habitat conservation. This project is a cooperative effort of the NOAA/NOS Center for Coastal Monitoring and Assessment (CCMA) Biogeography Branch and ACFHP. The Assessment includes three components: (1) a representative bibliographic and assessment database, (2) a Geographical Information System (GIS) spatial framework, and (3) a summary document with a description of methods, analyses of habitat assessment information, and recommendations for further work. The spatial bibliography was created by linking the bibliographic table, developed in Microsoft Excel and exported to SQL Server, with the spatial framework, developed in ArcGIS and exported to GoogleMaps. The bibliography is a comprehensive, searchable database of over 500 selected documents and data sources on Atlantic coastal fish species and habitats. Key information captured for each entry includes basic bibliographic data, spatial footprint (e.g. waterbody or watershed), species and habitats covered, and electronic availability. Information on habitat condition indicators, threats, and conservation recommendations is extracted from each entry and recorded in a separate linked table. The spatial framework is a functional digital map based on polygon layers of watersheds and estuarine and marine waterbodies derived from NOAA’s Coastal Assessment Framework, the MMS/NOAA Multipurpose Marine Cadastre, and other sources, providing spatial reference for all of the documents cited in the bibliography. Together, the bibliography and assessment tables and their spatial framework provide a powerful tool to query and assess available information through a publicly available web interface. They were designed to support the development of priorities for ACFHP’s conservation efforts within a geographic area extending from Maine to Florida, and from coastal watersheds seaward to the edge of the continental shelf. The Atlantic Coastal Fish Habitat Partnership has made initial use of the Assessment of Existing Information (AEI). Though it has not yet applied the AEI in a systematic or structured manner, it expects to find further uses as the draft conservation strategic plan is refined and as regional action plans are developed. The AEI also provides a means to move beyond an “assessment of existing information” towards an “assessment of fish habitat”, and is being applied towards the National Fish Habitat Action Plan (NFHAP) 2010 Assessment. Beyond the scope of the current project, there may be application to broader initiatives such as Integrated Ecosystem Assessments (IEAs), Ecosystem Based Management (EBM), and Marine Spatial Planning (MSP).
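As a rough sketch of how the linked bibliography and assessment tables might be queried programmatically (the table and column names below are hypothetical, and the real system sits on SQL Server with a GIS spatial framework rather than SQLite):

```python
# Illustrative sketch only: a minimal query against a spatially linked
# bibliography, in the spirit of the AEI database. All table and column
# names here are hypothetical simplifications.
import sqlite3

def documents_for_waterbody(db_path, waterbody, species=None):
    """Return documents whose spatial footprint matches a waterbody,
    together with the threats and recommendations recorded for them."""
    con = sqlite3.connect(db_path)
    sql = """
        SELECT b.title, b.year, a.threats, a.recommendations
        FROM bibliography AS b
        JOIN assessment   AS a ON a.doc_id = b.doc_id
        WHERE b.spatial_footprint = ?
    """
    params = [waterbody]
    if species is not None:
        sql += " AND b.species LIKE ?"
        params.append(f"%{species}%")
    rows = con.execute(sql, params).fetchall()
    con.close()
    return rows
```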
Abstract:
Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, a task typically performed manually by network administrators. The development and management of these enforcement policies is a difficult and error-prone task. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. This provides the system administrator with a textual interface and the vendor-specific low-level languages they trust and are familiar with, together with the support of an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE). This provided an object environment in which to create a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI was provided with the tool for more experienced users, but it was also designed to be easily integrated into GUI-based applications for non-expert users. The evaluation of the system was performed from a custom-built GUI application, which can create test firewall rule sets containing synthetic rules to supply a variety of experimental conditions, as well as record various performance metrics. The validation tool was created around a pragmatic outlook with regard to the needs of the network administrator. The modularity of the design was important, due to the fast-changing nature of the network device languages being processed. An object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to suit the possible needs of different types of users. System administrators desire low-level, CLI-based tools that they can trust and use easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier-to-learn process. Built around these ideas, the tool proved to be a usable and complementary addition to the many network policy-based systems currently available. The tool has a flexible design and contains comprehensive functionality, in contrast to some other tools that work across multiple vendor languages but do not implement a deep range of options for any of them. It complements existing systems such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to correctly process large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed. This is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
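The thesis tool itself is written in C#; purely as an illustration of the kind of validation check such a parser performs, the following Python sketch flags shadowed rules, i.e. rules that can never fire because an earlier rule with a different action already matches all of their traffic (the rule format here is a hypothetical simplification, not the thesis's representation):

```python
# Illustrative sketch only: one representative validation check (rule
# shadowing) of the kind an off-line firewall policy parser can perform.
from ipaddress import ip_network

def covers(a, b):
    """True if rule a matches every packet rule b matches: same protocol,
    and a's source/destination networks contain b's."""
    return (a["proto"] == b["proto"]
            and ip_network(b["src"]).subnet_of(ip_network(a["src"]))
            and ip_network(b["dst"]).subnet_of(ip_network(a["dst"])))

def shadowed_rules(rules):
    """Return (earlier_id, later_id) pairs where the later rule can never
    fire because an earlier rule with a different action covers it."""
    out = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if covers(earlier, later) and earlier["action"] != later["action"]:
                out.append((earlier["id"], later["id"]))
                break
    return out
```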
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail, capturing the fact that Web servers must concurrently handle some requests for large audio and video files and a large number of requests for small documents containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
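Webmonitor's implementation is not shown here; as a minimal sketch of the low-overhead sampling idea, the following Python fragment periodically reads /proc/stat on Linux to split CPU time between user space and the kernel, the split that underlies the 90%-in-kernel observation (the sampling interval and field grouping are choices made for this example):

```python
# Illustrative sketch only (not Webmonitor itself): sample /proc/stat on
# Linux to split CPU time between user space and the kernel.
import time

def cpu_split(interval=1.0):
    def snapshot():
        with open("/proc/stat") as f:
            fields = [float(x) for x in f.readline().split()[1:]]
        user = fields[0] + fields[1]                 # user + nice
        kernel = fields[2] + fields[5] + fields[6]   # system + irq + softirq
        return user, kernel, sum(fields)
    u0, k0, t0 = snapshot()
    time.sleep(interval)
    u1, k1, t1 = snapshot()
    total = (t1 - t0) or 1.0
    return {"user": (u1 - u0) / total, "kernel": (k1 - k0) / total}

if __name__ == "__main__":
    print(cpu_split())
```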
Abstract:
ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, compute the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
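A minimal sketch of the search step, assuming feature vectors have already been extracted: relevance feedback is approximated here by per-dimension weights, and the k-d tree query uses a non-zero eps for approximate matching. This is an illustration, not the ImageRover implementation; scipy's cKDTree stands in for its optimized index, and the weighting scheme is an assumption.

```python
# Illustrative sketch only: relevance-weighted nearest-neighbour search over
# precomputed image feature vectors using an approximate k-d tree query.
import numpy as np
from scipy.spatial import cKDTree

def search(features, query_vec, weights, k=10, eps=0.5):
    """features: (n_images, d) feature vectors. weights: (d,) relevance
    weights, e.g. larger for dimensions on which the user's relevant
    examples agree. eps > 0 allows approximate nearest-neighbour search."""
    w = np.sqrt(weights)              # weighted L2 == plain L2 in scaled space
    tree = cKDTree(features * w)      # rebuilt here because weights change
    dist, idx = tree.query(query_vec * w, k=k, eps=eps)
    return idx, dist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.random((1000, 64))
    q = feats[42] + 0.01 * rng.random(64)
    print(search(feats, q, weights=np.ones(64))[0][:5])
```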
Abstract:
One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator), generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges of satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
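A minimal sketch of how three of these ingredients might be sampled (this is not Surge itself; the distribution choices and parameter values below are assumptions made for illustration):

```python
# Illustrative sketch only: draw heavy-tailed file sizes, Zipf-like file
# popularity, and idle ("Off") periods between a user's requests.
import numpy as np

rng = np.random.default_rng(1)

def file_sizes(n, body_mean=9.3, body_sigma=1.3, tail_alpha=1.1, tail_frac=0.07):
    """Lognormal body with a Pareto tail, a common heavy-tailed model."""
    sizes = rng.lognormal(body_mean, body_sigma, n)
    tail = rng.random(n) < tail_frac
    sizes[tail] = (rng.pareto(tail_alpha, tail.sum()) + 1.0) * sizes[tail]
    return sizes.astype(int)

def popularity_ranks(n_requests, n_files, a=1.3):
    """Zipf-like popularity: which file (by rank) each request asks for."""
    ranks = rng.zipf(a, n_requests)
    return np.clip(ranks, 1, n_files) - 1

def off_times(n, alpha=1.5, scale=1.0):
    """Pareto-distributed idle periods between a user's requests (seconds)."""
    return scale * (rng.pareto(alpha, n) + 1.0)
```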
Abstract:
This paper examines how and why web server performance changes as the workload at the server varies. We measure the performance of a PC acting as a standalone web server, running Apache on top of Linux. We use two important tools to understand what aspects of software architecture and implementation determine performance at the server. The first is a tool that we developed, called WebMonitor, which measures activity and resource consumption, both in the operating system and in the web server. The second is the kernel profiling facility distributed as part of Linux. We vary the workload at the server along two important dimensions: the number of clients concurrently accessing the server, and the size of the documents stored on the server. Our results quantify and show how more clients and larger files stress the web server and operating system in different and surprising ways. Our results also show the importance of fixed costs (i.e., opening and closing TCP connections, and updating the server log) in determining web server performance.
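One of the fixed costs identified, opening and closing a TCP connection per request, can be illustrated with a small timing experiment. This is a sketch only; the host, path and request count are placeholders, and it assumes a server that supports keep-alive.

```python
# Illustrative sketch only: compare a new TCP connection per request with
# reusing one connection for all requests.
import http.client
import time

def fetch_n(host, path, n, reuse):
    t0 = time.perf_counter()
    conn = http.client.HTTPConnection(host, timeout=10) if reuse else None
    for _ in range(n):
        c = conn if reuse else http.client.HTTPConnection(host, timeout=10)
        c.request("GET", path)
        c.getresponse().read()      # drain the response before the next request
        if not reuse:
            c.close()
    if reuse:
        conn.close()
    return time.perf_counter() - t0

if __name__ == "__main__":
    host, path, n = "localhost", "/index.html", 100   # placeholders
    print("new connection per request:", fetch_n(host, path, n, reuse=False))
    print("single reused connection:  ", fetch_n(host, path, n, reuse=True))
```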
Abstract:
As new multi-party edge services are deployed on the Internet, application-layer protocols with complex communication models and event dependencies are increasingly being specified and adopted. To ensure that such protocols (and compositions thereof with existing protocols) do not result in undesirable behaviors (e.g., livelocks) there needs to be a methodology for the automated checking of the "safety" of these protocols. In this paper, we present ingredients of such a methodology. Specifically, we show how SPIN, a tool from the formal systems verification community, can be used to quickly identify problematic behaviors of application-layer protocols with non-trivial communication models—such as HTTP with the addition of the "100 Continue" mechanism. As a case study, we examine several versions of the specification for the Continue mechanism; our experiments mechanically uncovered multi-version interoperability problems, including some which motivated revisions of HTTP/1.1 and some which persist even with the current version of the protocol. One such problem resembles a classic degradation-of-service attack, but can arise between well-meaning peers. We also discuss how the methods we employ can be used to make explicit the requirements for hardening a protocol's implementation against potentially malicious peers, and for verifying an implementation's interoperability with the full range of allowable peer behaviors.
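SPIN itself consumes Promela models; as a language-agnostic illustration of the underlying idea, the toy Python search below exhaustively explores the joint state space of a small protocol given as a transition function and flags states from which no progress is possible (the state encoding is hypothetical and is not the paper's HTTP model):

```python
# Illustrative sketch only: brute-force state-space exploration of a tiny
# protocol model, flagging non-final states with no outgoing transitions.
from collections import deque

def find_deadlocks(initial, transitions):
    """initial: hashable joint state whose last component is an "is final"
    flag. transitions(state) -> list of successor states."""
    seen, deadlocks = {initial}, []
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        nexts = transitions(state)
        if not nexts and not state[-1]:    # stuck and not marked final
            deadlocks.append(state)
        for s in nexts:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return deadlocks
```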
Abstract:
This study evaluated the effect of an online diet-tracking tool on college students’ self-efficacy regarding fruit and vegetable intake. A convenience sample of students completed online self-efficacy surveys before and after a six-week intervention in which they tracked dietary intake with an online tool. Group one (n=22 fall, n=43 spring) accessed a tracking tool without nutrition tips; group two (n=20 fall, n=33 spring) accessed the tool and weekly nutrition tips. The control group (n=36 fall, n=60 spring) had access to neither. Each semester there were significant changes in self-efficacy from pre- to post-test for men and for women when experimental groups were combined (p<0.05 for all); however, these changes were inconsistent. Qualitative data showed that participants responded well to the simplicity of the tool, the immediacy of feedback, and the customized database containing foods available on campus. Future models should improve user engagement by increasing convenience, potentially by automation.
Abstract:
PURPOSE: Risk-stratified guidelines can improve quality of care and cost-effectiveness, but their uptake in primary care has been limited. MeTree, a Web-based, patient-facing risk-assessment and clinical decision support tool, is designed to facilitate uptake of risk-stratified guidelines. METHODS: A hybrid implementation-effectiveness trial of three clinics (two intervention, one control). PARTICIPANTS: consentable nonadopted adults with upcoming appointments. PRIMARY OUTCOME: agreement between patient risk level and risk management for those meeting evidence-based criteria for increased-risk risk-management strategies (increased risk) and those who do not (average risk), before MeTree and after. MEASURES: chart abstraction was used to identify risk management related to colon, breast, and ovarian cancer, hereditary cancer, and thrombosis. RESULTS: Participants = 488, female = 284 (58.2%), white = 411 (85.7%), mean age = 58.7 (SD = 12.3). Agreement between risk management and risk level for all conditions for each participant (except for colon cancer, which was limited to those <50 years of age) was (i) 1.1% (N = 2/174) for the increased-risk group before MeTree and 16.1% (N = 28/174) after, and (ii) 99.2% (N = 2,125/2,142) for the average-risk group before MeTree and 99.5% (N = 2,131/2,142) after. Of those receiving increased-risk risk-management strategies at baseline, 10.5% (N = 2/19) met criteria for increased risk. After MeTree, 80.7% (N = 46/57) met criteria. CONCLUSION: MeTree integration into primary care can improve uptake of risk-stratified guidelines and potentially reduce "overuse" and "underuse" of increased-risk services. Genet Med 18(10): 1020-1028.
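The agreement percentages reported above follow directly from the counts given in the abstract; a quick arithmetic check:

```python
# Check of the agreement percentages, using counts taken directly from the
# abstract above.
counts = {
    "increased risk, pre-MeTree":  (2, 174),
    "increased risk, post-MeTree": (28, 174),
    "average risk, pre-MeTree":    (2125, 2142),
    "average risk, post-MeTree":   (2131, 2142),
    "baseline increased-risk management meeting criteria":   (2, 19),
    "post-MeTree increased-risk management meeting criteria": (46, 57),
}
for label, (num, den) in counts.items():
    print(f"{label}: {100 * num / den:.1f}%")
```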
Abstract:
This paper uses a case study approach to consider the effectiveness of the electronic survey as a research tool to measure the learner voice about experiences of e-learning in a particular institutional case. Two large-scale electronic surveys were carried out for the Student Experience of e-Learning (SEEL) project at the University of Greenwich in 2007 and 2008, funded by the UK Higher Education Academy (HEA). The paper considers this case to argue that, although the electronic web-based survey is a convenient method of quantitative and qualitative data collection, enabling higher education institutions to capture swiftly the multiple views of large numbers of students regarding experiences of e-learning, for more robust analysis electronic survey research is best combined with other methods of in-depth qualitative data collection. The advantages and disadvantages of the electronic survey as a research method to capture student experiences of e-learning are the focus of analysis in this short paper, which reports an overview of large-scale data collection (1,000+ responses) from two electronic surveys administered to students using SurveyMonkey as a web-based survey tool as part of the SEEL research project. Advantages of web-based electronic survey design include flexibility, ease of design, a high degree of designer control, convenience, low costs, data security, ease of access and a guarantee of confidentiality, combined with the researcher's ability to identify users through email addresses. Disadvantages of electronic survey design include the self-selecting nature of web-enabled respondent participation, which tends to skew data collection towards students who respond effectively to email invitations. The relative inadequacy of electronic surveys to capture in-depth qualitative views of students is discussed with regard to prior recommendations from the JISC-funded Learners' Experiences of e-Learning (LEX) project, in consideration of the results from SEEL in-depth interviews with students. The paper considers the literature on web-based and email electronic survey design, summing up the relative advantages and disadvantages of electronic surveys as a tool for student experience of e-learning research. The paper concludes with a range of recommendations for designing future electronic surveys to capture the learner voice on e-learning, contributing to evidence-based learning technology research development in higher education.
Abstract:
To achieve progress towards sustainable resource management, it is essential to evaluate options for the reuse and recycling of secondary raw materials, so as to provide a robust evidence base for decision makers. This paper presents the research undertaken in the development of a web-based decision-support tool (the used tyres resource efficiency tool) to compare three processing routes for used tyres against their existing primary alternatives. Primary data on the energy and material flows for the three routes and their alternatives were collected and analysed. The methodology used was a streamlined life-cycle assessment (sLCA) approach. The processes included were: car tyre baling against aggregate gabions; car tyre retreading against new car tyres; and car tyre shred used in landfill engineering against primary aggregates. The outputs of the assessment, and of the web-based tool, are estimates of raw materials used, carbon dioxide emissions and costs. The paper discusses the benefits of carrying out a streamlined LCA and of using the outputs of this analysis to develop a decision-support tool. The strengths and weaknesses of this approach are discussed and future research priorities are identified which could facilitate the use of life-cycle approaches by designers and practitioners.
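As a rough sketch of the kind of per-functional-unit comparison such a decision-support tool performs (all inventory figures below are placeholders, not values from the study):

```python
# Illustrative sketch only: compare a used-tyre processing route against its
# primary alternative on raw material use, CO2 emissions and cost.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    raw_material_kg: float   # primary raw material per functional unit
    co2_kg: float            # CO2 emissions per functional unit
    cost_gbp: float          # cost per functional unit

def compare(used_tyre_route: Route, primary_route: Route):
    return {
        "raw material saved (kg)": primary_route.raw_material_kg - used_tyre_route.raw_material_kg,
        "CO2 avoided (kg)": primary_route.co2_kg - used_tyre_route.co2_kg,
        "cost difference (GBP)": used_tyre_route.cost_gbp - primary_route.cost_gbp,
    }

if __name__ == "__main__":
    baling = Route("tyre bales", raw_material_kg=15.0, co2_kg=4.0, cost_gbp=38.0)            # placeholder values
    gabions = Route("aggregate gabions", raw_material_kg=950.0, co2_kg=9.0, cost_gbp=55.0)   # placeholder values
    print(compare(baling, gabions))
```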