965 results for Web-Access
Abstract:
Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that could not be aligned on a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.
Abstract:
Rankings of scientific productivity are becoming increasingly relevant, both at the individual and the institutional level. Ensuring that they are based on reliable and exhaustive information is therefore important. This study shows that the position of individuals in this kind of ranking can change substantially when several internationally recognized bibliometric indicators are taken into account. As an illustration, we use the case of the ten professors in the area of 'Personalidad, Evaluación y Tratamiento Psicológico' listed in the recent analysis by Olivas-Ávila and Musi-Lechuga (Psicothema 2010, Vol. 22, no. 4, pp. 909-916).
Abstract:
The term Web 2.0 is used to refer to a set of applications that are constantly evolving in response to the requirements that users express. On the web we can find many tools developed along the lines of Web 2.0: blogs, wikis, social bookmarking tools, file-sharing tools, and so on. We consider that the educational system cannot remain on the sidelines of this technological evolution and needs to adapt at every level. Universities likewise need to adjust to these new times, and we increasingly find online collaborative learning experiences designed to support student learning. The work we present is an analysis of Web 2.0 tools and a compilation of good university teaching practices for developing collaborative methodologies using ICT. We also offer recommendations on the use of these tools in university teaching and learning processes.
Abstract:
This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. This interface provides easy and intuitive access to the most popular functionalities of the package. These include the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode, which combines the output of other aligners, and the template-based modes of T-Coffee that deliver high-accuracy alignments using structural or homology-derived templates. The three available template modes are Expresso, for the alignment of proteins with a known 3D structure; R-Coffee, for aligning RNA sequences with conserved secondary structures; and PSI-Coffee, for accurately aligning distantly related sequences using homology extension. The new server benefits from recent improvements of the T-Coffee algorithm, can align up to 150 sequences of up to 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.
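As a rough illustration of how these alignment modes are typically selected, the following Python sketch wraps the T-Coffee command line. It is hypothetical: it assumes a local t_coffee binary on the PATH and that the -mode values listed below match the installed version, so they should be checked against the T-Coffee documentation before use.

# Hypothetical sketch: selecting T-Coffee alignment modes from Python.
# Assumes a `t_coffee` binary on the PATH and that these -mode values
# (mcoffee, expresso, rcoffee, psicoffee) match the installed version.
import subprocess

MODES = {
    "default": [],                       # plain consistency-based alignment
    "mcoffee": ["-mode", "mcoffee"],     # combine the output of several aligners
    "expresso": ["-mode", "expresso"],   # use 3D-structure templates
    "rcoffee": ["-mode", "rcoffee"],     # RNA with conserved secondary structure
    "psicoffee": ["-mode", "psicoffee"], # homology extension for distant homologs
}

def align(fasta_path: str, mode: str = "default") -> None:
    """Run t_coffee on a FASTA file using one of the modes above."""
    cmd = ["t_coffee", fasta_path] + MODES[mode]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    align("sequences.fasta", mode="psicoffee")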
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying only on search engines are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of English-language deep web sites. One can therefore expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
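To make the form-querying idea concrete, the following Python sketch shows a minimal, hypothetical representation of a search interface (field labels mapped to input names plus a submit URL) and of filling it out programmatically. It is not the thesis' data model or form query language; the example URL, field names and CSS selector are placeholders, and the requests and beautifulsoup4 packages are assumed to be installed.

# Minimal, hypothetical sketch of representing and querying a web search form.
# Not the I-Crawler data model; field names and the target URL are placeholders.
from dataclasses import dataclass, field
import requests
from bs4 import BeautifulSoup

@dataclass
class SearchInterface:
    """A very small model of a search form: its action URL and its fields."""
    action_url: str
    method: str = "get"
    fields: dict = field(default_factory=dict)  # field label -> input name

@dataclass
class FormQuery:
    """A query expressed against field labels rather than raw input names."""
    interface: SearchInterface
    values: dict  # field label -> user-supplied value

    def submit(self) -> list:
        params = {self.interface.fields[label]: value
                  for label, value in self.values.items()}
        resp = requests.request(self.interface.method,
                                self.interface.action_url, params=params)
        resp.raise_for_status()
        # Extract result rows from the dynamic result page (placeholder selector).
        soup = BeautifulSoup(resp.text, "html.parser")
        return [row.get_text(strip=True) for row in soup.select(".result")]

# Usage: query a (fictional) book database through its search form.
books = SearchInterface("https://example.org/search", "get",
                        {"Title": "q", "Author": "author"})
print(FormQuery(books, {"Title": "deep web"}).submit())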
Abstract:
This paper presents the current state and development of a prototype web-GIS (Geographic Information System) decision support platform intended for application in natural hazards and risk management, mainly for floods and landslides. The web platform uses open-source geospatial software and technologies, particularly the Boundless (formerly OpenGeo) framework and its client-side software development kit (SDK). The main purpose of the platform is to assist experts and stakeholders in the decision-making process for evaluating and selecting different risk management strategies through an interactive participation approach, integrating a web-GIS interface with a decision support tool based on a compromise programming approach. The access rights and functionality of the platform vary depending on the roles and responsibilities of the stakeholders in managing risk. The application of the prototype platform is demonstrated on an example case study site, the Malborghetto Valbruna municipality in north-eastern Italy, where flash floods and landslides are frequent and major events occurred in 2003. The preliminary feedback collected from stakeholders in the region is discussed to understand their perspectives on the proposed prototype platform.
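Since the decision support tool is based on a compromise programming approach, a generic sketch of such a ranking may help. This is the textbook formulation, not necessarily the exact one used in the platform, and the criteria, weights and scores below are invented for illustration: alternatives are ranked by their weighted Lp distance from the ideal point formed by the best value of each criterion.

# Generic compromise-programming sketch (illustrative; criteria, weights and
# scores are invented). Alternatives are ranked by their L_p distance from the
# ideal point built from the best value of each criterion.
def compromise_ranking(scores, weights, p=2):
    """scores: {alternative: [criterion values]}, higher = better for every criterion."""
    n = len(weights)
    best = [max(v[i] for v in scores.values()) for i in range(n)]
    worst = [min(v[i] for v in scores.values()) for i in range(n)]
    def distance(vals):
        total = 0.0
        for i in range(n):
            spread = best[i] - worst[i] or 1.0   # avoid division by zero
            total += (weights[i] * (best[i] - vals[i]) / spread) ** p
        return total ** (1.0 / p)
    return sorted(scores, key=lambda a: distance(scores[a]))

# Hypothetical flood-mitigation alternatives scored on cost-effectiveness,
# protection level and social acceptance (all scaled so higher is better).
alternatives = {"retention basin": [0.6, 0.8, 0.7],
                "levee upgrade":   [0.4, 0.9, 0.5],
                "early warning":   [0.9, 0.5, 0.8]}
print(compromise_ranking(alternatives, weights=[0.3, 0.5, 0.2]))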
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogeny-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for a generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
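To illustrate what simulating a pair of dependent positions means, here is a deliberately simplified Python sketch. It is not the Coev model: it performs a discrete-step Markov-style simulation over the joint alphabet of nucleotide pairs in which substitutions that keep the pair inside a favoured "coevolving profile" are proposed at a higher rate than those that break it.

# Toy sketch of simulating two coevolving nucleotide positions (not the Coev
# model): substitutions that keep the pair inside a favoured "coevolving
# profile" are weighted more heavily than those that break it.
import random

NUCS = "ACGT"

def simulate_pair(profile, rate_in=10.0, rate_out=1.0, steps=100, seed=None):
    """profile: set of favoured pairs, e.g. {"AT", "TA", "GC", "CG"}. Returns the final pair."""
    rng = random.Random(seed)
    pair = rng.choice(sorted(profile))          # start inside the profile
    for _ in range(steps):
        candidates = [a + b for a in NUCS for b in NUCS if a + b != pair]
        weights = [rate_in if c in profile else rate_out for c in candidates]
        pair = rng.choices(candidates, weights=weights, k=1)[0]
    return pair

# Hypothetical profile of co-varying pairs (Watson-Crick-like coupling).
profile = {"AT", "TA", "GC", "CG"}
sample = [simulate_pair(profile, seed=i) for i in range(20)]
print(sample)  # most simulated pairs should fall inside the profile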
Abstract:
Instructor and student beliefs, attitudes and intentions toward contributing to local open courseware (OCW) sites have been investigated through campus-wide surveys at Universidad Politecnica de Valencia and the University of Michigan. In addition, at the University of Michigan, faculty have been queried about their participation in open access (OA) publishing. We compare the instructor and student data concerning OCW between the two institutions, and introduce the investigation of open access publishing in relation to open courseware publishing.
Abstract:
E-learning, understood as the intensive use of Information and Communication Technologies in (mainly, but not only) distance education, has radically changed the meaning of the latter. E-learning is an overused term which has been applied to any use of technology in education. Today, the most widely accepted meaning of e-learning coincides with the fourth generation described by Taylor (1999), where there is an asynchronous process that allows students and teachers to interact in an educational process expressly designed in accordance with these principles. We prefer to speak of Internet-Based Learning or, better still, Web-Based Learning, for example, to convey the fact that distance education is carried out using the Internet, with the appearance of the virtual learning environment concept, a web space where the teaching and learning process is generated and supported (Sangrà, 2002). This entails overcoming the barriers of space and time of brick-and-mortar education (although we prefer the term face-to-face) or of classical distance education based on broadcasting, and adopting a completely asynchronous model that allows access to education by many more users, at any level (including secondary education, but primarily higher education and lifelong learning).
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over a network using different Internet protocols. Web services are increasingly used in industry to automate tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We have used UML class diagrams and UML state machine diagrams with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which helps with requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and other elements of the software development environment by tracing unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, following our design approach, the service developer can ensure that the designed web service interfaces will be REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, which can be part of the Semantic Web. These interfaces are used with OWL 2 reasoners to check for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specification. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace back unfulfilled service goals to detect faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The method preconditions constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be manually inserted by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
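As an illustration of what a skeleton with method pre- and post-conditions might look like, consider the hypothetical Python sketch below, loosely inspired by the hotel booking example. It is not the output of the thesis' code generation tool; the state names and method names are invented. A minimal booking resource only allows confirmation after payment:

# Hypothetical sketch of a stateful REST-style resource with pre/post-conditions;
# loosely inspired by the hotel booking example, not the thesis' generated code.
class PreconditionError(Exception):
    pass

class Booking:
    """States: CREATED -> PAID -> CONFIRMED."""
    def __init__(self):
        self.state = "CREATED"

    def _require(self, condition, message):
        if not condition:
            raise PreconditionError(message)

    def put_payment(self):
        # Precondition: payment is only accepted for a freshly created booking.
        self._require(self.state == "CREATED", "booking must be CREATED to pay")
        self.state = "PAID"
        # Post-condition: the implementation must leave the booking in state PAID.
        assert self.state == "PAID"

    def put_confirmation(self):
        # Precondition: confirmation requires a completed payment.
        self._require(self.state == "PAID", "booking must be PAID to confirm")
        self.state = "CONFIRMED"
        assert self.state == "CONFIRMED"

booking = Booking()
booking.put_payment()
booking.put_confirmation()
print(booking.state)  # CONFIRMED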
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This exploratory, descriptive action research study is based on a survey of a convenience sample consisting of 172 college and university marketing students and 5 professors who were experienced in teaching in an internet-based environment. The students surveyed were studying e-commerce and international business in 3rd- and 4th-year classes at a leading university in Ontario and e-commerce in 5th-semester classes at a leading college. These classes were taught using a hybrid teaching style with the contribution of a large website that contained pertinent text and audio material. Hybrid teaching employs web-based course materials (some in the form of Learning Objects) to deliver curriculum material both during the attended lectures and for students accessing the course web page outside of class hours. The survey took the form of an online questionnaire. The research questions explored in this study were: 1. What factors influence the students' ability to access and learn from web-based course content? 2. How likely are the students to use selected elements of internet-based curriculum for learning academic content? 3. What is the preferred physical environment to facilitate learning in a hybrid environment? 4. How effective are selected teaching/learning strategies in a hybrid environment? The findings of this study suggest that students are very interested in being part of the learning process by contributing to a course web site. Specifically, students are interested in audio content being one of the formats of online course material, and have an interest in being part of the creation of small audio clips to be used in class.
Abstract:
Parent–school relationships contribute significantly to the quality of students' education. The Internet, in turn, has started to influence the way individuals communicate socially, and most school boards in Ontario now use the Internet to communicate with parents, which helps build parent–school relationships. This project comprised a conceptual analysis of how the Internet enhances parent–school relationships, intended to support Ontario school board administrators seeking to implement such technology. The study's literature review identified the links between Web 2.0 technology, parent–school relationships, and effective parent engagement. A conceptual framework of the features of Web 2.0 tools that promote social interaction was developed and used to analyze the websites of three Ontario school boards. The analysis revealed that school board websites used static features such as email, newsletters, and announcements for communication and did not give parents the ability to provide feedback through Web 2.0 features such as instant messaging. General recommendations were made so that school board administrators have the opportunity to implement changes in their school communities through feasible modifications. Overall, Web 2.0-based technologies such as interactive communication tools and social media hold the most promise for enhancing parent–school relationships because they can help not only overcome barriers of time and distance, but also strengthen parents' desire to be engaged in their children's education experiences.
Abstract:
Affiliation: Département de biochimie, Faculté de médecine, Université de Montréal
Abstract:
Personal work carried out as part of SCI6850, an individual research course, for three credits. Presented to the École de bibliothéconomie et des sciences de l'information.