982 results for Sistemi Web, database


Relevance:

40.00%

Abstract:

The Protein pKa Database (PPD) v1.0 provides a compendium of protein residue-specific ionization equilibria (pKa values), collated from the primary literature and delivered as a Web-accessible PostgreSQL relational database. Ionizable residues play key roles in the molecular mechanisms that underlie many biological phenomena, including protein folding and enzyme catalysis. The PPD serves as a general protein pKa archive and as a source of data for developing and improving pKa prediction systems. The database is accessed through an HTML interface, which offers two fast, efficient search methods: an amino acid-based query and a Basic Local Alignment Search Tool (BLAST) search. Entries also give details of experimental techniques and links to other key databases, such as the National Center for Biotechnology Information and the Protein Data Bank, providing the user with considerable background information.
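
As a rough illustration of the amino acid-based query mentioned above, the sketch below runs a residue-type lookup against a toy table. The PPD itself is a PostgreSQL database; SQLite stands in here only so the example is self-contained, and the table and column names (pka_values, residue_type, pka, method) are assumptions rather than the PPD's actual schema.

```python
# Illustrative sketch only: the PPD runs on PostgreSQL, but an in-memory
# SQLite database is used here so the example is self-contained. The table
# and column names (pka_values, residue_type, pka, method) are assumptions,
# not the PPD's actual schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pka_values (
        protein_id   TEXT,     -- e.g. a PDB identifier
        residue_type TEXT,     -- ASP, GLU, HIS, LYS, ...
        residue_num  INTEGER,
        pka          REAL,
        method       TEXT      -- experimental technique reported in the source paper
    )
""")
conn.execute("INSERT INTO pka_values VALUES ('1ABC', 'ASP', 26, 3.9, 'NMR titration')")

# An amino acid-based query: all recorded pKa values for aspartate residues.
rows = conn.execute(
    "SELECT protein_id, residue_num, pka, method FROM pka_values WHERE residue_type = ?",
    ("ASP",),
).fetchall()
print(rows)
```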

Relevance:

40.00%

Abstract:

In this paper we consider two computer systems and the dynamic Web technologies they use. Different contemporary dynamic Web technologies are described in detail, and their advantages and disadvantages are discussed. Two specific applications are developed, a clinic system and a studying system, and their programming models are described. Finally, both applications are put to use in student education: the online studying system has been tested at the Technical University – Varna, and the Web-based clinic system has been used for the practical education of students at the Medical College – Sofia, branch V. Tarnovo.

Relevance:

40.00%

Abstract:

ACM Computing Classification System (1998): H.5.2, H.2.8, J.2, H.5.3.

Relevance:

40.00%

Abstract:

Some Web users are able to design a visualization template from scratch, while others need visualizations generated automatically by changing a few parameters, so providing different levels of customization of the information is a desirable goal. Our system supports both the automatic generation of visualizations from the semantics of the data and static, pre-specified visualizations defined through an interface language. We address information visualization with the Web in mind, where presenting the retrieved information is a challenge.

We provide a model that narrows the gap between the way users express queries and database manipulation languages (SQL) without changing the system itself, thereby improving the query specification process. We develop a Web interface model integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports.

Unlike other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect the database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views according to the contents and structure of the data. Current database front-ends typically display database objects in a flat view, making it difficult for users to grasp the contents and structure of their results. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases the speed of development. Using only a Web browser, the end-user can retrieve data from remote databases and make the necessary modifications and manipulations of that data through Web-formatted forms and reports, independent of the platform, without opening different applications or learning anything beyond the browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users with little exposure to SQL to build a syntactically and semantically valid query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient execution. (Abstract shortened by UMI.)
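
The schema-validated query generation described above could look roughly like the following sketch. The SCHEMA dictionary, function name, and table and column names are invented for illustration; the abstract does not specify how the actual system represents its schema.

```python
# A minimal sketch of generating a SELECT statement from user selections and
# validating it against a known schema before execution. The SCHEMA dictionary,
# function name and table/column names are hypothetical illustrations.
SCHEMA = {
    "orders": {"id", "customer", "total", "created_at"},
    "customers": {"id", "name", "city"},
}

def build_query(table, columns, filter_column=None):
    """Return a SELECT statement; raise if anything falls outside the schema."""
    if table not in SCHEMA:
        raise ValueError(f"unknown table: {table}")
    unknown = [c for c in columns if c not in SCHEMA[table]]
    if unknown:
        raise ValueError(f"unknown columns for {table}: {unknown}")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if filter_column is not None:
        if filter_column not in SCHEMA[table]:
            raise ValueError(f"unknown filter column: {filter_column}")
        sql += f" WHERE {filter_column} = ?"   # value supplied as a bound parameter
    return sql

# A non-SQL user picks a table, columns and a filter; the generator does the rest.
print(build_query("orders", ["id", "total"], filter_column="customer"))
# -> SELECT id, total FROM orders WHERE customer = ?
```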

Relevance:

40.00%

Abstract:

This research investigates the claim that Change Data Capture (CDC) technologies capture data changes in real time. Based on theory, our hypothesis states that real-time CDC is not achievable with traditional approaches (log scanning, triggers and timestamps), because traditional approaches to CDC require a resource to be polled, which prevents true real-time CDC. We propose an approach to CDC that encapsulates the data source with a set of web services. These web services propagate the changes to the targets and eliminate the need for polling. Additionally, we propose a framework for CDC technologies that allows changes to flow from source to target. This paper discusses current CDC technologies and presents the theory about why they are unable to deliver changes in real time. We then discuss our web service approach to CDC and the accompanying framework, explaining how they can produce real-time CDC. The paper concludes with a discussion of the research required to investigate the real-time capabilities of CDC technologies. © 2010 IEEE.
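
The push-based idea behind the web-service approach can be sketched in a few lines: the encapsulated source notifies subscribed targets the moment a change is written, so no target needs a polling interval. Plain Python callbacks stand in for the web-service calls here; the class and method names are illustrative, not the paper's framework.

```python
# A sketch of the push-based idea: the encapsulated data source notifies every
# registered target at the moment a change is written, so no target needs a
# polling interval. Callbacks stand in for web-service calls; all names are
# illustrative assumptions.
from datetime import datetime, timezone

class EncapsulatedSource:
    def __init__(self):
        self._rows = {}
        self._targets = []          # change consumers (stand-ins for target endpoints)

    def subscribe(self, callback):
        self._targets.append(callback)

    def write(self, key, value):
        self._rows[key] = value
        change = {"key": key, "value": value,
                  "captured_at": datetime.now(timezone.utc).isoformat()}
        for notify in self._targets:    # changes flow source -> target immediately
            notify(change)

source = EncapsulatedSource()
source.subscribe(lambda change: print("target received:", change))
source.write("customer:42", {"name": "Ada"})
```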

Relevance:

40.00%

Abstract:

This paper discusses the advantages of database-backed websites and describes the model for a library website implemented at the University of Nottingham using open source software, PHP and MySQL. As websites continue to grow in size and complexity it becomes increasingly important to introduce automation to help manage them. It is suggested that a database-backed website offers many advantages over one built from static HTML pages. These include a consistency of style and content, the ability to present different views of the same data, devolved editing and enhanced security. The University of Nottingham Library Services website is described and issues surrounding its design, technological implementation and management are explored.
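
The Nottingham site described above is built with PHP and MySQL; the sketch below uses Python and SQLite purely to illustrate the underlying idea of a database-backed page, where content lives in tables and every page is rendered from a query through one shared template. All table, column, and page names are invented.

```python
# The site described above is built with PHP and MySQL; this Python/SQLite
# sketch only illustrates the idea of a database-backed page: content lives
# in tables and every page is rendered from a query through one shared
# template, keeping style and structure consistent. All names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, title TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('opening-hours', 'Opening hours', 'Mon-Fri 9:00-17:00')")

def render(slug):
    title, body = conn.execute(
        "SELECT title, body FROM pages WHERE slug = ?", (slug,)
    ).fetchone()
    # One shared template: every page gets the same wrapper and style.
    return f"<html><body><h1>{title}</h1><p>{body}</p></body></html>"

print(render("opening-hours"))
```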

Relevance:

30.00%

Abstract:

High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, following the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
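
For a sense of what the Interactome module's Cytoscape export looks like, the sketch below writes a two-node, one-edge network as minimal XGMML. The attributes chosen here are a simplified illustration of the general XGMML layout, not the exact markup IIS emits.

```python
# A minimal illustration of the kind of XGMML file the Interactome module
# exports for Cytoscape: a graph element containing nodes (e.g. proteins)
# and edges (interactions). The attributes used are a simplified sketch,
# not the exact markup produced by IIS.
import xml.etree.ElementTree as ET

graph = ET.Element("graph", {"label": "example network", "directed": "0",
                             "xmlns": "http://www.cs.rpi.edu/XGMML"})
for node_id, name in [("1", "PROT_A"), ("2", "PROT_B")]:
    ET.SubElement(graph, "node", {"id": node_id, "label": name})
ET.SubElement(graph, "edge", {"source": "1", "target": "2",
                              "label": "PROT_A (pp) PROT_B"})

# Serialize; a file like this can be opened with Cytoscape's network import.
ET.ElementTree(graph).write("network.xgmml", encoding="utf-8", xml_declaration=True)
```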

Relevance:

30.00%

Abstract:

The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either as structured flat files or as relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they become cumbersome when the data model involves multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values of different types of data, and then translating the original data into the new structure. The OWEB environment allows flexible access to the data by both users and computer programs.
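
A controlled terminology of the kind mentioned above can be reduced to a very small sketch: each field only accepts values drawn from an agreed vocabulary, and anything else is rejected before it enters the resource. The field names and allowed values below are invented examples, not the actual MHCWeb terminology.

```python
# A sketch of what a controlled terminology means in practice: each field of
# the data model only accepts values drawn from an agreed vocabulary, so
# every record uses the same legal values. The field names and vocabularies
# below are invented examples, not the actual MHCWeb terminology.
CONTROLLED_VOCAB = {
    "mhc_class": {"class I", "class II"},
    "assay_type": {"binding", "elution"},
}

def validate_entry(entry):
    """Reject any value that is not part of the controlled vocabulary."""
    for field, allowed in CONTROLLED_VOCAB.items():
        value = entry.get(field)
        if value is not None and value not in allowed:
            raise ValueError(f"{value!r} is not a legal value for {field}")

validate_entry({"mhc_class": "class I", "assay_type": "binding"})  # accepted
# validate_entry({"mhc_class": "class 3"})  # would raise ValueError
```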

Relevance:

30.00%

Abstract:

While multimedia data, and image data in particular, is an integral part of most websites and web documents, our quest for information is so far still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. Firstly, there is the ambiguous and highly subjective nature of defining image semantics and similarity. Secondly, multimedia data can come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval topics, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN: 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.

Relevance:

30.00%

Abstract:

Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
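
The core idea can be illustrated with a toy example: the server keeps (or derives) several versions of each geometry at decreasing resolution and answers a request with the coarsest version that still suits the requesting application. The level thresholds and geometry below are invented; the paper does not prescribe these particular values.

```python
# Toy illustration of multiresolution retrieval: the server holds several
# versions of a geometry at decreasing resolution and serves the coarsest
# one that suits the requesting application, cutting processing cost and
# network traffic. The level thresholds and coordinates are invented.
COASTLINE = {
    0: [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0), (5, 2)],  # full detail
    1: [(0, 0), (2, 1), (4, 0), (5, 2)],                  # simplified
    2: [(0, 0), (5, 2)],                                   # coarsest
}

def choose_level(map_scale):
    """Coarser data for small-scale (zoomed-out) maps."""
    if map_scale >= 1 / 50_000:
        return 0
    if map_scale >= 1 / 500_000:
        return 1
    return 2

level = choose_level(1 / 1_000_000)        # a zoomed-out overview map
geometry = COASTLINE[level]
print(f"level {level}: {len(geometry)} vertices sent to the client")
```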

Relevance:

30.00%

Abstract:

Plant-antivenom is a computational Web system about medicinal plants with anti-venom properties. The system consists of a database of these plants, including scientific publications on the subject and amino acid sequences of active principles from venomous animals, and relates these data, allowing their integration through different search applications. To develop the system, surveys of the scientific literature were first conducted, creating a publication database in a library for reading and user interaction. Classes of categories were then created, allowing the use of tags and the organization of content. The database of medicinal plants holds information such as family, species, isolated compounds, activity and inhibited animal venoms, among others. Registered users can submit new information through wiki tools, and submitted content is released in accordance with permission rules defined by the system. The database of biological venom protein amino acid sequences was structured from the essential information in the National Center for Biotechnology Information (NCBI). Plant-antivenom's interface is simple, contributing to fast and functional access to the system and the integration of the different data registered in it. The Plant-antivenom system is available on the Internet at http://gbi.fmrp.usp.br/plantantivenom.
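
As a purely illustrative sketch of how the plant and venom data might be related and queried together, the snippet below joins three toy tables. The schema and the placeholder records are invented; the abstract does not describe Plant-antivenom's actual tables.

```python
# Purely illustrative: a tiny relational sketch of how plant records and the
# venoms they are reported to inhibit might be related and queried together.
# The schema and placeholder rows are invented, not Plant-antivenom's tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plants   (id INTEGER PRIMARY KEY, species TEXT, family TEXT);
    CREATE TABLE venoms   (id INTEGER PRIMARY KEY, animal TEXT);
    CREATE TABLE inhibits (plant_id INTEGER, venom_id INTEGER);
    INSERT INTO plants   VALUES (1, 'Example species A', 'Example family');
    INSERT INTO venoms   VALUES (1, 'Example snake venom');
    INSERT INTO inhibits VALUES (1, 1);
""")

-- comment placeholder removed
""" if False else None

# Integrated search: which plants are recorded as inhibiting a given venom?
rows = conn.execute("""
    SELECT p.species, p.family
    FROM plants p
    JOIN inhibits i ON i.plant_id = p.id
    JOIN venoms v   ON v.id = i.venom_id
    WHERE v.animal = ?
""", ("Example snake venom",)).fetchall()
print(rows)
```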

Relevance:

30.00%

Abstract:

In 2002, an integrated basic science course was introduced into the Bachelor of Dental Sciences programme at the University of Queensland, Australia. Learning activities for the Metabolism and Nutrition unit within this integrated course included lectures, problem-based learning tutorials, computer-based self-directed learning exercises and practicals. To support student learning and assist students to develop the skills necessary to become lifelong learners, an extensive bank of formative assessment questions was set up using the commercially available package, WebCT®. Questions included short-answer, multiple-choice and extended matching questions. As significant staff time was involved in setting up the question database, the extent to which students used the formative assessment and their perceptions of its usefulness to their learning were evaluated to determine whether formative assessment should be extended to other units within the course. More than 90% of the class completed formative assessment tasks associated with learning activities scheduled in the first two weeks of the block, but this declined to less than 50% by the fourth and final week of the block. Patterns of usage of the formative assessment were also compared in students who scored in the top 10% for all assessment for the semester with those who scored in the lowest 10%. High-performing students accessed the Web-based formative assessment about twice as often as those who scored in the lowest band. However, marks for the formative assessment tests did not differ significantly between the two groups. In a questionnaire that was administered at the completion of the block, students rated the formative assessment highly, with 80% regarding it as being helpful for their learning. In conclusion, although substantial staff time was required to set up the question database, this appeared to be justified by the positive responses of the students.

Relevance:

30.00%

Abstract:

This article is published online with Open Access and distributed under the terms of the Creative Commons Attribution Non-Commercial License.

Relevance:

30.00%

Abstract:

This dissertation analyses the international and Brazilian scientific and technological output in the field of Civil Engineering using bibliometric indicators. Civil Engineering was chosen because of its relevance to the country's economic development; in absolute and relative terms, however, it remains among the most technologically backward sectors of the economy. Bibliometrics is a discipline with multidisciplinary reach that studies the use and the quantitative aspects of recorded scientific production. Indicators of scientific production are analysed in many areas of knowledge, both for planning and executing public policies in various sectors and for giving the scientific community greater knowledge of the system in which it operates. The methodology used in this exploratory, descriptive study was documentary and bibliometric analysis, based on data from scientific publications from 1970 to 2012 and technological publications from 2001 to 2012 in Civil Engineering, indexed in the Science Citation Index Expanded (SCI), Social Science Citation Index (SSCI), Conference Proceedings Citation Index (CPCI) and Derwent Innovations Index (DII), which make up the multidisciplinary Web of Science (WoS) database. The information was qualified and quantified with the aid of the bibliometric software VantagePoint®. The results confirmed the low number of scientific and technological publications in Civil Engineering by authors affiliated with Brazilian teaching and research institutions when compared with industrialized countries. There is a set of strong constraints, beyond the decision-making power and influence of academia, that hinders and limits the dissemination of Brazilian research and patents, related to systemic and cultural factors. The analysis of scientific and technological production indicators in Civil Engineering helps to shape policies that, if used by funding agencies, can support better-grounded investments by governments and the private sector, as is already done in other industrial sectors.

Relevance:

30.00%

Abstract:

Master's degree in Electrical and Computer Engineering