952 results for Web interface


Relevance:

30.00%

Publisher:

Abstract:

Background: Haemophilus influenzae (H. influenzae) is the causative agent of pneumonia, bacteraemia and meningitis. The organism is responsible for a large number of deaths in both developed and developing countries. Even though the first bacterial genome to be sequenced was that of H. influenzae, there is no database dedicated exclusively to H. influenzae. This prompted us to develop the Haemophilus influenzae Genome Database (HIGDB). Methods: All HIGDB data are stored and managed in a MySQL database. The HIGDB is hosted on a Solaris server and developed using Perl modules; Ajax and JavaScript are used for the interface. Results: The HIGDB contains detailed information on 42,741 proteins and 18,077 genes, including 10 whole genome sequences, as well as 284 three-dimensional structures of H. influenzae proteins. In addition, the database provides "Motif search" and "GBrowse". The HIGDB is freely accessible through the URL: http://bioserver1.physics.iisc.ernet.in/HIGDB/. Discussion: The HIGDB will be a single point of access for bacteriological, clinical, genomic and proteomic information on H. influenzae. The database can also be used to identify DNA motifs within H. influenzae genomes and to compare gene or protein sequences of a particular strain with those of other strains. (C) 2014 Elsevier Ltd. All rights reserved.
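As a hedged sketch of the kind of DNA motif scan a "Motif search" tool performs (not HIGDB's actual code: the IUPAC ambiguity table is standard, the example fragment is invented, and AAGTGCGGT is the core of the H. influenzae uptake signal sequence):

```python
import re

# Standard IUPAC ambiguity codes mapped to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def find_motif(genome: str, motif: str):
    """Return (position, match) pairs for an IUPAC motif in a DNA sequence."""
    pattern = re.compile("".join(IUPAC[base] for base in motif.upper()))
    return [(m.start(), m.group()) for m in pattern.finditer(genome.upper())]

# Toy fragment (invented) scanned for the H. influenzae uptake signal core.
print(find_motif("ttgaagtgcggtcataagtgcggt", "AAGTGCGGT"))
# [(3, 'AAGTGCGGT'), (15, 'AAGTGCGGT')]
```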

Relevance:

30.00%

Publisher:

Abstract:

Nowadays the use of web applications is routine, not only for companies but for anyone interested in them, and this market has grown enormously since the Internet entered our daily lives. Everyone has experienced the moment of having to choose an access service without knowing which one to select. That is where this web application comes into action: it provides a useful interface for choosing between access services, as well as an analysis tool for the different access technologies on the market. Written in Java, the web application is as simple as it can be, offering a complete interface that meets the needs of everyone, from home users to the largest companies.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses the process by which multiple users access a Web database and proposes ideas for improving it, then models the improved access process using an extended Petri net with inhibitor arcs.
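The paper's own model is not reproduced here, but as a minimal sketch of the formalism it names, an extended Petri net with inhibitor arcs can be simulated in a few lines; the places, transitions, and the toy scenario below are invented for illustration:

```python
# Minimal Petri net with inhibitor arcs: a transition fires only if every
# input place holds a token AND every inhibitor place is empty.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs, inhibitors)

    def add_transition(self, name, inputs, outputs, inhibitors=()):
        self.transitions[name] = (inputs, outputs, inhibitors)

    def enabled(self, name):
        inputs, _, inhibitors = self.transitions[name]
        return (all(self.marking.get(p, 0) >= 1 for p in inputs)
                and all(self.marking.get(p, 0) == 0 for p in inhibitors))

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"{name} is not enabled")
        inputs, outputs, _ = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy model: a user may connect to the database only while no writer is active.
net = PetriNet({"user_waiting": 2, "db_free": 1, "writer_active": 0})
net.add_transition("acquire", inputs=["user_waiting", "db_free"],
                   outputs=["user_connected"], inhibitors=["writer_active"])
net.fire("acquire")
print(net.marking)
# {'user_waiting': 1, 'db_free': 0, 'writer_active': 0, 'user_connected': 1}
```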

Relevance:

30.00%

Publisher:

Abstract:

Introduction. Where to start? ALOV Map. The ALOV Map interface. Creating the HTML file. Defining the project's configuration file. Using the search option. Using the information button. Using the Web link button. Using the selection (SELECT) button. Conclusions.

Relevance:

30.00%

Publisher:

Abstract:

This work presents the results obtained within a research project whose objective was to define a software infrastructure for deploying a portal, named WebAgritec, for the integration and interoperability of services developed by Embrapa Informática Agropecuária.

Relevance:

30.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. All of these, however, can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best trade-off for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
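As a minimal, hypothetical sketch of the kind of object-oriented framework level (4) describes (every class, method, and parameter name below is invented, not taken from the project), abstract manager interfaces can be registered per environment and concretized independently:

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract interface each concrete resource-management model implements."""
    @abstractmethod
    def admit(self, task: dict) -> bool:
        """Decide whether a task's real-time/reliability needs can be met."""

class ResourceRegistry:
    """Maps environment names to concrete ResourceManager implementations."""
    def __init__(self):
        self._managers: dict[str, ResourceManager] = {}

    def register(self, environment: str, manager: ResourceManager):
        self._managers[environment] = manager

    def manager_for(self, environment: str) -> ResourceManager:
        return self._managers[environment]

class BestEffortManager(ResourceManager):
    def admit(self, task):   # accepts everything; offers no guarantees
        return True

class DeadlineManager(ResourceManager):
    """Admits only tasks whose deadline is no tighter than the bound we can guarantee."""
    def __init__(self, guaranteed_ms: int):
        self.guaranteed_ms = guaranteed_ms
    def admit(self, task):
        return task.get("deadline_ms", float("inf")) >= self.guaranteed_ms

registry = ResourceRegistry()
registry.register("intranet", DeadlineManager(guaranteed_ms=50))
registry.register("internet", BestEffortManager())
print(registry.manager_for("intranet").admit({"deadline_ms": 80}))  # True
```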

Relevance:

30.00%

Publisher:

Abstract:

We present a highly accurate method for classifying web pages based on link percentage, the number of text characters that are part of links divided by the total number of text characters on a web page. K-means clustering is used to create unique thresholds to differentiate index pages and article pages on individual web sites. Index pages contain mostly links to articles and other indices, while article pages contain mostly text. We also present a novel link grouping algorithm using agglomerative hierarchical clustering that groups links in the same spatial neighborhood together while preserving link structure. Grouping allows users with severe disabilities to use a scan-based mechanism to tab through a web page and select items. In experiments, we saw up to a 40-fold reduction in the number of commands needed to click on a link with a scan-based interface, which shows that we can vastly improve the rate of communication for users with disabilities. We used web page classification and link grouping to alter web page display on an accessible web browser that we developed to make a usable browsing interface for users with disabilities. Our classification method consistently outperformed a baseline classifier even when using minimal data to generate article and index clusters, and achieved classification accuracy of 94.0% on web sites with well-formed or slightly malformed HTML, compared with 80.1% accuracy for the baseline classifier.
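A minimal sketch of the thresholding step described above, using a one-dimensional two-cluster k-means over per-page link percentages (the figures are invented, and this is not the authors' implementation):

```python
def link_percentage(link_chars: int, total_chars: int) -> float:
    """Fraction of a page's text characters that belong to links."""
    return link_chars / total_chars if total_chars else 0.0

def two_means_threshold(values, iters=100):
    """1-D k-means with k=2; returns the midpoint between the two centroids."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        split = (lo + hi) / 2
        left = [v for v in values if v <= split]
        right = [v for v in values if v > split]
        new_lo = sum(left) / len(left) if left else lo
        new_hi = sum(right) / len(right) if right else hi
        if (new_lo, new_hi) == (lo, hi):   # centroids stable: converged
            break
        lo, hi = new_lo, new_hi
    return (lo + hi) / 2

# Invented per-page link percentages for one site: indices vs. articles.
pages = [0.82, 0.75, 0.90, 0.12, 0.08, 0.20, 0.15]
t = two_means_threshold(pages)
print(round(t, 3), ["index" if p > t else "article" for p in pages])
```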

Relevance:

30.00%

Publisher:

Abstract:

A question central to modelling and, ultimately, managing food webs concerns the dimensionality of trophic niche space, that is, the number of independent traits relevant for determining consumer-resource links. Food-web topologies can often be interpreted by assuming resource traits to be specified by points along a line and each consumer's diet to be given by resources contained in an interval on this line. This phenomenon, called intervality, has been known for 30 years and is widely acknowledged to indicate that trophic niche space is close to one-dimensional. We show that the degrees of intervality observed in nature can be reproduced in arbitrary-dimensional trophic niche spaces, provided that the processes of evolutionary diversification and adaptation are taken into account. Contrary to expectations, intervality is least pronounced at intermediate dimensions and steadily improves towards lower- and higher-dimensional trophic niche spaces.
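As a hedged illustration of intervality itself (not the authors' measure): under a fixed one-dimensional ordering of resources, a consumer's diet is interval when its members are consecutive, and counting the non-diet resources that fall inside the diet's range gives a crude deviation score. The toy web below is invented:

```python
def diet_gaps(diet_positions):
    """Number of non-diet resources falling strictly inside the diet's range
    under a fixed 1-D ordering; 0 means the diet is a perfect interval."""
    if len(diet_positions) < 2:
        return 0
    lo, hi = min(diet_positions), max(diet_positions)
    return (hi - lo + 1) - len(set(diet_positions))

# Toy food web: web[consumer] = resource indices under one chosen ordering.
web = {
    "c1": [0, 1, 2],   # consecutive diet: 0 gaps, interval
    "c2": [3, 5, 6],   # resource 4 missing inside the range: 1 gap
}
for consumer, diet in web.items():
    print(consumer, diet_gaps(diet))
```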

Relevance:

30.00%

Publisher:

Abstract:

This dissertation describes the development of an information system for managing the academic information of postgraduate programmes - the WebMaster system - whose goal is to make that information accessible to users through the World Wide Web (WWW). It begins by presenting some concepts considered relevant to understanding information systems in their full scope within an organization, instantiating some of them for the case of universities. It then reflects on Web-based information systems, contrasting the concepts of (traditional) Web site and Web application in terms of technological architecture and their main advantages and disadvantages, with a brief reference to the main technologies for building solutions with dynamically generated content. Finally, it presents the WebMaster system across its development stages, from requirements analysis and system design through to implementation. The requirements analysis phase was carried out through a survey of potential users to identify their information needs. Based on the results of this phase, the system design is presented from the conceptual, navigational, and user-interface perspectives, using the OOHDM methodology (Object-Oriented Hypermedia Design Method). The implementation phase, building on the previous stages and on the technologies selected during planning, provides an interactive space for information exchange for all members of the academic community involved in postgraduate courses.

Relevance:

30.00%

Publisher:

Abstract:

This study examines the efficiency of the search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines increasingly serve as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions, the rationale being that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for most online activities today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies of SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising to evaluate Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising vary between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and give managers a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
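For reference, the click-through rate and conversion rate metrics named above have standard definitions; a minimal sketch with invented campaign figures:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: fraction of ad impressions that received a click."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(orders: int, clicks: int) -> float:
    """Conversion rate: fraction of ad clicks that ended in a purchase."""
    return orders / clicks if clicks else 0.0

# Illustrative campaign figures (invented):
impressions, clicks, orders = 120_000, 3_600, 180
print(f"CTR = {ctr(clicks, impressions):.2%}")         # 3.00%
print(f"CR  = {conversion_rate(orders, clicks):.2%}")  # 5.00%
```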

Relevance:

30.00%

Publisher:

Abstract:

In the era of Web 2.0, the use of websites is multiplying and raising new issues. Satisfaction with interactivity, a factor in a site's effectiveness, determines its popularity and hence its visibility on the Web. In this study we therefore consider that users have a role to play in the design process of these sites. Both in theory and in practice, designers do seem to take users into account; however, they do not involve them as active participants in their approach. Through a literature review and field observations, this study seeks to understand the main categories and morphologies of websites and the uses that follow from them. On that basis, we analyse the various design approaches and the perceptions and expectations of Web users. To meet these objectives, the analysis targets two categories of sites, built by professionals and by amateurs, and allows us to show that the results of these two approaches, expressed through the sites' graphical interfaces, differ in perceived quality. The study also underlines the importance of effective graphical communication of website elements in order to structure reading and ultimately convey a clear, understandable message to users. To support our proposals, we draw on two theories of graphical communication, Gestalt theory and semiotics, the former concerned with visual perception, the latter with the interpretation of signs; both proved relevant for analysing the quality and effectiveness of content elements. Our study reveals that participants were satisfied with neither of the two sites tested: the usability of the professionally designed site is too complex, while the interface of the amateur-built site lacks professionalism and coherence. These results underline the relevance of a user-centred approach to website design, since it makes it possible to identify and correct design errors. They also show that professionals, with their technical and theoretical knowledge, stand apart from amateurs in terms of stakeholders, tools, and constraints. Possible solutions, via user-centred design criteria, are proposed at the end of the study with the aim of optimizing the quality and effectiveness of web graphical interfaces.

Relevance:

30.00%

Publisher:

Abstract:

ka-Map ("ka" as in ka-boom!) is an open source project that is aimed at providing a javascript API for developing highly interactive web-mapping interfaces using features available in modern web browsers. ka-Map currently has a number of interesting features. It sports the usual array of user interface elements such as: interactive, continuous panning without reloading the page; keyboard navigation options (zooming, panning); zooming to pre-set scales; interactive scalebar, legend and keymap support; optional layer control on client side; server side tile caching