1000 results for web


Relevance:

20.00%

Publisher:

Abstract:

The main objective of this article is to analyze the social game CityVille, created by the company Zynga, one of the latest Facebook trends available on the web. CityVille was chosen as the corpus out of an interest in understanding why this game has achieved such success and is currently one of the most prominent and most widely adopted social/digital games on the social network Facebook. The study thus seeks to grasp how social games have evolved and transformed the communicational relations among users of the network. Social networks have become increasingly important and are bound up with people's lives. With the development of digital language, the way people interact is transformed, as they communicate through the computer in real time. The study undertakes a plural analysis of the game CityVille, highlighting distinct points of view on the social game. We propose to examine the relations between use and users, and between the technology and the content of the game. In the conclusions, we set out the possible directions for the future of social games on the web.

Relevance:

20.00%

Publisher:

Abstract:

A model of an institutional university web application, with specific categories, properties and evolutionary phases, served as the unit of measurement for the observation of 64 websites of communication faculties across Ibero-America. Ten sites were then chosen for detailed analysis under the guidelines of the model. The results indicate that fewer than ten of the websites can be considered efficient products of communication and management.

Relevance:

20.00%

Publisher:

Abstract:

In this article we present a project [1] developed to demonstrate the capability of Multi-Layer Perceptrons (MLP) to approximate non-linear functions [2]. The simulation has been implemented in Java so that it can run on any computer over the Internet [3], with simple operation and a pleasant interface. The power of the simulation lies in the user's ability to watch the evolution of the approximations, see the contribution of each neuron, control the different parameters, etc. In addition, an online help system has been implemented to guide the user during the simulation.
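The project itself is a Java applet and is not reproduced here; purely as a minimal sketch of what it demonstrates, the following Python example (network size, learning rate and target function are assumptions) trains a one-hidden-layer MLP by gradient descent to approximate a non-linear function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target non-linear function, sampled on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of tanh units, linear output
n_hidden = 16
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.01
for epoch in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    # Backward pass: gradients of the mean squared error
    g_out = 2.0 * err / len(X)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)   # tanh'(x) = 1 - tanh(x)^2
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    # Gradient descent step
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

print("final MSE:", float((err ** 2).mean()))
```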

Relevance:

20.00%

Publisher:

Abstract:

This final-year degree project sets out an information-systems solution for an IT distributor called DistriTiC, which has lost competitiveness and customers in recent years. Starting from an initial analysis of the company's situation and of management's organizational requirements, we have produced a strategic plan for renewing its information systems, studying the four stages of the life cycle and their cross-cutting processes.

Relevance:

20.00%

Publisher:

Abstract:

The aim is to study content personalization on the Internet. The amount of content that companies offer on their websites has grown explosively. Personalization lets customers receive exactly the content they want and need, but it requires customer profiling, and collecting customer data raises concerns about loss of privacy. The research is carried out as a case study of five companies that act as content providers, based on existing material and on participant observation of the case companies. Four basic approaches to content personalization can be identified. Profiling is implemented mainly either from information the customer supplies or by observing the customer's behaviour on the website. In the future, clear rules will be needed for collecting and using customer data. Customers want personalized content, but content providers must earn their trust in the protection of their privacy, and the importance of that trust will only grow as personalization is developed further.
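As a toy illustration of the two main profiling approaches the study identifies (customer-supplied information versus observed behaviour), the sketch below blends both signals to rank content; all topic names and weights are invented.

```python
from collections import Counter

# Explicit profile: interests the customer stated when registering
explicit_profile = {"sports": 1.0, "finance": 0.5}

# Implicit profile: inferred from pages the customer actually viewed
page_views = ["sports", "sports", "tech", "finance"]
implicit_profile = Counter(page_views)

def score(topic):
    # Blend the two signals; the 0.5/0.5 weighting is an arbitrary choice.
    return (0.5 * explicit_profile.get(topic, 0)
            + 0.5 * implicit_profile.get(topic, 0))

catalogue = ["tech", "sports", "finance", "travel"]
personalised = sorted(catalogue, key=score, reverse=True)
print(personalised)  # topics the customer most likely wants come first
```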

Relevance:

20.00%

Publisher:

Abstract:

The number of Internet services grows constantly, and a person typically has a separate electronic identity in every service he or she uses. Storing authentication credentials securely becomes ever more difficult as a new set accumulates with each service registration. This master's thesis examines the problem, and its solutions, from both a service-oriented and a technical perspective. The business concept of service-oriented identity management and its implementation technologies, such as single sign-on (SSO) and the Security Assertion Markup Language (SAML), are reviewed through rough examples and by walking through the concept and technical details of the solution produced in the Nokia Account project. Finally, the implementation of the first version of the Nokia Account service is analyzed against the design principles and requirements of identity management services.
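As a rough, purely illustrative sketch of the single sign-on idea (one identity provider vouches for the user to many services, so each service no longer stores its own credentials), consider the toy scheme below. Real deployments use SAML or similar standards; the shared key and assertion fields here are invented.

```python
import hashlib
import hmac
import json
import time

# Toy shared secret between the identity provider and the services
# (illustrative only; SAML uses XML signatures, not a shared HMAC key).
IDP_KEY = b"shared-secret-known-to-idp-and-services"

def issue_assertion(user_id):
    """Identity provider: sign a statement that this user authenticated."""
    payload = json.dumps({"sub": user_id, "iat": time.time()}).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return payload, sig

def verify_assertion(payload, sig):
    """Any relying service: check the signature instead of a password."""
    expected = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

payload, sig = issue_assertion("alice")  # user logs in once at the IdP
print(verify_assertion(payload, sig))    # every service can accept the login
```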

Relevance:

20.00%

Publisher:

Abstract:

CoCo is a collaborative web interface for the compilation of linguistic resources. In this demo we present one of its possible applications: paraphrase acquisition.

Relevance:

20.00%

Publisher:

Abstract:

Background: Most of the proteins in the Protein Data Bank (PDB) are oligomeric complexes consisting of two or more subunits that associate by rotational or helical symmetries. Despite the myriad of superimposition tools in the literature, we could not find any able to account for rotational symmetry and display the graphical results in the web browser.

Results: BioSuper is a free web server that superimposes and calculates the root mean square deviation (RMSD) of protein complexes displaying rotational symmetry. To the best of our knowledge, BioSuper is the first tool of its kind that provides immediate interactive visualization of the graphical results in the browser, biomolecule generator capabilities, different levels of atom selection, and sequence-dependent and structure-based superimposition types, and it is the only web tool that takes into account the equivalence of atoms in side chains displaying symmetry ambiguity. BioSuper uses ICM program functionality as a core for the superimpositions and displays the results as text, HTML tables and 3D interactive molecular objects that can be visualized in the browser or on Android and iOS platforms with a free plugin.

Conclusions: BioSuper is a fast and functional tool that allows for pairwise superimposition of proteins and assemblies displaying rotational symmetry. The web server was created after our own frustration when attempting to superimpose flexible oligomers. We strongly believe that its user-friendly and functional design will be of great interest for structural and computational biologists who need to superimpose oligomeric proteins (or any protein). The BioSuper web server is freely available to all users at http://ablab.ucsd.edu/BioSuper.
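BioSuper's own superpositions are built on ICM and are not reproduced here; purely as an illustration of the core idea, the toy sketch below (Python/NumPy, with made-up coordinates and an assumed cyclic symmetry) minimizes the RMSD over cyclic relabelings of the subunits, since the chain correspondence in a rotationally symmetric complex is ambiguous.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Least-squares RMSD of two (N, 3) point sets after optimal rotation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P))

def symmetric_rmsd(chains_a, chains_b):
    """Minimum RMSD over cyclic relabelings of n equivalent subunits."""
    n = len(chains_a)
    return min(
        kabsch_rmsd(np.vstack(chains_a),
                    np.vstack([chains_b[(i + k) % n] for i in range(n)]))
        for k in range(n))

# Toy C3 trimer: three copies of one subunit, 120 degrees apart about z.
rng = np.random.default_rng(1)
subunit = rng.normal(size=(10, 3))
def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])
trimer = [subunit @ rot_z(2 * np.pi * i / 3).T for i in range(3)]

# The same assembly with its chains listed in a rotated order still
# superimposes exactly once the relabeling ambiguity is handled.
print(round(symmetric_rmsd(trimer, trimer[1:] + trimer[:1]), 6))  # 0.0
```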

Relevance:

20.00%

Publisher:

Abstract:

Despite the variety of available Web services registries especially aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is quite flexible and supports common types of Web services, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. WSDL 2.0 descriptions have since gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web service descriptions along with the traditional WSDL based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or via SPARQL (SPARQL Protocol and RDF Query Language). The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
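As a rough illustration of the programmatic access described above, the sketch below issues a SPARQL query over HTTP. The endpoint path and the WSDL-to-RDF vocabulary used in the query are assumptions; the registry's documentation defines the actual terms.

```python
import requests

ENDPOINT = "http://inb.bsc.es/BioSWR/sparql"  # assumed endpoint path

# List some registered services, assuming the W3C WSDL-to-RDF vocabulary.
QUERY = """
PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
SELECT ?service WHERE { ?service a wsdl:Service } LIMIT 10
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY},
                    headers={"Accept": "application/sparql-results+json"})
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["service"]["value"])
```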

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To integrate the Radiology Information System (RIS) and the Picture Archiving and Communication System (PACS) at the Radiology Service of the Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto da Universidade de São Paulo, enabling remote consultation of reports and their associated images. MATERIALS AND METHODS: The implemented RIS/PACS integration is performed in real time, at query time, using web technologies and intranet/internet programming techniques. RESULTS: The web application allows reports and associated images to be queried over the hospital intranet by patient first name, surname or hospital registration number, or by modality within a given period. The viewer lets the user browse the images and perform basic functions such as zoom, brightness and contrast adjustment, and side-by-side display. CONCLUSION: The RIS/PACS integration reduces the risk of inconsistencies by cutting the number of interfaces between databases with highly redundant information, providing a fast and secure working environment for consulting radiology reports and viewing the associated images.
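As a toy illustration of the integration strategy described (joining RIS report data with PACS image references at query time rather than duplicating data between systems), consider the sketch below; all field names, identifiers and records are invented.

```python
# Toy in-memory stand-ins for the two systems; all names/IDs are invented.
RIS_REPORTS = {
    "HC-1001": {"patient": "Silva, J.", "modality": "CR",
                "report": "No acute findings."},
}
PACS_IMAGES = {
    "HC-1001": ["/pacs/HC-1001/img001.dcm", "/pacs/HC-1001/img002.dcm"],
}

def lookup(registration_number):
    """Resolve a study on demand, joining report and image references."""
    report = RIS_REPORTS.get(registration_number)
    if report is None:
        return None
    return {**report, "images": PACS_IMAGES.get(registration_number, [])}

print(lookup("HC-1001"))
```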

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this project is to build a web application on a content management system (Joomla!) for managing a company's customer relations.

Relevance:

20.00%

Publisher:

Abstract:

UOC

Relevance:

20.00%

Publisher:

Abstract:

UOC

Relevance:

20.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. Findings from these surveys may therefore be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Owing to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. Such assumptions do not hold, however, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
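As a minimal sketch of the basic task the thesis automates, programmatically filling out a web search form and harvesting the result page, consider the following; the URL and form field names are hypothetical placeholders, and link extraction stands in for full record extraction.

```python
import requests
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect hrefs from a result page, standing in for record extraction."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

form_action = "http://example.org/search"         # hypothetical form action URL
fields = {"q": "used cars", "max_price": "5000"}  # hypothetical form fields

# Submit the form as a crawler would, then parse the dynamic result page.
resp = requests.get(form_action, params=fields, timeout=10)
parser = LinkExtractor()
parser.feed(resp.text)
print(parser.links[:10])  # the result records would be extracted here
```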