900 results for web application, semantic web, semantic publishing, angularJS, user experience, usability
Abstract:
Presentation of the project to design and develop the new website of the Virtual Library, given at the UOC Technical Conference (Jornada Tècnica).
Abstract:
Report on the analysis of the new website of the UOC Virtual Library, carried out by means of user testing in order to evaluate the degree of usability of the new tool. The analysis is part of the user-centred design process that was followed in its design.
Abstract:
Report on the analysis of the new website of the UOC Virtual Library, carried out to evaluate the degree of usability of the new tool. This is the second round of user testing of the new website, and the following aspects were analysed: access to the digital collection, the five most used features of the site, and the five least visible features of the site.
Abstract:
This work evaluates the usability of the Renfe website, comparing it with sites of similar characteristics, particularly in France and Germany. The analysis is based on a user test with participants matching the profile of the site's potential users.
Abstract:
TeliaSonera's smart messaging system development prototype (SME) is used to pilot prototype services that let customers exchange messages using mobile phones and computers. The basic SME services can be accessed with SIP-compliant client applications as well as with SME's own WAP and WWW user interfaces. Users can see each other's presence information, change their own presence status, and send SIP instant messages, e-mail messages and text messages (SMS). Users can also maintain a contact list, receive instant messages and browse received messages. The thesis gives a general overview of the SME system architecture and focuses on the implementation of the SME WWW client developed in this work. It reviews the standards, recommendations, implementation technologies and services related to the project. In addition, it examines the programming interfaces used in the work, as well as current smartphones and their Internet browsers, which constrain the implementation technology choices available for the WWW client service. Finally, the internal structure and operation of the implemented software is presented.
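As a hedged illustration of the SIP instant messaging mentioned above (not TeliaSonera's SME implementation), the minimal sketch below builds a SIP MESSAGE request (RFC 3428) and sends it over UDP; the addresses, tag, Call-ID and proxy host are hypothetical placeholders.

```python
# A minimal sketch (not TeliaSonera's SME implementation) of a SIP MESSAGE
# request for instant messaging (RFC 3428), sent over UDP.
# Addresses, tag, Call-ID and proxy host are hypothetical placeholders.
import socket

body = "Hello from the sketch!"
request = (
    "MESSAGE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bKsketch1\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:alice@example.com>;tag=12345\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: sketch-call-id-1@192.0.2.10\r\n"
    "CSeq: 1 MESSAGE\r\n"
    "Content-Type: text/plain\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    f"{body}"
)

# Send the request to the (hypothetical) SIP proxy of the recipient's domain.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(request.encode("utf-8"), ("sip.example.com", 5060))
```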
Abstract:
This thesis studied effective knowledge management in the research and development network of a global forest industry company. The goal was to describe how research and development content can be managed with the knowledge management software used by the target company. First, the concepts of knowledge and knowledge management were clarified on the basis of the literature. Based on this review, a process model was proposed for managing knowledge effectively in a company. Next, the requirements that knowledge management places on information technology, and the role of information technology in the process model, were analysed. The network's requirements for knowledge management were identified by interviewing key people in the company. Based on the interviews, the system had to effectively support the work of virtual project teams, enable knowledge sharing between mills, and support the management of content entered into the system. First, the structure and access controls of the system's user interface were adapted to the needs of the network. The structure provides a workspace for project teams and areas for knowledge sharing between mills. For content management, a category scheme, a profiled portal and predefined searches were developed in the system. The developed model makes project team work more efficient, enables existing knowledge to be exploited at mill level, and makes it easier to follow research and development activities. As further measures, it is proposed that the system be integrated with the mills' operational control systems and that the software be adopted as a mill-level project management tool. The aim of these proposals is to ensure both effective knowledge sharing between mills and effective knowledge management at mill level.
Abstract:
Web portals offer unique tools for creating different kinds of content, a variety of navigation paths, personalized pages and security services. A portal is a complex system containing many cooperating components, usually implemented with off-the-shelf software. This study deals with a portal implementation based on IBM/Tivoli products. The integration of the portal components is critical to the overall system architecture and may require additional software development. The primary goal of the study is to develop a custom component for two portal subsystems: the subscriber service and the security service. The study examines Tivoli Personalized Services Manager (TPSM) and Tivoli SecureWay Policy Director (PD). The integration involves synchronizing data between the TPSM database and the PD User Registry. The integration software was designed and implemented on the basis of the existing subsystems.
Abstract:
Taking maximum advantage of technological innovations, and of the investment in them, is of key importance for businesses. The IT industry offers a wide range of innovative high-technology solutions for managing information processing and distribution. However, for end-user businesses, making informed decisions in this area is challenging. The aim of this research is to identify the key differences between the principal solutions and what the selection criteria should be for those involved. Existing methodologies for software development are classified, and key criteria are described to help IT system developers and users determine the most important factors in system selection, development and deployment. Statistical data are collected and analysed, a theoretical basis is developed and reviewed, and key issues from case studies are identified, generalized and presented along with the conclusions of the study. The results give a good basis for corporate consideration and provide overall support for the key decisions in developing web-based software. The conclusion is that new web developments should be regarded by stakeholders as an evolution of existing business systems, but that stakeholders should pay particular attention to the new advantages that web-based software offers in terms of standardised interfaces and procedures, universal deployment opportunities, and a range of other benefits the study highlights.
Abstract:
In this article we present a project [1] developed to demonstrate the capability of Multi-Layer Perceptrons (MLPs) to approximate non-linear functions [2]. The simulation has been implemented in Java so that it can be used on any computer over the Internet [3], with simple operation and a pleasant interface. The strength of the simulation lies in letting the user watch the evolution of the approximation, the contribution of each neuron, and the effect of the different parameters. In addition, an online help has been implemented to guide the user during the simulation.
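As a hedged illustration of the capability the article demonstrates (not the article's Java applet), the minimal sketch below trains a one-hidden-layer MLP by plain gradient descent to approximate a non-linear function; the target function, network size and hyperparameters are illustrative assumptions.

```python
# A minimal sketch: a one-hidden-layer MLP trained by gradient descent
# to approximate a non-linear function (here sin(x)).
import numpy as np

rng = np.random.default_rng(0)

# Target non-linear function sampled on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# Network: 1 input -> H tanh units -> 1 linear output
H, lr = 20, 0.01
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

for epoch in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)          # hidden activations, shape (N, H)
    y_hat = h @ W2 + b2               # network output, shape (N, 1)
    err = y_hat - y                   # prediction error

    # Backward pass (mean squared error loss)
    grad_out = 2 * err / len(X)                # dL/dy_hat
    dW2 = h.T @ grad_out; db2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)    # backprop through tanh
    dW1 = X.T @ grad_h; db1 = grad_h.sum(0)

    # Gradient descent step
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float(np.mean(err ** 2)))
```

With these settings the mean squared error drops steadily, which is the behaviour the applet lets users observe interactively.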
Abstract:
The number of Internet services grows constantly. A person usually has a separate digital identity in every service they use. Storing authentication credentials securely becomes increasingly difficult as a new set accumulates with every service registration. This thesis examines the problem and its solutions from both a service-oriented and a technical perspective. The business concept of service-oriented identity management and its implementation technologies, such as single sign-on (SSO) and the Security Assertion Markup Language (SAML), are reviewed through high-level examples, together with the concept and technical details of the solution produced in the Nokia Account project. Finally, the implementation of the first version of the Nokia Account service is analysed against the design principles and requirements of identity management services.
BioSuper: A web tool for the superimposition of biomolecules and assemblies with rotational symmetry
Abstract:
Background: Most of the proteins in the Protein Data Bank (PDB) are oligomeric complexes consisting of two or more subunits that associate by rotational or helical symmetries. Despite the myriad of superimposition tools in the literature, we could not find any able to account for rotational symmetry and display the graphical results in the web browser. Results: BioSuper is a free web server that superimposes and calculates the root mean square deviation (RMSD) of protein complexes displaying rotational symmetry. To the best of our knowledge, BioSuper is the first tool of its kind that provides immediate interactive visualization of the graphical results in the browser, biomolecule generator capabilities, different levels of atom selection, and sequence-dependent and structure-based superimposition types, and it is the only web tool that takes into account the equivalence of atoms in side chains displaying symmetry ambiguity. BioSuper uses ICM program functionality as a core for the superimpositions and displays the results as text, HTML tables and 3D interactive molecular objects that can be visualized in the browser or on Android and iOS platforms with a free plugin. Conclusions: BioSuper is a fast and functional tool that allows pairwise superimposition of proteins and assemblies displaying rotational symmetry. The web server was created out of our own frustration when attempting to superimpose flexible oligomers. We strongly believe that its user-friendly and functional design will be of great interest to structural and computational biologists who need to superimpose oligomeric proteins (or any protein). The BioSuper web server is freely available to all users at http://ablab.ucsd.edu/BioSuper.
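For illustration only, and independent of BioSuper's ICM-based engine, the sketch below shows one common way to handle the chain-labelling ambiguity of a C_n assembly: compute the Kabsch RMSD for every cyclic re-labelling of the subunits and keep the minimum. It does not address side-chain symmetry ambiguity, which BioSuper also handles.

```python
# A minimal sketch (not BioSuper's implementation): superimpose two
# homo-oligomers with C_n symmetry by trying every cyclic re-labelling
# of the subunits and keeping the lowest Kabsch RMSD.
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-rotation RMSD between two (N, 3) coordinate arrays."""
    P = P - P.mean(0)
    Q = Q - Q.mean(0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation mapping P onto Q
    return float(np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1))))

def symmetric_rmsd(chains_a, chains_b):
    """chains_*: lists of (M, 3) arrays, one per subunit, same atom order.

    Returns the minimum RMSD over cyclic permutations of the second
    oligomer, accounting for arbitrary chain labelling in a C_n assembly."""
    best = np.inf
    for shift in range(len(chains_b)):
        permuted = chains_b[shift:] + chains_b[:shift]
        best = min(best, kabsch_rmsd(np.vstack(chains_a), np.vstack(permuted)))
    return best
```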
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. Findings from these surveys may therefore be biased, especially given the steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build, and make publicly available, a dataset describing more than 200 web databases from that national segment of the Web.
Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Thus, automating the querying of search interfaces and the retrieval of data behind them is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
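As a hedged sketch of the form-querying step described above (not the thesis's I-Crawler or its form query language), the snippet below submits a single-field HTML search form and scrapes the result rows; the URL, field name and CSS selectors are hypothetical placeholders.

```python
# A minimal sketch of querying a web database behind a search form:
# fill one field, submit the form, and scrape the result rows.
# The URL, field name and CSS selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SEARCH_PAGE = "https://example.org/search"   # hypothetical search interface

def query_web_database(term):
    # Fetch the page that hosts the search form and locate the form element.
    page = requests.get(SEARCH_PAGE, timeout=10)
    form = BeautifulSoup(page.text, "html.parser").find("form")

    # Resolve where the form posts to; fall back to the search page itself.
    action = form.get("action") or SEARCH_PAGE
    target = requests.compat.urljoin(SEARCH_PAGE, action)

    # Submit the query term in the (assumed) text input named "q".
    result_page = requests.post(target, data={"q": term}, timeout=10)

    # Extract one dictionary per result row; selectors are assumptions.
    soup = BeautifulSoup(result_page.text, "html.parser")
    rows = []
    for item in soup.select("div.result"):
        rows.append({
            "title": item.select_one("a").get_text(strip=True),
            "link": item.select_one("a")["href"],
        })
    return rows
```

Note that this naive approach handles only plain HTML forms; JavaScript-rich and non-HTML forms, which the I-Crawler is designed to recognize, would require a scriptable browser.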
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows the user to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
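As a hedged, simplified illustration of simulating a pair of dependent positions (not the Coev-web implementation), the sketch below runs a continuous-time Markov chain over the 16 dinucleotide states, accelerating substitutions toward an assumed coevolving profile; the profile and rate values are illustrative assumptions.

```python
# A minimal sketch (not the Coev model's actual rate matrix): simulate one
# pair of coevolving nucleotide positions as a continuous-time Markov chain
# over the 16 dinucleotide states. Substitutions that create a pair from the
# assumed "coevolving profile" get rate S > 1; other single-nucleotide
# changes get rate 1.
import itertools
import numpy as np

NUC = "ACGT"
STATES = ["".join(p) for p in itertools.product(NUC, repeat=2)]  # 16 pairs
PROFILE = {"AT", "GC"}        # assumed set of favoured coevolving pairs
S = 5.0                       # assumed rate multiplier toward the profile

def rate(a, b):
    """Instantaneous rate of changing pair a -> pair b (one position at a time)."""
    diff = sum(x != y for x, y in zip(a, b))
    if diff != 1:
        return 0.0
    return S if b in PROFILE else 1.0

def simulate_pair(start="AA", t=1.0, rng=np.random.default_rng(0)):
    """Gillespie simulation of the pair state along a branch of length t."""
    state, clock = start, 0.0
    while True:
        rates = np.array([rate(state, s) for s in STATES])
        total = rates.sum()
        clock += rng.exponential(1.0 / total)   # waiting time to next substitution
        if clock > t:
            return state
        state = STATES[rng.choice(len(STATES), p=rates / total)]

print(simulate_pair())
```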
Abstract:
Occupational hygiene practitioners typically assess the risk posed by occupational exposure by comparing exposure measurements to regulatory occupational exposure limits (OELs). In most jurisdictions, OELs are only available for exposure by the inhalation pathway. Skin notations are used to indicate substances for which dermal exposure may lead to health effects. However, these notations are either present or absent and provide no indication of acceptable levels of exposure. Furthermore, the methodology and framework for assigning skin notations differ widely across jurisdictions, resulting in inconsistencies in the substances that carry notations. The UPERCUT tool was developed in response to these limitations. It helps occupational health stakeholders assess the hazard associated with dermal exposure to chemicals. UPERCUT integrates dermal quantitative structure-activity relationships (QSARs) and toxicological data to provide users with a skin hazard index called the dermal hazard ratio (DHR) for the substance and scenario of interest. The DHR is the ratio between the estimated 'received' dose and the 'acceptable' dose. The 'received' dose is estimated using physico-chemical data and information on the exposure scenario provided by the user (exposed body parts and exposure duration), and the 'acceptable' dose is estimated using inhalation OELs and toxicological data. The uncertainty surrounding the DHR is estimated with Monte Carlo simulation. Additional information on the selected substances includes the intrinsic skin permeation potential of the substance and the existence of skin notations. UPERCUT is the only available tool that estimates the absorbed dose and compares it to an acceptable dose. In the absence of dermal OELs, it provides a systematic and simple approach for screening dermal exposure scenarios for 1686 substances.
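As a hedged, simplified illustration of the DHR concept (not UPERCUT's actual model), the sketch below propagates uncertainty in an assumed skin permeation coefficient through a Monte Carlo simulation and reports percentiles of the ratio between the 'received' and 'acceptable' doses; every parameter value and distribution is an illustrative assumption.

```python
# A simplified sketch of the dermal hazard ratio (DHR) idea:
# DHR = received dose / acceptable dose, with Monte Carlo propagation of the
# uncertainty in the skin permeation coefficient. Distributions, exposure
# scenario and acceptable-dose derivation are illustrative assumptions, not
# UPERCUT's actual model.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                                     # Monte Carlo samples

# Assumed exposure scenario
conc_mg_per_cm3 = 50.0                          # liquid concentration on the skin
area_cm2 = 840.0                                # exposed surface (two hands, assumed)
duration_h = 2.0                                # exposure duration

# Assumed uncertain permeation coefficient Kp (cm/h), lognormal around 1e-3
kp = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=N)

# 'Received' dermal dose (mg) for each sampled Kp
received_mg = kp * conc_mg_per_cm3 * area_cm2 * duration_h

# 'Acceptable' dose (mg), derived here for illustration only from an
# inhalation OEL of 10 mg/m3 and 10 m3 of air inhaled per working day
acceptable_mg = 10.0 * 10.0

dhr = received_mg / acceptable_mg
print(f"median DHR: {np.median(dhr):.2f}, "
      f"95th percentile: {np.percentile(dhr, 95):.2f}")
```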
The personal research portal: web 2.0 driven individual commitment with open access for development
Abstract:
Peer-reviewed