906 results for Web content adaptation


Relevance:

90.00%

Publisher:

Abstract:

This final-year project aims to build a keyword-based Web content filter. The work consists exclusively of creating an application that operates on the HTTP protocol, inspecting the data that flows through port 80.
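
As an illustration of the keyword-matching idea behind such a filter, the minimal Python sketch below flags a fetched page when it contains a blocked term; the keyword list and function names are invented for the example and are not the project's actual implementation, which operates as an HTTP filter on port 80.

```python
# Minimal sketch of keyword-based content filtering (illustrative only).
import re

BLOCKED_KEYWORDS = ["casino", "violence"]  # hypothetical keyword list

def is_blocked(page_text: str, keywords=BLOCKED_KEYWORDS) -> bool:
    """Return True if any blocked keyword appears as a whole word in the page text."""
    lowered = page_text.lower()
    return any(re.search(r"\b" + re.escape(k.lower()) + r"\b", lowered) for k in keywords)

if __name__ == "__main__":
    sample = "<html><body>Online casino bonuses!</body></html>"
    print(is_blocked(sample))  # True: this page would be filtered
```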

Relevance:

90.00%

Publisher:

Abstract:

The aim of this work is to select an open-source content management system and design the website of a UOC degree programme with the chosen software. To that end, the software's web content management capabilities are assessed: creation of new content, expansion, access permissions, workflow, and so on.

Relevance:

90.00%

Publisher:

Abstract:

These guidelines explain how to make web content accessible to people with disabilities. They are addressed to content creators (web page authors and site designers) and to developers of authoring tools. The main goal of these guidelines is to promote accessibility. Nevertheless, following them will also make content easier to access for all users, whatever user agent they use (web browser, voice browser, mobile phone, in-car computer, etc.) and whatever the conditions under which they consult it (noisy surroundings, poorly lit spaces, hands-free environments, etc.). Following these guidelines will also help users find information on the Web more quickly. The guidelines do not seek to discourage the use of images, video, and so on; rather, they explain how to make multimedia content more accessible to a wide audience.

This is a reference document for accessibility principles and design ideas. Some of the strategies discussed address aspects of Web internationalization and access from mobile terminals. However, the document focuses on accessibility and does not deal exhaustively with matters related to other W3C Activities. For more information on these topics, see the W3C Mobile Access Activity home page (for access from mobile terminals) and the W3C Internationalization Activity home page (for internationalization).

This document is intended to be stable over time and therefore does not give specific information about whether browsers support a given technology, since that information changes very quickly. Such information can be found on the website of the Web Accessibility Initiative (WAI) [WAI-UA-SUPPORT].

This document includes an appendix that organizes all the checkpoints by topic and by priority. The checkpoints in the appendix are linked to their definitions in the document. The topics covered in the appendix include images, multimedia content, tables, frames, forms, and scripts. The appendix is presented as a table or as a simple list.

A separate document, entitled Techniques for Web Content Accessibility Guidelines 1.0 ([TECHNIQUES]), explains how to put the checkpoints mentioned here into practice. The Techniques document explains each checkpoint in more detail and gives examples using the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), the Synchronized Multimedia Integration Language (SMIL), and the Mathematical Markup Language (MathML). It also includes techniques for testing or validating a web page and an index of HTML elements and attributes together with the techniques that use them. The Techniques document is intended to track technological change closely and is expected to be updated more often than the guidelines.

Note: some of the features described in the guidelines are not yet implemented in all browsers or multimedia tools; in particular, new features of HTML 4.0, CSS1, or CSS2 may not yet be usable.

The Web Content Accessibility Guidelines 1.0 are part of a series of accessibility guidelines published by the Web Accessibility Initiative (WAI). The series also comprises the User Agent Accessibility Guidelines [WAI-USERAGENT] and the Authoring Tool Accessibility Guidelines [WAI-AUTOOLS].
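
One of the checkpoints covered by the guidelines (text equivalents for images) can be illustrated with a small validation sketch of the kind the Techniques document describes for testing a page. The following Python snippet is only an assumption-laden illustration, not code from the Techniques document: it lists img elements that lack a non-empty alt attribute.

```python
# Illustrative check for one checkpoint: images must have text equivalents.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            alt = attributes.get("alt")
            if not alt or not alt.strip():
                self.missing_alt.append(attributes.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.png"><img src="chart.png" alt="Sales chart"></p>')
print(checker.missing_alt)  # ['logo.png'] -> fails the checkpoint
```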

Relevance:

90.00%

Publisher:

Abstract:

A study of the degree of accessibility of Spanish universities' corporate websites, based on compliance with the Web Content Accessibility Guidelines (WCAG) and other indicators.

Relevance:

90.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English, so their findings may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. Such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all previous approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
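
The following Python sketch illustrates the basic scenario the thesis addresses: issuing a query through a web search form and retrieving the dynamic result page. The data model, endpoint URL, and field names are hypothetical; this is not the I-Crawler or the thesis' form query language.

```python
# Sketch of querying a web database through its search interface (illustrative).
from dataclasses import dataclass, field
from urllib.parse import urlencode
from urllib.request import urlopen

@dataclass
class SearchInterface:
    """Minimal data model for a search interface: an action URL plus labelled fields."""
    action_url: str
    fields: dict = field(default_factory=dict)  # field name -> human-readable label

    def query(self, **values) -> str:
        """Fill the form fields with values and fetch the dynamic result page."""
        url = self.action_url + "?" + urlencode(values)
        with urlopen(url) as resp:  # GET-based form submission
            return resp.read().decode("utf-8", errors="replace")

# Usage (hypothetical book-search database):
# iface = SearchInterface("http://example.org/search", {"q": "Title keywords"})
# html = iface.query(q="deep web")
```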

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this thesis is to study, investigate, and compare the usability of open source CMSs. The thesis examines and compares the usability of several open source content management systems. The research is divided into two complementary parts: a theoretical part and an analytical part. The theoretical part mainly describes open source web content management systems, usability, and the evaluation methods. The analytical part compares and analyzes the results of the empirical research. The heuristic evaluation method was used to identify usability problems in the interfaces. The study is fairly limited in scope; six tasks were designed and carried out in each interface to discover defects. Usability problems were rated according to their level of severity. The time taken for each task, the severity of each problem, and the type of heuristic violated were recorded, analyzed, and compared. The results of this study indicate that the compared systems provide usable interfaces, with WordPress recognized as the most usable system.
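
A possible way to record and aggregate such findings is sketched below in Python; the systems, tasks, severity scores, and times are invented placeholders, not the study's data.

```python
# Sketch of aggregating heuristic-evaluation findings per CMS (illustrative).
from collections import defaultdict
from statistics import mean

# (system, task, violated_heuristic, severity 0-4, task_time_seconds)
findings = [
    ("WordPress", "create page",  "consistency",            1,  95),
    ("OtherCMS",  "create page",  "visibility of status",   3, 160),
    ("OtherCMS",  "upload image", "error prevention",       2, 120),
]

by_system = defaultdict(list)
for system, _task, _heuristic, severity, seconds in findings:
    by_system[system].append((severity, seconds))

for system, rows in by_system.items():
    avg_severity = mean(r[0] for r in rows)
    avg_time = mean(r[1] for r in rows)
    print(f"{system}: mean severity {avg_severity:.1f}, mean task time {avg_time:.0f}s")
```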

Relevance:

90.00%

Publisher:

Abstract:

Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. In particular, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotic concepts (semiotics being the study of signs) to web interface signs can uncover new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses semiotic theory as its background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability?

From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out, the latter with a total of 74 participants in Finland. The steps of a design science research process are followed in designing and conducting the studies: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data are collected through observation in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis, and statistics are used to analyze the study data.

The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills a user needs in order to interpret the meaning of interface signs), providing a set of ontologies used to interpret the meaning of interface signs and a set of features related to ontology mapping in interpreting their meaning. Thirdly, the thesis explores the value of integrating semiotic concepts into usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation, intended to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.) and (b) the contributions of the SIDE framework from the evaluators' perspective.
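
For the performance metrics mentioned in point (a), a small sketch using the common usability-evaluation definitions of thoroughness, validity, and effectiveness is given below; the SIDE framework's exact formulations and the numbers used here are assumptions for illustration only.

```python
# Common performance metrics for evaluating an inspection method (illustrative).
def thoroughness(real_found: int, real_existing: int) -> float:
    """Share of real problems in the interface that the method detected."""
    return real_found / real_existing if real_existing else 0.0

def validity(real_found: int, total_reported: int) -> float:
    """Share of reported issues that are real problems."""
    return real_found / total_reported if total_reported else 0.0

def effectiveness(real_found: int, real_existing: int, total_reported: int) -> float:
    """Product of thoroughness and validity."""
    return thoroughness(real_found, real_existing) * validity(real_found, total_reported)

# Hypothetical numbers: 18 real sign-related problems found out of 24 existing,
# 22 issues reported in total.
print(effectiveness(18, 24, 22))
```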

Relevance:

90.00%

Publisher:

Abstract:

Search engines exploit the Web's hyperlink structure to help infer information content. The new phenomenon of personal Web logs, or 'blogs', encourages more extensive annotation of Web content. If their resulting link structures bias the Web crawling applications that search engines depend upon, there are implications for another form of annotation rapidly on the rise, the Semantic Web. We conducted a Web crawl of 160,000 pages in which the link structure of the Web is compared with that of several thousand blogs. Results show that the two link structures are significantly different. We analyse the differences and infer the likely effect upon the performance of existing and future Web agents. The Semantic Web offers new opportunities to navigate the Web, but Web agents should be designed to take advantage of the emerging link structures, or their effectiveness will diminish.
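
One simple way to compare two link structures is by their out-degree distributions, as the toy Python sketch below shows; the graphs and URLs are invented and bear no relation to the study's 160,000-page crawl data.

```python
# Toy comparison of two link structures via out-degree distributions (illustrative).
from collections import Counter
from statistics import mean

web_links = {"a.html": ["b.html", "c.html"], "b.html": ["c.html"], "c.html": []}
blog_links = {"post1": ["post2", "a.html", "news", "feed"], "post2": ["post1"]}

def out_degree_stats(graph: dict):
    """Return (mean out-degree, out-degree histogram) for a page -> links mapping."""
    degrees = [len(targets) for targets in graph.values()]
    return mean(degrees), Counter(degrees)

print("web :", out_degree_stats(web_links))
print("blog:", out_degree_stats(blog_links))
```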

Relevance:

90.00%

Publisher:

Abstract:

This work takes an organizational and content perspective on the Enterprise Content Management (ECM) framework. A case study at the Federal University of Rio Grande do Norte (UFRN) was based on the ECM model to analyse the information management provided by its three main administrative systems: the Integrated Management of Academic Activities system (SIGAA), the Integrated System of Assets, Administration and Contracts (SIPAC), and the Integrated System for Administration and Human Resources (SIGRH). A case study protocol was designed to give the research process greater reliability. Four propositions were examined in order to reach the specific objectives of identifying and evaluating ECM components from the UFRN perspective. The preliminary phase provided the guidelines for data collection. In total, 75 individuals were interviewed. Interviews with four managers directly involved in the systems' design were recorded (average duration of 90 minutes). The remaining 70 individuals, including teachers, administrative-technical employees, and students, were approached at random in UFRN's units. The results showed the presence of many ECM elements in the management of UFRN's administrative information. The technological component with the strongest presence was web content management and collaboration, but initiatives related to other components (e.g., email and document management) were also found and are being continuously improved. The assessment used eQual 4.0 to examine the effectiveness of the applications under three factors: usability, quality of information, and service offered. In general, the quality offered by the systems was very good and went hand in hand with the benefits obtained from adopting an ECM strategy across the whole institution.

Relevance:

90.00%

Publisher:

Abstract:

In the context of Software Engineering, web accessibility is gaining ground, establishing itself as an important quality attribute. This is due to initiatives of institutions such as the W3C (World Wide Web Consortium) and to the introduction of norms and laws such as Section 508 that underline the importance of developing accessible web sites and applications. Despite these advances, the lack of web accessibility is still a persistent problem, and it can be related to the moment or phase in which this requirement is addressed within the development process: web accessibility is generally regarded as a programming problem, or is only dealt with once the application has already been fully developed. Considering accessibility as early as the analysis and requirements specification activities is therefore a strategy to facilitate project progress, avoiding rework in later phases of software development caused by errors or omissions in the elicitation. The objective of this research is to develop a method and a tool to support the elicitation of web accessibility requirements. The method's elicitation strategy is grounded in the goal-oriented NFR Framework and in the use of NFR catalogues created from the guidelines contained in WCAG 2.0 (Web Content Accessibility Guidelines) proposed by the W3C.
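
To illustrate what an NFR catalogue entry derived from WCAG 2.0 might look like, the Python sketch below models a softgoal with traceability back to a success criterion; the structure and field names are illustrative assumptions, not the thesis' actual catalogue format.

```python
# Sketch of an NFR-style catalogue entry traced to a WCAG 2.0 criterion (illustrative).
from dataclasses import dataclass, field

@dataclass
class Softgoal:
    name: str                       # e.g. "Accessibility [non-text content]"
    wcag_criterion: str             # traceability back to WCAG 2.0
    operationalizations: list = field(default_factory=list)  # concrete design decisions

catalog = [
    Softgoal(
        name="Accessibility [non-text content]",
        wcag_criterion="WCAG 2.0 - 1.1.1 Non-text Content",
        operationalizations=[
            "Provide text alternatives for images",
            "Label form controls programmatically",
        ],
    ),
]

for sg in catalog:
    print(sg.name, "->", sg.wcag_criterion)
```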

Relevance:

90.00%

Publisher:

Abstract:

Web content hosting, in which a web server stores and provides Web access to documents for different customers, is becoming increasingly common. For example, a web server can host web pages for several different companies and individuals. Traditionally, Web Service Providers (WSPs) give all customers the same level of performance (best-effort service); most service differentiation has been in the pricing structure (individual vs. business rates) or the connectivity type (dial-up access vs. leased line, etc.). This report presents DiffServer, a program that implements two simple, server-side, application-level mechanisms (server-centric and client-centric) to provide different levels of web service. The experiments show that this additional layer of abstraction between the client and the Apache web server adds little overhead under light load conditions. They also show that the average waiting time for high-priority requests decreases significantly once priorities are assigned, compared to a FIFO approach.
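
The server-centric idea of serving high-priority requests ahead of best-effort ones, instead of strict FIFO, can be sketched with a simple priority queue, as below; this is an illustrative Python sketch, not DiffServer's code.

```python
# Priority scheduling of incoming requests instead of strict FIFO (illustrative).
import heapq
import itertools

counter = itertools.count()  # tie-breaker preserves FIFO order within one priority level
queue = []

def enqueue(priority: int, request: str):
    """Lower number = higher priority (e.g. 0 = premium customer, 1 = best effort)."""
    heapq.heappush(queue, (priority, next(counter), request))

def dequeue() -> str:
    """Return the highest-priority pending request."""
    return heapq.heappop(queue)[2]

enqueue(1, "GET /customerB/page.html")
enqueue(0, "GET /customerA/index.html")
print(dequeue())  # premium request is served first despite arriving later
```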

Relevance:

90.00%

Publisher:

Abstract:

Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In this paper, we propose, in addition, a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural-language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for extracting fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
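
The bottom-up flavour of the approach can be hinted at with a toy Python sketch that derives graded (fuzzy) term associations from raw text via normalised co-occurrence; this is only an illustration under simplifying assumptions, not the paper's semantics extraction algorithm or its inductive fuzzy classification.

```python
# Toy derivation of graded term associations from raw text (illustrative).
from collections import Counter
from itertools import combinations

docs = [
    "web ontology describes web content",
    "fuzzy ontology learned from web content",
    "natural language content on the web",
]

term_freq, pair_freq = Counter(), Counter()
for doc in docs:
    terms = set(doc.split())
    term_freq.update(terms)
    pair_freq.update(frozenset(pair) for pair in combinations(sorted(terms), 2))

def membership(a: str, b: str) -> float:
    """Degree (0..1) to which term a is associated with term b."""
    return pair_freq[frozenset((a, b))] / min(term_freq[a], term_freq[b])

print(membership("ontology", "web"))  # a graded association, not a crisp is-a link
```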

Relevance:

90.00%

Publisher:

Abstract:

In his influential article about the evolution of the Web, Berners-Lee [1] envisions a Semantic Web in which humans and computers alike are capable of understanding and processing information. This vision has yet to materialize. The main obstacle for the Semantic Web vision is that in today's Web meaning is most often rooted not in formal semantics but in natural language and, in the sense of semiology, emerges only through interpretation and processing. Yet an automated form of interpretation and processing can be achieved by precisiating raw natural language. To do so, Web agents extract fuzzy grassroots ontologies through induction from existing Web content. Inductive fuzzy grassroots ontologies thus constitute organically evolved knowledge bases that resemble automated gradual thesauri, which allow natural language to be precisiated [2]. The Web agents' underlying dynamic, self-organizing, and best-effort induction enables a sub-syntactical, bottom-up learning of semiotic associations. Knowledge is thus induced from the users' natural use of language in mutual Web interactions and stored in a gradual, thesaurus-like lexical world-knowledge database serving as a top-level ontology, eventually allowing a form of computing with words [3]. Since, when computing with words, the objects of computation are words, phrases, and propositions drawn from natural languages, it proves to be a practical notion for yielding emergent semantics for the Semantic Web. In the end, an improved understanding on the computers' side should both upgrade human-computer interaction on the Web and allow an initial form of human-intelligence amplification through the Web.

Relevance:

90.00%

Publisher:

Abstract:

For the main part, electronic government (or e-government for short) aims to put digital public services at the disposal of citizens, companies, and organizations. To that end, e-government comprises the application of Information and Communications Technology (ICT) to support government operations and provide better governmental services than are possible by traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, thereby transforming the entire spectrum of relationships of public bodies with their citizens, businesses, and other government agencies (Leitner, 2003). To implement this transformation, one of the most important points is to inform citizens, businesses, and other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by handling participants' data accordingly, services can even be personalized towards those participants. For instance, by creating meaningful user profiles as a kind of participant-tailored knowledge structure, a better-quality governmental service may be provided (i.e., individualized governmental services). To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may only be accurate to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the relationship between government and participants. The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, will be briefly introduced and discussed.
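
The "inform participants about Web content changes" idea can be sketched as a simple change detector that compares content fingerprints between fetches; the Python below is an illustrative assumption (the URL and notification hook are placeholders), not the Web KnowARR framework itself.

```python
# Sketch of detecting changes to a monitored web page by content fingerprint (illustrative).
import hashlib
from typing import Optional
from urllib.request import urlopen

def content_fingerprint(url: str) -> str:
    """Fetch the page and return a SHA-256 digest of its raw bytes."""
    with urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_change(url: str, last_seen: Optional[str]):
    """Return (changed?, current fingerprint) for a monitored page."""
    current = content_fingerprint(url)
    changed = last_seen is not None and current != last_seen
    return changed, current

# Usage (hypothetical monitored page):
# changed, fp = check_for_change("https://example.gov/announcements", stored_fp)
# if changed:
#     notify_subscribed_participants(fp)  # placeholder notification hook
```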