984 results for web app, matching domanda offerta


Relevance: 40.00%

Publisher:

Abstract:

LEGENDiary is a 2.0 geoportal that lets users share and explore the legends and tales characteristic of towns and cities in a dynamic, collaborative way. Legends are often part of the history and culture of our towns and cities, and LEGENDiary lets us discover them according to the territory in which they are set. The initiative promotes new ways of getting to know and exploring the territory in general, and the legends and tales of our towns and cities in particular, and allows users to interact and become protagonists of the experience. The project initially starts from a Spanish context and is tailored to anyone interested in and curious about sharing, discovering and exploring legends across the territory. LEGENDiary arises from the combination of Geographic Information Technologies and 2.0 technologies, adding the geographic component to the legends and tales of towns and cities in a collaborative setting. The initiative was launched on 16 November 2011 as a contest to mark the International Geographic Information Systems Day (GISDay). The application was built with two free, open-source JavaScript libraries (Leaflet and jQuery), which make it possible to create fast, lightweight interactive map applications for desktop and mobile web browsers; OpenStreetMap is used as the base map. LEGENDiary is an initiative of the GIS and Remote Sensing Service (SIGTE) of the University of Girona, in collaboration with the university's Department of Geography and Faculty of Tourism, and with the support of the Hotel Llegendes de Girona.
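
The abstract names Leaflet and OpenStreetMap as the building blocks of the map client. As a rough illustration only, the sketch below shows how such a legend map might be initialised with Leaflet over OpenStreetMap tiles; the element id, coordinates and sample legend are assumptions, not taken from the LEGENDiary code.

```typescript
// Minimal sketch, assuming Leaflet is installed (npm i leaflet) and an
// HTML element <div id="map"></div> exists on the page.
import * as L from "leaflet";

// Centre the map on Girona (approximate coordinates, chosen for illustration).
const map = L.map("map").setView([41.983, 2.824], 13);

// OpenStreetMap tiles as the base map, as described in the abstract.
L.tileLayer("https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png", {
  attribution: "&copy; OpenStreetMap contributors",
}).addTo(map);

// A hypothetical legend entry contributed by a user.
interface Legend {
  title: string;
  text: string;
  lat: number;
  lng: number;
}

const sample: Legend = {
  title: "The witch of the cathedral",
  text: "A short retelling of the legend goes here.",
  lat: 41.9871,
  lng: 2.8257,
};

// Each legend becomes a marker with a popup showing its story.
L.marker([sample.lat, sample.lng])
  .addTo(map)
  .bindPopup(`<strong>${sample.title}</strong><br>${sample.text}`);
```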

Relevance: 40.00%

Publisher:

Abstract:

Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
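
For readers unfamiliar with the protocol, a WMS GetMap call is a plain HTTP request whose parameters describe the layer, bounding box and output image. The sketch below builds such a request in TypeScript; the endpoint URL and layer name are placeholders, not the service described in the paper.

```typescript
// Minimal sketch of a WMS 1.1.1 GetMap request. The endpoint and layer
// name are hypothetical placeholders.
const WMS_ENDPOINT = "https://example.appspot.com/wms";

interface GetMapOptions {
  layer: string;
  bbox: [number, number, number, number]; // minx, miny, maxx, maxy (EPSG:4326)
  width: number;
  height: number;
  format?: string;
}

function getMapUrl(opts: GetMapOptions): string {
  const params = new URLSearchParams({
    SERVICE: "WMS",
    VERSION: "1.1.1",
    REQUEST: "GetMap",
    LAYERS: opts.layer,
    STYLES: "",
    SRS: "EPSG:4326",
    BBOX: opts.bbox.join(","),
    WIDTH: String(opts.width),
    HEIGHT: String(opts.height),
    FORMAT: opts.format ?? "image/png",
  });
  return `${WMS_ENDPOINT}?${params.toString()}`;
}

// Example: request a 256x256 PNG covering a one-degree cell.
console.log(getMapUrl({ layer: "imagery", bbox: [-2, 51, -1, 52], width: 256, height: 256 }));
```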

Relevance: 40.00%

Publisher:

Abstract:

The purpose of this thesis is to evaluate the use of Web technologies for building applications for mobile devices, as an alternative to developing applications with native languages. Among the various existing mobile devices, the one of greatest interest for this thesis is the smartphone, the most recent of these devices, which compared with the others is characterised by greater complexity due to its more advanced features and capabilities. This thesis therefore analyses the applications used on this kind of device. The thesis is structured in several chapters so as to build a conceptual path towards its key topic, mobile applications based on Web technologies: the first two chapters cover the world of mobile devices, with particular attention to the smartphone, in order to give an overview of the context surrounding the topic. The final chapters go to the heart of the thesis, examining the key topic in detail, with a specific analysis of Tizen as a case study. In addition, to deepen these aspects, a small application was developed in order to experiment with the notions acquired during the course of the study.

Relevance: 40.00%

Publisher:

Abstract:

The advent of new technologies and new smartphone handsets has led to an ever wider range of mobile applications. The goal of this thesis is to describe the design and implementation of a mobile app for the web radio of the university students of Cesena: Uniradio Cesena.

Relevance: 40.00%

Publisher:

Abstract:

The app in question aims to answer the question "Where are you?". Thanks to the high frequency with which mobile devices are used nowadays, it was possible to conceive, design and build a piece of software that periodically tracks users so that their friends on the network can see their position. The service draws on knowledge acquired in Mobile Web Design and databases.
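
As a hedged illustration of the kind of periodic tracking the abstract describes, the sketch below uses the standard browser Geolocation API and posts each fix to a hypothetical backend endpoint; the URL, payload shape and interval are assumptions, not details of the actual app.

```typescript
// Minimal browser-side sketch: report the user's position every few minutes.
// The endpoint and payload are hypothetical.
const REPORT_URL = "https://example.org/api/position";
const INTERVAL_MS = 5 * 60 * 1000;

function reportPosition(): void {
  navigator.geolocation.getCurrentPosition(
    async (pos) => {
      await fetch(REPORT_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          lat: pos.coords.latitude,
          lng: pos.coords.longitude,
          at: new Date(pos.timestamp).toISOString(),
        }),
      });
    },
    (err) => console.warn("position unavailable:", err.message),
    { enableHighAccuracy: false, maximumAge: 60_000, timeout: 10_000 }
  );
}

// Periodic tracking; friends would read these positions back from the backend.
reportPosition();
setInterval(reportPosition, INTERVAL_MS);
```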

Relevance: 40.00%

Publisher:

Abstract:

Recently, the Semantic Web has experienced significant advancements in standards and techniques, as well as in the amount of semantic information available online. Nevertheless, mechanisms are still needed to automatically reconcile information when it is expressed in different natural languages on the Web of Data, in order to improve access to semantic information across language barriers. In this context several challenges arise [1], such as: (i) ontology translation/localization, (ii) cross-lingual ontology mappings, (iii) representation of multilingual lexical information, and (iv) cross-lingual access and querying of linked data. In the following we will focus on the second challenge, which is the necessity of establishing, representing and storing cross-lingual links among semantic information on the Web. In fact, in a "truly" multilingual Semantic Web, semantic data with lexical representations in one natural language would be mapped to equivalent or related information in other languages, thus making navigation across multilingual information possible for software agents.
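
Cross-lingual mappings of this kind are often expressed with generic identity or matching predicates such as owl:sameAs or skos:exactMatch; the authors may well use a richer representation, so the sketch below is only a simplified illustration with made-up resource URIs.

```typescript
// Simplified sketch: serialise a cross-lingual mapping between two resources
// as an N-Triples statement. The URIs are invented for illustration.
interface CrossLingualLink {
  source: string;     // resource described in one language
  target: string;     // equivalent resource in another language
  predicate: string;  // mapping relation
}

function toNTriples(link: CrossLingualLink): string {
  return `<${link.source}> <${link.predicate}> <${link.target}> .`;
}

const link: CrossLingualLink = {
  source: "http://example.org/en/resource/Cheese",
  target: "http://example.org/es/recurso/Queso",
  predicate: "http://www.w3.org/2004/02/skos/core#exactMatch",
};

console.log(toNTriples(link));
// <http://example.org/en/resource/Cheese> <http://www.w3.org/2004/02/skos/core#exactMatch> <http://example.org/es/recurso/Queso> .
```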

Relevance: 40.00%

Publisher:

Abstract:

The web is in a process of constant change, driven by ever greater user interaction. Out of the current wave of paradigms and technologies associated with web 2.0, a number of very useful standards have emerged that meet the needs of today's web development. Among them are web components, user-defined HTML tags that fulfil a specific function within a page. There is a need to measure the quality of such developments, in order to determine whether the concept of the web component represents a revolutionary change in web 2.0 development. This requires an exploitation of web components, understood as quality measurement based on metrics and the definition of a component interconnection model. The PicBit platform arises as a response to these questions. It is a social profile-building platform based on these elements. From the end user's perspective it is a tool for creating profiles and social communities, while from an academic perspective the platform is a testing environment, or sandbox, for web components. To this end, the server side of the platform must be implemented, focused on this exploitation work, by defining a REST interface of operations and a system for collecting user events on the platform. Thanks to this platform it will be possible to determine which parameters have a positive influence on the user experience of a component, as well as to discover the future potential of this kind of development.
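
Web components as described here are custom HTML tags registered through the standard Custom Elements API. The sketch below is a minimal, hypothetical example of such a component; the tag name and attribute are invented and are not part of PicBit.

```typescript
// Minimal custom element sketch using the standard Custom Elements API.
// The tag name and attribute are hypothetical.
class ProfileCard extends HTMLElement {
  connectedCallback(): void {
    // Render a small card from the element's attributes when it is attached.
    const name = this.getAttribute("name") ?? "anonymous";
    this.attachShadow({ mode: "open" }).innerHTML = `
      <style>div { border: 1px solid #ccc; padding: 0.5rem; }</style>
      <div><strong>${name}</strong></div>
    `;
  }
}

// Register the tag so <profile-card name="Ada"></profile-card> works in a page.
customElements.define("profile-card", ProfileCard);
```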

Relevance: 30.00%

Publisher:

Abstract:

Personalised social matching systems can be seen as recommender systems that recommend people to other people in social networks. However, with the rapid growth in the number of users of social networks and in the information that a social matching system requires about those users, standard recommender system techniques have become insufficient for matching users in social networks. This paper presents a hybrid social matching system that takes advantage of both collaborative and content-based concepts of recommendation. Clustering is used to reduce the number of users that the matching system needs to consider and to overcome other problems from which social matching systems suffer, such as the cold start problem caused by the absence of implicit information about a new user. The proposed system has been evaluated on a dataset obtained from an online dating website. Empirical analysis shows that the accuracy of the matching process increases when both user information (explicit data) and user behavior (implicit data) are used.
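
As a rough sketch only, and not the paper's algorithm, the snippet below illustrates the general idea of a hybrid match score: candidates are first restricted to the target user's cluster, then ranked by a weighted combination of a content-based similarity over explicit profile attributes and a collaborative score derived from implicit behaviour. The data model and weight are invented.

```typescript
// Illustrative hybrid matching sketch; data model and weights are invented.
interface User {
  id: string;
  clusterId: number;            // assigned by a prior clustering step
  profile: number[];            // explicit attributes encoded as a vector
  interactedWith: Set<string>;  // implicit behaviour (e.g. profiles viewed)
}

// Content-based similarity over explicit profile vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.hypot(...a);
  const nb = Math.hypot(...b);
  return na && nb ? dot / (na * nb) : 0;
}

// Collaborative signal: overlap of interaction histories (Jaccard index).
function collaborative(a: User, b: User): number {
  const inter = [...a.interactedWith].filter((x) => b.interactedWith.has(x)).length;
  const union = new Set([...a.interactedWith, ...b.interactedWith]).size;
  return union ? inter / union : 0;
}

function rankMatches(target: User, candidates: User[], alpha = 0.5): User[] {
  return candidates
    .filter((c) => c.id !== target.id && c.clusterId === target.clusterId)
    .map((c) => ({
      user: c,
      score: alpha * cosine(target.profile, c.profile) + (1 - alpha) * collaborative(target, c),
    }))
    .sort((a, b) => b.score - a.score)
    .map((x) => x.user);
}
```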

Relevance: 30.00%

Publisher:

Abstract:

Most web service discovery systems use keyword-based search algorithms and, although partially successful, sometimes fail to satisfy some users' information needs. This has given rise to several semantics-based approaches that seek to go beyond simple attribute matching and try to capture the semantics of services. However, the results reported in the literature vary and in many cases are worse than the results obtained by keyword-based systems. We believe the accuracy of the mechanisms used to extract tokens from the non-natural-language sections of WSDL files directly affects the performance of these techniques, because some of them can be more sensitive to noise. In this paper three existing tokenization algorithms are evaluated and a new algorithm that outperforms all the algorithms found in the literature is introduced.
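
The paper's own algorithm is not reproduced here; as a baseline illustration of what tokenizing the non-natural-language parts of a WSDL file involves, the sketch below splits identifiers such as operation and parameter names on camel case, digits and common separators.

```typescript
// Baseline identifier tokenizer (not the paper's algorithm): splits WSDL
// names such as "getCustomerAddressV2" into lower-cased word tokens.
function tokenize(identifier: string): string[] {
  return identifier
    // separate lower/upper camel-case boundaries: "getCustomer" -> "get Customer"
    .replace(/([a-z\d])([A-Z])/g, "$1 $2")
    // separate acronym-word boundaries: "XMLParser" -> "XML Parser"
    .replace(/([A-Z]+)([A-Z][a-z])/g, "$1 $2")
    // treat underscores, hyphens and dots as separators
    .split(/[\s_\-.]+/)
    .filter((t) => t.length > 0)
    .map((t) => t.toLowerCase());
}

console.log(tokenize("getCustomerAddressV2"));  // ["get", "customer", "address", "v2"]
console.log(tokenize("XMLHttpRequest_helper")); // ["xml", "http", "request", "helper"]
```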

Relevance: 30.00%

Publisher:

Abstract:

In order to understand user information needs in terms of concepts, this paper introduces a novel method for matching relevance features with ontological concepts. The method first discovers relevance features from a user's local instances. A concept matching approach is then developed for matching these features to accurate concepts in a global knowledge base. This approach is significant for the transition from informative descriptors to conceptual descriptors. The proposed method is thoroughly evaluated against three information gathering baseline models. The experimental results show that the matching approach is successful and achieves a series of notable improvements in search effectiveness.
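
The abstract does not spell out the matching procedure; as a purely illustrative sketch under that caveat, the snippet below matches a discovered feature term to the ontology concept whose labels share the most words with it. The concept data and scoring are invented.

```typescript
// Illustrative sketch only: match extracted feature terms to concepts by
// label overlap. Concept data is invented.
interface Concept {
  id: string;
  labels: string[];
}

// Fraction of the feature's words that appear in any of the concept's labels.
function matchScore(feature: string, concept: Concept): number {
  const words = feature.toLowerCase().split(/\s+/);
  const labelWords = new Set(
    concept.labels.flatMap((l) => l.toLowerCase().split(/\s+/))
  );
  const hits = words.filter((w) => labelWords.has(w)).length;
  return hits / words.length;
}

function bestConcept(feature: string, ontology: Concept[]): Concept | undefined {
  return ontology
    .map((c) => ({ c, s: matchScore(feature, c) }))
    .sort((a, b) => b.s - a.s)
    .find((x) => x.s > 0)?.c;
}

const ontology: Concept[] = [
  { id: "C1", labels: ["information retrieval", "document search"] },
  { id: "C2", labels: ["machine translation"] },
];

console.log(bestConcept("retrieval of documents", ontology)?.id); // "C1"
```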

Relevance: 30.00%

Publisher:

Abstract:

Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia containing a very large number of detailed articles in most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation, but pages in different languages are rarely cross-linked except for the direct equivalent page on the same subject in each language. This can pose serious difficulties for users seeking information or knowledge from sources in other languages, or where no equivalent page exists in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in another language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains.

This study focuses specifically on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery itself. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed, comprising topics, document collections, a gold-standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified.

The thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia, and was examined in the experiments on better automatic generation of cross-lingual links carried out as part of the study. The overall major contribution of this thesis is a standard evaluation framework for cross-lingual link discovery research, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and framework described in this thesis were used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
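
One of the contributions above is mining the existing link structure for anchor probabilities. A common way to estimate such a probability (sometimes called keyphraseness) is the ratio of how often a phrase appears as link anchor text to how often it appears at all; the sketch below implements that estimate under the assumption that both counts are available, which may differ from the thesis' exact formulation.

```typescript
// Illustrative anchor-probability estimate from link-mining counts.
// The counts here are invented; in practice they would be mined from the
// Wikipedia link graph and article text.
interface AnchorStats {
  asLink: number;       // times the phrase occurs as the anchor text of a link
  occurrences: number;  // times the phrase occurs in article text overall
}

function anchorProbability(stats: AnchorStats): number {
  return stats.occurrences > 0 ? stats.asLink / stats.occurrences : 0;
}

// Rank candidate anchors in a source document by this probability and keep
// the most link-worthy ones for cross-lingual linking.
function suggestAnchors(
  candidates: Map<string, AnchorStats>,
  threshold = 0.1
): string[] {
  return [...candidates.entries()]
    .map(([phrase, stats]) => ({ phrase, p: anchorProbability(stats) }))
    .filter((x) => x.p >= threshold)
    .sort((a, b) => b.p - a.p)
    .map((x) => x.phrase);
}

const candidates = new Map<string, AnchorStats>([
  ["information retrieval", { asLink: 420, occurrences: 1200 }],
  ["the", { asLink: 3, occurrences: 900000 }],
]);

console.log(suggestAnchors(candidates)); // ["information retrieval"]
```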