940 results for WEB applications
Abstract:
The Web is constantly evolving: thanks to the 2.0 transition, the new features of HTML5 and the advent of cloud computing, the gap between Web and traditional desktop applications is narrowing. Web apps are increasingly widespread and bring several benefits compared to traditional ones. On the other hand, the reference technologies, JavaScript primarily, are not keeping pace, so a paradigm shift is taking place in Web programming and many new languages and technologies are emerging. The first objective of this thesis is to survey the reference and state-of-the-art technologies for client-side Web programming, focusing in particular on concurrency and asynchronous programming. Taking into account the problems that affect existing technologies, we finally design simpAL-web, an innovative approach to Web-app development based on the Agent-oriented programming abstraction and the simpAL language.
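As a rough illustration of the asynchronous-programming issues this thesis surveys, the following TypeScript sketch contrasts callback-based code with the structured async/await style that newer languages and APIs favour. The fetchUser functions are hypothetical stand-ins; simpAL's actual agent-oriented syntax is not reproduced here.

```typescript
// Callback style: control flow is inverted and error handling is scattered.
function fetchUserCallback(id: string, cb: (err: Error | null, data?: string) => void): void {
  setTimeout(() => cb(null, `user:${id}`), 10); // stand-in for an async I/O call
}

// Promise/async style: the same logic reads sequentially.
function fetchUser(id: string): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(`user:${id}`), 10));
}

async function main(): Promise<void> {
  fetchUserCallback("42", (err, data) => {
    if (err) { console.error(err); return; }
    console.log("callback result:", data);
  });
  const data = await fetchUser("42"); // structured, exception-friendly
  console.log("async/await result:", data);
}

main();
```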
Abstract:
This work is concerned with the increasing relationship between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. In fact, the main aim of this work is to increase users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. In order to achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools linking document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to the publishing vocabulary, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.
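As a hedged illustration of what a formal, machine-readable description attached to a scholarly document might look like, the TypeScript sketch below builds a JSON-LD object using the schema.org vocabulary. The article, author and URL are invented for illustration and do not come from the thesis.

```typescript
// A hypothetical JSON-LD description linking an article to related resources.
const articleDescription = {
  "@context": "https://schema.org",
  "@type": "ScholarlyArticle",
  "name": "An Example Article",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "isPartOf": { "@type": "PublicationIssue", "issueNumber": 4 },
  "citation": ["https://example.org/articles/related-article"], // hypothetical link
};

console.log(JSON.stringify(articleDescription, null, 2));
```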
Abstract:
Following the previous two years' workshops on dynamic languages at the ECOOP conference, the Dyla 2007 workshop was a successful and popular event. As its name implies, the workshop's focus was on dynamic languages and their applications. Topics and discussions at the workshop included macro expansion mechanisms, extension of the method lookup algorithm, language interpretation, reflexivity and languages for mobile ad hoc networks. The main goal of this workshop was to bring together the different dynamic language communities and to favour interaction between them. Dyla 2007 was organised as a full-day meeting, partly devoted to presentations of submitted position papers and partly to tool demonstrations. All accepted papers can be downloaded from the workshop's web site. In this report, we provide an overview of the presentations and a summary of the discussions.
Abstract:
Dynamic, unanticipated adaptation of running systems is of interest in a variety of situations, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications. In this paper we study a particular form of computational reflection, called unanticipated partial behavioral reflection, which is particularly well-suited for unanticipated adaptation of real-world systems. Our proposal combines the dynamicity of unanticipated reflection, i.e. reflection that does not require any sort of code preparation, with the selectivity and efficiency of partial behavioral reflection. First, we propose unanticipated partial behavioral reflection, which enables the developer to precisely select the required reifications, to flexibly engineer the metalevel and to introduce the meta-behavior dynamically. Second, we present Geppetto, a system supporting unanticipated partial behavioral reflection in Squeak Smalltalk, and illustrate its use with a concrete example of a web application. Benchmarks validate the applicability of our proposal as an extension to the standard reflective abilities of Smalltalk.
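Geppetto itself is implemented in Squeak Smalltalk, whose code the abstract does not show. As a loose TypeScript analogy, the sketch below uses a Proxy to attach meta-behavior to a running object selectively and without any prior preparation of its code; all names are invented for illustration.

```typescript
// A running object whose code was not prepared for reflection.
const service = {
  handleRequest(path: string): string {
    return `response for ${path}`;
  },
};

// Unanticipated, selective meta-behavior: only the one method we care about
// is reified (a loose analogy to partial behavioral reflection; Geppetto
// itself operates on Squeak Smalltalk, not JavaScript objects).
const monitored = new Proxy(service, {
  get(target, prop, receiver) {
    const original = Reflect.get(target, prop, receiver);
    if (prop === "handleRequest" && typeof original === "function") {
      return (...args: unknown[]) => {
        console.log(`reified call: ${String(prop)}(${args.join(", ")})`);
        return original.apply(target, args);
      };
    }
    return original;
  },
});

console.log(monitored.handleRequest("/index.html"));
```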
Abstract:
This research is situated at the intersection of educational science, computer science and school practice, and thus has a strongly interdisciplinary character. From the perspective of educational science, it is a research project in the fields of e-learning and multimedia learning, addressing the question of suitable information systems for creating and exchanging digital, multimedia, interactive learning modules. To this end, the methodological and didactic advantages of digital learning content over classical media such as book and paper were first compiled, and potential benefits of new Web 2.0 technologies were identified. Building on this, existing authoring tools for creating digital learning modules and existing exchange platforms were analyzed with respect to how far they already support and use Web 2.0 technologies. From the computer-science perspective, the analysis of existing systems yielded a requirements profile for a new authoring tool and a new exchange platform for digital learning modules. Following the Design Science Research approach, the new system was realized in an iterative development process as the web application LearningApps.org and continuously evaluated with teachers from school practice. Current web technologies were used in its development. The result of the research is a production information system that is already used by thousands of users in several countries, both in schools and in industry. An empirical study confirmed that the system achieves the goal pursued in its development: simplifying the creation and exchange of digital learning modules. From the perspective of school practice, LearningApps.org contributes to methodological diversity and to the use of ICT in the classroom. The tool's orientation towards mobile devices and 1:1 computing corresponds to the general trend in education. By linking the tool with current software developments for producing digital textbooks, educational publishers are also addressed as a target group.
Abstract:
For smart-city applications, a key requirement is to disseminate data collected from both scalar and multimedia wireless sensor networks to thousands of end users. Furthermore, the information must be delivered to non-specialist users in a simple, intuitive and transparent manner. In this context, we present Sensor4Cities, a user-friendly tool that enables data dissemination to large audiences by using social networks and/or web pages. The user can request and receive monitored information through social networks, e.g., Twitter and Facebook, owing to their popularity, user-friendly interfaces and ease of dissemination. Additionally, the user can collect or share information from smart-city services by using web pages, which also include a mobile version for smartphones. Finally, the tool can be configured to periodically monitor environmental conditions, specific behaviors or abnormal events, and to notify users in an asynchronous manner. Sensor4Cities improves data delivery for individuals or groups of users of smart-city applications and encourages the development of new user-friendly services.
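The abstract does not give Sensor4Cities' APIs; the TypeScript sketch below only illustrates the kind of publish/subscribe core that asynchronous user notification typically rests on, with all names invented for illustration.

```typescript
// Hypothetical publish/subscribe core for asynchronous event notification.
interface SensorReading {
  sensorId: string;
  value: number;
  timestamp: number;
}

type Listener = (reading: SensorReading) => void;

class NotificationHub {
  private listeners = new Map<string, Listener[]>();

  subscribe(topic: string, listener: Listener): void {
    const current = this.listeners.get(topic) ?? [];
    this.listeners.set(topic, [...current, listener]);
  }

  publish(topic: string, reading: SensorReading): void {
    for (const listener of this.listeners.get(topic) ?? []) {
      listener(reading); // e.g. post to a social network or update a web page
    }
  }
}

const hub = new NotificationHub();
hub.subscribe("air-quality", (r) =>
  console.log(`notify user: sensor ${r.sensorId} reported ${r.value}`));
hub.publish("air-quality", { sensorId: "s1", value: 87, timestamp: Date.now() });
```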
Abstract:
The web is continuously evolving into a vast collection of data, which has created interest in collecting and merging these data in a meaningful way. Based on such web data, this paper describes the building of an ontology resting on fuzzy clustering techniques. Through the continual harvesting of folksonomies by web agents, a fully automatic fuzzy grassroots ontology is built. This self-updating ontology can then be used for several practical applications in fields such as web structuring, web searching and web knowledge visualization. A potential application for online reputation analysis, the added value and possible future studies are discussed in the conclusion.
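As a minimal, hedged sketch of the fuzzy-clustering idea, the TypeScript below computes one step of the standard fuzzy c-means membership formula for tags described by a single feature (usage frequency). Real systems iterate memberships and cluster centres until convergence, and the centres here are invented.

```typescript
// One step of fuzzy c-means membership for a 1-D feature:
// u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)).
// The point is that a tag can belong to several concepts at once.
const m = 2; // fuzzifier
const centres = [5, 50]; // hypothetical "niche" and "popular" clusters

function memberships(frequency: number): number[] {
  const dists = centres.map((c) => Math.abs(frequency - c) || 1e-9);
  return dists.map(
    (d) => 1 / dists.reduce((sum, dk) => sum + (d / dk) ** (2 / (m - 1)), 0),
  );
}

console.log(memberships(20)); // [0.8, 0.2]: partial membership in both clusters
```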
Abstract:
A web service is a collection of industry standards that enable reusability of services and interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services Architecture that encapsulates the UMLSKS API and makes it available in distributed and heterogeneous environments. This is the first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the Web.
Abstract:
Enabling real end-user development is the next logical stage in the evolution of Internet-wide service-based applications. Even so, the vision of end users programming their own web-based solutions has not yet materialized. This will continue to be so unless both industry and the research community rise to the ambitious challenge of devising an end-to-end compositional model for developing a new age of end-user web application development tools. This paper describes a new composition model designed to empower programming-illiterate end users to create and share their own off-the-shelf rich Internet applications in a fully visual fashion. It presents the main insights and outcomes of our research and development efforts as part of a number of successful European Union research projects. A framework implementing this model was developed as part of the European Seventh Framework Programme FAST Project and the Spanish EzWeb Project, and allowed us to validate the rationale behind our approach.
Abstract:
Given the significant impact of Web 2.0-related innovations on new Internet-based initiatives, this paper seeks to identify to what extent the main developments are protected by patents and whether patents have played a leading role in the advent of Web 2.0. The article shows that the number of patent applications filed is quite small for many of the Web 2.0 technologies in frequent use and that, of those filed, even fewer have been granted. The conclusion is that patents do not seem to be a relevant factor in the development of Web 2.0 (and, more generally, in dynamic markets) where there is a high degree of innovation and low entry barriers for newcomers.
Abstract:
In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state of the art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario.
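One recurring guideline in this line of work is to keep URIs language-neutral and attach translations as language-tagged labels. The TypeScript sketch below emits such labels in Turtle syntax; the URI and label values are invented for illustration.

```typescript
// Sketch: language-tagged rdfs:label literals for one ontology element,
// keeping a single language-neutral URI (names invented for illustration).
const labels: Record<string, string> = {
  en: "River",
  es: "Río",
};

function toTurtle(uri: string, byLanguage: Record<string, string>): string {
  return Object.entries(byLanguage)
    .map(([lang, text]) => `<${uri}> rdfs:label "${text}"@${lang} .`)
    .join("\n");
}

console.log(toTurtle("http://example.org/ontology#River", labels));
```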
Abstract:
Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
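As an illustration of using the deployed service, the TypeScript sketch below calls a public DBpedia Spotlight annotation endpoint. The endpoint URL, the text and confidence parameters, and the response fields reflect the commonly documented public API, but should be verified against the current deployment.

```typescript
// Sketch: annotating text via a public DBpedia Spotlight endpoint.
async function annotate(text: string): Promise<void> {
  const params = new URLSearchParams({ text, confidence: "0.5" });
  const response = await fetch(
    `https://api.dbpedia-spotlight.org/en/annotate?${params}`,
    { headers: { Accept: "application/json" } },
  );
  const result = await response.json();
  // Each resource carries the DBpedia URI chosen by disambiguation.
  for (const r of result.Resources ?? []) {
    console.log(r["@surfaceForm"], "->", r["@URI"]);
  }
}

annotate("Berlin is the capital of Germany.");
```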
End-User Development Success Factors and their Application to Composite Web Development Environments
Abstract:
The Future Internet is expected to be composed of a mesh of interoperable Web services accessed from all over the Web. This approach has not yet caught on since global user-service interaction is still an open issue. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. The weakness lies in the abstraction of the underlying service front-end architecture rather than the infrastructure technologies themselves. In our opinion, the best approach is to offer end-to-end composition from user interface to service invocation, as well as an understandable abstraction of both building blocks and a visual composition technique. In this paper we formalize our vision with regard to the next-generation front-end Web technology that will enable integrated access to services, contents and things in the Future Internet. We present a novel reference architecture designed to empower non-technical end users to create and share their own self-service composite applications. A tool implementing this architecture has been developed as part of the European FP7 FAST Project and EzWeb Project, allowing us to validate the rationale behind our approach.
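As a hedged sketch of the core abstraction such a composition model needs, the TypeScript below gives building blocks typed input/output endpoints so that connectability can be checked mechanically. The interfaces are invented for illustration and are not the FAST or EzWeb APIs.

```typescript
// Every building block exposes typed inputs/outputs so non-programmers can
// wire blocks visually by matching endpoints (names invented for illustration).
interface Endpoint {
  name: string;
  type: string; // e.g. "Text", "GeoLocation", "MapPin"
}

interface BuildingBlock {
  inputs: Endpoint[];
  outputs: Endpoint[];
  run(inputs: Record<string, unknown>): Record<string, unknown>;
}

// Two blocks are connectable when an output type matches an input type.
function canConnect(from: BuildingBlock, to: BuildingBlock): boolean {
  return from.outputs.some((o) => to.inputs.some((i) => i.type === o.type));
}

const geocoder: BuildingBlock = {
  inputs: [{ name: "address", type: "Text" }],
  outputs: [{ name: "location", type: "GeoLocation" }],
  run: (i) => ({ location: `geo(${i.address})` }),
};
const mapBlock: BuildingBlock = {
  inputs: [{ name: "where", type: "GeoLocation" }],
  outputs: [{ name: "pin", type: "MapPin" }],
  run: (i) => ({ pin: i.where }),
};

console.log(canConnect(geocoder, mapBlock)); // true
```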
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the semantic web using different technologies, such as database-to-semantic mapping and scraping. Nevertheless, existing scraping solutions are ad hoc, complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. The framework is structured in three levels: scraping services, the semantic scraping model and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
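As a minimal sketch of the declarative idea behind the three-level framework, the TypeScript below maps syntactic extraction rules (where a value sits in a page) to semantic predicates (what the value means) and produces RDF-like triples. The rule shape and all names are invented for illustration; the article's actual RDF scraping model is richer.

```typescript
// A rule ties the syntactic level (selector) to the semantic level (predicate).
interface ScrapingRule {
  selector: string;  // where in the page the value lives
  predicate: string; // what the extracted value means
}

type Triple = [subject: string, predicate: string, object: string];

// Stand-in for real HTML extraction (e.g. via a DOM library).
function extract(page: Record<string, string>, rule: ScrapingRule): string {
  return page[rule.selector] ?? "";
}

function scrape(url: string, page: Record<string, string>, rules: ScrapingRule[]): Triple[] {
  return rules.map((r) => [url, r.predicate, extract(page, r)]);
}

const rules: ScrapingRule[] = [
  { selector: "h1.title", predicate: "dc:title" },
  { selector: "span.author", predicate: "dc:creator" },
];

console.log(scrape("http://example.org/post/1",
  { "h1.title": "A Post", "span.author": "Alice" }, rules));
```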