977 results for Web, Application, WebApp, Ionic, Angular, SPA
Abstract:
Large retail chains, in order to develop ever more effective commercial strategies, are interested in understanding the path each customer takes inside the store: which departments they visit, how long they stay in a specific area, and so on. It was therefore necessary to find a system to locate and track a customer inside a closed environment (indoor positioning). First of all, the work focused on researching and developing a new idea capable of overcoming the limitations of the solutions currently on the market. The idea was to replace the store's loyalty cards with Bluetooth LE cards and to build an indoor positioning system using the same operating logic as GPS in open environments. The receiver is the BLE card carried by the customer, and the "satellites" are three Android devices equipped with a dedicated app that samples the radio signal strength (RSSI) emitted by the card every second. The readings from the three Android devices are then transferred to a web application that processes the data through trilateration. The output is the x,y coordinates of each card for every second of the visit inside the store. These data are finally used to display graphically the path followed by the customer, the entry and exit times, and the dwell time. In summary, the project comprises a research phase leading to the new idea, a design phase to transfer the GPS operating principles to an indoor environment, an implementation phase for the app and the web application, and finally a field-testing phase, which will be completed after graduation with real tests in a local supermarket.
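The abstract describes the positioning pipeline only at a high level. As a rough illustration, a minimal TypeScript sketch of the trilateration step is given below, assuming a log-distance path-loss model for converting RSSI to distance; the parameter names (txPower, n) and the receiver layout are assumptions, not details from the project.

```typescript
// Illustrative sketch only: the abstract gives no formulas, so the path-loss model
// and its parameters below are assumptions.
interface Receiver { x: number; y: number; rssi: number; }

// Log-distance path-loss model: txPower is the expected RSSI at 1 m, n ~ 2 in free space.
function rssiToDistance(rssi: number, txPower = -59, n = 2): number {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// Trilateration of one BLE card from the RSSI seen by three Android receivers.
function trilaterate(r1: Receiver, r2: Receiver, r3: Receiver): { x: number; y: number } {
  const [d1, d2, d3] = [r1, r2, r3].map((r) => rssiToDistance(r.rssi));
  // Subtracting the circle equations (x - xi)^2 + (y - yi)^2 = di^2 yields a 2x2 linear system.
  const a11 = 2 * (r2.x - r1.x), a12 = 2 * (r2.y - r1.y);
  const a21 = 2 * (r3.x - r1.x), a22 = 2 * (r3.y - r1.y);
  const b1 = d1 ** 2 - d2 ** 2 - r1.x ** 2 + r2.x ** 2 - r1.y ** 2 + r2.y ** 2;
  const b2 = d1 ** 2 - d3 ** 2 - r1.x ** 2 + r3.x ** 2 - r1.y ** 2 + r3.y ** 2;
  const det = a11 * a22 - a12 * a21; // receivers must not be collinear
  return { x: (b1 * a22 - b2 * a12) / det, y: (a11 * b2 - a21 * b1) / det };
}
```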
Abstract:
Dynamic, unanticipated adaptation of running systems is of interest in a variety of situations, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications. In this paper we study a particular form of computational reflection, called unanticipated partial behavioral reflection, which is particularly well suited for unanticipated adaptation of real-world systems. Our proposal combines the dynamicity of unanticipated reflection, i.e., reflection that requires no advance preparation of the code, with the selectivity and efficiency of partial behavioral reflection. First, we propose unanticipated partial behavioral reflection, which enables the developer to precisely select the required reifications, to flexibly engineer the metalevel, and to introduce the meta-behavior dynamically. Second, we present a system supporting unanticipated partial behavioral reflection in Squeak Smalltalk, called Geppetto, and illustrate its use with a concrete example of a web application. Benchmarks validate the applicability of our proposal as an extension to the standard reflective abilities of Smalltalk.
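Geppetto is implemented in Squeak Smalltalk, whose reflective API is not reproduced here. Purely as a loose analogy, the TypeScript sketch below conveys the flavour of partial behavioral reflection: only selected message sends of a running object are reified and handed to a metaobject installed at runtime, with no prior preparation of the base code.

```typescript
// Loose TypeScript analogy, NOT Geppetto's Smalltalk API: only the selected operations
// of an already-running object are reified and routed to a metaobject.
interface MetaObject {
  handleCall(target: object, method: string, args: unknown[], proceed: () => unknown): unknown;
}

function reflectCalls<T extends object>(base: T, selectedMethods: Set<string>, meta: MetaObject): T {
  return new Proxy(base, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      // Reify only the selected message sends; everything else stays on the normal path.
      if (typeof value === 'function' && selectedMethods.has(String(prop))) {
        return (...args: unknown[]) =>
          meta.handleCall(target, String(prop), args, () => value.apply(target, args));
      }
      return value;
    },
  });
}
```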
Abstract:
Thanks to recent developments in online teaching (video platforms, MOOCs) on the one hand, and to a huge selection as well as simple production and distribution on the other, instructional videos enjoy great popularity for knowledge transfer. Nevertheless, videos come with a decisive drawback that lies in the nature of the data format: searching for specific facts within a video, as well as the semantic preparation needed to automatically link it with further specific content, require considerable effort. This hampers the selection of teaching segments oriented towards learning success and their arrangement for controlling the learning process. While watching a video, learners may end up repeating facts they already know, or can only skip them by tedious manual seeking; the same problem arises when deliberately revisiting sections of a video. As a solution, a web application is presented that enables the semantic preparation of videos into adaptive learning content: by integrating self-test tasks with defined follow-up actions, video sections can be skipped or repeated automatically based on the learner's current knowledge, and external content can be linked. The presented approach is thus based on an extension of the behaviourist learning theory of branched programmed instruction according to Crowder, which provides sequences of learning units adapted to the learner's progress. At the same time, regularly interspersed self-test tasks foster the learner's motivation and attention according to the rules of Skinner's programmed instruction and reinforcement theory. By explicitly marking up related sections in videos, the information they contain can additionally be made machine-readable, creating further possibilities for finding and linking learning content.
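As a purely illustrative sketch of the branching mechanism described above (segment names, fields and actions are invented, not taken from the paper), the follow-up actions attached to a self-test task might be modelled like this in TypeScript:

```typescript
// Hypothetical data model for Crowder-style branching around video segments.
type FollowUpAction =
  | { kind: 'skip'; toSegment: string }   // learner already knows this part
  | { kind: 'repeat'; segment: string }   // wrong answer: watch the section again
  | { kind: 'link'; url: string };        // point to external background material

interface SelfTest {
  afterSegment: string;
  question: string;
  onCorrect: FollowUpAction;
  onWrong: FollowUpAction;
}

// The learner's answer, not a fixed order, selects the next learning unit.
function nextAction(test: SelfTest, answeredCorrectly: boolean): FollowUpAction {
  return answeredCorrectly ? test.onCorrect : test.onWrong;
}
```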
Abstract:
Social networks offer horizontal integration for any mobile platform, providing app users with a convenient single sign-on point. Nonetheless, there are growing privacy concerns regarding their use. These concerns trigger alarm among app developers, who fight for their user base: while they are happy to act on users' information collected via social networks, they are not always willing to sacrifice their adoption rate for this goal. So far, understanding of this trade-off has remained unclear. To fill this gap, we employ a discrete choice experiment to explore the role of Facebook Login and investigate the impact of accompanying requests for different information items and actions in the mobile app adoption process. We quantify users' concerns regarding these items in monetary terms. Beyond hands-on insights for providers, our study contributes to the theoretical discourse on the value of privacy in the growing world of Social Media and the mobile web.
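The abstract does not spell out how concerns are monetized; a common way to do so in a discrete choice experiment is to divide an attribute's estimated utility coefficient by the price coefficient (willingness to pay). The sketch below uses purely hypothetical coefficient values:

```typescript
// Standard conditional-logit monetization: WTP = -beta_attribute / beta_price.
// The numbers below are hypothetical, not estimates from the study.
function willingnessToPay(betaAttribute: number, betaPrice: number): number {
  return -betaAttribute / betaPrice;
}

// Example: disutility of -0.8 for an "app may post on my behalf" request and a
// price coefficient of -2 per euro imply a perceived "cost" of about 0.40 EUR.
const wtpPostOnBehalf = willingnessToPay(-0.8, -2.0); // => -0.4
```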
Abstract:
BACKGROUND Combination antiretroviral therapy (ART) suppresses viral replication in HIV-infected children. The growth of virologically suppressed children on ART has not been well documented. We aimed to develop dynamic reference curves for weight-for-age z-scores (WAZ) and height-for-age z-scores (HAZ). RESULTS A total of 4,876 children were followed for 7,407 person-years. Analyses were stratified by baseline z-score and age, which were the most important predictors of growth response. The youngest children showed the most pronounced initial increase in weight and height, but catch-up growth stagnated after 1-2 years. Three years after starting ART, WAZ ranged from -2.2 (95% prediction interval -5.6 to 0.8) in children with baseline age ≥5 years and z-score ≤-3, to 0.0 (-2.7 to 2.4) in children with baseline age <2 years and WAZ ≥-1. For HAZ the corresponding range was -2.3 (-4.9 to 0.3) in children with baseline age ≥5 years and z-score ≤-3, to 0.3 (-3.1 to 3.4) in children with baseline age 2-5 years and HAZ ≥-1. CONCLUSIONS We have developed an online tool to calculate reference trajectories in fully suppressed children. The web application could help to define 'optimal' growth response and identify children with treatment failure.
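For context, weight- and height-for-age z-scores are usually derived from growth-reference LMS parameters; the sketch below shows that standard formula, not the paper's own modelling of reference trajectories or prediction intervals:

```typescript
// Standard LMS z-score (Cole's method): z = ((x / M)^L - 1) / (L * S),
// with the limit log(x / M) / S when L = 0. L, M, S come from a growth reference.
function zScoreLMS(measurement: number, L: number, M: number, S: number): number {
  return L !== 0
    ? (Math.pow(measurement / M, L) - 1) / (L * S)
    : Math.log(measurement / M) / S;
}
```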
Abstract:
The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rarely the case in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts libraries to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with little or no programming skill, while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software and the Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such powerful approaches could improve the timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
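As an illustration of the kind of analytical functionality mentioned (detection of unusually high case occurrences), the sketch below implements a simple EARS-C1-style threshold rule in TypeScript; whether the authors used exactly this algorithm is not stated in the abstract:

```typescript
// Flag a day whose case count exceeds the baseline mean by 3 standard deviations.
function isUnusuallyHigh(todayCount: number, baseline: number[]): boolean {
  const mean = baseline.reduce((s, v) => s + v, 0) / baseline.length;
  const sd = Math.sqrt(baseline.reduce((s, v) => s + (v - mean) ** 2, 0) / baseline.length);
  return todayCount > mean + 3 * sd;
}

// e.g. isUnusuallyHigh(14, [2, 3, 1, 4, 2, 3, 2]) === true
```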
Abstract:
We investigated the impacts of predicted ocean acidification and future warming on the quantity and nutritional quality of a natural autumn phytoplankton bloom in a mesocosm experiment. Since the effects of CO2 enrichment and temperature have usually been studied independently, we were also interested in the interactive effects of both aspects of climate change. We therefore used a factorial design with two temperature and two acidification levels in a mesocosm experiment with a Baltic Sea phytoplankton community. Our results show a significant time-dependent influence of warming on phytoplankton carbon, chlorophyll a, and particulate organic carbon (POC). Phytoplankton carbon, for instance, decreased by more than half with increasing temperature at bloom time. Additionally, elemental carbon-to-phosphorus ratios (C:P) increased significantly, by approximately 5-8%, under warming. Impacts of CO2, or synergistic effects of warming and acidification, could not be detected. We suggest that stronger temperature-induced grazing pressure was responsible for the significant decline in phytoplankton biomass. Our results suggest that the biological effects of warming on Baltic Sea phytoplankton are considerable and will likely have fundamental consequences for trophic transfer in the pelagic food web.
Abstract:
Enabling real end-user programming is the next logical stage in the evolution of Internet-wide service-based applications. Even so, the vision of end users programming their own web-based solutions has not yet materialized. This will continue to be so unless both industry and the research community rise to the ambitious challenge of devising an end-to-end compositional model for developing a new generation of end-user web application development tools. This paper describes a new composition model designed to empower programming-illiterate end users to create and share their own off-the-shelf rich Internet applications in a fully visual fashion. The paper presents the main insights and outcomes of our research and development efforts across a number of successful European Union research projects. A framework implementing this model was developed as part of the European Seventh Framework Programme FAST Project and the Spanish EzWeb Project and allowed us to validate the rationale behind our approach.
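To make the idea of a visual composition model concrete, the sketch below shows a minimal event-based wiring mechanism between components; the names are illustrative and do not reproduce the FAST/EzWeb model itself:

```typescript
// Components expose named outputs that end users "wire" to other components' inputs,
// as a visual composition tool would do behind the scenes. Illustrative only.
interface Component {
  id: string;
  inputs: Record<string, (payload: unknown) => void>;
  outputs: Record<string, Array<(payload: unknown) => void>>;
}

function wire(from: Component, output: string, to: Component, input: string): void {
  if (!from.outputs[output]) from.outputs[output] = [];
  from.outputs[output].push((payload) => to.inputs[input](payload));
}

function emit(from: Component, output: string, payload: unknown): void {
  for (const deliver of from.outputs[output] || []) deliver(payload);
}
```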
Abstract:
An important competence of human data analysts is to interpret and explain the meaning of the results of data analysis to end-users. However, existing automatic solutions for intelligent data analysis provide limited help in interpreting and communicating information to non-expert users. In this paper we present a general approach to generating explanatory descriptions of the meaning of quantitative sensor data. We propose a type of web application: a virtual newspaper with automatically generated news stories that describe the meaning of sensor data. This solution integrates a variety of techniques from intelligent data analysis into a web-based multimedia presentation system. We validated our approach on a real-world problem and demonstrated its generality using data sets from several domains. Our experience shows that this solution can facilitate the use of sensor data by general users and, therefore, can increase the utility of sensor network infrastructures.
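A minimal, purely illustrative example of the underlying data-to-text idea (the paper's actual analysis and presentation pipeline is far richer) could look like this:

```typescript
// Turn a series of quantitative sensor readings into a short news-like sentence.
// Assumes at least one reading; names and wording are invented for illustration.
interface Reading { timestamp: Date; value: number; }

function describeTrend(sensorName: string, readings: Reading[]): string {
  const first = readings[0].value;
  const last = readings[readings.length - 1].value;
  const delta = last - first;
  const direction = delta > 0 ? 'rose' : delta < 0 ? 'fell' : 'remained stable';
  return `${sensorName} ${direction} from ${first} to ${last} over the reported period.`;
}

// describeTrend('River level (cm)', data) -> "River level (cm) rose from 120 to 145 ..."
```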
Abstract:
This paper describes a novel architecture to introduce automatic annotation and processing of semantic sensor data within context-aware applications. Based on well-known statechart technology, represented using the W3C SCXML language combined with Semantic Web technologies, our architecture is able to provide enriched, higher-level semantic representations of the user's context. This capability to detect and model relevant user situations allows seamless modeling of the actual interaction situation, which can be integrated during the design of multimodal user interfaces (also based on SCXML) so that they can be adequately adapted. The final result of this contribution can therefore be described as a flexible context-aware SCXML-based architecture, suitable both for designing a wide range of multimodal context-aware user interfaces and for implementing the automatic enrichment of sensor data, making it available to the entire Semantic Sensor Web.
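As a rough illustration of statechart-based context modelling, the TypeScript sketch below encodes a few hypothetical context states and semantic sensor events; the real architecture expresses this in SCXML documents combined with semantic annotations:

```typescript
// Hypothetical context states and semantic events; the transition table plays the
// role of a (very small) statechart. Names are invented for illustration.
type ContextState = 'alone' | 'in_meeting' | 'driving';
type SemanticEvent =
  | 'calendar:meeting_started' | 'calendar:meeting_ended'
  | 'car:engine_on' | 'car:engine_off';

const transitions: Record<ContextState, Partial<Record<SemanticEvent, ContextState>>> = {
  alone:      { 'calendar:meeting_started': 'in_meeting', 'car:engine_on': 'driving' },
  in_meeting: { 'calendar:meeting_ended': 'alone' },
  driving:    { 'car:engine_off': 'alone' },
};

// Higher-level context is derived from annotated sensor events; the UI adapts per state.
function nextContext(current: ContextState, event: SemanticEvent): ContextState {
  return transitions[current][event] ?? current;
}
```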
Abstract:
The European Higher Education Area (EHEA) has led to a change in the way subjects are taught. One of the most important aspects of the EHEA is to support students' autonomous study. Following this new approach, the virtual laboratory of the subject Mechanisms of the Aeronautical studies at the Technical University of Madrid is being migrated to an on-line scheme. This virtual laboratory consists of two practices: the design of cam-follower mechanisms and the design of gear trains. Both practices are software applications that, in the current situation, need to be installed on each computer, and the students carry out the practice in the computer classroom of the school under the supervision of a teacher. During this year the cam-follower mechanism design practice has been moved to a web application using Java and the Google Development Toolkit. In this practice the students have to design and study the operation of a cam that performs a specific displacement diagram with a selected follower, taking into account that the mechanism must be able to work properly at high speed. The practice keeps its original objectives on the new platform, while taking advantage of the new methodology and trying to avoid the drawbacks that the previous version had shown. Once the new practice was ready, a pilot study was carried out to compare both approaches: on-line and in-lab. This paper shows the adaptation of the cam-and-follower practice to an on-line methodology. Both practices are described and the changes made to the initial one are shown. They are compared, and the weak and strong points of each one are analyzed. Finally, we explain the pilot study carried out, the students' impressions and the results obtained.
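For readers unfamiliar with cam design, a displacement law commonly chosen when a mechanism must run smoothly at high speed is the cycloidal rise, sketched below; the practice's actual displacement diagrams and units are not taken from the paper:

```typescript
// Cycloidal rise law: s(theta) = h * (theta/beta - sin(2*pi*theta/beta) / (2*pi)),
// for 0 <= theta <= beta. Acceleration is zero at both ends of the rise, which is
// why this law is popular for high-speed cams. Values are illustrative.
function cycloidalRise(theta: number, beta: number, lift: number): number {
  const u = theta / beta;
  return lift * (u - Math.sin(2 * Math.PI * u) / (2 * Math.PI));
}

// e.g. follower displacement halfway through a 90-degree rise of 20 mm:
// cycloidalRise(45, 90, 20) === 10
```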
Abstract:
A useful strategy for improving disaster risk management is to share spatial data across different technical organizations using shared information systems. However, implementing this type of system requires a large effort, so it is difficult to find fully implemented and sustainable information systems that facilitate sharing multinational spatial data about disasters, especially in developing countries. In this paper, we describe a pioneering system for sharing spatial information that we developed for the Andean Community. This system, called SIAPAD (Andean Information System for Disaster Prevention and Relief), integrates spatial information from 37 technical organizations in the Andean countries (Bolivia, Colombia, Ecuador, and Peru). SIAPAD is based on the concept of a thematic Spatial Data Infrastructure (SDI) and includes a web application, called GEORiesgo, which helps users find relevant information through a knowledge-based system. In the paper, we describe the design and implementation of SIAPAD, together with the general conclusions and future directions drawn from this work.
Abstract:
Multi-user videoconferencing systems offer communication between more than two users, who are able to interact through their webcams, microphones and other components. The use of these systems has increased recently due, on the one hand, to improvements in Internet access in companies, universities and homes, whose available bandwidth has increased while the delay in sending and receiving packets has decreased. On the other hand, the advent of Rich Internet Applications (RIA) means that a large part of web application logic and control has started to be implemented in the web browser. This has allowed developers to create web applications with a level of complexity comparable to traditional desktop applications running on top of the operating system. More recently, the use of Cloud Computing systems has improved application scalability and reduced the price of backend systems, offering the possibility of implementing web services on the Internet without spending large amounts of money on deploying infrastructure and resources, both hardware and software. Nevertheless, there are not many initiatives that aim to implement videoconferencing systems taking advantage of Cloud systems. This dissertation proposes a set of techniques, interfaces and algorithms for the implementation of videoconferencing systems in public and private Cloud Computing infrastructures. The mechanisms proposed here are based on the implementation of a basic videoconferencing system that runs in the web browser without any prior installation requirements. To this end, the development of this thesis starts from an RIA application built with current technologies that allow users to access their webcams and microphones from the browser and to send the captured data through their Internet connections. Furthermore, interfaces have been implemented to allow end users to participate in videoconferencing rooms that are managed on different Cloud providers' servers. To do so, this dissertation builds on the results obtained with the previous techniques, and the backend resources were implemented in the Cloud. A traditional videoconferencing service previously implemented in the department was modified to meet the typical requirements of Cloud Computing infrastructures. This allowed us to validate whether public Cloud Computing infrastructures are suitable for the traffic generated by this kind of system. The analysis focused on the network level and on the processing capacity and stability of the Cloud Computing systems. To strengthen this validation, several more general cases were also considered, such as multimedia data processing in the Cloud, an area in which research activity has increased in recent years. The last stage of this dissertation is the design of a new methodology for implementing these kinds of applications in hybrid clouds, reducing the cost of videoconferencing systems. Finally, this dissertation opens up a discussion about the conclusions obtained throughout this study, providing useful results for the different stages of implementing videoconferencing systems on Cloud Computing systems.
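The thesis describes browser-side capture with the RIA technologies available at the time; with today's standard browser APIs the same capture-and-send step could be sketched as follows (the signalling details, server names and STUN URL are placeholders, not part of the thesis):

```typescript
// Minimal browser-side sketch: access webcam/microphone without installing anything
// and attach the captured tracks to a peer connection towards a Cloud-hosted room.
async function startLocalMedia(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
}

async function joinRoom(signalingUrl: string): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // public STUN server as placeholder
  });
  const stream = await startLocalMedia();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream)); // send captured media
  // Offer/answer exchange with the room server would happen here over `signalingUrl`
  // (e.g. a WebSocket); omitted for brevity.
  return pc;
}
```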
Abstract:
With the success of Web 2.0 we are witnessing a growing number of services and APIs exposed by Telecom, IT and content providers. Targeting the Web community and, in particular, Web application developers, service providers expose capabilities of their infrastructures and applications in order to open new markets and to reach new customer groups. However, due to the complexity of the underlying technologies, the last step, i.e., the consumption and integration of the offered services, is a non-trivial and time-consuming task that is still a prerogative of expert developers. Although many approaches to lower the entry barriers for end users exist, little success has been achieved so far. In this paper, we introduce the OMELETTE project and show how it addresses end-user-oriented telco mashup development. We present the goals of the project, describe its contributions, summarize current results, and describe current and future work.