910 results for Long-Polling, GCM, Google Cloud Messaging, RESTful Web services, Push, Notifications
Abstract:
In this research we analyze the capabilities of the cloud, focusing on computing (understood as processing capacity) and the storage of geolocated data (content delivery networks). These capabilities, combined with a business model involving zero provisioning cost and pay-per-use, as well as cost reductions through synergies and the optimization of data centers, make for an interesting scenario for SMEs dedicated to audiovisual production. Using cloud services, a small company can, in a matter of minutes, reach the processing and distribution power of the giants of audiovisual production, without having to occupy its own premises or pay equipment rental in advance. We describe the AWS (Amazon Web Services) services that would be useful to such a company, the use it would make of them and their cost, and we compare this with the budget for carrying out the same installation physically at the company, having to buy and install the equipment.
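To make the cost argument concrete, the following minimal Python sketch compares a pay-per-use cloud bill with an upfront on-premises purchase; all figures (hourly rate, hardware price, usage hours) are illustrative assumptions, not values taken from the study.

```python
# Illustrative break-even comparison between pay-per-use cloud compute
# and an upfront on-premises purchase. All numbers are assumptions.

CLOUD_RATE_PER_HOUR = 0.50      # assumed hourly price of a rendering-class instance (USD)
ONPREM_CAPEX = 8000.0           # assumed purchase + installation cost of equivalent hardware (USD)
ONPREM_MONTHLY_OPEX = 120.0     # assumed power, space and maintenance per month (USD)

def cloud_cost(hours: float) -> float:
    """Pay-per-use: zero provisioning cost, billed only for hours used."""
    return CLOUD_RATE_PER_HOUR * hours

def onprem_cost(months: int) -> float:
    """Upfront purchase plus running costs, independent of actual usage."""
    return ONPREM_CAPEX + ONPREM_MONTHLY_OPEX * months

if __name__ == "__main__":
    # A small studio rendering ~200 hours per month over one year.
    months, hours_per_month = 12, 200
    print(f"cloud:   {cloud_cost(hours_per_month * months):>10.2f} USD")
    print(f"on-prem: {onprem_cost(months):>10.2f} USD")
```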
Abstract:
In this paper we introduce the online version of our ReaderBench framework, which includes multi-lingual comprehension-centered web services designed to address a wide range of individual and collaborative learning scenarios, as follows. First, students can be engaged in reading a course material and then eliciting their understanding of it; the reading strategies component provides an in-depth perspective on comprehension processes. Second, students can write an essay or a summary; the automated essay grading component gives them access to more than 200 textual complexity indices covering lexical, syntactic, semantic and discourse structure measurements. Third, students can start discussing in a chat or a forum; the Computer Supported Collaborative Learning (CSCL) component provides in-depth conversation analysis in terms of evaluating each member's involvement in the CSCL environment. Finally, the sentiment analysis, semantic models and topic mining components enable a clearer perspective on learners' points of view and underlying interests.
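As a rough illustration of how such comprehension-centered web services are typically consumed, the sketch below posts an essay to a textual-complexity endpoint over HTTP; the URL, route and JSON fields are hypothetical placeholders, not the documented ReaderBench API.

```python
# Hypothetical client for a comprehension-centered web service.
# Endpoint URL and payload fields are illustrative only.
import requests

ENDPOINT = "https://example.org/readerbench/textual-complexity"  # placeholder URL

payload = {
    "text": "Students wrote a short summary of the assigned chapter...",
    "language": "en",            # assumed parameter: language of the submitted text
    "indices": ["lexical", "syntax", "semantics", "discourse"],  # assumed index families
}

response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()

for index, value in response.json().items():   # assumed flat {index: score} reply
    print(f"{index}: {value}")
```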
Abstract:
Intelligent Tutoring Systems (ITSs) are computerized systems for learning-by-doing. These systems provide students with immediate and customized feedback on learning tasks. An ITS typically consists of several modules that are connected to each other. This research focuses on the distribution of the ITS module that provides expert knowledge services. For the distribution of such an expert knowledge module we need to use an architectural style, because this provides a standard interface, which increases the reusability and operability of the expert knowledge module. To provide expert knowledge modules in a distributed way, we need to answer the research question: 'How can we compare and evaluate REST, Web services and Plug-in architectural styles for the distribution of the expert knowledge module in an intelligent tutoring system?'. We present an assessment method for selecting an architectural style. Using this assessment method on the three architectural styles, we selected REST as the style that best supports the distribution of expert knowledge modules. With the assessment method we also analyzed the trade-offs that come with selecting REST. We present a prototype and architectural views based on REST to demonstrate that the assessment method correctly scores REST as an appropriate architectural style for the distribution of expert knowledge modules.
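A minimal sketch of how an expert-knowledge module could be exposed in the REST style selected by the assessment method; the resource name, payload and toy diagnosis rule are hypothetical, not the paper's prototype.

```python
# Minimal REST-style wrapper around a hypothetical expert-knowledge module.
# Resource names and the toy diagnosis rule are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def diagnose(step: str, expected: str) -> dict:
    """Stand-in for the expert knowledge module: compare a student's step
    with the expected step and produce feedback."""
    correct = step.strip() == expected.strip()
    return {
        "correct": correct,
        "feedback": "Well done." if correct else f"Expected '{expected}', got '{step}'.",
    }

@app.post("/exercises/<exercise_id>/diagnosis")   # diagnosis exposed as a REST resource
def post_diagnosis(exercise_id: str):
    body = request.get_json(force=True)
    result = diagnose(body["studentStep"], body["expectedStep"])
    return jsonify({"exerciseId": exercise_id, **result}), 201

if __name__ == "__main__":
    app.run(port=5000)
```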
Abstract:
Every year, numerous recreational off-road (Todo-Terreno Turístico, TTT) events take place across the country, during which Global Positioning System (GPS) coordinates are automatically recorded by mobile applications. This type of information can be used both for tourism promotion and by other entities that need to travel along these rural tracks, typically in mountainous areas. Among other data, the position, speed and altitude of the vehicle are recorded, which makes it possible to obtain relevant information, such as whether the route is passable or what the recommended speed is. For example, during firefighting operations, firefighters and civil protection could know whether these routes are usable when planning the response, with a reduced likelihood of complications related to vehicle access, thereby improving response time. This document discusses how an open-source web mapping application could be designed to allow the sharing, use and exploitation of data on the off-road routes of TTT practitioners. It also describes how the application developed within the scope of the master's dissertation allows possible routes, including TTT routes, to be selected and ordered, presenting the characteristics of the terrain in order to support decision-making by members of fire brigades. The current interface of the application, which includes a dynamic map and a waypoint manager, is also presented.
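The passability and recommended-speed information mentioned above can be derived from the recorded points; the sketch below computes segment distances and speeds from a list of (lat, lon, altitude, timestamp) samples using the haversine distance. The sample points are invented for illustration.

```python
# Derive segment speeds from recorded GPS samples (lat, lon, altitude, time).
# Sample points are illustrative assumptions.
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

track = [  # (latitude, longitude, altitude_m, timestamp) -- illustrative values
    (41.545, -8.426, 320.0, datetime(2024, 5, 4, 10, 0, 0)),
    (41.546, -8.424, 335.0, datetime(2024, 5, 4, 10, 1, 30)),
    (41.548, -8.423, 352.0, datetime(2024, 5, 4, 10, 3, 10)),
]

for (la1, lo1, _, t1), (la2, lo2, _, t2) in zip(track, track[1:]):
    dist = haversine_m(la1, lo1, la2, lo2)
    speed_kmh = dist / (t2 - t1).total_seconds() * 3.6
    print(f"{dist:6.1f} m at {speed_kmh:4.1f} km/h")
```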
Abstract:
This article describes a model for implementing dynamic business platforms based exclusively on XML technologies. It presents a real case that implements its various layers (presentation, logic and data) in XML. Communication between layers is ensured by Web services (WS), making this a service-oriented architecture (SOA). The proposed model bases all programming of the business process on a high-level language, WS-BPEL, thereby providing the conditions to adapt to the dynamism demanded by the organisation's business and to the heterogeneity of its systems. The real case is an electronic registrar's office that arises in the context of university Web portals. The developed system aims to offer a set of services for accessing information and for triggering computational and/or human actions.
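As an illustration of the WS layer that connects presentation, logic and data, the sketch below consumes a SOAP Web service with the zeep library; the WSDL location and the operation name are hypothetical placeholders for the electronic registrar's services.

```python
# Hypothetical SOAP client for one of the registrar's Web services.
# The WSDL location and operation name are placeholders, not the real service.
from zeep import Client

WSDL = "https://example.university.pt/registrar/EnrolmentService?wsdl"  # placeholder

client = Client(WSDL)
# Operations exposed in the WSDL become methods on client.service;
# 'RequestEnrolmentCertificate' is an assumed operation name.
result = client.service.RequestEnrolmentCertificate(studentId="A12345", year=2024)
print(result)
```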
Abstract:
Like many other higher education schools, ISCAP's student population grew at a rate of almost 100% at the end of the twentieth century. Its administrative structures were reinforced, but not in the same proportion. Faced with the inability to solve the problem, the administration decided to implement a computer-based system available on the Internet. In a first stage, in 1997, the system was implemented to support administrative services. The next stage, in 1999, aimed to increase the quality of student services, and a project to make student services available on the Internet began to be developed.
Abstract:
This article discusses the application of Information and Communication Technologies and best-practice strategies to capture and maintain the attention of faculty students. It is based on a ten-year case study using a complete information system. This system, in addition to being considered an ERP supporting academic management activities, also has a strong SRM component that supports academic and administrative activities. The article describes the extent to which the presented system facilitates interaction and communication between members of the academic community over the Internet, with services available on the Web complemented by email, SMS and CTI. Backed by empirical analysis and the results of investigations, it demonstrates how this type of practice can raise the level of satisfaction of the community. In particular, it is possible to combat academic failure, to prevent students from leaving their course before completion, and to encourage them to recommend it to potential students. In addition, such a strategy also allows strong economies in the management of the institution, increasing its value. As future work, we present the new phase of the project, towards the implementation of Business Intelligence to optimize the management process and make it proactive. The technological vision that guides new developments towards a construction based on Web services and procedural languages is also presented.
Abstract:
This thesis presents an interoperable architecture designed for the acquisition, manipulation, processing and analysis of geographical information. The 3D application, implemented as part of the architecture, besides allowing the visualization and manipulation of spatial data within a 3D environment, offers methods for discovering, accessing and using geo-processes available through Web Services. Furthermore, user interaction follows an approach that breaks the typical complexity of most Geographic Information Systems. This simplicity is generally achieved through a visual programming approach that allows operators to take advantage of location and to use processes through abstract representations. Thus, processing units are represented on the terrain through 3D components, which can be directly manipulated and linked to create complex process chains. New processes can also be visually created and deployed online.
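The visual chaining of processing units described above maps naturally onto composable process objects; the sketch below mimics that idea in plain Python, with toy functions standing in for the geo-processes offered through Web Services (names and operations are illustrative, not the thesis's components).

```python
# Toy illustration of chaining processing units, mirroring the visual
# composition of geo-processes; process names and operations are assumptions.
from typing import Callable, Iterable

Process = Callable[[object], object]

def chain(*steps: Process) -> Process:
    """Compose processing units left to right, like linking 3D components."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

# Stand-ins for remote geo-processes (e.g. exposed through Web Services).
def load_points(region: str) -> Iterable[tuple]:
    return [(41.1, -8.6, 12.0), (41.2, -8.5, 48.0)]          # (lat, lon, elevation)

def filter_above(threshold: float) -> Process:
    return lambda pts: [p for p in pts if p[2] > threshold]  # keep high ground

def count(pts) -> int:
    return len(pts)

pipeline = chain(load_points, filter_above(20.0), count)
print(pipeline("porto"))   # -> 1
```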
Abstract:
Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
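To give an idea of the kind of structured interchange involved, the sketch below parses a heavily simplified, MAGE-ML-inspired XML fragment with the Python standard library; the element and attribute names are illustrative and do not reproduce the actual MAGE object model.

```python
# Parse a simplified, MAGE-ML-inspired fragment; element/attribute names
# are illustrative only and not the real MAGE object model.
import xml.etree.ElementTree as ET

FRAGMENT = """
<Experiment identifier="E-0001" name="heat-shock time course">
  <BioAssay identifier="BA-1" arrayDesign="Agilent-014850"/>
  <BioAssay identifier="BA-2" arrayDesign="Agilent-014850"/>
</Experiment>
"""

root = ET.fromstring(FRAGMENT)
print(root.get("identifier"), "-", root.get("name"))
for assay in root.findall("BioAssay"):
    print("  assay:", assay.get("identifier"), "on", assay.get("arrayDesign"))
```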
Abstract:
Part 21: Mobility and Logistics
Abstract:
Within the scope of the Portuguese State's obligations to guarantee the security of its citizens, an assessment is made, in countries or regions with national communities, of the risk to the lives of the national citizens who reside or find themselves there, it being understood, in the light of customary international law, that a military intervention to extract non-combatant nationals from these risk zones may legitimately be carried out. This work intends to contribute to a reflection on geospatial support for an operation to extract non-combatant national citizens, known as a NEO (non-combatant evacuation operation). Given the importance of holistic knowledge of the operational environment for military commanders, Geographic Information Systems play a fundamental role in the analysis, contextualization and visualization of geospatial information, acting as a valuable decision-support system. Decision-making draws on contributions from several areas of knowledge, so it is essential that planning be based on the same geospatial information, avoiding a multitude of geospatial data that are not always coherent, up to date and accessible to all who need them; this work aims to contribute to solving that problem. The scarcity of geographic data in the areas where this type of operation may take place, the relevance and suitability of using open spatial data, data models, and the way the information can be made available are also addressed.
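One common way of making such geospatial data available in an open, interoperable form is GeoJSON (RFC 7946); the minimal sketch below builds a feature for a hypothetical assembly point, with coordinates and attributes invented purely for illustration.

```python
# Build a minimal GeoJSON feature for a hypothetical assembly/evacuation point.
# Coordinates and properties are invented for illustration.
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [13.234, -8.838],   # GeoJSON order: longitude, latitude
    },
    "properties": {
        "role": "assembly_point",          # assumed attribute in the data model
        "capacity": 150,
        "source": "open data",
    },
}

feature_collection = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(feature_collection, indent=2))
```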
Abstract:
Brands are the lifestyles that consumers choose to buy in order to gain the value offered by the company, to be part of the community created through the brand equity elements and validated in the purchase of the products. Companies have understood how important it is to build a strong brand, and many of them spend millions on aligning the brand with the design and style of their products, projecting the face and values of the company into advertising campaigns. One of the most popular methods is endorsement: placing a renowned celebrity and leveraging the positive feedback of those customers who also follow the activities of the star whose face is on the cover of the marcom campaign. Celebrities have long been used to promote brands and to sell products and services. Research has shown that more attractive brand spokespeople can improve recall, attract more interest to the promotional campaign, and have a stronger influence on customers' intention to buy the product (Kahle and Homer, 1985). The main purpose of this research is to investigate how celebrity endorsements influence the brand equity dimensions (brand loyalty, brand awareness, perceived quality and brand associations) and stimulate consumers' word-of-mouth through brand identification, growth in interest and advertising memorability. The hypotheses were tested with the aid of Structural Equation Modelling (SEM) in PLS (Partial Least Squares) software. The survey comprises a target group of 589 respondents from three countries: Brazil, Moldova and Portugal. Results evidence that the Attitude towards the Celebrity influences different Brand Equity dimensions and affects brand identification, growth in advertisement interest and advertising memorability, generating positive word of mouth (or negative, depending on the type of advertisement and reputation). Based on these findings, we suggest further investigation in this area, with the possibility of gathering more data about the different fields of marcom and the types of celebrity endorsement that are most appropriate for a given type of business.
Abstract:
The recent widespread use of social media platforms and web services has led to a vast amount of behavioral data that can be used to model socio-technical systems. A significant part of these data can be represented as graphs or networks, which have become the prevalent mathematical framework for studying the structure and dynamics of complex interacting systems. However, analyzing and understanding these data presents new challenges due to their increasing complexity and diversity. For instance, the characterization of real-world networks needs to account for their temporal dimension and to incorporate higher-order interactions beyond the traditional pairwise formalism. The ongoing growth of AI has led to the integration of traditional graph mining techniques with representation learning and low-dimensional embeddings of networks to address these challenges. These methods capture the underlying similarities and geometry of graph-shaped data, generating latent representations that enable the resolution of various tasks, such as link prediction, node classification, and graph clustering. As these techniques gain popularity, there is also a growing concern about their responsible use. In particular, increasing emphasis has been placed on addressing the limited interpretability of graph representation learning. This thesis contributes to the advancement of knowledge in the field of graph representation learning and has potential applications in a wide range of complex systems domains. We initially focus on forecasting problems related to face-to-face contact networks with time-varying graph embeddings. Then, we study hyperedge prediction and reconstruction with simplicial complex embeddings. Finally, we analyze the problem of interpreting latent dimensions in node embeddings for graphs. The proposed models are extensively evaluated in multiple experimental settings, and the results demonstrate their effectiveness and reliability, achieving state-of-the-art performance and providing valuable insights into the properties of the learned representations.
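A minimal sketch of the embedding-based link prediction idea discussed above: nodes are embedded with the leading eigenvectors of the adjacency matrix and candidate links are scored by inner product. This is a generic illustration using networkx and numpy, not any of the models proposed in the thesis.

```python
# Generic illustration of embedding-based link prediction:
# spectral node embeddings scored by inner product (not the thesis's models).
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                      # small benchmark graph
A = nx.to_numpy_array(G)

# Leading eigenvectors of the adjacency matrix as d-dimensional embeddings.
d = 8
eigvals, eigvecs = np.linalg.eigh(A)            # eigh: A is symmetric
emb = eigvecs[:, -d:] * np.sqrt(np.abs(eigvals[-d:]))

def score(u: int, v: int) -> float:
    """Higher inner product -> more likely link under this toy model."""
    return float(emb[u] @ emb[v])

# Rank a few non-adjacent pairs by predicted link score.
non_edges = list(nx.non_edges(G))[:200]
ranked = sorted(non_edges, key=lambda e: score(*e), reverse=True)
for u, v in ranked[:5]:
    print(f"candidate link ({u:2d}, {v:2d}) score={score(u, v):.3f}")
```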
Abstract:
In recent years, we have witnessed great changes in the industrial environment as a result of the innovations introduced by Industry 4.0, especially in the integration of the Internet of Things, Automation and Robotics in the manufacturing field. The project presented in this thesis lies within this innovation context and describes the implementation of an Image Recognition application focused on the automotive field. The project aims at helping the supply-chain operator to perform an effective and efficient check of the homologation tags present on vehicles. The user's contribution consists of taking a picture of the tag, and the application, exploiting Amazon Web Services, automatically returns the result of the check regarding the correctness of the tag, its correct positioning within the vehicle, and the presence of faults or defects on the tag. To implement this application we combined two IoT platforms widely used in the industrial field: Amazon Web Services (AWS) and ThingWorx. AWS exploits Convolutional Neural Networks to perform Text Detection and Image Recognition, while PTC ThingWorx manages the user interface and the data manipulation.
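The thesis's exact AWS-plus-ThingWorx pipeline is not reproduced here, but the sketch below shows the kind of text-detection call involved, using Amazon Rekognition through boto3; the expected tag string, the confidence threshold and the simple matching rule are assumptions for illustration.

```python
# Sketch of a homologation-tag text check using Amazon Rekognition via boto3.
# The expected tag string and the simple matching rule are assumptions.
import boto3

EXPECTED_TAG = "E9*10R-00/1234*567"   # illustrative homologation code

rekognition = boto3.client("rekognition")  # credentials/region taken from the environment

with open("tag_photo.jpg", "rb") as image_file:
    response = rekognition.detect_text(Image={"Bytes": image_file.read()})

lines = [
    d["DetectedText"]
    for d in response["TextDetections"]
    if d["Type"] == "LINE" and d["Confidence"] > 90
]

print("Detected:", lines)
print("Tag OK" if any(EXPECTED_TAG in line for line in lines) else "Tag mismatch")
```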
Abstract:
App Engine takes its name from the English terms "application" and "engine". It is a commercial service implemented by Google, Inc. that follows the principles of cloud computing and enables customers to develop their own applications. A service of one's own design can be programmed on the platform and made available over the Internet, either privately or publicly. It is thus a distributed server system that offers an application platform which adapts dynamically to load and in which the customer does not rent virtual machines. The storage capacity offered by the system is also available flexibly. The bachelor's thesis itself examines in more detail how an application is implemented on the service, as well as the service's restrictions and suitability. It begins by reviewing the cloud concept, of which many computer users have an unclear understanding. Different systems can be built in very many ways; we limit our treatment to feasible, general solutions.
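As a rough idea of the kind of application deployed to App Engine's standard environment, the sketch below is a minimal Python WSGI app using Flask; in practice a small app.yaml declaring the runtime would accompany it. The route and message are illustrative.

```python
# Minimal WSGI application of the kind deployed to App Engine's
# standard environment; the route and message are illustrative.
from flask import Flask

app = Flask(__name__)

@app.get("/")
def index() -> str:
    return "Hello from a cloud-hosted service!"

if __name__ == "__main__":
    # Local development server; in production the platform serves the WSGI app.
    app.run(host="127.0.0.1", port=8080, debug=True)
```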