30 results for Proxy servers
Abstract:
This paper presents a low-cost scaled model of a silo for drying and airing cereal grains. It allows the control and monitoring of several parameters associated with the silo's operation through a remotely accessible infrastructure. The scaled model consists of a 2.50 m wide × 2.10 m long plant with all control and monitoring capabilities provided by micro-Web servers. An application running on the micro-Web servers stores all parameters in a database for later analysis. The implemented model aims to support a remote experimentation facility for technological education, research-oriented tutorials, and industrial applications. Given the low-cost requirement, this remote facility can be easily replicated in other institutions to support a network of remote labs, which accommodates the concurrent access of several users (e.g. students).
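Since the abstract describes micro-Web servers that expose monitoring endpoints and persist readings to a database, a minimal sketch of such a logging endpoint may help. This assumes the JDK's built-in HTTP server and an SQLite datastore via the sqlite-jdbc driver; the endpoint path, query format and table layout are illustrative, not the paper's actual implementation.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SiloLogger {
    public static void main(String[] args) throws Exception {
        // Hypothetical datastore; requires the sqlite-jdbc driver on the classpath.
        Connection db = DriverManager.getConnection("jdbc:sqlite:silo.db");
        db.createStatement().execute(
            "CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL, ts DATETIME DEFAULT CURRENT_TIMESTAMP)");

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/log", exchange -> {
            // Query string is assumed to look like "sensor=temperature&value=23.5".
            String[] parts = exchange.getRequestURI().getQuery().split("&");
            String sensor = parts[0].split("=")[1];
            double value = Double.parseDouble(parts[1].split("=")[1]);
            try (PreparedStatement st =
                     db.prepareStatement("INSERT INTO readings (sensor, value) VALUES (?, ?)")) {
                st.setString(1, sensor);
                st.setDouble(2, value);
                st.executeUpdate();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            byte[] ok = "stored".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, ok.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(ok); }
        });
        server.start();
    }
}
```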
Abstract:
Consolidation consists of scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing applications to slow down. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
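As a rough illustration of the estimator's role, the sketch below averages noisy slowdown samples and flags a task whose projected completion time exceeds its deadline, which is the point at which the scheduling algorithm would be invoked. The simple averaging filter and all names are assumptions, not the authors' actual estimator.

```java
import java.util.List;

public class SlowdownEnforcer {

    /** Projects completion time under the mean observed slowdown. */
    static boolean meetsDeadline(double elapsed, double remainingIsolated,
                                 List<Double> slowdownSamples, double deadline) {
        double meanSlowdown = slowdownSamples.stream()
                .mapToDouble(Double::doubleValue).average().orElse(1.0);
        double projectedFinish = elapsed + remainingIsolated * meanSlowdown;
        return projectedFinish <= deadline;
    }

    public static void main(String[] args) {
        // Task ran 40 s; needs 60 s more in isolation; deadline at t = 120 s.
        List<Double> samples = List.of(1.30, 1.45, 1.38); // noisy meter readings
        if (!meetsDeadline(40, 60, samples, 120)) {
            System.out.println("deadline at risk -> invoke interference-aware scheduler");
        }
    }
}
```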
Abstract:
The two largest causes of battery consumption on mobile devices are related to the display and network operations. Since most applications need to share data and communicate with remote servers, communications should be as lightweight and efficient as possible. In network communication, serialization plays a central role as the process of converting an object into a stream of bytes. One of the most popular data-interchange formats is JSON (JavaScript Object Notation). This paper presents a survey of JSON parsers in mobile scenarios. The aim of the survey is to find the most efficient JSON parser for mobile communications characterised by a high transfer rate of small amounts of data. In the performance benchmark we compare the time required to read and write data with several popular JSON parser implementations, such as Gson, Jackson, org.json and others. The results of this survey are important for anyone who needs to select an efficient parser for mobile communication.
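A minimal round-trip timing harness in the spirit of this benchmark could look as follows, assuming Gson and Jackson on the classpath. The payload shape and iteration count are arbitrary choices, and a rigorous benchmark would use a harness such as JMH to control for JIT warm-up.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.gson.Gson;

public class JsonBench {
    public static class Msg {            // small message typical of mobile traffic
        public int id = 42;
        public String user = "alice";
        public double[] position = {41.17, -8.61};
    }

    public static void main(String[] args) throws Exception {
        final int N = 100_000;
        Gson gson = new Gson();
        ObjectMapper jackson = new ObjectMapper();
        Msg msg = new Msg();

        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            gson.fromJson(gson.toJson(msg), Msg.class);          // write then read
        }
        long gsonNs = (System.nanoTime() - t0) / N;

        t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            jackson.readValue(jackson.writeValueAsString(msg), Msg.class);
        }
        long jacksonNs = (System.nanoTime() - t0) / N;

        System.out.printf("Gson: %d ns/op, Jackson: %d ns/op%n", gsonNs, jacksonNs);
    }
}
```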
Abstract:
The current ubiquitous network access and increase in network bandwidth are driving the sales of mobile location-aware user devices and, consequently, the development of context-aware applications, namely location-based services. The goal of this project is to provide consumers of location-based services with a richer end-user experience by means of service composition, personalization, device adaptation and continuity of service. Our approach relies on a multi-agent system composed of proxy agents that act as mediators and providers of personalization meta-services, device adaptation and continuity of service for consumers of pre-existing location-based services. These proxy agents, which have Web service interfaces to ensure a high level of interoperability, perform service composition while taking into consideration the preferences of the users and the limitations of the user devices, making the usage of different types of devices seamless for the end-user. To validate this approach and evaluate its performance, use cases were defined, tests were conducted and the gathered results demonstrated that the initial goals were successfully fulfilled.
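The meta-services listed above suggest an agent interface along the following lines; all type and method names are illustrative guesses, not the project's actual Web service API.

```java
import java.util.List;

public interface LocationProxyAgent {

    /** Compose several pre-existing location-based services into one answer. */
    String composeServices(String userId, double lat, double lon, List<String> serviceIds);

    /** Filter and rank a result according to the stored preferences of the user. */
    String personalize(String userId, String rawResult);

    /** Re-render a result for the capabilities of the current device. */
    String adaptToDevice(String deviceProfile, String result);

    /** Carry an ongoing session over when the user switches devices. */
    void transferSession(String sessionId, String targetDeviceId);
}
```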
Abstract:
In this paper we describe a low-cost distributed system intended to increase the positioning accuracy of outdoor navigation systems based on the Global Positioning System (GPS). Since the accuracy of absolute GPS positioning is insufficient for many outdoor navigation tasks, another GPS-based methodology, Differential GPS (DGPS), was developed in the nineties. The differential or relative positioning approach is based on the calculation and dissemination of the range errors of the received GPS satellites. GPS/DGPS receivers correlate the broadcast GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that, first, establishes Internet links with remote DGPS sources and, then, performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate outdoor navigation tasks.
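A minimal sketch of the middle tier, under the assumption that corrections arrive as a raw byte stream over TCP, is a simple relay from the remote DGPS source to a campus client; the host name and ports below are placeholders.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DgpsRelay {
    public static void main(String[] args) throws Exception {
        try (Socket source = new Socket("dgps.example.org", 2101);   // remote DGPS source (placeholder)
             ServerSocket campus = new ServerSocket(5000);           // campus-side entry point
             Socket client = campus.accept()) {                      // single client for brevity
            InputStream in = source.getInputStream();
            OutputStream out = client.getOutputStream();
            byte[] buf = new byte[512];
            int n;
            while ((n = in.read(buf)) != -1) {                       // copy corrections byte-for-byte
                out.write(buf, 0, n);
                out.flush();
            }
        }
    }
}
```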
Abstract:
The accuracy of Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) measurements is insufficient for many outdoor navigation tasks. As a result, in the late nineties, a new methodology, Differential GPS (DGPS), was developed. The differential approach is based on the calculation and dissemination of the range errors of the received GPS satellites. GPS/DGPS receivers correlate the broadcast GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that establishes Internet links with remote DGPS sources and performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate (submetric) outdoor navigation tasks.
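For context, the differential correction itself is usually expressed as follows (the standard textbook form, not necessarily the paper's notation): a reference station at a known position computes, per visible satellite, a pseudorange correction that rovers add to their own measurements.

```latex
% Reference station (known position) computes, per visible satellite i,
% the pseudorange correction:
\[
\mathrm{PRC}^{(i)}(t) = \rho^{(i)}_{\mathrm{geometric}}(t) - \rho^{(i)}_{\mathrm{measured}}(t)
\]
% A rover applies the broadcast correction to its own measurement:
\[
\tilde{\rho}^{(i)}(t) = \rho^{(i)}_{\mathrm{rover}}(t) + \mathrm{PRC}^{(i)}(t)
\]
```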
Abstract:
Although the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) is, de facto, the standard positioning system used in outdoor navigation, it does not provide, per se, all the features required to perform many outdoor navigational tasks. The accuracy of the GPS measurements is the most critical issue. The quest for higher positioning accuracy led to the development, in the late nineties, of the Differential Global Positioning System (DGPS). The differential GPS method detects the range errors of the received GPS satellites and broadcasts them. DGPS/GPS receivers correlate the DGPS data with the GPS satellite data they are receiving, granting users increased accuracy. DGPS data is broadcast using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to have access, within the ISEP campus, to DGPS correction data. To achieve this objective, we designed and implemented a distributed system composed of two interconnected main modules: a distributed application responsible for establishing the data link over the Internet between the remote DGPS stations and the campus, and the campus-wide DGPS data server application. The DGPS data Internet link is provided by a two-tier client/server distributed application whose server side is connected to the DGPS station and whose client side is located at the campus. The second unit, the campus DGPS data server application, diffuses the DGPS data received at the campus via the Intranet and via a wireless data link. The wireless broadcast is intended for DGPS/GPS portable receivers equipped with an air interface, while the Intranet link serves DGPS/GPS receivers with just an RS232 DGPS data interface. The DGPS data Internet link servers receive the DGPS data from the DGPS base stations and forward it to the DGPS data Internet link client, which in turn outputs the received data to the campus DGPS data server application. The distributed system is expected to provide adequate support for accurate (sub-metric) outdoor campus navigation tasks. This paper describes the overall distributed application in detail.
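The campus-wide diffusion module described above could, in outline, keep a list of Intranet subscribers and forward each received DGPS frame to all of them, as in this sketch. Framing, the wireless path and the RS232 output are omitted, and all names are illustrative.

```java
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CampusDgpsServer {
    private final List<Socket> subscribers = new CopyOnWriteArrayList<>();

    /** Intranet DGPS/GPS receivers attach here; runs until externally stopped. */
    void acceptSubscribers(int port) throws Exception {
        try (ServerSocket ss = new ServerSocket(port)) {
            while (true) {
                subscribers.add(ss.accept());
            }
        }
    }

    /** Forward one DGPS frame, received from the Internet-link client, to every subscriber. */
    void diffuse(byte[] frame) {
        for (Socket s : subscribers) {
            try {
                OutputStream out = s.getOutputStream();
                out.write(frame);
                out.flush();
            } catch (Exception e) {
                subscribers.remove(s);          // drop dead connections
            }
        }
    }
}
```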
Abstract:
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant when it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F that allows us (i) to identify, at design time, which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
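To make the offline step concrete, here is a first-fit packing of task utilisations into unit-capacity servers. This shows the general bin-packing idea only; the actual NPS-F packing (and the amended scheme proposed here) works with inflated server capacities derived from a design parameter, which this sketch omits.

```java
import java.util.ArrayList;
import java.util.List;

public class FirstFitPacking {
    /** Packs task utilisations first-fit into servers of capacity 1.0. */
    public static List<List<Double>> pack(double[] utilisations) {
        List<List<Double>> servers = new ArrayList<>();
        List<Double> load = new ArrayList<>();               // current load of each server
        for (double u : utilisations) {
            int i = 0;
            while (i < servers.size() && load.get(i) + u > 1.0) i++;  // first server that fits
            if (i == servers.size()) { servers.add(new ArrayList<>()); load.add(0.0); }
            servers.get(i).add(u);
            load.set(i, load.get(i) + u);
        }
        return servers;
    }

    public static void main(String[] args) {
        System.out.println(pack(new double[]{0.6, 0.5, 0.4, 0.3, 0.2}));
        // -> [[0.6, 0.4], [0.5, 0.3, 0.2]]
    }
}
```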
Abstract:
Master's in Informatics Engineering, Specialization Area in Knowledge and Decision Technologies.
Abstract:
The 30th ACM/SIGAPP Symposium on Applied Computing (SAC 2015), Embedded Systems track, 13-17 April 2015, Salamanca, Spain.
Abstract:
We investigate whether the positive relation between accounting accruals and information asymmetry documented for U.S. stock markets also holds for European markets, considered both as a whole and at the country level. This research is relevant because this relation is likely to be affected by differences in the accounting standards used by companies for financial reporting, in the traditional reliance on the banking system or capital markets for firm financing, and in legal systems and cultural environment. We find that, in European stock markets, discretionary accruals are positively related to the Corwin and Schultz high-low spread estimator used as a proxy for information asymmetry. Our results suggest that the earnings-management component of accruals outweighs the informational component, but the significance of the relation varies across countries. Further, the association tends to be stronger for firms with the highest levels of positive discretionary accruals. Consistent with the evidence provided by those authors, our results also suggest that the high-low spread estimator is more efficient than the closing bid-ask spread when analysing the impact of information quality on information asymmetry.
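For reference, the high-low spread estimator of Corwin and Schultz (2012) is built from daily high and low prices over consecutive two-day windows; as we recall it, the estimator takes the following form (readers should verify against the original paper):

```latex
% Two-day quantities from daily highs H and lows L:
\[
\beta_t = \left[\ln\!\frac{H_t}{L_t}\right]^2 + \left[\ln\!\frac{H_{t+1}}{L_{t+1}}\right]^2,
\qquad
\gamma_t = \left[\ln\!\frac{H_{t,t+1}}{L_{t,t+1}}\right]^2,
\]
% where H_{t,t+1} and L_{t,t+1} are the high and low over days t and t+1.
\[
\alpha_t = \frac{\sqrt{2\beta_t} - \sqrt{\beta_t}}{3 - 2\sqrt{2}}
         - \sqrt{\frac{\gamma_t}{3 - 2\sqrt{2}}},
\qquad
S_t = \frac{2\,(e^{\alpha_t} - 1)}{1 + e^{\alpha_t}}.
\]
```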
Abstract:
Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality-of-service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as the optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance-deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill the contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
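One way to picture the combined interference- and power-awareness is a placement rule that, among hosts with enough spare capacity, skips those whose predicted interference would break the SLA and picks the one with the smallest marginal power draw. Everything below (the linear power model, the thresholds, the watt figures) is an assumption made for illustration, not the paper's actual mechanism.

```java
import java.util.List;

public class PowerAwarePlacer {
    static class Host {
        double cpuUsed, cpuCap;           // normalised CPU load and capacity
        double interference;              // predicted slowdown factor for co-hosted VMs
    }

    /** Marginal power of adding `demand` CPU, under a linear idle/peak model. */
    static double powerDelta(Host h, double demand, double idleW, double peakW) {
        return (peakW - idleW) * demand / h.cpuCap;
    }

    /** Returns the chosen host, or null when a new host must be powered on. */
    static Host place(List<Host> hosts, double demand, double maxInterference) {
        Host best = null;
        double bestDelta = Double.MAX_VALUE;
        for (Host h : hosts) {
            if (h.cpuUsed + demand > h.cpuCap) continue;          // no room
            if (h.interference > maxInterference) continue;       // SLA at risk
            double d = powerDelta(h, demand, 100, 250);           // assumed watt figures
            if (d < bestDelta) { bestDelta = d; best = h; }
        }
        if (best != null) best.cpuUsed += demand;
        return best;
    }
}
```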
Abstract:
Centralized Technical Management Systems (SGTC) are essential in large buildings, since they make it possible to monitor, control, command and manage the building's various installations in an easy, integrated and optimized way. The state of the art of an SGTC is a distributed architecture built around Technical Management Boards (QGT) containing Automation Servers: devices native to the communication protocols most commonly used in this field, with built-in, pre-defined functions and programming, each responsible for integrating, within its area of influence, a set of SGTC points defined at the design stage. Schneider Electric's new Smart Panels concept brings a new philosophy of installation, integration and simplified communication between the switchboard devices that generate data relevant to the user and trigger actions useful for managing an installation. The system offers a wide and diverse range of options for energy measurement and for monitoring both consumption and the switchgear itself, with communication to the installation's management and control system integrated into the switchboard itself, thereby dispensing with an external system (QGT) for collecting, communicating and processing information. After a theoretical study of the various topics, questions and considerations related to SGTC, Smart Panels and their integration, the design and comparative study of an SGTC with and without Smart Panels in a large shopping centre led to the conclusion that integrating Smart Panels into an SGTC can bring advantages in terms of simplifying the design, installation, commissioning, programming and operation of the electrical installation, translating into a reduction of the typically high labour costs inherent in all these processes.
Abstract:
Application development is a booming area in the information technology market and, as such, it evolves quickly. The drivers of this evolution are communications and computing equipment, which grow ever more robust and faster. Applications must keep up with this evolution, adopting more complex and complete architectures in order to handle all client requests and produce responses within acceptable times. This dissertation discusses several application architectures that can be implemented depending on the context, for example, a scenario with few or many clients, or with little or much capital to invest in servers. A levelling of the concepts underlying application development is provided. The state of the art of web and object-oriented programming languages, databases, JavaScript frameworks, application architectures and, finally, approaches for defining measurable goals in application development is then analysed. Two prototypes were implemented: one with a multi-tier architecture using several programming languages and technologies, and the other with a single (monolithic) tier and a single programming language. The two prototypes were tested and compared in order to choose one of the architectures for a given usage scenario.
Abstract:
In recent years, markets have steadily moved to Internet platforms, as a way of improving not only the services provided but also product sales and their internationalization. The growing demand for this type of software, together with its constant evolution and updating, has driven these applications to grow in functionality and complexity. This makes it increasingly difficult to assemble teams capable of maintaining and developing these systems without incurring large costs for organizations. In this context, several tools have emerged that allow the creation of pre-built web application solutions, known as "e-commerce applications". Although the user does not necessarily need programming skills to install them, these platforms are quite restricted both in the services that can be used and in their scalability, since they usually run on specific servers and the configurations required for installation can become quite complex to carry out. Within the scope of this master's dissertation, we propose a model of a system architecture based on MDA mechanisms for the retail area, particularly in e-commerce environments. The main types of e-commerce will first be systematized from a historical perspective. MDA will likewise be framed within the development of an e-commerce system; in this sense, the differences between the typical software development model and software development guided by MDA methodologies will be considered. In the specification and development of the proposed model, a requirements analysis will be carried out, together with the proposal of the architecture model of a system based on MDA mechanisms, guided by the requirements and architecture defined in the analysis phase. Finally, in order to assess the expected outcome for a system guided by MDA-defined methodologies, tests will be carried out on the developed system to analyse its performance and validate its suitability within the process of developing e-commerce systems.