29 results for Wearable Computing Augmented Reality User Interface Smart Glass Android
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This work proposes the design of a mobile system to assist car drivers in a smart-city environment oriented to the upcoming reality of Electric Vehicles (EV). Given the new reality of smart cities, with EV introduction, Smart Grids (SG), and Electrical Markets (EM) with deregulation of electricity production and use, drivers will need more information for decision-making and mobility purposes. A mobile application that recommends relevant information will help drivers deal with this new reality, providing guidance on traffic, the battery charging process, and city mobility infrastructures (e.g. public transportation information, parking place availability, and car and bike sharing systems). Since this is an upcoming reality subject to process changes, development must be based on agile approaches (Web services).
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGA). General-Purpose Graphics Processing Units (GPGPU) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performances. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption, and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems, and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach that aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL). The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over large datasets while maintaining the method's accuracy.
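The linear mixing model underlying this abstract can be illustrated with a minimal sketch. This is not the paper's SISAL/CUDA implementation; it is an unconstrained least-squares toy version in NumPy, with all names and data invented for illustration:

```python
import numpy as np

def unmix_least_squares(Y, M):
    """Estimate abundances A from the linear mixing model Y = M @ A.

    Y : (bands, pixels) observed hyperspectral data
    M : (bands, endmembers) pure spectral signatures
    Returns A : (endmembers, pixels) unconstrained least-squares abundances.
    (SISAL additionally estimates M itself and enforces simplex constraints.)
    """
    A, *_ = np.linalg.lstsq(M, Y, rcond=None)
    return A

# Toy example: 3 bands, 2 endmembers, 4 mixed pixels.
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
A_true = np.array([[0.7, 0.2, 0.5, 1.0], [0.3, 0.8, 0.5, 0.0]])
Y = M @ A_true                      # each pixel is a mixture of the endmembers
A_est = unmix_least_squares(Y, M)
assert np.allclose(A_est, A_true, atol=1e-8)
```

In the noiseless toy case the abundances are recovered exactly; the paper's contribution is doing the much harder joint estimation of the simplex on a GPU.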
Abstract:
The need for computational power is growing in the various areas of human activity, both in industry and in academic environments. Grid Computing allows dispersed computational resources to be connected so that they can be used more effectively, providing users with simplified access to the computational power of multiple systems. The first Grid Computing projects involved connecting parallel machines or high-performance, high-cost clusters, available only at a few institutions. In contrast to the high cost of supercomputers, personal computers and the Internet have evolved significantly in recent years. Using computers dispersed across a WAN can provide a very interesting environment for high-performance processing. Grid systems make it possible to use a set of personal computers to provide computation that draws on resources that would otherwise go unused. This work consists of a study of Grid Computing in terms of concept and architecture, and an analysis of its current state. As a complement, a component was developed that allows Grid services (Grid Services) to be developed more effectively than with the service-support model currently in use. This component is made available as a plug-in for the Eclipse IDE.
Abstract:
The possibility of selling non-prescription medicines (MNSRM) outside pharmacies caused an enormous revolution in the sector. This measure aimed, among other things, to reduce the prices of these medicines and to improve consumer access to these products by increasing the number of points of sale. However, apart from the effect this measure had on medicine prices, little is known about the quality of the services provided at the points of sale or how that quality is perceived. Objective of the study: to characterize the population of users of MNSRM points of sale and to determine their degree of satisfaction with the organization and the service provided on site.
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to improve coding efficiency. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
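The syndrome-based Slepian-Wolf idea behind this abstract can be sketched in a few lines: the encoder transmits only the syndrome of the source word, and the decoder picks the side-information hypothesis most consistent with it. The parity-check matrix and hypotheses below are toy values of my own; the paper uses LDPC codes decoded with belief propagation rather than this brute-force check:

```python
import numpy as np

# Toy parity-check matrix (2 checks over 4 bits); real LDPC matrices are
# large and sparse.
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)

def syndrome(H, x):
    """Binary syndrome s = H @ x (mod 2)."""
    return (H @ x) % 2

source = np.array([1, 0, 1, 1], dtype=np.uint8)
s = syndrome(H, source)              # the encoder transmits only s

# Decoder: rank side-information hypotheses by syndrome agreement.
hypotheses = [np.array([1, 0, 1, 0], dtype=np.uint8),   # noisy SI
              np.array([1, 0, 1, 1], dtype=np.uint8)]   # good SI
best = min(hypotheses, key=lambda y: np.sum(syndrome(H, y) != s))
assert np.array_equal(best, source)
```

The point of the paper is that several such hypotheses, each with its own noise channel, can be fused inside one iterative decoder instead of being checked independently as here.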
Abstract:
This work proposes the design of a system to create and handle Electric Vehicle (EV) charging procedures, based on intelligent processes. Due to the limitations of the electrical power distribution network and the absence of smart meter devices, Electric Vehicle charging should be performed in a balanced way, taking into account past experience, weather information based on data mining, and simulation approaches. To allow information exchange and help user mobility, a mobile application was also created to assist the EV driver in these processes. The proposed Smart Electric Vehicle Charging System uses Vehicle-to-Grid (V2G) technology to connect Electric Vehicles, as well as renewable energy sources, to Smart Grids (SG). The system also explores the new paradigm of Electrical Markets (EM), with deregulation of electricity production and use, in order to obtain the best conditions for commercializing electrical energy.
Abstract:
With waste associated with industrial activity in Portugal and in global markets, and its inherent costs, being one of the greatest concerns at all levels of business management, the Lean philosophy emerges as an aid and a path towards solving this problem. The Lean concept has always had, and still has, enormous emphasis in industry, and its adoption yields good results in cost reduction, in the overall quality of the goods produced, and in production control in general; it is also a powerful tool for strengthening the relationship between the different players in a product's value chain, above all with suppliers and customers. In environments where the most advanced companies seek to improve their competitiveness through transparent management ("Glass Wall Management"), whereby "all relevant information is shared so that everyone understands the situation" (Suzaki, K., 1993), an organizational structure that enables this transparency, and the consequent maturity of companies, becomes ever more important. This work describes some transparent management processes developed over the last two years in a Portuguese SME, examining the transparent management process in place and the tools that help the company, which can largely be extrapolated to other Portuguese SMEs so that important and relevant information is shared by everyone in the corporate structure, being understood and developed by all through Editions and Revisions of the company's most important documents.
In this study, twenty-one Portuguese SMEs with a Make-to-Order (MTO) production typology from the upholstery/furniture sector were contacted and asked to complete a questionnaire, with the aim of verifying the use of the "Glass Wall Management" methodology at the scale of Portuguese companies and their interpretation of the general Lean concept as a philosophy for reducing materials, times, and costs.
Abstract:
We investigate the phase behaviour of 2D mixtures of bi-functional and tri-functional patchy particles and 3D mixtures of bi-functional and tetra-functional patchy particles by means of Monte Carlo simulations and Wertheim theory. We start by computing the critical points of the pure systems and then investigate how the critical parameters change upon lowering the temperature. We extend the successive umbrella sampling method to mixtures, making it possible to extract information about the phase behaviour of the system at a fixed temperature over the whole range of densities and compositions of interest. (C) 2013 AIP Publishing LLC.
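At the core of the Monte Carlo simulations mentioned above is the Metropolis acceptance rule. The following is a minimal sketch on a toy one-dimensional harmonic potential (the paper's systems are grand-canonical patchy-particle models with umbrella sampling over particle number; the function names and parameters here are illustrative only):

```python
import math
import random

def metropolis_step(x, energy, beta, rng, step=0.5):
    """Propose x -> x' and accept with probability min(1, exp(-beta * dE))."""
    x_new = x + rng.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return x_new
    return x

energy = lambda x: 0.5 * x * x       # toy harmonic potential, minimum at 0
rng = random.Random(42)
x, samples = 2.0, []
for _ in range(20000):
    x = metropolis_step(x, energy, beta=1.0, rng=rng)
    samples.append(x)
mean = sum(samples) / len(samples)   # should relax towards 0
```

Successive umbrella sampling adds a sequence of overlapping windows (here they would constrain the sampled range) so that rare states, such as those near a critical point, are visited often enough to estimate their probability.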
Abstract:
Final project report submitted for obtaining the degree of Master in Informatics and Computer Engineering
Abstract:
Thesis submitted in fulfilment of the requirements for the degree of Master in Electronic and Telecommunications Engineering
Abstract:
This paper focuses on the analysis of a demand response model in a smart grid context under a contingency scenario. A fuzzy clustering technique is applied to the developed demand response model, and an analysis is performed for the contingency scenario. Model considerations and architecture are described. The developed demand response model aims to support consumers' decisions regarding their consumption needs and possible economic benefits.
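The abstract does not say which fuzzy clustering technique is used; a common choice is fuzzy c-means, sketched below on invented one-dimensional consumption data (all names, data, and parameters are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on 1-D data X; returns centers and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # each row sums to 1
    for _ in range(iters):
        W = U ** m                           # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)  # weighted means
        d = np.abs(X[:, None] - centers[None, :]) + 1e-12
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy consumption profile (kWh): a low-demand and a high-demand group.
X = np.array([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
centers, U = fuzzy_c_means(X)
```

Unlike hard clustering, each consumer keeps a graded membership in every cluster, which is what makes the approach attractive for demand response, where consumption profiles rarely fall cleanly into one class.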
Abstract:
Although the computational power of mobile devices has been increasing, it is still not enough for some classes of applications. At present, these applications delegate the computing burden to servers located on the Internet. This model assumes always-on Internet connectivity and implies non-negligible latency. The thesis addresses the challenges of applying a mobile collaborative computing environment concept to wireless networks, and the contributions this entails. The goal is to define a reference architecture for high-performance mobile applications. Current work focuses on efficient data dissemination in a highly transitive environment, suitable for many mobile applications, and on the reputation and incentive system available in this mobile collaborative computing environment. To this end, we are improving our previously published reputation/incentive algorithm with knowledge of the usage patterns of the eduroam wireless network in the Lisbon area.
Abstract:
This letter presents a new parallel method for hyperspectral unmixing composed of the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures, and then SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. A design of the two methods for commodity graphics processing units is presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which grants the real-time response required by many remotely sensed hyperspectral applications.
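The two-stage structure of the pipeline (endmember extraction, then abundance estimation) can be sketched with crude stand-ins: picking extreme pixels in place of VCA's orthogonal-projection search, and per-pixel nonnegative least squares in place of SUNSAL. Everything here is an illustrative simplification, not the letter's GPU design:

```python
import numpy as np
from scipy.optimize import nnls

def extract_endmembers(Y, p):
    """Pick the p pixels of largest norm as crude endmember candidates
    (a stand-in for VCA, which works only if pure pixels exist in Y)."""
    idx = np.argsort(np.linalg.norm(Y, axis=0))[-p:]
    return Y[:, idx]

def estimate_abundances(Y, M):
    """Per-pixel nonnegative least squares (a stand-in for SUNSAL)."""
    return np.column_stack([nnls(M, Y[:, j])[0] for j in range(Y.shape[1])])

# Toy scene: 3 bands, 2 endmembers; pixels 0 and 1 are pure, pixel 2 is mixed.
M_true = np.array([[3.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
A_true = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]])
Y = M_true @ A_true
M_est = extract_endmembers(Y, 2)
A_est = estimate_abundances(Y, M_est)
assert np.allclose(M_est @ A_est, Y, atol=1e-8)
```

The per-pixel independence visible in `estimate_abundances` is exactly what makes the abundance stage so amenable to the GPU parallelization the letter describes.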
Abstract:
Physical computing has sparked a true global revolution in the way the digital world interfaces with the real world. From bicycle jackets with turn-signal lights to Twitter-controlled Christmas trees, the Do-it-Yourself (DiY) hardware movement has been driving endless innovations and stimulating an age of creative engineering. This ongoing (r)evolution has been led by popular electronics platforms such as the Arduino, the LilyPad, and the Raspberry Pi; however, these are not designed with the specific requirements of biosignal acquisition in mind. To date, the physiological computing community has severely lacked a parallel to what is found in the DiY electronics realm, especially with regard to suitable hardware frameworks. In this paper, we build on previous work developed within our group, focusing on an all-in-one, low-cost, and modular biosignal acquisition hardware platform that makes it quicker and easier to build biomedical devices. We describe the main design considerations, experimental evaluation, and circuit characterization results, together with the results of a usability study performed with volunteers from multiple target user groups, namely health sciences and electrical, biomedical, and computer engineering. Copyright © 2014 SCITEPRESS - Science and Technology Publications. All rights reserved.