77 results for Linked Data


Relevance: 20.00%

Abstract:

Although the Navigation Satellite Timing and Ranging (NAVSTAR) Global Positioning System (GPS) is, de facto, the standard positioning system used in outdoor navigation, it does not provide, by itself, all the features required to perform many outdoor navigational tasks. The accuracy of the GPS measurements is the most critical issue. The quest for more accurate position readings led to the development, in the late nineties, of the Differential Global Positioning System (DGPS). The differential GPS method detects the range errors of the received GPS satellites and broadcasts them. DGPS/GPS receivers correlate the DGPS data with the GPS satellite data they are receiving, granting users increased accuracy. DGPS data is broadcast using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to have access, within the ISEP campus, to DGPS correction data. To achieve this objective we designed and implemented a distributed system composed of two interconnected modules: a distributed application responsible for establishing the data link over the Internet between the remote DGPS stations and the campus, and the campus-wide DGPS data server application. The DGPS data Internet link is provided by a two-tier client/server distributed application in which the server side is connected to the DGPS station and the client side is located at the campus. The second unit, the campus DGPS data server application, distributes the DGPS data received at the campus via the Intranet and via a wireless data link. The wireless broadcast is intended for DGPS/GPS portable receivers equipped with an air interface, while the Intranet link serves DGPS/GPS receivers with only an RS-232 DGPS data interface.
The DGPS data Internet link servers receive the DGPS data from the DGPS base stations and forward it to the DGPS data Internet link client, which in turn outputs the received DGPS data to the campus DGPS data server application. The distributed system is expected to provide adequate support for accurate (sub-metric) outdoor campus navigation tasks. This paper describes the overall distributed application in detail.
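The forwarding role described above (base station → Internet-link server → Internet-link client → campus data server) is essentially a chain of byte relays. The sketch below illustrates that pattern with a minimal, generic TCP-style relay; the function name, framing and chunk size are assumptions for illustration, not the system's actual code.

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket, chunk: int = 1024) -> None:
    """Forward raw DGPS correction bytes from src to dst until src closes.

    Hypothetical sketch of the forwarding role played by both the
    Internet-link server and client; the real system's protocol and
    framing are not described in the abstract.
    """
    while True:
        data = src.recv(chunk)
        if not data:  # peer closed the connection
            break
        dst.sendall(data)
    dst.close()
```

In the real system the server side of such a relay sits next to the remote DGPS base station, while the client side runs at the campus and hands the stream to the campus-wide data server for Intranet and wireless redistribution.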

Relevance: 20.00%

Abstract:

Continuous process improvement programmes are increasingly the approach companies take to face the market. By implementing these programmes it is possible to make processes simpler and more standardised and, consequently, to reduce the costs of internal waste related to their quality. Quality improvement tools and the tools associated with Lean Thinking are an important pillar in the success of any continuous process improvement programme. These tools are useful means for analysing, controlling and organising data that is important for correct decision-making in organisations. The main objective of this project is the design and implementation of a quality improvement programme at Eurico Ferreira, S.A., based on the assessment of customer satisfaction and the application of 5S. In this context, the work was theoretically grounded in Quality Management, Lean Thinking and some tools from both fields. The company's business area to be addressed was then selected. After this selection, an initial diagnosis of the process was carried out, identifying several improvement points where some Lean Thinking tools were applied, namely Value Stream Mapping and the 5S methodology. The first made it possible to build a map of the current state of the process, representing all the participants as well as the flows of materials and information throughout the process. The 5S methodology made it possible to act on waste, identifying and implementing several improvements in the process. It was concluded that the implementation of these tools contributed effectively to the continuous improvement of process quality, and the coordination decided to extend the scope of the project to the remaining warehouses of the company's logistics centre.
Customer satisfaction, expressed through the favourable evolution of the service-level agreement, shows that the implemented tools have produced very positive results in the short term.

Relevance: 20.00%

Abstract:

This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (obtained with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) from the analysis of water samples collected monthly at seven monitoring sites during five campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to classifying the Leça River water quality as “bad” or “very bad”. PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all the sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in both space and time. The present study includes not only physical, chemical and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks, defining the minimum number of sites and their location. These tools can thus support appropriate management decisions.
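The grouping of monitoring sites by pollution pattern can be illustrated with a plain k-means clustering of per-site quality profiles. This is a toy stand-in for the study's PCA/CA workflow, not its actual code: the site values, the two summary variables and the initial centroids below are invented for illustration.

```python
from math import dist

def kmeans(points, centroids, iters=10):
    """Plain k-means: cluster sites by their (scaled) water-quality profile.

    `points` is a list of feature tuples; `centroids` the initial guesses.
    Returns the final cluster label of each point.
    """
    labels = []
    for _ in range(iters):
        # Assign each site to its nearest centroid.
        labels = [min(range(len(centroids)), key=lambda k: dist(p, centroids[k]))
                  for p in points]
        # Move each centroid to the mean of its members.
        for k in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

# Seven sites x two hypothetical summary variables, scaled to [0, 1].
# Site 1 (upstream) is far cleaner than sites 2-7, mirroring the study's finding.
sites = [(0.1, 0.1), (0.7, 0.8), (0.8, 0.7), (0.9, 0.9),
         (0.8, 0.8), (0.7, 0.9), (0.9, 0.8)]
labels = kmeans(sites, [[0.0, 0.0], [1.0, 1.0]])
```

With this toy data the upstream site separates into its own cluster, which is exactly the kind of structure that lets a monitoring network drop redundant downstream sites.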

Relevance: 20.00%

Abstract:

Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting because of its attractive mechanical properties. As a result, adhesives are increasingly used in industries such as automotive, aerospace and construction. It is therefore highly important to predict the strength of bonded joints, whether to assess the feasibility of joining during the fabrication of components (e.g. due to complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends, considering different values of adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of determining the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used together with a J-integral approach, which takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used to evaluate the crack tip opening and the adherend rotation at the crack tip during the test, supported by a Matlab® sub-routine for the automated extraction of these quantities. As output of this work, a comparative evaluation between bonded systems with different values of adherend thickness is carried out, and complete fracture data in tension is provided for the subsequent strength prediction of joints under identical conditions.
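To see why adherend thickness h matters so much in a DCB test, it helps to look at the textbook simple beam theory expression for the mode I strain energy release rate, G_I = 12 P² a² / (E b² h³). This is only an illustrative baseline: the study itself uses a compliance-based method plus a J-integral approach precisely because simple beam theory ignores the plasticity in the adhesive layer. The numeric values below are invented.

```python
def g_ic_sbt(P, a, E, b, h):
    """Mode I strain energy release rate of a DCB specimen by simple
    beam theory: G_I = 12 * P**2 * a**2 / (E * b**2 * h**3).

    P: applied load [N], a: crack length [m], E: adherend Young's
    modulus [Pa], b: specimen width [m], h: thickness of each adherend [m].
    Illustrative only -- neglects root rotation and adhesive plasticity.
    """
    return 12 * P**2 * a**2 / (E * b**2 * h**3)

# Hypothetical aluminium DCB: 100 N load, 50 mm crack, 25 mm width, 3 mm arms.
g = g_ic_sbt(100, 0.05, 70e9, 0.025, 0.003)  # [J/m^2]
```

Because h enters cubed, doubling the adherend thickness stiffens each arm eightfold and changes the energy available at the crack tip accordingly, which is why the comparative study across thickness values is informative.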

Relevance: 20.00%

Abstract:

This study identifies predictors and normative data for quality of life (QOL) in a sample of Portuguese adults from the general population. A cross-sectional correlational study was undertaken with two hundred and fifty-five (N = 255) individuals from the Portuguese general population (mean age 43 years, range 25–84 years; 148 females, 107 males). Participants completed the European Portuguese versions of the World Health Organization Quality of Life short-form instrument and of the Center for Epidemiologic Studies Depression Scale. Demographic information was also collected. Portuguese adults reported their QOL as good. The physical, psychological and environmental domains predicted 44 % of the variance of QOL. The strongest predictor was the physical domain and the weakest was social relationships. Age, educational level, socioeconomic status and emotional status were significantly correlated with QOL and explained 25 % of its variance; among these, the strongest predictor was emotional status, followed by education and age. QOL differed significantly according to marital status, place of residence (mainland or islands), type of cohabitants, occupation and health. The sample of adults from the general Portuguese population reported high levels of QOL. The life domain that best explained QOL was the physical domain, and among the other variables emotional status was the best predictor. These findings inform our understanding of QOL among adults from the Portuguese general population and can help researchers and practitioners using this assessment tool to compare their results with normative data.

Relevance: 20.00%

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Accounting and Finance, under the supervision of Dr. Carlos Mota.

Relevance: 20.00%

Abstract:

The emergence of the World Wide Web gave users a wide range of opportunities for accessing data and information. This access has become commonplace for any Web user, common or experienced, whether to obtain basic or more complex information. All this technological progress has given users access to a vast amount of information, scattered across the globe and, most of the time, without any links between its pieces. Obtaining information of interest on a given topic while having to resort to several sources to gather and compare everything becomes a time-consuming process for the user. The aim is to automate this process of collecting information from web pages as much as possible, giving the user algorithms and tools for automatic analysis and processing, thereby reducing the time and effort of carrying out tasks over web pages. This process is called Web Scraping. This work describes an architecture for an automatic, configurable web scraping system based on existing technologies, namely in the context of the semantic web. To this end, the work analyses the effects of applying Web Scraping by covering the following points:
• identification and analysis of several web scraping tools;
• identification of the process carried out by humans that complements current web scraping tools;
• design of an architecture, complementary to web scraping tools, that supports the user's web scraping process;
• development of a prototype based on existing tools and technologies;
• experiments in the application domain of Portuguese supermarket pages;
• analysis of the results obtained from these experiments.
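The extraction step at the heart of such a system can be sketched with the standard library's HTML parser. The markup below is hypothetical: real supermarket pages differ, and the thesis wraps this kind of extraction in a configurable, semantic-web-based architecture rather than hard-coding class names as done here.

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Minimal scraping step: pull product names and prices out of a
    supermarket-style listing. Assumes (hypothetically) that products
    are marked with class="name" and class="price" spans."""

    def __init__(self):
        super().__init__()
        self._field = None      # which field the next text node belongs to
        self.products = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in ("name", "price"):
            self._field = cls

    def handle_data(self, data):
        if self._field == "name":
            self.products.append({"name": data.strip()})
        elif self._field == "price":
            self.products[-1]["price"] = data.strip()
        self._field = None

page = '<div><span class="name">Arroz</span><span class="price">1.19</span></div>'
scraper = PriceScraper()
scraper.feed(page)
```

A configurable system would load the tag/class selectors from a per-site description instead of fixing them in code, which is what makes it adaptable to several supermarket sites at once.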

Relevance: 20.00%

Abstract:

This chapter examines the cross-cultural influence of training on the adjustment of international assignees. We focus on pre-departure training (PDT) before an international assignment. This is an important topic because, in today's globalized world, more and more expatriations are needed, and the absence of PDT may lead the expatriation experience to fail. Companies may neglect PDT due to cost-reduction practices and ignorance of the need for it. Data were collected through semi-structured interviews with 42 Portuguese international assignees and 18 organizational representatives from nine Portuguese companies. The results suggest that companies should develop PDT programs, particularly when the cultural distance to the host country is greater and when the company has no previous experience of expatriation to that country. The study is original because it details in depth the methods of PDT, its problems, and its consequences. Some limitations linked to the research design and detailed in the conclusion should be overcome in future studies.

Relevance: 20.00%

Abstract:

More than ever, the number of decision support methods and computer-aided diagnostic systems applied to various areas of medicine is increasing. In breast cancer research, much work has been done to reduce false positives when such systems are used as a double-reading method. In this study, we present a set of data mining techniques applied to building a decision support system for breast cancer diagnosis. The method is geared to assist clinical practice in identifying mammographic findings such as microcalcifications, masses and even normal tissue, in order to avoid misdiagnosis. A reliable database was used, with 410 images from about 115 patients, containing previous reviews performed by radiologists covering microcalcification, mass and normal tissue findings. Two feature extraction techniques were used: the gray level co-occurrence matrix and the gray level run length matrix. For classification purposes, we considered various scenarios according to distinct patterns of injuries, and several classifiers, in order to determine the best performance in each case. The classifiers used were Naïve Bayes, Support Vector Machines, k-nearest Neighbors and Decision Trees (J48 and Random Forests). The results in distinguishing mammographic findings revealed high positive predictive values (PPV) and very good accuracy. Related results on the classification of breast density and of the BI-RADS® scale are also presented. The best predictive method for all tested groups was the Random Forest classifier, and the best performance was achieved in distinguishing microcalcifications. The conclusions drawn from the several tested scenarios represent a new perspective on breast cancer diagnosis using data mining techniques.
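The first of the two feature extractors named above, the gray level co-occurrence matrix (GLCM), can be shown on a tiny quantized image. The 4×4 "tissue" patch and the single horizontal offset below are toy assumptions; real pipelines use larger images, several offsets and many Haralick features beyond the contrast shown here.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often gray level i occurs at offset (dx, dy) from gray
    level j; texture features are then derived from these counts.
    """
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                m[image[y][x]][image[y2][x2]] += 1
    return m

def contrast(m):
    """Haralick contrast: (i - j)^2 weighted by the normalised counts."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

# Toy 4-level image patch standing in for a quantized mammogram region.
tissue = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [2, 2, 3, 3],
          [2, 2, 3, 3]]
```

A vector of such features per region (contrast, homogeneity, entropy, …) is what gets fed to the Naïve Bayes, SVM, k-NN and tree classifiers listed above.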

Relevance: 20.00%

Abstract:

This paper presents the characterization of medium voltage (MV) electric power consumers based on a data clustering approach. The aim is to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers’ behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely a data pre-processing phase, the application of the clustering algorithms and the evaluation of the quality of the partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
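Selecting the best partition with a validity index can be illustrated with the Davies–Bouldin index, one common choice (the paper's exact index set is not listed in this abstract). The six toy "daily consumption" vectors and the two candidate labelings below are invented for illustration.

```python
from math import dist

def centroid(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def davies_bouldin(points, labels):
    """Davies-Bouldin index: lower is better. Compares within-cluster
    scatter to between-centroid separation, so it can rank candidate
    partitions of the same consumption data."""
    ks = sorted(set(labels))
    groups = {k: [p for p, lab in zip(points, labels) if lab == k] for k in ks}
    cents = {k: centroid(groups[k]) for k in ks}
    scatter = {k: sum(dist(p, cents[k]) for p in groups[k]) / len(groups[k])
               for k in ks}
    db = 0.0
    for i in ks:
        db += max((scatter[i] + scatter[j]) / dist(cents[i], cents[j])
                  for j in ks if j != i)
    return db / len(ks)

# Two candidate segmentations of six toy daily-consumption feature vectors:
profiles = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),
            (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
good = [0, 0, 0, 1, 1, 1]   # matches the two natural groups
bad = [0, 1, 0, 1, 0, 1]    # mixes them
```

Running several clustering algorithms, scoring each resulting partition this way, and keeping the lowest-scoring one is the selection step the methodology describes.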

Relevance: 20.00%

Abstract:

This document presents a tool able to automatically gather data provided by real energy markets, generate scenarios, and capture and improve market players’ profiles and strategies by using knowledge discovery in databases supported by artificial intelligence techniques, data mining algorithms and machine learning methods. It provides the means for generating scenarios with different dimensions and characteristics, ensuring the representation of real and adapted markets and their participating entities. The scenario generator module enhances the MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) simulator, making it a more effective tool for decision support. The implementation of the proposed module enables researchers and electricity market participants to analyze data, create realistic scenarios and experiment with them. On the other hand, applying knowledge discovery techniques to real data also allows the improvement of MASCEM agents’ profiles and strategies, resulting in a better representation of real market players’ behavior. This work aims to improve the comprehension of electricity markets and of the interactions among the involved entities through adequate multi-agent simulation.

Relevance: 20.00%

Abstract:

The study of electricity markets operation has been gaining importance in recent years, as a result of the new challenges that the restructuring process produced. Currently, a large amount of information concerning electricity markets is available, as market operators provide, after a period of confidentiality, data regarding market proposals and transactions. These data can be used as a source of knowledge to define realistic scenarios, which are essential for understanding and forecasting electricity market behavior. The development of tools able to extract, transform, store and dynamically update these data is of great importance for going a step further in the comprehension of electricity markets and of the behavior of the involved entities. In this paper, an adaptable tool capable of downloading, parsing and storing data from market operators’ websites is presented, ensuring constant updating and reliability of the stored data.
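The parse-and-store half of such a pipeline can be sketched with the standard library alone. The CSV layout, column names and table schema below are invented stand-ins for whatever format a given market operator publishes; the real tool adapts to each operator's website and downloads the files itself.

```python
import csv
import io
import sqlite3

# Hypothetical extract of market-clearing data, as an operator might publish it.
raw = """date,hour,price_eur_mwh,energy_mwh
2015-03-01,1,42.10,28500
2015-03-01,2,39.75,27100
"""

def load_market_data(text, conn):
    """Parse a CSV export and store it. INSERT OR REPLACE on the
    (date, hour) key means re-downloading a revised file updates rows
    in place, keeping the stored data current."""
    conn.execute("""CREATE TABLE IF NOT EXISTS transactions (
        date TEXT, hour INTEGER, price REAL, energy REAL,
        PRIMARY KEY (date, hour))""")
    rows = [(r["date"], int(r["hour"]), float(r["price_eur_mwh"]),
             float(r["energy_mwh"]))
            for r in csv.DictReader(io.StringIO(text))]
    conn.executemany("INSERT OR REPLACE INTO transactions VALUES (?,?,?,?)", rows)
    return len(rows)

conn = sqlite3.connect(":memory:")
n = load_market_data(raw, conn)
```

Keying the table on (date, hour) is what makes repeated runs idempotent, so a scheduled re-download refreshes rather than duplicates the stored history.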

Relevance: 20.00%

Abstract:

Electricity markets worldwide have undergone profound transformations. The privatization of previously state-owned systems, the deregulation of privately owned systems that were regulated, and the strong interconnection of national systems are some examples of such transformations [1, 2]. In general, competitive environments, as is the case of electricity markets, require good decision-support tools to assist players in their decisions. Relevant research is being undertaken in this field, namely concerning player modeling and simulation, strategic bidding and decision support.

Relevance: 20.00%

Abstract:

This paper presents an electricity medium voltage (MV) customer characterization framework supported by knowledge discovery in databases (KDD). The main idea is to identify typical load profiles (TLP) of MV consumers and to develop a rule set for the automatic classification of new consumers. To achieve our goal, a methodology is proposed consisting of several steps: data pre-processing; application of several clustering algorithms to segment the daily load profiles; selection of the best partition, corresponding to the best consumer segmentation, based on the assessment of several clustering validity indices; and finally, the building of a classification model based on the resulting clusters. To validate the proposed framework, a case study including a real database of MV consumers is performed.
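The final step, classifying a new consumer against the discovered typical load profiles, can be sketched with a nearest-centroid rule. This is one simple way to turn clusters into a classifier; the paper builds a rule set whose exact induction method is not detailed in this abstract, and the two TLP vectors below are hypothetical.

```python
from math import dist

def nearest_centroid(profile, centroids):
    """Assign a new MV consumer's daily load profile to the closest
    typical load profile (TLP) centroid."""
    return min(centroids, key=lambda name: dist(profile, centroids[name]))

# Hypothetical TLPs over four periods of a normalised day:
tlps = {
    "residential-like": (0.2, 0.3, 0.5, 0.9),
    "industrial-like": (0.8, 0.9, 0.8, 0.3),
}
label = nearest_centroid((0.75, 0.85, 0.9, 0.4), tlps)
```

Once each new consumer is labeled this way (or by the learned rule set), the utility can apply per-profile tariffs or demand-response policies without re-clustering the whole database.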

Relevance: 20.00%

Abstract:

The study of Electricity Markets operation has been gaining importance in recent years, as a result of the new challenges that the restructuring produced. Currently, a large amount of information concerning Electricity Markets is available, as market operators provide, after a period of confidentiality, data regarding market proposals and transactions. These data can be used as a source of knowledge to define realistic scenarios, essential for understanding and forecasting Electricity Market behaviour. The development of tools able to extract, transform, store and dynamically update these data is of great importance for going a step further in the comprehension of Electricity Markets and the behaviour of the involved entities. In this paper we present an adaptable tool capable of downloading, parsing and storing data from market operators’ websites, ensuring that the stored data is kept up to date and reliable.