929 results for location-dependent data query


Relevance:

30.00%

Publisher:

Abstract:

Indoor location systems cannot rely on technologies such as GPS (Global Positioning System) to determine the position of a mobile terminal, because its signals are blocked by obstacles such as walls, ceilings and roofs. In such environments, alternative techniques, such as the use of wireless networks, should be considered. The location estimate is made by measuring and analysing one of the parameters of the wireless signal, usually the received power. One of the techniques used to estimate locations using wireless networks is fingerprinting. This technique comprises two phases: in the first phase, data is collected from the scenario and stored in a database; the second phase consists in determining the location of the mobile node by comparing the data collected from the wireless transceiver with the data previously stored in the database. In this paper, an approach to localisation using fingerprinting based on Fuzzy Logic and pattern searching is presented. The performance of the proposed approach is compared with that of classic methods, showing an improvement between 10.24% and 49.43%, depending on the mobile node and the Fuzzy Logic parameters.
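
As a rough illustration of the two-phase fingerprinting scheme described above (not the paper's Fuzzy Logic approach), the following sketch stores offline RSS vectors per reference point and estimates the online position by nearest-neighbour matching in RSS space; all coordinates and signal values are made up.

```python
import numpy as np

# Offline phase (illustrative data): RSS vectors (dBm) from 3 access points,
# measured at known reference points and stored in a fingerprinting map.
fingerprint_map = {
    (0.0, 0.0): [-45.0, -70.0, -80.0],
    (5.0, 0.0): [-60.0, -55.0, -75.0],
    (0.0, 5.0): [-70.0, -80.0, -50.0],
    (5.0, 5.0): [-75.0, -65.0, -55.0],
}

def estimate_location(online_rss):
    """Online phase: return the reference point whose stored RSS vector
    is closest (Euclidean distance) to the vector just measured."""
    best_point, best_dist = None, float("inf")
    for point, stored_rss in fingerprint_map.items():
        dist = np.linalg.norm(np.array(online_rss) - np.array(stored_rss))
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point

print(estimate_location([-58.0, -57.0, -74.0]))  # -> (5.0, 0.0)
```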

Relevance:

30.00%

Publisher:

Abstract:

Fingerprinting is an indoor location technique, based on wireless networks, in which data stored during the offline phase is compared with data collected by the mobile device during the online phase. In most real-life scenarios, the mobile node used throughout the offline phase is different from the mobile nodes that will be used during the online phase. This means that there might be very significant differences between the Received Signal Strength (RSS) values acquired by the mobile node and the ones stored in the Fingerprinting Map. As a consequence, this difference between RSS values may increase the location estimation error. One possible solution to minimize these differences is to adapt the RSS values acquired during the online phase before sending them to the Location Estimation Algorithm. The internal parameters of the Location Estimation Algorithms, for example the weights of the Weighted k-Nearest Neighbour, may also need to be tuned for every type of terminal. This paper addresses both approaches, using Direct Search optimization methods to adapt the Received Signal Strength and to tune the Location Estimation Algorithm parameters. As a result, it was possible to decrease the location estimation error originally obtained without any calibration procedure.
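
A minimal sketch of the Weighted k-Nearest Neighbour estimator mentioned above, with a simple additive RSS offset standing in for the online-phase adaptation step; the map, the offset and the weighting scheme are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Illustrative fingerprinting map: reference positions and their stored RSS vectors (dBm).
positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
stored_rss = np.array([[-45.0, -70.0, -80.0],
                       [-60.0, -55.0, -75.0],
                       [-70.0, -80.0, -50.0],
                       [-75.0, -65.0, -55.0]])

def wknn_estimate(online_rss, k=3, rss_offset=0.0):
    """Weighted k-NN: adapt the online RSS by a per-terminal offset, then
    average the k closest reference positions, weighted by inverse distance."""
    adapted = np.asarray(online_rss) + rss_offset          # simple calibration step
    dists = np.linalg.norm(stored_rss - adapted, axis=1)   # distances in RSS space
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)                # inverse-distance weights
    return weights @ positions[nearest] / weights.sum()

print(wknn_estimate([-61.0, -58.0, -77.0], k=2, rss_offset=2.0))
```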

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems

Relevance:

30.00%

Publisher:

Abstract:

This study aims to optimize the water quality monitoring of a polluted watercourse (Leça River, Portugal) through principal component analysis (PCA) and cluster analysis (CA). These statistical methodologies were applied to physicochemical, bacteriological and ecotoxicological data (the latter obtained with the marine bacterium Vibrio fischeri and the green alga Chlorella vulgaris) from the analysis of water samples collected at seven monitoring sites during five monthly campaigns (February, May, June, August, and September 2006). The results of some variables were assigned to water quality classes according to national guidelines. Chemical and bacteriological quality data led to classifying the Leça River water quality as “bad” or “very bad”. PCA and CA identified monitoring sites with similar pollution patterns, distinguishing site 1 (located in the upstream stretch of the river) from all other sampling sites downstream. Ecotoxicity results corroborated this classification, revealing differences in space and time. The present study includes not only physical, chemical and bacteriological but also ecotoxicological parameters, which opens new perspectives in river water characterization. Moreover, the application of PCA and CA is very useful for optimizing water quality monitoring networks, defining the minimum number of sites and their location. These tools can thus support appropriate management decisions.
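
For readers unfamiliar with the workflow, the following sketch applies PCA and hierarchical cluster analysis to a made-up sites-by-parameters matrix, mirroring the kind of grouping of monitoring sites described above; it assumes NumPy, scikit-learn and SciPy are available and uses no data from the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative matrix: rows = 7 monitoring sites, columns = measured parameters
# (e.g. BOD, nitrates, faecal coliforms, toxicity). Values are invented.
rng = np.random.default_rng(0)
data = rng.normal(size=(7, 4))
data[0] -= 2.0   # make site 1 stand apart, as in the study's findings

scaled = StandardScaler().fit_transform(data)

# PCA: keep the components explaining most of the variance.
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Hierarchical cluster analysis on the scaled data (Ward linkage).
clusters = fcluster(linkage(scaled, method="ward"), t=2, criterion="maxclust")
print("site clusters:", clusters)   # sites in the same cluster share a pollution pattern
```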

Relevance:

30.00%

Publisher:

Abstract:

Work presented within the scope of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptative learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
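
To make the geometric idea concrete, the sketch below simulates a linear mixture whose pure pixels are the simplex vertices and recovers them with a PPI-style search over random skewers; it is a toy illustration of the pure-pixel assumption shared by PPI, N-FINDR and VCA, not an implementation of VCA itself, and all sizes and data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear mixing: 3 endmember signatures over 50 bands, mixed with
# abundances that are non-negative and sum to one (convex combinations).
n_bands, n_pixels = 50, 500
endmembers = rng.random((3, n_bands))
abundances = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=n_pixels)
pixels = abundances @ endmembers

# Force a few pure pixels (the assumption shared by PPI, N-FINDR and VCA).
pixels[:3] = endmembers

# PPI-style search: project every pixel onto random "skewers" and count
# how often each pixel is the extreme of a projection.
scores = np.zeros(n_pixels, dtype=int)
for _ in range(200):
    skewer = rng.normal(size=n_bands)
    proj = pixels @ skewer
    scores[np.argmax(proj)] += 1
    scores[np.argmin(proj)] += 1

print("highest-scoring pixels:", np.argsort(scores)[-3:])  # the pure pixels 0, 1, 2, in some order
```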

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
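
The following sketch only simulates the generative model DECA assumes (abundances drawn from a mixture of Dirichlet densities, linearly mixed by an endmember matrix); it does not implement the GEM inference, and all dimensions and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generative model assumed by DECA (illustrative parameters): abundances drawn
# from a mixture of Dirichlet densities, observations are linear mixtures.
n_pixels, n_bands, n_endmembers = 1000, 30, 3
mixture_weights = [0.6, 0.4]
dirichlet_params = [[5.0, 1.0, 1.0], [1.0, 1.0, 8.0]]   # two Dirichlet components

component = rng.choice(len(mixture_weights), size=n_pixels, p=mixture_weights)
abundances = np.stack([rng.dirichlet(dirichlet_params[c]) for c in component])

signatures = rng.random((n_endmembers, n_bands))         # mixing matrix (endmember signatures)
observations = abundances @ signatures + 0.001 * rng.normal(size=(n_pixels, n_bands))

# The physical constraints DECA enforces hold by construction:
assert np.all(abundances >= 0.0)
assert np.allclose(abundances.sum(axis=1), 1.0)
print(observations.shape)   # (1000, 30)
```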

Relevance:

30.00%

Publisher:

Abstract:

Exposure to formaldehyde is recognized as one of the most important risk factors present in hospital anatomy and pathology laboratories. In this occupational setting, formaldehyde is used in solution, commonly known as formalin: a commercial formaldehyde solution, typically diluted to 10%, which is inexpensive and therefore the choice for routine work in anatomy and pathology. The solution is applied as a fixative and preservative of biological material, so the anatomic specimens to be processed are previously impregnated with it. Regarding the health effects of formaldehyde, local effects appear to play a more important role than systemic effects, due to its reactivity and rapid metabolism in the cells of the skin, gastrointestinal tract and lungs. Likewise, the location of lesions corresponds mainly to the areas exposed to the highest doses of this chemical agent, and the development of toxic effects depends more on the intensity of the external dose than on the duration of exposure. The most easily detectable effect of formaldehyde on the human body is its irritating action, transient and reversible, on the mucous membranes of the eyes and upper respiratory tract (nose and throat), which generally occurs for frequent exposures to concentrations above 1 ppm. High doses are cytotoxic and can lead to degeneration and necrosis of mucous membranes and epithelia. With regard to carcinogenic effects, the first assessment performed by the International Agency for Research on Cancer dates from 1981 and was updated in 1982, 1987, 1995 and 2004, classifying formaldehyde as a Group 2A agent (probably carcinogenic). However, the most recent evaluation, in 2006, classifies formaldehyde as carcinogenic (Group 1), based on evidence that exposure to this agent is likely to cause nasopharyngeal cancer in humans. The main objective of this study was to characterize occupational exposure to formaldehyde in Portuguese hospital anatomy and pathology laboratories, as well as to describe the phenomena of environmental contamination by formaldehyde and to explore possible associations between variables. A sample of 10 hospital anatomy and pathology laboratories was considered; the exposure of three professional groups was assessed against two exposure metrics, and the ceiling concentrations of 83 activities were determined. Two different environmental assessment methods were applied simultaneously: one method (Method 1) used direct-reading equipment based on Photo Ionization Detection, with an 11.7 eV lamp, while the activity was simultaneously recorded; this method provided data for the ceiling-concentration exposure metric. In the other method (Method 2), air sampling and formaldehyde analysis were performed according to NIOSH method 2541, using low-flow electric sampling pumps and subsequent analytical processing of the samples by gas chromatography; this method provided data for the time-weighted average concentration metric. The measurement and sampling strategies of each method, and the definition of the exposure groups present in this occupational setting (Pathology Technicians, Pathologists and Assistants), were made possible by the information provided by the (ergonomic) work analysis of the activities. Several independent variables were studied, namely ambient temperature and relative humidity, the formaldehyde solution used, the existing ventilation conditions, and the mean number of anatomic specimens processed per day in each laboratory. To collect information on these variables, an Observation and Registration Grid was completed during the stay in each laboratory. Three environmental contamination indicators were selected as dependent variables: the mean value of concentrations exceeding 0.3 ppm in each laboratory, the time-weighted average concentration obtained for each exposure group, and the Time Regeneration Index of each laboratory. These indicators were calculated from the data obtained with the two environmental assessment methods. Based on the approach outlined by the University of Queensland, a methodology for assessing the risk of nasopharyngeal cancer was also applied to the 83 activities studied, in order to obtain semi-quantitative risk levels. For the Severity level, information available in the scientific literature was used, defining adverse biological events related to the chemical agent's mode of action and associating them with environmental formaldehyde concentrations. For the Probability level, the information provided by the (ergonomic) work analysis was used, which made it possible to determine the frequency of each activity studied. The simultaneous application of the two environmental assessment methods produced different, but not contradictory, results regarding the evaluation of occupational exposure to formaldehyde. For the activities studied (n=83), about 93% of the values were above the limit value set for the ceiling concentration in Portugal (VLE-CM = 0.3 ppm). The "macroscopic examination" was the most studied activity and showed the highest prevalence of results above the limit value (92.8%). The highest mean ceiling concentration (2.04 ppm) was obtained in the Pathology Technicians exposure group, but the widest range of results was observed in the Pathologists group (0.21 ppm to 5.02 ppm). Regarding the time-weighted average concentration, all values obtained in the 10 laboratories for the three exposure groups were below the limit value set by the Occupational Safety and Health Administration (TLV-TWA = 0.75 ppm). A statistically significant association was found between the mean number of anatomic specimens processed per day in each laboratory and two of the three environmental contamination indicators used, namely the mean value of concentrations exceeding 0.3 ppm (p=0.009) and the Time Regeneration Index (p=0.001). Ambient temperature showed no statistically significant association with any of the environmental contamination indicators. Relative humidity showed a statistically significant association only with the time-weighted average concentration of two exposure groups, namely the Pathologists (p=0.02) and the Pathology Technicians (p=0.04). The risk assessment performed for the 83 activities studied showed that in 35% of them the risk was classified as (at least) high, and that 70% of the laboratories had at least one activity classified as high risk. From the application of the two environmental assessment methods and the information obtained for the two exposure metrics, it can be concluded that the most appropriate metric is the ceiling concentration, because it is associated with the chemical agent's mode of action. Moreover, an environmental assessment method such as Method 1, which allows formaldehyde concentrations to be studied while the activity is simultaneously recorded, provides relevant information for preventive intervention, since it identifies the activities with the highest exposure as well as the variables that condition it. Anatomic specimens were the main source of environmental contamination by formaldehyde in this occupational setting, which is of particular interest since the activity carried out in this setting, and in particular in the specimen reception room, is centred on the processing of anatomic specimens. Since the elimination of formaldehyde is not foreseeable in the short term, given the large number of activities that still involve the use of its commercial solution (formalin), it can be concluded that exposure to this agent in this specific occupational setting is a matter of concern, requiring prompt intervention to minimize exposure and prevent potential health effects in the exposed workers.
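
A small numeric illustration of the two exposure metrics compared in the study (ceiling concentration versus 8-hour time-weighted average), using made-up sample values and the limit values cited above; it is not the study's data.

```python
# Illustrative comparison of the two exposure metrics discussed above.
# Sample values are invented; the limits are the ones cited in the abstract.
VLE_CM = 0.30        # ceiling limit (ppm), Portuguese reference value
TLV_TWA = 0.75       # 8-hour time-weighted average limit (ppm)

# (concentration in ppm, duration in hours) over one 8-hour working day
samples = [(1.8, 0.5), (0.25, 3.5), (0.9, 1.0), (0.1, 3.0)]

ceiling = max(c for c, _ in samples)
twa = sum(c * t for c, t in samples) / 8.0   # average over the 8-hour shift

print(f"ceiling = {ceiling:.2f} ppm -> exceeds VLE-CM: {ceiling > VLE_CM}")
print(f"TWA     = {twa:.2f} ppm -> exceeds TLV-TWA: {twa > TLV_TWA}")
```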

Relevance:

30.00%

Publisher:

Abstract:

A thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, data centers are large energy consumers, and this trend is expected to continue in the coming years, given the growth in demand for cloud services. A large portion of this power consumption is due to the control of the physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centers, where the location of workloads can vary substantially because, for example, workloads are moved within the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, finding opportunities to optimize energy consumption. A high-resolution picture of the data center conditions also makes it possible to minimize local hot-spots, perform more accurate predictive maintenance (failures in any infrastructure equipment can be detected more promptly) and support more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
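
As a rough sketch of the kind of data collection and distribution described above, the snippet below defines an illustrative message structure for one sensor reading and a minimal in-memory publish/subscribe layer; the field names, topic scheme and broker are assumptions, not the paper's actual messaging system.

```python
import json
import time
from dataclasses import dataclass, asdict
from collections import defaultdict

# Illustrative message structure for one physical-parameter reading; the field
# names are assumptions, not the paper's actual schema.
@dataclass
class SensorReading:
    rack: str
    sensor: str          # e.g. "temperature" or "humidity"
    value: float
    unit: str
    timestamp: float

# Minimal in-memory publish/subscribe broker standing in for the messaging
# system used to collect and distribute the readings.
subscribers = defaultdict(list)

def subscribe(topic, callback):
    subscribers[topic].append(callback)

def publish(topic, reading: SensorReading):
    payload = json.dumps(asdict(reading))
    for callback in subscribers[topic]:
        callback(payload)

# A consumer (e.g. a heat-flow model or hot-spot monitor) subscribes to a topic.
subscribe("datacenter/row3/rack12/temperature",
          lambda msg: print("received:", msg))

publish("datacenter/row3/rack12/temperature",
        SensorReading("rack12", "temperature", 27.4, "C", time.time()))
```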

Relevance:

30.00%

Publisher:

Abstract:

Demo at the Workshop on ns-3 (WNS3 2015), 13 to 14 May 2015, Castelldefels, Spain.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, an accelerated process of technological evolution can be observed all over the globe. Companies, whether small, medium-sized or large, are increasingly dependent on computerized systems to carry out their business processes, and consequently generate business information in which, very often, the data have no relationship with each other. Most conventional computer systems are not designed to manage and store strategic information, which prevents it from serving as a strategic resource for decision support. Decisions are therefore made based on the administrators' experience, when they could be based on historical facts stored by the various systems. Generally speaking, organizations have a lot of data but in most cases extract little information from it, which is a problem in competitive markets. As organizations seek to evolve and outperform the competition in decision making, the term Business Intelligence (BI) arises in this context. GisGeo Information Systems is a company that develops GIS-based (geographic information systems) software following an open-source tools philosophy. Its main product is based on the geographic location of various types of vehicles, on data collection, and consequently on its analysis (kilometres travelled, duration of a trip between two defined points, fuel consumption, etc.). This is the context of this project, whose objective is to give a different perspective to the existing data by combining BI concepts with the system implemented in the company, in accordance with its philosophy. This project addresses some of the most important concepts underlying BI, such as the dimensional model, the data warehouse, the ETL process and OLAP, following Ralph Kimball's methodology. Some of the main open-source tools available on the market are also studied, as well as their advantages and disadvantages relative to each other. In conclusion, the solution developed according to the criteria set out by the company is presented as a proof of concept of the applicability of Business Intelligence to the field of Geographic Information Systems (GIS), using an open-source tool that supports data visualization through dashboards.
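
As a proof-of-concept sketch of the Kimball-style dimensional model mentioned above, the snippet below builds a tiny star schema (one trip fact table plus vehicle and date dimensions) in SQLite and runs a minimal ETL step over invented trip records; the schema and column names are illustrative, not GisGeo's actual model.

```python
import sqlite3

# A tiny Kimball-style star schema for the fleet data described above
# (table and column names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_vehicle (vehicle_id INTEGER PRIMARY KEY, plate TEXT UNIQUE, type TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT UNIQUE);
CREATE TABLE fact_trip   (
    trip_id     INTEGER PRIMARY KEY,
    vehicle_id  INTEGER REFERENCES dim_vehicle(vehicle_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    km REAL, duration_min REAL, fuel_l REAL
);
""")

# ETL step: extract raw GPS-derived trip records (invented), transform them into
# surrogate keys and measures, and load them into the fact table.
raw_trips = [("AA-01-BB", "van",   "2015-03-02",  42.5,  55.0,  4.1),
             ("AA-01-BB", "van",   "2015-03-03", 120.0, 110.0,  9.8),
             ("CC-02-DD", "truck", "2015-03-02",  80.3,  95.0, 21.0)]

for plate, vtype, day, km, duration, fuel in raw_trips:
    conn.execute("INSERT OR IGNORE INTO dim_vehicle(plate, type) VALUES (?, ?)", (plate, vtype))
    conn.execute("INSERT OR IGNORE INTO dim_date(day) VALUES (?)", (day,))
    vid = conn.execute("SELECT vehicle_id FROM dim_vehicle WHERE plate = ?", (plate,)).fetchone()[0]
    did = conn.execute("SELECT date_id FROM dim_date WHERE day = ?", (day,)).fetchone()[0]
    conn.execute("INSERT INTO fact_trip(vehicle_id, date_id, km, duration_min, fuel_l) "
                 "VALUES (?, ?, ?, ?, ?)", (vid, did, km, duration, fuel))

# A typical dashboard query: total kilometres and fuel per vehicle.
for row in conn.execute("""SELECT v.plate, SUM(f.km), SUM(f.fuel_l)
                           FROM fact_trip f JOIN dim_vehicle v USING (vehicle_id)
                           GROUP BY v.plate"""):
    print(row)
```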

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.