90 results for geographical data
Abstract:
Many organizations around the world operate facilities of this kind; in Portugal, one example is Portugal Telecom, which recently opened its Data Center in Covilhã. The development of a Data Center therefore demands a very careful design, which, among other aspects, must guarantee the security of the information and the safety of the facilities themselves, particularly with respect to fire safety.
Abstract:
In-network storage of data in wireless sensor networks helps reduce communication inside the network and favors data aggregation. In this paper, we consider the use of n out of m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n out of m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model for evaluating the probability of correct data encoding and decoding, and we exploit this model and simulations to show how, in the case studies, the parameters of the n out of m codes and of the network should be configured in order to achieve correct data encoding and decoding with high probability.
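The "any n out of m" reconstruction idea used above can be illustrated with a small threshold-dispersal sketch. This is a generic polynomial-interpolation construction, not the authors' codes; the prime field, the share counts and the payload value are illustrative assumptions.

# Minimal sketch of an "n out of m" dispersal: a value is encoded into m shares
# so that any n of them reconstruct it (polynomial evaluation/interpolation
# over a prime field). Field size and parameters are toy choices.
import random

PRIME = 2**13 - 1  # small prime field, enough for byte-sized payloads here

def encode(value, n, m):
    """Create m shares; any n of them reconstruct `value`."""
    coeffs = [value] + [random.randrange(PRIME) for _ in range(n - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, m + 1)]

def decode(shares):
    """Lagrange interpolation at x = 0 from any n distinct shares."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

shares = encode(42, n=3, m=5)            # disperse 5 fragments across nodes
print(decode(random.sample(shares, 3)))  # any 3 fragments recover 42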
Abstract:
Accepted at the 13th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia 2015), Amsterdam, Netherlands.
Abstract:
Nowadays, data centers are large energy consumers, and this consumption is expected to increase further in the coming years given the growth in demand for cloud services. A large portion of this power consumption is due to the control of the physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centers, where the location of workloads can vary substantially, for example because workloads are moved within the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for building more accurate heat-flow models of the data center and, with them, finding opportunities to optimize energy consumption. A high-resolution picture of the data center conditions also makes it possible to minimize local hot-spots, to perform more accurate predictive maintenance (failures in infrastructure equipment can be detected more promptly) and to support more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
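A possible shape for the sensor messages handled by such a messaging system is sketched below; the field names and the hierarchical topic convention are assumptions for illustration, not the schema defined in the paper.

# Illustrative message format for publishing high-resolution sensor readings.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    sensor_id: str      # unique identifier of the physical sensor
    room: str           # data center room / aisle
    rack: str           # rack hosting the sensor
    kind: str           # "temperature", "humidity", ...
    value: float        # measured value
    timestamp: float    # seconds since the epoch

def to_message(reading: SensorReading):
    """Serialize a reading and derive a hierarchical topic for pub/sub routing."""
    topic = f"dc/{reading.room}/{reading.rack}/{reading.kind}"
    return topic, json.dumps(asdict(reading))

topic, payload = to_message(
    SensorReading("t-0042", "room1", "rackA07", "temperature", 24.3, time.time()))
print(topic)    # dc/room1/rackA07/temperature
print(payload)  # JSON body handed to the messaging layer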
Abstract:
This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid, dividing the Earth into geographic regions, is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multi-dimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships within the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
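The entropy and Jensen–Shannon measures mentioned above can be sketched in a few lines; the histograms below are toy values for two grid cells, and the generalized (fractional) variants used in the paper are not reproduced.

# Shannon entropy and Jensen-Shannon divergence between two regional
# earthquake histograms (toy data).
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(a, b):
    mask = a > 0
    return np.sum(a[mask] * np.log(a[mask] / b[mask]))

def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([120, 60, 25, 8, 2], dtype=float); p /= p.sum()
q = np.array([200, 40, 10, 3, 1], dtype=float); q /= q.sum()
print(shannon_entropy(p), jensen_shannon(p, q))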
Abstract:
Complex industrial plants exhibit multiple interactions among smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries and cause a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years, the data reveal double PL behavior, while, for more recent time periods, a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, together with multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships between the data. The classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
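A rough way to check the power-law behaviour described above is to fit the tail of the empirical complementary CDF on log-log axes; the fatality counts below are synthetic, and the paper's double-PL fitting procedure is not reproduced.

# Estimate a power-law tail exponent from synthetic heavy-tailed data.
import numpy as np

rng = np.random.default_rng(0)
fatalities = np.sort(rng.pareto(1.8, size=5000) + 1.0)  # Pareto sample, exponent ~1.8

ccdf = 1.0 - np.arange(len(fatalities)) / len(fatalities)  # empirical P(X >= x)

tail = fatalities > 5.0  # fit only the tail region
slope, intercept = np.polyfit(np.log(fatalities[tail]), np.log(ccdf[tail]), 1)
print("estimated tail exponent:", -slope)  # close to 1.8 for this sample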
Abstract:
Currently, due to the widespread use of computers and the Internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware/products under development. Although this is an interesting solution (low cost and an easy, fast way to carry out some coursework), it has major disadvantages. As everything is currently being done with/in a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithmics, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, when real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation throughout the day. In this work we present a framework with a simple interface, allowing it to be used in different courses where computers are part of the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes (e.g. temperature, light, wind speed) that are either connected to a central server, which students access over Ethernet, or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and, consequently, in different types of software, such as numerical analysis tools, spreadsheets, or any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the Internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by obtaining the values from other places that share them. Another remark is that students in more advanced years, with (theoretically) more know-how, can use courses related to electronics development to build new sensor modules and expand the framework further. The final solution is very interesting: low cost, simple to develop, and flexible in its use of resources, since the same materials serve several courses and bring real-world data into the students' computer work.
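One way a student program could pull a shared reading from the central server is sketched below; the server URL, the endpoint path and the JSON fields are purely hypothetical, not the interface actually implemented by the framework.

# Fetch the latest value of a shared sensor from the (hypothetical) central server.
import json
from urllib.request import urlopen

SERVER = "http://sensors.example-school.edu"  # placeholder address

def read_sensor(sensor_name):
    """Return the latest reading of a shared sensor as a Python dictionary."""
    with urlopen(f"{SERVER}/latest/{sensor_name}") as response:
        return json.loads(response.read())

# reading = read_sensor("room-temperature")
# print(reading["value"], reading["unit"], reading["timestamp"])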
Abstract:
Data Mining (DM) methods are being increasingly used for prediction with time series data, in addition to traditional statistical approaches. This paper presents a literature review of the use of DM with time series data, focusing on short-term stock prediction, an area that has been attracting a great deal of attention from researchers. The main contribution of this paper is to provide an outline of the use of DM with time series data, using mainly examples related to short-term stock prediction. This is important for a better understanding of the field. Some of the main trends and open issues are also introduced.
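A common DM framing of such prediction tasks is to turn the series into a supervised learning problem, where the previous w observations predict the next value; the synthetic prices and window size below are illustrative only, not taken from the surveyed papers.

# Sliding-window framing of a time series for short-term prediction.
import numpy as np

def sliding_window(series, w):
    """Return (X, y): each row of X holds w lagged values, y the next value."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return X, y

prices = np.cumsum(np.random.default_rng(1).normal(size=200)) + 100.0
X, y = sliding_window(prices, w=5)
print(X.shape, y.shape)  # (195, 5) (195,) -- ready for any regression model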
Abstract:
Espresso coffee beverages prepared from pure origin roasted ground coffees from the major world growing regions (Brazil, Ethiopia, Colombia, India, Mexico, Honduras, Guatemala, Papua New Guinea, Kenya, Cuba, Timor, Mussulo and China) were characterized and compared in terms of their mineral content. Regular consumption of one cup of espresso contributes to a daily mineral intake varying from 0.002% (sodium; Central America) to 8.73% (potassium; Asia). The mineral profiles of the espresso beverages revealed significant inter- and intra-continental differences. South American pure origin coffees are on average richer in the analyzed elements except for calcium, while samples from Central America have generally lower mineral amounts (except for manganese). Manganese displayed significant differences (p < 0.05) among the countries of each characterized continent. Intercontinental and inter-country discrimination between the major world coffee producers was achieved by applying canonical discriminant analysis. Manganese and calcium were found to be the best chemical descriptors for origin.
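The origin-discrimination step can be approximated with a linear discriminant analysis on mineral profiles, as in the sketch below; the concentrations are synthetic, and the paper's canonical discriminant analysis on the measured elements is not reproduced.

# Stand-in for origin discrimination: LDA on fake [Mn, Ca, K] profiles.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc, 0.3, size=(20, 3))
               for loc in ([1.0, 2.0, 60.0], [1.6, 1.5, 55.0], [0.7, 2.4, 65.0])])
y = np.repeat(["South America", "Central America", "Asia"], 20)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print(lda.explained_variance_ratio_)  # weight of each canonical axis
print(lda.score(X, y))                # in-sample classification accuracy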
Abstract:
New arguments are presented proving that successive (repeated) measurements have a memory and actually "remember" each other. Recognizing this peculiarity can essentially change the existing paradigm of conventional observation of the behavior of different complex systems and lead to the application of an intermediate model (IM). This IM can provide a very accurate fit of the measured data in terms of Prony's decomposition. This decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows measured data to be compared in cases where a "best fit" model based on specific physical principles is absent. As an example, we consider two X-ray diffractometers (defined in the paper as A, "cheap", and B, "expensive") that are used, after proper calibration, to measure the same substance (corundum α-Al2O3). The amplitude-frequency response (AFR) obtained within the framework of Prony's decomposition can be used to compare the spectra recorded by the (A) and (B) X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition corresponds to an "ideal" experiment without memory, while Prony's decomposition corresponds to a real measurement and, in this case, can be fitted within the framework of the IM. New statistical parameters describing the properties of experimental equipment (irrespective of their internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
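A minimal Prony-decomposition sketch is given below: a uniformly sampled signal is fitted as a sum of p damped complex exponentials. The test signal is synthetic; the paper's AFR comparison and intermediate-model fitting are not reproduced.

# Classical Prony method: linear prediction, root finding, amplitude fitting.
import numpy as np

def prony(y, p):
    """Return amplitudes A_i and poles z_i such that y[k] ~ sum_i A_i * z_i**k."""
    N = len(y)
    # 1) Linear prediction: y[k] = -sum_{j=1..p} a_j * y[k-j], in least squares.
    Y = np.column_stack([y[p - j:N - j] for j in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(Y, -y[p:N], rcond=None)
    # 2) Poles are the roots of the prediction polynomial.
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) Amplitudes from the Vandermonde system y[k] = sum_i A_i * z_i**k.
    V = np.vander(z, N, increasing=True).T   # V[k, i] = z_i**k
    A, *_ = np.linalg.lstsq(V, y, rcond=None)
    return A, z

k = np.arange(64)
y = 2.0 * 0.95**k * np.cos(0.3 * k) + 0.7 * 0.90**k * np.cos(1.1 * k)
A, z = prony(y.astype(complex), p=4)
recon = np.real(sum(A[i] * z[i]**k for i in range(4)))
print(np.max(np.abs(recon - y)))  # reconstruction error, numerically negligible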
Abstract:
Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources an even more difficult and challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
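The intuition behind interference-aware placement can be shown with a toy greedy heuristic that avoids co-locating two workloads of the same class when a more mixed host is available; the scoring rule and data structures are illustrative assumptions, not the mechanism proposed in the paper.

# Toy interference-aware placement: prefer hosts with few VMs of the same class,
# breaking ties by total load (a crude proxy for consolidation/power goals).
def place(vm_class, hosts):
    def score(host):
        same_class = sum(1 for c in host["vms"] if c == vm_class)
        return (same_class, len(host["vms"]))
    best = min(hosts, key=score)
    best["vms"].append(vm_class)
    return best["name"]

hosts = [{"name": "h1", "vms": ["cpu"]},
         {"name": "h2", "vms": ["net"]},
         {"name": "h3", "vms": []}]
for vm in ["cpu", "net", "cpu", "net"]:
    print(vm, "->", place(vm, hosts))  # spreads the two classes across hosts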
Abstract:
The present work aims to develop and refine a hydrogeomechanical approach for the Caldas da Cavaca hydromineral system rock mass (Aguiar da Beira, NW Portugal) and to contribute to a better understanding of the hydrogeological conceptual site model. Data on geology, hydrogeology, rock and soil geotechnics, borehole hydraulics and hydrogeomechanics were collected from three rock slopes (Lagoa, Amores and Cancela). To accomplish a comprehensive analysis and rock engineering conceptualisation of the site, a multi-technical approach was used, combining field and laboratory techniques, hydrogeotechnical mapping, hydrogeomechanical zoning, and hydrogeomechanical classification schemes and indexes. In addition, hydrogeomechanical data analysis and assessment techniques, such as the Hydro-Potential (HP) Value technique, the Joint Water Reduction (JW) index and the Hydraulic Classification (HC) System, were applied to the rock slopes. The hydrogeomechanical zone HGMZ 1 of the Lagoa slope showed higher hydraulic conductivities together with poorer rock mass quality, followed by the hydrogeomechanical zone HGMZ 2 of the Lagoa slope, with poor to fair rock mass quality and lower hydraulic parameters. The Amores slope had a fair to good rock mass quality and the lowest hydraulic conductivity. The hydrogeomechanical zone HGMZ 3 of the Lagoa slope and the hydrogeomechanical zones HGMZ 1 and HGMZ 2 of the Cancela slope had a fair to poor rock mass quality but were completely dry. Geographical Information Systems (GIS) mapping technologies were used for the overall integration of hydrogeological and hydrogeomechanical data in order to improve the hydrogeological conceptual site model.
Abstract:
Hard-rock watersheds commonly exhibit complex geological bedrock and morphological features. Hydromineral resources have relevant economic value for the thermal spa industry. The present study aims to develop a groundwater vulnerability approach for the Caldas da Cavaca hydromineral system (Aguiar da Beira, Central Portugal), which has a thermal tradition dating back to the late 19th century, and to contribute to a better understanding of the hydrogeological conceptual site model. In this work, different layers were overlaid, generating several thematic maps to arrive at an integrated framework of several key sectors of the Caldas da Cavaca site. To accomplish a comprehensive analysis and conceptualization of the site, a multi-technical approach was used, combining field and laboratory techniques to collect data on geotectonics, hydrology and hydrogeology, hydrogeomorphology, and hydrogeophysical and hydrogeomechanical zoning, aiming at the application of the so-called DISCO method. All these techniques were successfully performed, and an assessment of groundwater vulnerability to contamination, based on the GOD-S, DRASTIC-Fm, SINTACS, SI and DISCO index methodologies, was delineated. Geographical Information Systems (GIS) technology was the basis for organising and integrating the geodatabases and for producing all the thematic maps. This multi-technical approach highlights the importance of mapping groundwater vulnerability to contamination as a tool to support hydrogeological conceptualisation, contributing to better decision-making in water resources management and sustainability.
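The kind of weighted-overlay computation behind an index-based vulnerability map can be sketched as below; the cell ratings are made-up values, the standard DRASTIC weights are used for illustration, and the DRASTIC-Fm variant applied in the paper is not reproduced.

# DRASTIC-style index as a weighted sum of seven rated parameter rasters.
import numpy as np

# Standard DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

rng = np.random.default_rng(3)
ratings = {p: rng.integers(1, 11, size=(2, 3)) for p in WEIGHTS}  # toy 2x3 rasters

drastic_index = sum(WEIGHTS[p] * ratings[p] for p in WEIGHTS)
print(drastic_index)  # higher cell values indicate higher intrinsic vulnerability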
Abstract:
Automatic Vehicle Location (AVL) systems are nowadays part of the daily routine of many companies. This technology has evolved significantly over the last decade, becoming more affordable and easier to use. This work consists of the development of a vehicle location system for Android smartphones. To that end, two applications were developed: a location application for Android smartphones and a web monitoring application. The location application collects GPS location data and establishes a Bluetooth piconet, thus allowing simultaneous communication with a vehicle's electronic control unit (ECU) through an OBDII/Bluetooth adapter and with up to seven Bluetooth sensors/devices that can be installed in the vehicle. The data collected by the Android application are periodically sent, at a user-defined time interval, to a web server. The web application allows a fleet manager to monitor the vehicles in circulation/registered in the system, viewing their geographical position on an interactive map (Google Maps), together with vehicle data (OBDII) and Bluetooth sensor/device data for each location sent by the Android application. The developed system works as expected. The Android application was tested numerous times and at different vehicle speeds, and can operate in two distinct modes, data logger and data pusher, depending on the state of the smartphone's Internet connection. Smartphone-based location systems have advantages over conventional systems, namely portability, ease of installation and low cost.
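A hypothetical example of the periodic location payload such an application could send to the web server is shown below; the field names, units and mode flag are assumptions for illustration, not the message format of the developed system.

# Example payload for one periodic upload (data pusher mode).
import json, time

payload = {
    "vehicle_id": "PT-12-34-AB",
    "timestamp": time.time(),
    "gps": {"lat": 41.1496, "lon": -8.6109, "speed_kmh": 48.0},
    "obd": {"rpm": 2100, "coolant_temp_c": 89, "fuel_level_pct": 63},
    "bluetooth_sensors": {"cargo_temp_c": 4.2, "door_open": False},
    "mode": "data_pusher",   # or "data_logger" when the Internet link is down
}
print(json.dumps(payload))   # body of the periodic HTTP POST to the web server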
Abstract:
Nowadays, a marked process of technological evolution can be observed all over the globe. Companies, whether small, medium-sized or large, are increasingly dependent on computerized systems to carry out their business processes and, consequently, to generate business information, where the data often have no relationship with each other. Most conventional computer systems are not designed to manage and store strategic information, making it impossible for that information to serve as a strategic resource. Therefore, decisions are made based on the experience of managers, when they could be based on historical facts stored by the various systems. In general, organizations have a lot of data but in most cases extract little information, which is a problem in competitive markets. As organizations seek to evolve and outperform the competition in decision-making, the term Business Intelligence (BI) emerges in this context. GisGeo Information Systems is a company that develops software based on GIS (geographic information systems) following an open-source tools philosophy. Its main product is based on the geographical location of various types of vehicles, on data collection and, consequently, on its analysis (kilometres travelled, duration of a trip between two defined points, fuel consumption, etc.). This is the context of this project, whose objective is to give a different perspective to the existing data by combining BI concepts with the system implemented in the company, in accordance with its philosophy. This project addresses some of the most important concepts underlying BI, such as the dimensional model, the data warehouse, the ETL process and OLAP, following the Ralph Kimball methodology. Some of the main open-source tools available on the market are also studied, along with their advantages/disadvantages relative to each other. In conclusion, the solution developed according to the criteria set out by the company is presented as a proof of concept of the applicability of Business Intelligence to the field of Geographic Information Systems (GIS), using an open-source tool that supports data visualization through dashboards.
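The dimensional-modelling idea discussed above can be illustrated with a tiny ETL step that aggregates raw trip records into a fact table keyed by vehicle and day; the records and measures are synthetic and do not correspond to the schema actually built in the project.

# Aggregate raw trips into a star-schema-like fact table (vehicle, date -> measures).
from collections import defaultdict

trips = [  # raw operational records: (vehicle, date, km travelled, litres of fuel)
    ("PT-12-34-AB", "2024-05-01", 120.0, 9.1),
    ("PT-12-34-AB", "2024-05-01", 35.5, 2.8),
    ("PT-56-78-CD", "2024-05-01", 210.3, 15.9),
]

fact_vehicle_day = defaultdict(lambda: {"km": 0.0, "fuel_l": 0.0})
for vehicle, day, km, fuel in trips:            # transform + load step
    row = fact_vehicle_day[(vehicle, day)]      # dimension keys: vehicle, date
    row["km"] += km
    row["fuel_l"] += fuel

for key, measures in fact_vehicle_day.items():  # ready for OLAP-style queries
    print(key, measures)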