101 results for COMBINING DATA


Relevance: 20.00%

Abstract:

This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid, dividing the Earth into geographic regions, is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multi-dimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships within the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
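As a minimal illustration of the classical measures named above, the Jensen–Shannon divergence between two discrete distributions can be computed from Shannon entropies. The two example histograms below are hypothetical, not the paper's seismic data.

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: entropy of the mixture minus the
    mean of the individual entropies. Bounded by ln(2) in nats."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2

# Hypothetical event-count histograms from two grid cells
p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
d = jensen_shannon(p, q)
```

Because the divergence collapses to a single number, such pairwise values can feed directly into the hierarchical clustering and multi-dimensional scaling steps described in the abstract.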

Relevance: 20.00%

Abstract:

Complex industrial plants exhibit multiple interactions among smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries and cause a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years, the data reveal double PL behavior, while, for more recent periods, a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, together with multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, and has the advantage of yielding a single parameter to express relationships between the data. Both the classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
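The power-law statistics mentioned above can be sketched with the standard maximum-likelihood estimate of the tail exponent; the fatality counts below are invented for illustration, not the paper's accident series.

```python
import math

def powerlaw_alpha(data, xmin):
    """Maximum-likelihood estimate of the exponent alpha in
    P(x) ~ x^(-alpha) for the tail x >= xmin (continuous case)."""
    tail = [x for x in data if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Hypothetical fatality counts per accident (illustrative values only)
fatalities = [1, 1, 2, 2, 3, 5, 8, 13, 40, 150]
alpha = powerlaw_alpha(fatalities, xmin=1)
```

A double-PL regime, as reported for the early years, would show up as two distinct slopes when this fit is applied separately below and above a crossover point.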

Relevance: 20.00%

Abstract:

Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to prove theoretical results or to test hardware or a product under development. Although this is an attractive solution (low cost and an easy, fast way to carry out coursework), it has major disadvantages. As everything is currently being done with, and in, a computer, students are losing the "feel" for the real values of physical magnitudes. In engineering studies, for instance, and mainly in the first years, students need to learn electronics, algorithms, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in most cases the data used are either simulated or random numbers, although real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, students can learn to manipulate arrays with it. Likewise, when using spreadsheets to build graphics, instead of a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by different courses in which computers are central to the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are either connected to a central server that students access over Ethernet, or connected directly to the student's computer/laptop. The sensors use the communication ports commonly available: serial ports, parallel ports, Ethernet or Universal Serial Bus (USB).
Since a central server is used, students are encouraged to use the sensor readings in their different courses and, consequently, in different types of software: numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using a different type of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between schools: a sensor that is not available at a given school can be used by reading the values from another site that shares it. Moreover, students in more advanced years, with (theoretically) more know-how, can take the courses related to electronics development and build new sensor boards, expanding the framework further. The final solution is very attractive: low cost, simple to develop, and flexible, since the same materials serve several courses while bringing real-world data into the students' computer work.
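A client-side sketch of the idea: a student script parses one text reading per line from a sensor or server. The "name value" line format and the server URL in the comment are assumptions for illustration; the abstract does not fix a protocol.

```python
def parse_reading(line):
    """Parse one text line from a sensor, e.g. 'temperature 21.5'.
    The line format is an assumption, not the framework's actual protocol."""
    name, value = line.strip().split()
    return name, float(value)

# A student script could poll the central server with the standard library,
# e.g. (hypothetical host and path):
#   import urllib.request
#   raw = urllib.request.urlopen("http://sensors.example.edu/temperature").read()
sample = "temperature 21.5"
name, value = parse_reading(sample)
```

The same parsed values could then be written to a CSV file and loaded into a spreadsheet or numerical analysis tool, as the abstract suggests.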

Relevance: 20.00%

Abstract:

Demo at the Workshop on ns-3 (WNS3 2015), 13-14 May 2015, Castelldefels, Spain.

Relevance: 20.00%

Abstract:

Data Mining (DM) methods are being increasingly used for prediction with time series data, in addition to traditional statistical approaches. This paper presents a literature review of the use of DM with time series data, focusing on short-term stock prediction, an area that has been attracting a great deal of attention from researchers in the field. The main contribution of this paper is to provide an outline of the use of DM with time series data, using mainly examples related to short-term stock prediction. This is important for a better understanding of the field. Some of the main trends and open issues are also introduced.
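A common way to cast short-term series prediction as a supervised DM task is the sliding-window transformation: each window of past values becomes an input vector and the next value becomes the target. The price series below is invented for illustration.

```python
def sliding_windows(series, width):
    """Build (inputs, target) pairs from a time series: each window of
    `width` consecutive past values is paired with the value that follows
    it, turning prediction into a standard supervised learning problem."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

# Hypothetical closing prices (illustrative values only)
prices = [10.0, 10.2, 10.1, 10.4, 10.3, 10.6]
pairs = sliding_windows(prices, width=3)
```

Any regressor or classifier surveyed in such a review can then be trained on `pairs`; the window width is a key design choice traded off against the amount of available history.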

Relevance: 20.00%

Abstract:

13th International Conference on Autonomous Robot Systems (Robotica), 2013, Lisboa

Relevance: 20.00%

Abstract:

The excessive use of pesticides and fertilisers in agriculture has decreased groundwater and surface water quality in many regions of the EU, constituting a hazard for human health and the environment. In addition, on-site sewage disposal is an important source of groundwater contamination in urban and peri-urban areas. The assessment of groundwater vulnerability to contamination is an important tool to fulfil the demands of EU Directives. The purpose of this study is to assess groundwater vulnerability to contamination, related mainly to agricultural activities, in a peri-urban area (Vila do Conde, NW Portugal). The hydrogeological framework is characterised mainly by a fissured granitic basement and a sedimentary cover. Water samples were collected and analysed for temperature, pH, electrical conductivity, chloride, phosphate, nitrate and nitrite. Groundwater vulnerability to contamination was evaluated with several methods (GOD-S, Pesticide DRASTIC-Fm, SINTACS and SI) and the potential nitrate contamination risk was assessed, both supported by hydrogeological GIS-based mapping. A principal component analysis was performed to characterise patterns of relationship among groundwater contamination, vulnerability and the hydrogeological setting. Nitrate levels above the legal limits were detected in 75% of the samples analysed. Alluvial units showed the highest nitrate concentrations and also the highest vulnerability and risk. Nitrate contamination is a serious problem affecting groundwater, particularly shallow aquifers, mainly due to agricultural activities, livestock and cesspools. GIS-based cartography provided an accurate way to improve knowledge of water circulation models and the global functioning of local aquifer systems. Finally, this study highlights the adequacy of an integrated approach, combining hydrogeochemical data, vulnerability assessments and multivariate analysis, to understand groundwater processes in peri-urban areas.
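As a sketch of one of the vulnerability indices named above, a pesticide-DRASTIC score is a weighted sum of seven rated hydrogeological parameters (ratings 1-10, standard pesticide weights). The cell ratings below are hypothetical, not the Vila do Conde data.

```python
# Standard pesticide-DRASTIC weights for the seven parameters
# (Depth to water, Recharge, Aquifer media, Soil media, Topography,
#  Impact of the vadose zone, hydraulic Conductivity).
PESTICIDE_WEIGHTS = {
    "D": 5, "R": 4, "A": 3, "S": 5, "T": 3, "I": 4, "C": 2,
}

def drastic_index(ratings):
    """Weighted-sum vulnerability index; higher means more vulnerable.
    `ratings` maps each parameter letter to a rating in 1..10."""
    return sum(PESTICIDE_WEIGHTS[k] * r for k, r in ratings.items())

# Hypothetical ratings for one mapped grid cell (illustrative only)
cell = {"D": 9, "R": 6, "A": 8, "S": 7, "T": 10, "I": 8, "C": 4}
score = drastic_index(cell)
```

Computed per cell over a GIS grid, such scores produce exactly the kind of vulnerability map the study combines with the nitrate risk assessment. The possible range with these weights is 26 (all ratings 1) to 260 (all ratings 10).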

Relevance: 20.00%

Abstract:

New arguments are presented proving that successive (repeated) measurements have a memory and actually remember each other. The recognition of this peculiarity can essentially change the existing paradigm associated with conventional observation of the behavior of different complex systems, and leads towards the application of an intermediate model (IM). This IM can provide a very accurate fit of the measured data in terms of Prony's decomposition. This decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows comparison of the measured data in cases where a "best fit" model based on specific physical principles is absent. As an example, we consider two X-ray diffractometers (referred to in the paper as A, "cheap", and B, "expensive") that are used, after proper calibration, for measuring the same substance (corundum, α-Al2O3). The amplitude-frequency response (AFR) obtained within Prony's decomposition can be used to compare the spectra recorded from the (A) and (B) X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition corresponds to an "ideal" experiment without memory, while Prony's decomposition corresponds to a real measurement and can in this case be fitted within the IM. New statistical parameters describing the properties of experimental equipment (irrespective of their internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
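For the simplest case of a single damped exponential, Prony-style fitting reduces to estimating one linear-prediction coefficient. The sketch below uses synthetic data, not the diffractometer spectra, and illustrates only this degenerate one-term case of the decomposition.

```python
import math

def prony_single(samples, dt):
    """Fit one exponential a*exp(s*t) to uniformly sampled data.
    For a single term, Prony's linear-prediction step is the
    least-squares estimate of the ratio x[k+1]/x[k]."""
    num = sum(samples[k + 1] * samples[k] for k in range(len(samples) - 1))
    den = sum(samples[k] ** 2 for k in range(len(samples) - 1))
    z = num / den            # discrete pole
    s = math.log(z) / dt     # continuous decay rate
    a = samples[0]           # amplitude from the first sample
    return a, s

# Synthetic check: x(t) = 2 * exp(-0.5 t), sampled every 0.1
dt = 0.1
data = [2 * math.exp(-0.5 * k * dt) for k in range(50)]
a, s = prony_single(data, dt)
```

The full method fits a sum of such terms by solving a higher-order linear-prediction system and then a polynomial root-finding step, which is what yields the compact parameter set the abstract refers to.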

Relevance: 20.00%

Abstract:

Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult, a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
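A greedy placement rule in the spirit of interference-aware scheduling can be sketched as follows; the scoring function and host records are assumptions for illustration, not the paper's exact mechanism or estimator.

```python
def pick_host(hosts, vm_cpu):
    """Greedy placement sketch: among hosts with enough free CPU for the
    VM, choose the one whose current CPU pressure (a crude proxy for
    co-hosted interference) is lowest. The scoring rule is an assumption."""
    feasible = [h for h in hosts if h["free_cpu"] >= vm_cpu]
    return min(feasible, key=lambda h: h["used_cpu"] / h["capacity"])

# Hypothetical host inventory (illustrative values only)
hosts = [
    {"name": "h1", "capacity": 100, "used_cpu": 80, "free_cpu": 20},
    {"name": "h2", "capacity": 100, "used_cpu": 30, "free_cpu": 70},
]
chosen = pick_host(hosts, vm_cpu=15)
```

A power-aware variant would add an energy term to the score, trading off lower interference against consolidating load onto fewer active hosts.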

Relevance: 20.00%

Abstract:

Currently, the environment in which companies operate has become increasingly challenging, given the combination of competitiveness and financial crisis. Exploring new solutions and identifying gaps or imbalances in the market has therefore become imperative for the development of new entities. It is on this premise that the concept of the Start-Up emerges, its purpose being the development of new products and innovative business models. As a result, these companies become a revitalizing element of the economic fabric of the countries in which they operate. To enable these entities to make the most of their activities, it is crucial to identify the various sources of financial resources and the counterparts they demand. However, given their innovative character, several risks are naturally associated with these companies, so they face greater difficulties when trying to access the financial resources they need. This dissertation therefore analyses the financing trends of Start-Ups, motivated by the fact that this topic has not yet been properly explored and by the lack of knowledge of this reality in Portugal. Combining this problem with the motivation for the present dissertation, a set of surveys was carried out, complemented by multivariate empirical models applied to cross-sectional and panel data. The results of the empirical investigation made it possible to establish the influence of a set of variables and to explain the orientation and financing structure of Portuguese Start-Ups. Among the variables investigated, Firm Size, Asset Structure and Legal Form had a significant influence on the financing of national Start-Ups.

Relevance: 20.00%

Abstract:

In today's society, concern for the environment, on the one hand, and for comfort and safety, on the other, makes energy sustainability a form of intervention suited to the demands of quality of life and economic efficiency. In this context, the added value of the Smart Panel, an intelligent switchboard created to achieve those goals, is unavoidable, and it motivated the topic of this work. The aim is therefore to demonstrate the potential of the Smart Panel, a new switchboard concept that optimizes the dynamic and pragmatic management of electrical installations, namely the control, monitoring and actuation of devices, both locally and, above all, remotely. This objective is supported by others that reinforce it: understanding the operation of the traditional switchboard (QE), comparing it with the Smart Panel, and demonstrating the advantages of using this new technology. The broader purpose of the work is, on the one hand, to put academic training at the service of good future professional performance and, on the other, to follow the technological trend inherent to today's need for control. Thus, the work first gives a general overview of the traditional switchboard, so that its operation, applications and potential can be understood. The explanation includes the theoretical concepts underlying the design, production and assembly of the QE: the various components it integrates and the functions they perform, the interactions they establish among themselves, and the standards with which they must comply. Images were included to support the explanations, descriptions and technical procedures.
The third chapter addresses the Smart Panel technology, introducing the underlying concept and objectives. It explains the operation of this system, which combines protection, supervision, control, storage and preventive maintenance, and shows how the ability to read data from, communicate with and command the switchboard remotely represents a technological revolution that helps meet modern needs for safety, comfort and economy. The fourth, fifth and sixth chapters cover the practical component of the work. Chapter four presents training material and a subsequent demonstration of the test kit that will support the presentation of the Smart Panel technology to customers. In addition to this training material, chapter five provides the panel builder with a list of verification procedures to be carried out on the communication components that make up the Smart Panel. Finally, chapter six includes two case studies: study A focuses on applying the Smart Panel technology to an existing traditional switchboard design, which involves surveying all the existing equipment and then transposing it to the Smart Panel technology so as to meet the requirements set by the customer; study B consists of designing a switchboard with the Smart Panel technology from given customer requirements and needs, so as to guarantee the desired functions.