990 results for Real database
Abstract:
The increase in the amount of spatial data collected has motivated the development of geovisualisation techniques, aiming to provide an important resource to support knowledge extraction and decision making. One of these techniques is the 3D graph, which provides a dynamic and flexible way to extend the analysis of the results obtained by spatial data mining algorithms, especially when several georeferenced objects occur at the same location. This work presents, as an original contribution, the enhancement of visual resources in a spatial data mining computational environment; the efficiency of these techniques is then demonstrated using a real database. The application proved very useful for interpreting the results obtained, such as patterns occurring at the same locality, and for supporting activities that can be carried out from the visualisation of the results. © 2013 Springer-Verlag.
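As an illustration of the kind of 3D graph described above, the following minimal Python sketch plots the pattern count as a third axis over longitude/latitude, so that several incidences at the same location stay visible instead of overplotting in 2D; all coordinates and counts are hypothetical placeholders, not data from the paper.

```python
# Minimal 3D geovisualisation sketch: the number of mined patterns found at
# each (longitude, latitude) becomes the vertical axis, so several incidences
# at the same location remain distinguishable.
# All coordinates and counts below are hypothetical placeholders.
import matplotlib.pyplot as plt

incidences = {            # (longitude, latitude) -> patterns mined at that spot
    (-46.63, -23.55): 7,  # several rules/clusters found at the same location
    (-46.70, -23.60): 2,
    (-46.58, -23.52): 4,
}

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xs = [lon for lon, _ in incidences]
ys = [lat for _, lat in incidences]
zs = list(incidences.values())
ax.bar3d(xs, ys, [0] * len(zs), 0.005, 0.005, zs, shade=True)
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_zlabel("pattern count")
plt.show()
```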
Abstract:
In this work, an interval type-2 fuzzy computational model was developed using the type-2 fuzzy MATLAB toolbox, with the aim of estimating the number of hospitalizations of patients with respiratory diseases. The motivation for creating this model is to assist decision making in hospital environments where there are no medical professionals or equipment available to provide the care the population needs. The work began with the study of fuzzy logic, fuzzy inference systems and the fuzzy toolbox. A real database provided by the Departamento de Informática do Sistema Único de Saúde (DATASUS) and the Companhia de Tecnologia de Saneamento Básico (CETESB) made it possible to build the model. The analyzed database consists of the daily number of patients admitted with respiratory diseases to the public hospital of São José dos Campos during 2009, together with factors such as PM10, SO2, wind and humidity. These factors were treated as input variables, from which the number of admissions per day, the output variable of the model, is obtained. The type-2 Mamdani fuzzy control method was used for the data analysis. The performance of the model developed in this work was then compared with that of the same model using type-1 fuzzy logic. Finally, the validity of the models was assessed using the ROC curve.
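Since the type-2 toolbox referred to above is MATLAB-based, the following sketch illustrates the same idea with a type-1 Mamdani system in Python using scikit-fuzzy, keeping only two of the four inputs (PM10 and humidity) for brevity; all membership functions, rules and values are illustrative assumptions, not the paper's model.

```python
# Type-1 Mamdani sketch of the admissions estimator (illustrative only).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

pm10 = ctrl.Antecedent(np.arange(0, 151, 1), "pm10")            # µg/m³
humidity = ctrl.Antecedent(np.arange(0, 101, 1), "humidity")    # %
admissions = ctrl.Consequent(np.arange(0, 31, 1), "admissions")  # per day

pm10["low"] = fuzz.trimf(pm10.universe, [0, 0, 60])
pm10["high"] = fuzz.trimf(pm10.universe, [40, 150, 150])
humidity["dry"] = fuzz.trimf(humidity.universe, [0, 0, 50])
humidity["humid"] = fuzz.trimf(humidity.universe, [40, 100, 100])
admissions["few"] = fuzz.trimf(admissions.universe, [0, 0, 10])
admissions["many"] = fuzz.trimf(admissions.universe, [5, 30, 30])

rules = [
    ctrl.Rule(pm10["high"] & humidity["dry"], admissions["many"]),
    ctrl.Rule(pm10["low"] & humidity["humid"], admissions["few"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["pm10"] = 90.0
sim.input["humidity"] = 30.0
sim.compute()
print(round(sim.output["admissions"], 1))  # defuzzified daily admissions
```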
Abstract:
The Internet of Things is a new communication paradigm that extends the virtual world (the Internet) to the real world through interfaces and interaction between objects. It will comprise a large number of interconnected heterogeneous devices, which are expected to generate a large volume of data. One of the major challenges for its development is storing and processing this large volume of data within acceptable time frames. This research addresses this challenge by introducing analysis and pattern recognition services into the lower layers of the Internet of Things reference model, seeking to reduce the processing load on the upper layers. The research analysed reference models for the Internet of Things and platforms for developing applications in this context. The implemented architecture extends the LinkSmart Middleware by introducing a pattern recognition module, which implements algorithms for value estimation, outlier detection and group discovery in the raw data coming from data sources. The new module was integrated with the Hadoop Big Data platform and uses the algorithm implementations of the Mahout framework. This work highlights the importance of cross-layer communication integrated into this new architecture. The experiments carried out in the research used real databases from the Smart Santander project, in order to validate the new IoT architecture integrated with the analysis and pattern recognition services and the cross-layer communication.
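A simplified, single-node Python analogue of the three pattern-recognition services mentioned above (value estimation, outlier detection and group discovery); the actual module runs on Hadoop with Mahout implementations, and the sensor readings below are placeholders.

```python
# Simplified analogue of the module's three services on a small reading batch.
import numpy as np
from sklearn.cluster import KMeans

readings = np.array([21.0, 21.3, 20.8, 21.1, 35.2, 21.2, 20.9, 21.4])

# 1) Value estimation: fill a missing reading with a robust central estimate.
estimated_value = float(np.median(readings))

# 2) Outlier detection: flag readings more than 3 MADs from the median.
mad = np.median(np.abs(readings - np.median(readings)))
outliers = readings[np.abs(readings - np.median(readings)) > 3 * mad]

# 3) Group discovery: cluster the readings (reshaped into a feature matrix).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    readings.reshape(-1, 1))

print(estimated_value, outliers, labels)
```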
Abstract:
Part of the work of an insurance company is to keep claims reserves, known as technical reserves, in order to mitigate the risk inherent in its activities and comply with legal obligations. There are several methods for estimating claims reserves, both deterministic and stochastic. One of the most widely used is the deterministic Chain Ladder method, which is simple to apply. However, deterministic methods produce only point estimates, which is why stochastic methods have become increasingly popular: they are capable of producing interval estimates, measuring the variability inherent in the technical reserves. In this study, deterministic methods (Grossing Up, Link Ratio and Chain Ladder) and stochastic methods (Thomas Mack and the Bootstrap associated with an overdispersed Poisson model) are applied to estimate the claims reserves arising from automobile material damage claims occurred up to December 2012. The data used in this research come from a real database provided by AXA Portugal. A comparison of the results obtained by the different methods is presented.
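As a concrete illustration of the Chain Ladder method mentioned above, the sketch below computes volume-weighted development factors on a toy cumulative claims triangle and projects the corresponding reserves; the triangle values are illustrative, not AXA Portugal data.

```python
# Chain Ladder on a toy cumulative claims triangle (illustrative values).
import numpy as np

# rows = accident years, columns = development periods; NaN = future cells
triangle = np.array([
    [1000.0, 1500.0, 1700.0],
    [1100.0, 1650.0, np.nan],
    [1200.0, np.nan, np.nan],
])

n = triangle.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(triangle[:, j + 1])          # rows observed in both columns
    factors.append(triangle[mask, j + 1].sum() / triangle[mask, j].sum())

# Project the unknown cells with the development factors.
projected = triangle.copy()
for i in range(triangle.shape[0]):
    for j in range(n - 1):
        if np.isnan(projected[i, j + 1]):
            projected[i, j + 1] = projected[i, j] * factors[j]

ultimates = projected[:, -1]
latest = np.array([row[~np.isnan(row)][-1] for row in triangle])
reserves = ultimates - latest                      # point estimates per year
print(factors, reserves)
```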
Abstract:
We use data on exchange rates and consumer price indices and the weighting matrix derived by Bayoumi, Lee and Jaewoo (2006) to calculate consumer price index-based REER. The main novelties of our database are that (1) it includes data for 178 countries (many more than in any other publicly available database) plus an external REER for the euro area, using a consistent methodology; (2) it includes up-to-date REER values, such as data for January 2012; and (3) it is relatively easy to calculate the REER against any arbitrary group of countries. The annual database is complete for 172 countries and the euro area for 1992-2011, and data is available for six other countries for a shorter period. For several countries annual data is available for earlier years as well, e.g. data is available for 67 countries from 1960. The monthly database is complete for 138 countries for January 1995-January 2012, and data is also available for 15 other countries for a shorter period. The indicators calculated by us are freely downloadable and will be irregularly updated.
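The sketch below illustrates the standard CPI-based REER construction as a trade-weighted geometric average of bilateral real exchange rates; the weights, price indices and exchange rates are invented for illustration, and the database's exact formula and rebasing may differ in detail.

```python
# CPI-based REER sketch for one country against two trading partners.
# All weights, CPIs and exchange rates below are illustrative placeholders.
import math

# Nominal rates expressed as USD per unit of each currency (common numeraire).
home = {"cpi": 112.0, "usd_per_unit": 0.25}
partners = [
    {"weight": 0.6, "cpi": 105.0, "usd_per_unit": 1.10},    # hypothetical partner A
    {"weight": 0.4, "cpi": 120.0, "usd_per_unit": 0.0075},  # hypothetical partner B
]

log_reer = 0.0
for p in partners:
    # bilateral real exchange rate: relative price level in a common currency
    relative_price = (home["cpi"] * home["usd_per_unit"]) / (
        p["cpi"] * p["usd_per_unit"])
    log_reer += p["weight"] * math.log(relative_price)

# Raw trade-weighted geometric average; in practice the series is rebased
# to an index value (e.g. 100) in a chosen base period.
print(round(math.exp(log_reer), 4))
```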
Abstract:
This research investigates the claim that Change Data Capture (CDC) technologies capture data changes in real time. Based on theory, our hypothesis states that real-time CDC is not achievable with traditional approaches (log scanning, triggers and timestamps): traditional approaches to CDC require a resource to be polled, which prevents true real-time CDC. We propose an approach to CDC that encapsulates the data source with a set of web services. These web services propagate the changes to the targets and eliminate the need for polling. Additionally, we propose a framework for CDC technologies that allows changes to flow from source to target. This paper discusses current CDC technologies and presents the theory about why they are unable to deliver changes in real time. We then discuss our web service approach to CDC and the accompanying framework, explaining how they can produce real-time CDC. The paper concludes with a discussion of the research required to investigate the real-time capabilities of CDC technologies. © 2010 IEEE.
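The following sketch illustrates the push-based idea: the data source is wrapped so that every committed change is propagated immediately to registered targets, removing the polling step; plain Python callbacks stand in for the web-service endpoints proposed in the paper.

```python
# Push-style CDC sketch: every committed change is pushed to registered
# targets instead of targets polling a log, trigger table or timestamp column.
from typing import Callable, Dict, List

class CapturedDataSource:
    def __init__(self) -> None:
        self._rows: Dict[int, dict] = {}
        self._subscribers: List[Callable[[str, int, dict], None]] = []

    def subscribe(self, target: Callable[[str, int, dict], None]) -> None:
        """Register a change target (stands in for a web-service endpoint)."""
        self._subscribers.append(target)

    def upsert(self, key: int, row: dict) -> None:
        op = "UPDATE" if key in self._rows else "INSERT"
        self._rows[key] = row
        for target in self._subscribers:   # push on commit, no polling interval
            target(op, key, row)

source = CapturedDataSource()
source.subscribe(lambda op, key, row: print(f"{op} {key}: {row}"))
source.upsert(1, {"name": "alice"})
source.upsert(1, {"name": "alice", "active": True})
```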
Abstract:
This paper presents an agent-based approach to modelling individual driver behaviour under the influence of real-time traffic information. The driver behaviour models developed in this study are based on a behavioural survey of drivers which was conducted on a congested commuting corridor in Brisbane, Australia. Commuters' responses to travel information were analysed and a number of discrete choice models were developed to determine the factors influencing drivers' behaviour and their propensity to change route and adjust travel patterns. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined. A case study implementing a simple agent-based route choice decision model within a microscopic traffic simulation tool is also presented. Driver-vehicle units (DVUs) were modelled as autonomous software components that can each be assigned a set of goals to achieve and a database of knowledge comprising certain beliefs, intentions and preferences concerning the driving task. Each DVU provided route choice decision-making capabilities, based on perception of its environment, that were similar to the described intentions of the driver it represented. The case study clearly demonstrated the feasibility of the approach and the potential to develop more complex driver behavioural dynamics based on the belief-desire-intention agent architecture. (C) 2002 Elsevier Science Ltd. All rights reserved.
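A minimal sketch of one route-choice decision step for such a driver-vehicle unit, using a binary logit over staying versus diverting when real-time information reports a travel-time saving; the utility coefficients are hypothetical and are not the estimates from the Brisbane survey.

```python
# One binary-logit route-choice step for a driver-vehicle unit (DVU).
# Coefficients are hypothetical placeholders, not survey estimates.
import math
import random

def divert_probability(time_saving_min: float, familiarity: float,
                       beta_time: float = 0.15, beta_fam: float = -0.8,
                       asc: float = -1.0) -> float:
    """Probability the agent switches to the alternative route."""
    utility_divert = asc + beta_time * time_saving_min + beta_fam * familiarity
    return 1.0 / (1.0 + math.exp(-utility_divert))

# Decision step for a DVU told the alternative route saves 12 minutes,
# with moderate familiarity with (and preference for) its usual route.
p = divert_probability(time_saving_min=12.0, familiarity=0.3)
chooses_to_divert = random.random() < p
print(round(p, 3), chooses_to_divert)
```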
Abstract:
This paper aims to study the relationship between the debt level and the asset structure of Brazilian companies in the agribusiness sector, since evaluating the mechanisms for fund-raising and guarantees is a current and relevant discussion. Granger causality tests and Vector Autoregression (VAR) models were used to conduct a comparative analysis, applied to a financial database of publicly traded Brazilian agribusiness companies, in particular the Agriculture and Fisheries and the Food and Beverages sectors, over a 10-year period (1997-2007), based on quarterly series available in the Economatica® database. The results showed that changes in leverage generate variations in the tangibility of the companies, a fact that can be explained by the strong demand for funding secured by the fiduciary transfer of fixed assets, which facilitates access to credit for agribusiness companies, extending payment terms and lowering interest rates.
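A minimal sketch of the methodology described above, running a Granger causality test and a companion VAR with statsmodels on synthetic quarterly leverage/tangibility series (placeholders for the Economatica data, which is not reproduced here).

```python
# Granger causality and VAR on synthetic quarterly series (placeholders).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 44  # ~11 years of quarters
leverage = np.cumsum(rng.normal(0, 0.02, n)) + 0.5
# tangibility reacts to leverage with a two-quarter lag (plus noise)
tangibility = 0.4 + 0.6 * np.roll(leverage, 2) + rng.normal(0, 0.01, n)
data = pd.DataFrame({"tangibility": tangibility[4:], "leverage": leverage[4:]})

# H0: 'leverage' (2nd column) does not Granger-cause 'tangibility' (1st column)
grangercausalitytests(data[["tangibility", "leverage"]], maxlag=4)

# Companion vector autoregression, lag order chosen by AIC
results = VAR(data).fit(maxlags=4, ic="aic")
print(results.summary())
```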
Abstract:
The aim of the present study was to evaluate the genetic correlations among real-time ultrasound carcass, BW, and scrotal circumference (SC) traits in Nelore cattle. Carcass traits, measured by real-time ultrasound of the live animal, were recorded from 2002 to 2004 on 10 farms across 6 Brazilian states on 2,590 males and females ranging in age from 450 to 599 d. Ultrasound records of LM area (LMA) and backfat thickness (BF) were obtained from cross-sectional images between the 12th and 13th ribs, and rump fat thickness (RF) was measured between the hook and pin bones over the junction between the gluteus medius and biceps femoris muscles. Also, BW (n = 22,778) and SC (n = 5,695) were recorded on animals born between 1998 and 2003. The BW traits were 120-, 210-, 365-, 450-, and 550-d standardized BW (W120, W210, W365, W450, and W550), plus BW (WS) and hip height (HH) on the ultrasound scanning date. The SC traits were 365-, 450-, and 550-d standardized SC (SC365, SC450, and SC550). For the BW and SC traits, the database used was from the Nelore Breeding Program-Nelore Brazil. The genetic parameters were estimated with multivariate animal models and REML. Estimated genetic correlations between LMA and the other traits were 0.06 (BF), -0.04 (RF), 0.05 (HH), 0.58 (WS), 0.53 (W120), 0.62 (W210), 0.67 (W365), 0.64 (W450 and W550), 0.28 (SC365), 0.24 (SC450), and 0.00 (SC550). Estimated genetic correlations between BF and the other traits were 0.74 (RF), -0.32 (HH), 0.19 (WS), -0.03 (W120), -0.10 (W210), 0.04 (W365), 0.01 (W450), 0.06 (W550), 0.17 (SC365 and SC450), and -0.19 (SC550). Estimated genetic correlations between RF and the other traits were -0.41 (HH), -0.09 (WS), -0.13 (W120), -0.09 (W210), -0.01 (W365), 0.02 (W450), 0.03 (W550), 0.05 (SC365), 0.11 (SC450), and -0.18 (SC550). These estimates indicate that selection for carcass traits measured by real-time ultrasound should not cause antagonism in the genetic improvement of SC and BW traits. Also, selection to increase HH might decrease subcutaneous fat as a correlated response. Therefore, to obtain animals suited to specific tropical production systems, carcass, BW, and SC traits should be considered in selection programs.
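For reference, the sketch below shows how a genetic correlation is obtained from the additive-genetic (co)variance components that a multivariate REML animal model returns; the component values are hypothetical and merely chosen to land near the LMA-W365 estimate reported above.

```python
# Genetic correlation from additive-genetic (co)variance components:
#   r_g = cov_g(trait1, trait2) / sqrt(var_g(trait1) * var_g(trait2))
# The component values below are hypothetical, not the Nelore estimates.
import math

var_g_lma = 18.0     # additive genetic variance, LM area
var_g_w365 = 520.0   # additive genetic variance, 365-d standardized BW
cov_g = 64.8         # additive genetic covariance between the two traits

r_g = cov_g / math.sqrt(var_g_lma * var_g_w365)
print(round(r_g, 2))  # ~0.67, the magnitude reported above for LMA-W365
```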
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an aspect of biological informatics, the marriage of information technology and biology, involving the study of real-world phenomena using virtual plants derived from L-systems simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Not much consideration has been given to the problem of persistent storage for these simulations, and current procedures for querying data generated by L-systems for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modeling process (L-DBM) between L-systems and database systems. The paper shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. It further describes the idea of pre-computing recursive structures in the data into derived attributes using compiler generation, and supplies a method to allow a correspondence between biologists' terms and compiler-generated terms in a biologist computing environment. Once the L-DBM receives a specific set of L-system productions and their declarations, it can generate the corresponding schema for both simple correspondence terminology and complex recursive-structure data attributes and relationships.
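A minimal sketch of the L-DBM idea described above: expand an L-system and persist both its productions and the generated strings in a relational schema; the grammar (Lindenmayer's algae system) and the two-table schema are illustrative, not the paper's generated schema.

```python
# Expand an L-system and store productions and derivations in SQLite.
import sqlite3

productions = {"A": "AB", "B": "A"}   # Lindenmayer's algae L-system
axiom = "A"

def derive(axiom: str, rules: dict, steps: int) -> list:
    """Return the axiom plus each derived string, one per rewriting step."""
    strings, current = [axiom], axiom
    for _ in range(steps):
        current = "".join(rules.get(symbol, symbol) for symbol in current)
        strings.append(current)
    return strings

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE production (predecessor TEXT PRIMARY KEY, successor TEXT);
    CREATE TABLE derivation (step INTEGER PRIMARY KEY, string TEXT);
""")
conn.executemany("INSERT INTO production VALUES (?, ?)", productions.items())
conn.executemany("INSERT INTO derivation VALUES (?, ?)",
                 enumerate(derive(axiom, productions, 5)))

print(conn.execute("SELECT * FROM derivation ORDER BY step").fetchall())
```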
Abstract:
The study of electricity market operation has been gaining increasing importance in recent years, as a result of the new challenges produced by the restructuring of electricity markets. This restructuring has increased the competitiveness of the market, but also its complexity. The growing complexity and unpredictability of the market's evolution consequently make decision making more difficult, so the intervening entities are forced to rethink their behaviour and market strategies. Currently, a large amount of information concerning electricity markets is available. These data, covering numerous aspects of electricity market operation, are accessible free of charge and are essential for understanding and suitably modelling electricity markets. This paper proposes a tool that is able to handle, store and dynamically update such data. The proposed tool is expected to be of great importance in improving the comprehension of electricity markets and of the interactions among the involved entities.
Abstract:
This paper presents the first phase of the redevelopment of the Electric Vehicle Scenario Simulator (EVeSSi) tool. A new methodology for generating traffic demand scenarios for the Simulation of Urban MObility (SUMO) urban traffic simulation tool is described. This methodology is based on the Portuguese census database and generates a synthetic population for a given area under study. A realistic case study of a Portuguese city, Vila Real, is assessed: for this area the road network was created along with a synthetic population and public transport. Traffic results were obtained and an electric bus fleet was evaluated, assuming that the current fleet will be replaced in the near future. The energy required to charge the electric fleet overnight was estimated in order to evaluate the impact this would have on the local electricity network.
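A back-of-the-envelope version of the overnight charging estimate mentioned above; every figure below is a hypothetical placeholder, not a result from the study.

```python
# Rough overnight charging energy for an electric bus fleet (placeholder values).
fleet_size = 20               # buses
daily_distance_km = 180.0     # per bus, e.g. taken from simulated routes
consumption_kwh_per_km = 1.3  # typical order of magnitude for an e-bus
charger_efficiency = 0.9
charging_window_h = 8.0       # overnight window

energy_kwh = fleet_size * daily_distance_km * consumption_kwh_per_km / charger_efficiency
average_power_kw = energy_kwh / charging_window_h
print(f"{energy_kwh:.0f} kWh overnight, ~{average_power_kw:.0f} kW average demand")
```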
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.