844 results for Data mining and knowledge discovery


Relevance:

100.00%

Publisher:

Abstract:

An increasing degree of attention is being paid to the ecosystem services that insect pollinators supply, and to the economic value of these services. Recent research suggests that a range of factors is contributing to a global decline in pollination services, which are often used as a "headline" ecosystem service when communicating the concept of ecosystem services and how it ties people's well-being to the condition of ecosystems and the biodiversity found therein. Our paper offers a conceptual framework for measuring the economic value of changes in insect pollinator populations, and then reviews the evidence on the empirical magnitude of these values (both market and non-market). This allows us to highlight where the largest gaps in knowledge are, where the greatest conceptual and empirical challenges remain, and where research is most needed.

Relevance:

100.00%

Publisher:

Abstract:

Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Because predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although convenient for their simplicity, can seldom capture all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods lies in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys graphical methods often used for predictive performance evaluation. By presenting these methods in a common framework, we hope to shed some light on which methods are more suitable to use in different situations.
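As an illustration of one such graphical method, the sketch below builds an ROC curve from classifier scores by sweeping the decision threshold. The labels and scores are invented for illustration, not drawn from the paper.

```python
# Sketch: ROC curve points from classifier scores (one point per example;
# ties between scores are not merged in this minimal version).

def roc_points(labels, scores):
    """Return (FPR, TPR) pairs obtained by sweeping the decision threshold."""
    pairs = sorted(zip(scores, labels), reverse=True)  # highest score first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
curve = roc_points(labels, scores)
```

Plotting these points yields the familiar ROC plot, which displays the true-positive/false-positive trade-off instead of collapsing it into a single number.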

Relevance:

100.00%

Publisher:

Abstract:

The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as the programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, the requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using eXtensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure calls based on XML (XML-RPC). The integration of the Java language, XML and XML-RPC technologies makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides a simple graphical user interface (GUI). The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment, organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". (C) 2010 Elsevier B.V. All rights reserved.
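A minimal sketch of the XML-RPC access pattern described above, using only Python's standard library. The method name `get_shot_data`, its return value, and the local server are illustrative assumptions, not the actual TCABR API.

```python
# Sketch: an XML-RPC server exposing one data-retrieval method, and a
# client calling it, mirroring the client/server access layer described.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_shot_data(shot_number):
    """Hypothetical method; a real system would query the acquisition store."""
    return {"shot": shot_number, "samples": [0.1, 0.2, 0.3]}

# Port 0 lets the OS pick a free port for this local demonstration.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(get_shot_data)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")
data = client.get_shot_data(1234)
server.shutdown()
```

Every participating laboratory would call the same registered methods, which is what makes the access layer uniform across sites.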

Relevance:

100.00%

Publisher:

Abstract:

The Mario Schenberg gravitational wave detector has started its commissioning phase at the Physics Institute of the University of Sao Paulo. We have collected almost 200 h of data from the instrument in order to verify its behavior and performance. We have also been developing a data acquisition system for it based on a VXI system, composed of an analog-to-digital converter and a GPS receiver for time synchronization. We have been building the software that controls and sets up the data acquisition. Here we present an overview of the Mario Schenberg detector and its data acquisition system, some results from the first commissioning run, and solutions for some problems we have identified.
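As a hypothetical sketch of how GPS time synchronization combines with a fixed-rate ADC, the snippet below assigns a timestamp to each sample given the GPS time of the first sample. The sampling rate and start time are assumed for illustration and are not the detector's actual parameters.

```python
# Sketch: GPS-anchored timestamps for a block of ADC samples. With a
# fixed sampling rate, only the GPS time of the first sample is needed.
SAMPLE_RATE_HZ = 16384.0           # assumed rate, illustration only
gps_start_time = 1_300_000_000.0   # assumed GPS seconds of first sample

def timestamps(n_samples, start, rate):
    """Time of sample i is start + i / rate."""
    return [start + i / rate for i in range(n_samples)]

ts = timestamps(4, gps_start_time, SAMPLE_RATE_HZ)
```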

Relevance:

100.00%

Publisher:

Abstract:

We review some issues related to the implications of different missing-data mechanisms for statistical inference in contingency tables, and consider simulation studies to compare the results obtained under such models with those obtained when the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random (MAR) and missing completely at random (MCAR) models are more efficient even for small sample sizes, there are exceptions where they may not improve on the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space, as well as lack of identifiability of the parameters of saturated models, may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that an MNAR model is misspecified because the estimate is on the boundary of the parameter space.
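A toy simulation of the MCAR mechanism discussed above (not the authors' study design): when missingness is independent of the value, the complete-case estimate of a proportion remains consistent. All parameters are invented for illustration.

```python
# Sketch: under MCAR, dropping incomplete units does not bias the
# estimate of a binary proportion; it only reduces the sample size.
import random
random.seed(42)

n = 100_000
p_true = 0.3       # assumed success probability
p_observe = 0.6    # probability a unit is observed, independent of its value

observed = []
for _ in range(n):
    y = 1 if random.random() < p_true else 0
    if random.random() < p_observe:   # missingness ignores y -> MCAR
        observed.append(y)

p_hat = sum(observed) / len(observed)  # complete-case estimate
```

Under MAR or MNAR the observation probability would depend on covariates or on `y` itself, and the same complete-case estimate could be badly biased.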

Relevance:

100.00%

Publisher:

Abstract:

GPS technology is now embedded in portable, low-cost electronic devices to track the movements of mobile objects. This has greatly impacted the transportation field by creating a novel and rich source of traffic data on the road network. Although GPS devices hold significant promise for overcoming problems such as underreporting, respondent fatigue, inaccuracies and other human errors in data collection, the technology is still relatively new and raises many issues for potential users. These issues tend to revolve around three areas: reliability, data processing and the related applications. This thesis studies GPS tracking from the methodological, technical and practical aspects. It first evaluates the reliability of GPS-based traffic data using data from an experiment covering three traffic modes (car, bike and bus) traveling along the road network. It then outlines the general procedure for processing GPS tracking data and discusses related issues uncovered by using real-world GPS tracking data from 316 cars. Thirdly, it investigates the influence of road network density on finding optimal locations for enhancing travel efficiency and decreasing travel cost. The results show that the geographical positioning is reliable. Velocity is slightly underestimated, whereas altitude measurements are unreliable. Post-processing techniques with auxiliary information are found necessary and important for resolving the inaccuracy of GPS data. The density of the road network influences the finding of optimal locations; the influence stabilizes at a certain level and does not deteriorate when the node density is higher.
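One basic post-processing step of the kind discussed above can be sketched as estimating speed from two consecutive GPS fixes using the haversine distance. The coordinates and the one-second fix interval are illustrative, not taken from the thesis data.

```python
# Sketch: speed estimation from two consecutive GPS fixes.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6_371_000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Two fixes one second apart; a 0.0001-degree latitude step is roughly 11 m.
d = haversine_m(59.3293, 18.0686, 59.3294, 18.0686)
speed_mps = d / 1.0
```

Real pipelines would additionally filter positioning jitter before differencing, since raw fix-to-fix speeds amplify GPS noise.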

Relevance:

100.00%

Publisher:

Abstract:

Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic base. To this end, it combines classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, later combined into a global solution. Applied to an example of book-loan transactions from the Karl A. Boedecker Library, the proposed algorithm produces interpretable and thematically coherent outputs, and exhibits lower stress than the classical scaling solution. A study of the stability of the network representation under sampling variation of the data, based on simulations involving 500 replicates at 6 levels of edge-inclusion probability, provides evidence in favor of the validity of the results obtained.
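The two building blocks combined by the algorithm, classical multidimensional scaling and orthogonal Procrustes alignment, can be sketched with NumPy as follows. The point set is illustrative, not the library-loan network, and the combination into partial/global solutions is omitted.

```python
# Sketch: classical MDS embeds a squared-distance matrix; Procrustes
# rotation aligns one point configuration to another.
import numpy as np

def classical_mds(d2, k=2):
    """Embed an n x n matrix of squared distances in k dimensions."""
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j                        # double centering
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:k]             # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def procrustes_rotation(x, y):
    """Orthogonal matrix r minimizing ||x @ r - y|| in the Frobenius norm."""
    u, _, vt = np.linalg.svd(x.T @ y)
    return u @ vt

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # squared distances
emb = classical_mds(d2)                                  # recover a layout
r = procrustes_rotation(emb, pts - pts.mean(0))          # align to original
aligned = emb @ r
```

In the iterative algorithm of the paper, Procrustes alignment is what allows partial embeddings computed on subsets to be stitched into one coherent global configuration.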

Relevance:

100.00%

Publisher:

Abstract:

This work seeks to analyze and understand whether applying data mining techniques to customer acquisition processes for credit cards, specifically for customers who do not hold a bank checking account, can bring positive results to companies that run active customer acquisition processes. Three techniques widely recognized in the academic community are explored: logistic regression, decision trees, and neural networks. The object of study is a company in the financial sector, specifically its processes for acquiring non-account-holder customers for the credit card product. Results of applying the models to past credit card sales campaigns targeting non-account holders are presented, in order to verify whether statistical models that separate the potential customers most likely to subscribe from those least likely can translate into financial gains. These gains may come from reduced marketing costs, by approaching only the customers with the highest probability of responding positively to the campaign. The theoretical foundation introduces the concepts of the credit card market, the telemarketing channel, CRM, and data mining techniques. The work presents practical examples of applying the aforementioned techniques and verifies the potential financial gains. The results indicate that there are great opportunities for applying data mining techniques to customer acquisition processes, enabling the rationalization of the operation in terms of acquisition costs.
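A sketch of the propensity-scoring idea behind such campaigns, using a small logistic regression fitted by gradient descent on synthetic data. The features, acceptance model, and 20% contact budget are invented for illustration and are not the company's data or models.

```python
# Sketch: score prospects with logistic regression and contact only the
# top-scoring fraction, mirroring the marketing cost-reduction idea.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                 # two synthetic features
true_w = np.array([1.5, -1.0])              # assumed true effect sizes
p = 1 / (1 + np.exp(-(x @ true_w)))
y = (rng.random(n) < p).astype(float)       # 1 = accepted the credit card

w = np.zeros(2)
for _ in range(500):                        # plain gradient descent on the
    pred = 1 / (1 + np.exp(-(x @ w)))       # logistic log-likelihood
    w -= 0.5 * (x.T @ (pred - y)) / n

scores = 1 / (1 + np.exp(-(x @ w)))
top = np.argsort(scores)[::-1][: n // 5]    # contact only the top 20%
lift = y[top].mean() / y.mean()             # response rate vs. blanket contact
```

A lift well above 1 is exactly what justifies the cost savings described: the same number of sales is reached with far fewer telemarketing calls.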

Relevance:

100.00%

Publisher:

Abstract:

This work addresses the application of data mining tools and the data warehouse concept to the collection and analysis of data obtained from the actions of the São Paulo State Department of Education. The dependent variable considered in the analysis is the performance of state schools, obtained from the evaluation scores of SARESP (an examination administered in the state of São Paulo). The data warehouse also holds operational data and data on actions already taken, enabling analysis of their influence on the results.

Relevance:

100.00%

Publisher:

Abstract:

This document represents a doctoral thesis developed at the Brazilian School of Public and Business Administration of the Getulio Vargas Foundation (EBAPE/FGV) through the elaboration of three articles. The research that resulted in the articles falls within the scope of the project entitled "Windows of opportunities and knowledge networks: implications for catch-up in developing countries", funded by the Support Programme for Research and Academic Production of Faculty (ProPesquisa) of the Brazilian School of Public and Business Administration (EBAPE) of the Getulio Vargas Foundation.

Relevance:

100.00%

Publisher:

Abstract:

Online geographic databases have been growing steadily, as they have become a crucial source of information for both social networks and safety-critical systems. Since the quality of such applications is largely related to the richness and completeness of their data, it becomes imperative to develop adaptable and persistent storage systems, able to make use of several sources of information as well as enabling the fastest possible response from them. This work creates a shared and extensible geographic model, able to retrieve and store information from the major spatial sources available. A geography-based system also has very high requirements in terms of scalability, computational power and domain complexity, causing several difficulties for a traditional relational database as the number of results increases. NoSQL systems provide valuable advantages in this scenario, in particular graph databases, which are capable of modeling vast amounts of interconnected data while providing a very substantial increase in performance for several spatial requests, such as finding shortest-path routes and performing relationship lookups with high concurrency. In this work, we analyze the current state of geographic information systems and develop a unified geographic model, named GeoPlace Explorer (GE). GE is able to import and store spatial data from several online sources at a symbolic level in both a relational and a graph database, and several stress tests were performed in order to identify the advantages and disadvantages of each database paradigm.
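The kind of shortest-path request that graph databases answer efficiently can be sketched with Dijkstra's algorithm on a toy road graph. The graph, node names, and edge weights are illustrative, not GeoPlace Explorer data.

```python
# Sketch: cheapest route between two nodes of a weighted road graph,
# the core operation behind shortest-path spatial requests.
import heapq

def dijkstra(graph, src, dst):
    """Return (cost, path) for the cheapest route from src to dst."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

roads = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
}
cost, path = dijkstra(roads, "A", "D")
```

A graph database stores adjacency explicitly, so traversals like this avoid the repeated join work a relational schema would incur as the result set grows.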

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliography