35 results for Data management and analyses
in University of Queensland eSpace - Australia
Abstract:
Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, a limited range of satellite imaging systems often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of the environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools within the context of hierarchy theory. The main steps involved are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism to identify relevant aspects of the monitoring problem and environmental characteristics for selecting remotely sensed data and analysis techniques.
Abstract:
A two-year study of malaria control began in Henan Province following cuts in government malaria spending in 1993. Cost data were collected from all government levels and on treatment-seeking (diagnosis and treatment) from 12,325 suspected malaria cases in two endemic counties. The cost burden was found to fall mainly on patients, although diagnosis and treatment relied on government infrastructure. Good stewardship requires continuing government investment, at least at current levels, along with improved case management. In mainland China, vivax malaria is a significant factor in poverty and economic underdevelopment.
Abstract:
We examined the impact of single-tree selective logging and fuel reduction burns on the abundance of hollow-nesting bird species at a regional scale in southeastern Queensland, Australia. Data were collected on species abundance and habitat structure of dry sclerophyll production forest at 36 sites with known logging and fire histories. Sixteen bird species were recorded, most being resident, territorial, obligate hollow nesters that used hollows that were either small (18 cm diameter). Species densities were typically low, but combinations of two forest management and three habitat structural variables influenced the abundances of eight bird species in different and sometimes conflicting ways. The results suggest that habitat tree management for biodiversity in production forests cannot depend upon habitat structural characteristics alone. Management histories appear to have independent influences on some bird species that are distinguishable from their impacts on habitat structure per se. Rather than managing to maximize species abundances to maintain biodiversity, we may be better off managing to avoid extinctions of populations by identifying thresholds of acceptable fluctuation in populations of hollow-nesting birds and other forest-dependent wildlife, relative to scientifically valid forest management and habitat structural surrogates.
Abstract:
Fuzzy data has grown to be an important factor in data mining. Whenever uncertainty exists, simulation can be used as a model. Simulation is very flexible, although it can involve significant levels of computation. This article discusses fuzzy decision-making using the grey related analysis method. Fuzzy models are expected to better reflect decision-making uncertainty, at some cost in accuracy relative to crisp models. Monte Carlo simulation is used to incorporate experimental levels of uncertainty into the data and to measure the impact of fuzzy decision tree models using categorical data. Results are compared with decision tree models based on crisp continuous data.
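The abstract names the grey related analysis method without detailing it; as a hedged illustration only (the function name, the distinguishing coefficient default, and the sample score matrix below are invented here, not taken from the article), the standard grey relational ranking step can be sketched as: normalize each criterion, measure each alternative's deviation from the ideal reference sequence, and average the resulting grey relational coefficients:

```python
import numpy as np

def grey_relational_grades(data, zeta=0.5):
    # Normalize each criterion to [0, 1], assuming benefit criteria
    # (larger is better); cost criteria would need the inverse scaling.
    mn, mx = data.min(axis=0), data.max(axis=0)
    norm = (data - mn) / (mx - mn)
    # Deviation of each alternative from the ideal reference sequence
    # (all ones after normalization).
    delta = np.abs(1.0 - norm)
    # Grey relational coefficient with distinguishing coefficient zeta.
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # Grade: mean coefficient over criteria, one value per alternative.
    return coeff.mean(axis=1)

# Hypothetical decision matrix: 3 alternatives scored on 3 criteria.
scores = np.array([[7.0, 9.0, 6.0],
                   [8.0, 6.0, 7.0],
                   [9.0, 8.0, 9.0]])
grades = grey_relational_grades(scores)
# The alternative with the highest grade is closest to the ideal.
```

In a Monte Carlo setting like the one the article describes, the input matrix would be perturbed on each replication and the resulting grades compared across runs; that outer loop is omitted here.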
Abstract:
This paper describes and analyses an innovative engineering management course that applies a project management framework in the context of a feasibility study for a prospective research project. The aim is to have students learn aspects of management that will be relevant from the outset of their professional career while simultaneously having immediate value in helping them to manage a research project and capstone design project in their senior year. An integral part of this innovation was the development of a web-based project management tool. While the main objectives of the new course design were achieved, a number of important lessons were learned that would guide the further development and continuous improvement of this course. The most critical of these is the need to achieve the optimum balance in the mind of the students between doing the project and critically analyzing the processes used to accomplish the work.
Abstract:
Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data that may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now more need than ever to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of the data stored in corporate databases has a geographical component. However, not every application requires the same high-quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data, compromising data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
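The pre-aggregation idea can be made concrete with a minimal sketch: coarsening a raster grid by block-averaging so that queries touch fewer cells at the cost of within-block averaging error. This is an illustrative stand-in, not the paper's actual multi-level resolution algorithm, and the function and grid below are invented for the example:

```python
import numpy as np

def preaggregate(grid, factor):
    # Coarsen a raster by block-averaging: each factor x factor block of
    # fine cells becomes one coarse cell. Queries against the coarse level
    # are cheaper but only approximate the fine-level values.
    h, w = grid.shape
    trimmed = grid[:h - h % factor, :w - w % factor]  # drop ragged edges
    blocks = trimmed.reshape(trimmed.shape[0] // factor, factor,
                             trimmed.shape[1] // factor, factor)
    return blocks.mean(axis=(1, 3))

fine = np.arange(16, dtype=float).reshape(4, 4)   # hypothetical 4x4 raster
coarse = preaggregate(fine, 2)                    # 2x2 approximation
# A query over the coarse level now processes a quarter as many cells.
```

A real GIS pyramid would store several such levels and pick the coarsest one whose approximation error stays within the query's tolerance.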
Abstract:
As reported in Volume 1 of Research on Emotions in Organizations (Ashkanasy, Zerbe, & Härtel, 2005), the chapters in this volume are drawn from the best contributions to the 2004 International Conference on Emotion and Organizational Life held at Birkbeck College, London, complemented by additional, invited chapters. (This biennial conference has come to be known as the “Emonet” conference, after the listserv of members.) Previous edited volumes (Ashkanasy, Härtel, & Zerbe, 2000; Ashkanasy, Zerbe, & Härtel, 2002; Härtel, Zerbe, & Ashkanasy, 2004) were published every two years following the Emonet conference. With the birth of this annual Elsevier series came the opportunity for greater focus in the theme of each volume, and for greater scope for invited contributions. This volume contains eight chapters selected from conference contributions for their quality, interest, and appropriateness to the theme of this volume, as well as four invited chapters. We again acknowledge in particular the assistance of the conference paper reviewers (see the appendix). In the year of publication of this volume the 2006 Emonet conference will be held in Atlanta, USA, and will be followed by Volumes 3 and 4 of Research on Emotions in Organizations. Readers interested in learning more about the conferences or the Emonet list should check the Emonet website http://www.uq.edu.au/emonet/.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful, and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take the form of text or of categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton, & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
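Of the example-based methods mentioned above, k-nearest neighbors (Duda & Hart, 1973) is compact enough to sketch directly. This is a minimal illustration with invented toy data and function names, not an implementation from any of the cited works:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query point to every training example.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Labels of the k nearest neighbors; a majority vote decides the class.
    nearest = y_train[np.argsort(dists)[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Hypothetical two-class training set with two numeric features.
X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [7.5, 8.2]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.1, 1.0])))  # → 0
```

The same vote-over-neighbors structure carries over to categorical features if the Euclidean distance is replaced by an overlap or Hamming distance.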