963 results for Engineering, Industrial|Engineering, System Science|Operations Research


Relevance:

100.00%

Publisher:

Abstract:

This study presents a comprehensive mathematical formulation for a short-term open-pit mine block sequencing problem that considers nearly all of the relevant technical aspects in open-pit mining. The proposed model aims to obtain the optimum extraction sequences of the original-size (smallest) blocks over short time intervals and in the presence of real-life constraints, including precedence relationships, machine capacity, grade requirements, processing demands and stockpile management. A hybrid branch-and-bound and simulated annealing algorithm is developed to solve the problem. Computational experiments show that the proposed methodology is a promising way to provide quantitative recommendations for mine planning and scheduling engineers.
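
The abstract does not include an implementation; the sketch below illustrates only the simulated annealing component on a toy instance, assuming one block is mined per period and a swap neighbourhood. The block values, precedence sets and cooling parameters are all hypothetical.

```python
import math
import random

# Hypothetical toy instance: blocks 0..5; precedence[b] = blocks that must
# be extracted before b (e.g. the blocks sitting on top of it).
precedence = {0: set(), 1: set(), 2: {0}, 3: {0, 1}, 4: {2}, 5: {3}}
value = {0: 3.0, 1: 2.0, 2: 5.0, 3: 4.0, 4: 6.0, 5: 1.0}
discount = 0.9  # per-period discount factor, one block per period here

def feasible(seq):
    """A sequence is feasible if every block appears after its predecessors."""
    pos = {b: i for i, b in enumerate(seq)}
    return all(pos[p] < pos[b] for b in seq for p in precedence[b])

def cost(seq):
    """Negative discounted value: lower is better."""
    return -sum(value[b] * discount ** t for t, b in enumerate(seq))

def anneal(seq, t0=10.0, cooling=0.995, iters=20000):
    best, best_c = list(seq), cost(seq)
    cur, cur_c, temp = list(seq), best_c, t0
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]   # swap two blocks
        if feasible(cand):
            delta = cost(cand) - cur_c
            # Metropolis criterion: always accept improvements, sometimes worse moves.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                cur, cur_c = cand, cur_c + delta
                if cur_c < best_c:
                    best, best_c = list(cur), cur_c
        temp *= cooling
    return best, -best_c

start = [0, 1, 2, 3, 4, 5]  # any topological order is a feasible start
seq, val = anneal(start)
print(seq, round(val, 3))
```

In the paper's hybrid scheme, branch-and-bound would handle exact subproblems while annealing explores the sequence space; the sketch covers only the latter.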

Relevance:

100.00%

Publisher:

Abstract:

The Chesapeake Bay is the largest estuary in the United States. It is a unique and valuable national treasure because of its ecological, recreational, economic and cultural benefits. The problems facing the Bay are well known and extensively documented, and are largely related to human uses of the watershed and of resources within the Bay. Over the past several decades, as the origins of the Chesapeake's problems became clear, citizens' groups and Federal, State and local governments have entered into agreements and worked together to restore the Bay's productivity and ecological health.

In May 2009, President Barack Obama signed Executive Order 13508, which tasked a team of Federal agencies with developing a way forward in the protection and restoration of the Chesapeake watershed. The success of both State and Federal efforts will depend on having relevant, sound information about the ecology and function of the system as the basis of management and decision making. In response to the executive order, the National Oceanic and Atmospheric Administration's National Centers for Coastal Ocean Science (NCCOS) has compiled an overview of its research in the Chesapeake Bay watershed. NCCOS has a long history of Chesapeake Bay research, investigating the causes and consequences of changes throughout the watershed's ecosystems. This document presents a cross section of research results that have advanced the understanding of the structure and function of the Chesapeake and enabled the accurate and timely prediction of events with the potential to impact both human communities and ecosystems. There are three main focus areas: changes in land use patterns in the watershed and the related impacts on contaminant and pathogen distribution and concentrations; nutrient inputs and algal bloom events; and habitat use and life history patterns of species in the watershed.

Land use changes in the Chesapeake Bay watershed have dramatically changed how the system functions. A comparison of several subsystems within the Bay drainages has shown that water quality is directly related to land use and to how land use affects the ecosystem health of the rivers and streams that enter the Chesapeake Bay. Across the Chesapeake as a whole, the rivers that drain developed areas, such as the Potomac and James rivers, tend to have much more highly contaminated sediments than does the mainstem of the Bay itself. In addition to traditional contaminants, such as hydrocarbons, new contaminants are appearing in measurable amounts: at fourteen sites studied in the Bay, thirteen different pharmaceuticals were detected. The impact of pharmaceuticals on organisms and on the people who eat them is still unknown. The effects of waterborne infections on people and marine life are known, however, and exposure to certain bacteria is a significant health risk. A model is now available that predicts the likelihood of occurrence of the bacterium Vibrio vulnificus throughout Bay waters.

Relevance:

100.00%

Publisher:

Abstract:

Species' potential distribution modelling consists of building a representation of the fundamental ecological requirements of a species from the biotic and abiotic conditions where the species is known to occur. Such models can be valuable tools for understanding the biogeography of species and for supporting the prediction of their presence or absence under a particular environmental scenario. This paper investigates the use of different supervised machine learning techniques to model the potential distribution of 35 plant species from Latin America. Each technique was able to extract a different representation of the relations between the environmental conditions and the distribution profile of the species. The experimental results highlight the good performance of random tree classifiers, indicating this particular technique as a promising candidate for modelling species' potential distribution. (C) 2010 Elsevier Ltd. All rights reserved.
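
As an illustration of the modelling setup, here is a minimal sketch using scikit-learn's RandomForestClassifier (a standard analogue of the random tree classifiers highlighted above); the environmental predictors and the synthetic presence/absence rule are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical environmental predictors at sampled locations.
X = np.column_stack([
    rng.uniform(15, 30, n),     # mean annual temperature (degrees C)
    rng.uniform(500, 3000, n),  # annual precipitation (mm)
    rng.uniform(0, 2500, n),    # elevation (m)
])
# Synthetic presence/absence: this "species" prefers warm, wet lowlands.
y = ((X[:, 0] > 22) & (X[:, 1] > 1500) & (X[:, 2] < 1200)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```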

Relevance:

100.00%

Publisher:

Abstract:

Credit scoring modelling is one of the leading formal tools for supporting the granting of credit. Its core objective is to generate a score by which potential clients can be ranked according to their probability of default. A critical factor is whether a credit scoring model is accurate enough to classify clients correctly as good or bad payers. In this context the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve the classification accuracy. In this paper we propose a new bagging-type variant procedure, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is set in the context of credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, with up to three successions of resamplings. We observed better classification accuracy for the two-bagged and three-bagged models in all considered setups. These results strongly indicate that the poly-bagging approach may improve modelling performance measures while keeping a flexible and straightforward bagging-type structure that is easy to implement. (C) 2011 Elsevier Ltd. All rights reserved.
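
The poly-bagging variant itself is not reproduced here, but the following minimal sketch shows the base bagging idea the paper builds on: fit one classifier per bootstrap replicate of the training data and combine the predictions by majority vote. The dataset, base learner and ensemble size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

# Fit one tree per bootstrap replicate of the training data.
models = []
for _ in range(25):
    idx = rng.integers(0, len(Xtr), len(Xtr))  # bootstrap sample
    models.append(DecisionTreeClassifier().fit(Xtr[idx], ytr[idx]))

# Combine into a single predictive classification by majority vote.
votes = np.stack([m.predict(Xte) for m in models])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("bagged accuracy:", (pred == yte).mean().round(3))
```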

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a visual stimulus generator (VSImG) capable of displaying a grey-scale, 256 x 256 x 8 bitmap image at a frame rate of 500 Hz using a boustrophedonic scanning technique. It is designed for experiments with motion-sensitive neurons of the fly's visual system, where the flicker fusion frequency of the photoreceptors can reach up to 500 Hz. Devices with such a high frame rate are not commercially available, but are required if sensory systems with high flicker fusion frequencies are to be studied. The implemented hardware approach gives us complete real-time control of the displacement sequence and provides all the signals needed to drive an electrostatic deflection display. With the use of analog signals, very small, high-resolution displacements, not limited by the image's pixel size, can be obtained. Very slow image displacements with visually imperceptible steps can also be generated, which can be of interest for other vision research experiments. Two different stimulus files can be used simultaneously, allowing the system to generate X-Y displacements on one display or independent movements on two displays, as long as they share the same bitmap image. (C) 2011 Elsevier B.V. All rights reserved.
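
As a software illustration of the scanning pattern (not the device's actual firmware), the sketch below generates pixel coordinates in boustrophedonic order: even rows left to right, odd rows right to left, so the beam never makes a full horizontal retrace between lines.

```python
def boustrophedon(width, height):
    """Yield (x, y) pixel coordinates in serpentine (boustrophedonic) order."""
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            yield x, y

for x, y in boustrophedon(4, 2):
    print(x, y)
# rows alternate: (0,0)(1,0)(2,0)(3,0) then (3,1)(2,1)(1,1)(0,1)
```

Halving the retrace distance is what makes a 500 Hz refresh feasible on a deflection display, since the beam position changes only by one pixel between consecutive samples.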

Relevance:

100.00%

Publisher:

Abstract:

Managing software maintenance is rarely a precise task, owing to uncertainties in resource and service descriptions. Even when a well-established maintenance process is followed, the risk of delayed tasks remains if new services are not precisely described or if resources change during process execution. Moreover, a delay in a task at an early process stage may translate into a different delay at the end of the process, depending on complexity or service reliability requirements. This paper presents a knowledge-based representation (Bayesian networks) of maintenance project delays, built from specialists' experience, together with a corresponding tool to help manage software maintenance projects. (c) 2006 Elsevier Ltd. All rights reserved.
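
The paper's actual network is not reproduced here; the following toy fragment, with made-up probabilities, shows the kind of inference such a representation supports: estimating the probability of a delay from two root causes and updating beliefs once a delay is observed.

```python
from itertools import product

# Hypothetical priors for two root causes of delay.
p_imprecise = {True: 0.3, False: 0.7}   # service description imprecise?
p_res_change = {True: 0.2, False: 0.8}  # resources change mid-process?

# Hypothetical CPT: P(delay | imprecise, resource change).
p_delay = {(True, True): 0.9, (True, False): 0.6,
           (False, True): 0.5, (False, False): 0.1}

# Marginal probability of a delay, by enumeration over the parents.
p = sum(p_imprecise[i] * p_res_change[r] * p_delay[(i, r)]
        for i, r in product([True, False], repeat=2))
print("P(delay) =", round(p, 3))

# Posterior P(imprecise description | delay observed) via Bayes' rule.
joint = sum(p_imprecise[True] * p_res_change[r] * p_delay[(True, r)]
            for r in [True, False])
print("P(imprecise | delay) =", round(joint / p, 3))
```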

Relevance:

100.00%

Publisher:

Abstract:

A chaotic encryption algorithm is proposed based on "Life-like" cellular automata (CA), which act as a pseudo-random number generator (PRNG). The paper's main focus is the application of chaos theory to cryptography, so the CA were explored in search of this "chaos" property. Accordingly, the manuscript concentrates on tests such as the Lyapunov exponent, entropy and Hamming distance to measure chaos in CA, as well as on statistical analyses such as the DIEHARD and ENT suites. Our results achieved higher randomness quality than other ciphers in the literature, reinforcing the supposition of a strong relationship between chaos and randomness quality. The "chaos" property of CA is thus a good reason for their employment in cryptography, along with their simplicity, low implementation cost and respectable encryption power. (C) 2012 Elsevier Ltd. All rights reserved.
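
As a rough, deliberately insecure sketch of the idea, the code below runs a Life-like CA (plain Conway B3/S23 rules, one member of the Life-like family) and XORs bits sampled from the evolving grid with the plaintext. The grid size, key seeding and bit-extraction scheme are hypothetical choices, not the authors' scheme; do not use this for real cryptography.

```python
import random

N = 16  # grid side

def step(grid):
    """One synchronous update with B3/S23 rules on a toroidal grid."""
    new = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            s = sum(grid[(y + dy) % N][(x + dx) % N]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            new[y][x] = 1 if s == 3 or (grid[y][x] and s == 2) else 0
    return new

def keystream(key, nbytes):
    rng = random.Random(key)  # seed the initial grid from the key
    grid = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
    out = []
    for _ in range(nbytes):
        grid = step(grid)
        bits = [grid[i][i] for i in range(8)]  # sample 8 diagonal cells
        out.append(sum(b << k for k, b in enumerate(bits)))
    return out

msg = b"attack at dawn"
ct = bytes(m ^ k for m, k in zip(msg, keystream(42, len(msg))))
pt = bytes(c ^ k for c, k in zip(ct, keystream(42, len(msg))))
print(ct.hex(), pt)
```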

Relevance:

100.00%

Publisher:

Abstract:

Texture image analysis is an important field of investigation that has attracted attention from the computer vision community over the last decades. In this paper, a novel approach for texture image analysis is proposed using a combination of graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to neighbouring pixels (pixels whose spatial distance is less than a given radius). Transformations are applied to the regular graph to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed and used to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, the method is expected to perform well in other applications, such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
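
A rough sketch of the walk itself, on a toy image and without the graph transformations described above: from each starting pixel, the walker deterministically moves to the neighbour with the closest grey level that was not visited within the last mu steps. The memory size and stopping rule here are illustrative simplifications.

```python
import numpy as np

# Toy grey-level image standing in for a texture patch.
img = np.array([[10, 12, 50, 52],
                [11, 90, 55, 53],
                [13, 14, 91, 54],
                [15, 16, 17, 92]])

def tourist_walk(start, mu=2, max_steps=50):
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-mu:])  # memory window: revisits are forbidden
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < h and 0 <= x + dx < w
                and (y + dy, x + dx) not in recent]
        if not nbrs:
            break
        # Deterministic rule: the nearest grey level wins.
        path.append(min(nbrs, key=lambda p: abs(int(img[p]) - int(img[y, x]))))
    return path

print(tourist_walk((0, 0)))
```

In the paper's method, statistics of many such walks (transient and attractor lengths over all start pixels, for several memories and graph transformations) would form the feature vector.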

Relevance:

100.00%

Publisher:

Abstract:

Fraud is a global problem that has demanded increasing attention as modern technology and communication have expanded. When statistical techniques are used to detect fraud, a critical factor is whether the detection model is accurate enough to classify a case correctly as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets, and then to combine them into a single predictive classification in order to improve the classification accuracy. In this paper we present, for the first time, a study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors for classification. Via a large simulation study and various real datasets, we found that the probabilistic networks are a strong modelling option, with high predictive capacity that is further increased by the bagging procedure when compared to traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
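
No standard library implements k-dependence Bayesian network classifiers, so in the sketch below a Gaussian Naive Bayes model (the k = 0 special case) stands in as the base learner inside scikit-learn's BaggingClassifier; the imbalanced synthetic data is a stand-in for real fraud records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Imbalanced labels mimic the rarity of fraudulent cases.
X, y = make_classification(n_samples=1000, n_features=15,
                           weights=[0.9, 0.1], random_state=0)

single = cross_val_score(GaussianNB(), X, y, cv=5).mean()
bagged = cross_val_score(BaggingClassifier(GaussianNB(), n_estimators=50,
                                           random_state=0), X, y, cv=5).mean()
print("single:", round(single, 3), "bagged:", round(bagged, 3))
```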

Relevance:

100.00%

Publisher:

Abstract:

In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one such method: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the label set, and the final labels for each example are determined by aggregating the predictions of all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependencies by themselves. An experimental study using decision trees, a kernel method and Naive Bayes as base learning techniques shows the potential of the proposed approach to improve multi-label classification performance.
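
One stacking-style reading of this idea (an illustration, not necessarily the paper's exact procedure): train plain binary relevance first, then retrain each label's classifier with the other labels' first-round predictions appended as extra features, so each classifier can exploit label dependencies on its own.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.tree import DecisionTreeClassifier

X, Y = make_multilabel_classification(n_samples=400, n_classes=4,
                                      n_labels=3, random_state=0)
Xtr, Ytr, Xte, Yte = X[:300], Y[:300], X[300:], Y[300:]
L = Y.shape[1]

# Round 1: plain binary relevance, one independent classifier per label.
first = [DecisionTreeClassifier(random_state=0).fit(Xtr, Ytr[:, j])
         for j in range(L)]
P_tr = np.column_stack([m.predict(Xtr) for m in first])
P_te = np.column_stack([m.predict(Xte) for m in first])

# Round 2: each label's classifier also sees the OTHER labels' predictions.
preds = []
for j in range(L):
    others = [k for k in range(L) if k != j]
    m = DecisionTreeClassifier(random_state=0).fit(
        np.hstack([Xtr, P_tr[:, others]]), Ytr[:, j])
    preds.append(m.predict(np.hstack([Xte, P_te[:, others]])))
pred = np.column_stack(preds)
print("exact-match accuracy:", (pred == Yte).all(axis=1).mean().round(3))
```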

Relevance:

100.00%

Publisher:

Abstract:

Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated using measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures such as relative entropy and mutual information. In this paper we analyse the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a dataset extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression model and the logistic regression with state-dependent sample selection model. However, there is a strong difference between the distributions of the estimated default probabilities produced by these two techniques, with the naive logistic regression model consistently underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
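
As a small sketch of the evaluation setting, the code below fits an ordinary ("naive") logistic regression on synthetic credit data and reports sensitivity and specificity; the state-dependent sample-selection variant is not shown, and the data and classification threshold are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic portfolio: label 1 = default ("bad payer").
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.8, 0.2], random_state=0)
Xtr, ytr, Xte, yte = X[:700], y[:700], X[700:], y[700:]

model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
pred = (model.predict_proba(Xte)[:, 1] >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
print("sensitivity:", round(tp / (tp + fn), 3))
print("specificity:", round(tn / (tn + fp), 3))
```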

Relevance:

100.00%

Publisher:

Abstract:

This thesis seeks to identify who fought for influence within the European Union's Emissions Trading System (ETS) policy area. The ETS is a key component of the European Union's (EU) climate change policy and is particularly important in light of the conclusions of the 2015 United Nations Climate Change Conference in Paris. It was first established in 2003 with Directive 2003/87/EC and completed its first major revision in 2008 with Directive 2009/29/EC. Between these two key Directives, the interplay between industrial and environmental incentives made the ETS a dynamic venue for divergent interest groups. To identify the relevant actors, this thesis applies Sabatier's Advocacy Coalition Framework (ACF). Using position papers, semi-structured interviews and unpublished documents from the EU institutions, it answers its primary research question by identifying an economy-first and an environment-first lobbying coalition. These coalitions have expanded over time, with the environment-first coalition incorporating Greenpeace and the economy-first coalition expanding even further in both scope and speed. However, the economy-first coalition has been susceptible to industry-specific interests. In applying the ACF, the research shows that a hypothesised effect of the ACF's external events on these lobbying coalitions is inconclusive. Other hypotheses stemming from the ACF, relating to electricity prices and the 2004 enlargement, do appear significant for the relative composition of the lobbying coalitions. The thesis also finds certain limitations within the ACF. Its findings provide a unique insight into how lobbying coalitions within a key EU policy area can form and develop.