851 results for driver information systems, genetic algorithms, prediction theory, transportation
Abstract:
House File 2196 required the Department of Transportation (DOT) to study the acceptance of electronic payments at its customer service sites and at sites operated by county treasurers. Specifically, the legislation required the following: “The department of transportation shall review the current methods the department employs for the collection of fees and other revenues at sites operated by county treasurers under chapter 321M and at customer service sites operated by the department. In conducting its review, the department, in cooperation with the treasurer of state, shall consider providing an electronic payment option for all of its customers. The department shall report its findings and recommendations by December 31, 2008, to the senate and house standing committees on transportation regarding the advantages and disadvantages of implementing one or more electronic payment systems.” This review focused on estimating the costs of providing an electronic payment option for customers of the DOT driver’s license stations and those of the 81 county treasurers. Customers at these sites engage in three primary financial transactions for which acceptance of electronic payments was studied: paying for a driver’s license (DL), paying for a non-operator identification card (ID), and paying certain civil penalties. Both consumer credit cards and PIN-based debit cards were reviewed as electronic payment options. It was assumed that most transactions would be made using a consumer credit card. Credit card companies charge a fee for each transaction, and the amount of these fees varies among companies. The estimates for credit card fees used in this study were based on the State Treasurer of Iowa’s current credit card contract, which is due to expire in September 2009. Since credit card companies adjust their fees each year, estimates were based on the 2008 fee schedule. There is also a fee for the use of PIN-based debit cards. The estimates for PIN-based debit card transactions were based on information provided by Wells Fargo Merchant Services on current fees charged by debit card networks. Credit and debit card transactions would be processed through vendor-provided hardware and software. Because several vendors provide this function, those costs would be determined through the competitive bidding process and are therefore not reflected in this document.
Abstract:
We present new metaheuristics for solving real crew scheduling problems in a public transportation bus company. Since the crews of these companies are drivers, we designate the problem the bus-driver scheduling problem. Crew scheduling problems are well known, and several mathematical programming based techniques have been proposed to solve them, in particular using the set-covering formulation. However, in practice, there is a need for improvement in terms of computational efficiency and the capacity to solve large-scale instances. Moreover, the real bus-driver scheduling problems that we consider can present variant aspects of set covering, for example a different objective function, implying that alternative solution methods have to be developed. We propose metaheuristics based on the following approaches: GRASP (greedy randomized adaptive search procedure), tabu search, and genetic algorithms. These metaheuristics also present some innovative features based on the structure of the crew scheduling problem, which guide the search efficiently and enable them to find good solutions. Some of these new features can also be applied in the development of heuristics for other combinatorial optimization problems. A summary of computational results with real-data problems is presented.
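As a rough illustration of the GRASP approach named in this abstract, the sketch below implements one greedy randomized construction pass for a generic set-covering instance. The instance data, the cost structure, and the candidate-list parameter alpha are assumptions for illustration only, not the paper's actual bus-driver formulation.

```python
import random

def grasp_construct(universe, subsets, costs, alpha=0.3, seed=0):
    """One GRASP construction pass for set covering.

    universe: set of elements (e.g., trips) to cover
    subsets:  dict name -> set of elements covered by that candidate duty
    costs:    dict name -> cost of selecting that duty
    alpha:    greediness/randomness trade-off (0 = pure greedy, 1 = pure random)
    """
    rng = random.Random(seed)
    uncovered = set(universe)
    solution = []
    while uncovered:
        # Score each unused subset by cost per newly covered element.
        scores = {}
        for name, elems in subsets.items():
            if name in solution:
                continue
            gain = len(elems & uncovered)
            if gain > 0:
                scores[name] = costs[name] / gain
        best, worst = min(scores.values()), max(scores.values())
        threshold = best + alpha * (worst - best)
        # Restricted candidate list: all subsets scoring within the threshold.
        rcl = [name for name, score in scores.items() if score <= threshold]
        choice = rng.choice(rcl)
        solution.append(choice)
        uncovered -= subsets[choice]
    return solution

# Toy instance: cover elements 1..6 with four candidate duties.
universe = {1, 2, 3, 4, 5, 6}
subsets = {"a": {1, 2, 3}, "b": {3, 4, 5}, "c": {5, 6}, "d": {1, 4, 6}}
costs = {"a": 3.0, "b": 2.0, "c": 1.0, "d": 2.5}
print(grasp_construct(universe, subsets, costs))
```

In a full GRASP, this construction would be repeated many times and each constructed cover would be improved by a local search before keeping the best solution found.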
Abstract:
In the last decade, Intelligent Transportation Systems (ITS) have increasingly been deployed in work zones by state departments of transportation. Also known as smart work zone systems, they improve traffic operations and safety by providing real-time information to travelers, monitoring traffic conditions, and managing incidents. Although there have been numerous ITS deployments in work zones, a framework for evaluating the effectiveness of these deployments does not exist. To justify the continued development and implementation of smart work zone systems, this study developed a framework to determine ITS effectiveness for specific work zone projects. The framework recommends using one or more of five performance measures: diversion rate, delay time, queue length, crash frequency, and speed. The monetary benefits and costs of ITS deployment in a work zone can then be computed from the performance measure values. Such ITS computations include additional considerations that are typically not present in standard benefit-cost computations. The proposed framework will allow for consistency in performance measures across different ITS studies, thus allowing for comparisons across studies or for meta-analysis. In addition, guidance is provided on the circumstances under which ITS deployment is recommended for a work zone. The framework was illustrated using two case studies: one urban work zone on I-70 and one rural work zone on I-44, in Missouri. The goals of the two ITS deployments were different: the I-70 deployment was targeted at improving mobility, whereas the I-44 deployment was targeted at improving safety. For the I-70 site, only permanent ITS equipment that was already in place was used for the project, and no temporary ITS equipment was deployed. The permanent DMS equipment serves multiple purposes, and it is arguable whether that cost should be attributed to the work zone project. The data collection effort for the I-70 site was substantial, as portable surveillance captured the actual diversion flows to alternative routes. The benefit-cost ratio for the I-70 site was 2.1 to 1 if adjusted equipment costs were included and 6.9 to 1 without equipment costs. The safety-focused I-44 deployment had an estimated benefit-cost ratio of 3.2 to 1.
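For readers unfamiliar with the benefit-cost arithmetic behind the ratios reported above, the snippet below shows the basic computation under assumed, illustrative values for delay savings, crash reduction, and deployment cost; the monetization rates and figures are placeholders, not data from the Missouri case studies.

```python
# Illustrative work-zone ITS benefit-cost calculation.
# All numbers below are assumed placeholders, not values from the study.
VALUE_OF_TIME = 17.0       # $ per vehicle-hour of delay saved (assumed)
COST_PER_CRASH = 60000.0   # $ per crash avoided (assumed)

delay_hours_saved = 12000   # vehicle-hours of delay avoided over the project
crashes_avoided = 3         # estimated reduction in crash frequency
deployment_cost = 150000.0  # ITS equipment, operation, and data collection

benefits = delay_hours_saved * VALUE_OF_TIME + crashes_avoided * COST_PER_CRASH
bc_ratio = benefits / deployment_cost
print(f"Benefit-cost ratio: {bc_ratio:.1f} to 1")
```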
Abstract:
The objective of this work was to evaluate the effect of sampling density on the prediction accuracy of soil orders, at high spatial resolution, in a viticultural zone of Serra Gaúcha, Southern Brazil. A digital elevation model (DEM), a cartographic base, a conventional soil map, and the Idrisi software were used. Seven predictor variables were calculated and read, along with soil classes, at randomly distributed points, with sampling densities of 0.5, 1, 1.5, 2, and 4 points per hectare. The data were used to train a decision tree (Gini) and three artificial neural networks: adaptive resonance theory (fuzzy ARTMap), self-organizing map (SOM), and multi-layer perceptron (MLP). Estimated maps were compared with the conventional soil map to calculate omission and commission errors, overall accuracy, and quantity and allocation disagreement. The decision tree was less sensitive to sampling density and had the highest accuracy and consistency. The SOM was the least sensitive and most consistent network. The MLP had a critical minimum and showed high inconsistency, whereas fuzzy ARTMap was more sensitive and less accurate. The results indicate that the sampling densities used in conventional soil surveys can serve as a reference for predicting soil orders in Serra Gaúcha.
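The error measures mentioned in this abstract (omission and commission errors, overall accuracy) can be computed from a confusion matrix comparing the predicted map with the reference map. The minimal sketch below uses an invented matrix purely to illustrate the computation; the class names and counts are not from the study.

```python
import numpy as np

# Rows = reference (conventional map) classes, columns = predicted classes.
# Counts below are invented purely to illustrate the computation.
classes = ["Argissolo", "Cambissolo", "Neossolo"]
cm = np.array([[50,  5,  2],
               [ 8, 40,  4],
               [ 3,  6, 30]])

overall_accuracy = np.trace(cm) / cm.sum()
omission = 1 - np.diag(cm) / cm.sum(axis=1)    # reference points missed per class
commission = 1 - np.diag(cm) / cm.sum(axis=0)  # predicted points wrongly labeled

print(f"Overall accuracy: {overall_accuracy:.2f}")
for name, om, co in zip(classes, omission, commission):
    print(f"{name}: omission {om:.2f}, commission {co:.2f}")
```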
Abstract:
Many engineering problems that can be formulated as constrained optimization problems result in solutions given by a waterfilling structure; the classical example is the capacity-achieving solution for a frequency-selective channel. For simple waterfilling solutions with a single waterlevel and a single constraint (typically, a power constraint), some algorithms have been proposed in the literature to compute the solutions numerically. However, some other optimization problems result in significantly more complicated waterfilling solutions that include multiple waterlevels and multiple constraints. For such cases, it may still be possible to obtain practical algorithms to evaluate the solutions numerically, but only after a painstaking inspection of the specific waterfilling structure. In addition, a unified view of the different types of waterfilling solutions and the corresponding practical algorithms is missing. The purpose of this paper is twofold. On the one hand, it overviews the waterfilling results existing in the literature from a unified viewpoint. On the other hand, it bridges the gap between a wide family of waterfilling solutions and their efficient implementation in practice; to be more precise, it provides a practical algorithm to evaluate numerically a general waterfilling solution, which includes the currently existing waterfilling solutions and others that may possibly appear in future problems.
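As a concrete reference point for the single-waterlevel, single-power-constraint case described above, the sketch below computes the classical waterfilling allocation p_i = max(0, mu - 1/g_i) by bisection on the waterlevel mu. The channel gains and power budget are arbitrary illustrative values, and the general multi-level algorithm of the paper is not reproduced here.

```python
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """Classical single-constraint waterfilling: p_i = max(0, mu - 1/g_i),
    with the waterlevel mu found by bisection so that sum(p_i) == total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power   # bracket for the waterlevel
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        power = np.maximum(0.0, mu - inv).sum()
        if power > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# Example: frequency-selective channel with four subchannel gains.
p = waterfilling(gains=[0.5, 1.0, 2.0, 4.0], total_power=2.0)
print(p, p.sum())   # allocation per subchannel; total equals the power budget
```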
Abstract:
BACKGROUND AND AIMS: Parental history (PH) and genetic risk scores (GRSs) are separately associated with coronary heart disease (CHD), but evidence regarding their combined effects is lacking. We aimed to evaluate the joint associations and predictive ability of PH and GRSs for incident CHD. METHODS: Data for 4283 Caucasians were obtained from the population-based CoLaus Study, over a median follow-up time of 5.6 years. CHD was defined as incident myocardial infarction, angina, percutaneous coronary revascularization, or bypass grafting. Single nucleotide polymorphisms (SNPs) for CHD identified by genome-wide association studies were used to construct unweighted and weighted versions of three GRSs, comprising 38, 53, and 153 SNPs, respectively. RESULTS: PH was associated with higher values of all weighted GRSs. After adjustment for age, sex, smoking, diabetes, systolic blood pressure, and low- and high-density lipoprotein cholesterol, PH was significantly associated with CHD [HR 2.61, 95% CI (1.47-4.66)], and further adjustment for GRSs did not change this estimate. Similarly, a one-standard-deviation change in the weighted 153-SNP GRS was significantly associated with CHD [HR 1.50, 95% CI (1.26-1.80)] and remained so after further adjustment for PH. The weighted 153-SNP GRS, but not PH, modestly improved discrimination [(C-index improvement, 0.016), p = 0.048] and reclassification [(NRI improvement, 8.6%), p = 0.027] beyond cardiovascular risk factors. After including both the GRS and PH, model performance improved further [(C-index improvement, 0.022), p = 0.006]. CONCLUSION: After adjustment for cardiovascular risk factors, PH and a weighted polygenic GRS were jointly associated with CHD and provided additive information for coronary event prediction.
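To make the weighted GRS construction concrete: a weighted score is the sum over SNPs of risk-allele dosage times the per-SNP effect size, typically standardized so that hazard ratios can be reported per one standard deviation. The sketch below shows this arithmetic with invented dosages and weights; it is not the CoLaus SNP panel.

```python
import numpy as np

def weighted_grs(dosages, weights):
    """Weighted genetic risk score: sum of (risk-allele dosage x effect size) per SNP,
    standardized to zero mean and unit variance across individuals."""
    raw = np.asarray(dosages, dtype=float) @ np.asarray(weights, dtype=float)
    return (raw - raw.mean()) / raw.std()

# Toy data: 4 individuals x 3 SNPs, dosages in {0, 1, 2}; weights are
# GWAS effect sizes (all values invented for illustration).
dosages = np.array([[0, 1, 2],
                    [1, 1, 0],
                    [2, 2, 1],
                    [0, 0, 1]])
weights = np.array([0.10, 0.05, 0.18])
print(weighted_grs(dosages, weights))
```

An unweighted score would simply sum the risk-allele dosages, i.e. use equal weights for all SNPs.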
Abstract:
Transportation and warehousing are large and growing sectors of society, and their efficiency is of high importance. Transportation also accounts for a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions under the Kyoto protocol. Transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. However, individual simulation approaches have weaknesses of their own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows one method to compensate for the weaknesses of another. It is important to choose the correct approach (or combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (this can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. This research argues that simulation-based decision support systems need to take various issues into account in order to function well. The actual simulation model can be constructed using any approach (or several), it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework that simulation modelers can use when creating decision support systems. In order for decision-makers to fully benefit from the simulations, the user interface needs to clearly separate the model and the user, but at the same time the user needs to be able to run the appropriate scenarios in order to analyze the problems correctly. This study recommends that simulation modelers start to transfer their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and integrating simulations with Geographic Information Systems.
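Of the three simulation approaches named above, discrete-event simulation is the one most commonly applied to warehousing; the sketch below shows its basic event-by-event bookkeeping for a single loading dock with random truck arrivals. All parameters are assumed for illustration and do not come from the study.

```python
import random

def simulate_dock(n_trucks=20, mean_interarrival=30.0, service_time=25.0, seed=1):
    """Minimal discrete-event view of one loading dock served in FIFO order.
    Returns the average waiting time per truck, in minutes."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_trucks):
        t += rng.expovariate(1.0 / mean_interarrival)  # exponential interarrival times
        arrivals.append(t)

    dock_free_at = 0.0
    total_wait = 0.0
    for arrival in arrivals:
        start = max(arrival, dock_free_at)   # wait if the dock is still busy
        total_wait += start - arrival
        dock_free_at = start + service_time
    return total_wait / n_trucks

print(f"Average wait: {simulate_dock():.1f} min")
```

A hybrid model, as advocated in the abstract, would embed a component like this inside a system-dynamics or agent-based model of the wider supply chain.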
Abstract:
Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct - the minimal discerning set of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following that work, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms have been used to calculate reducts in earlier works, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimated the quality of each bireduct by the values of two objective functions as the evolution progressed, so a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied, and the prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used for a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine whether there were significant differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts needed to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. The prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
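As a hedged sketch of how a multi-objective GA might score candidate bireducts, the code below evaluates an (attribute subset, object subset) pair on two objectives: attribute-set size (to minimize) and retained-object count (to maximize), after checking that the attribute subset discerns every pair of retained objects with different decisions. The encoding and objective definitions are illustrative assumptions, not the exact formulation of the cited work.

```python
from itertools import combinations

def evaluate_bireduct(data, decisions, attrs, objs):
    """Two objective values for a candidate bireduct (attrs, objs).

    data:      list of attribute-value tuples, one per object
    decisions: list of decision labels, one per object
    attrs:     indices of selected attributes
    objs:      indices of retained objects
    Returns (n_attributes, n_objects), or None if the pair is inconsistent,
    i.e. two retained objects with different decisions are indiscernible.
    """
    for i, j in combinations(objs, 2):
        if decisions[i] != decisions[j]:
            if all(data[i][a] == data[j][a] for a in attrs):
                return None   # attrs cannot tell these two objects apart
    return (len(attrs), len(objs))   # minimize the first, maximize the second

# Toy decision table (values and labels invented for illustration).
data = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (0, 0, 0)]
decisions = ["yes", "no", "yes", "no"]
print(evaluate_bireduct(data, decisions, attrs=[0, 1], objs=[0, 1, 2, 3]))
```

A multi-objective GA would use such a pair of scores (e.g., via Pareto ranking) to drive selection over a population of candidate bireducts.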
Abstract:
The financial results of organizations are the subject of constant study and analysis, and predicting their behavior is an ongoing task for business owners, investors, analysts, and academics. This work explores the impact of asset size (total asset value) on operating and net income, first analyzing the relationship between these variables using traditional financial-analysis indicators, such as operating and net profitability, together with descriptive statistics that allow the data to be classified as linear or nonlinear. Having found that the financial results of the companies supervised by the Superintendencia de Sociedades for 2012 behave nonlinearly, the relationship between assets and results is then analyzed using phase spaces and recurrence analysis, tools suited to chaotic and complex systems. The source of information for the research and for the review of the relationship between assets and financial results was the 2012 year-end financial reports of the Superintendencia de Sociedades (Superintendencia de Sociedades, 2012).
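For readers unfamiliar with the recurrence analysis mentioned above, the snippet below builds a simple recurrence matrix from a delay-embedded series: two points in the reconstructed phase space count as recurrent when their distance falls below a threshold. The series, embedding parameters, and threshold are illustrative assumptions, not the financial data used in the study.

```python
import numpy as np

def recurrence_matrix(series, dim=2, delay=1, threshold=0.1):
    """Binary recurrence matrix of a time series after delay embedding."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * delay
    # Phase-space reconstruction: each row is a delay vector.
    embedded = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    return (dists <= threshold).astype(int)

# Toy series: a noisy periodic signal (purely illustrative).
t = np.linspace(0, 8 * np.pi, 200)
series = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
R = recurrence_matrix(series, dim=2, delay=5, threshold=0.2)
print(R.shape, R.mean())   # recurrence rate: fraction of recurrent point pairs
```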
Abstract:
Evolutionary computation, and genetic algorithms in particular, is increasingly used by organizations to solve their management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing, and several state-of-the-art reviews have been published. Despite this, there is no explicit work that systematically evaluates the use of genetic algorithms in specific international business problems (examples include international logistics, international trade, international marketing, international finance, and international strategy). The purpose of this thesis is therefore to provide a situational review of the applications of genetic algorithms in international business.
Abstract:
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end-user to gain more confidence in the prediction and providing the basis for the end-user to form new insight about the data, confirming or rejecting previously formed hypotheses. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic to generate model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
In this paper, a genetic algorithm based reconfiguration method is proposed to minimize the real power losses of distribution systems. The main innovation of this work is that new types of crossover and mutation operators are proposed, such that the best possible results are obtained with an acceptable computational effort. The crossover and mutation operators were developed to take advantage of particular characteristics of distribution systems (such as their radial topology). Simulation results indicate that the proposed method is very efficient, being able to find excellent configurations with low computational effort, especially for larger systems. ©2007 IEEE.
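The radiality-preserving idea can be illustrated with a simple mutation operator: close one currently open tie switch, which creates exactly one loop, then reopen one branch on that loop so the configuration remains a spanning tree. The sketch below uses networkx for the loop detection; the toy network data and the operator details are assumptions for illustration, not the operators proposed in the paper.

```python
import random
import networkx as nx

def radial_mutation(all_branches, open_branches, rng):
    """Mutate a radial configuration: close a random open branch, then reopen
    one branch on the resulting loop so the network stays a spanning tree.
    (It may reopen the same branch, which leaves the configuration unchanged.)"""
    closed = [b for b in all_branches if b not in open_branches]
    graph = nx.Graph(closed)                 # assumed radial (a spanning tree)

    to_close = rng.choice(sorted(open_branches))
    graph.add_edge(*to_close)
    loop = nx.find_cycle(graph, source=to_close[0])   # the single loop created
    to_open = tuple(sorted(rng.choice(loop)))

    return (set(open_branches) - {to_close}) | {to_open}

# Toy 5-bus network; branch list and initially open switches are illustrative.
branches = [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5), (2, 4)]
open_branches = {(1, 5), (2, 4)}     # remaining closed branches form a tree
rng = random.Random(42)
print(radial_mutation(branches, open_branches, rng))
```

In a full reconfiguration GA, each candidate (set of open branches) would be scored by a load-flow calculation of real power losses, which is not shown here.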
Abstract:
This paper presents a survey of evolutionary algorithms designed for decision-tree induction. In this context, most of the paper focuses on approaches that evolve decision trees as an alternative heuristic to the traditional top-down divide-and-conquer approach. Additionally, we present some alternative methods that make use of evolutionary algorithms to improve particular components of decision-tree classifiers. The paper's original contributions are the following. First, it provides an up-to-date overview that is fully focused on evolutionary algorithms and decision trees and does not concentrate on any specific evolutionary approach. Second, it provides a taxonomy that addresses both works that evolve decision trees and works that design decision-tree components by the use of evolutionary algorithms. Finally, a number of references are provided that describe applications of evolutionary algorithms for decision-tree induction in different domains. At the end of the paper, we address some important issues and open questions that can be the subject of future research.