153 results for "Optimal transportation"
Abstract:
State and regional policies, such as low carbon fuel standards (LCFSs), increasingly mandate that transportation fuels be evaluated according to their greenhouse gas (GHG) emissions. We investigate whether such policies benefit from determining fuel carbon intensities (FCIs) locally, both to account for variations in fuel production and to stimulate improvements in FCI. In this study, we examine the FCI of transportation fuels on a lifecycle basis within a specific state, Minnesota, and compare the results to FCIs based on national averages. Using data compiled from 18 refineries over an 11-year period, we find that ethanol production is highly variable, resulting in a 42% difference between carbon intensities. Historical data suggest that lower FCIs are achievable through incremental improvements in refining efficiency and the use of biomass for processing heat. Stochastic modeling of the corn ethanol FCI shows that gains in certainty due to knowledge of specific refinery inputs are overwhelmed by uncertainty in parameters external to the refiner, including the impacts of fertilization and land use change. The lifecycle assessment (LCA) results are incorporated into multiple policy scenarios to demonstrate the effect of policy configuration on the use of alternative fuels. These results provide a contrast between volumetric mandates and LCFSs. © 2011 Elsevier Ltd.
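As an illustration of the stochastic-modeling point above, the sketch below propagates assumed uncertainty ranges for refinery-internal and refinery-external parameters through a simple additive FCI model via Monte Carlo; none of the parameter names, values, or distributions come from the paper.

```python
# Illustrative Monte Carlo sketch of a fuel-carbon-intensity (FCI) estimate,
# not the authors' model: parameter names, values, and distributions are
# hypothetical assumptions chosen only to show the structure of the analysis.
import random

def sample_fci(rng):
    # Refinery-internal input (relatively well known to the refiner).
    process_energy = rng.gauss(35.0, 2.0)    # gCO2e/MJ from processing heat/power
    # Parameters external to the refiner dominate the overall uncertainty.
    fertilizer = rng.gauss(20.0, 8.0)        # gCO2e/MJ from fertilizer/N2O
    land_use_change = rng.gauss(15.0, 12.0)  # gCO2e/MJ, highly uncertain
    return process_energy + fertilizer + land_use_change

rng = random.Random(0)
samples = [sample_fci(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
print(f"FCI mean = {mean:.1f} gCO2e/MJ, sd = {sd:.1f}")
```

Even with a tight distribution on the refinery-internal term, the spread of the total is dominated by the external terms, which is the qualitative effect the abstract describes.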
Abstract:
Optimal control problems constrained by partial differential equations with uncertain inputs and with uncertain controls are addressed. The Lagrangian that defines the problem is posed in terms of stochastic functions, with the control function possibly decomposed into an unknown deterministic component and a known zero-mean stochastic component. The extra freedom provided by the stochastic dimension in defining cost functionals is explored, demonstrating the scope for controlling statistical aspects of the system response. One-shot stochastic finite element methods are used to find approximate solutions to the control problems. It is shown that applying the stochastic collocation finite element method to the formulated problem leads to coupling between the stochastic collocation points when a deterministic optimal control is sought or when moments are included in the cost functional, thereby forgoing the primary advantage of the collocation method over the stochastic Galerkin method for this class of problem. The application of the presented methods is demonstrated through a number of numerical examples. The framework is sufficiently general to also encompass a class of inverse problems, and numerical examples of this type are also presented. © 2011 Elsevier B.V.
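A representative cost functional of the kind discussed above is sketched below; the notation is assumed rather than taken from the paper, and the variance term is one example of a moment whose inclusion couples the otherwise independent stochastic collocation points.

```latex
% Assumed notation: u(x,\xi) is the stochastic state, z the control,
% u_d a target state, D the spatial domain, and \alpha,\beta weights.
\[
  J(u, z) \;=\; \frac{1}{2}\,\mathbb{E}\!\left[\,\lVert u - u_d \rVert_{L^2(D)}^2\,\right]
  \;+\; \frac{\beta}{2} \int_D \operatorname{Var}\!\left[\,u(x,\cdot)\,\right]\mathrm{d}x
  \;+\; \frac{\alpha}{2}\,\lVert z \rVert_{L^2(D)}^2 .
\]
```

The mean-tracking term alone decouples across collocation points; requiring a deterministic control z, or adding the variance term, ties the points together and removes the decoupling that normally favors collocation over the Galerkin approach.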
Abstract:
A new method for the optimal design of Functionally Graded Materials (FGM) is proposed in this paper. Instead of the widely used explicit functional models, a feature-tree-based procedural model is proposed to represent generic material heterogeneities. A procedural model of this sort allows more than one explicit function to be incorporated to describe versatile material gradations; the material composition at a given location is no longer computed by simple evaluation of an analytic function but is instead obtained by executing customizable procedures. This enables generic and diverse types of material variation to be represented and, most importantly, with a reasonably small number of design variables. The descriptive flexibility of the material heterogeneity formulation, together with the low dimensionality of the design vectors, helps facilitate the optimal design of functionally graded materials. Using the nature-inspired Particle Swarm Optimization (PSO) method, functionally graded materials with generic distributions can be optimized efficiently. We demonstrate, for the first time, that a PSO-based optimizer outperforms classical mathematical-programming methods, such as active-set and trust-region algorithms, in the optimal design of functionally graded materials. The underlying reason for this performance gain is also elucidated with the help of benchmark examples. © 2011 Elsevier Ltd. All rights reserved.
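For orientation, a minimal PSO sketch follows; the toy sphere objective stands in for the actual FGM design objective, and all parameter values are illustrative assumptions rather than the paper's settings.

```python
# Minimal particle swarm optimization (PSO) sketch. In the FGM setting the
# design vector would encode the procedural-model parameters; here a simple
# sphere function is a placeholder objective.
import random

def objective(x):
    return sum(xi ** 2 for xi in x)  # placeholder for the FGM design objective

def pso(dim=5, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest_x, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Inertia plus attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest_x[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest_x, gbest_f = pos[i][:], f
    return gbest_x, gbest_f

print(pso())
```

Because PSO only needs objective evaluations, it accommodates the procedural (non-analytic) material model directly, whereas gradient-based active-set and trust-region methods rely on smoothness that such a model need not provide.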
Abstract:
POMDP algorithms have made significant progress in recent years, allowing practitioners to find good solutions to increasingly large problems. Most approaches (including point-based and policy iteration techniques) operate by refining a lower bound of the optimal value function. Several approaches (e.g., HSVI2, SARSOP, grid-based approaches, and online forward search) also refine an upper bound. However, approximating the optimal value function by an upper bound is computationally expensive, and tightness is therefore often sacrificed for efficiency (e.g., the sawtooth approximation). In this paper, we describe a new approach that efficiently computes tighter bounds by i) conducting a prioritized breadth-first search over the reachable beliefs, ii) propagating upper-bound improvements with an augmented POMDP, and iii) using exact linear programming (instead of the sawtooth approximation) for upper-bound interpolation. As a result, we can represent the bounds more compactly and significantly reduce the gap between upper and lower bounds on several benchmark problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
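For reference, the sawtooth upper-bound interpolation that the paper replaces with exact linear programming can be sketched as follows; the belief and value structures are assumed for illustration.

```python
# Sawtooth upper-bound interpolation for POMDPs (the approximation this
# paper replaces with exact LP interpolation). Data structures are assumed.

def sawtooth(b, corner_vals, points):
    """b: belief as a dict state -> prob; corner_vals: upper bound at each
    corner (vertex) belief; points: list of (belief_dict, value) pairs."""
    def v0(bel):  # linear interpolation over the corner values
        return sum(p * corner_vals[s] for s, p in bel.items())
    best = v0(b)
    for bi, vi in points:
        # Largest c with b >= c * bi componentwise on bi's support.
        c = min(b.get(s, 0.0) / p for s, p in bi.items() if p > 0)
        best = min(best, v0(b) + c * (vi - v0(bi)))
    return best

# Tiny usage example on a 2-state POMDP:
corner = {"s0": 10.0, "s1": 6.0}
pts = [({"s0": 0.5, "s1": 0.5}, 7.0)]
print(sawtooth({"s0": 0.3, "s1": 0.7}, corner, pts))  # 6.6 < 7.2 corner value
```

Each stored point carves a "tooth" out of the corner interpolation; an exact LP over all stored points yields a tighter (lower) upper bound at the cost of solving a linear program per query.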
Abstract:
The 'optimal' or 'best' design process may be the shortest or cheapest process, or the one that leads to a particularly desirable product, or to a reliable and maintainable product, or to a manufacturable product, or some combination of all of these. It is likely to satisfy the aspirations of the organisation to invest an appropriate amount of resource in the development of a specific new market opportunity, set in the context of longer-term business goals. This paper describes the progress made in over ten years of research on process modelling undertaken at the Cambridge Engineering Design Centre to identify an 'optimal' design process with which to develop an 'adequate' product.
Innovative Stereo Vision-Based Approach to Generate Dense Depth Map of Transportation Infrastructure
Abstract:
Three-dimensional (3-D) spatial data of a transportation infrastructure contain useful information for civil engineering applications, including as-built documentation, on-site safety enhancement, and progress monitoring. Several techniques have been developed for acquiring the 3-D point coordinates of infrastructure, such as laser scanning. Although laser scanning yields accurate results, the high device cost and human effort required render it infeasible for generic applications in the construction industry. A quick and reliable approach based on the principles of stereo vision is proposed for generating a depth map of an infrastructure. Initially, two images of the infrastructure scene are captured by two similar cameras arranged as a stereo pair. A Harris feature detector extracts feature points from the first view, and an innovative adaptive window-matching technique computes the feature point correspondences in the second view. A robust algorithm then computes the non-feature-point correspondences, so that correspondences for all points in the scene are obtained. Once all correspondences are available, the geometric principles of stereo vision are used to generate a dense depth map of the scene. The proposed algorithm has been tested on several data sets, and the results illustrate its potential for stereo correspondence and depth map generation.
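A rough sketch of a comparable stereo pipeline, using off-the-shelf OpenCV primitives rather than the authors' adaptive window-matching algorithm, might look like the following; the file names and calibration values are placeholders.

```python
# Illustrative stereo pipeline: Harris corners on the left view and standard
# block matching for dense disparity. cv2.StereoBM stands in for the paper's
# adaptive window matching; image paths and calibration are assumptions.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Harris feature points on the first view.
harris = cv2.cornerHarris(np.float32(left), blockSize=2, ksize=3, k=0.04)
feature_mask = harris > 0.01 * harris.max()
print("Harris feature points:", int(feature_mask.sum()))

# Dense disparity via block matching (returns fixed-point disparity * 16).
bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = bm.compute(left, right).astype(np.float32) / 16.0

# Depth from stereo geometry: Z = f * B / d, with assumed focal length
# (pixels) and baseline (meters) from calibration.
f_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
print("valid depth pixels:", int(valid.sum()))
```

The inverse relationship Z = fB/d is why small disparity errors on distant structure translate into large depth errors, which motivates careful correspondence matching in the first place.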
Abstract:
Deciding whether a set of objects are the same or different is a cornerstone of perception and cognition. Surprisingly, no principled quantitative model of sameness judgment exists. We tested whether human sameness judgment under sensory noise can be modeled as a form of probabilistically optimal inference. An optimal observer would compare the reliability-weighted variance of the sensory measurements with a set-size-dependent criterion. We conducted two experiments in which we varied set size and individual stimulus reliabilities. We found that the optimal-observer model accurately describes human behavior, outperforms plausible alternatives in a rigorous model comparison, and accounts for three key findings in the animal cognition literature. Our results provide a normative footing for the study of sameness judgment and indicate that the notion of perception as near-optimal inference extends to abstract relations.
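A minimal sketch of the decision rule described above, with illustrative noise levels and an arbitrary criterion (the model's fitted parameters are not given here):

```python
# Sketch of the optimal-observer rule: report "same" when the
# reliability-weighted variance of the noisy measurements falls below a
# set-size-dependent criterion. All numeric values are illustrative.
import random

def same_judgment(measurements, sigmas, criterion):
    """measurements: noisy stimulus estimates; sigmas: per-item noise s.d."""
    w = [1.0 / s ** 2 for s in sigmas]  # reliability weights
    mean = sum(wi * x for wi, x in zip(w, measurements)) / sum(w)
    weighted_var = sum(wi * (x - mean) ** 2
                       for wi, x in zip(w, measurements)) / sum(w)
    return weighted_var < criterion     # "same" if the spread is small enough

rng = random.Random(1)
true_value, sigmas = 0.0, [0.5, 1.0, 2.0, 1.5]  # identical stimuli, varied noise
xs = [true_value + rng.gauss(0, s) for s in sigmas]
print(same_judgment(xs, sigmas, criterion=1.0))
```

Weighting each measurement by its reliability means a single very noisy item cannot dominate the variance statistic, which is what distinguishes this observer from a naive unweighted-variance rule.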