904 results for Bounds
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is hard to assess. A few studies have advocated and tested statistical bounds as a method for assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, the previous studies have been limited to a few metaheuristics and combinatorial problems, and hence the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them on the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) and four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also seem reliable for the set covering problem. For the quadratic assignment problem, statistical bounds have previously been found reliable when obtained from the genetic algorithm, whereas in this work they were found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
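Statistical bounds of this kind are typically obtained by treating the objective values returned by repeated randomized heuristic runs as a sample and fitting an extreme value (Weibull) distribution, whose fitted location parameter serves as a point estimate of the unknown optimum. The following is only a minimal sketch under those assumptions, using SciPy's three-parameter Weibull fit; `run_grasp` is a hypothetical stand-in for a single randomized heuristic run and its toy objective values are not from the paper.

```python
# Minimal sketch of a statistical bound for a minimisation problem:
# fit a three-parameter Weibull distribution to the objective values
# from repeated randomized-heuristic runs; the fitted location
# parameter estimates the unknown optimum (run_grasp is hypothetical).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def run_grasp() -> float:
    """Hypothetical heuristic run; returns one objective value."""
    return 100.0 + rng.gamma(shape=2.0, scale=5.0)  # toy surrogate

values = np.array([run_grasp() for _ in range(200)])

# Three-parameter Weibull fit: shape c, location loc, scale.
c, loc, scale = weibull_min.fit(values)

best = values.min()
print(f"best solution found              : {best:.2f}")
print(f"estimated optimum (stat. bound)  : {loc:.2f}")
```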
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A significant part of the life of a mechanical component is spent in the fatigue crack propagation stage. Several mathematical models are currently available to describe crack growth behaviour; in terms of stress range amplitude, they are classified into two categories: constant and variable. In general, these propagation models are formulated as an initial value problem, from which the crack evolution curve is obtained by applying a numerical method. This dissertation presents the application of the "Fast Bounds Crack" methodology for establishing upper and lower bound functions for the crack size evolution model. The performance of this methodology was evaluated through the relative deviation and the computational time, with respect to approximate numerical solutions obtained by the explicit fourth-order Runge-Kutta method (RK4). A maximum relative deviation of 5.92% was reached and, for the examples solved, the computational speed was 130,000 times higher than that achieved by the RK4 method. An engineering application was also carried out to obtain an approximate numerical solution, taken as the arithmetic mean of the upper and lower bounds produced by the methodology, for the case in which the evolution law is unknown. The maximum relative error found in this application was 2.08%, which confirms the efficiency of the "Fast Bounds Crack" methodology.
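For context, the RK4 reference solutions mentioned above integrate a crack-growth initial value problem of the form da/dN = f(a). The sketch below is not taken from the dissertation; it uses the classical Paris law da/dN = C(ΔK)^m with ΔK = Δσ·Y·√(πa) under constant amplitude loading, and the material constants and geometry factor are illustrative only.

```python
# Minimal sketch: explicit 4th-order Runge-Kutta integration of a
# Paris-law crack growth model da/dN = C * (dK)^m, with
# dK = dsigma * Y * sqrt(pi * a).  Constants are illustrative only.
import math

C, m = 1.0e-11, 3.0           # Paris constants (illustrative units)
dsigma, Y = 100.0, 1.12       # stress range [MPa], geometry factor

def dadN(a: float) -> float:
    dK = dsigma * Y * math.sqrt(math.pi * a)
    return C * dK**m

def rk4(a0: float, dN: float, cycles: int) -> float:
    """Integrate crack size a over `cycles` steps of size dN."""
    a = a0
    for _ in range(cycles):
        k1 = dadN(a)
        k2 = dadN(a + 0.5 * dN * k1)
        k3 = dadN(a + 0.5 * dN * k2)
        k4 = dadN(a + dN * k3)
        a += dN * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return a

print(f"crack size after 1e5 cycles: {rk4(0.001, 1.0, 100_000):.6f} m")
```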
Abstract:
This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough that it can incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely-sampled observations. It applies to situations in which the process is known to remain within an interval between observations, by way of either a known constraint or through the observation of extreme realizations of the process. The main statistical technique employs the theory of maximum likelihood estimation. This approach leads to the development of the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis present several implications for empirical research. The advantages are realized in the form of efficiency gains, bias reduction and in the flexibility of model specification. A bias arises in the presence of bounding information that is ignored, while it is mitigated within this framework. An efficiency gain arises, in the sense that the statistical methods make use of conditioning information, as revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that the more precise model estimation exposes other sources of misspecification. Some processes reveal themselves to be unlikely candidates for a given diffusion model, once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model. This approach is used to produce a set of algorithms to make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates, which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference in estimation results between models that ignore bounds and the framework that takes bounding information into consideration.
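As a much-simplified illustration of how bounding information can enter a likelihood (this is not the dissertation's estimator), the sketch below compares the log-likelihood of a Gaussian random walk evaluated with ordinary transition densities against one in which each transition density is truncated at a known lower bound; for data that respect the bound, the truncated version never assigns lower likelihood. All names and parameter values are assumed for the example.

```python
# Much-simplified sketch (not the thesis's estimator): compare the
# log-likelihood of a Gaussian random walk with and without using the
# information that the process is known to stay above a lower bound.
import numpy as np
from scipy.stats import norm, truncnorm

def loglik_plain(x: np.ndarray, sigma: float) -> float:
    """Ignore the bound: ordinary Gaussian transition densities."""
    return norm.logpdf(np.diff(x), scale=sigma).sum()

def loglik_bounded(x: np.ndarray, sigma: float, lower: float) -> float:
    """Use the bound: each transition density truncated at `lower`."""
    ll = 0.0
    for x_prev, x_next in zip(x[:-1], x[1:]):
        a = (lower - x_prev) / sigma        # standardized lower limit
        ll += truncnorm.logpdf(x_next, a, np.inf, loc=x_prev, scale=sigma)
    return ll

rng = np.random.default_rng(1)
sigma_true, lower = 0.5, 0.0
x = 2.0 + np.cumsum(rng.normal(scale=sigma_true, size=100))
x = np.clip(x, lower, None)                 # enforce the known bound

print("plain  :", loglik_plain(x, sigma_true))
print("bounded:", loglik_bounded(x, sigma_true, lower))
```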
Abstract:
This paper proposes adolescence as a useful rather than a definitive concept. It explores the notion of adolescence and its relevance to contemporary society and schooling. We reflect on the purposes for the emergence of research into adolescence during the early 20th century, particularly the scientific and societal pressures that served to bring this field to prominence. Recent debate has started to problematise many of the early parameters used to define and provide bounds for understanding adolescents and adolescent experience, and the rationale for some notionally tailored educational contexts. This paper provides an overview of this debate and argues for a reconsideration of some of the basic tenets of the definition. In particular, we discuss the cultural construction of adolescence in the light of our new globalised society. One possibility for thinking about contemporary adolescents is to consider them in terms of generational characteristics. What makes a new generation? Typically, members of a generation share age, a set of experiences during formative years, and a set of social and economic conditions. The adolescents of today fall into the group known collectively as the ‘Y Generation’, the ‘D (digital) Generation’, Generation C (consumer) and the ‘Millennials’. Born after the mid-1980s, they are characterised as computer and internet competent multi-taskers with a global perspective. They respond best to visual language and are heavily influenced by the media. We consider these generational traits and how they impact teaching and learning.
Abstract:
This paper proposes a novel relative entropy rate (RER) based approach for multiple HMM (MHMM) approximation of a class of discrete-time uncertain processes. Under different uncertainty assumptions, the model design problem is posed either as a min-max optimisation problem or as a stochastic minimisation problem on the RER between the joint laws describing the state and output processes (rather than the more usual RER between output processes). A suitable filter is proposed, for which performance results are established that bound conditional mean estimation performance and show that estimation performance improves as the RER is reduced. These filter consistency and convergence bounds are the first results characterising multiple HMM approximation performance and suggest that joint RER concepts provide a useful model selection criterion. The proposed model design process and MHMM filter are demonstrated on an important image processing dim-target detection problem.
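The filter in the paper is designed via the joint RER; purely as a generic point of reference, the sketch below runs a bank of HMM forward filters over two candidate models, accumulates each model's log-likelihood, and combines the models' conditional-mean state estimates with posterior model weights. The transition and emission matrices and the observation sequence are illustrative, not from the paper.

```python
# Generic sketch (not the paper's RER-based design): a bank of HMM
# forward filters over candidate models; posterior model probabilities
# weight each model's conditional-mean state estimate.
import numpy as np

def hmm_forward_step(alpha, A, B, obs):
    """One normalized forward-filter step; returns (alpha, step likelihood)."""
    alpha = B[:, obs] * (A.T @ alpha)
    lik = alpha.sum()
    return alpha / lik, lik

# Two candidate 2-state models with discrete observations {0, 1}.
models = [
    {"A": np.array([[0.9, 0.1], [0.2, 0.8]]),
     "B": np.array([[0.8, 0.2], [0.3, 0.7]])},
    {"A": np.array([[0.5, 0.5], [0.5, 0.5]]),
     "B": np.array([[0.6, 0.4], [0.4, 0.6]])},
]
state_values = np.array([0.0, 1.0])          # state -> physical value

obs_seq = [0, 0, 1, 0, 1, 1, 1, 0]
alphas = [np.array([0.5, 0.5]) for _ in models]
model_logw = np.zeros(len(models))           # log model weights

for obs in obs_seq:
    for i, m in enumerate(models):
        alphas[i], lik = hmm_forward_step(alphas[i], m["A"], m["B"], obs)
        model_logw[i] += np.log(lik)

w = np.exp(model_logw - model_logw.max())
w /= w.sum()
estimate = sum(wi * (state_values @ a) for wi, a in zip(w, alphas))
print("model weights:", w, " conditional-mean estimate:", estimate)
```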
Abstract:
This PhD study examines some of what happens in an individual’s mind regarding creativity during problem solving within an organisational context. It presents innovations related to creative motivation, cognitive style and framing effects that can be applied by managers to enhance individual employee creativity within the organisation and thereby assist organisations to become more innovative. The project delivers an understanding of how to leverage natural changes in creative motivation levels during problem solving. This pattern of response is called the Creative Resolve Response (CRR). The project also presents evidence of how framing effects can be used to influence decisions involving creative options, in order to enhance the potential for managers to get employees to select creative options more often for implementation. The study’s objectives are to understand:
• How creative motivation changes during problem solving
• How cognitive style moderates these creative motivation changes
• How framing effects apply to decisions involving creative options to solve problems
• How cognitive style moderates these framing effects
The thesis presents the findings from three controlled experiments based around self-reports during contrived problem solving and decision making situations. The first experiment suggests that creative motivation varies in a predictable and systematic way during problem solving, as a function of the problem solver’s perception of progress. The second experiment suggests that there are specific framing effects related to decisions involving creativity: it seems that simply describing an alternative as innovative may activate perceptual biases that overcome risk-based framing effects. The third experiment suggests that cognitive style moderates decisions involving creativity in complex ways: it seems that in some contexts decision makers will prefer a creative option, regardless of their cognitive style, if this option is both outside the bounds of what is officially allowed and yet ultimately safe. The thesis delivers innovation on three levels: theoretical, methodological and empirical. The highlights of these findings are outlined below:
1. Theoretical innovation with the conceptualisation of the Creative Resolve Response, based on an extension of Amabile’s research regarding creative motivation.
2. Theoretical innovation linking creative motivation and Kirton’s research on cognitive style.
3. Theoretical innovation linking both risk-based and attribute framing effects to cognitive style.
4. Methodological innovation for defining and testing preferences for creative solution implementation, in the form of operationalised creativity decision alternatives.
5. Methodological innovation to identify extreme decision options by applying Shafir’s findings regarding attribute framing effects in reverse to create a test.
6. Empirical innovation with statistically significant research findings which indicate that creative motivation varies in a systematic way.
7. Empirical innovation with statistically significant research findings which identify innovation-descriptor framing effects.
8. Empirical innovation with statistically significant research findings which expand understanding of Kirton’s cognitive style descriptors, including the importance of safe rule breaking.
9. Empirical innovation with statistically significant research findings which validate how framing effects apply to decisions involving operationalised creativity.
Drawing on previous research related to creative motivation, cognitive style, framing effects and supervisor interactions with employees, this study delivers insights which can assist managers to increase the production and implementation of creativity in organisations. Hopefully this will result in organisations which are more innovative. Such organisations have the potential to provide ongoing economic and social benefits.
Abstract:
This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) $(-\nabla^{2})^{\alpha/2}\phi = g(x,y)$ with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix $A$ raised to the fractional power $\alpha/2$. The solution of the linear system then requires the action of the matrix function $f(A) = A^{-\alpha/2}$ on a vector $b$. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates $f(A)b \approx \beta_{0} V_{m} f(T_{m}) e_{1}$. This method works well when both the analytic grade of $A$ with respect to $b$ and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however this is not straightforward in the context of matrix function approximation. In this paper, we use the idea of thick-restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
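A minimal NumPy sketch of the basic (unrestarted, unpreconditioned) Lanczos approximation $f(A)b \approx \beta_{0} V_{m} f(T_{m}) e_{1}$ for $f(A) = A^{-\alpha/2}$ is given below; it uses a one-dimensional Laplacian as a toy symmetric positive definite matrix and compares against a dense eigendecomposition. The paper's thick-restart and adaptive preconditioning refinements are not included.

```python
# Minimal sketch of the basic Lanczos approximation
# f(A) b  ~  beta0 * V_m f(T_m) e_1  with  f(A) = A^(-alpha/2).
# No restarting or preconditioning (unlike the paper's method).
import numpy as np

def lanczos_matfun(A, b, m, f):
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    v_prev = np.zeros(n)
    for j in range(m):
        w = A @ V[:, j] - (beta[j - 1] * v_prev if j > 0 else 0)
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j + 1 < m:
            beta[j] = np.linalg.norm(w)
            v_prev = V[:, j]
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    # f(T_m) via eigendecomposition of the small tridiagonal matrix.
    evals, evecs = np.linalg.eigh(T)
    fT = evecs @ np.diag(f(evals)) @ evecs.T
    return beta0 * V @ fT[:, 0]              # beta0 * V_m f(T_m) e_1

# Toy SPD matrix: 1D Laplacian; approximate A^(-alpha/2) b.
n, alpha_frac = 200, 1.5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_lanczos = lanczos_matfun(A, b, m=50, f=lambda t: t ** (-alpha_frac / 2))

# Reference via dense eigendecomposition.
w, Q = np.linalg.eigh(A)
x_exact = Q @ ((w ** (-alpha_frac / 2)) * (Q.T @ b))
print("relative error:",
      np.linalg.norm(x_lanczos - x_exact) / np.linalg.norm(x_exact))
```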
Abstract:
Programs written in languages of the Oberon family usually contain runtime tests on the dynamic type of variables. In some cases it may be desirable to reduce the number of such tests. Typeflow analysis is a static method of determining bounds on the types that objects may possess at runtime. We show that this analysis is able to reduce the number of tests in certain plausible circumstances. Furthermore, the same analysis is able to detect certain program errors at compile time, which would normally only be detected at program execution. This paper introduces the concepts of typeflow analysis and details its use in the reduction of runtime overhead in Oberon-2.
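The paper targets Oberon-2, but the idea can be illustrated in any language: propagate a set of possible dynamic types for each variable through the program and flag type tests whose outcome is already determined. The toy Python sketch below assumes a small hand-written statement representation and subtype table; none of it reflects the paper's actual implementation.

```python
# Toy analogue of typeflow analysis (the paper targets Oberon-2):
# propagate the set of possible dynamic types of each variable through
# straight-line statements; a type test whose outcome is implied by the
# tracked set is redundant and could be removed by the compiler.
from dataclasses import dataclass

@dataclass
class Assign:            # v := new T
    var: str
    typ: str

@dataclass
class TypeTest:          # IF v IS T THEN ...
    var: str
    typ: str

SUBTYPES = {"Shape": {"Shape", "Circle", "Square"},
            "Circle": {"Circle"}, "Square": {"Square"}}

def analyse(stmts):
    possible = {}                              # var -> set of possible types
    for s in stmts:
        if isinstance(s, Assign):
            possible[s.var] = {s.typ}
        elif isinstance(s, TypeTest):
            allowed = SUBTYPES[s.typ]
            current = possible.get(s.var, set(SUBTYPES))
            if current <= allowed:
                print(f"test '{s.var} IS {s.typ}' always true  -> removable")
            elif not (current & allowed):
                print(f"test '{s.var} IS {s.typ}' always false -> likely error")
            possible[s.var] = current & allowed  # narrow on the THEN branch

analyse([Assign("p", "Circle"),
         TypeTest("p", "Shape"),    # always true: removable
         TypeTest("p", "Square")])  # always false: compile-time error
```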
Abstract:
The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
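A minimal sketch of the underlying algorithm may help: the gradient is recovered by solving, in the least squares sense, the system formed by first-order Taylor expansions f(x_i) - f(x_0) ≈ (x_i - x_0)·g over neighbouring points, and the smallest singular value of that matrix, which appears in the denominator of the error bound, is reported alongside. The stencil and test function are illustrative, not the paper's data.

```python
# Minimal sketch of a least-squares gradient estimate: first-order
# Taylor expansions f(x_i) - f(x_0) ~ (x_i - x_0) . g solved in the
# least-squares sense.  The smallest singular value of the matrix is
# the quantity appearing in the denominator of the error bound.
import numpy as np

def ls_gradient(x0, neighbours, f):
    D = neighbours - x0                    # rows: displacements x_i - x_0
    rhs = np.array([f(p) for p in neighbours]) - f(x0)
    g, *_ = np.linalg.lstsq(D, rhs, rcond=None)
    sigma_min = np.linalg.svd(D, compute_uv=False).min()
    return g, sigma_min

f = lambda p: np.sin(p[0]) + p[1] ** 2     # test function
x0 = np.array([0.3, 0.4])
h = 1e-3
neighbours = x0 + h * np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]])

g, sigma_min = ls_gradient(x0, neighbours, f)
exact = np.array([np.cos(x0[0]), 2 * x0[1]])
print("estimate:", g, " exact:", exact, " sigma_min:", sigma_min)
```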
Abstract:
A point interpolation method with a locally smoothed strain field (PIM-LS2) is developed for mechanics problems using a triangular background mesh. In PIM-LS2, the strain within each sub-cell of a nodal domain is assumed to be the average strain over the adjacent sub-cells of the neighboring element sharing the same field node. We prove theoretically that the energy norm of the smoothed strain field in PIM-LS2 is equivalent to that of the compatible strain field, and then prove that the solution of PIM-LS2 converges to the exact solution of the original strong form. Furthermore, the softening effect of PIM-LS2 on the system, and the effect of the number of sub-cells participating in the smoothing operation on the convergence of PIM-LS2, are investigated. Intensive numerical studies verify the convergence, softening effects and bound properties of PIM-LS2, and show that very "tight" lower and upper bound solutions can be obtained using PIM-LS2.
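The defining operation described above is an averaging of compatible strains over adjacent sub-cells sharing a field node. The toy sketch below only illustrates that averaging step for hand-picked sub-cell strains and an assumed adjacency map; it is not an implementation of PIM-LS2.

```python
# Tiny sketch of the smoothing rule described in the abstract: the
# strain assigned to a sub-cell is the average of the compatible
# strains over the adjacent sub-cells sharing the same field node.
# Sub-cell strains and adjacency below are purely illustrative.
import numpy as np

# Compatible strains per sub-cell, e.g. [eps_xx, eps_yy, gamma_xy].
compatible = {
    "c1": np.array([1.0e-3, 2.0e-4, 5.0e-5]),
    "c2": np.array([1.2e-3, 1.8e-4, 4.0e-5]),
    "c3": np.array([0.9e-3, 2.2e-4, 6.0e-5]),
}

# For each sub-cell, the adjacent sub-cells sharing its field node.
adjacent = {"c1": ["c1", "c2"], "c2": ["c1", "c2", "c3"], "c3": ["c2", "c3"]}

smoothed = {c: np.mean([compatible[n] for n in nbrs], axis=0)
            for c, nbrs in adjacent.items()}

for c, eps in smoothed.items():
    print(c, eps)
```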
Abstract:
The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long range planning issues. This thesis addresses the optimal long term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long term planning model which recognises the regional differences that exist in energy demand and that are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
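To illustrate the configuration/operation split in such a decomposition (though not the thesis's dual-based master problem), the sketch below brute-forces the "master" decision of which plants are committed and solves each "operating subproblem" analytically as an equal-marginal-cost dispatch of quadratic-cost plants; the plant data are invented and generation limits are ignored.

```python
# Simplified illustration of the configuration / operation split used
# in the thesis (NOT its dual-based decomposition): a brute-force
# "master" enumerates which plants are committed, and an analytic
# "subproblem" dispatches the committed plants at equal marginal cost.
# Quadratic costs c(p) = a*p^2 + b*p (+ fixed cost); limits ignored.
from itertools import product

plants = [  # (a, b, fixed_cost) -- illustrative data
    (0.010, 20.0, 500.0),
    (0.015, 18.0, 400.0),
    (0.020, 25.0, 200.0),
]
demand = 300.0

def dispatch(committed):
    """Equal-marginal-cost dispatch of committed plants (no limits)."""
    inv = [1.0 / (2 * a) for a, b, f in committed]
    lam = (demand + sum(b / (2 * a) for a, b, f in committed)) / sum(inv)
    cost = 0.0
    for a, b, f in committed:
        p = (lam - b) / (2 * a)
        cost += a * p * p + b * p + f
    return cost

best = None
for mask in product([0, 1], repeat=len(plants)):
    committed = [pl for on, pl in zip(mask, plants) if on]
    if not committed:
        continue
    cost = dispatch(committed)          # operating subproblem
    if best is None or cost < best[0]:
        best = (cost, mask)

print(f"best configuration {best[1]} with total cost {best[0]:.1f}")
```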