128 results for Problem-solving algorithms
in University of Queensland eSpace - Australia
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ^(2) parametric waveguides. This example uses a non-diagonal coherent state representation, and correctly predicts the sub-shot-noise spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed.
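To make the stepping scheme concrete, below is a minimal sketch (not the authors' implementation) of one semi-implicit central-difference step for a single 1D stochastic parabolic PDE with multiplicative noise, du = D u_xx dt + σu dW. Diffusion is treated implicitly (Crank-Nicolson style) and the noise explicitly (Euler-Maruyama); the equation, grid, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: semi-implicit central-difference step for
#   du = D * u_xx dt + sigma * u dW   (multiplicative noise).
# Diffusion handled implicitly, noise explicitly. All names and
# parameter values are illustrative, not taken from the paper.

def semi_implicit_step(u, dt, dx, D, sigma, rng):
    n = u.size
    r = D * dt / dx**2
    # Central second-difference operator with periodic boundaries.
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    dW = rng.normal(0.0, np.sqrt(dt), n)          # Wiener increments
    rhs = (np.eye(n) + 0.5 * r * L) @ u + sigma * u * dW
    return np.linalg.solve(np.eye(n) - 0.5 * r * L, rhs)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = 1.0 + 0.1 * np.sin(2 * np.pi * x)             # initial field
for _ in range(1000):
    u = semi_implicit_step(u, dt=1e-4, dx=x[1] - x[0],
                           D=0.5, sigma=0.2, rng=rng)
```

Treating the stiff diffusion operator implicitly keeps the step stable at far larger time steps than a fully explicit scheme would allow, which matters when many stochastic trajectories must be averaged to estimate correlation functions.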
Abstract:
This study evaluated the effectiveness of the Problem Solving For Life program as a universal approach to the prevention of adolescent depression. Short-term results indicated that participants with initially elevated depression scores (high risk) who received the intervention showed a significantly greater decrease in depressive symptoms and increase in life problem-solving scores from pre- to postintervention compared with a high-risk control group. Low-risk participants who received the intervention reported a small but significant decrease in depression scores over the intervention period, whereas the low-risk controls reported an increase in depression scores. The low-risk intervention group also reported a significantly greater increase in problem-solving scores over the intervention period compared with low-risk controls. These results were not, however, maintained at 12-month follow-up.
Abstract:
Rumor discourse has been conceptualized as an attempt to reduce anxiety and uncertainty via a process of social sensemaking. Fourteen rumors transmitted on various Internet discussion groups were observed and content-analyzed over the life of each rumor. With this (previously unavailable) more ecologically robust methodology, the intertwined threads of sensemaking and the gaining of interpretive control are clearly evident in the tapestry of rumor discourse. We propose a categorization of statements (the Rumor Interaction Analysis System) and find differences between dread rumors and wish rumors in anxiety-related content categories. Cluster analysis of these statements reveals a typology of voices (communicative postures) exhibiting the sensemaking activities of the rumor discussion group, such as hypothesizing, skeptical critique, directing of activities to gain information, and presentation of evidence. These findings enrich our understanding of the long-implicated sensemaking function of rumor by clarifying the elements of communication that operate in rumor's social context.
Abstract:
The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
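As a sketch of the two iterative steps, the toy loop below distributes B buffer slots over K locations by (a) sampling allocations from a multinomial "random mechanism" and (b) refitting its probabilities to the elite samples, which is the cross-entropy update for this parametrization. The throughput function here is a stand-in placeholder, not a production-line model from the paper.

```python
import numpy as np

# Hedged sketch of the cross-entropy loop for buffer allocation:
# spread B buffer slots over K intermediate locations. A real study
# would score allocations with a simulation or analytic line model.

rng = np.random.default_rng(1)
K, B = 5, 20                        # locations, total buffers (assumed)
p = np.full(K, 1.0 / K)             # sampling distribution over locations

def throughput(alloc):
    # Placeholder objective: rewards balanced allocations (assumption).
    return -np.var(alloc)

for _ in range(50):
    # (a) generate buffer allocations from the current random mechanism
    samples = rng.multinomial(B, p, size=200)
    scores = np.array([throughput(a) for a in samples])
    elite = samples[scores >= np.quantile(scores, 0.9)]   # top 10%
    # (b) cross-entropy update: refit p to the elite samples (smoothed)
    p = 0.7 * elite.mean(axis=0) / B + 0.3 * p

best = samples[np.argmax(scores)]
print("near-optimal allocation:", best)
```

The smoothing factor in step (b) is a common practical safeguard against the sampling distribution collapsing prematurely onto a single allocation.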
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. Area-Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images are taken of the scene, what their orientation is, or how much color variation the scene itself contains. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that, because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any added constraints.
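The flavor of the exhaustive formulation can be illustrated with a 2D analogue (an assumption for brevity, not the paper's first-order logic encoding): enumerate every occupancy assignment of a tiny grid and keep those whose axis-aligned silhouettes match the observed projections.

```python
from itertools import product
import numpy as np

# Illustrative 2D analogue (assumption, not the paper's encoding):
# count every occupancy assignment of a 3x3 grid whose two 1D
# silhouettes match those of a synthetic "true" scene.

n = 3
truth = np.zeros((n, n), dtype=bool)
truth[1, 1] = truth[0, 2] = True
sil_rows = truth.any(axis=1)        # silhouette along one axis
sil_cols = truth.any(axis=0)        # silhouette along the other

solutions = 0
for bits in product([False, True], repeat=n * n):
    v = np.array(bits).reshape(n, n)
    if (np.array_equal(v.any(axis=1), sil_rows)
            and np.array_equal(v.any(axis=0), sil_cols)):
        solutions += 1
print("consistent reconstructions:", solutions)
```

Even at this scale several assignments survive, and the count grows exponentially with grid size, echoing the paper's observation that constraints are needed to shrink the solution space.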
Abstract:
Creativity is increasingly recognised as an essential component of engineering design. This paper describes an exploratory study into the nature and importance of creativity in engineering design problem solving in relation to the possible impact of software design tools. The first stage of the study involved an empirical investigation in the form of a case study of the use of standard CAD tool sets and the development of a systems engineering software support tool. It was found that there were several ways in which CAD influenced the creative process, including enhanced visualisation and communication, premature fixation, circumscribed thinking, and bounded ideation. The tool development experience uncovered the difficulty of supporting creative processes from the developer's perspective. The issues were the necessity of making assumptions, achieving a balance between structure and flexibility, and the pitfalls of satisfying user wants and needs. The second part of the study involved the development of a model of the creative problem-solving process in engineering design. This provided a possible explanation for why purpose-designed engineering software tools might encourage an analytical problem-solving approach and discourage a more creative approach.
Abstract:
An approximate analytical technique employing a finite integral transform is developed to solve the reaction-diffusion problem with Michaelis-Menten kinetics in a solid of general shape. A simple infinite-series solution for the substrate concentration is obtained as a function of the Thiele modulus, the modified Sherwood number, and the Michaelis constant. An iteration scheme is developed to bring the approximate solution closer to the exact solution. Comparison with the known exact solutions for slab geometry (quadrature) and numerically exact solutions for spherical geometry (orthogonal collocation) shows excellent agreement for all values of the Thiele modulus and Michaelis constant.
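For the slab case, the numerically exact benchmark against which a truncated series could be checked can be sketched with a standard boundary-value solver. The dimensionless form c'' = φ² c/(1 + βc) with c'(0) = 0 and c(1) = 1, and the parameter values below, are common textbook assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hedged sketch: numerically "exact" slab benchmark for the
# Michaelis-Menten reaction-diffusion problem
#   c'' = phi^2 * c / (1 + beta * c),  c'(0) = 0,  c(1) = 1.
# phi is the Thiele modulus; beta the dimensionless Michaelis parameter.

phi, beta = 2.0, 1.0                  # illustrative values (assumption)

def rhs(x, y):
    c, dc = y
    return np.vstack([dc, phi**2 * c / (1.0 + beta * c)])

def bc(ya, yb):
    # Symmetry at the slab centre; fixed concentration at the surface.
    return np.array([ya[1], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, x, np.vstack([np.ones_like(x), np.zeros_like(x)]))
print("centre concentration:", sol.sol(0.0)[0])
```

A finite surface resistance (the modified Sherwood number in the abstract) would replace the fixed-concentration condition with a Robin boundary condition at x = 1.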
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has not previously been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing performance in terms of CPU time (under the same stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient choice for large-scale power system small-signal stability eigenanalysis.
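The BR algorithm is not available in standard numerical libraries, so the sketch below shows only the QR baseline it is compared against: reduce a state matrix (here a random stand-in, an assumption) to upper Hessenberg form, then extract all eigenvalues with a QR-based solver and inspect the rightmost real part, which is what a small-signal stability study ultimately needs.

```python
import time
import numpy as np
from scipy.linalg import hessenberg, eigvals

# Hedged sketch of the QR baseline pipeline only; the BR algorithm
# itself is not implemented in SciPy/NumPy. The random matrix stands
# in for a power-system state matrix (assumption).

n = 600                               # order comparable to a 600-bus case
A = np.random.default_rng(2).standard_normal((n, n))

t0 = time.perf_counter()
H = hessenberg(A)                     # orthogonal reduction, O(n^3)
lam = eigvals(H)                      # QR iteration on the Hessenberg form
print(f"{lam.size} eigenvalues in {time.perf_counter() - t0:.2f} s")

# Small-signal stability check: any eigenvalue with a positive real
# part indicates an unstable oscillatory or aperiodic mode.
print("max real part:", lam.real.max())
```

Both BR and QR operate on the Hessenberg form; the reported savings come from BR's cheaper per-iteration work on narrowly banded, nearly tridiagonal matrices of this kind.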