395 results for QA76
Abstract:
The liquid metal flow in induction crucible models is known to be unstable, turbulent and difficult to predict in the regime of medium frequencies, where the electromagnetic skin layer is of considerable extent. We present long-term turbulent flow measurements made with a potential-difference velocity probe incorporating a permanent magnet in a cylindrical container filled with the eutectic In-Ga-Sn melt. A parallel numerical simulation of the long-time-scale development of the turbulent average flow is also presented. The numerical flow model uses an implicit pseudo-spectral code and a k-ω turbulence model recently developed for transitional flow modelling. The results compare reasonably well with the experiment and demonstrate the time development of the turbulent flow field and the turbulence energy.
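For orientation, the standard (Wilcox-type) k-ω model transports the turbulence kinetic energy k and the specific dissipation rate ω; the transitional variant used in the paper may differ in coefficients and damping terms. A generic form:

$$\frac{\partial k}{\partial t} + U_j\,\frac{\partial k}{\partial x_j} = P_k - \beta^{*} k\,\omega + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \sigma^{*}\nu_t\right)\frac{\partial k}{\partial x_j}\right]$$

$$\frac{\partial \omega}{\partial t} + U_j\,\frac{\partial \omega}{\partial x_j} = \alpha\,\frac{\omega}{k}\,P_k - \beta\,\omega^{2} + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \sigma\,\nu_t\right)\frac{\partial \omega}{\partial x_j}\right], \qquad \nu_t = \frac{k}{\omega}$$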
Abstract:
There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% savings in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.
Abstract:
Aerodynamic generation of sound is governed by the Navier–Stokes equations, while acoustic propagation in a non-uniform medium is effectively described by the linearised Euler equations. Different numerical schemes are required for the efficient solution of these two sets of equations, so coupling techniques become an essential issue. Two types of one-way coupling between the flow solver and the acoustic solver are discussed: (a) for aerodynamic sound generated at solid surfaces, and (b) in the free stream. Test results indicate that the coupling achieves the accuracy needed for Computational Fluid Dynamics codes to be used in aeroacoustic simulations.
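For reference, the linearised Euler equations for small perturbations (ρ', u', p') about a uniform mean state (ρ₀, u₀, p₀) take the form below; a non-uniform mean flow, as considered in the paper, adds mean-gradient coupling terms, and in a one-way coupling the aerodynamic source enters through the right-hand sides:

$$\frac{\partial \rho'}{\partial t} + \mathbf{u}_0\!\cdot\!\nabla\rho' + \rho_0\,\nabla\!\cdot\!\mathbf{u}' = 0$$

$$\rho_0\left(\frac{\partial \mathbf{u}'}{\partial t} + (\mathbf{u}_0\!\cdot\!\nabla)\,\mathbf{u}'\right) = -\nabla p'$$

$$\frac{\partial p'}{\partial t} + \mathbf{u}_0\!\cdot\!\nabla p' + \gamma\,p_0\,\nabla\!\cdot\!\mathbf{u}' = 0$$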
Abstract:
When designing a new passenger ship or modifying an existing design, how do we ensure that the proposed design is safe from an evacuation point of view? In the building and aviation industries, computer-based evacuation models are being used to tackle similar issues. In these industries, the traditional restrictive prescriptive approach to design is making way for performance-based design methodologies using risk assessment and computer simulation. In the maritime industry, ship evacuation models offer the promise of quickly and efficiently bringing these considerations into the design phase, while the ship is "on the drawing board". This paper describes the development of evacuation models with applications to passenger ships and further discusses issues concerning data requirements and validation.
Abstract:
Given the importance of occupant behavior on evacuation efficiency, a new behavioral feature has been implemented into building EXODUS. This feature concerns the response of occupants to exit selection and re-direction, given that the occupant is queuing at an external exit. This behavior is not simply pre-determined by the user as part of the initialization process, but involves the occupant taking decisions based on their previous experiences with the enclosure and the information available to them. This information concerns the occupant's prior knowledge of the enclosure and line-of-sight information concerning queues at neighboring exits. This new feature is demonstrated and reviewed through several examples.
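To make the idea concrete, here is a minimal sketch of a queue-aware exit-selection rule of the sort described above. All names, parameters and the scoring formula are illustrative assumptions, not the actual EXODUS behaviour model:

```python
# Illustrative sketch only: a queue-aware exit-selection rule of the kind
# described in the abstract. All names, parameters and the scoring formula
# are assumptions for illustration, not buildingEXODUS internals.

from dataclasses import dataclass

@dataclass
class Exit:
    name: str
    queue_length: int      # occupants currently queuing (line-of-sight estimate)
    flow_rate: float       # occupants per second the exit can pass
    distance: float        # metres from the deciding occupant

def estimated_egress_time(ex: Exit, walk_speed: float = 1.2) -> float:
    """Crude cost: travel time plus expected queuing time."""
    return ex.distance / walk_speed + ex.queue_length / ex.flow_rate

def choose_exit(current: Exit, visible: list[Exit], inertia: float = 1.15) -> Exit:
    """Redirect only if a visible exit beats the current one by a margin.

    The 'inertia' factor models reluctance to abandon a queue already joined.
    """
    best = min(visible, key=estimated_egress_time, default=current)
    if estimated_egress_time(best) * inertia < estimated_egress_time(current):
        return best
    return current

if __name__ == "__main__":
    here = Exit("main", queue_length=40, flow_rate=1.3, distance=0.0)
    side = Exit("side", queue_length=5, flow_rate=1.1, distance=25.0)
    print(choose_exit(here, [side]).name)   # -> 'side'
```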
Abstract:
In this paper we present some work concerned with the development and testing of a simple solid-fuel combustion model incorporated within a Computational Fluid Dynamics (CFD) framework. The model is intended for use in engineering applications of fire field modeling and represents an extension of this technique to situations involving the combustion of solid fuels. The CFD model is coupled with a simple thermal pyrolysis model for combustible solid non-charring fuels, a six-flux radiation model and an eddy-dissipation model for gaseous combustion. The model is then used to simulate a series of small-scale room fire experiments in which the target solid fuel is polymethylmethacrylate. The numerical predictions produced by this coupled model are found to be in very good agreement with experimental data. Furthermore, numerical predictions of the relationship between the air entrained into the fire compartment and the ventilation factor produce a characteristic linear correlation with constant of proportionality 0.38 kg/(s·m^(5/2)). The simulation results also suggest that the model is capable of predicting the onset of "flashover" type behavior within the fire compartment.
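As a worked illustration of that correlation (the 0.38 constant is from the abstract; the doorway geometry below is invented for the example), the entrained air flow scales linearly with the ventilation factor A√H:

```python
# Worked example of the reported correlation m_dot = 0.38 * A * sqrt(H)
# (constant from the abstract; the doorway dimensions below are invented).
from math import sqrt

H = 2.0                 # vent height, m
W = 0.8                 # vent width, m
A = W * H               # vent area, m^2
ventilation_factor = A * sqrt(H)          # units: m^(5/2)
m_dot = 0.38 * ventilation_factor         # entrained air, kg/s

print(f"A*sqrt(H) = {ventilation_factor:.2f} m^(5/2), m_dot = {m_dot:.2f} kg/s")
# -> A*sqrt(H) = 2.26 m^(5/2), m_dot = 0.86 kg/s
```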
Abstract:
Over recent years there has been an increase in the use of generic Computational Fluid Dynamics (CFD) software packages across various application fields. This has created the need for the integration of expertise into CFD software, which can take the form of an Intelligent Knowledge-Based System (IKBS). The advantages of integrating intelligence into generic engineering software are discussed with a special view to software engineering considerations. The software modelling cycle of a typical engineering problem is identified, and the expertise and user control needed for each modelling phase are shown. The requirements of an IKBS for CFD software are discussed and compared to current practice. The blackboard software architecture is presented and shown to be appropriate for integrating an IKBS into an engineering software package; this is demonstrated through the prototype CFD software package FLOWES.
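As a concrete (if toy) picture of the blackboard pattern the paper adopts, the sketch below lets independent knowledge sources watch a shared blackboard and contribute when their preconditions hold. Everything here, from names to the simplistic control loop, is an illustrative assumption rather than the FLOWES design:

```python
# Minimal blackboard-architecture sketch (illustrative; not the FLOWES API).
# Knowledge sources watch a shared blackboard and fire when their
# preconditions are met; a simple controller loops until quiescence.

class Blackboard(dict):
    """Shared working memory: problem facts accumulate here."""

def suggest_mesh(bb):
    if "geometry" in bb and "mesh" not in bb:
        bb["mesh"] = f"structured grid for {bb['geometry']}"
        return True
    return False

def suggest_turbulence_model(bb):
    if bb.get("reynolds", 0) > 4000 and "turbulence_model" not in bb:
        bb["turbulence_model"] = "k-epsilon"
        return True
    return False

def run(bb, knowledge_sources):
    progress = True
    while progress:                       # stop when no source can contribute
        progress = any(ks(bb) for ks in knowledge_sources)
    return bb

if __name__ == "__main__":
    bb = Blackboard(geometry="pipe", reynolds=10_000)
    print(run(bb, [suggest_mesh, suggest_turbulence_model]))
```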
Abstract:
The paper considers the job shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on one of m⩾2 machines, while the other operation must be performed on a single bottleneck machine, the same for all jobs. For this strongly NP-hard problem we present two heuristics with improved worst-case performance. One of them guarantees a worst-case performance ratio of 3/2. The other creates a schedule whose makespan exceeds the largest machine workload by at most the length of the largest operation.
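To fix ideas, the sketch below schedules each job greedily, first operation on its dedicated machine and second on the common bottleneck machine, as early as possible. This is a generic list heuristic for the stated problem shape, not the paper's algorithm, and the job data are invented:

```python
# Illustrative greedy "schedule each operation as early as possible" sketch
# for the two-operation job shop described above (machine 0 is the common
# bottleneck). This is a generic list heuristic, not the paper's algorithm.

def greedy_schedule(jobs):
    """jobs: list of (machine_for_first_op, p1, p2); second op on machine 0.

    Returns the makespan of a greedy non-preemptive schedule.
    """
    machine_free = {}           # machine id -> time it becomes free
    makespan = 0.0
    for m, p1, p2 in jobs:
        s1 = machine_free.get(m, 0.0)             # first op on machine m
        f1 = s1 + p1
        machine_free[m] = f1
        s2 = max(f1, machine_free.get(0, 0.0))    # then the bottleneck machine
        f2 = s2 + p2
        machine_free[0] = f2
        makespan = max(makespan, f2)
    return makespan

print(greedy_schedule([(1, 3, 2), (2, 1, 4), (1, 2, 2)]))   # -> 11
```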
Abstract:
This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule with makespan at most twice the optimum value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine no-wait flow shop problem with a single server to the Gilmore-Gomory traveling salesman problem and solve it in polynomial time.
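A minimal sketch of a greedy dispatch rule in the spirit of the simple algorithm mentioned above (the paper's exact rule and tie-breaking may differ; the job data are invented):

```python
# Illustrative greedy for parallel dedicated machines with a single setup
# server (a simple dispatch rule in the spirit of the 2-approximation
# discussed above; not necessarily the paper's exact algorithm).

def greedy_single_server(jobs):
    """jobs: list of (machine, setup, processing), one job per tuple.

    The single server performs setups one at a time; each machine then
    processes its job without the server. Jobs are dispatched greedily:
    take the pending job that can start its setup earliest.
    """
    server_free = 0.0
    machine_free = {}
    pending = list(jobs)
    makespan = 0.0
    while pending:
        job = min(pending,
                  key=lambda j: max(server_free, machine_free.get(j[0], 0.0)))
        pending.remove(job)
        m, s, p = job
        start = max(server_free, machine_free.get(m, 0.0))
        server_free = start + s            # server busy during the setup
        machine_free[m] = start + s + p    # machine busy through processing
        makespan = max(makespan, machine_free[m])
    return makespan

print(greedy_single_server([(1, 1, 5), (2, 1, 5), (1, 2, 2)]))   # -> 10
```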
Abstract:
This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.
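To illustrate the group technology constraint, the bookkeeping below keeps every batch whole, so each machine pays exactly one setup per batch; the resulting per-machine workloads are a standard lower bound on the makespan of any such schedule. This is a concept sketch with invented data, not the paper's heuristic, which must additionally sequence batches and route them through both machines:

```python
# Illustrative group-technology bookkeeping for the two-machine open shop
# with batch setups (concept sketch only; the paper's heuristic also decides
# batch orders and machine routes, which we do not attempt here).

def machine_workloads(batches):
    """batches: list of (setup_m1, setup_m2, [(p1, p2), ...]).

    In a group technology schedule each batch is kept whole, so every
    machine pays exactly one setup per batch. The resulting per-machine
    workloads give a standard lower bound on the GT makespan.
    """
    load1 = sum(s1 + sum(p1 for p1, _ in jobs) for s1, _, jobs in batches)
    load2 = sum(s2 + sum(p2 for _, p2 in jobs) for _, s2, jobs in batches)
    return load1, load2

b = [(2, 1, [(3, 2), (1, 4)]), (1, 2, [(2, 2)])]
print(machine_workloads(b))   # -> (9, 11); max(9, 11) bounds the GT makespan
```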
Abstract:
This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time, and this algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.
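The engine behind such pseudopolynomial dynamic programs is typically a reachable-sums table over processing times. The sketch below shows that generic engine only; the paper's DP must carry additional schedule state:

```python
# Generic pseudopolynomial subset-sum DP of the kind such algorithms build
# on (illustrative engine only; the paper's DP over open shop schedules
# carries more state than this sketch).

def reachable_sums(values):
    """Return the set of all achievable subset sums of 'values' in
    O(n * total) time and space, pseudopolynomial in the input numbers."""
    sums = {0}
    for v in values:
        sums |= {s + v for s in sums}
    return sums

# e.g. which total loads can one side of a partition of these processing
# times take on?
print(sorted(reachable_sums([3, 5, 7])))   # -> [0, 3, 5, 7, 8, 10, 12, 15]
```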
Abstract:
The paper considers a problem of scheduling n jobs in a two-machine open shop to minimise the makespan, provided that preemption is not allowed and interstage transportation times are involved. In general, this problem is known to be NP-hard. We present a linear-time algorithm that finds an optimal schedule if no transportation time exceeds the smallest of the processing times. We also describe an algorithm that creates a heuristic solution to the problem with job-independent transportation times: it provides a worst-case performance ratio of 8/5 if the transportation time of a job depends on the assigned processing route, and the ratio reduces to 3/2 if all transportation times are equal.
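To show how transportation times enter the objective, the evaluator below computes the makespan of a given two-machine open shop schedule in which each job's second operation cannot start until its transport lag has elapsed. It is an evaluation sketch with invented data, not the paper's scheduling algorithm:

```python
# Illustrative makespan evaluation for a two-machine open shop schedule with
# interstage transportation times (invented data; evaluation only, not the
# paper's algorithm). Each job runs its first operation on one machine,
# travels for t time units, then runs its second operation on the other.

def makespan(sequence):
    """sequence: list of (first_machine, p_first, t_transport, p_second),
    processed in list order on both machines (a permutation schedule)."""
    free = {1: 0.0, 2: 0.0}
    end = 0.0
    for m1, p1, t, p2 in sequence:
        m2 = 2 if m1 == 1 else 1
        f1 = free[m1] + p1                 # first operation
        free[m1] = f1
        s2 = max(f1 + t, free[m2])         # arrive after transport; machine free
        free[m2] = s2 + p2                 # second operation
        end = max(end, free[m2])
    return end

print(makespan([(1, 3, 1, 2), (2, 2, 1, 3), (1, 1, 2, 1)]))   # -> 16.0
```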
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimisation method that refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight; however, it has been shown that for certain classes of solution algorithm, the convergence of the solver is strongly influenced by the subdomain aspect ratio. In this paper, therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
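One common way to score shape regularity in two dimensions is the isoperimetric quotient shown below; the paper weighs several aspect-ratio definitions, so take this as one illustrative choice with invented subdomain data:

```python
# Illustrative aspect-ratio cost for a 2-D subdomain (one common definition;
# the paper considers several variants). A circle scores 1; stretched or
# ragged subdomains score higher.

import math

def aspect_ratio(area: float, perimeter: float) -> float:
    """Perimeter^2 / (4*pi*area): shape-regularity measure, minimised at 1."""
    return perimeter ** 2 / (4.0 * math.pi * area)

def partition_cost(subdomains):
    """Cost function over a partition: here, the worst subdomain's AR."""
    return max(aspect_ratio(a, p) for a, p in subdomains)

# a 1x1 square (AR ~ 1.27) versus a 4 x 0.25 sliver (AR ~ 5.75)
print(partition_cost([(1.0, 4.0)]), partition_cost([(1.0, 8.5)]))
```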
Abstract:
We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain decomposition. The load balancing is designed to maintain good partition aspect ratio, and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question of why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method for calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth-first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.
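The classic first-order diffusion scheme underlying such balancing flows is easy to state: each processor repeatedly exchanges a fixed fraction of its load difference with each neighbour. The sketch below shows that baseline scheme on an invented four-processor ring; the paper's versions are refinements of this idea:

```python
# First-order diffusion iteration for computing a balancing flow on a
# processor graph (the classic baseline scheme; the paper uses refined
# variants of this idea).

def diffusion_step(load, neighbours, alpha=0.25):
    """One sweep: each node exchanges alpha * (load difference) with each
    neighbour. 'neighbours[i]' lists the nodes adjacent to node i."""
    new = load[:]
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            new[i] += alpha * (load[j] - load[i])
    return new

load = [10.0, 2.0, 4.0, 0.0]                 # element counts per processor
ring = [[1, 3], [0, 2], [1, 3], [2, 0]]      # 4-processor ring topology
for _ in range(50):
    load = diffusion_step(load, ring)
print([round(x, 2) for x in load])           # -> [4.0, 4.0, 4.0, 4.0]
```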
Abstract:
Three parallel optimisation algorithms, for use in the context of multilevel graph partitioning of unstructured meshes, are described. The first, interface optimisation, reduces the computation to a set of independent optimisation problems in interface regions. The second, alternating optimisation, is a restriction of this technique in which mesh entities are only allowed to migrate between subdomains in one direction. The third treats the gain as a potential field and uses the concept of relative gain to select appropriate vertices to migrate. The three methods are compared and shown to produce partitions of very high global quality, very rapidly. The results are also compared with another partitioning tool and shown to be of higher quality, although taking longer to compute.
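For reference, the underlying vertex gain is the classic Kernighan-Lin/Fiduccia-Mattheyses quantity sketched below: the cut-edge reduction obtained by moving a vertex to the other subdomain. The relative-gain idea mentioned above compares a vertex's gain with those of its neighbours across the cut to decide migrations in parallel; the graph data here are invented:

```python
# Classic KL/FM-style vertex gain for partition refinement (illustrative,
# with invented data). "Relative gain" variants compare these values across
# a cut edge to choose which endpoint migrates.

def gain(v, side_of, adj):
    """Cut-edge reduction if vertex v moves to the other subdomain."""
    external = sum(1 for u in adj[v] if side_of[u] != side_of[v])
    internal = len(adj[v]) - external
    return external - internal

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
side_of = {0: 0, 1: 0, 2: 1, 3: 1}
print({v: gain(v, side_of, adj) for v in adj})   # -> {0: 0, 1: 1, 2: 2, 3: 1}
```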