388 results for Computer software reusability


Relevance:

80.00%

Publisher:

Abstract:

Given the importance of occupant behavior for evacuation efficiency, a new behavioral feature has been implemented in buildingEXODUS. This feature concerns the response of occupants to exit selection and redirection while the occupant is queuing at an external exit. This behavior is not simply pre-determined by the user as part of the initialization process, but involves the occupant making decisions based on their previous experience of the enclosure and the information available to them: the occupant's prior knowledge of the enclosure and line-of-sight information concerning queues at neighboring exits. The new feature is demonstrated and reviewed through several examples.
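The abstract does not give the decision rule itself; as a hypothetical sketch of this kind of adaptive exit selection, an occupant might compare exits by an estimated clearance time combining travel distance and the visible queue. All names, speeds, and flow rates below are illustrative assumptions, not buildingEXODUS internals:

```python
def estimated_clear_time(distance_m, queue_len, walk_speed=1.2, exit_flow=1.33):
    """Crude per-exit estimate: travel time plus the time for the visible
    queue to pass through the exit (speed and flow are assumed values)."""
    return distance_m / walk_speed + queue_len / exit_flow

def choose_exit(exits):
    # exits: {name: (distance_m, visible_queue_length)}
    return min(exits, key=lambda e: estimated_clear_time(*exits[e]))

doors = {"main": (5.0, 40), "side": (25.0, 2)}
# main: 5/1.2 + 40/1.33 ≈ 34.2 s; side: 25/1.2 + 2/1.33 ≈ 22.3 s
```

Under this rule an occupant queuing at a crowded nearby exit would redirect to a farther but emptier one, which is the qualitative behavior the feature models.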

Relevance:

80.00%

Publisher:

Abstract:

In this paper we present some work concerned with the development and testing of a simple solid fuel combustion model incorporated within a Computational Fluid Dynamics (CFD) framework. The model is intended for use in engineering applications of fire field modeling and represents an extension of this technique to situations involving the combustion of solid fuels. The CFD model is coupled with a simple thermal pyrolysis model for combustible solid noncharring fuels, a six-flux radiation model and an eddy-dissipation model for gaseous combustion. The model is then used to simulate a series of small-scale room fire experiments in which the target solid fuel is polymethylmethacrylate. The numerical predictions produced by this coupled model are found to be in very good agreement with experimental data. Furthermore, numerical predictions of the relationship between the air entrained into the fire compartment and the ventilation factor produce a characteristic linear correlation with constant of proportionality 0.38 kg/(s·m^(5/2)). The simulation results also suggest that the model is capable of predicting the onset of "flashover" type behavior within the fire compartment.
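Using the linear correlation reported above, the entrained air mass flow can be estimated directly from the opening geometry via the ventilation factor A√H. The doorway dimensions below are illustrative, not from the paper:

```python
import math

def entrained_air_flow(width_m, height_m, c=0.38):
    """Air mass flow (kg/s) into the compartment from the reported linear
    correlation against the ventilation factor A*sqrt(H)."""
    area = width_m * height_m
    return c * area * math.sqrt(height_m)

# A standard 0.8 m x 2.0 m doorway (illustrative dimensions):
flow = entrained_air_flow(0.8, 2.0)  # ≈ 0.86 kg/s
```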

Relevance:

80.00%

Publisher:

Abstract:

The paper considers the job shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on one of m ⩾ 2 machines, while the other operation must be performed on a single bottleneck machine, the same for all jobs. For this strongly NP-hard problem we present two heuristics with improved worst-case performance. One of them guarantees a worst-case performance ratio of 3/2. The other algorithm creates a schedule with a makespan that exceeds the largest machine workload by at most the length of the largest operation.
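The heuristics themselves are not reproduced in the abstract. As a rough illustration of the second guarantee, a naive list schedule on a small two-operation instance shows the two quantities involved, the largest machine workload and the largest operation; this is only a sketch of the setting, not the authors' algorithm:

```python
def list_schedule(jobs):
    """Naive list scheduling for a shop: each job is a sequence of
    (machine, duration) operations; repeatedly dispatch the ready
    operation that can start earliest."""
    machine_free = {}          # time each machine becomes free
    job_free = [0] * len(jobs) # time each job's previous operation ends
    next_op = [0] * len(jobs)
    cmax = 0
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        best = None
        for j, ops in enumerate(jobs):
            if next_op[j] >= len(ops):
                continue
            m, d = ops[next_op[j]]
            start = max(job_free[j], machine_free.get(m, 0))
            if best is None or start < best[0]:
                best = (start, j, m, d)
        start, j, m, d = best
        machine_free[m] = job_free[j] = start + d
        next_op[j] += 1
        cmax = max(cmax, start + d)
    return cmax

# Machine 0 is the bottleneck (every job's second operation runs on it).
# Workloads: machine 0 = 6, machine 1 = 5, machine 2 = 4; largest operation = 4.
jobs = [[(1, 3), (0, 2)], [(2, 4), (0, 1)], [(1, 2), (0, 3)]]
cmax = list_schedule(jobs)
```

On this instance the makespan stays within the stated bound (largest workload plus largest operation, here 6 + 4).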

Relevance:

80.00%

Publisher:

Abstract:

This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule with a makespan that is at most twice the optimum value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and the open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine flow shop no-wait problem with a single server to the Gilmore-Gomory traveling salesman problem and solve it in polynomial time. (c) 2000 John Wiley & Sons, Inc.
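The single-server setting can be sketched in a few lines: setups occupy the server one at a time, while processing afterwards ties up only the machine. The greedy rule below (set up next whichever pending job can start earliest) illustrates the model, and is not necessarily the paper's greedy algorithm:

```python
def greedy_single_server(jobs):
    """jobs: list of (machine, setup, processing). A single server performs
    setups one at a time; processing then proceeds without the server.
    Greedy: repeatedly set up the pending job that can start earliest."""
    server_free = 0
    machine_free = {}
    pending = list(jobs)
    cmax = 0
    while pending:
        job = min(pending,
                  key=lambda j: max(server_free, machine_free.get(j[0], 0)))
        pending.remove(job)
        m, s, p = job
        start = max(server_free, machine_free.get(m, 0))
        server_free = start + s          # server is busy during setup only
        machine_free[m] = start + s + p  # machine is busy through processing
        cmax = max(cmax, machine_free[m])
    return cmax

# Two dedicated machines, two jobs each (setup 1, processing 3):
makespan = greedy_single_server([(1, 1, 3), (2, 1, 3), (1, 1, 3), (2, 1, 3)])
```

Here the busiest machine carries a load of 8 including its setups, so the greedy makespan is well within the factor-two guarantee stated above.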

Relevance:

80.00%

Publisher:

Abstract:

This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.

Relevance:

80.00%

Publisher:

Abstract:

This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme that runs in time polynomial in n and 1/ε. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.

Relevance:

80.00%

Publisher:

Abstract:

The paper considers a problem of scheduling n jobs in a two-machine open shop to minimise the makespan, provided that preemption is not allowed and the interstage transportation times are involved. In general, this problem is known to be NP-hard. We present a linear time algorithm that finds an optimal schedule if no transportation time exceeds the smallest of the processing times. We also describe an algorithm that creates a heuristic solution to the problem with job-independent transportation times. Our algorithm provides a worst-case performance ratio of 8/5 if the transportation time of a job depends on the assigned processing route. The ratio reduces to 3/2 if all transportation times are equal.

Relevance:

80.00%

Publisher:

Abstract:

Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut-edge weight; however, it has been shown that, for certain classes of solution algorithm, the convergence of the solver is strongly influenced by the subdomain aspect ratio. In this paper, therefore, we modify the multilevel algorithms in order to optimise a cost function based on aspect ratio. Several variants of the algorithms are tested and shown to provide excellent results.
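The abstract does not define its cost function. One common way to quantify subdomain aspect ratio in 2D compares a subdomain's perimeter with that of a circle of equal area; this is an illustrative choice, not necessarily the measure used in the paper:

```python
import math

def aspect_ratio(area, perimeter):
    """Perimeter of the subdomain divided by the perimeter of a circle
    with the same area; 1.0 is ideal, larger values mean worse shapes."""
    return perimeter / (2.0 * math.sqrt(math.pi * area))

square = aspect_ratio(1.0, 4.0)   # unit square, ≈ 1.13
strip = aspect_ratio(1.0, 20.2)   # 0.1 x 10 strip, far from circular
```

A cost function built from such a measure penalises long thin subdomains even when their cut-edge weight is small, which is exactly the trade-off the paper explores.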

Relevance:

80.00%

Publisher:

Abstract:

We present a dynamic distributed load balancing algorithm for parallel, adaptive Finite Element simulations in which we use preconditioned Conjugate Gradient solvers based on domain-decomposition. The load balancing is designed to maintain good partition aspect ratio and we show that cut size is not always the appropriate measure in load balancing. Furthermore, we attempt to answer the question why the aspect ratio of partitions plays an important role for certain solvers. We define and rate different kinds of aspect ratio and present a new center-based partitioning method of calculating the initial distribution which implicitly optimizes this measure. During the adaptive simulation, the load balancer calculates a balancing flow using different versions of the diffusion algorithm and a variant of breadth first search. Elements to be migrated are chosen according to a cost function aiming at the optimization of subdomain shapes. Experimental results for Bramble's preconditioner and comparisons to state-of-the-art load balancers show the benefits of the construction.
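The diffusion idea mentioned above can be sketched in a few lines: each subdomain repeatedly exchanges load with its neighbours in proportion to the load difference. This is a generic first-order scheme under assumed parameters; the paper's versions differ in detail:

```python
def diffuse(load, edges, alpha=0.25, steps=100):
    """First-order diffusion load balancing on an undirected graph.
    load: per-subdomain work; edges: list of (i, j) neighbour pairs."""
    load = list(load)
    for _ in range(steps):
        flow = [0.0] * len(load)
        for i, j in edges:
            f = alpha * (load[i] - load[j])  # balancing flow along edge (i, j)
            flow[i] -= f
            flow[j] += f
        load = [l + f for l, f in zip(load, flow)]
    return load

# A path of four subdomains with all the work initially on one of them:
balanced = diffuse([40, 0, 0, 0], [(0, 1), (1, 2), (2, 3)])
```

The total load is conserved at every step, and for small enough alpha the per-subdomain loads converge to the average; which elements actually migrate to realise the flow is then chosen by the shape-aware cost function described above.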

Relevance:

80.00%

Publisher:

Abstract:

Three parallel optimisation algorithms, for use in the context of multilevel graph partitioning of unstructured meshes, are described. The first, interface optimisation, reduces the computation to a set of independent optimisation problems in interface regions. The next, alternating optimisation, is a restriction of this technique in which mesh entities are only allowed to migrate between subdomains in one direction. The third treats the gain as a potential field and uses the concept of relative gain for selecting appropriate vertices to migrate. The three approaches are compared and shown to produce partitions of very high global quality, very rapidly. The results are also compared with another partitioning tool and shown to be of higher quality, although taking longer to compute.
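The gain referred to above is the standard refinement quantity: the change in cut size if a vertex moves to the other subdomain. A minimal sketch for a 2-way partition of an unweighted graph (the relative-gain variant builds on this but is not reproduced here):

```python
def gain(v, part, adj):
    """Gain of moving vertex v to the other side of a 2-way partition:
    (cut edges removed) minus (uncut edges that become cut)."""
    external = sum(1 for u in adj[v] if part[u] != part[v])
    internal = len(adj[v]) - external
    return external - internal

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
part = {0: 0, 1: 0, 2: 1, 3: 1}
# Moving vertex 1 across cuts edge 1-0 but uncuts edges 1-2 and 1-3.
```

Vertices with positive gain are candidates for migration; treating these gains as a potential field over the interface region is what drives the third algorithm's selection.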

Relevance:

80.00%

Publisher:

Abstract:

High-integrity castings require sophisticated design and manufacturing procedures to ensure they are essentially macrodefect free. Unfortunately, an important class of such defects (macroporosity, misruns, and pipe shrinkage) are all functions of the interactions of free surface flow, heat transfer, and solidification in complex geometries. Because these defects arise as an interaction of the preceding continuum phenomena, genuinely predictive models of these defects must represent these interactions explicitly. This work describes an attempt to model the formation of macrodefects explicitly as a function of the interacting continuum phenomena in arbitrarily complex three-dimensional geometries. The computational approach exploits a compatible set of finite volume procedures extended to unstructured meshes. The implementation of the model is described together with its testing and a measure of validation. The model demonstrates the potential to predict reliably shrinkage macroporosity, misruns, and pipe shrinkage directly as a result of interactions among free-surface fluid flow, heat transfer, and solidification.

Relevance:

80.00%

Publisher:

Abstract:

The liquid metal flow in induction crucible models is known to be highly unstable and turbulent in the regime of medium frequencies, when the electromagnetic skin-layer is of considerable extent. We present long-term turbulent flow measurements by a permanent-magnet incorporated potential difference velocity probe in a cylindrical container filled with the eutectic melt In-Ga-Sn. A parallel numerical simulation of the long-time-scale development of the turbulent average flow is presented. The numerical flow model uses a pseudo-spectral code and a k-ω turbulence model, which was recently developed for transitional flow modelling. The results compare reasonably with the experiment and demonstrate the time development of the turbulent flow field.

Relevance:

80.00%

Publisher:

Abstract:

The paper considers a problem of scheduling n jobs in a two-machine open shop to minimize the makespan, provided that preemption is not allowed and the interstage transportation times are involved. This problem is known to be unary NP-hard. We present an algorithm that requires O (n log n) time and provides a worst-case performance ratio of 3/2.

Relevance:

80.00%

Publisher:

Abstract:

A commercial pyrometallurgical process for the extraction of platinum-group metals (PGM) from a feedstock slag was analysed with the use of a model based on computational fluid dynamics. The results of the modelling indicate that recovery depends on the behaviour of the collector phase. A possible method is proposed for estimation of the rate at which PGM particles in slag are absorbed into an iron collector droplet that falls through it. Nanoscale modelling techniques (for particle migration or capture) are combined with a diffusion-controlled mass-transfer model to determine the iron collector droplet size needed for >95% PGM recovery in a typical process bath (70 mm deep) in a realistic time-scale (<1 h). The results show that an iron droplet having a diameter in the range 0.1–0.3 mm gives good recovery (>90%) within a reasonable time. This finding is compatible with published experimental data. Pyrometallurgical processes similar to that investigated should be applicable to other types of waste that contain low levels of potentially valuable metals.
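The fall of a collector droplet through the slag bath can be estimated with Stokes' law for a small droplet at low Reynolds number. The material properties below (slag density and viscosity, iron density) are assumed round numbers for illustration, not values from the paper:

```python
def stokes_fall_time(d_m, depth_m, rho_drop, rho_fluid, mu):
    """Terminal (Stokes) settling velocity of a droplet of diameter d_m,
    and the time to fall through the bath depth; valid only at low
    Reynolds number."""
    g = 9.81  # m/s^2
    v = g * d_m**2 * (rho_drop - rho_fluid) / (18.0 * mu)
    return depth_m / v

# 0.2 mm iron droplet (~7000 kg/m^3) falling through a 70 mm slag bath
# (assumed slag properties: ~3000 kg/m^3, viscosity ~0.3 Pa.s):
t = stokes_fall_time(2e-4, 0.07, 7000.0, 3000.0, 0.3)  # a few minutes
```

Under these assumed properties a droplet in the 0.1-0.3 mm range traverses the 70 mm bath in minutes, consistent with the sub-hour recovery time-scale reported above.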

Relevance:

80.00%

Publisher:

Abstract:

We consider the load-balancing problems which arise from parallel scientific codes containing multiple computational phases, or loops over subsets of the data, which are separated by global synchronisation points. We motivate, derive and describe the implementation of an approach which we refer to as the multiphase mesh partitioning strategy to address such issues. The technique is tested on several examples of meshes, both real and artificial, containing multiple computational phases and it is demonstrated that our method can achieve high quality partitions where a standard mesh partitioning approach fails.