863 results for critical path methods


Relevance:

80.00%

Publisher:

Abstract:

A PERT-type system, a combination of the program evaluation and review technique (PERT) and the critical path method (CPM), might be used by the hospitality industry to improve planning and control of complex functions. The author discusses this management science technique and how it can assist.
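
To make the technique concrete, here is a minimal sketch of the CPM calculation itself (not from the article): a forward pass computes each activity's earliest finish, a backward pass its latest finish, and activities with zero slack form the critical path. The activity names and durations are illustrative.

```python
# activity -> (duration, list of predecessor activities); listed in
# topological order (Python dicts preserve insertion order).
activities = {
    "A": (3, []),          # e.g. book venue
    "B": (2, ["A"]),       # e.g. send invitations
    "C": (4, ["A"]),       # e.g. arrange catering
    "D": (1, ["B", "C"]),  # e.g. final setup
}

def critical_path(acts):
    # Forward pass: earliest finish time of each activity.
    ef = {}
    for name, (dur, preds) in acts.items():
        ef[name] = dur + max((ef[p] for p in preds), default=0)
    # Backward pass: latest finish time of each activity.
    total = max(ef.values())
    lf = {name: total for name in acts}
    for name in reversed(list(acts)):
        dur, preds = acts[name]
        for p in preds:
            lf[p] = min(lf[p], lf[name] - dur)
    # Critical activities have zero slack (earliest finish == latest finish).
    return total, [n for n in acts if ef[n] == lf[n]]

duration, path = critical_path(activities)
print(duration, path)  # project duration and the critical activities
```

Here the longest chain A -> C -> D (3 + 4 + 1 = 8) sets the project duration, while B has two units of slack.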

Relevance:

80.00%

Publisher:

Abstract:

This report concerns the stabilization of three crushed limestones by an SS-1 asphalt emulsion and an asphalt cement of 120-150 penetration. Stabilization is evaluated by Marshall stability and triaxial shear tests. Test specimens were compacted by the Marshall, standard Proctor and vibratory methods. Stabilization is evaluated primarily by triaxial shear tests in which confining pressures of 0 to 80 psi were used. Data were obtained on the angle of internal friction, cohesion, volume change, pore water pressure and strain characteristics of the treated and untreated aggregates. The Mohr envelope, Bureau of Reclamation and modified stress path methods were used to determine shear strength parameters at failure. Several significant conclusions developed by the authors are as follows: (1) the values for effective angle of internal friction and effective cohesion were substantially independent of asphalt content; (2) straight-line Mohr envelopes of failure were observed for all treated stones; (3) bituminous admixtures did little to improve the volume change (deformation due to load) characteristics of the three crushed limestones; (4) with respect to pore water characteristics (pore pressures and suctions due to lateral loading), bituminous treatment notably improved only the Bedford stone; and (5) at low lateral pressures, bituminous treatments increased stability by limiting axial strain, which would reduce rutting of highway bases. At high lateral pressures, treated stone was less stable than untreated stone.

Relevance:

80.00%

Publisher:

Abstract:

In the present study we elaborated algorithms, using concepts from percolation theory, to analyze the connectivity conditions in geological models of petroleum reservoirs. From petrophysical parameters such as permeability, porosity, transmissivity and others, which may be generated by any statistical process, it is possible to determine the portion of the model with the most connected cells, which wells are interconnected, and the critical path between injector and producer wells. This allows the reservoir to be classified according to the modeled petrophysical parameters, and also makes it possible to determine the percentage of the reservoir to which each well is connected. Generally, the connected regions and the respective minima and/or maxima in the occurrence of the petrophysical parameters studied constitute a good way to characterize a reservoir volumetrically. The algorithms therefore allow the positioning of wells to be optimized, offering a preview of the general connectivity conditions of the given model. The intent is not to evaluate geological models, but to show how to interpret the deposits, how their petrophysical characteristics are spatially distributed, and how the connections between the several parts of the system are resolved, showing their critical paths and backbones. Executing these algorithms lets us know the connectivity properties of the model before work on reservoir flow simulation is started.
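
The core connectivity step described above can be illustrated with a small sketch (not the authors' code): flood-fill labelling of permeable cells in a 2-D grid, followed by a check of whether an injector and a producer cell fall in the same connected cluster. The grid and well positions are invented for illustration.

```python
from collections import deque

grid = [  # 1 = permeable cell, 0 = impermeable (illustrative model)
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]

def label_clusters(g):
    """Assign a positive cluster label to every permeable cell (BFS flood fill)."""
    rows, cols = len(g), len(g[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if g[r][c] and not labels[r][c]:
                current += 1
                labels[r][c] = current
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and g[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels

labels = label_clusters(grid)
injector, producer = (0, 0), (3, 3)  # hypothetical well positions
connected = labels[injector[0]][injector[1]] == labels[producer[0]][producer[1]]
print(connected)
```

The same labelling also yields the fraction of cells each well is connected to, by counting cells sharing the well's label.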

Relevance:

50.00%

Publisher:

Abstract:

Robot Path Planning (RPP) in dynamic environments is a search problem based on the examination of collision-free paths in the presence of dynamic and static obstacles. Many techniques have been developed to solve this problem. Trapping in local minima and maintaining real-time performance are the two most important challenges these techniques face. This study presents a comprehensive survey of the various techniques that have been proposed in this domain. As part of this survey, we include a classification of the approaches and identify their methods.

Relevance:

40.00%

Publisher:

Abstract:

Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations, respectively.
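
As a concrete example of the tractable case: the Ornstein–Uhlenbeck process has a Gaussian transition density, so its exact log-likelihood can be written directly. A minimal stdlib-only sketch (function name, parameter names, and sample values are illustrative, not from the article):

```python
import math

def ou_log_likelihood(x, dt, kappa, theta, sigma):
    """Exact log-likelihood of an OU path x observed at fixed spacing dt,
    for dX = kappa * (theta - X) dt + sigma dW."""
    ll = 0.0
    for prev, curr in zip(x, x[1:]):
        # Conditional mean and variance of X(t + dt) given X(t) = prev:
        mean = theta + (prev - theta) * math.exp(-kappa * dt)
        var = sigma ** 2 * (1 - math.exp(-2 * kappa * dt)) / (2 * kappa)
        # Gaussian log-density of the observed transition:
        ll += -0.5 * math.log(2 * math.pi * var) - (curr - mean) ** 2 / (2 * var)
    return ll
```

For the Cox–Ingersoll–Ross process the transition density is noncentral chi-square rather than Gaussian, which is part of why so many approximate estimation procedures have been proposed.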

Relevance:

40.00%

Publisher:

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models. The accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG), and Mode Central Gap (MCG) methods, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulated mean critical gap is varied between 3 s and 8 s, while the offered gap rate is varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to the simulated mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
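
A sketch of the kind of likelihood that MLE-based critical gap estimation maximizes, under the common assumption (not stated in this abstract) of a lognormal critical gap distribution: each driver's unobserved critical gap lies between their maximum rejected gap and their accepted gap, so the log-likelihood sums log(F(a) - F(r)) over drivers. The driver data and parameter values below are illustrative, not from the study.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def log_likelihood(drivers, mu, sigma):
    """drivers: (max rejected gap, accepted gap) pairs in seconds.
    mu, sigma: parameters of the assumed lognormal critical gap distribution."""
    ll = 0.0
    for r, a in drivers:
        f_a = norm_cdf((math.log(a) - mu) / sigma)
        f_r = norm_cdf((math.log(r) - mu) / sigma)
        ll += math.log(f_a - f_r)  # the critical gap lies in (r, a)
    return ll

drivers = [(3.0, 5.0), (4.5, 6.0), (2.5, 4.0)]  # invented gap data, seconds
print(log_likelihood(drivers, mu=math.log(4.0), sigma=0.3))
```

An estimator would maximize this quantity over mu and sigma, e.g. by grid search or a numerical optimizer.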

Relevance:

40.00%

Publisher:

Abstract:

A multimodal trip planner that produces optimal journeys involving both public transport and private vehicle legs has to solve a number of shortest path problems, both on the road network and the public transport network. The algorithms that are used to solve these shortest path problems have been researched since the late 1950s. However, in order to provide accurate journey plans that can be trusted by the user, the variability of travel times caused by traffic congestion must be taken into consideration. This requires the use of more sophisticated time-dependent shortest path algorithms, which have only been researched in depth over the last two decades, from the mid-1990s. This paper will review and compare nine algorithms that have been proposed in the literature, discussing the advantages and disadvantages of each algorithm on the basis of five important criteria that must be considered when choosing one or more of them to implement in a multimodal trip planner.
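
The time-dependent variant the paper surveys can be sketched as Dijkstra's algorithm with edge weights that depend on departure time; under the FIFO property (a later departure never arrives earlier), the label-setting search remains exact. The toy network and congestion profile below are illustrative.

```python
import heapq

# graph[u] = list of (v, travel_time_fn), where travel_time_fn(t) gives
# the travel time when departing u at time t (hours). FIFO is assumed.
graph = {
    "A": [("B", lambda t: 10 + 5 * (8 <= t < 9))],  # slower in the 8-9 peak
    "B": [("C", lambda t: 4)],
    "C": [],
}

def earliest_arrival(graph, source, target, depart):
    """Time-dependent Dijkstra: earliest arrival at target when leaving
    source at time `depart`."""
    best = {source: depart}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, fn in graph[u]:
            arrival = t + fn(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None

print(earliest_arrival(graph, "A", "C", depart=8))  # departing in the peak
print(earliest_arrival(graph, "A", "C", depart=9))  # departing after the peak
```

Note how the same query gives different answers for different departure times, which is exactly the variability a trustworthy multimodal trip planner must model.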

Relevance:

40.00%

Publisher:

Abstract:

The shear-difference method, commonly used for the separation of normal stresses using photoelastic techniques, depends on the step-by-step integration of one of the differential equations of equilibrium. It is assumed that the isoclinic and isochromatic parameters measured by the conventional methods pertain to the state of stress at the midpoint of the light path. In practice, a slice thin enough for this assumption to hold, yet thick enough to give measurable differences in the shear-stress values over its thickness, is necessary. The paper discusses the errors introduced into the isoclinic and isochromatic values when the conventional methods neglect the variation of stresses along the light path. It is shown that while the error in the measurement of the isochromatic parameter may not be serious, the error in the isoclinic measurement may be. Since the shear-difference method involves step-by-step integration, the error introduced is cumulative.
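
The cumulative nature of the error can be demonstrated with a toy integration (illustrative numbers, not the paper's data): a small constant bias in the measured shear-stress gradient grows linearly with the number of integration steps.

```python
def shear_difference(sigma_x0, dtau_dy, dx):
    """Step-by-step integration of d(sigma_x)/dx = -d(tau_xy)/dy:
    each step adds -dtau_dy[i] * dx to the running normal stress."""
    sigma = [sigma_x0]
    for g in dtau_dy:
        sigma.append(sigma[-1] - g * dx)
    return sigma

# Exact gradients vs. gradients carrying a constant measurement bias:
exact = shear_difference(100.0, [2.0] * 10, dx=1.0)
biased = shear_difference(100.0, [2.0 + 0.1] * 10, dx=1.0)
errors = [abs(b - e) for e, b in zip(exact, biased)]
print(errors[-1])  # total drift after 10 steps: approx 10 * 0.1 = 1.0
```

A per-step bias of 0.1 is invisible at any single point but accumulates to a full unit of stress by the end of the integration path.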

Relevance:

40.00%

Publisher:

Abstract:

Critical Discourse Analysis (CDA) is an approach to analysing the discourses that operate in social contexts, such as classrooms in schools, and their material effects on people, such as teachers and learners. CDA offers a range of ways of engaging with the relationship between texts in context and the power they exercise. In this article, I overview key approaches and provide detail of Fairclough’s (1992, 2003) textually-oriented, linguistic method of CDA, with an example from my own research. I offer a challenge for English teachers, as researchers, to ‘make strange’ the familiar world of their classroom work, and in so doing, identify possibilities for productive change.

Relevance:

40.00%

Publisher:

Abstract:

Thin films are the basis of much of recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale - for instance, by making them porous. The different characteristics of different films can then to some extent be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the parameters of deposition. One critical parameter is deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as sputtering rate and aggregation conditions, have an effect on the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window to the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can then bridge this gap by providing a tangible link between the simulated and the experimental. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible.
Then, clusters are deposited, one by one, onto a substrate, until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates how the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet, it is also shown how the continued deposition of clusters with a varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with precision heretofore unseen in conventional multilayer manufacture.

Relevance:

40.00%

Publisher:

Abstract:

Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called 'early warning signals', and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time-series data.
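
Two of the standard indicators this literature uses, rolling-window variance and lag-1 autocorrelation, can be sketched in a few lines of stdlib Python; the time series below is an invented placeholder whose fluctuations grow over time.

```python
def rolling_indicator(series, window, stat):
    """Apply `stat` to each trailing window of the series."""
    return [stat(series[i - window:i]) for i in range(window, len(series) + 1)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def lag1_autocorr(xs):
    # Lag-1 autocorrelation: rises as a system "slows down" near a transition.
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den if den else 0.0

# Toy series whose fluctuation amplitude grows toward the end:
series = [(-1) ** i * 0.1 * (1 + i / 20) for i in range(100)]
var_track = rolling_indicator(series, window=30, stat=variance)
print(var_track[0] < var_track[-1])  # rising variance as an early warning
```

In practice these indicators are computed on detrended data and their upward trend is tested statistically rather than eyeballed.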

Relevance:

40.00%

Publisher:

Abstract:

The effect of strain path change during rolling on the evolution of deformation texture has been studied for nanocrystalline (nc) nickel. An orthogonal change in strain path, as imparted by alternating rolling and transverse directions, leads to a texture with a strong Bs {110}<112> component. The microstructural features, after large deformation, show distinct grain morphology for the cross-rolled material. Crystal plasticity simulations, based on the viscoplastic self-consistent model, indicate that slip involving partial dislocations plays a vital role in accommodating plastic deformation during the initial stages of rolling. The brass-type texture evolved after cross rolling to large strains is attributed to the change in strain path.

Relevance:

40.00%

Publisher:

Abstract:

The physical meaning and methods of determining loudness were reviewed. Loudness is a psychoacoustic metric which closely corresponds to the perceived intensity of a sound stimulus. It can be determined by graphical procedures, numerical methods, or by commercial software. These methods typically require the consideration of the 1/3 octave band spectrum of the sound of interest. The sounds considered in this paper are a 1 kHz tone and pink noise. The loudness of these sounds was calculated in eight ways using different combinations of input data and calculation methods. All the methods considered are based on Zwicker loudness. It was determined that, of the combinations considered, only the commercial software dBSonic and the loudness calculation procedure detailed in DIN 45631, using 1/3 octave band levels filtered per ANSI S1.11-1986, gave the correct values of loudness for a 1 kHz tone. Comparing the results between the sources also demonstrated the difference between sound pressure level and loudness. It was apparent that the calculation and filtering methods must be considered together, as a given calculation will produce different results for different 1/3 octave band input. In the literature reviewed, no reference provided a guide to selecting the type of filtering that should be used in conjunction with the loudness computation method.
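
Since every procedure discussed starts from 1/3-octave band levels, a small sketch of the band centre frequencies may help: with base-2 spacing, band n is centred at 1000 * 2^(n/3) Hz, and the nominal frequencies tabulated in standards like ANSI S1.11 are rounded versions of these values. The band range below is arbitrary.

```python
def third_octave_centers(n_low=-10, n_high=10):
    """Exact base-2 1/3-octave band centre frequencies in Hz, with
    band n = 0 anchored at the 1000 Hz reference."""
    return [1000 * 2 ** (n / 3) for n in range(n_low, n_high + 1)]

centers = third_octave_centers()
print(round(centers[10]))  # band n = 0: the 1000 Hz reference band
```

Three bands always span exactly one octave (e.g. 1000 Hz to 2000 Hz), which is why the choice of filter bank and the loudness calculation must be treated as a matched pair.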