975 results for Warehouse layout


Relevance:

10.00%

Publisher:

Abstract:

This paper describes ongoing work on a system that uses spatial descriptions to construct abstract maps for goal-directed exploration in an unfamiliar office environment. Abstract maps contain membership, connectivity, and spatial layout information extracted from symbolic spatial information. In goal-directed exploration, the robot would then link this information with observed symbolic information and its grounded world representation. We demonstrate the ability of the system to extract and represent membership, connectivity, and spatial layout information from spatial descriptions of an office environment. In the planned study, the robot will navigate to the goal location using the abstract map to inform the best direction in which to explore.
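
As a rough illustration of what such an abstract map might store, the sketch below (Python) keeps membership, connectivity and layout information extracted from a few toy phrases. The phrase patterns and field names are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AbstractMap:
    membership: dict = field(default_factory=dict)   # item -> containing place
    connectivity: set = field(default_factory=set)   # unordered place pairs
    layout: list = field(default_factory=list)       # (a, relation, b) triples

    def add_phrase(self, phrase):
        # Hypothetical phrase templates; a real parser would be far richer.
        if " is in " in phrase:
            item, place = phrase.split(" is in ")
            self.membership[item.strip()] = place.strip()
        elif " is off " in phrase:
            a, b = phrase.split(" is off ")
            self.connectivity.add(frozenset((a.strip(), b.strip())))
        elif " is past " in phrase:
            a, b = phrase.split(" is past ")
            self.layout.append((a.strip(), "past", b.strip()))

m = AbstractMap()
for p in ["the printer is in the lab",
          "the lab is off the corridor",
          "the kitchen is past the lobby"]:
    m.add_phrase(p)
print(m.membership, m.connectivity, m.layout)
```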

Relevance:

10.00%

Publisher:

Abstract:

This paper presents stylized models for conducting performance analysis of a manufacturing supply chain network (SCN) in a stochastic setting with batch ordering. We use queueing models to capture the behavior of the SCN, and couple the analysis with an inventory optimization model that can be used for designing inventory policies. In the first case, we model one manufacturer with one warehouse that supplies various retailers. We determine the optimal inventory level at the warehouse that minimizes the total expected cost of carrying inventory, the backorder cost associated with serving orders in the backlog queue, and the ordering cost. In the second model we impose a service-level constraint in terms of fill rate (the probability that an order is filled from stock at the warehouse), assuming that customers do not balk from the system. We present several numerical examples to illustrate the model and its various features. In the third case, we extend the model to a three-echelon inventory model that explicitly considers the logistics process.
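
A hedged numerical sketch of the trade-off in the first model: choosing a warehouse stock level that minimises expected holding plus backorder cost. The Poisson lead-time demand, the base-stock policy form and all cost figures are illustrative assumptions; the paper's queueing analysis is richer.

```python
from math import exp

def poisson_pmf(k, mu):
    p = exp(-mu)
    for i in range(1, k + 1):
        p *= mu / i
    return p

def expected_cost(S, mu, h, b, tail=200):
    """Expected holding + backorder cost at base-stock level S."""
    cost = 0.0
    for d in range(tail):
        p = poisson_pmf(d, mu)
        on_hand = max(S - d, 0)   # units carrying holding cost h
        backlog = max(d - S, 0)   # orders waiting, costing b each
        cost += p * (h * on_hand + b * backlog)
    return cost

mu, h, b = 20.0, 1.0, 9.0         # mean lead-time demand, unit costs (invented)
best = min(range(60), key=lambda S: expected_cost(S, mu, h, b))
print("optimal base stock:", best)
```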

Relevance:

10.00%

Publisher:

Abstract:

The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. This paper describes a near-optimal two-step algorithm that constructs the Reeb graph of a Morse function defined over manifolds in any dimension. The algorithm first identifies the critical points of the input function, and then connects these critical points in the second step to obtain the Reeb graph. A simplification mechanism based on topological persistence aids in the removal of noise and unimportant features. A radial layout scheme results in a feature-directed drawing of the Reeb graph. Experimental results demonstrate the efficiency of the Reeb graph construction in practice and illustrate its applications.
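
The first step of the two-step construction can be illustrated with the standard piecewise-linear classification of critical points: a vertex's type follows from the number of connected components in its lower and upper link. The mesh encoding and tie-breaking rule below are assumptions; the paper's second step then connects the critical points into the Reeb graph.

```python
from collections import defaultdict

def count_components(verts, adj):
    """Connected components among `verts` under link adjacency `adj`."""
    seen, comps = set(), 0
    for v in verts:
        if v in seen:
            continue
        comps, stack = comps + 1, [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(w for w in adj[u] if w in verts)
    return comps

def classify(f, triangles):
    link_edges, nbrs = defaultdict(list), defaultdict(set)
    for a, b, c in triangles:
        for v, u, w in ((a, b, c), (b, a, c), (c, a, b)):
            link_edges[v].append((u, w))   # edge (u, w) lies in the link of v
            nbrs[v] |= {u, w}
    labels = {}
    for v in nbrs:
        # Break function-value ties by vertex index (simulated simplicity).
        lower = {u for u in nbrs[v] if (f[u], u) < (f[v], v)}
        upper = nbrs[v] - lower
        adj = defaultdict(set)
        for u, w in link_edges[v]:
            adj[u].add(w); adj[w].add(u)
        lo, hi = count_components(lower, adj), count_components(upper, adj)
        labels[v] = ("minimum" if lo == 0 else "maximum" if hi == 0
                     else "regular" if lo == hi == 1 else "saddle")
    return labels

# Two triangles sharing an edge; f increases with vertex index.
print(classify([0.0, 1.0, 2.0, 3.0], [(0, 1, 2), (1, 3, 2)]))
```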

Relevance:

10.00%

Publisher:

Abstract:

We report the design and characterization of a circuit technique to measure the on-chip delay of an individual logic gate (both inverting and noninverting) in its unmodified form. The test circuit comprises a digitally reconfigurable ring oscillator (RO), with the gate under test embedded in each stage. A system of linear equations is then formed from different configuration settings of the RO, relating the individual gate delays to the measured RO periods; its solution gives the delay of each gate. Experimental results from a test chip in a 65-nm process node show the feasibility of measuring the delay of an individual inverter to within 1 ps accuracy. Delay measurements of nominally identical inverters in close physical proximity show variations of up to 28%, indicating the large impact of local variations. As a demonstration of this technique, we have studied delay variation with poly pitch, length of diffusion (LOD) and layout orientation in silicon. The proposed technique is well suited to early process characterization, monitoring a mature process in manufacturing, and model-to-hardware correlation.
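
The delay-extraction step lends itself to a small worked example: each RO configuration selects a subset of gates, the measured period is (roughly) twice the sum of the selected gate delays, and a least-squares solve recovers the per-gate delays. The configuration matrix, noise level and picosecond figures below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A[c, i] = 1 if gate i is in the oscillating path under configuration c.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

true_d = np.array([12.0, 13.1, 11.7, 12.6])           # per-gate delays, ps
periods = 2.0 * A @ true_d + rng.normal(0.0, 0.2, 5)  # measured periods + noise

# Least-squares solve of A d = T / 2 recovers the individual gate delays.
d_hat, *_ = np.linalg.lstsq(A, periods / 2.0, rcond=None)
print(np.round(d_hat, 2))
```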

Relevance:

10.00%

Publisher:

Abstract:

'Goldfinger', a tetraploid banana produced by the Fundación Hondureña de Investigación Agrícola (FHIA) breeding program, was released to the Australian industry in 1995. It was promoted as an apple-flavoured dessert banana with resistance to Fusarium wilt race 1 and subtropical race 4, as well as resistance to black and yellow Sigatoka (Mycosphaerella fijiensis and M. musicola, respectively). This study was initiated to provide the banana industry, which was under threat from Fusarium wilt, with agronomic information on a new cultivar that could replace 'Williams' (AAA, Cavendish subgroup) or 'Lady Finger' (AAB, Pome subgroup) in areas affected by Fusarium wilt. Also, few studies had reported on the production characteristics of the new tetraploid hybrids, especially in subtropical areas, so two field sites, one a steep-land farm and the other a level, more productive site, were selected for planting density and spatial arrangement treatments. The optimum density in terms of commercial production, taking into account bunch weight, finger size, length of the production cycle, plant height and ease of management, was 1680 plants/ha on the steep-land site, where plants were planted in single rows at 2.5 m × 2.5 m spacing. However, on the level site a double-row triangular layout with inter-row distances of 4.5 m to allow vehicular access (1724 plants/ha) gave the best results. In this arrangement, plants were staggered in triangles along the row, with a spacing of 1.5 m between plants at the points of each triangle and between each block of triangles.
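
The spacing-to-density arithmetic behind these treatments is simply 10,000 m² per hectare divided by the ground area allocated to each plant, as in the sketch below. The paper's quoted densities differ slightly from this idealised figure, presumably reflecting the actual field geometry of each layout.

```python
def plants_per_ha(row_spacing_m, plant_spacing_m):
    """Idealised planting density: 10,000 m2/ha over area per plant."""
    return 10_000 / (row_spacing_m * plant_spacing_m)

print(plants_per_ha(2.5, 2.5))   # single rows at 2.5 m x 2.5 m -> 1600 plants/ha
```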

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a new incremental algorithm for layout compaction is proposed. In addition to its linear-time performance in the number of rectangles in the layout, we describe how incremental compaction can serve as a useful feature of a layout editor, and we present the design of such an editor. In the editor design, we show how arrays can be used to implement the quadtrees that represent VLSI layouts; this representation provides fast data access with low storage requirements.
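
A minimal sketch of the array-backed quadtree idea: a complete quadtree over the chip area stored in a flat list, with the children of node i at positions 4i+1 to 4i+4, and each rectangle filed at the deepest quadrant that fully contains it. The depth, coordinates and rectangle format are illustrative assumptions, not the editor's actual data layout.

```python
DEPTH = 4
NODES = (4 ** (DEPTH + 1) - 1) // 3          # nodes in a complete quadtree
tree = [[] for _ in range(NODES)]            # rectangles stored per node

def insert(rect, node=0, x=0, y=0, size=1024, depth=0):
    """rect = (x1, y1, x2, y2); file it at the smallest enclosing quadrant."""
    if depth < DEPTH:
        half = size // 2
        for q, (qx, qy) in enumerate([(x, y), (x + half, y),
                                      (x, y + half), (x + half, y + half)]):
            if (qx <= rect[0] and rect[2] <= qx + half and
                    qy <= rect[1] and rect[3] <= qy + half):
                insert(rect, 4 * node + 1 + q, qx, qy, half, depth + 1)
                return
    tree[node].append(rect)                  # straddles children: keep here

insert((10, 10, 40, 30))                     # small rectangle -> deep leaf
insert((500, 500, 900, 520))                 # straddles quadrants -> root
print(sum(1 for cell in tree if cell), "occupied nodes")
```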

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a symbolic navigation system that uses spatial language descriptions to inform goal-directed exploration in unfamiliar office environments. An abstract map is created from a collection of natural language phrases describing the spatial layout of the environment. The spatial representation in the abstract map is controlled by a constraint-based interpretation of each natural language phrase. In goal-directed exploration of an unseen office environment, the robot links the information in the abstract map to observed symbolic information and its grounded world representation. This paper demonstrates the ability of the system, in both simulated and real-world trials, to efficiently find target rooms in environments it has never visited before. In three unexplored environments, the system is shown to travel, on average, only 8.42% further than the optimal path when using only natural language phrases to complete navigation tasks.
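
The constraint-based interpretation can be caricatured as an optimisation over label positions: each phrase contributes a penalty term on the 2-D positions assigned to place labels, and relaxation nudges the positions until the penalties are small. The two constraint forms and all numbers below are hypothetical, not the paper's formulation.

```python
import random

random.seed(2)
places = {"lobby": [0.0, 0.0], "kitchen": [0.0, 0.0], "lab": [0.0, 0.0]}

def cost():
    # "the kitchen is past the lobby" -> kitchen farther along +x than lobby
    past = max(0.0, places["lobby"][0] + 1.0 - places["kitchen"][0]) ** 2
    # "the lab is near the lobby" -> distance close to one unit
    dx = places["lab"][0] - places["lobby"][0]
    dy = places["lab"][1] - places["lobby"][1]
    near = (dx * dx + dy * dy - 1.0) ** 2
    return past + near

for _ in range(5000):                 # naive stochastic relaxation
    name = random.choice(list(places))
    axis = random.randrange(2)
    old, c0 = places[name][axis], cost()
    places[name][axis] = old + random.uniform(-0.05, 0.05)
    if cost() > c0:                   # keep only improving moves
        places[name][axis] = old

print({k: [round(v, 2) for v in xy] for k, xy in places.items()})
```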

Relevance:

10.00%

Publisher:

Abstract:

Background: Plotless density estimators are those based on distance measures, rather than counts per unit area (quadrats or plots), to estimate the density of some (usually stationary) event, e.g. burrow openings or damage to plant stems. These estimators typically use distances between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and plant distributions. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
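
For a flavour of the basic distance estimators being compared, the sketch below applies a simple point-to-nearest-event estimator, λ̂ = (n − 1)/(π Σ rᵢ²), to a simulated pattern and reports the relative RMSE used in the study. The uniform-random pattern, the lack of edge correction and this particular estimator form are illustrative assumptions, not the paper's compound estimator.

```python
import math, random

def nearest_event_distance(p, events):
    return min(math.dist(p, e) for e in events)

random.seed(1)
true_density = 200.0                               # events per unit area
events = [(random.random(), random.random()) for _ in range(int(true_density))]

n = 25                                             # sample size, as in Results
estimates = []
for _ in range(500):                               # Monte Carlo resampling
    pts = [(random.random(), random.random()) for _ in range(n)]
    r2 = sum(nearest_event_distance(p, events) ** 2 for p in pts)
    estimates.append((n - 1) / (math.pi * r2))

rrmse = math.sqrt(sum((e - true_density) ** 2 for e in estimates)
                  / len(estimates)) / true_density
print(f"mean estimate {sum(estimates)/len(estimates):.1f}, "
      f"relative RMSE {rrmse:.2f}")
```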

Relevance:

10.00%

Publisher:

Abstract:

In visual search, one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, together with the contributions of different stimulus properties such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability; it reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e. the number of items that can be processed during a single fixation, were measured.

Search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e. within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within an object type, especially for very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance across different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e. the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance.

It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and the resolution requirements for identifying different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance.

Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data on the effects of different stimulus properties on visual search in normal subjects were condensed into simple practical guidelines, so that the limits of human visual perception can be taken into account in the design of user interfaces.
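
The adaptive staircase mentioned above can be sketched as follows: a 1-up/2-down rule (converging near 70.7% correct) adjusts presentation duration after each simulated trial, and the last reversals estimate the threshold search time. The simulated observer, step sizes and starting point are assumptions for illustration.

```python
import random

random.seed(0)

def observer(duration_ms, threshold_ms=180.0):
    """Toy psychometric function: longer presentations are found more often."""
    p_correct = 1.0 - 0.5 ** (duration_ms / threshold_ms)
    return random.random() < p_correct

duration, step = 400.0, 50.0      # starting presentation duration and step (ms)
direction, streak, reversals = -1, 0, []
for _ in range(200):
    if observer(duration):
        streak += 1
        if streak == 2:           # two correct in a row -> make the task harder
            streak = 0
            if direction == +1:   # direction change = reversal; halve the step
                reversals.append(duration)
                step = max(step / 2.0, 5.0)
            direction = -1
            duration = max(duration - step, 10.0)
    else:                         # one error -> make the task easier
        streak = 0
        if direction == -1:
            reversals.append(duration)
            step = max(step / 2.0, 5.0)
        direction = +1
        duration += step

last = reversals[-6:] or [duration]
print("estimated threshold duration (ms):", round(sum(last) / len(last), 1))
```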

Relevance:

10.00%

Publisher:

Abstract:

The results of laboratory model tests and numerical simulations of square footings resting on sand are presented. The bearing capacity of footings on geosynthetic-reinforced sand is evaluated, and the effect of various reinforcement parameters, such as the type and tensile strength of the geosynthetic material, the amount of reinforcement, and the layout and configuration of the geosynthetic layers below the footing, on the bearing capacity improvement is studied through systematic model studies. A steel tank of size 900 × 900 × 600 mm is used for the model tests. Four types of grids, namely a strong biaxial geogrid, a weak biaxial geogrid, a uniaxial geogrid and a geonet, each with a different tensile strength, are used in the tests. Geosynthetic reinforcement is provided in the form of planar layers, with the depth of the reinforced zone below the footing, the number of geosynthetic layers within the reinforced zone, and the width of the geosynthetic layers varied across tests. The influence of all these parameters on the bearing capacity improvement of the square footing and on its settlement is studied by comparison with tests on unreinforced sand. Results show that the effective depth of reinforcement is twice the width of the footing and the optimum spacing of geosynthetic layers is half the width of the footing. It is observed that the layout and configuration of the reinforcement play a more vital role in bearing capacity improvement than the tensile strength of the geosynthetic material. The experimental observations are supported by the findings of the numerical analyses.
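
The reported geometry reduces to simple arithmetic: with an effective reinforced depth of 2B and an optimum layer spacing of 0.5B, four planar layers fit below a footing of width B, as the sketch below computes. Placing the first layer at one spacing below the footing is an assumption for illustration.

```python
def layer_depths(B, spacing_ratio=0.5, effective_depth_ratio=2.0):
    """Depths (m) of reinforcement layers for a footing of width B (m)."""
    s, zmax = spacing_ratio * B, effective_depth_ratio * B
    depths, z = [], s
    while z <= zmax + 1e-9:
        depths.append(round(z, 3))
        z += s
    return depths

print(layer_depths(B=0.15))   # 150 mm model footing -> [0.075, 0.15, 0.225, 0.3]
```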

Relevance:

10.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism, which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE that support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the buffer layout transformations required by the partitioning, as an integrated Integer Linear Program (ILP), which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work partitioning between the CPU and the GPU, which provides solutions within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software-pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, together with the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps onto the CPU only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings.
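
The work-partitioning decision can be shrunk to a toy search: each filter runs on the CPU or the GPU with profiled costs, stateful filters are pinned to the CPU, and edges crossing the partition pay a transfer cost. The tiny graph, costs and makespan proxy below are illustrative assumptions; the paper formulates the real problem as an ILP and also gives a heuristic.

```python
from itertools import product

filters  = ["src", "fir", "state", "sink"]
cpu_cost = {"src": 4, "fir": 9, "state": 5, "sink": 3}   # profiled (invented)
gpu_cost = {"src": 2, "fir": 1, "state": None, "sink": 2}  # None: GPU-ineligible
edges    = [("src", "fir"), ("fir", "state"), ("state", "sink")]
XFER     = 3                                  # buffer-transfer latency per edge

best = None
for assign in product("CG", repeat=len(filters)):
    placement = dict(zip(filters, assign))
    if any(placement[f] == "G" and gpu_cost[f] is None for f in filters):
        continue                              # stateful filter pinned to CPU
    cpu = sum(cpu_cost[f] for f in filters if placement[f] == "C")
    gpu = sum(gpu_cost[f] for f in filters if placement[f] == "G")
    xfer = sum(XFER for a, b in edges if placement[a] != placement[b])
    makespan = max(cpu, gpu) + xfer           # crude pipeline-cycle proxy
    if best is None or makespan < best[0]:
        best = (makespan, placement)

print(best)
```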

Relevance:

10.00%

Publisher:

Abstract:

A channel router is an important design aid in the design automation of VLSI circuit layout. Many algorithms have been developed based on various wiring models, with routing done on two layers. With recent advances in VLSI process technology, it is possible to have three independent layers for interconnection. In this paper, two algorithms are presented for three-layer channel routing. The first assumes a very simple wiring model, which enables the routing problem to be solved optimally in O(n log n) time. The second algorithm is for a different wiring model and has an upper bound of O(n²) on its execution time; it uses fewer horizontal tracks than the first algorithm. For the second model the channel width is not bounded by the channel density.
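
The interval flavour of channel routing is captured by the classic left-edge step: sort nets by left edge and greedily reuse tracks, which runs in O(n log n) and uses exactly channel-density tracks. The sketch below is an analogue of a simple wiring model without vertical constraints, not the paper's full three-layer construction.

```python
import heapq

def left_edge(nets):
    """nets: list of (left, right) column intervals; returns track per net."""
    order = sorted(range(len(nets)), key=lambda i: nets[i][0])
    free, busy, track = [], [], {}    # busy: (right_end, track) min-heap
    next_track = 0
    for i in order:
        left, right = nets[i]
        while busy and busy[0][0] < left:     # reclaim finished tracks
            heapq.heappush(free, heapq.heappop(busy)[1])
        if free:
            t = heapq.heappop(free)
        else:
            t, next_track = next_track, next_track + 1
        track[i] = t
        heapq.heappush(busy, (right, t))
    return track

print(left_edge([(0, 3), (1, 5), (4, 8), (6, 9)]))   # two tracks suffice
```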

Relevance:

10.00%

Publisher:

Abstract:

The availability and quality of irrigation water have become issues limiting productivity in many Australian vegetable regions. Production is also under competitive pressure from supply chain forces. Producers look to new technologies, including changed irrigation infrastructure, new water sources, and more complex irrigation management, to survive these stresses. Often there is little objective information on which improvements could improve outcomes for vegetable producers and external communities (e.g. meeting NRM targets). This has led to investment in inappropriate technologies, and costly repetition of errors, as businesses independently discover the worth of technologies by personal experience.

In our project, we investigated technology improvements for vegetable irrigation. Through engagement with industry and other researchers, we identified the technologies most applicable to growers, particularly those that addressed priority issues. We developed analytical tools for 'what if' scenario testing of technologies. We conducted nine detailed experiments in the Lockyer Valley and Riverina vegetable-growing districts, as well as case studies on grower properties in southern Queensland. We investigated root zone monitoring tools (FullStop™ wetting front detectors and Soil Solution Extraction Tubes - SSET), drip system layout, fertigation equipment, and altered planting arrangements. Our project team developed and validated models for broccoli, sweet corn, green beans and lettuce, and spreadsheets for evaluating the economic risks associated with new technologies. We presented project outcomes at over 100 extension events, including irrigation showcases, conferences, field days, farm walks and workshops.

The FullStops™ were excellent for monitoring root zone conditions (EC, nitrate levels) and for managing irrigation with poor-quality water. They were easier to interpret than the SSET. The SSET were simpler to install, but required wet soil to be reliable; they were an option for monitoring deeper soil zones unsuitable for FullStop™ installations. Because these root zone tools require expertise and are labour intensive, we recommend they be used to address specific problems, or as a periodic auditing strategy, rather than for routine monitoring. In our research, we routinely found high residual N in horticultural soils, with consequently little crop yield response to additional nitrogen fertiliser. With improved irrigation efficiency (and less leaching), it may be timely to re-examine nitrogen budgets and recommendations for vegetable crops.

Where the drip irrigation tube was located close to the crop row (i.e. within 5-8 cm), irrigation was easier to manage. It improved nitrogen uptake and water use efficiency, and reduced the risk of poor crop performance through moisture stress, particularly in the early crop establishment phases. Close proximity of the drip tube to the crop row gives the producer more options for managing salty water, and more flexibility in taking risks with forecast rain. In many vegetable crops, proximate drip systems may not be cost-effective; the next best alternative is to move crop rows closer to the drip tube (leading to an asymmetric row structure).

The vegetable crop models are good at predicting crop phenology (development stages, time to harvest), input use (water, fertiliser), environmental impacts (nutrient and salt movement) and total yields. The two immediate applications for the models are understanding, predicting and manipulating harvest dates and nitrogen movements in vegetable cropping systems. From the economic tools, the major influences on accumulated profit are price and yield. In 'what if' analyses it is therefore very important to ascertain the assumed yield and price ranges as accurately as possible. In most vegetable production systems, lowering the required inputs (e.g. irrigation or fertiliser requirement) is unlikely to have a major influence on accumulated profit. However, if a resource is constraining (e.g. available irrigation water), it is usually most profitable to maximise the return per unit of that resource.
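
The closing point, maximising return per unit of a constraining resource, is easy to make concrete: rank crop options by gross margin per megalitre rather than per hectare, as below. All crop figures are hypothetical.

```python
crops = {                       # gross margin $/ha, water use ML/ha (invented)
    "broccoli":   (6000.0, 4.0),
    "sweet corn": (3500.0, 5.0),
    "green bean": (4200.0, 3.0),
}
water_available_ml = 100.0      # the constraining resource

def margin_per_ml(name):
    margin, water = crops[name]
    return margin / water

best = max(crops, key=margin_per_ml)
area_ha = water_available_ml / crops[best][1]
print(f"{best}: {margin_per_ml(best):.0f} $/ML -> "
      f"plant {area_ha:.0f} ha with {water_available_ml:.0f} ML")
```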

Relevance:

10.00%

Publisher:

Abstract:

The layout of this second edition follows that of the first, though the content has been substantially rewritten to reflect 10 years of research and development, as well as the emergence of new pest species. Chapter 1 presents an overview, from a somewhat entomological perspective, of tropical forestry in its many guises. Chapters 2, 3 and 4 then discuss the 'pure' biology and ecology of tropical insects and their co-evolved relationships with the trees and forests in which they live. Chapter 5 is necessarily the largest chapter in the book, looking in detail at a selection of major pest species from all over the tropical world. Chapters 6, 7, 8 and 9 then discuss the theory and practice of insect pest management, starting at the fundamental planning stage, before any seeds hit the soil. Nursery management and stand management are considered in Chapters 7 and 8. Chapter 9 covers forest health surveillance, quarantine and forest invasive species, topics which again have significance at all stages of forestry but for convenience are presented after nursery and forest management. The final chapter, Chapter 10, draws together most of the previous nine chapters in examples illustrating the concept of integrated pest management. © CABI Publishing

Relevance:

10.00%

Publisher:

Abstract:

A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability of the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated as suitable at the site scale. Using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales).

The selection of the classification parameters included in the HRC-system is based on an extensive study of rock mass properties and their various influences on long-term safety, constructability, and the layout and location of the repository. The parameters proposed for classification at the repository scale are fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed at the tunnel scale are hydraulic conductivity, Q´ and fracture zones, and the parameters proposed at the canister scale are hydraulic conductivity, Q´, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values are used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thus one possible design tool to aid in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock.

The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto.

Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
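
The shape of a classification step can be sketched as a rule that maps parameter values onto one of the canister-scale suitability classes. Every threshold below is a placeholder assumption; the HRC-system derives its actual criteria from the long-term safety and constructability studies described above.

```python
def canister_scale_class(hydraulic_conductivity, q_prime, in_fracture_zone,
                         fracture_width_mm, fracture_trace_m):
    """Map canister-scale parameters to a suitability class.

    All thresholds are hypothetical placeholders, not HRC-system criteria.
    """
    if in_fracture_zone:
        return "not suitable"               # e.g. respect distances to zones
    if hydraulic_conductivity > 1e-8 or fracture_width_mm > 10.0:
        return "not suitable"
    if q_prime < 10.0 or fracture_trace_m > 5.0:
        return "possibly suitable"          # needs further evaluation
    return "suitable"

print(canister_scale_class(1e-10, 40.0, False, 0.5, 1.2))   # -> suitable
```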