924 results for Logistica layout


Relevance: 10.00%

Publisher:

Abstract:

This paper presents a symbolic navigation system that uses spatial language descriptions to inform goal-directed exploration in unfamiliar office environments. An abstract map is created from a collection of natural language phrases describing the spatial layout of the environment. The spatial representation in the abstract map is controlled by a constraint-based interpretation of each natural language phrase. In goal-directed exploration of an unseen office environment, the robot links the information in the abstract map to observed symbolic information and its grounded world representation. This paper demonstrates the ability of the system, in both simulated and real-world trials, to efficiently find target rooms in environments it has never visited before. In three unexplored environments, it is shown that on average the system travels only 8.42% further than the optimal path when using only natural language phrases to complete navigation tasks.
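As a rough, hypothetical illustration of the constraint-based interpretation described above (the phrase, class names and cost function are not taken from the paper), a single spatial phrase can be reduced to a soft geometric constraint between symbolic places, and an abstract map can then place unobserved rooms by minimising the summed cost of such constraints:

from dataclasses import dataclass
import math

@dataclass
class Place:
    name: str
    x: float = 0.0
    y: float = 0.0

# Hypothetical soft constraint from a phrase like "the kitchen is down the hall from the lobby":
# the kitchen should lie roughly some expected distance from the lobby.
@dataclass
class RelativeDistanceConstraint:
    a: Place
    b: Place
    expected: float  # separation implied by the phrase (assumed value)

    def cost(self) -> float:
        d = math.hypot(self.a.x - self.b.x, self.a.y - self.b.y)
        return (d - self.expected) ** 2

lobby = Place("lobby", 0.0, 0.0)
kitchen = Place("kitchen", 4.0, 1.0)   # current guess for an unobserved room
c = RelativeDistanceConstraint(kitchen, lobby, expected=6.0)
print(round(c.cost(), 2))  # squared error of the current guess -> 3.52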

Relevance: 10.00%

Publisher:

Abstract:

Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have been examined using simulated populations only. In this study, we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations including animal damage to rice and corn, nest locations, active rodent burrows and distribution of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and can in many cases reduce the workload in the field.
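As a hedged illustration of the "basic distance" family referred to above (the study's compound estimator is not reproduced here), one commonly cited point-to-nearest-event estimator measures the distance r_i from each of n random sample points to its nearest event and estimates density as lambda = (n - 1) / (pi * sum(r_i^2)):

import math

def ordered_distance_density(point_to_event_distances):
    """Point-to-nearest-event density estimate: lambda = (n - 1) / (pi * sum(r_i^2)).

    One commonly cited basic distance estimator, given here for illustration only;
    the compound estimator evaluated in the study combines several such components.
    """
    n = len(point_to_event_distances)
    if n < 2:
        raise ValueError("need at least two sample points")
    return (n - 1) / (math.pi * sum(r * r for r in point_to_event_distances))

# Hypothetical field data: 25 random points, distances (m) to the nearest burrow opening.
distances = [0.8, 1.2, 0.5, 2.1, 0.9] * 5
print(f"{ordered_distance_density(distances):.3f} events per square metre")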

Relevance: 10.00%

Publisher:

Abstract:

In visual search, one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, along with the contribution of different stimulus properties such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e. the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e. within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within object type, especially for very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e. the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and resolution requirements for the identification of different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data about the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception can be taken into account in the design of user interfaces.
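A minimal sketch of an adaptive staircase of the kind mentioned above (the two-down/one-up rule, step size and starting duration are assumptions for illustration, not the procedure reported in the thesis): presentation duration is shortened after two consecutive correct searches and lengthened after every error, so it converges towards a threshold duration near the 70.7%-correct point.

def staircase_update(duration_ms, correct, state, step_ms=20, floor_ms=20):
    """One step of a two-down/one-up staircase on presentation duration.

    Hypothetical parameters: shorten the duration after two consecutive
    correct responses, lengthen it after any error.
    """
    if correct:
        state["streak"] += 1
        if state["streak"] == 2:
            state["streak"] = 0
            duration_ms = max(floor_ms, duration_ms - step_ms)
    else:
        state["streak"] = 0
        duration_ms += step_ms
    return duration_ms, state

# Example run over a fixed response sequence.
state = {"streak": 0}
duration = 300
for correct in [True, True, True, False, True, True]:
    duration, state = staircase_update(duration, correct, state)
print(duration)  # tracked duration after six trials -> 280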

Relevance: 10.00%

Publisher:

Abstract:

The results from laboratory model tests and numerical simulations on square footings resting on sand are presented. The bearing capacity of footings on geosynthetic-reinforced sand is evaluated, and the effect of various reinforcement parameters, such as the type and tensile strength of the geosynthetic material, the amount of reinforcement, and the layout and configuration of geosynthetic layers below the footing, on the bearing capacity improvement of the footings is studied through systematic model studies. A steel tank of size 900 x 900 x 600 mm is used for conducting the model tests. Four types of grids, namely a strong biaxial geogrid, a weak biaxial geogrid, a uniaxial geogrid and a geonet, each with a different tensile strength, are used in the tests. Geosynthetic reinforcement is provided in the form of planar layers, varying the depth of the reinforced zone below the footing, the number of geosynthetic layers within the reinforced zone and the width of the geosynthetic layers in different tests. The influence of all these parameters on the bearing capacity improvement of the square footing and its settlement is studied by comparison with tests on unreinforced sand. Results show that the effective depth of reinforcement is twice the width of the footing and the optimum spacing of geosynthetic layers is half the width of the footing. It is observed that the layout and configuration of the reinforcement play a greater role in bearing capacity improvement than the tensile strength of the geosynthetic material. The experimental observations are supported by the findings from the numerical analyses.
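Improvement of this kind is commonly summarised as a bearing capacity ratio, BCR = q_u(reinforced) / q_u(unreinforced), the ultimate bearing pressure of the reinforced footing divided by that of the otherwise identical unreinforced footing. This is a standard measure in the geosynthetic reinforcement literature and is given here only for orientation, since the abstract does not state which improvement metric it reports. For example (hypothetical values), a footing that fails at 250 kPa on reinforced sand and 100 kPa on unreinforced sand has BCR = 2.5.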

Relevance: 10.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multicore architectures. StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the Cell BE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work-partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single-threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
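A much-simplified sketch of this style of formulation is shown below. It is not the paper's ILP: the task set, per-task times, transfer costs and the PuLP modelling library are all assumptions for illustration, buffer layout transformations and software pipelining constraints are omitted, and the full transfer cost is charged to both devices as a simplification. A binary variable per task selects CPU or GPU, and the makespan is minimised.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical profile data: per-task execution time on CPU and GPU (ms),
# and producer-consumer edges of the stream graph with a transfer cost (ms).
cpu_time = {"A": 4.0, "B": 9.0, "C": 2.0, "D": 7.0}
gpu_time = {"A": 6.0, "B": 1.5, "C": 3.0, "D": 1.0}
edges = [("A", "B", 2.0), ("B", "C", 1.0), ("C", "D", 2.0)]

prob = LpProblem("cpu_gpu_partition", LpMinimize)
on_gpu = {t: LpVariable(f"gpu_{t}", cat=LpBinary) for t in cpu_time}
cut = {(u, v): LpVariable(f"cut_{u}_{v}", cat=LpBinary) for u, v, _ in edges}
makespan = LpVariable("makespan", lowBound=0)

transfer = lpSum(c * cut[(u, v)] for u, v, c in edges)
prob += makespan  # objective: minimise the load on the slower device
prob += lpSum(cpu_time[t] * (1 - on_gpu[t]) for t in cpu_time) + transfer <= makespan
prob += lpSum(gpu_time[t] * on_gpu[t] for t in gpu_time) + transfer <= makespan

# An edge is cut when its endpoints are assigned to different devices.
for u, v, _ in edges:
    prob += cut[(u, v)] >= on_gpu[u] - on_gpu[v]
    prob += cut[(u, v)] >= on_gpu[v] - on_gpu[u]

prob.solve()
print({t: int(on_gpu[t].value()) for t in cpu_time}, makespan.value())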

Relevance: 10.00%

Publisher:

Abstract:

A channel router is an important design aid in the design automation of VLSI circuit layout. Many algorithms have been developed based on various wiring models, with routing done on two layers. With the recent advances in VLSI process technology, it is possible to have three independent layers for interconnection. In this paper, two algorithms are presented for three-layer channel routing. The first assumes a very simple wiring model. This enables the routing problem to be solved optimally in a time of O(n log n). The second algorithm is for a different wiring model and has an upper bound of O(n²) for its execution time. It uses fewer horizontal tracks than the first algorithm. For the second model the channel width is not bounded by the channel density.
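For reference, the channel density mentioned in the closing sentence is the maximum number of nets that must cross any single column of the channel; a minimal sketch of its computation (hypothetical net intervals, independent of either wiring model in the paper) is:

def channel_density(nets):
    """Maximum number of nets spanning any column of the channel.

    Each net is given as (leftmost_column, rightmost_column) of its pins.
    """
    if not nets:
        return 0
    rightmost = max(right for _, right in nets)
    density = 0
    for col in range(rightmost + 1):
        crossing = sum(1 for left, right in nets if left <= col <= right)
        density = max(density, crossing)
    return density

# Hypothetical channel with four nets.
print(channel_density([(0, 3), (1, 5), (2, 4), (6, 8)]))  # -> 3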

Relevance: 10.00%

Publisher:

Abstract:

The availability and quality of irrigation water has become an issue limiting productivity in many Australian vegetable regions. Production is also under competitive pressure from supply chain forces. Producers look to new technologies, including changing irrigation infrastructure, exploring new water sources, and more complex irrigation management, to survive these stresses. Often there is little objective information on which improvements could deliver better outcomes for vegetable producers and external communities (e.g. meeting NRM targets). This has led to investment in inappropriate technologies, and costly repetition of errors, as businesses independently discover the worth of technologies through personal experience. In our project, we investigated technology improvements for vegetable irrigation. Through engagement with industry and other researchers, we identified the technologies most applicable to growers, particularly those that addressed priority issues. We developed analytical tools for 'what if' scenario testing of technologies. We conducted nine detailed experiments in the Lockyer Valley and Riverina vegetable growing districts, as well as case studies on grower properties in southern Queensland. We investigated root zone monitoring tools (FullStop™ wetting front detectors and Soil Solution Extraction Tubes, SSET), drip system layout, fertigation equipment, and altered planting arrangements. Our project team developed and validated models for broccoli, sweet corn, green beans and lettuce, and spreadsheets for evaluating economic risks associated with new technologies. We presented project outcomes at over 100 extension events, including irrigation showcases, conferences, field days, farm walks and workshops. The FullStops™ were excellent for monitoring root zone conditions (EC, nitrate levels) and for managing irrigation with poor-quality water. They were easier to interpret than the SSET. The SSET were simpler to install, but required wet soil to be reliable. SSET were an option for monitoring deeper soil zones that are unsuitable for FullStop™ installations. Because these root zone tools require expertise and are labour intensive, we recommend they be used to address specific problems, or as a periodic auditing strategy, rather than for routine monitoring. In our research, we routinely found high residual N in horticultural soils, and consequently little crop yield response to additional nitrogen fertiliser. With improved irrigation efficiency (and less leaching), it may be timely to re-examine nitrogen budgets and recommendations for vegetable crops. Where the drip irrigation tube was located close to the crop row (i.e. within 5-8 cm), management of irrigation was easier. It improved nitrogen uptake and water use efficiency, and reduced the risk of poor crop performance through moisture stress, particularly in the early crop establishment phases. Close proximity of the drip tube to the crop row gives the producer more options for managing salty water, and more flexibility in taking risks with forecast rain. In many vegetable crops, proximate drip systems may not be cost-effective. The next best alternative is to push crop rows closer to the drip tube (leading to an asymmetric row structure). The vegetable crop models are good at predicting crop phenology (development stages, time to harvest), input use (water, fertiliser), environmental impacts (nutrient, salt movement) and total yields.
The two immediate applications for the models are understanding, predicting and manipulating harvest dates and nitrogen movements in vegetable cropping systems. From the economic tools, the major influences on accumulated profit are price and yield. In doing 'what if' analyses, it is very important to be as accurate as possible in ascertaining the assumed yield and price ranges. In most vegetable production systems, lowering the required inputs (e.g. irrigation requirement, fertiliser requirement) is unlikely to have a major influence on accumulated profit. However, if a resource is constraining (e.g. available irrigation water), it is usually most profitable to maximise the return per unit of that resource.
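A small worked example of that last point, with entirely hypothetical margins and water requirements, ranks crop options by gross margin per megalitre when water is the binding constraint:

# Hypothetical gross margins: choosing what to plant when irrigation water,
# not land, is the constraint. Rank options by return per megalitre (ML).
crops = {
    # crop: (gross_margin_$_per_ha, water_use_ML_per_ha)
    "lettuce":     (9000, 3.0),
    "sweet corn":  (5500, 4.5),
    "green beans": (6500, 2.5),
}
water_available_ml = 100

for name, (margin, water) in sorted(
        crops.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    per_ml = margin / water
    print(f"{name}: ${per_ml:,.0f}/ML, "
          f"${per_ml * water_available_ml:,.0f} if all water goes to this crop")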

Relevance: 10.00%

Publisher:

Abstract:

The layout of this second edition follows that of the first, though the content has been substantially rewritten to reflect 10 years of research and development, as well as the emergence of new pest species. Chapter 1 presents an overview, from a somewhat entomological perspective, of tropical forestry in its many guises. Chapters 2, 3 and 4 then discuss the 'pure' biology and ecology of tropical insects and their co-evolved relationships with the trees and forests in which they live. Chapter 5 is necessarily the largest chapter in the book, looking in detail at a selection of major pest species from all over the tropical world. Chapters 6, 7, 8 and 9 then discuss the theory and practice of insect pest management, starting at the fundamental planning stage, before any seeds hit the soil. Nursery management and stand management are considered in Chapters 7 and 8. Chapter 9 covers the topics of forest health surveillance, quarantine and forest invasive species, topics which again have significance at all stages of forestry but for convenience are presented after nursery and forest management. Drawing these threads together is what we attempt to do in the final chapter, Chapter 10, which combines most of the previous nine chapters in examples illustrating the concept of integrated pest management. © CABI Publishing

Relevance: 10.00%

Publisher:

Abstract:

A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability in the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study of the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for the classification at the repository scale include fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed for the classification at the tunnel scale include hydraulic conductivity, Q´ and fracture zones, and the parameters proposed for the classification at the canister scale include hydraulic conductivity, Q´, fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values will be used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale, and the classification process is linked to several important decisions regarding the location and acceptability of many components of the repository at all three scales. The HRC-system is thereby one possible design tool that aids in locating the different repository components in volumes of host rock that are more suitable than others and that are considered to fulfil the fundamental requirements set for the repository host rock. The generic HRC-system, which is the main result of this work, is also adjusted to the site-specific properties of the Olkiluoto site in Finland, and the classification procedure is demonstrated by a test classification using data from Olkiluoto.

Keywords: host rock, classification, HRC-system, nuclear waste disposal, long-term safety, constructability, KBS-3V, crystalline bedrock, Olkiluoto
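The parameter sets listed above can be collected into a simple data structure, purely as an illustrative summary of this abstract (suitability class limits and any weighting of parameters are not reproduced):

# Classification parameters per scale, as listed in the abstract above.
HRC_PARAMETERS = {
    "repository": ["fracture zones", "strength/stress ratio",
                   "hydraulic conductivity", "Groundwater Chemistry Index"],
    "tunnel":     ["hydraulic conductivity", "Q'", "fracture zones"],
    "canister":   ["hydraulic conductivity", "Q'", "fracture zones",
                   "fracture width (aperture + filling)", "fracture trace length"],
}
SUITABILITY_CLASSES = {"repository": 4, "tunnel": 4, "canister": 3}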

Relevance: 10.00%

Publisher:

Abstract:

The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition resulting from a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain that it has been mapped into. The lattice of points, where the atoms of the decomposition are evaluated, usually follows the geometry of the original domain, but after mapping the domain into another this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex spaces LV-infinity consisting of measurable and HV-infinity of analytic functions on the unit disk with the latter being a closed subspace of the former. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute to the space H-infinity of analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the related reproducing kernels are usually not known explicitly, and thus the proof of continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981). Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be presented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
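For orientation, the atoms in a Coifman-Rochberg-type decomposition on the unit disk are kernel-like quotients, and every function in the weighted Bergman space can be written schematically as

f(z) = \sum_{k} c_k \, \frac{(1 - |a_k|^2)^{\beta}}{(1 - \overline{a_k} z)^{b}},

where {a_k} is a sufficiently dense lattice of points in the disk, b is taken sufficiently large, β is determined by b together with the weight and the integrability exponent of the space, and the coefficient sequence (c_k) lies in the appropriate ℓ^p space. The exact exponents are omitted here; only the general shape of the decomposition is intended.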

Relevance: 10.00%

Publisher:

Abstract:

Introduction: Electronic medication administration record (eMAR) systems are promoted as a potential intervention to enhance medication safety in residential aged care facilities (RACFs). The purpose of this study was to conduct an in-practice evaluation of an eMAR being piloted in one Australian RACF before its roll-out, and to provide recommendations for system improvements. Methods: A multidisciplinary team conducted direct observations of workflow (n=34 hours) at the RACF site and the community pharmacy. Semi-structured interviews (n=5) with RACF staff and the community pharmacist were conducted to investigate their views of the eMAR system. Data were analysed using a grounded theory approach to identify challenges associated with the design of the eMAR system. Results: The current eMAR system does not offer an end-to-end solution for medication management. Many steps, including prescribing by doctors and communication with the community pharmacist, are still performed manually using paper charts and fax machines. Five major challenges associated with the design of the eMAR system were identified: limited interactivity; inadequate flexibility; problems related to information layout and semantics; the lack of relevant decision support; and system maintenance issues. We suggest recommendations to improve the design of the eMAR system and to optimise existing workflows. Discussion: Immediate value can be achieved by improving the system interactivity, reducing inconsistencies in data entry design and offering dedicated organisational support to minimise connectivity issues. Longer-term benefits can be achieved by adding decision support features and establishing system interoperability requirements with stakeholder groups (e.g. community pharmacies) prior to system roll-out. In-practice evaluations of technologies like the eMAR system have great value in identifying design weaknesses that inhibit optimal system use.

Relevance: 10.00%

Publisher:

Abstract:

The current research proposes a conceptual design framework for airports to obtain flexible departure layouts based on passenger activity analysis obtained from Business Process Models (BPMs). BPMs available for airport terminals were used as a design tool in the current research to uncover the relationships existing between a spatial layout and the corresponding passenger activities. An algorithm has been developed that demonstrates the applicability of the proposed design framework by obtaining relative spatial layouts based on passenger activity analysis. The generated relative spatial layout assists architects in achieving suitable alternative layouts to meet the changing needs of an airport terminal.
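A minimal, hypothetical sketch of the underlying idea (the activity sequences, threshold and co-occurrence rule below are assumptions, not the paper's algorithm) counts how often one passenger activity directly follows another in BPM-derived sequences and treats frequent transitions as adjacency requirements between the corresponding functional areas:

from collections import Counter

# Hypothetical passenger activity sequences extracted from a BPM of departures.
sequences = [
    ["check-in", "bag drop", "security", "retail", "gate"],
    ["check-in", "security", "retail", "gate"],
    ["bag drop", "security", "gate"],
]

# Count direct transitions between consecutive activities.
transitions = Counter()
for seq in sequences:
    transitions.update(zip(seq, seq[1:]))

# Treat frequent transitions as adjacency requirements between functional areas.
THRESHOLD = 2
adjacency = sorted(pair for pair, count in transitions.items() if count >= THRESHOLD)
print(adjacency)  # pairs of areas the relative layout should keep adjacent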

Relevance: 10.00%

Publisher:

Abstract:

Printed Circuit Board (PCB) layout design is one of the most important and time-consuming phases of the equipment design process in all electronic industries. This paper is concerned with the development and implementation of a computer-aided PCB design package. A set of programs has been developed which operates on a description of the circuit supplied by the user in the form of a data file and subsequently designs the layout of a double-sided PCB. The algorithms used for the design of the PCB optimise the board area and the length of copper tracks used for the interconnections. The output of the package is the layout drawing of the PCB, drawn on a CALCOMP hard copy plotter and a Tektronix 4012 storage graphics display terminal. The routing density (the board area required for one component) achieved by this package is typically 0.8 sq. inch per IC. The package is implemented on a DEC 1090 system in Pascal and FORTRAN, and the SIGN(1) graphics package is used for display generation.

Relevance: 10.00%

Publisher:

Abstract:

The noted 19th-century biologist Ernst Haeckel put forward the idea that the growth (ontogenesis) of an organism recapitulated the history of its evolutionary development. While this idea is defunct within biology, it has been promoted in areas such as education (the idea that an education repeats the development of the civilizations that came before). In the research presented in this paper, recapitulation is used as a metaphor within computer-aided design as a way of grouping together different generations of spatial layouts. In most CAD programs, a spatial layout is represented as a series of objects (lines, or boundary representations) that stand in for walls. The relationships between spaces are not usually explicitly stated. A representation using Lindenmayer Systems (originally designed for the purpose of modelling plant morphology) is put forward as a way of representing the morphology of a spatial layout. The aim of this research is not just to describe an individual layout, but to find representations that link together lineages of development. This representation can be used in generative design as a way of creating more meaningful layouts which have particular characteristics. The use of genetic operators (mutation and crossover) is also considered, making this representation suitable for use with genetic algorithms.
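A minimal sketch of a Lindenmayer-style rewriting of a layout (the alphabet and productions below are hypothetical and are not the paper's representation) treats each symbol as a space and each production as a subdivision step; mutation could perturb a production and crossover could swap productions between two such grammars:

# Hypothetical layout alphabet: H = hall, R = room, S = service space.
# Each production subdivides or extends a space.
RULES = {
    "H": "H[R]R",   # a hall grows and spawns rooms off it
    "R": "RS",      # a room acquires an adjoining service space
}

def rewrite(axiom, rules, generations):
    """Apply the L-system productions in parallel for a number of generations."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(rewrite("H", RULES, 1))  # H[R]R
print(rewrite("H", RULES, 2))  # H[R]R[RS]RS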

Relevance: 10.00%

Publisher:

Abstract:

This paper shows that by using only symbolic language phrases, a mobile robot can purposefully navigate to specified rooms in previously unexplored environments. The robot intelligently organises a symbolic language description of the unseen environment and “imagines” a representative map, called the abstract map. The abstract map is an internal representation of the topological structure and spatial layout of symbolically defined locations. To perform goal-directed exploration, the abstract map creates a high-level semantic plan to reason about spaces beyond the robot’s known world. While completing the plan, the robot uses the metric guidance provided by a spatial layout, and grounded observations of door labels, to efficiently guide its navigation. The system is shown to complete exploration in unexplored spaces by travelling only 13.3% further than the optimal path.