872 results for Network cost allocation
Abstract:
The use of containers has greatly reduced handling operations at ports and at all other transfer points, thus increasing the efficiency and speed of transportation. This was done in an attempt to cut the cost of maritime transport, mainly by reducing cargo-handling costs and ships' time in port through faster handling operations. This paper discusses the major factors influencing the transfer efficiency of seaport container terminals. A network model is designed to analyse container progress in the system and is applied to a seaport container terminal. The model presented here can be seen as a decision support system in the context of investment appraisal of multimodal container terminals. (C) 2000 Elsevier Science Ltd.
Abstract:
The monitoring sites comprising a state of the environment (SOE) network must be carefully selected to ensure that they are representative of the broader resource. Hierarchical cluster analysis (HCA) is a data-driven technique that can potentially be employed to assess the representativeness of an SOE monitoring network. The objective of this paper is to explore the use of HCA as an approach for assessing the representativeness of the New Zealand National Groundwater Monitoring Programme (NGMP), which comprises 110 monitoring sites across the country.
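The clustering step behind such an assessment can be sketched briefly. The following is a minimal single-linkage agglomerative clustering over hypothetical two-dimensional site signatures (e.g. scaled conductivity and nitrate); the NGMP study itself uses richer hydrochemistry data and established HCA tooling, so this is only an illustration of the mechanism:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(c1, c2, points):
    # distance between two clusters = closest pair of member sites
    return min(euclidean(points[i], points[j]) for i in c1 for j in c2)

def hca(points, k):
    """Agglomerative clustering: merge the two closest clusters until k remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = single_linkage(clusters[a], clusters[b], points)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# hypothetical site signatures: (scaled conductivity, scaled nitrate)
sites = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85), (0.5, 0.1)]
groups = hca(sites, 3)
```

A network is arguably representative if every resulting cluster of the broader resource contains at least one monitored site; clusters with no monitored member flag a coverage gap.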
Abstract:
As part of vital infrastructure and the transportation network, bridge structures must function safely at all times. Bridges are designed to have a long life span, yet at any point in time some bridges are aged. The ageing of bridge structures, given the rapidly growing demand for heavy and fast inter-city passage and the continuous increase in freight transportation, requires diligence from bridge owners to ensure that the infrastructure remains healthy at reasonable cost. In recent decades, a new technique, structural health monitoring (SHM), has emerged to meet this challenge. In this new engineering discipline, structural modal identification and damage detection form a vital component. As witnessed by an increasing number of publications, changes in vibration characteristics have been widely and deeply investigated as a means of assessing structural damage. Although a number of publications have addressed the feasibility of various methods through experimental verification, few have focused on steel truss bridges. Finding a feasible vibration-based damage indicator for steel truss bridges, and solving the practical difficulties of modal identification in support of damage detection, motivated this research project. This research aimed to derive an innovative method for assessing structural damage in steel truss bridges. First, it proposed a new damage indicator that relies on optimising the correlation between theoretical and measured modal strain energy. The optimisation is powered by a newly proposed multilayer genetic algorithm. In addition, a selection criterion for damage-sensitive modes was studied to achieve more efficient and accurate damage detection results.
Second, in order to support the proposed damage indicator, the research studied the application of two state-of-the-art modal identification techniques under practical difficulties: limited instrumentation, the influence of environmental noise, the difficulties of finite element model updating, and the data selection problem in output-only modal identification methods. Numerical (a planar truss model) and experimental (a laboratory through-truss bridge) verifications proved the effectiveness and feasibility of the proposed damage detection scheme. The modal strain energy-based indicator was found to be sensitive to damage in steel truss bridges even with incomplete measurements, demonstrating its potential for practical application to steel truss bridges. Lastly, the achievements and limitations of this study, and the lessons learnt from the modal analysis, are summarised.
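The correlation objective at the heart of such an indicator can be written as a Pearson coefficient between measured and theoretically predicted modal strain energy vectors; the genetic algorithm then searches for the damage parameters whose prediction maximizes it. A minimal sketch with hypothetical per-element strain energies (the thesis's exact formulation may differ):

```python
import math

def pearson(a, b):
    """Pearson correlation between two modal strain energy vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# hypothetical per-element strain energies: measured vs. two candidate damage scenarios
measured = [1.0, 2.1, 0.4, 3.0]
candidate_good = [1.1, 2.0, 0.5, 2.9]   # close match, correlation near 1
candidate_poor = [3.0, 0.4, 2.1, 1.0]   # poor match, low/negative correlation
best = max([candidate_good, candidate_poor],
           key=lambda c: pearson(measured, c))
```

In the actual method a GA would generate the candidate scenarios by perturbing element stiffnesses, rather than enumerating two fixed vectors as done here.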
Abstract:
The increasingly widespread use of large-scale 3D virtual environments has translated into an increasing effort required from designers, developers and testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. In the work presented in this paper, two novel neural network-based approaches are presented to predict the correct visualization of 3D content. Multilayer perceptrons and self-organizing maps are trained to learn the normal geometric and color appearance of objects from validated frames and then used to detect novel or anomalous renderings in new images. Our approach is general, for the appearance of the object is learned rather than explicitly represented. Experiments were conducted on a game engine to determine the applicability and effectiveness of our algorithms. The results show that the neural network technology can be effectively used to address the problem of automatic and reliable visual testing of 3D virtual environments.
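The core idea of learning normal appearance from validated frames and flagging deviations can be illustrated with a drastically simplified stand-in: a nearest-centroid appearance model over colour-feature vectors rather than the paper's trained MLP or self-organizing map. The feature vectors and margin below are hypothetical:

```python
def mean_vec(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class AppearanceModel:
    """Learns the normal colour signature of an object from validated frames."""
    def __init__(self, validated, margin=1.5):
        self.centre = mean_vec(validated)
        radius = max(dist(v, self.centre) for v in validated)
        self.threshold = radius * margin   # anything beyond this is "novel"
    def is_anomalous(self, frame):
        return dist(frame, self.centre) > self.threshold

# hypothetical normalized colour histograms of correctly rendered frames
valid = [[0.80, 0.10, 0.10], [0.75, 0.15, 0.10], [0.82, 0.08, 0.10]]
model = AppearanceModel(valid)
```

A badly rendered frame such as `[0.1, 0.1, 0.8]` falls far outside the learned region and is flagged; the neural approaches in the paper generalize this idea to full geometric and colour appearance.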
Abstract:
The Texas Department of Transportation (TxDOT) is concerned about the widening gap between pavement preservation needs and available funding. The TxDOT Austin District Pavement Engineer (DPE) has therefore investigated methods to strategically allocate available pavement funding to the projects that most improve the overall performance of the District and Texas highway systems. The primary objective of the study presented in this paper is to develop a network-level project screening and ranking method that supports development of the Austin District 4-year pavement management plan. The study developed candidate project selection and ranking algorithms that evaluate the pavement condition of each candidate project using data contained in the Pavement Management Information System (PMIS) database and incorporate insights from Austin District pavement experts, and then implemented the method and its supporting algorithms. This process previously required weeks to complete but now takes about 10 minutes, including data preparation and running the analysis algorithm, which enables the Austin DPE to devote more time and resources to conducting field visits, performing project-level evaluation and testing candidate projects. The case study results showed that the proposed method assisted the DPE in evaluating and prioritizing projects and in allocating funds to the right projects at the right time.
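A network-level screening step of this kind typically reduces to scoring each candidate section on weighted condition attributes and sorting. The sketch below uses invented section IDs, attributes and weights purely for illustration; the actual PMIS attributes and expert-derived weights are not stated in the abstract:

```python
# hypothetical priority weights: higher attribute value = worse condition / more urgent
WEIGHTS = {"distress": 0.5, "ride": 0.3, "traffic": 0.2}

def priority(section, weights=WEIGHTS):
    """Weighted priority score for a candidate section (higher = rank earlier)."""
    return sum(weights[k] * section[k] for k in weights)

candidates = [
    {"id": "US183-A", "distress": 0.7, "ride": 0.4, "traffic": 0.9},
    {"id": "SH71-B",  "distress": 0.9, "ride": 0.8, "traffic": 0.5},
    {"id": "FM973-C", "distress": 0.2, "ride": 0.3, "traffic": 0.8},
]

ranked = sorted(candidates, key=priority, reverse=True)
```

With these numbers `SH71-B` (score 0.79) outranks `US183-A` (0.65) and `FM973-C` (0.35); the real method would draw the attribute values from PMIS records for each project candidate.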
Abstract:
Safety at Railway Level Crossings (RLXs) is an important issue within the Australian transport system. Crashes at RLXs involving road vehicles in Australia are estimated to cost $10 million each year. Such crashes are mainly due to human factors; unintentional errors contribute to 46% of all fatal collisions and are far more common than deliberate violations. This suggests that innovative interventions targeting drivers are particularly promising for improving RLX safety. In recent years there has been rapid development of a variety of affordable technologies that can be used to increase drivers' risk awareness around crossings. To date, no research has evaluated the potential effects of such technologies at RLXs in terms of safety, traffic and acceptance of the technology. Integrating driving and traffic simulations is a safe and affordable approach for evaluating these effects. This methodology will be implemented in a driving simulator, in which we recreated realistic driving scenarios with typical road environments and realistic traffic. This paper presents a methodology for comprehensively evaluating the potential benefits and negative effects of such interventions: it evaluates driver awareness at RLXs, as well as driver distraction and workload when using the technology. Subjective assessments of the perceived usefulness and ease of use of the technology are obtained from standard questionnaires. Driving simulation will provide a model of driving behaviour at RLXs, which will be used to estimate the effects of the new technology on a road network featuring RLXs for different market penetrations using a traffic simulation. This methodology can assist in evaluating future safety interventions at RLXs.
Abstract:
Software as a Service (SaaS) in the Cloud has recently become increasingly significant among software users and providers. A SaaS delivered as a composite application has many benefits, including reduced delivery costs, flexible offerings of SaaS functions and decreased subscription costs for users. However, this approach introduces a new problem in managing the resources allocated to the composite SaaS. The resource allocation made at the initial stage may become overloaded or wasted due to the dynamic environment of a Cloud. A typical data center resource manager usually triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimize the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration also has to comply with SaaS constraints in terms of resource requirements, placement requirements and SLAs. To tackle this problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering multiple composite SaaS components in the Cloud. The main objective is to minimize the resources used by the SaaS by clustering its components without violating any constraint. Experimental results demonstrate the feasibility and scalability of the proposed algorithm.
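The penalty mechanism can be sketched with an ordinary genetic algorithm over a tiny bin-packing-style instance: a chromosome assigns each component to a server, and infeasible placements are not discarded but charged a fitness penalty. This sketch uses a plain per-component encoding with a greedy first-fit seed, not the grouping encoding of the paper, and all demands, capacities and GA parameters are invented:

```python
import random

CAP = 5                      # hypothetical server capacity (CPU units)
DEMAND = [3, 3, 2, 2]        # hypothetical demand of each SaaS component
MAX_SERVERS = len(DEMAND)
PENALTY = 10                 # fitness penalty per unit of capacity violation

def fitness(assign):
    """Servers used plus penalty for any over-capacity server (lower is better)."""
    load = {}
    for comp, srv in enumerate(assign):
        load[srv] = load.get(srv, 0) + DEMAND[comp]
    violation = sum(max(0, l - CAP) for l in load.values())
    return len(load) + PENALTY * violation

def first_fit():
    """Greedy seed solution: place each component on the first server that fits."""
    assign, load = [], [0] * MAX_SERVERS
    for d in DEMAND:
        srv = next(s for s in range(MAX_SERVERS) if load[s] + d <= CAP)
        assign.append(srv)
        load[srv] += d
    return assign

def mutate(assign):
    child = list(assign)
    child[random.randrange(len(child))] = random.randrange(MAX_SERVERS)
    return child

def evolve(pop_size=30, generations=100, seed=1):
    random.seed(seed)
    pop = [first_fit()] + [[random.randrange(MAX_SERVERS) for _ in DEMAND]
                           for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness)              # elitist: keep the better half
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

best = evolve()
```

For these demands the optimum packs the four components onto two servers (loads 5 and 5), so the best fitness is 2; the penalty term is what lets the search move through infeasible assignments without accepting them.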
Abstract:
BACKGROUND: Frequent illness and injury among workers with high body mass index (BMI) can raise the costs of employee healthcare and reduce workforce maintenance and productivity. These issues are particularly important in vocational settings such as the military, which require good physical health, regular attendance and teamwork to operate efficiently. The purpose of this study was to compare the incidence of injury and illness, absenteeism, productivity, healthcare usage and administrative outcomes among Australian Defence Force personnel with varying BMI. METHODS: Personnel were grouped into cohorts according to the following BMI ranges: normal (18.5-24.9 kg/m²; n = 197), overweight (25-29.9 kg/m²; n = 154), obese (≥30 kg/m²) with restricted body fat (≤28% for females, ≤24% for males; n = 148) and obese with no restriction on body fat (n = 180). Medical records for each individual were audited retrospectively to record the incidence of injury and illness, absenteeism, productivity, healthcare usage (i.e., consultation with medical specialists, hospital stays, medical investigations, prescriptions) and administrative outcomes (e.g., discharge from service) over one year. These data were then grouped and compared between the cohorts. RESULTS: The prevalence of injury and illness, cost of medical specialist consultations and cost of medical scans were all higher (p <0.05) in both obese cohorts compared with the normal cohort. The estimated productivity losses from restricted work days were also higher (p <0.05) in the obese cohort with no restriction on body fat compared with the normal cohort. Among the obese cohorts, the prevalence of injury and illness, healthcare usage and productivity were not significantly greater in the cohort with no restriction on body fat than in the cohort with restricted body fat.
The number of restricted work days, the rate of re-classification of Medical Employment Classification and the rate of discharge from service were similar between all four cohorts. CONCLUSIONS: High BMI in the military increases healthcare usage, but does not disrupt workforce maintenance. The greater prevalence of injury and illness, greater healthcare usage and lower productivity in obese Australian Defence Force personnel is not related to higher levels of body fat.
Abstract:
A cost estimation method is required to estimate the life cycle cost of a product family at the early stage of product development in order to evaluate the product family design. Existing cost estimation techniques have difficulty estimating the life cycle cost of a product family at this early stage. This paper proposes a framework that combines a knowledge-based system with activity-based costing techniques to estimate the life cycle cost of a product family at the early stage of product development. The inputs of the framework are the product family structure and its sub-functions. The output is the life cycle cost of the product family, consisting of all costs at each product family level and the costs of each product life cycle stage. The proposed framework provides a life cycle cost estimation tool for a product family at the early stage of product development using high-level information as its input. The framework makes it possible to estimate the life cycle cost of various product families using any type of product structure. It provides detailed information on the activity and resource costs of both parts and products, which can assist the designer in analyzing the cost of the product family design. In addition, it can reduce the amount of information and time required to construct the cost estimation system.
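The activity-based costing half of such a framework is essentially a rollup: each part consumes activities, each activity has a cost rate, and family cost is the sum over parts. The rates, activities and module names below are hypothetical; the paper's framework adds knowledge-based inference on top of this arithmetic:

```python
# hypothetical activity cost rates (cost per hour)
RATES = {"machining": 40.0, "assembly": 25.0, "testing": 30.0}

# hypothetical activity consumption (hours) per product family module
FAMILY = {
    "base_module": {"machining": 2.0, "assembly": 1.0, "testing": 0.5},
    "variant_a":   {"machining": 0.5, "assembly": 0.5, "testing": 0.5},
}

def activity_cost(consumption, rates=RATES):
    """Activity-based cost of one module: sum of hours x rate per activity."""
    return sum(rates[a] * hours for a, hours in consumption.items())

costs = {module: activity_cost(c) for module, c in FAMILY.items()}
family_cost = sum(costs.values())
```

Here the base module costs 120.0, the variant 47.5, giving a family total of 167.5; a life cycle version repeats this rollup per life cycle stage (production, use, end-of-life) and sums the stages.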
Abstract:
Organisations constantly seek efficiency improvements for their business processes in terms of time and cost. Management accounting enables reporting of detailed operational costs for decision-making purposes, although significant effort is required to gather accurate operational data. Business process management is concerned with systematically documenting, managing, automating and optimising processes. Process mining gives valuable insight into processes through analysis of events recorded by an IT system in the form of an event log, with a focus on efficient utilisation of time and resources, although its primary focus is not on cost implications. In this paper, we propose a framework to support management accounting decisions on cost control by automatically incorporating cost data with historical data from event logs for monitoring, predicting and reporting process-related costs. We also illustrate how accurate, relevant and timely management accounting style cost reports can be produced on demand by extending the open-source process mining framework ProM.
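The basic join of cost data with an event log can be sketched in a few lines: each log event carries a case ID, an activity and a duration, and a rate table prices the activity time. The rates, activities and log entries below are invented; the paper's ProM extension works on real event-log formats rather than tuples:

```python
from collections import defaultdict

# hypothetical cost rate per minute of each activity
RATES = {"register": 0.5, "assess": 1.2, "approve": 0.8}

# hypothetical event log: (case id, activity, duration in minutes)
LOG = [
    ("case1", "register", 10),
    ("case1", "assess",   30),
    ("case2", "register", 12),
    ("case1", "approve",   5),
    ("case2", "assess",   20),
]

def case_costs(log, rates):
    """Roll event-level costs (duration x activity rate) up to process cases."""
    costs = defaultdict(float)
    for case, activity, minutes in log:
        costs[case] += rates[activity] * minutes
    return dict(costs)

costs = case_costs(LOG, RATES)
```

With these numbers case1 costs 45.0 and case2 costs 30.0; an on-demand cost report is then just an aggregation of this dictionary by activity, resource or period.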
Abstract:
Quality-based frame selection is a crucial task in video face recognition, both to improve the recognition rate and to reduce computational cost. In this paper we present a framework that uses a variety of cues (face symmetry, sharpness, contrast, closeness of mouth, brightness and openness of the eyes) to select the highest-quality facial images available in a video sequence for recognition. Normalized feature scores are fused using a neural network, and frames with high quality scores are used in a Local Gabor Binary Pattern Histogram Sequence based face recognition system. Experiments on the Honda/UCSD database show that the proposed method selects the best-quality face images in the video sequence, resulting in improved recognition performance.
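The selection pipeline (normalize each cue across frames, fuse into one score, keep the top frames) can be sketched with a simple average standing in for the trained neural fusion. The cue values below are hypothetical:

```python
def min_max(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def select_frames(frames, top_k=2):
    """Normalize each cue across frames, average per frame, return best indices."""
    cues = list(frames[0])
    norm = {c: min_max([f[c] for f in frames]) for c in cues}
    score = [sum(norm[c][i] for c in cues) / len(cues)
             for i in range(len(frames))]
    return sorted(range(len(frames)), key=score.__getitem__, reverse=True)[:top_k]

# hypothetical per-frame cue scores (subset of the cues named in the abstract)
FRAMES = [
    {"sharpness": 0.9, "contrast": 0.8, "symmetry": 0.7},
    {"sharpness": 0.2, "contrast": 0.3, "symmetry": 0.4},
    {"sharpness": 0.6, "contrast": 0.5, "symmetry": 0.9},
]
best = select_frames(FRAMES)
```

Here frames 0 and 2 are kept and the blurry, low-contrast frame 1 is dropped; the paper replaces the unweighted average with a neural network learned from labelled quality scores.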
Abstract:
Management scholars and practitioners emphasize the importance of the size and diversity of a knowledge worker's social network. Constraints on knowledge workers’ time and energy suggest that more is not always better. Further, why and how larger networks contribute to valuable outcomes deserves further understanding. In this study, we offer hypotheses to shed insight on the question of the diminishing returns of large networks and the specific form of network diversity that may contribute to innovative performance among knowledge workers. We tested our hypotheses using data collected from 93 R&D engineers in a Sino-German automobile electronics company located in China. Study findings identified an inflection point, confirming our hypothesis that the size of the knowledge worker's egocentric network has an inverted U-shaped effect on job performance. We further demonstrate that network dispersion richness (the number of cohorts that the focal employee has connections to) rather than network dispersion evenness (equal distribution of ties across the cohorts) has more influence on the knowledge worker's job performance. Additionally, we found that the curvilinear effect of network size is fully mediated by network dispersion richness. Implications for future research on social networks in China and Western contexts are discussed.
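The distinction between dispersion richness and dispersion evenness is easy to make concrete: richness counts the distinct cohorts an employee's ties reach, while evenness measures how equally the ties are spread across those cohorts (normalized Shannon entropy is one common operationalization; the paper's exact measure may differ). The tie lists below are invented:

```python
import math
from collections import Counter

def richness(ties):
    """Number of distinct cohorts the focal employee has ties to."""
    return len(set(ties))

def evenness(ties):
    """Normalized Shannon entropy of the tie distribution across cohorts (0..1)."""
    counts = Counter(ties).values()
    n = sum(counts)
    h = -sum(c / n * math.log(c / n) for c in counts)
    k = len(counts)
    return h / math.log(k) if k > 1 else 0.0

# two hypothetical engineers, six ties each
spread  = ["R&D", "sales", "quality", "suppliers", "mgmt", "R&D"]
clumped = ["R&D", "R&D", "R&D", "R&D", "sales", "sales"]
```

The first engineer reaches five cohorts, the second only two, even though both hold six ties; the study's finding is that it is this richness, not the evenness of the spread, that drives job performance.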
Abstract:
Appropriate assessment and management of diabetes-related foot ulcers (DRFUs) is essential to reduce amputation risk. Management requires debridement, wound dressing, pressure off-loading, good glycaemic control and potentially antibiotic therapy and vascular intervention. As a minimum, all DRFUs should be managed by a doctor and a podiatrist and/or wound care nurse. Health professionals unable to provide appropriate care for people with DRFUs should promptly refer individuals to professionals with the requisite knowledge and skills. Indicators for immediate referral to an emergency department or multidisciplinary foot care team (MFCT) include gangrene, limb-threatening ischaemia, deep ulcers (bone, joint or tendon in the wound base), ascending cellulitis, systemic symptoms of infection and abscesses. Referral to an MFCT should occur if there is lack of wound progress after 4 weeks of appropriate treatment.
Abstract:
We consider a joint relay selection and subcarrier allocation problem that minimizes the total system power for a multi-user, multi-relay, single-source cooperative OFDM-based two-hop system. The system is constrained so that each user has a specific subcarrier requirement (user fairness); however, no specific fairness constraints are imposed on relays. To ensure optimum power allocation, the subcarriers in the two hops are paired with each other. We obtain an optimal subcarrier allocation for the single-user case using a method similar to that described in [1] and modify the algorithm for the multiuser scenario. Although optimality is not achieved in the multiuser case, the probability of all users being served fairly is improved significantly at a relatively low cost trade-off.
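The pairing step in such two-hop systems is often done by ordered matching: sort the subcarriers of each hop by channel gain and pair the k-th strongest first-hop subcarrier with the k-th strongest second-hop one. The sketch below shows only this pairing, with hypothetical gain vectors, and omits the power allocation and relay selection that the paper solves jointly:

```python
def pair_subcarriers(hop1_gains, hop2_gains):
    """Ordered pairing: k-th strongest hop-1 subcarrier with k-th strongest hop-2."""
    order1 = sorted(range(len(hop1_gains)), key=hop1_gains.__getitem__, reverse=True)
    order2 = sorted(range(len(hop2_gains)), key=hop2_gains.__getitem__, reverse=True)
    return list(zip(order1, order2))

# hypothetical channel gains on each hop
h1 = [0.2, 0.9, 0.5]
h2 = [0.7, 0.1, 0.6]
pairs = pair_subcarriers(h1, h2)
```

With these gains the strongest first-hop subcarrier (index 1) is paired with the strongest second-hop subcarrier (index 0), and so on; matching strong with strong avoids any pairing being bottlenecked by one weak hop.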