895 results for cost model
Abstract:
Construction projects are complex endeavors that require the involvement of different professional disciplines in order to meet project objectives that are often conflicting. The level of complexity and the multi-objective nature of construction projects lend themselves to collaborative design and construction approaches such as integrated project delivery (IPD), in which the relevant disciplines work together during project conception, design and construction. Traditionally, the main objectives of construction projects have been to build in the least amount of time at the lowest possible cost; thus, the inherent and well-established relationship between cost and time has been the focus of many studies. The importance of being able to effectively model relationships among multiple objectives in building construction has been emphasized in a wide range of research. In general, the trade-off relationship between time and cost is well understood and there is ample research on the subject. However, in sustainable building design, the relationships between time and environmental impact, as well as between cost and environmental impact, have not been fully investigated. The main objectives of this research were to analyze and identify the relationships among time, cost, and environmental impact, measured as CO2 emissions, at different levels of a building (material, component, and building level) during the pre-use phase, which includes manufacturing and construction, and the relationship between life cycle cost and life cycle CO2 emissions during the usage phase. Additionally, this research aimed to develop a robust simulation-based multi-objective decision-support tool, called SimulEICon, which took construction data uncertainty into account and was capable of incorporating life cycle assessment information into the decision-making process. The findings of this research supported the trade-off relationship between time and cost at the different building levels. The relationship between time and CO2 emissions also exhibited trade-off behavior during the pre-use phase. Interestingly, cost and CO2 emissions were found to be proportional during the pre-use phase, and the same pattern persisted from construction into the usage phase. Understanding the relationships among these objectives is key to successfully planning and designing environmentally sustainable construction projects.
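To make the multi-objective trade-off concrete, the following is a minimal sketch (Python, with purely illustrative numbers) of the non-dominated filtering that a decision-support tool such as SimulEICon builds on; it does not reproduce the tool's simulation engine or its treatment of data uncertainty.

```python
# Minimal sketch: filtering non-dominated (time, cost, CO2) alternatives.
# The alternatives below are hypothetical; SimulEICon's actual data,
# uncertainty handling and simulation engine are not reproduced here.

def dominates(a, b):
    """True if alternative a is at least as good as b on every objective
    (all to be minimized) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(alternatives):
    """Keep only alternatives not dominated by any other alternative."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b != a)]

# (time in days, cost in $1000, CO2 in tonnes) -- illustrative values only
options = [(120, 950, 410), (135, 900, 430), (110, 1020, 400), (140, 980, 450)]
print(pareto_front(options))   # the last option is dominated and drops out
```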
Abstract:
To predict the maneuvering performance of a propelled SPAR vessel, a mathematical model was established as a path simulator. A system-based mathematical model was chosen as it offers advantages in cost and time over full Computational Fluid Dynamics (CFD) simulations. The model is intended to provide a means of optimizing the maneuvering performance of this new vessel type. In this study the hydrodynamic forces and control forces are investigated as individual components, combined in a vectorial setting, and transferred to a body-fixed basis. SPAR vessels are known to be very sensitive to large-amplitude motions during maneuvers due to their relatively small hydrostatic restoring forces. Previous model tests of SPAR vessels have shown significant roll and pitch amplitudes, especially during course-change maneuvers. Thus, a full 6-DOF equation of motion was employed in the current numerical model. The mathematical model employed in this study was a combination of the model introduced by the Maneuvering Modeling Group (MMG) and the Abkowitz (1964) model. The new model represents the forces applied to the ship hull, the propeller forces and the rudder forces independently, as proposed by the MMG, but uses the 6-DOF equation of motion introduced by Abkowitz to describe the motion of a maneuvering ship. The mathematical model was used to simulate the trajectory and motions of the propelled SPAR vessel in 10°/10°, 20°/20° and 30°/30° standard zig-zag maneuvers, as well as turning circle tests at rudder angles of 20° and 30°. The simulation results were used to determine the maneuverability parameters (e.g., advance, transfer and tactical diameter) of the vessel. The final model provides a means of predicting and assessing the performance of the vessel type and can be easily adapted to specific vessel configurations based on the generic SPAR-type vessel used in this study.
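For orientation, the modular force decomposition combined with a rigid-body equation of motion can be written in body-fixed coordinates in the generic form
\[
\mathbf{M}\,\dot{\boldsymbol{\nu}} + \mathbf{C}(\boldsymbol{\nu})\,\boldsymbol{\nu} + \mathbf{D}(\boldsymbol{\nu})\,\boldsymbol{\nu} + \mathbf{g}(\boldsymbol{\eta}) = \boldsymbol{\tau}_H + \boldsymbol{\tau}_P + \boldsymbol{\tau}_R ,
\]
where \(\boldsymbol{\nu}=(u,v,w,p,q,r)\) collects the six body-fixed velocities, \(\mathbf{M}\) is the mass plus added-mass matrix, \(\mathbf{C}\) and \(\mathbf{D}\) are the Coriolis/centripetal and damping terms, \(\mathbf{g}\) the hydrostatic restoring forces, and \(\boldsymbol{\tau}_H\), \(\boldsymbol{\tau}_P\), \(\boldsymbol{\tau}_R\) the separately modeled hull, propeller and rudder forces. This is a generic marine-dynamics form given for reference only; the exact coefficient structure of the thesis model is not reproduced here.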
Abstract:
In developing countries, access to modern energy for cooking and heating still remains a challenge to raising households out of poverty. About 2.5 billion people depend on solid fuels such as biomass, wood, charcoal and animal dung. The use of solid fuels has negative outcomes for health, the environment and economic development (Universal Energy Access, UNDP). In low-income countries, 1.3 million deaths occur due to indoor smoke or air pollution from burning solid fuels in small, confined and unventilated kitchens or homes. In addition, pollutants such as black carbon, methane and ozone, emitted when burning inefficient fuels, are responsible for a fraction of climate change and air pollution. There are international efforts to promote the use of clean cookstoves in developing countries, but there is limited evidence on the economic benefits of such distribution programs. This study undertook a systematic economic evaluation of a program that distributed subsidized improved cookstoves to rural households in India. The evaluation examined the effect of different levels of subsidies on the net benefits to the household and to society. This paper answers the question, “Ex post, what are the economic benefits to various stakeholders of a program that distributed subsidized improved cookstoves?” In addressing this question, the evaluation applied empirical data from India to a cost-benefit model to examine how subsidies affect the costs and benefits of the biomass improved cookstove and the electric improved cookstove for different stakeholders.
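As a rough illustration of the stakeholder cost-benefit comparison described above, the sketch below (Python) computes household and societal net present values for a hypothetical subsidized stove; all prices, savings, discount rate and lifetimes are placeholders, not results from the India program.

```python
# Minimal sketch of a stakeholder cost-benefit comparison. All figures,
# the discount rate and the stove lifetime are hypothetical placeholders.

def npv(flows, rate=0.08):
    """Net present value of a list of yearly net benefits (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def household_net_benefit(stove_price, subsidy, yearly_fuel_savings, life_years):
    """Household pays the subsidized price up front, then saves fuel costs."""
    flows = [-(stove_price - subsidy)] + [yearly_fuel_savings] * life_years
    return npv(flows)

def society_net_benefit(stove_price, yearly_fuel_savings,
                        yearly_health_env_benefit, life_years):
    """Society bears the full stove cost (including the subsidy) but also
    gains health and environmental benefits."""
    flows = [-stove_price] + [yearly_fuel_savings + yearly_health_env_benefit] * life_years
    return npv(flows)

# Illustrative comparison of two subsidy levels for a hypothetical stove.
for subsidy in (0, 20):
    print(subsidy,
          round(household_net_benefit(40, subsidy, 15, 3), 2),
          round(society_net_benefit(40, 15, 10, 3), 2))
```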
Abstract:
RATIONALE: Limitations in methods for the rapid diagnosis of hospital-acquired infections often delay initiation of effective antimicrobial therapy. New diagnostic approaches offer potential clinical and cost-related improvements in the management of these infections. OBJECTIVES: We developed a decision modeling framework to assess the potential cost-effectiveness of a rapid biomarker assay to identify hospital-acquired infection in high-risk patients earlier than standard diagnostic testing. METHODS: The framework includes parameters representing rates of infection, rates of delayed appropriate therapy, and impact of delayed therapy on mortality, along with assumptions about diagnostic test characteristics and their impact on delayed therapy and length of stay. Parameter estimates were based on contemporary, published studies and supplemented with data from a four-site, observational, clinical study. Extensive sensitivity analyses were performed. The base-case analysis assumed 17.6% of ventilated patients and 11.2% of nonventilated patients develop hospital-acquired infection and that 28.7% of patients with hospital-acquired infection experience delays in appropriate antibiotic therapy with standard care. We assumed this percentage decreased by 50% (to 14.4%) among patients with true-positive results and increased by 50% (to 43.1%) among patients with false-negative results using a hypothetical biomarker assay. Cost of testing was set at $110/d. MEASUREMENTS AND MAIN RESULTS: In the base-case analysis, among ventilated patients, daily diagnostic testing starting on admission reduced inpatient mortality from 12.3 to 11.9% and increased mean costs by $1,640 per patient, resulting in an incremental cost-effectiveness ratio of $21,389 per life-year saved. Among nonventilated patients, inpatient mortality decreased from 7.3 to 7.1% and costs increased by $1,381 with diagnostic testing. The resulting incremental cost-effectiveness ratio was $42,325 per life-year saved. Threshold analyses revealed the probabilities of developing hospital-acquired infection in ventilated and nonventilated patients could be as low as 8.4 and 9.8%, respectively, to maintain incremental cost-effectiveness ratios less than $50,000 per life-year saved. CONCLUSIONS: Development and use of serial diagnostic testing that reduces the proportion of patients with delays in appropriate antibiotic therapy for hospital-acquired infections could reduce inpatient mortality. The model presented here offers a cost-effectiveness framework for future test development.
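The headline figures above follow from the standard incremental cost-effectiveness ratio, ICER = incremental cost / incremental effectiveness. A minimal sketch, using the reported incremental cost for ventilated patients and a life-year gain back-calculated from the reported ratio (an illustration, not a reported value):

```python
# Minimal sketch of the incremental cost-effectiveness ratio (ICER):
# ICER = (cost_new - cost_standard) / (effect_new - effect_standard).
# The incremental cost ($1,640) is taken from the abstract; the life-years
# gained per patient is back-calculated from the reported ICER for
# illustration only.

def icer(delta_cost, delta_effect):
    """Incremental cost per unit of effectiveness (here, per life-year saved)."""
    return delta_cost / delta_effect

delta_cost_ventilated = 1640.0          # $ per patient (from the abstract)
delta_life_years = 1640.0 / 21389.0     # implied by the reported ICER (~0.077 LY)

print(round(icer(delta_cost_ventilated, delta_life_years)))  # ~21389 $/LY saved
```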
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test and reduce the dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested with sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogenous site types, and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
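A minimal sketch of the PCA-plus-LDA workflow described above, written with scikit-learn on synthetic placeholder features; the real inputs would be per-pixel Worldview-2 bands, band difference ratios and topographic derivatives labelled as workshop sites or non-sites.

```python
# Minimal sketch of a PCA + linear discriminant analysis (LDA) classifier with
# posterior probabilities, using synthetic placeholder features and labels.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))      # placeholder: 12 spectral/topographic features
y = rng.integers(0, 2, size=200)    # placeholder labels: 1 = workshop, 0 = non-site

model = make_pipeline(PCA(n_components=5),           # reduce redundant, correlated bands
                      LinearDiscriminantAnalysis())
model.fit(X, y)

# Posterior probability of the "site" class per sample (pixel).
posterior = model.predict_proba(X)[:, 1]
print(posterior[:5])
```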
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or the parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are either to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, so as to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be directly used for downstream applications including manufacturing and process planning.
This paper presents an approach for optimization based on the feature based CAD model, which uses CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to the change in design variable, the “Parametric Design Velocity” is calculated, which is defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement, in terms of capability and robustness, over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as the software has an API which provides access to the values of the parameters that control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure includes calculating the geometrical movement along the normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be directly linked with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
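A minimal numerical sketch of this chain, with hypothetical arrays standing in for the exported CAD surfaces and the adjoint output: finite-difference design velocities on a discretised boundary are combined with adjoint surface sensitivities to give the gradient of the objective with respect to one CAD parameter.

```python
# Minimal sketch of combining parametric design velocities with adjoint surface
# sensitivities. The arrays below are hypothetical placeholders; in practice the
# perturbed surface comes from re-exporting the CAD model after changing one
# parameter, and the sensitivities come from the adjoint solver.

import numpy as np

def design_velocity(x_base, x_pert, normals, dp):
    """Normal component of boundary movement per unit parameter change,
    from a finite difference of the baseline and perturbed surface points."""
    return np.einsum('ij,ij->i', (x_pert - x_base) / dp, normals)

def gradient(dJ_dn, vel, face_area):
    """dJ/dp ~ surface integral of (adjoint normal sensitivity * design
    velocity), approximated as an area-weighted sum over facets."""
    return np.sum(dJ_dn * vel * face_area)

n_pts = 1000
x_base = np.random.rand(n_pts, 3)                 # baseline surface points
normals = np.tile([0.0, 0.0, 1.0], (n_pts, 1))    # unit normals (placeholder)
x_pert = x_base + 1e-3 * normals                  # surface after perturbing one parameter
dJ_dn = np.random.rand(n_pts)                     # adjoint sensitivity per point
area = np.full(n_pts, 1e-4)                       # facet areas

vel = design_velocity(x_base, x_pert, normals, dp=1e-3)
print(gradient(dJ_dn, vel, area))                 # dJ/dp for this CAD parameter
```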
A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed further with the optimisation process.
Abstract:
Background
Increasing physical activity in the workplace can provide employees with physical and mental health benefits, and employers with economic benefits through reduced absenteeism and increased productivity. The workplace is an opportune setting to encourage habitual activity. However, there is limited evidence on effective behaviour change interventions that lead to maintained physical activity. This study aims to address this gap and help build the necessary evidence base for effective, and cost-effective, workplace interventions.
Methods/design
This cluster randomised controlled trial will recruit 776 office-based employees from public sector organisations in Belfast and Lisburn city centres, Northern Ireland. Participants will be randomly allocated by cluster to either the Intervention Group or Control Group (waiting list control). The 6-month intervention consists of rewards (retail vouchers, based on similar principles to high street loyalty cards), feedback and other evidence-based behaviour change techniques. Sensors situated in the vicinity of participating workplaces will promote and monitor minutes of physical activity undertaken by participants. Both groups will complete all outcome measures. The primary outcome is steps per day recorded using a pedometer (Yamax Digiwalker CW-701) for 7 consecutive days at baseline, 6, 12 and 18 months. Secondary outcomes include health, mental wellbeing, quality of life, work absenteeism and presenteeism, and use of healthcare resources. Process measures will assess intervention “dose”, website usage, and intervention fidelity. An economic evaluation will be conducted from the National Health Service, employer and retailer perspectives using both a cost-utility and cost-effectiveness framework. The inclusion of a discrete choice experiment will further generate values for a cost-benefit analysis. Participant focus groups will explore for whom the intervention worked and why, and interviews with retailers will elucidate their views on the sustainability of a public health focused loyalty card scheme.
Discussion
The study is designed to maximise the potential for roll-out in similar settings, by engaging the public sector and business community in designing and delivering the intervention. We have developed a sustainable business model using a ‘points’ based loyalty platform, whereby local businesses ‘sponsor’ the incentive (retail vouchers) in return for increased footfall to their business.
Abstract:
A novel surrogate model is proposed in lieu of Computational Fluid Dynamics (CFD) solvers for fast nonlinear aerodynamic and aeroelastic modeling. A nonlinear function is identified on selected interpolation points by a discrete empirical interpolation method (DEIM). The flow field is then reconstructed using a least-squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The aeroelastic reduced-order model (ROM) is completed by introducing a nonlinear mapping function between displacements and the DEIM points. The proposed model is used to predict the aerodynamic forces due to forced motions of a NACA 0012 airfoil undergoing a prescribed pitching oscillation. To investigate aeroelastic problems at transonic conditions, pitch/plunge airfoil and cropped delta wing aeroelastic models are built using linear structural models. The presence of shock waves triggers the appearance of limit cycle oscillations (LCO), which the model is able to predict. For all cases tested, the new ROM shows the ability to replicate the nonlinear aerodynamic forces and structural displacements and to reconstruct the complete flow field with sufficient accuracy at a fraction of the cost of the full-order CFD model.
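A minimal sketch of the two reduction ingredients named above, POD mode extraction and DEIM interpolation-point selection, on synthetic snapshot data; the aeroelastic coupling and the nonlinear mapping to structural displacements are not reproduced here.

```python
# Minimal sketch of POD mode extraction and greedy DEIM point selection on
# synthetic snapshot data. A real application would collect the snapshots
# from CFD solutions of the pitching airfoil or delta wing cases.

import numpy as np

def pod_modes(snapshots, r):
    """Leading r POD modes via the SVD of the snapshot matrix (columns = snapshots)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_points(U):
    """Greedy DEIM selection of interpolation indices, one per mode."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        P = U[idx, :j]                       # selected rows of the first j modes
        c = np.linalg.solve(P, U[idx, j])    # interpolation coefficients
        res = U[:, j] - U[:, :j] @ c         # residual of the new mode
        idx.append(int(np.argmax(np.abs(res))))
    return idx

snaps = np.random.rand(500, 40)              # 500 "grid points", 40 snapshots
U = pod_modes(snaps, r=5)
print(deim_points(U))                        # indices where the nonlinearity is sampled
```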
Abstract:
Composites are fast becoming a cost-effective option in the design of engineering structures across a broad range of applications. If the strength-to-weight benefits of these material systems can be exploited, and the challenges in developing lower-cost manufacturing methods overcome, then advanced composite systems will play a bigger role in the diverse range of sectors outside the aerospace industry, where they have been used for decades.
This paper presents physical testing results that showcase the advantages of glass reinforced plastics (GRP), such as the ability to endure loading with minimal deformation. The testing involved is a cross-comparison of GRP grating versus a GRP-encapsulated foam core. The resulting data will then be coupled with design optimization (utilising model simulation) to propose layup alterations that meet the specified load classifications.
Abstract:
A novel surrogate model is proposed in lieu of a computational fluid dynamics (CFD) code for fast nonlinear aerodynamic modeling. First, a nonlinear function is identified on selected interpolation points defined by the discrete empirical interpolation method (DEIM). The flow field is then reconstructed by a least-squares approximation of flow modes extracted by proper orthogonal decomposition (POD). The proposed model is applied to the prediction of limit cycle oscillations for a plunge/pitch airfoil and a delta wing with linear structural models, and the results are validated against a time-accurate CFD-FEM code. The results show that the model is able to replicate the aerodynamic forces and flow fields with sufficient accuracy while requiring a fraction of the CFD cost.
Abstract:
We study work extraction from the Dicke model achieved using simple unitary cyclic transformations, taking into account both a non-optimal unitary protocol and the energetic cost of creating the initial state. By analyzing the role of entanglement, we find that highly entangled states can be inefficient for energy storage when the energetic cost of creating the state is considered. This surprising result holds notwithstanding the fact that the criticality of the model at hand can appreciably improve the extraction of work. While showing the advantages of using a many-body system for work extraction, our results demonstrate that entanglement is not necessarily advantageous for energy storage purposes when non-optimal processes are considered. Our work shows the importance of better understanding the complex interconnections between the non-equilibrium thermodynamics of quantum systems and the correlations among their subparts.
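For reference, the maximal work extractable from a state \(\rho\) by a cyclic unitary process (the ergotropy) is conventionally written as
\[
\mathcal{W}(\rho) = \operatorname{Tr}(\rho H) - \min_{U} \operatorname{Tr}\!\left(U \rho\, U^{\dagger} H\right),
\]
where \(H\) is the system Hamiltonian; a non-optimal protocol extracts only part of this, and the net gain considered here further subtracts the energetic cost of preparing \(\rho\). This is the standard textbook definition; whether the paper uses exactly this quantity is an assumption.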
Abstract:
The problem of selecting suppliers/partners is a crucial part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was prepared and companies were contacted in order to determine which factors carry more weight in their supplier selection decisions. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that provides decision-making support for the supplier/partner selection process.
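A minimal sketch of the linear (SMART-style) additive weighting at the core of such a reference model; the criteria weights and supplier scores below are hypothetical placeholders, not the survey results obtained by the authors.

```python
# Minimal sketch of simple additive weighting for supplier selection.
# Weights and scores are hypothetical placeholders for illustration only.

# Weights for the five broad criteria (normalized to sum to 1.0 here).
weights = {"Quality": 0.30, "Financial": 0.20, "Synergies": 0.15,
           "Cost": 0.25, "Production System": 0.10}

# Normalized scores (0-1) for each candidate supplier against each criterion.
suppliers = {
    "Supplier A": {"Quality": 0.8, "Financial": 0.6, "Synergies": 0.7,
                   "Cost": 0.5, "Production System": 0.9},
    "Supplier B": {"Quality": 0.6, "Financial": 0.9, "Synergies": 0.5,
                   "Cost": 0.8, "Production System": 0.7},
}

def weighted_score(scores, weights):
    """Simple additive weighting: sum of weight * score over all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(suppliers, key=lambda s: weighted_score(suppliers[s], weights),
                 reverse=True)
print(ranking)   # suppliers ordered from best to worst overall score
```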
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The purpose of the study was to explore how a public IT-services-transferor organization comprised of autonomous entities can effectively develop and organize its data center cost recovery mechanisms in a fair manner. The lack of a well-defined model for charges and a cost recovery scheme could cause various problems; for example, one entity may end up subsidizing the costs of another. Transfer pricing is in the best interest of each autonomous entity in a CCA. While transfer pricing plays a pivotal role in the price setting of services and intangible assets, transaction cost economics (TCE) focuses on the arrangement at the boundary between entities. TCE is concerned with the costs, autonomy, and cooperation issues of an organization; the theory deals with the factors that influence intra-firm transaction costs and attempts to expose the problems involved in determining the charges or prices of those transactions. This study was carried out as a single case study in a public organization. The organization intended to transfer the IT services of its own affiliated public entities and was in the process of establishing a municipal joint data center. Nine semi-structured interviews, including two pilot interviews, were conducted with experts and managers of the case company and its affiliated entities. The purpose of these interviews was to explore the charging and pricing issues of intra-firm transactions. In order to process and summarize the findings, this study employed qualitative techniques with multiple methods of data collection. By reviewing TCE theory and a sample of the transfer pricing literature, the study created an IT services pricing framework as a conceptual tool for illustrating the structure of transferring costs. Antecedents and consequences of the transfer price based on TCE were developed. An explanatory fair charging model was eventually developed and suggested. The findings of the study suggested that the Chargeback system was an inappropriate scheme for an organization with affiliated autonomous entities. The main contribution of the study was the application of transfer pricing (TP) methodologies in the public sphere, without consideration of tax issues.
Abstract:
The role of computer modeling has grown recently to become an inseparable complement to experimental studies for the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters that could reduce the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To this end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is treated as a perfectly-stirred reactor which undergoes adiabatic constant-volume combustion. The model was applied to ideally-prepared homogeneous-charge compression-ignition (HCCI) combustion and direct-injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct-injection cases, on the other hand, do not offer a reliable prediction, mainly due to the lack of a turbulent-mixing model, inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which requires the implementation of enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process with reduced computational cost. Furthermore, the model requires the extension of existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
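A minimal sketch of the per-cell treatment described above, each computational cell as an adiabatic, constant-volume, perfectly-stirred reactor, written here with Cantera rather than the KIVA-CHEMKIN coupling used in the study; the n-heptane mechanism file and the mixture composition are placeholders.

```python
# Minimal sketch of a single cell treated as an adiabatic constant-volume
# perfectly-stirred reactor, using Cantera instead of the KIVA/CHEMKIN
# coupling of the study. "nheptane.yaml" is a placeholder file name: a
# reduced n-heptane mechanism (with matching species names) must be supplied.

import cantera as ct

gas = ct.Solution("nheptane.yaml")                       # placeholder mechanism file
gas.TPX = 800.0, 40e5, "nc7h16:1, o2:11, n2:41.36"       # placeholder premixed charge

reactor = ct.IdealGasReactor(gas)                        # constant volume, adiabatic
sim = ct.ReactorNet([reactor])

t, dt = 0.0, 1e-5
while t < 5e-3:                                          # integrate 5 ms of cell time
    t += dt
    sim.advance(t)
    if reactor.T > 1800.0:                               # crude ignition criterion
        print(f"ignition near t = {t*1e3:.2f} ms")
        break
```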