25 results for travel cost method
in Aston University Research Archive
Abstract:
Renewable energy project development is highly complex and success is by no means guaranteed. Decisions are often made with approximate or uncertain information, yet the methods currently employed by decision-makers do not necessarily accommodate this. Levelised energy cost (LEC) is one commonly applied measure used within the energy industry to assess the viability of potential projects and inform policy. This research proposes a method for accommodating such uncertainty by enhancing the traditional discounted LEC measure with fuzzy set theory. Furthermore, the research develops the fuzzy LEC (F-LEC) methodology to incorporate the cost of financing a project from debt and equity sources. Applied to an example bioenergy project, the research demonstrates the benefit of incorporating fuzziness for decisions on project viability, optimal capital structure and key-variable sensitivity analysis. The proposed method contributes by incorporating uncertain and approximate information into the widely used LEC measure and by being applicable to a wide range of energy project viability decisions. © 2013 Elsevier Ltd. All rights reserved.
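The discounting-plus-fuzziness idea can be sketched in a few lines. This is an illustration only, not the paper's F-LEC method: triangular fuzzy numbers are represented as simple (low, mode, high) tuples, the bioenergy cost figures are invented, and debt/equity financing is not modelled.

```python
# Sketch: levelised energy cost (LEC) with triangular fuzzy inputs.
# Hypothetical illustration of combining discounting with fuzzy set
# theory; NOT the paper's F-LEC formulation.

def lec(capex, annual_cost, annual_energy_mwh, rate, years):
    """Classical discounted LEC in currency units per MWh."""
    disc_costs = capex + sum(annual_cost / (1 + rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(annual_energy_mwh / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy

def fuzzy_lec(capex_tfn, annual_cost_tfn, energy_tfn, rate, years):
    """Propagate (low, mode, high) fuzzy inputs through the LEC ratio.
    Cost vertices pair with opposite energy vertices so the result
    brackets the ratio (low cost / high energy gives the low LEC)."""
    lo = lec(capex_tfn[0], annual_cost_tfn[0], energy_tfn[2], rate, years)
    mid = lec(capex_tfn[1], annual_cost_tfn[1], energy_tfn[1], rate, years)
    hi = lec(capex_tfn[2], annual_cost_tfn[2], energy_tfn[0], rate, years)
    return (lo, mid, hi)

# Invented example figures for a small bioenergy plant over 20 years at 8%.
result = fuzzy_lec((9e6, 10e6, 12e6), (0.9e6, 1e6, 1.2e6),
                   (38000, 40000, 42000), 0.08, 20)
```

Pairing the low-cost vertex with the high-energy vertex (and vice versa) brackets the quotient, which is the standard interval-arithmetic treatment of a ratio of positive quantities.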
Abstract:
In developed countries travel time savings can account for as much as 80% of the overall benefits arising from transport infrastructure and service improvements. In developing countries they are generally ignored in transport project appraisals, notwithstanding their importance. One of the reasons for ignoring these benefits in the developing countries is that there is insufficient empirical evidence to support the conventional models for valuing travel time where work patterns, particularly of the poor, are diverse and it is difficult to distinguish between work and non-work activities. The exclusion of time saving benefits may lead to a bias against investment decisions that benefit the poor and understate the poverty reduction potential of transport investments in Least Developed Countries (LDCs). This is because the poor undertake most travel and transport by walking and headloading on local roads, tracks and paths, and improvements of local infrastructure and services bring large time saving benefits for them through modal shifts. The paper reports on an empirical study to develop a methodology for valuing rural travel time savings in the LDCs. Apart from identifying the theoretical and empirical issues in valuing travel time savings in the LDCs, the paper presents and discusses the results of an analysis of data from Bangladesh. Some of the study findings challenge the conventional wisdom concerning time saving values. The Bangladesh study suggests that the western concept of dividing travel time savings into working and non-working time savings is broadly valid in the developing country context. The study validates the use of preference methods in valuing non-working time savings. However, the stated preference (SP) method is more appropriate than the revealed preference (RP) method.
Abstract:
This paper presents a new method for the optimisation of the mirror element spacing arrangement and operating temperature of linear Fresnel reflectors (LFR). The specific objective is to maximise available power output (i.e. exergy) and operational hours whilst minimising cost. The method is described in detail and compared to an existing design method prominent in the literature. Results are given in terms of the exergy per total mirror area (W/m2) and cost per exergy (US $/W). The new method is applied principally to the optimisation of an LFR in Gujarat, India, for which cost data have been gathered. It is recommended to use a spacing arrangement such that the onset of shadowing among mirror elements occurs at a transversal angle of 45°. This results in a cost per exergy of 2.3 $/W. Compared to the existing design approach, the exergy averaged over the year is increased by 9% to 50 W/m2 and an additional 122 h of operation per year are predicted. The ideal operating temperature at the surface of the absorber tubes is found to be 300 °C. It is concluded that the new method is an improvement over existing techniques and a significant tool for any future design work on LFR systems.
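As a toy illustration of the 45° spacing rule (not the paper's optimisation method), treat each mirror element's raised edge as a vertical obstruction of height h = (w/2)·sin β, where w is the mirror width and β its tilt; the edge casts a horizontal shadow of length h·tan θ at transversal sun angle θ, so shadowing of the neighbour begins when that shadow spans the gap g. All names and the geometry itself are simplifying assumptions.

```python
import math

# Toy shadowing-onset model for adjacent LFR mirror elements.
# Assumption: the raised edge of a tilted mirror acts as a vertical
# obstruction of height h = (w/2)*sin(beta); shadowing begins when
# its horizontal shadow h*tan(theta) reaches across the gap g.

def gap_for_onset(width, tilt_deg, onset_deg=45.0):
    """Gap (same units as width) so shadowing starts at onset_deg."""
    h = (width / 2) * math.sin(math.radians(tilt_deg))
    return h * math.tan(math.radians(onset_deg))

def onset_angle(width, tilt_deg, gap):
    """Transversal angle (degrees) at which shadowing begins."""
    h = (width / 2) * math.sin(math.radians(tilt_deg))
    return math.degrees(math.atan(gap / h))
```

At a 45° onset the two functions are inverses of each other, since tan 45° = 1 makes the required gap equal to the obstruction height.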
Abstract:
OBJECTIVES: To assess whether blood pressure control in primary care could be improved with the use of patient held targets and self monitoring in a practice setting, and to assess the impact of these on health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences, and costs. DESIGN: Randomised controlled trial. SETTING: Eight general practices in south Birmingham. PARTICIPANTS: 441 people receiving treatment in primary care for hypertension but not controlled below the target of < 140/85 mm Hg. INTERVENTIONS: Patients in the intervention group received treatment targets along with facilities to measure their own blood pressure at their general practice; they were also asked to visit their general practitioner or practice nurse if their blood pressure was repeatedly above the target level. Patients in the control group received usual care (blood pressure monitoring by their practice). MAIN OUTCOME MEASURES: Primary outcome: change in systolic blood pressure at six months and one year in both intervention and control groups. Secondary outcomes: change in health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences of method of blood pressure monitoring, and costs. RESULTS: 400 (91%) patients attended follow up at one year. Systolic blood pressure in the intervention group had significantly reduced after six months (mean difference 4.3 mm Hg (95% confidence interval 0.8 mm Hg to 7.9 mm Hg)) but not after one year (mean difference 2.7 mm Hg (- 1.2 mm Hg to 6.6 mm Hg)). No overall difference was found in diastolic blood pressure, anxiety, health behaviours, or number of prescribed drugs. Patients who self monitored lost more weight than controls (as evidenced by a drop in body mass index), rated self monitoring above monitoring by a doctor or nurse, and consulted less often. 
Overall, self monitoring did not cost significantly more than usual care (251 pounds sterling (437 dollars; 364 euros) (95% confidence interval 233 pounds sterling to 275 pounds sterling) versus 240 pounds sterling (217 pounds sterling to 263 pounds sterling)). CONCLUSIONS: Practice based self monitoring resulted in small but significant improvements of blood pressure at six months, which were not sustained after a year. Self monitoring was well received by patients, anxiety did not increase, and there was no appreciable additional cost. Practice based self monitoring is feasible and results in blood pressure control that is similar to that in usual care.
Abstract:
Most parametric software cost estimation models used today evolved in the late 1970s and early 1980s. At that time, the dominant software development techniques in use were the early 'structured methods'. Since then, several new systems development paradigms and methods have emerged, one being Jackson Systems Development (JSD). Because current cost estimating methods do not take account of these developments, they cannot provide adequate estimates of effort, and hence cost, for such projects. In order to address these shortcomings, two new estimation methods have been developed for JSD projects. One of these methods, JSD-FPA, is a top-down estimating method based on the existing MKII function point method. The other, JSD-COCOMO, is a sizing technique which sizes a project, in terms of lines of code, from the process structure diagrams and thus provides an input to the traditional COCOMO method. The JSD-FPA method allows JSD projects in both the real-time and scientific application areas to be costed, as well as the commercial information systems applications to which FPA is usually applied. The method is based upon a three-dimensional view of a system specification, as opposed to the largely data-oriented view traditionally used by FPA. It uses counts of various attributes of a JSD specification to develop a metric which provides an indication of the size of the system to be developed. This size metric is then transformed into an estimate of effort by calculating past project productivity and using this figure to predict the effort, and hence cost, of a future project. The effort estimates produced were validated by comparing them against the effort figures for six actual projects. The JSD-COCOMO method uses counts of the levels in a process structure chart as the input to an empirically derived model which transforms them into an estimate of delivered source code instructions.
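The COCOMO model that JSD-COCOMO ultimately feeds is the well-known basic effort equation, effort = a·(KDSI)^b in person-months, with Boehm's published basic-model coefficients. The sketch below shows that final step; the conversion from process-structure-chart counts to delivered source instructions is a purely hypothetical stub, not the thesis's model.

```python
# Basic COCOMO effort estimation (Boehm's basic-model coefficients).
# The sizing stub feeding it is an illustrative assumption only.

COCOMO_BASIC = {            # mode: (a, b)
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def cocomo_effort(kdsi, mode="organic"):
    """Effort in person-months from size in thousands of delivered
    source instructions (KDSI)."""
    a, b = COCOMO_BASIC[mode]
    return a * kdsi ** b

# Hypothetical sizing stub: assume each node counted in the process
# structure chart contributes a fixed number of delivered source
# instructions (an invented figure, not the JSD-COCOMO model).
def size_from_structure_levels(level_counts, dsi_per_node=40):
    return sum(level_counts) * dsi_per_node / 1000.0   # KDSI

kdsi = size_from_structure_levels([12, 30, 55])
effort = cocomo_effort(kdsi, "organic")
```

The superlinear exponent b is what makes larger projects disproportionately more expensive per instruction, which is why accurate sizing from the specification matters.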
Abstract:
The work described in the following pages was carried out at various sites in the Rod Division of the Delta Metal Company. Extensive variation in the level of activity in the industry during the years 1974 to 1975 had led to certain inadequacies being observed in the traditional cost control procedure. In an attempt to remedy this situation, it was suggested that a method be found of constructing a system to improve the flexibility of cost control procedures. The work involved an assimilation of the industrial and financial environment via pilot studies, which later proved invaluable in homing in on the really interesting and important areas. Weaknesses in the current systems which came to light made the methodology of data collection and the improvement of cost control and profit planning procedures easier to adopt. Because the project was required to investigate the implications of cost behaviour for profit planning and control, the next stage of the research was to use the on-site experience to examine the nature of cost behaviour at a detailed level. The analysis of factory costs then showed that certain costs, which were the most significant, exhibited a stable relationship with respect to some known variable, usually a specific measure of output. These costs were then formulated in a cost model to establish accurate standards in a complex industrial setting, in order to provide a meaningful comparison against which to judge actual performance. The necessity of a cost model was reinforced by the fact that the cost behaviour found to exist was, in the main, a step function: complex behaviour that the traditional cost and profit planning procedures could not incorporate. Already implemented from this work is the establishment of the post of information officer to co-ordinate data collection and information provision.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimisation were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be either a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma distributed. Lastly, the actual supply quantity is also allowed to be a random variable.
All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
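For orientation, the textbook approximate cost expression for a continuous-review (Q, R) policy with normally distributed lead-time demand is sketched below. It conveys the kind of expression being minimised; the exact models in the thesis (truncated demand, alternative backorder cost functions, gamma lead times, random supply quantity) are considerably more elaborate.

```python
import math

# Classical approximate annual cost of a (Q, R) policy:
# ordering + holding + shortage, with normal lead-time demand.
# This is the standard textbook model, not the thesis's exact models.

def _phi(z):   # standard normal pdf
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def _Phi(z):   # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def expected_shortage(R, mu_L, sigma_L):
    """Expected units short per cycle (normal loss function)."""
    z = (R - mu_L) / sigma_L
    return sigma_L * (_phi(z) - z * (1 - _Phi(z)))

def qr_annual_cost(Q, R, demand, order_cost, hold_cost, backorder_cost,
                   mu_L, sigma_L):
    """mu_L, sigma_L: mean and std dev of lead-time demand."""
    cycles = demand / Q                          # orders per year
    ordering = order_cost * cycles
    holding = hold_cost * (Q / 2 + R - mu_L)     # average on-hand stock
    shortage = backorder_cost * cycles * expected_shortage(R, mu_L, sigma_L)
    return ordering + holding + shortage
```

Raising R trades higher holding cost against a smaller expected shortage per cycle, which is exactly the tension the optimisation methods in the thesis resolve.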
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to heat conduction in two-dimensional bodies, where the thermal diffusivity is piecewise constant. We extend the MFS proposed in Johansson and Lesnic [A method of fundamental solutions for transient heat conduction, Eng. Anal. Bound. Elem. 32 (2008), pp. 697–703] for one-dimensional heat conduction with the sources placed outside the space domain of interest, to the two-dimensional setting. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be obtained efficiently with small computational cost.
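The flavour of the MFS can be conveyed on a much simpler relative of this problem: steady-state heat conduction (Laplace's equation) on the unit disc, with boundary data fitted by collocation against fundamental solutions whose singularities lie outside the domain. The sketch below is illustrative only; the transient and inverse problems treated here require the heat-kernel fundamental solution and careful regularisation.

```python
import math

# MFS sketch for Laplace's equation on the unit disc: approximate the
# solution by a combination of fundamental solutions G with source
# points on a circle OUTSIDE the domain, fitting boundary data by
# collocation. Illustrative only; not the transient MFS of the paper.

N = 16                      # number of collocation/source points
R_SRC = 2.0                 # source circle radius (outside the disc)

def G(px, py, qx, qy):      # fundamental solution of the 2D Laplacian
    return -math.log(math.hypot(px - qx, py - qy)) / (2 * math.pi)

bnd = [(math.cos(2*math.pi*k/N), math.sin(2*math.pi*k/N)) for k in range(N)]
src = [(R_SRC*math.cos(2*math.pi*k/N), R_SRC*math.sin(2*math.pi*k/N))
       for k in range(N)]

def exact(x, y):            # harmonic function used as boundary data
    return x*x - y*y

# Square collocation system A c = b.
A = [[G(bx, by, sx, sy) for (sx, sy) in src] for (bx, by) in bnd]
b = [exact(bx, by) for (bx, by) in bnd]

def solve(A, b):            # Gaussian elimination with partial pivoting
    n = len(b); M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j]*x[j] for j in range(i+1, n))) / M[i][i]
    return x

coef = solve(A, b)

def u(x, y):                # MFS approximation inside the disc
    return sum(c * G(x, y, sx, sy) for c, (sx, sy) in zip(coef, src))
```

With only sixteen exterior sources the interior approximation already matches the harmonic test solution to several digits, which is the "accurate results with small computational cost" behaviour the abstracts describe.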
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional inverse Stefan problem for the heat equation by extending the MFS proposed in [5] for the one-dimensional direct Stefan problem. The sources are placed outside the space domain of interest and in the time interval (-T, T). Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently with small computational cost.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the backward heat conduction problem (BHCP). We extend the MFS proposed in Johansson and Lesnic (2008) [5] and Johansson et al. (in press) [6] for one- and two-dimensional direct heat conduction problems, respectively, with the sources placed outside the space domain of interest. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate and stable results can be obtained efficiently with small computational cost.
Abstract:
We investigate an application of the method of fundamental solutions (MFS) to the one-dimensional parabolic inverse Cauchy–Stefan problem, where boundary data and the initial condition are to be determined from the Cauchy data prescribed on a given moving interface. In [B.T. Johansson, D. Lesnic, and T. Reeve, A method of fundamental solutions for the one-dimensional inverse Stefan problem, Appl. Math. Model. 35 (2011), pp. 4367–4378], the inverse Stefan problem was considered, where only the boundary data is to be reconstructed on the fixed boundary. We extend the MFS proposed in Johansson et al. (2011) and show that the initial condition can also be simultaneously recovered, i.e. the MFS is appropriate for the inverse Cauchy–Stefan problem. Theoretical properties of the method, as well as numerical investigations, are included, showing that accurate results can be efficiently obtained with small computational cost.
Abstract:
We present a novel numerical method for a mixed initial boundary value problem for the unsteady Stokes system in a planar doubly-connected domain. Using a Laguerre transformation the unsteady problem is reduced to a system of boundary value problems for the Stokes resolvent equations. Employing a modified potential approach we obtain a system of boundary integral equations with various singularities, and we use a trigonometric quadrature method for their numerical solution. Numerical examples are presented showing that accurate approximations can be obtained with low computational cost.
Abstract:
Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, the following thesis considers an extension to the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires the concurrent service of the demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design and validate the method through a set of experiments. A new real-life based multi-objective VRPSDP is studied here, which requires the minimisation of the often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain and the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is defined as a hard combinatorial optimisation problem, therefore an approximation method, Simultaneous Delivery and Pickup method (SDPmethod) is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, where one is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: genetic operator switching mechanism via diversity thresholds, accuracy analysis tool and a new fitness evaluation mechanism. 
This three phase method is proposed to address the shortcoming that exists in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known Salhi and Nagy (1999) benchmark test problems, where the SDPmethod and RouteAlg solutions are compared with the prominent works in the VRPSDP domain. The SDPmethod is demonstrated to be an effective method for solving the multi-objective VRPSDP, and the RouteAlg for the TSPSDP.
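The core feasibility-and-cost evaluation that any VRPSDP solver performs on a candidate route can be sketched as follows. The data layout and function name are illustrative assumptions, not the SDPmethod or RouteAlg API: the vehicle leaves the depot carrying all delivery demand on the route, and at each customer the load falls by the delivery amount and rises by the pickup amount, so capacity must hold on every leg.

```python
import math

# Route evaluation for simultaneous delivery and pickup (illustrative).
# coords: {customer_id: (x, y)} with the depot as id 0;
# deliver/pickup: {customer_id: demand}. All names are assumptions.

def route_cost_and_feasible(route, coords, deliver, pickup, capacity):
    """route: customer ids in visiting order (depot excluded).
    Returns (total distance, capacity-feasible?)."""
    load = sum(deliver[c] for c in route)       # initial delivery load
    if load > capacity:
        return math.inf, False
    dist, feasible, prev = 0.0, True, 0
    for c in route:
        dist += math.dist(coords[prev], coords[c])
        load += pickup[c] - deliver[c]          # concurrent service
        if load > capacity:
            feasible = False
        prev = c
    dist += math.dist(coords[prev], coords[0])  # return to depot
    return dist, feasible

# Tiny example: depot at the origin, three customers on a unit square.
coords = {0: (0.0, 0.0), 1: (0.0, 1.0), 2: (1.0, 1.0), 3: (1.0, 0.0)}
deliver = {1: 2, 2: 2, 3: 2}
pickup = {1: 1, 2: 1, 3: 1}
```

A multi-objective solver would evaluate such routes for every vehicle, aggregating total distance, fleet size and the variation between route distances into the fitness used by the genetic algorithm.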
Abstract:
For communication systems such as the Internet of Things, integrating communication with power supplies is an attractive solution to reduce supply cost. This paper presents a novel method of power/signal dual modulation (PSDM), by which signal transmission is integrated with power conversion. This method takes advantage of the intrinsic ripple initiated in switch mode power supplies as the signal carrier, by which cost-effective communications can be realized. The principles of PSDM are discussed, and two basic dual modulation methods (specifically PWM/FSK and PWM/PSK) are derived. The key points of designing a PWM/FSK system, including topology selection, carrier shape, and carrier frequency, are discussed to provide theoretical guidelines. A practical signal modulation-demodulation method is given, and a prototype system provides experimental results to verify the effectiveness of the proposed solution.
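A minimal software model of the PWM/FSK idea is sketched below: data bits shift the switching (hence ripple) frequency between two values, and the receiver recovers the bits by estimating the ripple frequency in each bit interval. The frequencies, sample rate and zero-crossing demodulator are illustrative assumptions, not the paper's design.

```python
import math

# Toy PWM/FSK model: the converter ripple is the carrier; each data bit
# selects one of two switching frequencies. All numbers are invented.

FS = 1_000_000            # sample rate, Hz
F0, F1 = 50_000, 60_000   # switching frequency for bit 0 / bit 1
BIT_TIME = 0.001          # seconds per bit

def modulate(bits):
    """Synthesise the ripple waveform: one FSK tone per bit."""
    samples = []
    n = int(FS * BIT_TIME)
    for b in bits:
        f = F1 if b else F0
        samples += [math.sin(2 * math.pi * f * i / FS) for i in range(n)]
    return samples

def demodulate(samples):
    """Recover bits by counting zero crossings per bit interval."""
    n = int(FS * BIT_TIME)
    bits = []
    for k in range(0, len(samples), n):
        chunk = samples[k:k + n]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:])
                        if a < 0 <= b or b < 0 <= a)
        freq = crossings / 2 / BIT_TIME   # two crossings per cycle
        bits.append(1 if freq > (F0 + F1) / 2 else 0)
    return bits
```

In a real converter the "waveform" is the switching ripple riding on the DC bus, and the design questions the paper raises (topology, carrier shape, carrier frequency) determine how cleanly this frequency shift can be detected.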
Abstract:
N-doped ZnO/g-C3N4 hybrid core–shell nanoplates have been successfully prepared via a facile, cost-effective and eco-friendly ultrasonic dispersion method for the first time. HRTEM studies confirm the formation of the N-doped ZnO/g-C3N4 hybrid core–shell nanoplates with an average diameter of 50 nm, and the g-C3N4 shell thickness can be tuned by varying the content of loaded g-C3N4. The direct contact of the N-doped ZnO surface and g-C3N4 shell without any adhesive interlayer introduced a new carbon energy level in the N-doped ZnO band gap and thereby effectively lowered the band gap energy. Consequently, the as-prepared hybrid core–shell nanoplates showed greatly enhanced visible-light photocatalytic activity for the degradation of Rhodamine B compared to pure N-doped ZnO and g-C3N4. Based on the experimental results, a mechanism for the N-doped ZnO/g-C3N4 photocatalyst is proposed and discussed. Interestingly, the hybrid core–shell nanoplates possess high photostability. The improved photocatalytic performance is due to a synergistic effect at the interface of the N-doped ZnO and g-C3N4, including the large surface-exposure area, energy band structure and enhanced charge-separation properties. Significantly, the enhanced performance also demonstrates the importance of evaluating new core–shell composite photocatalysts with g-C3N4 as the shell material.