957 results for on-ramp queue estimation
Abstract:
Well, it has been Clem 7 month here in Brisbane, and my impression is “so far, so good!” For those of you who know Brisbane, the four-lane twin Clem Jones Tunnel (M7) is approximately 4.5 km long and connects Ipswich Road (A7) at the Princess Alexandra Hospital on the south side with Bowen Bridge Road (A3) at the Royal Brisbane Hospital on the north side. There are also south access ramps to the Pacific Motorway and east access ramps to Shafston Avenue (headed to/from Wynnum). Brisbanites have been enjoying a three-week no-toll taste test, and I passed through it one evening with minimal fuss. The tunnel seems to have eased the congestion at the Stanley Street on-ramp to the Pacific Motorway quite a bit, and along Ipswich Road – Main Street through the ‘Gabba. One must watch the signage carefully, but once we get used to the infrastructure, this is unlikely to be a problem. It will be interesting to see how traffic behaves once the system settles after tolling, which will likely have commenced by the time you read this. I believe the passenger car toll is about $4.20 one way, but it saves passing through about 24 signalised intersections.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models provide little insight into the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their output validated against data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis, rather than the empiricism of the macroscopic models currently used, needed to underlie the form of these models, and the models needed to be adaptable to variable operating conditions so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all observed manoeuvres were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb-lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. The relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which approach infinity at capacity. Minor stream delays were shown to be lower when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and lower still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure, and the model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time, is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance and to provide further insight into the nature of operations.
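As a minimal illustration of the gap-acceptance machinery referred to above, the sketch below evaluates Cowan's M3 headway parameters and the standard absolute-priority merge capacity formula under M3 major-stream headways. The flow values, the linear bunching relation, and the function names are illustrative assumptions; the limited-priority and lane-changing adjustments developed in the thesis are not included.

```python
import math

def m3_parameters(q, delta=1.0, alpha=None):
    """Cowan's M3 headway model parameters for one traffic stream.

    q      -- flow (veh/s)
    delta  -- minimum (bunched) headway in seconds (the 1 s value cited above)
    alpha  -- proportion of free (unbunched) vehicles; a simple linear
              proportion-vs-flow relation is assumed here if not supplied
    """
    if alpha is None:
        # Illustrative assumption only; the thesis calibrates separate
        # proportion-vs-flow relationships for signalised and unsignalised
        # upstream intersections.
        alpha = max(0.0, 1.0 - delta * q)
    lam = alpha * q / (1.0 - delta * q)   # decay rate of free headways
    return alpha, lam

def merge_capacity(q_major, t_c, t_f, delta=1.0):
    """Absolute-priority gap-acceptance capacity (veh/s) of the merging stream
    under M3 major-stream headways. The limited-priority merge described in
    the abstract reduces the effective major-stream flow; this sketch shows
    the standard form only.
    """
    alpha, lam = m3_parameters(q_major, delta)
    return (q_major * alpha * math.exp(-lam * (t_c - delta))
            / (1.0 - math.exp(-lam * t_f)))

# Example: 1200 veh/h kerb-lane flow, 2.0 s critical gap, 1.1 s follow-on time
q = 1200 / 3600.0
print(f"on-ramp capacity: {merge_capacity(q, t_c=2.0, t_f=1.1) * 3600:.0f} veh/h")
```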
Abstract:
Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches to the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM. Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education, was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean BMI (±SD) = 34.2±7.0 kg/m2). The treatment effect showed reductions in total fat of 1.4% and saturated fat of 0.9% of energy intake, in body weight of 0.7 kg, and in waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group. The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in interpreting the results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges experienced when measuring changes in absolute diet using an FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes. Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age = 64.7±3.8 years; BMI = 33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). The Nutricam device was found to be acceptable among subjects; however, compared to the EFR, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently omitted from Nutricam records; however, forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam.
Modifications were made to the method to allow clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4. Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age = 24.7±9.1 years; BMI = 21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age = 23.3±5.1 years; BMI = 20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57 kJ/day vs. 274 kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake. Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake were evaluated in a sample of 10 adults (6 males; age = 61.2±6.9 years; BMI = 31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with a ratio of EI:TEE of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) between the two dietary measures were found for energy and all macronutrients except fat (r=0.24).
High agreement was observed between dietitians for estimates of energy and macronutrient intake derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods. This research program explored two novel approaches, each utilising a distinct technology, to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record. The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
Abstract:
An online survey was conducted to investigate the views and experiences of Australian traffic and transport professionals regarding practical problems and issues with trip generation and trip chaining for use in Transport Impact Assessment (TIA). Findings from this survey revealed a shortage of appropriate data for trip generation estimation for use in TIAs in Australia. Establishing a National Trip Generation Database (NTGD), with a centralised organisation responsible for collecting and publishing trip generation data and funded by federal and state government contributions, was found to be the most accepted solution for resolving this shortage, together with providing national standards and guidelines for trip generation definitions, data collection methodology, and the TIA preparation process based on updated research. Finally, the study recognised the importance of trip chaining effects on trip generation estimation and identified the land uses most commonly subject to trip chaining in the context of TIA.
Early mathematical learning: Number processing skills and executive function at 5 and 8 years of age
Abstract:
This research investigated differences and associations in performance on number processing and executive function tasks for children attending primary school in a large Australian metropolitan city. In a cross-sectional study, 25 children in the first full-time year of school (Prep; mean age = 5.5 years) and 21 children in Year 3 (mean age = 8.5 years) completed three number processing tasks and three executive function tasks. Year 3 children consistently outperformed the Prep year children on measures of accuracy and reaction time on the tasks of number comparison, calculation, shifting, and inhibition, but not on number line estimation. The components of executive function (shifting, inhibition, and working memory) showed different patterns of correlation with performance on number processing tasks across the early years of school. Findings could be used to enhance teachers’ understanding of the role of the cognitive processes employed by children in numeracy learning, and so inform teachers’ classroom practices.
Abstract:
Spotted gum dominant forests occur from Cooktown in northern Queensland (Qld) to Orbost in Victoria (Boland et al. 2006). These forests are commercially very important, with spotted gum the most commonly harvested hardwood timber in Qld and one of the most important in New South Wales (NSW). Spotted gum has a wide range of end uses, from solid wood products through to power transmission poles, and generally has excellent sawing and timber qualities (Hopewell 2004). The private native forest resource in southern Qld and northern NSW is a critical component of the hardwood timber industry (Anon 2005, Timber Qld 2006), and currently half or more of the native forest timber resource harvested in northern NSW and Qld is sourced from private land. However, in many cases productivity on private lands is well below what could be achieved with appropriate silvicultural management. This project provides silvicultural management tools to assist extension staff, land owners and managers in the south-east Qld and north-eastern NSW regions. The intent was that this would lead to improvement of the productivity of the private estate through implementation of appropriate management. The other intention of this project was to implement a number of silvicultural experiments and demonstration sites to provide data on growth rates of managed and unmanaged forests, so that landholders can make informed decisions on the future management of their forests. To assist forest managers and improve the ability to predict forest productivity in the private resource, the project has developed:
• A set of spotted gum specific silvicultural guidelines for timber production on private land that cover both silvicultural treatment and harvesting. The guidelines were developed for extension officers and property owners.
• A simple decision support tool, referred to as the spotted gum productivity assessment tool (SPAT), that allows estimation of:
1. Tree growth productivity on specific sites. Estimation is based on the analysis of site and growth data collected from a large number of yield and experimental plots on Crown land across a wide range of spotted gum forest types. Growth algorithms were developed using tree growth and site data, and the algorithms were used to formulate basic economic predictors.
2. Pasture development under a range of tree stockings and the expected livestock carrying capacity at nominated tree stockings for a particular area.
3. Above-ground tree biomass and carbon stored in trees (a generic illustration follows this abstract).
• A series of experiments in spotted gum forests on private lands across the study area to quantify growth and to provide measures of the effect of silvicultural thinning and different agro-forestry regimes.
The adoption and use of these tools by farm forestry extension officers and private land holders, in both field operations and training exercises, will over time improve the commercial management of spotted gum forests for both timber and grazing. Future measurement of the experimental sites at ages five, 10 and 15 years will provide longer term data on the effects of various stocking rates and thinning regimes and facilitate modification and improvement of these silvicultural prescriptions.
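As a generic illustration of the kind of biomass and carbon estimate mentioned in point 3 above, the sketch below applies a simple allometric equation. The coefficients, the 0.5 carbon fraction, and the stand example are placeholder assumptions, not the calibrated SPAT relationships.

```python
def above_ground_biomass_kg(dbh_cm, a=0.0673, b=2.5):
    """Generic allometric estimate of above-ground tree biomass (kg) from
    diameter at breast height (cm): AGB = a * DBH**b. The coefficients here
    are placeholders, not the calibrated SPAT relationships."""
    return a * dbh_cm ** b

def carbon_stock_kg(dbh_cm, carbon_fraction=0.5):
    """Carbon stored in a tree, assuming roughly half of dry biomass is carbon."""
    return carbon_fraction * above_ground_biomass_kg(dbh_cm)

# Example: a stand of 120 stems/ha with 30 cm mean DBH
stems_per_ha = 120
print(f"stand carbon: {stems_per_ha * carbon_stock_kg(30) / 1000:.1f} t/ha")
```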
Abstract:
The simultaneous state and parameter estimation problem for a linear discrete-time system with unknown noise statistics is treated as a large-scale optimization problem. The a posteriori probability density function is maximized directly with respect to the states and parameters, subject to the constraint of the system dynamics. The resulting optimization problem is too large for any of the standard non-linear programming techniques, and hence a hierarchical optimization approach is proposed. It turns out that the states can be computed at the first level for given noise and system parameters; these, in turn, are to be modified at the second level. The states are to be computed from a large system of linear equations, and two solution methods are considered for solving these equations, limiting the horizon to a suitable length. The resulting algorithm is a filter-smoother, suitable for off-line as well as on-line state estimation for given noise and system parameters. The second-level problem is split into two: one for modifying the noise statistics and the other for modifying the system parameters. An adaptive relaxation technique is proposed for modifying the noise statistics, and a modified Gauss-Newton technique is used to adjust the system parameters.
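As a minimal illustration of the two-level structure described above, the sketch below alternates between a batch MAP state estimate for a scalar first-order system (level one) and simple updates of the noise statistics and the system parameter (level two). The model, the least-squares solver, and the crude relaxation and Gauss-Newton stand-ins are assumptions for illustration, not the algorithm of the abstract.

```python
import numpy as np

def map_states(y, a, c, q, r):
    """First level: MAP state estimate for x_{k+1} = a x_k + w, y_k = c x_k + v,
    obtained by solving the stacked (banded) least-squares problem directly,
    a batch stand-in for the filter-smoother described in the abstract."""
    n = len(y)
    H = np.zeros((2 * n - 1, n))
    z = np.zeros(2 * n - 1)
    for k in range(n):                 # measurement rows, weighted by 1/sqrt(r)
        H[k, k] = c / np.sqrt(r)
        z[k] = y[k] / np.sqrt(r)
    for k in range(n - 1):             # dynamics rows, weighted by 1/sqrt(q)
        H[n + k, k] = -a / np.sqrt(q)
        H[n + k, k + 1] = 1.0 / np.sqrt(q)
    return np.linalg.lstsq(H, z, rcond=None)[0]

def hierarchical_estimate(y, a0=0.5, c=1.0, q0=1.0, r0=1.0, iters=20):
    """Second level: refresh the noise statistics from residuals (a crude
    relaxation step) and nudge the system parameter 'a' with a
    Gauss-Newton-style update. Illustrative only."""
    a, q, r = a0, q0, r0
    for _ in range(iters):
        x = map_states(y, a, c, q, r)                  # level 1
        w = x[1:] - a * x[:-1]                         # process residuals
        v = y - c * x                                  # measurement residuals
        q, r = max(w.var(), 1e-6), max(v.var(), 1e-6)  # noise statistics update
        J = -x[:-1]                                    # d(residual)/d(a)
        a -= (J @ w) / (J @ J)                         # Gauss-Newton step for 'a'
    return a, q, r, x
```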
Abstract:
The problem of identifying the parameters of a beam-moving oscillator system, based on measured time histories of beam strains and displacements, is considered. The governing equations of motion have time-varying coefficients. The parameters to be identified are, however, time invariant and consist of the mass, stiffness and damping characteristics of the beam and oscillator subsystems. A strategy based on a dynamic state estimation method that employs particle filtering algorithms is proposed to tackle the identification problem. The method can take into account measurement noise, guideway unevenness, spatially incomplete measurements, finite element models for the supporting structure and moving vehicle, and imperfections in the formulation of the mathematical models. Numerical illustrations based on synthetic data for a beam-oscillator system are presented to demonstrate the satisfactory performance of the proposed procedure.
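As an illustration of joint state/parameter identification with a particle filter, the sketch below runs a bootstrap filter with the unknown time-invariant parameter appended to the state, on a toy first-order system. The models, noise levels, and the absence of parameter rejuvenation are simplifying assumptions; this is not the beam-oscillator formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(y, propagate, measure, x0, theta0, meas_std=0.05):
    """Bootstrap particle filter with the unknown time-invariant parameter
    appended to the state (one common way to realise joint state/parameter
    identification). Illustrative only.

    propagate(x, theta) -> next-state samples     (process model)
    measure(x)          -> predicted measurements (measurement model)
    x0, theta0          -> arrays of initial state / parameter particles
    """
    x, theta = x0.copy(), theta0.copy()
    n = len(x)
    for yk in y:
        x = propagate(x, theta)                                   # predict
        w = np.exp(-0.5 * ((yk - measure(x)) / meas_std) ** 2)    # weight
        w /= w.sum()
        idx = rng.choice(n, n, p=w)                               # resample
        x, theta = x[idx], theta[idx]
    return theta.mean(), theta.std()

# Toy usage: identify the decay parameter of x_{k+1} = theta * x_k + w_k
true_theta, n_p = 0.8, 2000
xs = [1.0]
for _ in range(100):
    xs.append(true_theta * xs[-1] + 0.01 * rng.standard_normal())
y = np.array(xs[1:]) + 0.05 * rng.standard_normal(100)

est, sd = particle_filter(
    y,
    propagate=lambda x, th: th * x + 0.01 * rng.standard_normal(x.shape),
    measure=lambda x: x,
    x0=rng.normal(1.0, 0.2, n_p),
    theta0=rng.uniform(0.5, 1.0, n_p),
)
print(f"theta estimate: {est:.3f} (sd {sd:.3f})")
```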
Abstract:
The primary aim of this thesis was the evaluation of the perfusion of normal organs in cats using contrast-enhanced ultrasound (CEUS), to serve as a reference for later clinical studies. Little is known about the use of CEUS in cats, especially regarding its safety and the effects of anesthesia on the procedure; thus, secondary aims were to validate the quantitative analysis method, to investigate the biological effects of CEUS on feline kidneys, and to assess the effect of anesthesia on splenic perfusion in cats undergoing CEUS. -- The studies were conducted on healthy, young, purpose-bred cats. CEUS of the liver, left kidney, spleen, pancreas, small intestine, and mesenteric lymph nodes was performed on ten anesthetized male cats to characterize the normal perfusion of these organs. To validate the quantification method, the effects of the placement and size of the region of interest (ROI) on perfusion parameters were investigated using CEUS: three separate sets of ROIs were placed in the kidney cortex, varying in location, size, or depth. The biological effects of CEUS on feline kidneys were estimated by measuring urinary enzymatic activities; analyzing urinary specific gravity, pH, protein, creatinine, albumin, and sediment; and measuring plasma urea and creatinine concentrations before and after CEUS. Finally, the impact of anesthesia on contrast enhancement of the spleen was investigated by imaging cats with CEUS first awake and later under anesthesia, on separate days. -- Typical perfusion patterns were found for each of the studied organs. The liver had a gradual and more heterogeneous perfusion pattern due to its dual blood supply and close proximity to the diaphragm. An obvious and statistically significant difference emerged in perfusion between the kidney cortex and medulla. Enhancement in the spleen was very heterogeneous at the beginning of imaging, indicating focal dissimilarities in perfusion. No significant differences emerged in the perfusion parameters between the pancreas, small intestine, and mesenteric lymph nodes. -- The ROI placement and size were found to influence the quantitative measurements of CEUS. Increasing the depth or the size of the ROI decreased the peak intensity value significantly, suggesting that where and how the ROI is placed does matter in quantitative analyses. -- A significant increase occurred in the urinary N-acetyl-β-D-glucosaminidase (NAG) to creatinine ratio after CEUS. No changes were noted in the serum biochemistry profile after CEUS, with the exception of a small decrease in blood urea concentration. The magnitude of the rise in the NAG/creatinine ratio was, however, less than the circadian variation reported earlier in healthy cats. Thus, the changes observed in the laboratory values after CEUS of the left kidney did not indicate any detrimental effects on the kidneys. Heterogeneity of the spleen was less, and the time of first contrast appearance earlier, in non-anesthetized cats than in anesthetized ones, suggesting that anesthesia increases heterogeneity of the feline spleen in CEUS. -- In conclusion, the results suggest that CEUS can also be used in feline veterinary patients as an additional diagnostic aid. The perfusion patterns found in the imaged organs were typical and similar to those seen earlier in other species, with the exception of the heterogeneous perfusion pattern in the cat spleen. Differences in perfusion between organs corresponded with physiology.
Based on these results, estimation of focal perfusion defects of the spleen in cats should be performed with caution and only after the disappearance of the initial heterogeneity, especially in anesthetized or sedated cats. Finally, these results indicate that CEUS can be used safely to analyze kidney perfusion in cats as well. Future clinical studies are needed to evaluate the full potential of CEUS in feline medicine as a tool for diagnosing lesions in various organ systems.
Abstract:
Effective use of image guidance, by incorporating refractive index (RI) variation in the computational modeling of light propagation in tissue, is investigated to assess its impact on optical-property estimation. With the aid of realistic three-dimensional patient breast models, the variation in RI across the different regions of tissue under investigation is shown, through numerical simulations, to influence the estimation of optical properties in image-guided diffuse optical tomography (IG-DOT). It is also shown that assuming an identical RI for all regions of tissue leads to erroneous estimation of optical properties. A priori knowledge of the RI for the segmented regions of tissue in IG-DOT, which is difficult to obtain in vivo, leads to more accurate estimates of optical properties. Even the inclusion of approximate RI values obtained from the literature for the regions of tissue resulted in better estimates of optical properties, with values comparable to those obtained with correct knowledge of the RI for the different regions of tissue.
Abstract:
Relay selection combined with buffering of packets at relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases end-to-end delays due to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes, namely that a receiver can decode a packet from non-contiguous and unordered portions of the received signal. In it, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off against transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Despite encountering analytically intractable G/GI/1 queues in our system, we also gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
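The sketch below is a highly simplified, slot-based toy of the queue-aware idea described above: each relay ignores a new packet's coded bits with a probability that increases with its queue length, so packets tend to be decoded first by lightly loaded relays. The drop rule, arrival and service rates, and the uniform choice among listening relays are illustrative assumptions and do not reproduce the paper's rateless-coding or fading-channel model.

```python
import random

random.seed(1)
N_RELAYS, N_PACKETS = 3, 10_000
queues = [0] * N_RELAYS       # packets buffered at each relay
decoded_by = [0] * N_RELAYS   # packets first decoded by each relay

def drop_prob(qlen, scale=5.0):
    """Probability of ignoring a packet's coded bits; increases with queue length."""
    return qlen / (qlen + scale)

for _ in range(N_PACKETS):
    # Each relay decides (once per packet, for simplicity) whether to listen.
    listeners = [i for i in range(N_RELAYS) if random.random() > drop_prob(queues[i])]
    if listeners:
        # Among the listening relays, one decodes the packet first; with i.i.d.
        # reception this is modelled here as a uniform choice.
        winner = random.choice(listeners)
        queues[winner] += 1
        decoded_by[winner] += 1
    # Each relay independently forwards one queued packet towards the
    # destination with a fixed success probability per slot.
    for i in range(N_RELAYS):
        if queues[i] > 0 and random.random() < 0.4:
            queues[i] -= 1

print("packets decoded per relay:", decoded_by)
print("final queue lengths:", queues)
```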
Abstract:
[ES] The need to manage and efficiently allocate scarce resources among a company's different operations leads firms to apply Operations Research techniques. This is the case for call centres, an emerging and dynamic sector in constant development. In this sector, workforce management requires predictive techniques to determine the appropriate number of workers and thus avoid, as far as possible, both overstaffing and understaffing. This work focuses on the study of the 112 emergency call centre of Andalusia. Starting from statistical data on the average number of calls received in each time slot, provided by the Junta (regional government) of this Autonomous Community, we formulate and model the problem using Linear Programming. We then solve it with two software packages in order to obtain an optimal distribution of agents that minimises the wage cost, since this accounts for 65% of total operating expenses. Finally, using queueing theory, we examine waiting times in the queue and calculate the target number of agents that not only minimises the wage cost but also improves service quality by keeping waiting times reasonable.
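As a rough illustration of the linear-programming step described above, the sketch below solves a small shift-covering staffing LP with scipy. The shift patterns, hourly agent requirements, and relative wage costs are invented numbers rather than the 112 Andalucía data, and the queueing-theory analysis of waiting times is not included.

```python
import numpy as np
from scipy.optimize import linprog

hours = 24
# Four 8-hour shifts starting at 0h, 6h, 12h and 18h (wrapping past midnight).
shift_starts = [0, 6, 12, 18]
A = np.zeros((hours, len(shift_starts)))
for j, s in enumerate(shift_starts):
    for h in range(s, s + 8):
        A[h % hours, j] = 1                 # shift j covers hour h

# Agents required in each hour of the day (illustrative demand profile).
required = np.array([3, 2, 2, 2, 2, 3, 5, 8, 10, 9, 8, 8,
                     9, 9, 8, 8, 9, 10, 11, 10, 8, 6, 5, 4])
# Relative wage cost per agent on each shift (night premium assumed).
cost = np.array([1.1, 1.0, 1.0, 1.2])

# minimise  cost @ x   subject to  A x >= required,  x >= 0
res = linprog(c=cost, A_ub=-A, b_ub=-required, bounds=[(0, None)] * len(cost))
print("agents per shift (rounded up):", np.ceil(res.x))
print("total wage cost (relative):", res.fun)
```

Rounding the LP relaxation up gives a feasible integer staffing plan; an integer programming solver would be needed for an exactly optimal integer solution.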
Abstract:
We employed ultrasonic transmitters to follow (for up to 48 h) the horizontal and vertical movements of five juvenile (6.8–18.7 kg estimated body mass) bluefin tuna (Thunnus thynnus) in the western North Atlantic (off the eastern shore of Virginia). Our objective was to document the fishes’ behavior and distribution in relation to oceanographic conditions and thus begin to address issues that currently limit population assessments based on aerial surveys. Estimation of the trends in adult and juvenile Atlantic bluefin tuna abundance by aerial surveys, and other fishery-independent measures, is considered a priority. Juvenile bluefin tuna spent the majority of their time over the continental shelf in relatively shallow water (generally less than 40 m deep). Fish used the entire water column in spite of relatively steep vertical thermal gradients (≈24°C at the surface and ≈12°C at 40 m depth), but spent the majority of their time (≈90%) above 15 m and in water warmer than 20°C. Mean swimming speeds ranged from 2.8 to 3.3 knots, and total distance covered from 152 to 289 km (82–156 nmi). Because fish generally remained within relatively confined areas, net displacement was only 7.7–52.7 km (4.1–28.4 nmi). Horizontal movements were not correlated with sea surface temperature. We propose that it is unlikely that juvenile bluefin tuna in this area can detect minor horizontal temperature gradients (generally less than 0.5°C/km) because of the steep vertical temperature gradients (up to ≈0.6°C/m) they experience during their regular vertical movements. In contrast, water clarity did appear to influence behavior, because the fish remained in the intermediate water mass between the turbid and phytoplankton-rich plume exiting Chesapeake Bay (and similar coastal waters) and the clear oligotrophic water east of the continental shelf.
Abstract:
The workshop focused on capacity utilisation estimation using Data Envelopment Analysis (DEA) to assess the fishing capacity of fleets.
Abstract:
Software cost estimation, as the basis for software project feasibility analysis, budgeting, planning, and control, is an important research area in software engineering. Although it has received continuous attention from researchers since the 1960s, software cost estimation remains a difficult problem for the software industry in real-world settings, and there is still considerable room for further research and improvement. In real-world settings, a software cost estimation method must accept incomplete and uncertain information, estimate the likely development effort and schedule, and quantify the uncertainty and risk of its estimates. To be accepted in practice, such a method must also create value for its users, be inexpensive to apply, and be supported both organisationally and technically. The application of cost estimation must, moreover, interact closely with stakeholder negotiation, project planning, and project monitoring as the project evolves. The inability to handle real-world uncertainty, and to overcome the many key difficulties encountered when implementing and applying such methods, is an important reason why the large number of proposed cost estimation methods and models have failed to achieve wide use and influence in practice. This thesis studies software cost estimation systematically from several perspectives, including problem identification, method improvement, method application, and tool support. To address the key difficulties in improving cost estimation practice in real-world settings, and the core problem of handling the uncertainty of software cost estimation, it proposes a relatively systematic and complete solution, oriented towards practical needs, comprising methods, processes, and supporting tools. The main contributions of this work include:
1) A problem model for software cost estimation. A survey of the state of software cost estimation practice in the Chinese software industry was designed and conducted, exploring existing problems and the difficulties faced in improving cost estimation. Combining the literature review with the survey results, and applying methods such as the Technology Acceptance and Use Model and the Results Chain, a problem model for software cost estimation is proposed; it covers technical, human, economic, and managerial perspectives and systematically summarises the problems facing software cost estimation and the potential improvements.
2) An integrated cost estimation method. Unlike current estimation methods, which each rely on some fixed estimation model, this method treats multiple sub-models as useful information inputs and automatically generates, from historical project data, an integrated estimation model adapted to the environment at hand.
3) To address the key difficulties faced when applying cost estimation, the WikiWinWin win-win negotiation method for software projects is proposed, together with a corresponding support tool. It helps project stakeholders correctly understand and use cost estimates, and promotes the effective integration of stakeholder negotiation, cost estimation, and project planning and execution as the project evolves, so that cost estimation plays its role more effectively.
4) An analysis framework and a corresponding comprehensive approach for handling the uncertainty of software cost estimation. The core problem of estimation uncertainty is analysed systematically: existing estimation models are extended with Bayesian networks and Monte Carlo simulation to handle uncertainty in the estimation inputs; integrated cost estimation is used to address the uncertainty of the estimation model itself; and, in the application of cost estimation, the WikiWinWin method is used as the core means of handling uncertainty (a minimal Monte Carlo illustration follows this abstract).
5) A supporting tool for software cost modelling and estimation was designed and developed. Combined with the cost estimation methods proposed above, it forms the tool-supported integrated cost modelling and estimation method (InCoME), which performs well with respect to handling estimation uncertainty, accuracy, stability, objectivity and repeatability, transparency, and automated support for modelling and estimation, and thereby meets the practical needs of enterprises fairly comprehensively.
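As a small illustration of the Monte Carlo treatment of input uncertainty mentioned in contribution 4), the sketch below propagates an uncertain size estimate and an uncertain cost-driver multiplier through a COCOMO-style effort model. The model form, coefficients, and input distributions are illustrative assumptions and are not taken from the InCoME method.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100_000
# Uncertain size estimate (KLOC): optimistic / most likely / pessimistic values.
size_kloc = rng.triangular(40, 55, 90, N)
# Combined effect of uncertain cost drivers, modelled as a lognormal multiplier.
multiplier = rng.lognormal(mean=0.0, sigma=0.15, size=N)

# COCOMO-style effort model: effort = A * size^B * multiplier (person-months).
effort_pm = 2.94 * size_kloc ** 1.10 * multiplier

p10, p50, p90 = np.percentile(effort_pm, [10, 50, 90])
print(f"effort (person-months): P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}")
print(f"probability effort exceeds 300 PM: {(effort_pm > 300).mean():.1%}")
```

Reporting percentile ranges and exceedance probabilities, rather than a single point estimate, is one simple way to express the uncertainty and risk of an estimate to project stakeholders.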