973 results for cost prediction
Abstract:
An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An on-line transit travel time prediction model is the key component of the proposed method, enabling the selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that allows the already implemented priority plan to be modified or even switched to accommodate changes in traffic conditions. The proposed method utilizes the conventional green extension and red truncation strategies as well as two new strategies: green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base-case scenario with no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay time and improving service regularity, with negligible adverse impacts on non-transit street traffic.
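To make the rule-based logic concrete, the sketch below illustrates how a predicted bus arrival time might drive the choice among the four strategies named above (green extension, red truncation, green truncation, and queue clearance). The thresholds, state representation, and function name are hypothetical illustrations, not the decision rules from the paper.

```python
# Hypothetical sketch of rule-based TSP plan selection driven by a predicted
# bus arrival time. Thresholds and signal states are illustrative only and
# are not taken from the paper.

def select_tsp_plan(predicted_arrival_s, signal_state, time_to_change_s,
                    queue_clear_time_s, max_extension_s=10):
    """Pick a priority plan for a bus predicted to arrive in
    predicted_arrival_s seconds, given the current signal state
    ('green' or 'red') and the seconds until that state ends."""
    if signal_state == "green":
        if predicted_arrival_s <= time_to_change_s:
            return "no_action"        # bus clears on the current green
        if predicted_arrival_s - time_to_change_s <= max_extension_s:
            return "green_extension"  # hold the green a little longer
        return "green_truncation"     # end the green early so the next green
                                      # begins closer to the bus arrival
    else:  # red
        if predicted_arrival_s <= queue_clear_time_s:
            return "queue_clearance"  # start green early to flush the queue
        if predicted_arrival_s <= time_to_change_s:
            return "red_truncation"   # shorten the red phase
        return "no_action"


if __name__ == "__main__":
    print(select_tsp_plan(25, "green", 30, 12))  # -> no_action
    print(select_tsp_plan(38, "green", 30, 12))  # -> green_extension
    print(select_tsp_plan(8, "red", 20, 15))     # -> queue_clearance
```

A re-development feature of the kind described in the abstract would simply re-run such a selection whenever an updated travel time prediction arrives, possibly switching the plan already in place.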
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) within a single model to produce more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nil in EHM, condition indicators are always present, because they are observed and measured as long as an asset is operational and has survived. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data for assets, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
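The abstract describes EHM's structure only qualitatively (a baseline hazard that depends on both time and condition indicators, scaled by a covariate function of operating environment indicators), so the sketch below is a generic, hedged illustration of that structure; the Weibull-style baseline, exponential covariate link, parameter names, and numerical values are assumptions for illustration and are not the thesis's actual specification.

```python
import math

# Illustrative sketch only: a hazard whose baseline depends on both time and a
# condition indicator z_c(t), multiplied by an exponential covariate function
# of operating environment indicators z_e(t). Functional forms and parameters
# are assumed for illustration, not taken from the Explicit Hazard Model.

def baseline_hazard(t, z_c, beta=1.8, eta=1000.0, alpha=0.002):
    """Weibull-style baseline modified by a condition indicator z_c
    (e.g. a vibration level), so measured degradation raises the baseline."""
    return (beta / eta) * (t / eta) ** (beta - 1) * math.exp(alpha * z_c)

def hazard(t, z_c, z_e, gamma=(0.05, 0.01)):
    """Baseline hazard scaled by operating environment indicators z_e
    (e.g. load, ambient temperature) acting as accelerators/decelerators."""
    covariate_term = math.exp(sum(g * z for g, z in zip(gamma, z_e)))
    return baseline_hazard(t, z_c) * covariate_term

def reliability(t, condition_at, environment_at, dt=1.0):
    """Crude numerical R(t) = exp(-cumulative hazard) over sampled histories
    of condition and operating environment measurements."""
    steps = int(t / dt)
    cumulative = sum(hazard(k * dt, condition_at(k * dt), environment_at(k * dt)) * dt
                     for k in range(1, steps + 1))
    return math.exp(-cumulative)

if __name__ == "__main__":
    condition_at = lambda t: 0.5 * t ** 0.5   # slowly rising vibration level
    environment_at = lambda t: (10.0, 25.0)   # constant load and temperature
    print(round(reliability(500.0, condition_at, environment_at), 4))
```

Setting the operating environment coefficients to zero leaves only the time- and condition-driven baseline, which mirrors the abstract's point that environment effects can be nil while condition indicators remain present.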
Abstract:
Transition metal-free magnetism and half-metallicity have recently been the subject of intense research activity due to their potential in spintronics applications. Here we demonstrate, for the first time, via density functional theory that the most recently experimentally realized graphitic carbon nitride (g-C4N3) displays a ferromagnetic ground state. Furthermore, this novel material is predicted to possess an intrinsic half-metallicity never reported to date. Our results highlight a new promising material for realistic metal-free spintronics applications.
Abstract:
Low-cost level crossings are often criticized as being unsafe. Does a SIL (safety integrity level) rating make a railway crossing any safer? This paper discusses how a supporting argument might be made for low-cost level crossing warning devices with lower levels of safety integrity, addressing issues such as risk tolerability and the derivation of tolerable hazard rates for system-level hazards. As part of the design of such systems according to fail-safe principles, the paper considers the assumptions around the pre-defined safe states of existing warning devices and how human factors issues around such states can give rise to additional hazards.
Abstract:
Objective: The aim of the study was to assess the relationship between dimensions of perfectionism and suicide ideation in a tertiary student population in Australia. Method: 405 students completed the General Health Questionnaire (GHQ-28), which includes a subset of questions that can be used to assess suicide ideation, and the Multidimensional Perfectionism Scale. Results: The presence of suicide ideation was associated with higher scores on total perfectionism, two perfectionism dimensions, and total GHQ scores. There were significant differences between participants with high levels of perfectionism and participants with moderate to low levels of perfectionism on a measure of suicide ideation. Neither gender nor age was associated with differences in the scores, with the results indicating that high levels of perfectionism may signal a vulnerability to suicide ideation. Conclusions: Perfectionism is a valued attribute in high-achieving populations. The question needs to be asked, however: at what cost? The findings indicate that high levels of perfectionism may be associated with an increased vulnerability to suicide ideation. Future research is needed to gain a better understanding of the complex interrelationship between personality and temperament, environmental factors, and self-destructive behaviour.
Abstract:
This paper examines the relationship between financial performance and ethical screening intensity of a special class of ethical funds that is rooted in Islamic values – Islamic equity funds (IEFs). These faith-based ethical funds screen investments on compliance with Islamic values where conventional interest expense (riba), gambling (maysir), excessive uncertainty (gharar), and non-ethical (non-halal) products are prohibited. We test whether these extra screens affect the financial performance of IEFs relative to non-Islamic funds. Based on a large survivorship-free international sample of 387 Islamic funds, our results show that IEFs on average underperform conventional funds by 40 basis points per month, or 4.8% per year (supporting the underperformance hypothesis). While Islamic funds do not generally perform better during crisis periods, they outperformed conventional funds during the recent sub-prime crisis (supporting the outperformance hypothesis). Using holdings-based measures for ethical screening intensity, results show IEFs that apply more intensive screening perform worse, suggesting that there is a cost to being ethical.
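For reference, the annualized figure quoted above follows from simple (non-compounded) scaling of the monthly estimate, as the short check below shows; compounding the monthly shortfall would give a slightly different number.

```python
# Check the annualization quoted in the abstract: 40 basis points per month.
monthly_bp = 40
monthly_rate = monthly_bp / 10_000      # 40 bp = 0.40% per month
simple_annual = 12 * monthly_rate       # simple (non-compounded) annualization
print(f"{simple_annual:.1%}")           # 4.8% per year, matching the abstract
```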
Abstract:
Previous studies have enabled exact prediction of probabilities of identity-by-descent (IBD) in random-mating populations for a few loci (up to four or so), with extension to more loci using approximate regression methods. Here we present a precise predictor of multiple-locus IBD using simple formulas based on exact results for two loci. In particular, the probability of non-IBD, X_ABC, at each of the ordered loci A, B, and C can be well approximated by X_ABC = X_AB X_BC / X_B, which generalizes to X_{12...k} = X_{12} X_{23} ... X_{k-1,k} / X^{k-2}, where X is the probability of non-IBD at each locus. Predictions from this chain rule are very precise with population bottlenecks and migration, but are rather poorer in the presence of mutation. From these coefficients, the probabilities of multilocus IBD and non-IBD can also be computed for genomic regions as functions of population size, time, and map distances. An approximate but simple recurrence formula is also developed, which generally is less accurate than the chain rule but is more robust with mutation. Used together with the chain rule, it leads to explicit equations for non-IBD in a region. The results can be applied to the detection of quantitative trait loci (QTL) by computing the probability of IBD at candidate loci in terms of identity-by-state at neighboring markers.
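As a worked illustration of the chain rule quoted above (using the reconstruction X_{12...k} = X_{12} X_{23} ... X_{k-1,k} / X^{k-2}), the function below combines pairwise non-IBD probabilities into a multi-locus one; the numerical inputs are made-up examples, not values from the study.

```python
from math import prod

# Chain-rule approximation for multi-locus non-IBD:
#   X_{12...k} ~ (X_12 * X_23 * ... * X_{k-1,k}) / X**(k-2),
# where X is the single-locus non-IBD probability. Input values are examples.

def non_ibd_chain(pairwise_x, single_x):
    """pairwise_x: non-IBD probabilities for adjacent ordered locus pairs
    (X_12, X_23, ..., X_{k-1,k}); single_x: single-locus non-IBD probability X."""
    k = len(pairwise_x) + 1                 # number of loci
    return prod(pairwise_x) / single_x ** (k - 2)

if __name__ == "__main__":
    X = 0.90                                # non-IBD probability at any one locus
    pairwise = [0.85, 0.85, 0.86]           # X_12, X_23, X_34 (example values)
    print(round(non_ibd_chain(pairwise, X), 4))   # joint non-IBD at the 4 loci
```

For three loci this reduces to X_AB X_BC / X_B with X_B = X, matching the first form of the rule.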
Abstract:
A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBD_L) among individuals without pedigree, given information on surrounding markers and population history. These IBD_L probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBD_L probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is wrong, and less robust when p is wrong. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and in all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
Abstract:
A new deterministic method for predicting simultaneous inbreeding coefficients at three and four loci is presented. The method involves calculating the conditional probability of IBD (identical by descent) at one locus given IBD at the other loci, and multiplying this probability by the prior probability of the latter loci being simultaneously IBD. The conditional probability is obtained by applying a novel regression model, and the prior probability from the theory of digenic measures of Weir and Cockerham. The model was validated for a finite monoecious population mating at random, with a constant effective population size, with or without selfing, and also for an infinite population with a constant intermediate proportion of selfing. We assumed discrete generations. Deterministic predictions were very accurate when compared with simulation results and robust to alternative forms of implementation. These simultaneous inbreeding coefficients were more sensitive to changes in effective population size than in marker spacing. Extensions to predict simultaneous inbreeding coefficients at more than four loci are now possible.
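A minimal numerical sketch of the decomposition described above: the simultaneous (trigenic) IBD probability is the conditional probability of IBD at one locus given IBD at the other two, multiplied by the prior digenic probability that those two loci are simultaneously IBD. The numbers below are placeholders, since the abstract does not give the regression model that supplies the conditional probability.

```python
# Simultaneous inbreeding coefficient at loci A, B, C via the decomposition
#   P(IBD at A, B, C) = P(IBD at A | IBD at B, C) * P(IBD at B, C).
# The conditional term would come from the paper's regression model and the
# prior from digenic measures; the values here are placeholders.

def trigenic_ibd(p_A_given_BC, p_BC):
    return p_A_given_BC * p_BC

if __name__ == "__main__":
    p_BC = 0.04          # prior digenic IBD probability for loci B and C
    p_A_given_BC = 0.35  # conditional IBD at A given B and C are IBD
    print(trigenic_ibd(p_A_given_BC, p_BC))   # simultaneous IBD at three loci
```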
Abstract:
Bus travel time estimation and prediction are two important modelling approaches that can assist transit users in using, and transit providers in managing, the public transport network. Bus travel time estimation can help transit operators understand and improve the reliability of their systems and attract more public transport users. On the other hand, bus travel time prediction is an important component of a traveller information system, which can reduce anxiety and stress for travellers. This paper provides insight into the characteristics of buses in traffic and the factors that influence bus travel time. A critical overview of the state of the art in bus travel time estimation and prediction is provided, and the needs for research in this important area are highlighted. The possibility of using Vehicle Identification Data (VID) to study the relationship between bus and car travel times is also explored.
Abstract:
As one of the measures for decreasing road traffic noise in a city, control of traffic flow and physical distribution is considered. To implement this measure effectively, a model for predicting traffic flow across the citywide road network is necessary. In this study, an existing model named AVENUE was used as the traffic flow prediction model. The traffic flow model was integrated with a road vehicle sound power model and a sound propagation model to establish a new road traffic noise prediction model. As a case study, the prediction model was applied to the road network of Tsukuba city in Japan, and a noise map of the city was produced. To examine the calculation accuracy of the noise map, the calculated noise values at the main roads were compared with measured values. The results indicate that a high-accuracy noise map of the city could be produced using the noise prediction model developed in this study.
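The abstract does not give the sound power or propagation formulas coupled with AVENUE, so the toy sketch below only illustrates the general idea of the integration: per-segment traffic output feeds a sound power estimate, a propagation term attenuates it to the receiver, and contributions are energy-summed. The emission coefficients and the simple spherical-spreading law are assumptions for illustration, not the models used in the study.

```python
import math

# Toy illustration of coupling traffic-flow output with a sound power model
# and a propagation model. Coefficients and the point-source spreading law
# are assumed for illustration, not the models integrated with AVENUE.

def segment_sound_power(flow_veh_per_h, speed_km_h, base_lw=80.0):
    """Hypothetical per-segment A-weighted sound power level (dB),
    increasing with traffic volume and speed."""
    return (base_lw + 10 * math.log10(max(flow_veh_per_h, 1))
            + 20 * math.log10(max(speed_km_h, 1) / 50))

def received_level(lw, distance_m):
    """Free-field spherical spreading: Lp = Lw - 20*log10(r) - 11 dB."""
    return lw - 20 * math.log10(max(distance_m, 1.0)) - 11.0

def total_level(levels_db):
    """Energy (incoherent) summation of several dB contributions."""
    return 10 * math.log10(sum(10 ** (lp / 10) for lp in levels_db))

if __name__ == "__main__":
    # (flow in veh/h, speed in km/h, distance to receiver in m) per road segment
    segments = [(1200, 60, 35.0), (800, 50, 60.0)]
    levels = [received_level(segment_sound_power(q, v), d) for q, v, d in segments]
    print(round(total_level(levels), 1), "dB at the receiver (toy values)")
```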
Abstract:
This report is the second deliverable of the Real Time and Predictive Traveller Information project and the first deliverable of the Freeway Travel Time Information sub-project in the Integrated Traveller Information research Domain of the Smart Transport Research Centre. The primary objective of the Freeway Travel Time Information sub-project is to develop algorithms for real-time travel time estimation and prediction models for freeway traffic. The objective of this report is to review the literature pertaining to travel time estimation and prediction models for freeway traffic.
Abstract:
This report is the fourth deliverable of the Real Time and Predictive Traveller Information project and the first deliverable of the Arterial Travel Time Information sub-project in the Integrated Traveller Information research Domain of the Smart Transport Research Centre. The primary objective of the Arterial Travel Time Information sub-project is to develop algorithms for real-time travel time estimation and prediction models for arterial traffic. The objective of this report is to review the literature pertaining to travel time estimation and prediction models for arterial traffic.
Abstract:
Based on theoretical prediction, a g-C3N4@carbon metal-free oxygen reduction reaction (ORR) electrocatalyst was designed and synthesized by uniformly incorporating g-C3N4 into a mesoporous carbon to enhance the electron transfer efficiency of g-C3N4. The resulting g-C3N4@carbon composite exhibited competitive catalytic activity (a kinetic-limiting current density of 11.3 mA cm−2 at −0.6 V) and superior methanol tolerance compared to a commercial Pt/C catalyst. Furthermore, it demonstrated significantly higher catalytic efficiency (nearly 100% selectivity for the four-electron ORR process) than the Pt/C catalyst. The proposed synthesis route is facile and low-cost, providing a feasible method for the development of highly efficient electrocatalysts.
Abstract:
We demonstrate for the first time, by ab initio density functional calculations and molecular dynamics simulations, that C0.5(BN)0.5 armchair single-walled nanotubes (SWNTs) are gapless semiconductors and can be spontaneously formed via the hybrid connection of graphene/BN nanoribbons (GNRs/BNNRs) at room temperature. The direct synthesis of armchair C0.5(BN)0.5 nanotubes via the hybrid connection of GNRs/BNNRs is predicted to be both thermodynamically and dynamically stable. These novel armchair C0.5(BN)0.5 nanotubes possess enhanced conductance, as observed in GNRs. Additionally, the zigzag C0.5(BN)0.5 SWNTs are narrow-band-gap semiconductors, which may have potential applications in light emission. In light of recent experimental progress and the enhanced degree of control in the synthesis of GNRs and BNNRs, our results highlight an interesting avenue for synthesizing a novel, specific type of C0.5(BN)0.5 nanotube (gapless or narrow direct-gap semiconductor), with potentially important applications in BNC-based nanodevices.