14 results for probability models

in Deakin Research Online - Australia


Relevance:

70.00%

Publisher:

Abstract:

Modeling and simulation are commonly used to improve vehicle performance, to optimize vehicle system design, and to reduce vehicle development time. Vehicle performance can be affected by environmental conditions and driver behavior factors, which are often uncertain and immeasurable. To incorporate the role of environmental conditions in the modeling and simulation of vehicle systems, both real and artificial data are used. Often, real data are unavailable or inadequate for extensive investigations. Hence, it is important to be able to construct artificial environmental data whose characteristics resemble those of the real data for modeling and simulation purposes. However, to produce credible vehicle simulation results, the simulated environment must be realistic and validated using accepted practices. This paper proposes a stochastic model capable of creating artificial environmental factors such as road geometry and wind conditions. In addition, road geometric design principles are employed to modify the created road data, making it consistent with real-road geometry. Two sets of real-road geometry and wind condition data are employed to propose probability models. To assess the goodness of fit of these distributions, Pearson's chi-square and correlation statistics are used. Finally, stochastic models of road geometry and wind conditions (SMRW) are developed to produce realistic road and wind data. The SMRW can be used to predict vehicle performance, energy management, and control strategies over multiple driving cycles and to assist in developing fuel-efficient vehicles.
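
The abstract does not name the fitted distributions, so the sketch below only illustrates the goodness-of-fit step: it fits a hypothetical Weibull distribution to synthetic wind-speed data and applies Pearson's chi-square test. The distribution choice, data, and all parameters are assumptions, not the paper's.

```python
# Minimal sketch: fit a candidate distribution to wind-speed data and test it
# with Pearson's chi-square statistic. The Weibull choice and all values are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = rng.weibull(2.0, size=1000) * 8.0           # stand-in for measured wind speeds (m/s)

# Fit a Weibull distribution (location fixed at 0) to the "observed" data.
shape, loc, scale = stats.weibull_min.fit(wind, floc=0)

# Bin the data and compare observed vs. expected counts.
edges = np.linspace(0, wind.max(), 11)
observed, _ = np.histogram(wind, bins=edges)
cdf = stats.weibull_min.cdf(edges, shape, loc=loc, scale=scale)
expected = len(wind) * np.diff(cdf)
expected *= observed.sum() / expected.sum()        # rescale so totals match

chi2, p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted parameters
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```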

Relevance:

60.00%

Publisher:

Abstract:

Customer retention has become a focal priority. However, the process of implementing an effective retention campaign is complex and depends on a firm's ability to accurately identify both at-risk customers and those worth retaining. Drawing on empirical and simulated data from two online retailers, we evaluate the performance of several parametric and nonparametric churn prediction techniques in order to identify the optimal modeling approach for a given context. Results show that under most circumstances (i.e., varying sample sizes, purchase frequencies, and churn ratios) the boosting technique, a nonparametric method, delivers superior predictability. In contexts where churn is rarer, however, logistic regression prevails. Finally, where the size of the customer base is very small, parametric probability models outperform the other techniques.
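
As a rough illustration of the comparison described (not the authors' data, features, or tuning), the sketch below benchmarks a boosting classifier against logistic regression on a synthetic, imbalanced churn dataset; the class ratio and all settings are assumptions.

```python
# Illustrative comparison of a boosting classifier and logistic regression on
# synthetic imbalanced "churn" data; not the authors' data or model tuning.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)          # ~10% churners (assumed ratio)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("boosting", GradientBoostingClassifier(random_state=0)),
                    ("logistic", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```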

Relevance:

30.00%

Publisher:

Abstract:

Information sharing and exchange is one of the most important issues in artificial intelligence and knowledge-based systems (KBSs), and indeed in the broader field of computer and information technology. This paper deals with a special case of this issue through a case study of information sharing between two well-known heterogeneous uncertain reasoning models: the certainty factor model and the subjective Bayesian method. More precisely, the paper identifies a family of exactly isomorphic transformations between these two uncertain reasoning models. More interestingly, different isomorphic transformation functions in this family can accommodate different degrees to which a domain expert is positive or negative when performing such a transformation task. The direct motivation for the investigation is a practical one. In the past, expert systems relied mainly on these two models to deal with uncertainty, so many stand-alone expert systems that use them are available. If there is a reasonable transformation mechanism between these two uncertain reasoning models, the Internet can be used to couple such pre-existing expert systems together so that the integrated systems can exchange and share useful information with each other, thereby improving their performance through cooperation. The issue of transformation between heterogeneous uncertain reasoning models is also significant for multi-agent systems, because different agents in a multi-agent system may employ different expert systems with heterogeneous uncertain reasoning models for their action selection, and information sharing and exchange between agents is unavoidable. In addition, we clarify the relationship between the certainty factor model and probability theory.
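
The paper's specific family of isomorphic transformations is not given in the abstract. As background only, the sketch below encodes one well-known probabilistic reading of certainty factors (following Heckerman), relating CF(h, e) to prior and posterior probabilities; it is not the transformation family proposed in the paper.

```python
# One well-known probabilistic reading of the certainty factor (after Heckerman),
# shown only as background; this is NOT the isomorphic transformation family
# proposed in the paper.
def cf_from_probs(prior: float, posterior: float) -> float:
    """Certainty factor of a hypothesis given prior P(h) and posterior P(h|e)."""
    if posterior >= prior:
        return (posterior - prior) / (1.0 - prior)   # belief increased
    return (posterior - prior) / prior               # belief decreased

def posterior_from_cf(prior: float, cf: float) -> float:
    """Invert the mapping: recover P(h|e) from P(h) and the certainty factor."""
    if cf >= 0:
        return prior + cf * (1.0 - prior)
    return prior * (1.0 + cf)

# Round trip: a prior of 0.3 raised to a posterior of 0.6 gives CF of about 0.43.
cf = cf_from_probs(0.3, 0.6)
print(cf, posterior_from_cf(0.3, cf))
```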

Relevance:

30.00%

Publisher:

Abstract:

Wildlife managers are often faced with the difficult task of determining the distribution of species, and their preferred habitats, at large spatial scales. This task is even more challenging when the species of concern is in low abundance and/or the terrain is largely inaccessible. Spatially explicit distribution models, derived from multivariate statistical analyses and implemented in a geographic information system (GIS), can be used to predict the distributions of species and their habitats, making them a useful conservation tool. We present two such models: one for a dasyurid, the Swamp Antechinus (Antechinus minimus), and the other for a ground-dwelling bird, the Rufous Bristlebird (Dasyornis broadbenti), both rare species occurring in the coastal heathlands of south-western Victoria. Models were generated using generalized linear modelling (GLM) techniques, with species presence or absence as the response variable and a series of landscape variables derived from GIS layers and high-resolution imagery as the predictors. The most parsimonious model for each species, selected using the Akaike Information Criterion, was then extrapolated spatially in a GIS, and the predicted probability of species presence was used as an index of habitat suitability. Because habitat fragmentation is thought to be one of the major threats to these species, an assessment of the spatial distribution of suitable habitat across the landscape is vital in prescribing management actions to prevent further habitat fragmentation.
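
The sketch below illustrates only the modelling step described: a logistic GLM for presence/absence with AIC-based comparison of candidate predictor sets. The data and variable names are synthetic stand-ins, not the study's GIS layers.

```python
# Sketch of a logistic GLM for presence/absence with AIC-based model selection.
# Predictors and effect sizes are assumed, synthetic values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
elevation = rng.normal(0, 1, n)
veg_cover = rng.normal(0, 1, n)
logit = -0.5 + 1.2 * veg_cover                       # assumed "true" effect
presence = rng.binomial(1, 1 / (1 + np.exp(-logit)))

candidates = {
    "veg_cover only": sm.add_constant(np.column_stack([veg_cover])),
    "veg + elevation": sm.add_constant(np.column_stack([veg_cover, elevation])),
}
fits = {name: sm.GLM(presence, X, family=sm.families.Binomial()).fit()
        for name, X in candidates.items()}
best = min(fits, key=lambda name: fits[name].aic)
print({name: round(f.aic, 1) for name, f in fits.items()}, "-> best:", best)

# The predicted probability of presence would then be mapped across GIS cells
# to index habitat suitability.
```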

Relevance:

30.00%

Publisher:

Abstract:

Reuse of wastewater to irrigate food crops is practiced in many parts of the world and is becoming more commonplace as the competition for, and stresses on, freshwater resources intensify. But there are risks associated with wastewater irrigation, including the possible transmission of pathogens causing infectious disease, both to workers in the field and to consumers buying and eating produce irrigated with wastewater. To manage these risks appropriately we need objective and quantitative estimates of them. This is typically achieved through one of two modelling approaches: deterministic or stochastic. Each parameter in a deterministic model is represented by a single value, whereas in stochastic models probability distributions are used. Stochastic models are theoretically superior because they account for variability and uncertainty, but they are computationally demanding and not readily accessible to water resource and public health managers. We constructed models to estimate the risk of enteric virus infection arising from the consumption of wastewater-irrigated horticultural crops (broccoli, cucumber and lettuce), and compared the resulting risk estimates between the deterministic and stochastic approaches. Several scenarios were tested for each crop, accounting for different concentrations of enteric viruses and different lengths of environmental exposure (i.e. the time between the last irrigation event and harvest, during which the viruses are liable to decay or inactivation). In most of the situations modelled, the two approaches yielded similar estimates of risk (within one order of magnitude). The two methods diverged most markedly, by up to around two orders of magnitude, when there was large uncertainty associated with the estimate of virus concentration and the exposure period was short (1 day). Therefore, in some circumstances deterministic modelling may offer water resource managers a pragmatic alternative to stochastic modelling, but its usefulness as a surrogate will depend upon the level of uncertainty in the model parameters.
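
To make the deterministic/stochastic contrast concrete, the sketch below compares a point-estimate risk with a Monte Carlo version using an exponential dose-response model, a common choice in quantitative microbial risk assessment. The model form, decay rate, and all parameter values are assumptions for illustration, not the paper's.

```python
# Deterministic vs. stochastic (Monte Carlo) estimate of infection risk using an
# exponential dose-response model, P(inf) = 1 - exp(-r * dose). All values are
# assumptions chosen only to illustrate the comparison.
import numpy as np

rng = np.random.default_rng(2)
r = 0.02                      # assumed dose-response parameter
decay = 0.7                   # assumed virus decay rate (per day)
days = 1                      # days between last irrigation and consumption
serving = 10.0                # grams of produce consumed (assumed)

# Deterministic: every parameter is a single point value.
conc_point = 1.0              # viruses per gram on produce at last irrigation
dose = conc_point * serving * np.exp(-decay * days)
risk_det = 1 - np.exp(-r * dose)

# Stochastic: virus concentration treated as lognormal with large uncertainty.
conc = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)
doses = conc * serving * np.exp(-decay * days)
risk_sto = np.mean(1 - np.exp(-r * doses))

print(f"deterministic risk = {risk_det:.3e}, stochastic mean risk = {risk_sto:.3e}")
```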

Relevance:

30.00%

Publisher:

Abstract:

A dichotomy in female extrapair copulation (EPC) behavior, with some females seeking EPC and others not, is inferred if the observed distribution of extrapair young (EPY) over broods differs from a random process at the level of individual offspring (binomial, hypergeometric, or Poisson). A review of the literature shows that such null models are virtually always rejected, often with large effect sizes. We formulate an alternative null model, which assumes that 1) the number of EPCs has a random (Poisson) distribution across females (broods) and that 2) the probability of an offspring being of extrapair origin is zero without any EPC and increases with the number of EPCs. Our brood-level model can accommodate the bimodality of both zero and intermediate rates of EPY typically found in empirical data, and fitting our model to the EPY production of 7 passerine bird species shows evidence of a nonrandom distribution of EPY in only 2 species. We therefore argue that 1) a dichotomy in extrapair mate choice cannot be inferred solely from a significant deviation of the observed distribution of EPY from a random process at the level of offspring and that 2) additional empirical work is required to test the contrasting critical predictions of the classic null models and our alternative.
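
The simulation sketch below illustrates the brood-level null model's two assumptions: Poisson-distributed EPC counts per female, and a per-offspring extrapair probability that is zero without EPC and rises with the EPC count. The saturating functional form and all parameters are my assumptions; the abstract does not specify the authors' function.

```python
# Simulation sketch of the brood-level null model. The saturating form of
# p(EPY | number of EPCs) and all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_broods, brood_size = 1000, 5
mean_epc = 1.2                                   # assumed Poisson mean of EPCs

epc = rng.poisson(mean_epc, n_broods)            # EPCs per female (brood)
p_epy = 1 - np.exp(-0.8 * epc)                   # 0 when epc == 0, then saturating
epy_per_brood = rng.binomial(brood_size, p_epy)

# Many broods with zero EPY (females with no EPC) plus a spread of intermediate
# EPY counts, without assuming a behavioral dichotomy among females.
print(np.bincount(epy_per_brood, minlength=brood_size + 1))
```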

Relevance:

30.00%

Publisher:

Abstract:

There are two common means of worm propagation: scanning vulnerable computers in the network and sending out malicious email attachments. Modeling the propagation of worms can help us understand how worms spread and devise effective defence strategies. Most traditional models simulate only the overall scale of the infected network at each time tick, which makes them unsuitable for examining the propagation process among individual nodes. For this reason, this paper proposes a novel probability matrix to model the propagation mechanism of the two main classes of worms (scanning and email worms) by concentrating on the propagation probability. The objective of this paper is to assess the spreading and to work out an effective scheme against the worms. In order to evaluate the effects of each major component of our probability model, we implement a series of experiments for both worm classes. From the results, network administrators can decide how to reduce the number of vulnerable nodes below a certain threshold to defend against scanning worms, and which highly connected nodes to immunize to prevent the propagation of email worms.
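
The sketch below shows the general idea of propagating per-node infection probabilities through a node-to-node probability matrix over a contact graph. The matrix construction, update rule, and graph are illustrative assumptions, not the paper's model.

```python
# Sketch of modelling worm spread with a propagation probability matrix P, where
# P[i, j] is the chance node i infects node j in one time tick. The random
# contact graph and per-contact probability are assumed values.
import numpy as np

rng = np.random.default_rng(4)
n = 50
contact = rng.random((n, n)) < 0.05               # who can reach whom (e.g. address book)
np.fill_diagonal(contact, False)
P = contact * 0.3                                 # assumed per-contact infection probability

infected = np.zeros(n)
infected[0] = 1.0                                 # patient zero

for tick in range(10):
    # Probability node j escapes infection from every currently infected source.
    escape = np.prod(1.0 - P * infected[:, None], axis=0)
    infected = 1.0 - (1.0 - infected) * escape    # once infected, stays infected
    print(f"tick {tick + 1}: expected infected = {infected.sum():.1f}")
```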

Relevance:

30.00%

Publisher:

Abstract:

This study proposes a simple analytical framework for finding the probability distributions of the number of children and of maternal age at births of various orders, making use of data on age-specific fertility rates by birth order. The proposed framework is applicable to both period and cohort fertility schedules, and its most appealing feature is that it does not require stringent assumptions. To demonstrate its usefulness, the framework is applied to the cohort birth order-specific fertility schedules of India and its different regions, and to the period birth order-specific fertility schedules of several countries, including the United States of America, Russia, and the Netherlands.
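
The sketch below works through one standard demographic identity consistent with the idea described: under a cohort schedule, the cumulated order-k fertility rate equals the proportion of women who ever have a k-th birth, so the parity distribution follows by differencing. The rates are made up, and this is my reading of the idea, not necessarily the paper's exact framework.

```python
# Deriving the distribution of number of children (parity) from cumulated
# cohort order-specific fertility rates. The rates below are invented.
ctfr_by_order = {1: 0.90, 2: 0.75, 3: 0.40, 4: 0.15}   # assumed cumulated rates

# ever[k] = proportion of women with at least k births (ever[0] = 1 by definition).
ever = [1.0] + [ctfr_by_order[k] for k in sorted(ctfr_by_order)] + [0.0]
parity_dist = {k: ever[k] - ever[k + 1] for k in range(len(ever) - 1)}
print(parity_dist)   # e.g. P(0 children) = 1 - 0.90, P(1 child) = 0.90 - 0.75, ...
```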

Relevance:

30.00%

Publisher:

Abstract:

Impact assessments often focus on short-term behavioral responses of animals to human disturbance. However, the cumulative effects caused by repeated behavioral disruptions are of management concern because these effects have the potential to influence individuals' survival and reproduction. We need to estimate individual exposure rates to disturbance to determine cumulative effects. We present a new approach to estimate the spatial exposure of minke whales to whalewatching boats in Faxaflói Bay, Iceland. We used recent advances in spatially explicit capture-recapture modeling to estimate the probability that whales would encounter a disturbance (i.e., whalewatching boat). We obtained spatially explicit individual encounter histories of individually identifiable animals using photo-identification. We divided the study area into 1-km2 grid cells and considered each cell a spatially distinct sampling unit. We used the capture histories of individuals to model and estimate spatial encounter probabilities of individual minke whales across the study area, accounting for heterogeneity in sampling effort. We inferred the exposure of individual minke whales to whalewatching vessels throughout the feeding season by estimating individual whale encounters with vessels using the whale encounter probabilities and spatially explicit whalewatching intensity in the same area, obtained from recorded whalewatching vessel tracks. We then estimated the cumulative time whales spent with whalewatching boats to assess the biological significance of whalewatching disturbances. The estimated exposure levels to boats varied considerably between individuals because of both temporal and spatial variations in the activity centers of whales and the whalewatching intensity in the area. However, although some whales were repeatedly exposed to whalewatching boats throughout the feeding season, the estimated cumulative time they spent with boats was very low. Although whalewatching boat interactions caused feeding disruptions for the whales, the estimated low cumulative exposure indicated that the whalewatching industry in its current state likely is not having any long-term negative effects on vital rates.
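
The skeleton below illustrates only the exposure step: combining an individual's per-cell encounter probabilities (which the study estimates with spatially explicit capture-recapture) with per-cell whalewatching intensity to index expected exposure. The grids are synthetic, and the capture-recapture fitting itself is not shown.

```python
# Skeleton of the exposure calculation: per-whale encounter probabilities per
# grid cell combined with whalewatching effort per cell. All numbers synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_cells = 100                                     # 1-km^2 grid cells

# Per-cell probability that a given whale is encountered there (rows = whales).
whale_p = rng.dirichlet(np.ones(n_cells), size=20)
# Hours of whalewatching activity per cell over the feeding season (assumed).
boat_hours = rng.gamma(shape=0.5, scale=10.0, size=n_cells)

# Expected exposure per whale: sum over cells of p(cell) * boat hours in cell.
exposure = whale_p @ boat_hours
print(f"min/median/max exposure index: {exposure.min():.1f} / "
      f"{np.median(exposure):.1f} / {exposure.max():.1f}")
```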

Relevance:

30.00%

Publisher:

Abstract:

Dynamic treatment regimes are set rules for sequential decision making based on a patient's covariate history. Observational studies are well suited to investigating the effects of dynamic treatment regimes because of the variability in treatment decisions found in them: different physicians make different decisions when faced with similar patient histories. In this article we describe an approach to estimating the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information and indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime within a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, doubly robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
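
The toy sketch below shows only the inverse probability weighting idea that these models build on, at a single decision point: subjects whose observed treatment agrees with a candidate rule are reweighted by the inverse of their treatment probability to estimate the rule's mean utility. The data-generating model, rule, and threshold are assumed; this is not the dynamic regime marginal structural mean model itself.

```python
# Toy one-decision-point illustration of inverse probability weighting (IPW).
# Everything here (data-generating model, regime, threshold) is assumed.
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
x = rng.normal(size=n)                                # patient covariate
p_treat = 1 / (1 + np.exp(-x))                        # physicians treat "sicker" patients more
a = rng.binomial(1, p_treat)                          # observed treatment
y = 1.0 * x + 2.0 * a * (x > 0) + rng.normal(size=n)  # outcome (utility)

# Regime of interest: "treat if x > 0". Keep subjects whose observed treatment
# is consistent with the regime and weight by 1 / P(observed treatment | x).
regime = (x > 0).astype(int)
consistent = (a == regime)
w = 1.0 / np.where(a == 1, p_treat, 1 - p_treat)
value = np.sum(consistent * w * y) / np.sum(consistent * w)
print(f"IPW estimate of mean utility under the regime: {value:.2f}")
```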

Relevance:

30.00%

Publisher:

Abstract:

In this companion article to "Dynamic Regime Marginal Structural Mean Models for Estimation of Optimal Dynamic Treatment Regimes, Part I: Main Content" [Orellana, Rotnitzky and Robins (2010), IJB, Vol. 6, Iss. 2, Art. 7] we present (i) proofs of the claims in that paper, (ii) a proposal for the computation of a confidence set for the optimal index when this lies in a finite set, and (iii) an example to aid the interpretation of the positivity assumption.

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the effect of cash flow and free cash flow on corporate failure in an emerging market, Jordan, using two samples: a matched sample and a cross-sectional time-series (panel data) sample representing 167 Jordanian companies over 1989-2003. LOGIT models are used to outline the relationship between firms' financial health and the probability of default. Our results show that a firm's free cash flow increases the likelihood of corporate failure, whereas a firm's cash flow decreases it. Firms' capital structures are fundamental in predicting default: capital structure is seen as the main factor affecting the probability of default because it affects a firm's ability to access external sources of funds. Jordanian firms depend on short-term debt for both short- and long-term financing.
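
The sketch below shows the general shape of the LOGIT specification described: a default indicator regressed on cash flow, free cash flow, and a leverage proxy. The data, variable names, and coefficients are synthetic placeholders, not the Jordanian sample.

```python
# Minimal sketch of a LOGIT failure model of the kind described. Synthetic data;
# the assumed coefficient signs simply mirror the abstract's story.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
cash_flow = rng.normal(size=n)
free_cash_flow = rng.normal(size=n)
leverage = rng.normal(size=n)
logit = -2.0 - 0.8 * cash_flow + 0.5 * free_cash_flow + 1.0 * leverage
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([cash_flow, free_cash_flow, leverage]))
fit = sm.Logit(default, X).fit(disp=False)
print(fit.params)   # cash flow lowers default risk; free cash flow and leverage raise it
```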

Relevance:

30.00%

Publisher:

Abstract:

Understanding the links between animal space-use and external variables such as habitat and interactions with conspecifics is fundamental to developing effective management measures. In the marine realm, automated acoustic tracking has become a widely used method for monitoring the movement of free-ranging animals, yet researchers generally lack robust methods for analysing the resulting spatial-usage data. In this study, acoustic tracking data from male and female broadnose sevengill sharks Notorynchus cepedianus, collected in a system of coastal embayments in southeast Tasmania, were analysed to examine sex-specific differences in the sharks' coastal space-use and to test novel methods for the analysis of acoustic telemetry data. Sex-specific space-use was analysed in two ways. First, the recently proposed spatial network analysis of between-receiver movements was employed to identify sex-specific space-use patterns. Second, to include the full breadth of temporal information held in the data, movements between receivers were considered as transitions between states of a Markov chain, with the resulting transition probability matrix allowing the relative importance of different parts of the study area to be ranked. Both the spatial network and Markov chain analyses revealed sex-specific preferences for different sites within the study area. The priority areas identified differed between the methods because, in contrast to network analysis, our Markov chain approach preserves the chronological sequence of detections and accounts for both residency periods and movements. In addition to adding to our knowledge of the ecology of a globally distributed apex predator, this study presents a promising new step towards condensing the vast amounts of information collected with acoustic tracking technology into straightforward results that are directly applicable to the management and conservation of any species that meets the assumptions of our model.
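
The sketch below illustrates the Markov chain step: a chronological sequence of receiver detections is turned into a transition probability matrix, and receivers are ranked by the chain's stationary distribution. The detection sequence is made up, and the stationary distribution is just one plausible ranking criterion; the abstract does not state the exact criterion the authors used.

```python
# Build a transition probability matrix from a chronological detection sequence
# and rank receivers by the stationary distribution. Data are invented.
import numpy as np

detections = [0, 0, 1, 2, 2, 2, 1, 0, 3, 3, 2, 1, 1, 0]   # receiver IDs in time order
n = max(detections) + 1

counts = np.zeros((n, n))
for a, b in zip(detections[:-1], detections[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)             # row-stochastic transitions

# Stationary distribution: left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print("relative importance of receivers:", np.round(pi, 2))
```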

Relevance:

30.00%

Publisher:

Abstract:

We empirically compare the reliability of the dividend (DIV) model, the residual income valuation models (CT, GLS), and the abnormal earnings growth (OJ) model. We find that valuation estimates from the OJ model are generally more reliable than those from the other three models, because the residual income valuation models, anchored by book value, get off to a poor start compared with the OJ model, which is led by capitalized next-year earnings. We adopt a sample covering 1985 to 2013 and compare the reliability of the valuation estimates via their mean absolute pricing errors (MAPE) and the corresponding t statistics. We further use the switching regression of Barrios and Blanco to show that the OJ model's valuation estimates have, on average, a higher probability of explaining stock prices than those of the DIV, CT, and GLS models. In addition, the finding that the OJ model yields more reliable estimates is robust to both analyst-based and model-based earnings measures.
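
A minimal sketch of the reliability metric only: the mean absolute pricing error of each model's value estimates relative to observed prices. The |V - P| / P form is one common definition, assumed here, and the numbers are placeholders rather than results from the study.

```python
# Mean absolute pricing error (MAPE) of value estimates against observed prices.
# Placeholder numbers; the scaling by price is an assumed, common convention.
import numpy as np

price = np.array([20.0, 35.0, 50.0, 12.0])            # observed stock prices
estimates = {
    "DIV": np.array([15.0, 30.0, 41.0, 9.0]),
    "OJ":  np.array([19.0, 33.0, 52.0, 11.5]),
}
for model, v in estimates.items():
    mape = np.mean(np.abs(v - price) / price)
    print(f"{model}: MAPE = {mape:.3f}")
```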