416 results for GAMMA-GENERALIZED DISTRIBUTION
in Queensland University of Technology - ePrints Archive
Abstract:
In the study of traffic safety, expected crash frequencies across sites are generally estimated via the negative binomial model, assuming time-invariant safety. Since this assumption may be invalid, Hauer (1997) proposed a modified empirical Bayes (EB) method. Despite the modification, no attempts have been made to examine the generalisable form of the marginal distribution resulting from the modified EB framework. Because the hyper-parameters needed to apply the modified EB method are not readily available, an assessment is lacking of how accurately the modified EB method estimates safety in the presence of time-variant safety and regression-to-the-mean (RTM) effects. This study derives the closed-form marginal distribution and reveals that the marginal distribution in the modified EB method is equivalent to the negative multinomial (NM) distribution, which is essentially the same as the likelihood function used in the random effects Poisson model. As a result, this study shows that the gamma posterior distribution from the multivariate Poisson-gamma mixture can be estimated using the NM model or the random effects Poisson model. This study also shows that the estimation errors from the modified EB method are systematically smaller than those from the comparison group method, because it simultaneously accounts for the RTM and time-variant safety effects. Hence, the modified EB method via the NM model is a generalisable method for estimating safety in the presence of time-variant safety and RTM effects.
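The NM equivalence described in this abstract can be sketched in a few lines. The notation below is ours, not the paper's: counts, exposures, and gamma hyper-parameters are assumed generically.

```latex
% Sketch under assumed notation: y_t \mid \lambda \sim \mathrm{Poisson}(\lambda e_t)
% for periods t = 1,\dots,T with exposures e_t, and \lambda \sim \mathrm{Gamma}(\alpha,\beta).
% Integrating out \lambda gives the joint marginal
P(y_1,\dots,y_T)
  = \frac{\Gamma\!\left(\alpha + \sum_t y_t\right)}{\Gamma(\alpha)\,\prod_t y_t!}
    \cdot \frac{\beta^{\alpha}\,\prod_t e_t^{\,y_t}}
               {\left(\beta + \sum_t e_t\right)^{\alpha+\sum_t y_t}},
% which is the negative multinomial (NM) distribution, while the posterior
\lambda \mid y_1,\dots,y_T
  \sim \mathrm{Gamma}\!\left(\alpha + \textstyle\sum_t y_t,\;
                             \beta + \textstyle\sum_t e_t\right)
% is the gamma posterior underlying the (modified) EB estimate of site safety.
```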
Abstract:
A comprehensive voltage imbalance sensitivity analysis and stochastic evaluation, based on the rating and location of single-phase grid-connected rooftop photovoltaic cells (PVs) in a residential low-voltage distribution network, are presented. The voltage imbalance at different locations along a feeder is investigated. In addition, the sensitivity analysis is performed for voltage imbalance in one feeder when PVs are installed in other feeders of the network. A stochastic evaluation based on the Monte Carlo method is carried out to investigate the risk index of non-standard voltage imbalance in the network in the presence of PVs. The network voltage imbalance characteristic is generalized based on different criteria of PV rating and location and network conditions. Improvement methods are proposed for voltage imbalance reduction, and their efficacy is verified by comparing their risk indices using Monte Carlo simulations.
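The Monte Carlo risk-index idea above can be illustrated with a minimal sketch. Everything network-specific here is an assumption for illustration: the toy per-PV voltage rise (0.03 pu), the number of PV systems per trial, and the 2% unbalance limit are placeholders, not values from the study.

```python
import cmath
import random

A = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def vuf(va, vb, vc):
    """Voltage unbalance factor: |negative-seq| / |positive-seq|, in %."""
    v_pos = (va + A * vb + A * A * vc) / 3
    v_neg = (va + A * A * vb + A * vc) / 3
    return 100 * abs(v_neg) / abs(v_pos)

def simulate_risk(n_trials=5000, limit_pct=2.0, seed=1):
    """Estimate P(VUF > limit) when single-phase PVs of random rating are
    attached to randomly chosen phases (toy network model, not the paper's)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_trials):
        dv = [0.0, 0.0, 0.0]
        for _ in range(rng.randint(1, 10)):      # assumed number of PV systems
            phase = rng.randrange(3)             # random connection phase
            dv[phase] += rng.uniform(0.0, 0.03)  # assumed voltage rise per PV (pu)
        va = (1 + dv[0])
        vb = (1 + dv[1]) * cmath.exp(-2j * cmath.pi / 3)
        vc = (1 + dv[2]) * cmath.exp(2j * cmath.pi / 3)
        if vuf(va, vb, vc) > limit_pct:
            exceed += 1
    return exceed / n_trials
```

The risk index is then simply the fraction of sampled scenarios whose unbalance exceeds the standard limit; a real study would replace the toy voltage-rise model with a load-flow solution of the actual feeder.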
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
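The mechanism behind the simulation result can be sketched in a few lines. This is not the authors' experiment; it is a minimal illustration, assuming gamma-distributed site means, of why heterogeneous low-exposure sites produce more zeros than a single homogeneous Poisson model predicts (Jensen's inequality: E[e^{-mu}] >= e^{-E[mu]}).

```python
import math
import random

def simulate_zero_fraction(n_sites=20000, seed=7):
    """Sites have heterogeneous crash rates (gamma-distributed means).
    Compare the observed share of zero-crash sites with the share a single
    homogeneous Poisson model fitted to the overall mean would predict."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        mu = rng.gammavariate(0.5, 1.0)   # low-exposure, heterogeneous site mean
        # draw a Poisson(mu) count by CDF inversion
        k, p = 0, math.exp(-mu)
        u, cdf = rng.random(), p
        while u > cdf:
            k += 1
            p *= mu / k
            cdf += p
        counts.append(k)
    mean = sum(counts) / n_sites
    observed_zeros = counts.count(0) / n_sites
    poisson_zeros = math.exp(-mean)       # zero share a homogeneous Poisson implies
    return observed_zeros, poisson_zeros
```

Running this, the observed zero fraction exceeds the homogeneous-Poisson prediction even though no dual-state ("perfectly safe") mechanism was simulated, which is exactly the point the abstract makes about "excess" zeros.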
Abstract:
In this paper, the performance of voltage-source converter-based shunt and series compensators used for load voltage control in electrical power distribution systems has been analyzed and compared, when a nonlinear load is connected across the load bus. The comparison has been made based on the closed-loop frequency response characteristics of the compensated distribution system. A distribution static compensator (DSTATCOM) as a shunt device and a dynamic voltage restorer (DVR) as a series device are considered in the voltage-control mode for the comparison. The power-quality problems which these compensators address include voltage sags/swells, load voltage harmonic distortions, and unbalancing. The effect of various system parameters on the control performance of the compensators can be studied using the proposed analysis. In particular, the performance of the two compensators is compared with the strong ac-supply (stiff source) and weak ac-supply (non-stiff source) distribution system. The experimental verification of the analytical results derived has been obtained using a laboratory model of the single-phase DSTATCOM and DVR. A generalized converter topology using a cascaded multilevel inverter has been proposed for the medium-voltage distribution system. Simulation studies have been performed in the PSCAD/EMTDC software to verify the results in the three-phase system.
Abstract:
Secure communications in wireless sensor networks operating under adversarial conditions require providing pairwise (symmetric) keys to sensor nodes. In large-scale deployment scenarios, there is no prior knowledge of the post-deployment network configuration, since nodes may be randomly scattered over a hostile territory. Thus, shared keys must be distributed before deployment to provide each node with a key-chain. For large sensor networks it is infeasible to store a unique key for every other node in the key-chain of a sensor node. Consequently, for secure communication either two nodes have a key in common in their key-chains and have a wireless link between them, or there is a path, called a key-path, between the two nodes where each pair of neighboring nodes on the path has a key in common. The length of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for deciding how many and which keys to assign to each key-chain before the sensor network deployment. In particular, Balanced Incomplete Block Designs (BIBD) and Generalized Quadrangles (GQ) are mapped to obtain efficient key distribution schemes. Performance and security properties of the proposed schemes are studied both analytically and computationally. Comparison to related work shows that the combinatorial approach produces better connectivity with smaller key-chain sizes.
Abstract:
A generalised gamma bidding model is presented, which incorporates many previous models. The log likelihood equations are provided. Using a new method of testing, variants of the model are fitted to some real data for construction contract auctions to find the best fitting models for groupings of bidders. The results are examined for simplifying assumptions, including all those in the main literature. These indicate no one model to be best for all datasets. However, some models do appear to perform significantly better than others and it is suggested that future research would benefit from a closer examination of these.
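The abstract notes that the log-likelihood equations are provided in the paper; as a hedged sketch, under one common parameterisation of the generalised gamma (assumed here, not necessarily the paper's), the log-likelihood of a sample of bids is:

```python
import math

def gengamma_loglik(x, a, d, p):
    """Log-likelihood of the generalised gamma with scale a and shape
    parameters d, p, using the density (one common parameterisation):
        f(x) = p * x**(d-1) * exp(-(x/a)**p) / (a**d * Gamma(d/p))
    """
    n = len(x)
    return (n * (math.log(p) - d * math.log(a) - math.lgamma(d / p))
            + sum((d - 1) * math.log(xi) - (xi / a) ** p for xi in x))
```

The nesting behaviour that lets this family "incorporate many previous models" is visible in the parameters: p = 1 recovers the ordinary gamma, d = p recovers the Weibull, and d = p = 1 the exponential, so candidate bidding models can be compared as restrictions of one likelihood.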
Abstract:
Background There is increasing evidence supporting the concept of cancer stem cells (CSCs), which are responsible for the initiation, growth and metastasis of tumors. CSCs are thus considered the target for future cancer therapies. To achieve this goal, identifying potential therapeutic targets for CSCs is essential. Methods We used a natural product of vitamin E, gamma tocotrienol (gamma-T3), to treat mammospheres and spheres from colon and cervical cancers. Western blotting and real-time RT-PCR were employed to identify the gene and protein targets of gamma-T3 in mammospheres. Results We found that mammosphere growth was inhibited in a dose-dependent manner, with total inhibition at high doses. Gamma-T3 also inhibited sphere growth in two other human epithelial cancers, colon and cervix. Our results suggested that both Src homology 2 domain-containing phosphatase 1 (SHP1) and 2 (SHP2) were affected by gamma-T3, which was accompanied by a decrease in K- and H-Ras gene expression and phosphorylated ERK protein levels in a dose-dependent way. In contrast, expression of the self-renewal genes TGF-beta and LIF, as well as ESR signaling pathways, was not affected by the treatment. These results suggest that gamma-T3 specifically targets SHP2 and the RAS/ERK signaling pathway. Conclusions SHP1 and SHP2 are potential therapeutic targets for breast CSCs, and gamma-T3 is a promising natural drug for future breast cancer therapy.
Abstract:
Background Prostate cancer (PCa) frequently relapses after hormone ablation therapy. Unfortunately, once progressed to the castration-resistant stage, the disease is regarded as incurable, as prostate cancer cells are highly resistant to conventional chemotherapy. Method We recently reported that the two natural compounds polysaccharopeptide (PSP) and gamma-tocotrienols (γ-T3) possessed potent anti-cancer activities through targeting of CSCs. In the present study, using both prostate cancer cell line and xenograft models, we seek to investigate the therapeutic potential of combining γ-T3 and PSP in the treatment of prostate cancer. Result We showed that in the presence of PSP, γ-T3 treatment induces a drastic activation of AMP-activated protein kinase (AMPK). This was accompanied by inactivation of acetyl-CoA carboxylase (ACC), as evidenced by the increased phosphorylation levels at Ser 79. In addition, PSP treatment also sensitized cancer cells toward γ-T3-induced cytotoxicity. Furthermore, we demonstrated for the first time that combination of PSP and γ-T3 treatments significantly reduced the growth of prostate tumors in vivo. Conclusion Our results indicate that PSP and γ-T3 treatments may have a synergistic anti-cancer effect in vitro and in vivo, which warrants further investigation as a potential combination therapy for the treatment of cancer.
Abstract:
A nation-wide passive air sampling campaign recorded concentrations of persistent organic pollutants in Australia's atmosphere in 2012. XAD-based passive air samplers were deployed for one year at 15 sampling sites located in remote/background, agricultural, semi-urban and urban areas across the continent. Concentrations of 47 polychlorinated biphenyls ranged from 0.73 to 72 pg m⁻³ (median of 8.9 pg m⁻³) and were consistently higher at urban sites. The toxic equivalent concentration for the sum of 12 dioxin-like PCBs was low, ranging from below detection limits to 0.24 fg m⁻³ (median of 0.0086 fg m⁻³). Overall, the levels of polychlorinated biphenyls in Australia were among the lowest reported globally to date. Among the organochlorine pesticides, hexachlorobenzene had the highest (median of 41 pg m⁻³) and most uniform concentration (with a ratio of ~5 between the highest and lowest values). Bushfires may be responsible for atmospheric hexachlorobenzene levels in Australia that exceeded Southern Hemispheric baseline levels by a factor of ~4. Organochlorine pesticide concentrations generally increased from remote/background and agricultural sites to urban sites, except for high concentrations of α-endosulfan and DDTs at specific agricultural sites. Concentrations of heptachlor (0.47-210 pg m⁻³), dieldrin (ND-160 pg m⁻³) and the sum of trans- and cis-chlordanes (0.83-180 pg m⁻³) in Australian air were among the highest reported globally to date, whereas those of the sum of DDT and its metabolites (ND-160 pg m⁻³), the sum of α-, β-, γ- and δ-hexachlorocyclohexane (ND-6.7 pg m⁻³) and α-endosulfan (ND-27 pg m⁻³) were among the lowest.
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.