866 results for Boosted regression trees
Abstract:
Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine prior and posterior distributions of the three methods. The results indicate that there are some similarities and dissimilarities between the expert-informed priors of the two experts formulated from the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, access to experts and funding available. This trial reveals that expert knowledge can be important when modelling rare event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.
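As a rough illustration of the indirect predictive idea, an expert's assessed presence probabilities at two values of a habitat variable can be converted into implied logistic-regression coefficients on the logit scale. The sketch below is not any of the paper's three methods; all numbers are hypothetical.

```python
import math

def logit(p):
    """Log-odds transform used by logistic regression."""
    return math.log(p / (1.0 - p))

# Hypothetical elicited judgements: the expert's best guess of the
# probability of rock-wallaby presence at two slope values (degrees).
slope_lo, p_lo = 10.0, 0.2   # gentle slope, low suitability
slope_hi, p_hi = 30.0, 0.7   # steep slope, higher suitability

# Indirect predictive elicitation: solve for the intercept and slope
# coefficient implied by the two (slope, probability) assessments.
beta1 = (logit(p_hi) - logit(p_lo)) / (slope_hi - slope_lo)
beta0 = logit(p_lo) - beta1 * slope_lo

print(f"implied prior means: beta0={beta0:.3f}, beta1={beta1:.3f}")
```

Repeating the exercise over the expert's plausible range for each assessed probability would give a spread, and hence a prior variance, for each coefficient.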
Abstract:
The work was both conceived and constructed in situ within Gnombup Swamp, a seasonal water body at Bremer Bay, Western Australia. The work interacts with site-specific conditions including wind patterns and a datum of seasonal water-level marks. The work is the result of collaboration between soil scientist Paula Deegan and Ian Weir. The installation was documented with a series of 30 still digital photographs, later animated in Microsoft PowerPoint.
Abstract:
Purpose: Progression to the castration-resistant state is the incurable and lethal end stage of prostate cancer, and there is strong evidence that androgen receptor (AR) still plays a central role in this process. We hypothesize that knocking down AR will have a major effect on inhibiting growth of castration-resistant tumors. Experimental Design: Castration-resistant C4-2 human prostate cancer cells stably expressing a tetracycline-inducible AR-targeted short hairpin RNA (shRNA) were generated to directly test the effects of AR knockdown in C4-2 human prostate cancer cells and tumors. Results: In vitro expression of AR shRNA resulted in decreased levels of AR mRNA and protein, decreased expression of prostate-specific antigen (PSA), reduced activation of the PSA-luciferase reporter, and growth inhibition of C4-2 cells. Gene microarray analyses revealed that AR knockdown under hormone-deprived conditions resulted in activation of genes involved in apoptosis, cell cycle regulation, protein synthesis, and tumorigenesis. To ensure that tumors were truly castration-resistant in vivo, inducible AR shRNA expressing C4-2 tumors were grown in castrated mice to an average volume of 450 mm³. In all of the animals, serum PSA decreased, and in 50% of them, there was complete tumor regression and disappearance of serum PSA. Conclusions: Whereas castration is ineffective in castration-resistant prostate tumors, knockdown of AR can decrease serum PSA, inhibit tumor growth, and frequently cause tumor regression. This study is the first direct evidence that knockdown of AR is a viable therapeutic strategy for treatment of prostate tumors that have already progressed to the castration-resistant state.
Abstract:
Focuses on a study that introduced an iterative modeling method combining properties of ordinary least squares (OLS) with hierarchical tree-based regression (HTBR) in transportation engineering. Covers background on OLS and HTBR; a comparison and contrast of OLS and HTBR; and conclusions.
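The contrast between the two ingredients can be sketched in a few lines: OLS fits a single global line, while a tree-based regression partitions the predictor's range and fits a local mean in each region. The example below is a minimal stdlib-only sketch on synthetic data, not the paper's HTBR method.

```python
# Synthetic data with a kinked relationship that a single line fits poorly.
xs = list(range(20))
ys = [1.0 * x if x < 10 else 50.0 for x in xs]
n = len(xs)

# OLS slope and intercept via the normal equations.
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# One-split regression "stump": choose the split minimising squared error.
def sse(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

best = min(range(1, n), key=lambda i: sse(ys[:i]) + sse(ys[i:]))
left_mean = sum(ys[:best]) / best
right_mean = sum(ys[best:]) / (n - best)

ols_sse = sum((intercept + slope * x - y) ** 2 for x, y in zip(xs, ys))
tree_sse = sse(ys[:best]) + sse(ys[best:])
print(f"OLS SSE={ols_sse:.1f}, stump SSE={tree_sse:.1f}")
```

On data like this the stump wins on fit; a hybrid method then exploits both the tree's partitioning and OLS's smooth within-region trends.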
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
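The mechanism can be reproduced in a few lines of simulation: sites generate crashes as Poisson trials (independent Bernoulli events with unequal probabilities), with no structurally "safe" state, yet the zero share still exceeds what a single Poisson fit predicts. This is a minimal sketch, not the paper's experiment; all rates and exposures are hypothetical.

```python
import math
import random

rng = random.Random(42)
n_sites, exposure = 2000, 100   # low exposure: 100 trials per site

counts = []
for _ in range(n_sites):
    # Every site is "unsafe": each pass is an independent Bernoulli trial
    # with its own small crash probability (Poisson trials).
    p = rng.uniform(0.001, 0.02)
    counts.append(sum(1 for _ in range(exposure) if rng.random() < p))

mean = sum(counts) / n_sites
poisson_zero = math.exp(-mean)              # zero share a Poisson fit expects
observed_zero = counts.count(0) / n_sites   # zero share actually generated
print(f"mean={mean:.2f}, observed P(0)={observed_zero:.2f}, "
      f"Poisson P(0)={poisson_zero:.2f}")
```

The observed zero fraction exceeds the Poisson prediction even though no dual-state process was simulated; the "excess" comes from low exposure and heterogeneity in the per-trial probabilities.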
Abstract:
Because of the greenhouse gas emissions implications of the market-dominating electric hot water systems, governments in Australia have implemented policies and programs to encourage the uptake of solar water heaters (SWHs) in the residential market as part of climate change adaptation and mitigation strategies. The cost-benefit analysis that usually accompanies all government policy and program design could be simplistically reduced to the ratio of expected greenhouse gas reductions of a SWH to the cost of a SWH. The national Register of Solar Water Heaters specifies how many renewable energy certificates (RECs) are allocated to complying SWHs according to their expected performance, and hence greenhouse gas reductions, in different climates. Neither REC allocations nor rebates are tied to actual performance of systems. This paper examines the performance of instantaneous gas-boosted solar water heaters installed in new residences in a housing estate in south-east Queensland in the period 2007–2010. The evidence indicates systemic failures in installation practices, resulting in zero solar performance or dramatic underperformance (estimated average 43% solar contribution). The paper details the faults identified, and how these faults were eventually diagnosed and corrected. The impacts of these system failures on end-use consumers are discussed before concluding with a brief overview of areas where further research is required in order to more fully understand whole-of-supply-chain implications.
Abstract:
The economiser is a critical component for efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure. If left unfixed, a leak in a tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
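At its core, a decision-tree analysis of this kind compares the expected cost of each repair timing across the uncertain outcomes. The sketch below is a toy two-branch version, not the paper's method, and every figure in it is hypothetical: "repair now" incurs lost revenue at today's electricity price, while "defer" waits for a forecast lower price but risks the leak spreading to adjacent tubes (an interactive failure) and lengthening the outage.

```python
# Hypothetical inputs for a two-branch repair decision.
price_now, price_later = 80.0, 45.0      # $/MWh; later price is a forecast
output_mw, base_hours = 500.0, 24.0      # plant output and base repair time
p_spread, extra_hours = 0.4, 36.0        # chance leak spreads, added hours

def outage_cost(price, hours):
    """Revenue lost while the station is offline for repairs."""
    return price * output_mw * hours

# Branch 1: repair immediately at today's high price.
repair_now = outage_cost(price_now, base_hours)

# Branch 2: defer; with probability p_spread the leak spreads and the
# outage lengthens, otherwise the base repair suffices at the lower price.
defer = (1 - p_spread) * outage_cost(price_later, base_hours) \
        + p_spread * outage_cost(price_later, base_hours + extra_hours)

decision = "repair now" if repair_now < defer else "defer"
print(f"repair now: ${repair_now:,.0f}; defer: ${defer:,.0f} -> {decision}")
```

A realistic tree would add branches for electricity demand scenarios and multiple candidate repair windows, but the expected-value roll-up at each chance node works the same way.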
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations, respectively.
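The Ornstein–Uhlenbeck case is the benchmark where exact maximum likelihood is tractable: its Gaussian transition density makes the discretely observed process an AR(1), so the MLE of the autoregressive parameters reduces to least squares. The sketch below simulates an OU path exactly and recovers the parameters this way; the particular parameter values are illustrative only.

```python
import math
import random

rng = random.Random(0)

# Ornstein-Uhlenbeck process: dX = kappa*(theta - X) dt + sigma dW.
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.3, 0.1, 20000

# Exact transition: X[t+dt] | X[t] is Gaussian with AR(1) structure.
b_true = math.exp(-kappa * dt)
sd = sigma * math.sqrt((1 - b_true ** 2) / (2 * kappa))

x = [theta]
for _ in range(n):
    x.append(theta + b_true * (x[-1] - theta) + sd * rng.gauss(0, 1))

# MLE for the conditional mean parameters = least squares of x[t+1] on x[t];
# then map the AR(1) coefficient back to kappa and theta.
xs, ys = x[:-1], x[1:]
mx, my = sum(xs) / n, sum(ys) / n
b_hat = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / \
        sum((u - mx) ** 2 for u in xs)
kappa_hat = -math.log(b_hat) / dt
theta_hat = (my - b_hat * mx) / (1 - b_hat)
print(f"kappa_hat={kappa_hat:.2f}, theta_hat={theta_hat:.2f}")
```

For processes like the Cox–Ingersoll–Ross model the transition density is non-Gaussian (noncentral chi-squared), which is where the approximate procedures the article evaluates come into play.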