97 results for Boosted regression trees
Abstract:
Purpose: Progression to the castration-resistant state is the incurable and lethal end stage of prostate cancer, and there is strong evidence that the androgen receptor (AR) still plays a central role in this process. We hypothesize that knocking down AR will have a major effect on inhibiting growth of castration-resistant tumors. Experimental Design: Castration-resistant C4-2 human prostate cancer cells stably expressing a tetracycline-inducible AR-targeted short hairpin RNA (shRNA) were generated to directly test the effects of AR knockdown in C4-2 human prostate cancer cells and tumors. Results: In vitro expression of AR shRNA resulted in decreased levels of AR mRNA and protein, decreased expression of prostate-specific antigen (PSA), reduced activation of the PSA-luciferase reporter, and growth inhibition of C4-2 cells. Gene microarray analyses revealed that AR knockdown under hormone-deprived conditions resulted in activation of genes involved in apoptosis, cell cycle regulation, protein synthesis, and tumorigenesis. To ensure that tumors were truly castration-resistant in vivo, inducible AR shRNA-expressing C4-2 tumors were grown in castrated mice to an average volume of 450 mm³. In all of the animals, serum PSA decreased, and in 50% of them, there was complete tumor regression and disappearance of serum PSA. Conclusions: Whereas castration is ineffective in castration-resistant prostate tumors, knockdown of AR can decrease serum PSA, inhibit tumor growth, and frequently cause tumor regression. This study provides the first direct evidence that knockdown of AR is a viable therapeutic strategy for treatment of prostate tumors that have already progressed to the castration-resistant state.
Abstract:
Focuses on a study which introduced an iterative modeling method that combines properties of ordinary least squares (OLS) with hierarchical tree-based regression (HTBR) in transportation engineering. Information on OLS and HTBR; Comparison and contrast of OLS and HTBR; Conclusions.
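As a rough illustration of how these two model families can be combined (a generic two-stage hybrid, not necessarily the article's specific iterative algorithm; data and parameters are invented, and scikit-learn is assumed), the sketch below lets a shallow regression tree find the partition structure and then fits an OLS model within each leaf:

```python
# A minimal sketch of an OLS + tree hybrid: a shallow regression tree
# (HTBR-style) partitions the data, then OLS captures the smooth local
# trend inside each leaf. Illustrative only; requires scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(500, 2))
y = np.where(X[:, 0] < 5, 2.0 * X[:, 1], -1.0 * X[:, 1]) + rng.normal(0, 0.5, 500)

# Stage 1: the tree finds the partition (here, the split near x0 = 5).
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, y)
leaves = tree.apply(X)

# Stage 2: a separate OLS fit within each leaf.
models = {leaf: LinearRegression().fit(X[leaves == leaf], y[leaves == leaf])
          for leaf in np.unique(leaves)}

def predict(X_new):
    leaf_ids = tree.apply(X_new)
    return np.array([models[l].predict(x.reshape(1, -1))[0]
                     for l, x in zip(leaf_ids, X_new)])

print(predict(np.array([[2.0, 3.0], [8.0, 3.0]])))  # roughly [6.0, -3.0]
```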
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
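The simulation idea is easy to reproduce in spirit. Below is a minimal sketch (illustrative parameters of my own choosing, not the study's code) showing that Poisson trials, i.e. independent Bernoulli trials with unequal, site-varying probabilities, produce more zeros than a Poisson model with the same mean predicts, without any dual-state mechanism:

```python
# Poisson trials: per-site crash counts as sums of unequal-probability
# Bernoulli trials. Heterogeneity yields "excess" zeros relative to a
# Poisson fit, with no "perfectly safe" state in the generator.
import numpy as np

rng = np.random.default_rng(42)

n_sites = 5000     # hypothetical entities (e.g., road segments)
exposure = 1000    # vehicle passages per observation period

# Small, site-varying crash probabilities per vehicle passage.
p = rng.gamma(shape=0.5, scale=2e-3, size=n_sites)
p = np.clip(p, 0.0, 1.0)   # guard for the binomial sampler

# Crash count per site: a sum of unequal-probability Bernoulli trials.
counts = rng.binomial(exposure, p)

mean = counts.mean()
obs_zero = (counts == 0).mean()
poisson_zero = np.exp(-mean)   # P(zero) under a Poisson fit to the same mean

print(f"mean crash count     : {mean:.3f}")
print(f"observed P(zero)     : {obs_zero:.3f}")
print(f"Poisson-implied P(0) : {poisson_zero:.3f}")
# The observed zero share clearly exceeds the Poisson-implied share: the
# "excess" zeros come from heterogeneity and low exposure, not a safe state.
```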
Abstract:
Because of the greenhouse gas emissions implications of the market-dominating electric hot water systems, governments in Australia have implemented policies and programs to encourage the uptake of solar water heaters (SWHs) in the residential market as part of climate change adaptation and mitigation strategies. The cost-benefit analysis that usually accompanies all government policy and program design could be simplistically reduced to the ratio of the expected greenhouse gas reductions of a SWH to its cost. The national Register of Solar Water Heaters specifies how many renewable energy certificates (RECs) are allocated to complying SWHs according to their expected performance, and hence greenhouse gas reductions, in different climates. Neither REC allocations nor rebates are tied to the actual performance of systems. This paper examines the performance of instantaneous gas-boosted solar water heaters installed in new residences in a housing estate in south-east Queensland from 2007 to 2010. The evidence indicates systemic failures in installation practices, resulting in zero solar performance or dramatic underperformance (estimated average 43% solar contribution). The paper details the faults identified and how these faults were eventually diagnosed and corrected. The impacts of these system failures on end-use consumers are discussed before concluding with a brief overview of areas where further research is required in order to more fully understand whole-of-supply-chain implications.
Abstract:
The economiser is a critical component for the efficient operation of coal-fired power stations. It consists of a large system of water-filled tubes which extract heat from the exhaust gases. When it fails, usually due to erosion causing a leak, the entire power station must be shut down to effect repairs. Not only are such repairs highly expensive, but the overall repair costs are significantly affected by fluctuations in electricity market prices, due to revenue lost during the outage. As a result, decisions about when to repair an economiser can alter the repair costs by millions of dollars. Therefore, economiser repair decisions are critical and must be optimised. However, making optimal repair decisions is difficult because economiser leaks are a type of interactive failure: if left unfixed, a leak in one tube can cause additional leaks in adjacent tubes which will need more time to repair. In addition, when choosing repair times, one also needs to consider a number of other uncertain inputs such as future electricity market prices and demands. Although many different decision models and methodologies have been developed, an effective decision-making method specifically for economiser repairs has yet to be defined. In this paper, we describe a Decision Tree-based method to meet this need. An industrial case study is presented to demonstrate the application of our method.
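To illustrate the style of decision-tree analysis described (all probabilities, prices, and outage durations below are hypothetical assumptions, not the case-study figures), one can compare the expected cost of repairing now against deferring, when a deferred leak may spread to adjacent tubes and future electricity prices are uncertain:

```python
# Toy decision-tree evaluation: repair an economiser leak now, or defer?
# Deferring risks leak propagation (longer outage) under uncertain prices.
# Every number here is a hypothetical placeholder.

p_spread = 0.4                              # chance a deferred leak spreads
outage_short, outage_long = 2.0, 5.0        # repair outage in days
price_now = 80.0                            # $/MWh, known current price
price_later = {"low": 60.0, "high": 120.0}  # uncertain future prices
p_price = {"low": 0.7, "high": 0.3}
lost_mwh_per_day = 10_000

def outage_cost(days, price):
    return days * lost_mwh_per_day * price  # lost revenue during the outage

# Branch 1: repair immediately at today's price, short outage.
cost_now = outage_cost(outage_short, price_now)

# Branch 2: defer; average over price scenarios and leak propagation.
cost_defer = sum(
    p_price[s] * (p_spread * outage_cost(outage_long, price_later[s])
                  + (1 - p_spread) * outage_cost(outage_short, price_later[s]))
    for s in price_later
)

print(f"expected cost, repair now: ${cost_now:,.0f}")
print(f"expected cost, defer     : ${cost_defer:,.0f}")
# Pick the branch with the lower expected cost; even this toy tree swings
# the outcome by millions of dollars, consistent with the paper's motivation.
```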
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations, respectively.
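The Ornstein–Uhlenbeck equation is the tractable benchmark here: its transitional density is Gaussian, so the sampled process is a linear AR(1) and exact maximum likelihood is available in closed form. A minimal sketch with invented parameter values (not figures from the article):

```python
# Exact MLE for the Ornstein-Uhlenbeck equation
#   dX_t = kappa * (theta - X_t) dt + sigma dW_t,
# exploiting its Gaussian transition density. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.3, 0.01, 100_000

# Simulate with the exact discrete-time transition (an AR(1) process):
#   X_{t+dt} = theta + (X_t - theta) * a + eps,  eps ~ N(0, v).
a = np.exp(-kappa * dt)
v = sigma**2 * (1.0 - a**2) / (2.0 * kappa)
x = np.empty(n)
x[0] = theta
for t in range(n - 1):
    x[t + 1] = theta + (x[t] - theta) * a + np.sqrt(v) * rng.standard_normal()

# Conditional MLE reduces to least squares for this Gaussian AR(1).
x0, x1 = x[:-1], x[1:]
a_hat = np.cov(x0, x1, ddof=0)[0, 1] / np.var(x0)
b_hat = x1.mean() - a_hat * x0.mean()
v_hat = np.mean((x1 - b_hat - a_hat * x0) ** 2)

# Map the AR(1) estimates back to the SDE parameters.
kappa_hat = -np.log(a_hat) / dt
theta_hat = b_hat / (1.0 - a_hat)
sigma_hat = np.sqrt(2.0 * kappa_hat * v_hat / (1.0 - a_hat**2))

print(f"kappa: true {kappa}, MLE {kappa_hat:.3f}")
print(f"theta: true {theta}, MLE {theta_hat:.3f}")
print(f"sigma: true {sigma}, MLE {sigma_hat:.3f}")
```

The Cox–Ingersoll–Ross case is harder precisely because its transition density (a noncentral chi-squared) makes the likelihood less convenient, which is why approximate procedures compete there.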
Abstract:
There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
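As background for readers unfamiliar with the building block, the sketch below implements plain LWR as a Gaussian-kernel weighted local linear fit; the receding-horizon extension (RH-LWR) itself is not reproduced here, and the data, kernel, and bandwidth are illustrative assumptions:

```python
# Plain Locally Weighted Regression (LWR): weight training points by a
# Gaussian kernel centred on the query, then fit a local linear model.
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.3):
    """Predict y at x_query with a locally weighted linear fit."""
    # Gaussian kernel weights centred on the query point.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))

    # Weighted least squares on [1, x] features (local linear model).
    A = np.hstack([np.ones((len(X), 1)), X])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return np.array([1.0, *x_query]) @ beta

# Toy usage: learn a nonlinear map from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
print(lwr_predict(X, y, np.array([0.5])))   # close to sin(0.5) ~ 0.479
```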
Abstract:
We consider the problem of how to construct robust designs for Poisson regression models. An analytical expression is derived for robust designs for first-order Poisson regression models where uncertainty exists in the prior parameter estimates. Given certain constraints in the methodology, it may be necessary to extend the robust designs for implementation in practical experiments. With these extensions, our methodology constructs designs which perform similarly, in terms of estimation, to current techniques, and offers the solution in a more timely manner. We further apply this analytic result to cases where uncertainty exists in the linear predictor. The application of this methodology to practical design problems such as screening experiments is explored. Given the minimal prior knowledge that is usually available when conducting such experiments, it is recommended to derive designs robust across a variety of systems. However, incorporating such uncertainty into the design process can be a computationally intense exercise. Hence, our analytic approach is explored as an alternative.
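One standard way to make a design robust to uncertain prior parameter estimates (offered here as a generic illustration, not the article's analytical expression) is to average a design criterion over draws from the prior. The sketch below scores candidate two-point designs for the first-order model log mu(x) = b0 + b1*x by the expected log-determinant of the Fisher information; the prior and the candidate designs are assumptions:

```python
# Pseudo-Bayesian D-criterion for first-order Poisson regression designs:
# average log det of the Fisher information over prior draws of (b0, b1).
import numpy as np

def log_det_info(x_points, weights, b0, b1):
    """log det of the Fisher information for a log-link Poisson model."""
    mu = np.exp(b0 + b1 * x_points)            # GLM weights mu(x)
    F = np.column_stack([np.ones_like(x_points), x_points])
    M = (F * (weights * mu)[:, None]).T @ F    # sum_i w_i mu_i f_i f_i^T
    return np.linalg.slogdet(M)[1]

def robust_criterion(x_points, weights, prior_draws):
    return np.mean([log_det_info(x_points, weights, b0, b1)
                    for b0, b1 in prior_draws])

rng = np.random.default_rng(7)
prior = rng.normal(loc=[0.0, -1.0], scale=[0.5, 0.3], size=(500, 2))

# Two candidate two-point designs on [0, 1], equal weights.
for pts in ([0.0, 1.0], [0.0, 0.5]):
    score = robust_criterion(np.array(pts), np.array([0.5, 0.5]), prior)
    print(pts, f"expected log|M| = {score:.3f}")
```

Averaging the criterion over many prior draws is exactly the kind of computationally intense exercise the abstract mentions, which is what motivates an analytic alternative.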