904 results for Constrained ridge regression
Abstract:
The motivation for this study stems from the results reported in the Excellence in Research for Australia (ERA) 2010 report, which showed that only 12 universities performed research at or above international standards, of which the Group of Eight (G8) universities filled the top eight spots. While the performance of universities was based on the number of research outputs, total research income and other quantitative indicators, efficiency or productivity was not considered. The objectives of this paper are twofold: first, to review the research performance of 37 Australian universities using the data envelopment analysis (DEA) bootstrap approach of Simar and Wilson (2007); second, to identify the drivers of productivity by regressing the efficiency scores against a set of environmental variables.
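As a rough illustration of the second-stage analysis only, the sketch below regresses hypothetical DEA efficiency scores for 37 universities on a few made-up environmental variables using plain OLS; the paper itself uses the Simar and Wilson (2007) bootstrap with a truncated regression, and every variable name and value here (student_staff_ratio, phd_share, grant_income) is an illustrative assumption.

```python
import numpy as np

# Hypothetical data: DEA efficiency scores for 37 universities (bounded in (0, 1])
# and illustrative environmental variables; the real study uses the Simar-Wilson
# (2007) bootstrap with a truncated regression in this second stage.
rng = np.random.default_rng(0)
n = 37
student_staff_ratio = rng.uniform(15, 35, n)      # assumed covariate
phd_share = rng.uniform(0.2, 0.7, n)              # assumed covariate
grant_income = rng.uniform(1, 50, n)              # assumed covariate, $M
efficiency = np.clip(0.4 + 0.5 * phd_share
                     - 0.005 * student_staff_ratio
                     + 0.002 * grant_income
                     + rng.normal(0, 0.05, n), 0, 1)

# Simple OLS second stage: efficiency ~ environmental variables
X = np.column_stack([np.ones(n), student_staff_ratio, phd_share, grant_income])
beta, *_ = np.linalg.lstsq(X, efficiency, rcond=None)
print(dict(zip(["const", "student_staff_ratio", "phd_share", "grant_income"], beta)))
```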
Abstract:
Available wind power is stochastic, and its economic and reliable integration into power system operation requires appropriate tools in the optimal power flow (OPF) model. This paper presents an OPF formulation that accounts for the intermittency of wind power. A Weibull distribution is adopted to model the stochastic wind speed and the resulting power distribution. The reserve requirement is evaluated based on the wind distribution and the risk of under- or over-estimating the wind power. In addition, the Wind Energy Conversion System (WECS) is represented by Doubly Fed Induction Generator (DFIG) based wind farms, and the reactive power capability of DFIG-based wind farms is analysed. The study is performed on the IEEE 30-bus system with wind farms located at different buses and with different wind profiles. The reactive power capacity that must be installed in the wind farm to maintain a satisfactory voltage profile under various wind flow scenarios is also demonstrated.
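A minimal sketch of the wind-modelling step described above: wind speeds are drawn from a Weibull distribution and mapped to turbine output through a generic piecewise power curve. The Weibull shape and scale parameters and the cut-in, rated and cut-out speeds are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed Weibull parameters and turbine power curve (illustrative only).
shape, scale = 2.0, 8.0            # Weibull k and c (m/s)
v_ci, v_r, v_co = 3.0, 12.0, 25.0  # cut-in, rated, cut-out speeds (m/s)
p_rated = 2.0                      # rated power (MW)

rng = np.random.default_rng(1)
v = scale * rng.weibull(shape, size=100_000)   # sampled wind speeds

def power(v):
    """Piecewise linear power curve between cut-in and rated speed."""
    p = np.zeros_like(v)
    ramp = (v >= v_ci) & (v < v_r)
    p[ramp] = p_rated * (v[ramp] - v_ci) / (v_r - v_ci)
    p[(v >= v_r) & (v < v_co)] = p_rated
    return p

p = power(v)
print(f"expected output: {p.mean():.3f} MW, P(zero output): {(p == 0).mean():.2%}")
```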
Abstract:
The benefits of applying tree-based methods, as opposed to linear factor analysis, to the modelling of financial assets are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data, which are noisy and often contain non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions of traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance, such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
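To make the tree-based idea concrete, the following sketch fits a small CART model with scikit-learn on synthetic data using hypothetical stock factors (value, momentum, size); it illustrates the method rather than reproducing any model from the cited studies.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic cross-section of stocks with hypothetical factor exposures.
rng = np.random.default_rng(2)
n = 500
value = rng.normal(size=n)       # e.g. book-to-market z-score (assumed)
momentum = rng.normal(size=n)    # e.g. 12-month return z-score (assumed)
size = rng.normal(size=n)        # e.g. log market cap z-score (assumed)

# Non-linear, interaction-heavy "true" return process that CART can capture.
fwd_return = 0.02 * value * (momentum > 0) - 0.01 * size + rng.normal(0, 0.02, n)

X = np.column_stack([value, momentum, size])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=25).fit(X, fwd_return)
print(export_text(tree, feature_names=["value", "momentum", "size"]))
```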
Abstract:
We consider quantile regression models and investigate the induced smoothing method for obtaining the covariance matrix of the regression parameter estimates. We show that the difference between the smoothed and unsmoothed estimating functions in quantile regression is negligible. Detailed and simple computational algorithms for calculating the asymptotic covariance are provided. Intensive simulation studies indicate that the proposed method performs very well. We also illustrate the algorithm by analyzing rainfall–runoff data from Murray Upland, Australia.
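A minimal sketch of the induced-smoothing idea under simplifying assumptions: the quantile regression fit uses a naive Nelder–Mead minimisation of the check loss, the working covariance is taken as H = I/n, and the data are synthetic. The smoothed estimating function replaces the indicator with a normal CDF, and the two are compared at the fitted coefficients; the paper's algorithm differs in detail.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(3)
n, tau = 400, 0.5
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.standard_t(df=3, size=n)
X = np.column_stack([np.ones(n), x])

# Fit the tau-th regression quantile by minimising the check (pinball) loss.
def check_loss(beta):
    r = y - X @ beta
    return np.mean(r * (tau - (r < 0)))

beta_hat = minimize(check_loss, x0=np.zeros(2), method="Nelder-Mead").x

# Unsmoothed vs induced-smoothed estimating functions at beta_hat.
# H is a working covariance of order 1/n, as in the induced smoothing literature.
H = np.eye(2) / n
r = y - X @ beta_hat
h = np.sqrt(np.einsum("ij,jk,ik->i", X, H, X))     # per-observation bandwidths
U_unsmoothed = X.T @ (tau - (r < 0)) / n
U_smoothed = X.T @ (tau - norm.cdf(-r / h)) / n
print("max |difference|:", np.max(np.abs(U_unsmoothed - U_smoothed)))
```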
Abstract:
Genetic recombination is a fundamental evolutionary mechanism promoting biological adaptation. Using engineered recombinants of the small single-stranded DNA plant virus, Maize streak virus (MSV), we experimentally demonstrate that fragments of genetic material only function optimally if they reside within genomes similar to those in which they evolved. The degree of similarity necessary for optimal functionality is correlated with the complexity of intragenomic interaction networks within which genome fragments must function. There is a striking correlation between our experimental results and the types of MSV recombinants that are detectable in nature, indicating that obligatory maintenance of intragenome interaction networks strongly constrains the evolutionary value of recombination for this virus and probably for genomes in general.
Abstract:
There is overwhelming evidence that persistent infection with high-risk human papillomaviruses (HR-HPV) is the main risk factor for invasive cancer of the cervix. Because of this global public health burden, two prophylactic HPV L1 virus-like particle (VLP) vaccines have been developed. While these vaccines have demonstrated excellent type-specific prevention of infection by the homologous vaccine types (high- and low-risk HPV types), no data have been reported on therapeutic effects in people already infected with a low-risk HPV type. In this study we explored whether regression of papillomas induced by cottontail rabbit papillomavirus (CRPV) could be achieved following immunisation of outbred New Zealand White rabbits with CRPV VLPs. Rabbits immunised with CRPV VLPs had papillomas that were significantly smaller than those of the negative control group (P ≤ 0.05). These data demonstrate the therapeutic potential of PV VLPs in a well-understood animal model, with potentially important implications for human therapeutic vaccination against low-risk HPVs. © 2008 Govan et al; licensee BioMed Central Ltd.
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
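As a loose, low-dimensional analogue of the bi-objective idea (not the paper's pipeline), the sketch below estimates a planar pose from landmark observations while adding a weighted magnetometer heading residual to the same least-squares problem; all geometry, noise levels and the weight w_mag are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2-D pose estimation: landmarks observed in the vehicle frame plus a
# magnetometer heading, combined in one weighted (bi-objective) least squares.
# All values below are synthetic; this only illustrates the weighting idea.
landmarks_w = np.array([[5.0, 2.0], [3.0, -4.0], [8.0, 1.0], [6.0, 6.0]])
true_pose = np.array([1.0, 2.0, 0.4])          # x, y, yaw (rad)

def to_vehicle(pose, pts_w):
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])            # world -> vehicle rotation
    return (pts_w - [x, y]) @ R.T

rng = np.random.default_rng(4)
obs_v = to_vehicle(true_pose, landmarks_w) + rng.normal(0, 0.05, (4, 2))
mag_yaw = true_pose[2] + rng.normal(0, 0.02)   # magnetometer heading

def residuals(pose, w_mag=5.0):
    geom = (to_vehicle(pose, landmarks_w) - obs_v).ravel()
    heading = w_mag * np.arctan2(np.sin(pose[2] - mag_yaw),
                                 np.cos(pose[2] - mag_yaw))
    return np.append(geom, heading)

sol = least_squares(residuals, x0=np.zeros(3))
print("estimated pose:", sol.x)
```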
Abstract:
Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, for which a greater quantity of data is generally available. Consequently, available water quality data sets span only relatively short time scales, unlike water quantity data, which constrains the ability to account for the variability associated with pollutant processes and natural phenomena. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, for estimating the uncertainty associated with pollutant build-up prediction using limited data sets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates; the stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the Bayesian approach combined with Monte Carlo simulation provides a powerful tool that makes the best use of the available knowledge in the prediction, and thereby presents a practical solution to counteract the limitations otherwise imposed on water quality modelling.
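A simplified stand-in for the first two regression approaches mentioned above: an ordinary and a weighted least squares fit of a power-law build-up model on synthetic data. The Bayesian weighted least squares and Monte Carlo components are omitted, and the build-up form, values and weights are assumptions rather than data from the study.

```python
import numpy as np

# Illustrative pollutant build-up data: load B (kg/ha) vs antecedent dry days D,
# following a commonly assumed power-law form B = a * D^b (values are synthetic).
rng = np.random.default_rng(5)
D = np.array([1, 2, 3, 5, 7, 10, 14, 21], dtype=float)
B = 2.0 * D**0.35 * np.exp(rng.normal(0, 0.15, D.size))

# Log-linear design matrix: log B = log a + b log D
X = np.column_stack([np.ones(D.size), np.log(D)])
y = np.log(B)

# Ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares with assumed per-sample weights (e.g. inverse variance
# of replicate measurements); the weights here are made up for illustration.
w = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("OLS (log a, b):", beta_ols)
print("WLS (log a, b):", beta_wls)
```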
Abstract:
A satellite-based observation system can continuously or repeatedly generate a user state vector time series that may contain useful information. One typical example is the collection of International GNSS Service (IGS) station daily and weekly combined solutions. Another example is the epoch-by-epoch kinematic position time series of a receiver derived by a GPS real-time kinematic (RTK) technique. Although some multivariate analysis techniques have been adopted to assess the noise characteristics of multivariate state time series, statistical testing has been limited to univariate time series. After reviewing the hypothesis test statistics frequently used in univariate analysis of GNSS state time series, the paper presents a number of T-squared statistics for use in the analysis of multivariate GNSS state time series. These T-squared test statistics take into account the correlation between coordinate components, which is neglected in univariate analysis. Numerical analysis was conducted with the multi-year time series of an IGS station to demonstrate the results of multivariate hypothesis testing in comparison with univariate hypothesis testing. The results demonstrate that, in general, the tests for multivariate mean shifts and outliers tend to reject fewer data samples than the corresponding univariate tests at the same confidence level. It is noted that neither univariate nor multivariate data analysis methods are intended to replace physical analysis; instead, they should be treated as complementary statistical methods for a priori or a posteriori investigations. Physical analysis is subsequently necessary to refine and interpret the results.
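A minimal sketch of the multivariate testing idea, using Hotelling's T-squared statistic to test for a mean shift in a synthetic three-component (E, N, U) coordinate series; the reference position, noise levels and offsets are assumptions, and the paper's actual test statistics may differ in form.

```python
import numpy as np
from scipy.stats import f

# Hotelling T-squared test of whether the mean of a 3-D (E, N, U) coordinate
# time series differs from a reference position; data here are synthetic.
rng = np.random.default_rng(6)
n, p = 365, 3
ref = np.zeros(p)                                   # reference coordinates (m)
series = rng.multivariate_normal(
    mean=[0.002, -0.001, 0.004],                    # small offset to detect (m)
    cov=np.diag([0.003, 0.003, 0.008])**2, size=n)  # daily noise levels (m)

xbar = series.mean(axis=0)
S = np.cov(series, rowvar=False)                    # accounts for component correlation
d = xbar - ref
T2 = n * d @ np.linalg.solve(S, d)

# Under H0, (n - p) / (p * (n - 1)) * T2 follows an F(p, n - p) distribution.
F_stat = (n - p) / (p * (n - 1)) * T2
p_value = f.sf(F_stat, p, n - p)
print(f"T^2 = {T2:.2f}, F = {F_stat:.2f}, p-value = {p_value:.3g}")
```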