32 results for Minimum Variance Model

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

Continued population growth in Melbourne over the past decade has led to the development of a range of strategies and policies by State and Local levels of government to set an agenda for a more sustainable form of urban development. As the Victorian State government moves towards the development of 'Plan Melbourne', a new metropolitan planning strategy currently being prepared to take Melbourne forward to 2050, the following paper addresses the issue of how new residential built form will impact on, and be accommodated in, existing inner Melbourne activity centres. Working with the prospect of establishing a more compact city in order to meet an inner city target of 90,000 new dwellings (Inner Metropolitan Action Plan - IMAP Strategy 5), the paper presents a 'Housing Variance Model' based on household structure and dwelling type. As capacity is progressively altered through a range of built form permutations, the research assesses the impact on urban morphology in a case study of four Major Activity Centres in the municipality of Port Phillip.

Relevance: 80.00%

Abstract:

This paper proposes a sampling procedure called selected ranked set sampling (SRSS), in which only selected observations from a ranked set sample (RSS) are measured. This paper describes the optimal linear estimation of location and scale parameters based on SRSS, and for some distributions it presents the required tables for optimal selections. For these distributions, the optimal SRSS estimators are compared with the other popular simple random sample (SRS) and RSS estimators. In every situation the estimators based on SRSS are found advantageous at least in some respect, compared to those obtained from SRS or RSS. The SRSS method with errors in ranking is also described. The relative precision of the estimator of the population mean is investigated for different degrees of correlations between the actual and erroneous ranking. The paper reports the minimum value of the correlation coefficient between the actual and the erroneous ranking required for achieving better precision with respect to the usual SRS estimator and with respect to the RSS estimator.
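Since the comparison in this abstract turns on the relative variance of SRS, RSS and SRSS estimators, a small simulation makes the mechanics concrete. The Python sketch below is illustrative only: the paper derives optimal linear (weighted) SRSS estimators with distribution-specific tables, whereas this toy uses unweighted means, exact rankings and made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.normal(10.0, 2.0, size=100_000)  # synthetic population

def srs_mean(n):
    """Mean of a simple random sample of size n."""
    return rng.choice(pop, size=n, replace=False).mean()

def rss_mean(m, ranks=None):
    """For each rank r, draw a fresh set of m units, rank them (here:
    exactly; the paper also treats errors in ranking) and measure only
    the r-th order statistic."""
    ranks = range(m) if ranks is None else ranks
    obs = [np.sort(rng.choice(pop, size=m, replace=False))[r] for r in ranks]
    return np.mean(obs)

reps = 20_000
for name, draw in [("SRS  (n=5)", lambda: srs_mean(5)),
                   ("RSS  (m=5)", lambda: rss_mean(5)),
                   ("SRSS (ranks 1,3,5)", lambda: rss_mean(5, ranks=[0, 2, 4]))]:
    est = np.array([draw() for _ in range(reps)])
    # Unweighted SRSS means can be biased for asymmetric populations;
    # the paper's optimal linear estimators correct for this.
    print(f"{name}: mean = {est.mean():.3f}, var = {est.var():.4f}")
```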

Relevance: 80.00%

Abstract:

In the light of the Victorian State Government's move towards the development of 'Plan Melbourne' - a new metropolitan planning strategy currently being prepared to take Melbourne forward to 2050 - the following paper attempts to address the issue of how an inner city target of 90,000 new dwellings (Inner Metropolitan Action Plan - IMAP Strategy 5) will impact on existing inner Melbourne activity centres. Working with the prospect of establishing a more compact city within the inner Melbourne region, the paper will focus on key suburbs within the Port Phillip area. Working with a 'Housing Variance Model' based on household structure and dwelling type, the paper will attempt to assess the impact on urban morphology as capacity is progressively altered through a range of built form permutations.

Relevance: 80.00%

Abstract:

A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived using the Lagrange method and conjugate gradient techniques, and their implementations avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed from a block adaptation approach and generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed from a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector sensor array is presented. The performance of the MICCG and SICCG algorithms is compared with that of state-of-the-art approaches.
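To make the central idea concrete, namely obtaining MVDR weights without an explicit autocorrelation matrix inversion, here is a generic textbook-style sketch in Python. It replaces the inversion with a conjugate-gradient solve; it is not the paper's MICCG or SICCG update, and the diagonal loading and tolerances are illustrative assumptions.

```python
import numpy as np

def cg_solve(R, d, iters=50, tol=1e-10):
    """Solve R x = d for Hermitian positive-definite R by conjugate gradient."""
    x = np.zeros_like(d)
    r = d - R @ x
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Rp = R @ p
        alpha = rs / np.vdot(p, Rp)
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mvdr_weights(snapshots, steering):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d), with the R^{-1} d term
    obtained by a CG solve rather than inversion. `snapshots` is (sensors, N)."""
    N = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / N
    # small diagonal loading for numerical robustness (illustrative choice)
    R += 1e-6 * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Rinv_d = cg_solve(R, steering.astype(complex))
    return Rinv_d / np.vdot(steering, Rinv_d)

# usage sketch: 8 sensors, 200 snapshots X (complex), steering vector d:
# w = mvdr_weights(X, d)
```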

Relevance: 80.00%

Abstract:

The relationship between mass loss rate and chemical power in flying birds is analysed with regard to water and heat balance. Two models are presented: the first applies where heat loads are moderate, i.e. when heat balance can be achieved by regulating non-evaporative heat loss and evaporative water loss is minimised. The second applies when heat loads are high, non-evaporative heat loss is maximised, and heat balance has to be achieved by regulating evaporative heat loss. The rates of mass loss of two Thrush Nightingales Luscinia luscinia and one Teal Anas crecca were measured at various flight speeds in a wind tunnel. Estimates of metabolic water production indicate that the Thrush Nightingales did not dehydrate during experimental flights; probably, they maintained heat balance without actively increasing evaporative cooling. The Teal, however, most likely had to resort to evaporative cooling, although it may not have dehydrated. Chemical power was estimated from our mass loss rate data using the minimum evaporation model for the Thrush Nightingales and the evaporative heat regulation model for the Teal. For both the Thrush Nightingales and the Teal, the chemical power calculated from our mass loss rate data showed a greater change with speed (a more 'U-shaped' curve) than the theoretically predicted chemical power curves based on aerodynamic theory. The minimum power speeds calculated from our data differed little from theoretical predictions, but maximum range speeds were drastically different. Mass loss rate could potentially be used to estimate chemical power in flying birds under laboratory conditions where temperature and humidity are controlled. However, the assumptions made in the models and the model predictions need further testing.
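The core arithmetic, converting an observed mass loss rate into an estimate of chemical power, can be sketched as follows. The paper's two models additionally balance water and heat budgets; the simplification below assumes pure fat catabolism with no net water loss, uses a standard approximate energy density for fat, and invents the flight numbers, so it is only a rough illustration.

```python
ENERGY_DENSITY_FAT = 39.6e3   # J per g of fat oxidised (approximate standard value)

def chemical_power_fat_only(mass_loss_rate_g_per_s):
    """Chemical power (W) if all in-flight mass loss were oxidised fat.
    Ignores water loss and heat balance, which the paper's models include."""
    return mass_loss_rate_g_per_s * ENERGY_DENSITY_FAT

# e.g. a hypothetical 0.6 g/h mass loss during a wind-tunnel flight:
rate = 0.6 / 3600.0           # g/s
print(f"estimated chemical power ~ {chemical_power_fat_only(rate):.2f} W")
```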

Relevance: 40.00%

Abstract:

Social foragers can alternate between searching for food (producer tactic), and searching for other individuals that have located food in order to join them (scrounger tactic). Both tactics yield equal rewards on average, but the rewards generated by producer are more variable. A dynamic variance-sensitive foraging model predicts that social foragers should increase their use of scrounger with increasing energy requirements and/or decreased food availability early in the foraging period. We tested whether natural variation in minimum energy requirements (basal metabolic rate or BMR) is associated with differences in the use of producer–scrounger foraging tactics in female zebra finches Taeniopygia guttata. As predicted by the dynamic variance-sensitive model, high BMR individuals had significantly greater use of the scrounger tactic compared with low BMR individuals. However, we observed no effect of food availability on tactic use, indicating that female zebra finches were not variance-sensitive foragers under our experimental conditions. This study is the first to report that variation in BMR within a species is associated with differences in foraging behaviour. BMR-related differences in scrounger tactic use are consistent with phenotype-dependent tactic use decisions. We suggest that BMR is correlated with another phenotypic trait which itself influences tactic use decisions.
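The payoff structure underpinning the prediction, equal mean reward but higher variance for the producer tactic, can be illustrated with a toy Monte Carlo shortfall calculation. The sketch omits the dynamic, state-dependent part of the model, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
bouts, reps = 50, 100_000
requirement = 45.0                      # energy needed; expected total intake is 50

def shortfall_risk(payoff_std):
    """Probability that total intake over all bouts falls short of the
    requirement, for a tactic with the given per-bout payoff variability."""
    totals = rng.normal(1.0, payoff_std, size=(reps, bouts)).sum(axis=1)
    return (totals < requirement).mean()

# Equal mean payoff per bout, but producer payoffs are more variable:
print(f"producer  (high variance): P(shortfall) = {shortfall_risk(1.5):.3f}")
print(f"scrounger (low variance):  P(shortfall) = {shortfall_risk(0.5):.3f}")
```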

Relevance: 30.00%

Abstract:

This paper reports on the results of a study aimed at identifying the relative influence of generic and job-specific stressors experienced by a cohort of Australian managers. The results of a regression analysis revealed that both the generic components of the job strain model (JSM) and job-specific stressors were predictive of the strain experienced by participants. However, when looking at the total amount of variance that is explained by the predictor variables, the combined influence of job demand, job control and social support contributed 98 per cent of the explained variance in job satisfaction and 90 per cent of the variance in psychological health. The large amount of variance explained by the JSM suggests that this model provides an accurate account of the work characteristics that contribute to the strain experienced by managers and no augmentation is needed.

Relevance: 30.00%

Abstract:

Efficiently inducing precise causal models that accurately reflect given data sets is the ultimate goal of causal discovery. The algorithms proposed by Dai et al. have demonstrated the ability of the Minimum Message Length (MML) principle to discover Linear Causal Models from training data. To further explore ways of improving efficiency, this paper incorporates Hoeffding Bounds into the learning process. At each step of causal discovery, if a small number of data items is enough to distinguish the better model from the rest, the computation cost is reduced by ignoring the remaining data items. Experiments with data sets from related benchmark models indicate that the new algorithm achieves a speedup over previous work in terms of learning efficiency while preserving discovery accuracy.
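The early-stopping idea is straightforward to sketch: scan data items, maintain the running mean score difference between two candidate models, and stop as soon as the gap exceeds the Hoeffding confidence half-width. The scoring functions, range and delta below are placeholders, not the paper's MML costs.

```python
import math

def hoeffding_eps(n, value_range, delta):
    """Half-width of the Hoeffding confidence interval after n samples of a
    quantity bounded in a range of width `value_range`."""
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def better_model(score_a, score_b, data, value_range=1.0, delta=0.05):
    """Scan data items; return 'A' or 'B' as soon as the running mean
    per-item score gap exceeds the Hoeffding bound, else None."""
    diff_sum = 0.0
    for n, item in enumerate(data, start=1):
        diff_sum += score_a(item) - score_b(item)  # per-item score gap
        mean_diff = diff_sum / n
        if abs(mean_diff) > hoeffding_eps(n, value_range, delta):
            return "A" if mean_diff > 0 else "B"
    return None  # gap never significant on the available data
```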

Relevance: 30.00%

Abstract:

Among the many valuable uses of injury surveillance is the potential to alert health authorities and societies in general to emerging injury trends, facilitating earlier development of prevention measures. Other than road safety, to date, few attempts to forecast injury data have been made, although forecasts have been made of other public health issues. This may in part be due to the complex pattern of variance displayed by injury data. The profile of many injury types displays seasonality and diurnal variance, as well as stochastic variance. The authors undertook development of a simple model to forecast injury into the near term. In recognition of the large numbers of possible predictions, the variable nature of injury profiles and the diversity of dependent variables, it became apparent that manual forecasting was impractical. Therefore, it was decided to evaluate a commercially available forecasting software package for prediction accuracy against actual data for a set of predictions. Injury data for a 4-year period (1996 to 1999) were extracted from the Victorian Emergency Minimum Dataset and were used to develop forecasts for the year 2000, for which data were also held. The forecasts for 2000 were compared to the actual data for 2000 by independent t-tests, and the standard errors of the predictions were modelled by stepwise hierarchical multiple regression using the independent variables of the standard deviation, seasonality, mean monthly frequency and slope of the base data (R = 0.93, R² = 0.86, F(3, 27) = 55.2, p < 0.0001). Significant contributions to the model included the SD (β = 1.60, p < 0.001), mean monthly frequency (β = −0.72, p < 0.002), and the seasonality of the data (β = 0.16, p < 0.02). It was concluded that injury data could be reliably forecast and that commercial software was adequate for the task. Variance in the data was found to be the most important determinant of prediction accuracy. Importantly, automated forecasting may provide a vehicle for identifying emerging trends.
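The workflow this abstract describes, fitting to four years of monthly counts, forecasting the fifth year and comparing forecasts to held-out actuals with a t-test, can be sketched with standard tooling. The study used a commercial package; Holt-Winters exponential smoothing (which handles the trend and seasonality injury profiles display) is a stand-in here, and the series is synthetic.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(42)
idx = pd.date_range("1996-01", periods=60, freq="MS")
season = 20 * np.sin(2 * np.pi * idx.month / 12)          # seasonal component
series = pd.Series(200 + season + rng.normal(0, 8, 60), index=idx)

train, test = series[:48], series[48:]                     # 1996-99 vs 2000
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast = fit.forecast(12)

# Compare forecasts with held-out actuals, as in the abstract:
t, p = stats.ttest_ind(forecast, test)
print(f"t = {t:.2f}, p = {p:.3f}")   # a non-significant p suggests forecasts track actuals
```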

Relevance: 30.00%

Abstract:

This study identifies the environmental and personal characteristics that predict employee outcomes within an Australian public sector organization that had, under New Public Management (NPM), implemented a variety of practices traditionally found in the private sector. These are more results-oriented, and their adoption can be accompanied by increased strain for employees. The current investigation was guided by two complementary theories, the Demand Control Support (DCS) model and Conservation of Resources (COR) theory, and sought to examine the benefits of building on the DCS to include both situation-specific stressors and internal coping resources. Survey responses from 1,155 employees were analysed. The hierarchical regression analyses indicated that both external and employee-centred variables made significant contributions to variations in psychological health, job satisfaction, and organizational commitment. The external resources, work based support and, to a lesser extent, job control, predicted relatively large proportions of the variance in the target variables. The situation-specific stressors, particularly those involving harmful management practices (e.g., insufficient time to do job as well as you would like, lack of recognition for good work), made significant contributions to the outcome measures and generally supported the process of augmenting the generic components of the DCS with more situation-specific variables. In terms of internal resources, problem and emotion-based coping improved the capacity of the model to predict psychological health. The results suggest that the impact of NPM can be ameliorated by incorporating the dimensions of the augmented DCS and coping resources into the change programme.
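A blockwise (hierarchical) regression of this kind is easy to sketch: enter the generic DCS variables first, then the situation-specific stressors, then coping resources, and track the incremental R² each block adds. The column names below are placeholders, not the study's instruments.

```python
import pandas as pd
import statsmodels.api as sm

blocks = [
    ["job_demand", "job_control", "work_support"],   # block 1: generic DCS
    ["time_pressure", "lack_of_recognition"],        # block 2: job-specific stressors
    ["problem_coping", "emotion_coping"],            # block 3: internal resources
]

def hierarchical_r2(df: pd.DataFrame, outcome: str):
    """Fit nested OLS models block by block and report incremental R²."""
    predictors, prev_r2 = [], 0.0
    for i, block in enumerate(blocks, start=1):
        predictors += block
        X = sm.add_constant(df[predictors])
        r2 = sm.OLS(df[outcome], X).fit().rsquared
        print(f"block {i}: R² = {r2:.3f} (+{r2 - prev_r2:.3f})")
        prev_r2 = r2

# usage sketch: hierarchical_r2(survey_df, "psychological_health")
```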

Relevance: 30.00%

Abstract:

The affective content of Subjective Wellbeing (SWB) was investigated in two separate studies. Study 1 involved a representative sample of 478 participants from across Australia aged between 18 and 72 years. This study tested the circumplex model of affect and then determined the minimum set of affects that explain variance in SWB. The model was supported, with most affects congregated around the valence axis. Overall, 64% of the variance in SWB was explained by six Core Affects, indicating that SWB is a highly affective construct. Study 2 tested the relative strength of Core Affect (content, happy and excited) in three separate models of SWB incorporating cognition (seven discrepancies) and all five factors of personality. Using a sample of 854 participants aged between 18 and 86 years, structural equation modelling was used to compare an affective-cognitive driven model of SWB with a personality driven model of SWB and a discrepancy driven model of SWB. The results provide support for an affective-cognitive model, which explained 90% of the variance in SWB. All models confirm that the relationship between SWB, Core Affect and Discrepancies is far stronger than the relationship between personality and SWB. It is proposed that Core Affect and Discrepancies comprise the essence of SWB. Moreover, Core Affect is the driving force behind individual set-point levels in SWB homeostasis.

Relevance: 30.00%

Abstract:

In an attempt to improve automated gene prediction in the untranslated region of a gene, we completed an in-depth analysis of the minimum free energy for 8,689 sub-genetic DNA sequences. We expanded Zhang's classification model and classified each sub-genetic sequence into one of 27 possible motifs. We calculated the minimum free energy for each motif to explore statistical features that correlate with biologically relevant sub-genetic sequences. If biologically relevant sub-genetic sequences fall into distinct free energy quanta, it may be possible to characterize a motif based on its minimum free energy. Proper characterization of motifs can lead to greater understanding of automated gene finding, gene variability and the role DNA structure plays in gene network regulation.

Our analysis determined: (1) the average free energy value for exons, introns and other biologically relevant sub-genetic sequences; (2) that these subsequences do not exist in distinct energy quanta; (3) that introns nevertheless occupy a tightly clustered average minimum free energy quantum compared with all other biologically relevant sub-genetic sequence types; (4) that single exon genes demonstrate higher stability than exons which span the entire coding sequence as part of a multi-exon gene; and (5) that all motif types contain a free energy global minimum at approximately nucleotide position 1,000 before reaching a plateau. These results should be relevant to the biochemist and bioinformatician seeking to understand the relationship between sub-genetic sequences and the information behind them.
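Findings (1)–(3) are essentially a grouped summary over per-sequence minimum free energy values. A minimal sketch of that aggregation step follows, with placeholder column names; computing the MFE values themselves would require a thermodynamic folding package and is not shown.

```python
import pandas as pd

def energy_profile(df: pd.DataFrame) -> pd.DataFrame:
    """Per-motif MFE summary: mean, spread and count of sequences.
    `df` is assumed to hold one row per sequence with columns
    'motif' (one of the 27 classes) and 'mfe' (precomputed energy)."""
    return (df.groupby("motif")["mfe"]
              .agg(["mean", "std", "count"])
              .sort_values("std"))          # tightly clustered motifs first

# usage sketch with a hypothetical file of precomputed values:
# print(energy_profile(pd.read_csv("mfe_by_motif.csv")))
```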

Relevance: 30.00%

Abstract:

This study investigates the determinants of the fertility rate in Taiwan over the period 1966–2001. Consistent with theory, the key explanatory variables in Taiwan's fertility model are real income, infant mortality rate, female education and female labor force participation rate. The test for cointegration is based on the recently developed bounds testing procedure, while the long-run and short-run elasticities are based on the autoregressive distributed lag model. Among our key results, female education and female labor force participation rate are found to be the key determinants of fertility in Taiwan in the long run. The variance decomposition analysis indicates that in the long run approximately 45 per cent of the variation in fertility is explained by the combined impact of female labor force participation, mortality and income, implying that socioeconomic development played an important role in the fertility transition in Taiwan. This result is consistent with the traditional structural hypothesis.
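A forecast-error variance decomposition is the kind of calculation behind the 45 per cent figure. The paper's exact ARDL specification is not reproduced here; the sketch below shows a generic VAR-based decomposition with statsmodels, assuming a placeholder data frame of suitably transformed series.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

def fertility_fevd(df: pd.DataFrame, horizon: int = 20):
    """Share of fertility's forecast-error variance attributable to each
    variable at the given horizon. `df` is assumed to hold suitably
    transformed (e.g. differenced) columns such as fertility, income,
    infant_mortality and female_lfp."""
    results = VAR(df).fit(maxlags=2, ic="aic")
    fevd = results.fevd(horizon)
    row = df.columns.get_loc("fertility")
    # fevd.decomp[i, t, j]: share of variable i's variance at step t due to shock j
    return pd.Series(fevd.decomp[row, -1], index=df.columns)
```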

Relevance: 30.00%

Abstract:

Purpose – Based on the theoretical framework of expectancy-disconfirmation paradigm, the purpose of this paper is to examine the differences in student perceptions of the level of satisfaction related to educational and non-educational services among four groups of international postgraduate business students from China, India, Indonesia and Thailand undertaking study in Australia.

Design/methodology/approach – The data used in this study were derived from a mail survey conducted among international postgraduate business students from Asia studying at five universities in the state of Victoria, Australia. A total of 573 usable responses were received. Analysis using structural equation modelling, multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA) was undertaken; a minimal sketch of the ANOVA step appears after this abstract.

Findings – This study develops and tests a model of international postgraduate student satisfaction. Findings indicate that the importance of service quality factors related to both educational and non-educational services varies among nationality groups and, therefore, has a differential impact on student satisfaction.

Practical implications – The study provides insights into seven constructs related to educational and non-educational services that are perceived as important by postgraduate business students from Asia in satisfaction formation. Universities should develop a diversified strategic marketing plan that incorporates the differential needs of international postgraduate business students according to the educational and non-educational constructs developed in this paper.

Originality/value – This study makes a contribution by filling a void in academic research in the area of satisfaction in relation to postgraduate international business students from four nationality groups in Asia.
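As flagged in the Design/methodology/approach section, here is a minimal sketch of the ANOVA step: a one-way comparison of a construct score across the four nationality groups, with placeholder column names; the MANOVA and structural equation modelling stages are not reproduced.

```python
import pandas as pd
from scipy import stats

def satisfaction_anova(df: pd.DataFrame, construct: str = "satisfaction"):
    """One-way ANOVA of a construct score across nationality groups.
    `df` is assumed to have a 'nationality' column (China, India,
    Indonesia, Thailand) and one column per construct score."""
    groups = [g[construct].dropna() for _, g in df.groupby("nationality")]
    f, p = stats.f_oneway(*groups)
    return f, p

# usage sketch: f, p = satisfaction_anova(survey_df, "teaching_quality")
```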

Relevance: 30.00%

Abstract:

One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever increasing amount of data, predictive classification methods must be able to handle large amounts of data effectively and efficiently. However, present requirements push existing algorithms to, and sometimes beyond, their limits, since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions.

The first is whether large data sets require different types of algorithms to those currently employed. This is answered by analysing how the bias plus variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms to those currently used. Some insight into the characteristics of suitable algorithms is provided, which may give direction for the development of future classification prediction algorithms designed specifically for use with large data sets.

The second question investigated is the role of sampling in machine learning with large data sets. Sampling has long been used to avoid scaling algorithms up to the size of the data set by instead scaling the data set down to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time. The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little if any accuracy. Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies are proposed. These methodologies include using a varying proportion of instances to determine the next inference step, and using a statistical calculation at each inference step to determine a sufficient sample size. Experiments show that using a statistical calculation of sample size can substantially reduce execution time with only a small loss, and occasional gain, in accuracy.

One common use of sampling is in the construction of learning curves, which are often used to determine the optimal training size that maximally reduces execution time without being detrimental to accuracy. The performance of methods for detecting convergence of learning curves is analysed, with the focus on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed. It is shown that local accuracy plateaus are a common occurrence, and that ensuring a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt on the applicability of gradient-of-tangent methods for detecting convergence, and on the viability of learning curves for reducing execution time in general.
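The convergence-detection method under scrutiny, flagging convergence when the gradient of the tangent to the learning curve flattens, can be sketched as follows. The estimator, thresholds and patience rule are illustrative assumptions, and, as the dissertation's results warn, this rule can be fooled by local accuracy plateaus.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
sizes, _, test_scores = learning_curve(
    RandomForestClassifier(n_estimators=50, random_state=0), X, y,
    train_sizes=np.linspace(0.05, 1.0, 12), cv=3)
acc = test_scores.mean(axis=1)

grad = np.diff(acc) / np.diff(sizes)   # secant approximation to the tangent gradient
flat = np.abs(grad) < 1e-5             # "tangent is flat" criterion
patience = 2                           # require several flat steps in a row (naive guard)
for i in range(len(flat) - patience + 1):
    if flat[i:i + patience].all():
        print(f"convergence flagged at {sizes[i + 1]} training instances")
        break
else:
    print("no convergence detected; curve still rising")
```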