973 results for subset consistency


Relevance:

20.00%

Publisher:

Abstract:

The high degree of variability and inconsistency in cash flow study usage by property professionals demands improvement in knowledge and processes. Until recently, limited research had been undertaken on the use of cash flow studies in property valuations, but the growing acceptance of this approach for major investment valuations has resulted in renewed interest in this topic. Studies on valuation variations identify data accuracy, model consistency and bias as major concerns. In cash flow studies there are practical problems with the input data and the consistency of the models. This study will refer to the recent literature and identify the major factors in model inconsistency and data selection. A detailed case study will be used to examine the effects of changes in structure and inputs. The key variable inputs will be identified and proposals developed to improve the selection process for these key variables. The variables will be selected with the aid of sensitivity studies, and alternative ways of quantifying the key variables explained. The paper recommends, with reservations, the use of probability profiles of the variables and the incorporation of this data in simulation exercises. The use of Monte Carlo simulation is demonstrated, and the factors influencing the structure of the probability distributions of the key variables are outlined. This study relates to ongoing research into the functional performance of commercial property within an Australian Cooperative Research Centre.
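The simulation approach the abstract above recommends can be sketched as follows. This is a minimal illustration only: the cash-flow structure, the variable names, and the triangular probability profiles for rental growth and the discount rate are our assumptions, not figures from the study.

```python
import random
import statistics

def simulate_npv(n_trials=10_000, years=10, seed=42):
    """Monte Carlo sketch: NPV of a rental cash flow with uncertain inputs.

    Key variables are drawn from triangular probability profiles,
    mirroring the recommendation to quantify key variables as
    distributions rather than single point estimates.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        growth = rng.triangular(0.01, 0.06, 0.03)    # low, high, mode (assumed)
        discount = rng.triangular(0.07, 0.11, 0.09)  # assumed profile
        rent = 100_000.0  # illustrative year-1 net rent
        npv = 0.0
        for t in range(1, years + 1):
            npv += rent / (1 + discount) ** t
            rent *= 1 + growth
        results.append(npv)
    return statistics.mean(results), statistics.stdev(results)

mean_npv, sd_npv = simulate_npv()
print(f"mean NPV = {mean_npv:,.0f}, sd = {sd_npv:,.0f}")
```

Running many trials turns the point-estimate NPV into a distribution, so the analyst can report a spread and downside probability rather than a single figure.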

Quantitative behaviour analysis requires the classification of behaviour to produce the basic data. In practice, much of this work will be performed by multiple observers, and maximising inter-observer consistency is of particular importance. Another discipline where consistency in classification is vital is biological taxonomy. A classification tool of great utility, the binary key, is designed to simplify the classification decision process and ensure consistent identification of proper categories. We show how this same decision-making tool - the binary key - can be used to promote consistency in the classification of behaviour. The construction of a binary key also ensures that the categories in which behaviour is classified are complete and non-overlapping. We discuss the general principles of design of binary keys, and illustrate their construction and use with a practical example from education research.

This publication is the culmination of a two-year Australian Learning and Teaching Council Project Priority Programs Research Grant which investigates key issues and challenges in developing flexible guidelines for best practice in Australian Doctoral and Masters by Research examination, encompassing the two modes of investigation, written and multi-modal (practice-led/based) theses, their distinctiveness and their potential interplay. The aims of the project were to address issues of assessment legitimacy raised by the entry of practice-orientated dance studies into Australian higher degrees; examine literal embodiment and presence, as opposed to cultural studies about states of embodiment; foreground the validity of questions around subjectivity and corporeal intelligence/s and the reliability of artistic/aesthetic communications; and finally to celebrate ‘performance mastery’ (Melrose 2003) as a rigorous and legitimate mode of higher research. The project began with questions centred on: the functions of higher degree dance research; concepts of ‘master-ness’ and ‘doctorateness’; the kinds of languages, structures and processes which may guide candidates, supervisors, examiners and research personnel; the purpose of evaluation/examination; and the positive and negative attributes of examination. Finally, the study examined ways in which academic/professional, writing/dancing, tradition/creation and diversity/consistency relationships might be fostered to embrace change. Over two years, the authors undertook a qualitative national study encompassing a triangulation of semi-structured face-to-face interviews and industry forums to gather views from the profession, together with an analysis of existing guidelines and recent literature in the field.
The most significant primary data emerged from 74 qualitative interviews with supervisors, examiners, research deans and administrators, and candidates in dance and, more broadly, across the creative arts. Qualitative data gathered from the two primary sources were coded and analysed using the NVivo software program. Further perspectives were drawn from international consultant and dance researcher Susan Melrose, as well as from publications in the field and initial feedback on a draft document circulated at the World Dance Alliance Global Summit in July 2008 in Brisbane. Refinement of the data occurred in a continual sifting process until the final publication was produced. This process resulted in a set of guidelines in the form of a complex dynamic system for both product- and process-oriented outcomes of multi-modal theses, along with short position papers on issues which arose from the research, such as contested definitions, embodiment and ephemerality, ‘liveness’ in performance research higher degrees, dissolving theory/practice binaries, the relationship between academe and industry, documenting practices, and a re-consideration of the viva voce.

This Open Forum examines research on case management that draws on consumer perspectives. It clarifies the extent of consumer involvement and whether evaluations were informed by recovery perspectives. Searches of three databases revealed 13 studies that sought to investigate consumer perspectives. Only one study asked consumers about experiences of recovery. Most evaluations did not adequately assess consumers' views, and active consumer participation in research was rare. Supporting an individual's recovery requires commitment to a recovery paradigm that combines traditional symptom reduction and improved functioning with broader recovery principles, and a shift in focus from illness to wellbeing. It also requires greater involvement of consumers in the implementation of case management and ownership of their own recovery process, not just in research that evaluates the practice.

One of the major challenges facing a present-day game development company is the removal of bugs from such complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution for the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft XNA. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
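The core measurement the abstract describes can be sketched in a few lines. This is our own simplified reading, not the authors' implementation: the reference database is a list of labelled (x, y, z, r, g, b) samples from bug-free frames, the distance metric and the brute-force nearest-neighbour search are assumptions, and real systems would use a spatial index.

```python
import math

def nearest_colour_error(point, reference):
    """Colour distance from a sample to its nearest spatial neighbour
    in the bug-free reference point cloud."""
    x, y, z, r, g, b = point
    best = min(reference,
               key=lambda q: (q[0] - x)**2 + (q[1] - y)**2 + (q[2] - z)**2)
    return math.dist((r, g, b), best[3:])

def frame_inconsistency(frame_points, reference):
    """Mean per-point colour error of a rendered frame against the database."""
    return sum(nearest_colour_error(p, reference)
               for p in frame_points) / len(frame_points)

# Toy reference cloud: two surface samples of one object (positions + RGB).
reference = [(0.0, 0.0, 0.0, 255, 0, 0), (1.0, 0.0, 0.0, 0, 255, 0)]
good_frame = [(0.1, 0.0, 0.0, 250, 5, 0)]   # colour close to the reference
buggy_frame = [(0.1, 0.0, 0.0, 0, 0, 255)]  # wrong colour: rendering artifact
print(frame_inconsistency(good_frame, reference) <
      frame_inconsistency(buggy_frame, reference))  # True
```

Because comparison happens in 3D object space rather than screen space, the score is unchanged when an object is relocated, matching the relocation-invariance the abstract claims.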

This paper investigates the complex interactions that occur as teachers meet online to justify and negotiate their assessment judgments of student work across relatively large and geographically dispersed populations. Drawing on sociocultural theories of learning and technology, the technology is positioned as either supporting or hindering teachers in reaching a common understanding of assessment standards. Meeting transcripts and interviews with the teachers were qualitatively analysed in terms of the interactions that occurred and teachers’ perceptions of those interactions. While online meetings offer a partial solution to the current demands of assessment in education, they also present new challenges as teachers meet, in an unfamiliar environment, to discuss student work.

The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone, and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This can be mitigated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) at 9.8 and raw time-series summary data (dataset A) at 9.92. However, for all datasets based on time-series data alone, complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) at 8.85 and dataset RF_F (time-segmented time-series variables and RF) at 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, at 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
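Two of the evaluation steps described above, majority-class under-sampling and scoring with the Kappa statistic rather than raw misclassification rate, can be illustrated with a minimal sketch. This is our own toy implementation with invented data, not the thesis code or its datasets.

```python
import random
from collections import Counter

def undersample(rows, label_key, seed=0):
    """Balance classes by randomly down-sampling each class to the
    size of the smallest class (majority-class under-sampling)."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    n = min(len(v) for v in by_class.values())
    balanced = []
    for v in by_class.values():
        balanced.extend(rng.sample(v, n))
    return balanced

def kappa(y_true, y_pred):
    """Cohen's Kappa: chance-corrected agreement between truth and prediction."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    t_counts, p_counts = Counter(y_true), Counter(y_pred)
    expected = sum(t_counts[c] * p_counts.get(c, 0) for c in t_counts) / n**2
    return (observed - expected) / (1 - expected)

# 10 positive vs 90 negative cases: a class imbalance like the CVD data.
rows = [{"cvd": 1}] * 10 + [{"cvd": 0}] * 90
balanced = undersample(rows, "cvd")
print(Counter(r["cvd"] for r in balanced))  # 10 of each class

# A degenerate model that always predicts the majority class scores
# 90% accuracy (MR 10%) yet has zero Kappa, showing why MR misleads
# under class imbalance.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(kappa(y_true, y_pred))  # 0.0
```

The one-leaf decision trees mentioned above are exactly this degenerate case: a model whose low MR reflects the class distribution, not predictive power, which Kappa and AUC expose.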

Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. 
In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
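One flavour of ranking-consistency check like those the abstract proposes can be sketched as follows. This is a simplified formulation of our own, not the paper's four tests: the score values and site names are hypothetical, and the overlap measure stands in for the paper's more detailed criteria.

```python
def top_k_sites(scores: dict, k: int) -> set:
    """Sites ranked in the top k by an HSID method's score (e.g. accident frequency)."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def identification_consistency(period1: dict, period2: dict, k: int) -> float:
    """Share of top-k hot spots identified in one period that are identified
    again in the next period (1.0 = fully consistent identification)."""
    return len(top_k_sites(period1, k) & top_k_sites(period2, k)) / k

# Hypothetical accident-frequency scores for five road sections in two years.
year1 = {"A": 12, "B": 9, "C": 3, "D": 8, "E": 1}
year2 = {"A": 11, "B": 2, "C": 4, "D": 10, "E": 1}
print(identification_consistency(year1, year2, k=2))  # A persists, B drops out -> 0.5
```

A method whose top-ranked sites churn between periods is reacting to random accident-count fluctuation rather than true site risk, which is the intuition behind ranking- and identification-consistency tests, and behind the strong showing of empirical Bayes, which shrinks noisy counts toward an expected value.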

First-year undergraduate university classes can be very large, and large student numbers often create a challenge for instructors in ensuring assignments are graded consistently across the cohort. This session describes and demonstrates the use of interactive audience response technology (ART) with assessors (rather than students) to moderate assignment grading. Results from preliminary research indicate this method of moderating the grading of assignments is effective and achieves more consistent outcomes for students.

Major curriculum and assessment reforms in Australia have generated research interest in issues related to standards, teacher judgement and moderation. This article is based on one related inquiry of a large-scale Australian Research Council Linkage project conducted in Queensland. This qualitative study analysed interview data to identify teachers’ views on standards and moderation as a means of achieving consistency of teacher judgement. A complementary aspect of the research involved a blind review conducted to determine the degree of teacher consistency without the experience of moderation. The empirical evidence showed that most of the teachers interviewed articulated a positive attitude towards the use of standards in moderation and perceived that this process produces consistency in teachers’ judgements. Context was identified as an important influence on teachers’ judgements, and it was concluded that teachers’ assessment beliefs, attitudes and practices affect their perceptions of the value of moderation practice and the extent to which consistency can be achieved.

Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that not all generalizations preserve the nice property of Bayes consistency. We provide a necessary and sufficient condition for consistency which applies to a large class of multiclass classification methods. The approach is illustrated by applying it to some multiclass methods proposed in the literature.

Binary classification is a well studied special case of the classification problem. Statistical properties of binary classifiers, such as consistency, have been investigated in a variety of settings. Binary classification methods can be generalized in many ways to handle multiple classes. It turns out that one can lose consistency in generalizing a binary classification method to deal with multiple classes. We study a rich family of multiclass methods and provide a necessary and sufficient condition for their consistency. We illustrate our approach by applying it to some multiclass methods proposed in the literature.
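A one-vs-rest reduction is one common way to generalize a binary method to multiple classes, and the point of the two abstracts above is that whether such a construction remains Bayes consistent depends on the underlying binary surrogate. The toy sketch below (our illustration, not the papers' framework) uses a nearest-class-mean scorer as the per-class binary discriminant.

```python
def class_means(X, y):
    """Mean feature value per class: a trivial per-class 'binary' scorer."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def one_vs_rest_predict(x, means):
    # Each class's "binary" score is the negated distance to its mean;
    # the multiclass rule picks the class whose binary scorer fires strongest.
    return max(means, key=lambda c: -abs(x - means[c]))

# Hypothetical 1-D data with three classes.
X = [0.0, 0.2, 1.0, 1.1, 2.0, 2.2]
y = ["a", "a", "b", "b", "c", "c"]
means = class_means(X, y)
print(one_vs_rest_predict(0.1, means))  # a
print(one_vs_rest_predict(1.9, means))  # c
```

The consistency question the papers study is whether rules built this way converge to the Bayes-optimal multiclass rule as data grows; for some binary surrogate losses the argmax-over-scores construction does, and for others it provably does not.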