945 results for RANK Ligand


Relevance: 10.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
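The rank transform and the Sum of Absolute Differences (SAD) metric discussed above can be sketched as follows. This is a minimal illustration assuming a 3x3 window and ignoring image borders; it is not the report's implementation.

```python
import numpy as np

def rank_transform(img, w=3):
    # Rank transform: each pixel becomes the number of pixels in its
    # w x w neighbourhood whose intensity is below the centre pixel.
    # Border pixels are left at zero for simplicity (an assumption).
    h = w // 2
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.int32)
    for y in range(h, H - h):
        for x in range(h, W - h):
            win = img[y - h:y + h + 1, x - h:x + h + 1]
            out[y, x] = np.sum(win < img[y, x])
    return out

def sad(left_win, right_win):
    # Sum of Absolute Differences: cheap to compute but, as the abstract
    # notes, the most sensitive metric to radiometric distortion.
    return int(np.sum(np.abs(left_win.astype(np.int64) -
                             right_win.astype(np.int64))))
```

Because the rank transform replaces intensities with local ordering information, a gain or offset change between the two images leaves the transformed values unchanged, which is the source of its radiometric robustness.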


Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness and a Brisbane-based trial of its use within the quality improvement context determined the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank-order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents. 
Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high-care and low-care groups (t(199)=10.77, p<0.001), while the differences between male and female residents were not significant (t(414)=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment. 
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to small sample size, enable users to make judgements about quality within and between facilities. Thus it is recommended the ResCareQA be adopted for wider use.
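For illustration, Spearman's rank-order correlation (used above for concurrent validity) is simply the Pearson correlation of rank-transformed data. A minimal sketch, assuming no tied values and using illustrative numbers rather than study data:

```python
import numpy as np

def spearman_rho(x, y):
    # Rank each series (argsort of argsort gives 0-based ranks; ties
    # are not averaged in this simplified version).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the ranks.
    return float(np.sum(rx * ry) /
                 np.sqrt(np.sum(rx ** 2) * np.sum(ry ** 2)))
```

In practice a statistics package that handles tied ranks (e.g. `scipy.stats.spearmanr`) would be used.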


A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no spaces or boundaries between words. This feature makes Chinese information retrieval more difficult: a retrieved document containing the query term as a sequence of Chinese characters may not actually be relevant to the query, because that character sequence may not form a valid Chinese word in the document. Conversely, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques into the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant when segmentation is merely combined with the traditional character-based approach.
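The combination of character-based and word-based rankings could take many forms; the abstract does not specify one, so the linear weighting below is purely an assumption for illustration.

```python
def rank_documents(docs, alpha=0.5):
    # docs: list of (doc_id, char_score, word_score) tuples, where the
    # two scores come from character-based and word-based retrieval.
    # alpha weights the word-based score; its value is an assumption.
    scored = [(doc_id, alpha * word + (1 - alpha) * char)
              for doc_id, char, word in docs]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Setting alpha near 1 trusts the segmenter; setting it near 0 falls back to pure character matching.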


A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and non-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. 
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
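A loose sketch of the Rank Sum idea described above: per-feature class-conditional bin densities are ranked at the bin a sample falls in, and the class with the lowest total rank wins. The binning and tie handling here are assumptions, not the thesis's exact scheme.

```python
import numpy as np

def rank_sum_classify(x, hists, edges):
    # x: feature vector; hists: class -> list of per-feature histograms
    # (normalised bin densities); edges: per-feature bin edges.
    classes = list(hists)
    rank_sums = {c: 0 for c in classes}
    for f, xi in enumerate(x):
        # Locate the bin this feature value falls in (clamped to range).
        b = min(np.searchsorted(edges[f], xi, side="right") - 1,
                len(edges[f]) - 2)
        density = {c: hists[c][f][b] for c in classes}
        # Rank 0 goes to the densest class for this feature.
        for rank, c in enumerate(sorted(classes,
                                        key=lambda cl: -density[cl])):
            rank_sums[c] += rank
    return min(classes, key=lambda c: rank_sums[c])
```

Summing ranks rather than densities makes the decision depend only on the ordering of the class densities at each feature, not on their magnitudes.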


The paper describes the processes and outcomes of the ranking of LIS journal titles by Australia's LIS researchers during 2007-08, firstly through the Australian federal government's Research Quality Framework (RQF) process and then through its replacement, the Excellence in Research for Australia (ERA) initiative. The requirement to rank the journal titles used came from discussions held at the RQF panel meeting in February 2007 in Canberra, Australia. While it was recognised that the Web of Science (formerly ISI) journal impact approach to assessing research quality and impact might not work for LIS, it was apparent that this model would be the default if no other ranking of journal titles emerged. Although an increasing number of LIS and related-discipline journals were appearing in the Web of Science rankings, their number was small, and it was thus decided by the Australian LIS research community to undertake the ranking exercise.


Background: Medication-related problems often occur in the immediate post-discharge period. To reduce medication misadventure, the Commonwealth Government funds home medicines reviews (HMRs). HMRs are initiated when general practitioners refer consenting patients to their community pharmacists, who then engage accredited pharmacists to review patients' medicines in their homes. Aim: To determine if hospital-initiated medication reviews (HIMRs) can be implemented in a more timely manner than HMRs, and to assess the impact of a bespoke referral form with comorbidity-specific questions on the quality of reports. Method: Eligible medical inpatients at risk of medication misadventure were referred by the hospital liaison pharmacist to participating accredited pharmacists post-discharge from hospital. Social, demographic and laboratory data were collected from medical records and during interviews with consenting patients. Issues raised in the HIMR reports were categorised as intervention/action, information given or recommendation, and assigned a rank of clinical significance. Results: HIMRs were conducted within 11.6 ± 6.6 days post-discharge. Thirty-six HIMR reports were evaluated and 1442 issues identified: information given (n = 1204), recommendations made (n = 88) and actions taken (n = 150). The majority of issues raised (89%) had a minor clinical impact. The bespoke referral form prompted approximately half of the issues raised. Conclusion: HIMRs can be facilitated in a more timely manner than post-discharge HMRs. There was an associated positive clinical impact of issues raised in the HIMR reports.


Genitourinary (GU) problems are a common complaint in the community and to the emergency department (ED). Urinary tract infections (UTIs) are the second most common bacterial disease. UTIs rank as the sixteenth most frequently reported problem to general practitioners in Australia [1], and between 10% and 20% of women will experience at least one UTI in their lifetime. Over 1,000,000 Australians are currently suffering with nephrolithiasis (renal calculi) and it is hypothesised that Australia's hot, dry climate causes more stone formation than many other countries in the world. Acute kidney injury (AKI) is a common complication of any trauma. Hypovolaemia results in severe hypotension and this precipitates the development of acute tubular necrosis and subsequent AKI. The incidence of chronic kidney disease (CKD) is rising across the world. CKD is classified into five stages, with those in stage 5 being classified as being in end stage kidney disease (ESKD). It is estimated that there are over 1.5 million people in Australia with CKD, and there were over 16,000 Australians and over 2900 individuals in New Zealand with ESKD [2]. Indigenous populations from both countries (Aboriginals, Torres Strait Islanders, Maoris, and Pacific Islanders) are over-represented in the number of people with all stages of CKD. Patients with compromised renal function often require the assistance of paramedics and will arrive at the ED with life-threatening fluid and electrolyte imbalances. Specific GU emergencies discussed in this chapter are acute renal failure, rhabdomyolysis, chronic kidney disease, UTIs, acute urinary retention, urinary calculi, testicular torsion, epididymitis, and priapism. Refer to Chapter 31 for discussion of sexually transmitted infections (STIs) in women and to Chapter X for discussion of genitourinary trauma.


Recently, user tagging systems have grown in popularity on the web. The tagging process is quite simple for ordinary users, which contributes to its popularity. However, this free-form vocabulary lacks standardization and suffers from semantic ambiguity. It is possible to capture the semantics of user tagging and represent them in the form of an ontology, but applying the learned ontology to recommendation making has so far seen limited success. In this paper we discuss our approach to learning a domain ontology from user tagging information and apply the extracted tag ontology in a pilot tag recommendation experiment. The initial results show that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
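One plausible form of the re-ranking step is to boost candidate tags that the ontology relates to tags already on the resource; the boost factor and the relatedness test below are assumptions for illustration, since the paper's exact scheme is not given here.

```python
def rerank_tags(candidates, related, seed_tags, boost=1.5):
    # candidates: (tag, score) pairs from a base recommender.
    # related: tag -> set of ontology-related tags.
    # Tags related to the resource's existing tags get their score boosted.
    seeds = set(seed_tags)

    def adjusted(tag, score):
        return score * boost if related.get(tag, set()) & seeds else score

    return sorted(((t, adjusted(t, s)) for t, s in candidates),
                  key=lambda pair: pair[1], reverse=True)
```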



Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. 
That is, QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions' ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
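For reference, one common parameterisation of the multivariate QLIKE and MSE losses for a covariance forecast H against a volatility proxy Sigma is sketched below; the thesis's exact definitions may differ.

```python
import numpy as np

def qlike(sigma_proxy, h_forecast):
    # QLIKE: log|H| + tr(H^{-1} Sigma); minimised in expectation by
    # the true conditional covariance, hence "robust" to noisy proxies.
    _, logdet = np.linalg.slogdet(h_forecast)
    return float(logdet +
                 np.trace(np.linalg.inv(h_forecast) @ sigma_proxy))

def mse(sigma_proxy, h_forecast):
    # Element-wise mean squared error between forecast and proxy.
    return float(np.mean((h_forecast - sigma_proxy) ** 2))
```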


With the emergence of Web 2.0, Web users can classify Web items of interest by using tags. Tags reflect users' understanding of the items collected under each tag. Exploring user tagging behavior provides a promising way to understand users' information needs. However, free and relatively uncontrolled vocabulary has its drawbacks in terms of lack of standardization and semantic ambiguity. Moreover, the relationships among tags have not been explored, even though rich relationships exist among tags that could provide valuable information for better understanding users. In this paper, we propose a novel approach to construct a tag ontology based on the widely used general ontology WordNet, to capture the semantics and the structural relationships of tags. Tag ambiguity is a challenging problem to deal with when constructing a high-quality tag ontology. We propose strategies to find the semantic meanings of tags and a strategy to disambiguate the semantics of tags based on the opinions of WordNet lexicographers. To evaluate the usefulness of the constructed tag ontology, we apply the extracted tag ontology in a tag recommendation experiment. We believe this is the first application of a tag ontology to recommendation making. The initial result shows that by using the tag ontology to re-rank the recommended tags, the accuracy of the tag recommendation can be improved.
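The disambiguation strategy can be illustrated with a toy sense inventory standing in for WordNet; in the paper the senses and lexicographer categories come from WordNet itself, so everything below is an assumption for illustration only.

```python
# Toy sense inventory: tag -> list of (sense_id, lexicographer category).
# In the real approach these would be WordNet synsets and lexicographer
# files (e.g. noun.food, noun.plant).
SENSES = {
    "apple": [("apple.n.01", "noun.food"), ("apple.n.02", "noun.plant")],
    "pear": [("pear.n.01", "noun.plant")],
    "jazz": [("jazz.n.01", "noun.communication")],
}

def disambiguate(tag, cooccurring_tags, senses=SENSES):
    # Pick the sense whose lexicographer category is shared by the most
    # co-occurring tags -- a simplified stand-in for the paper's
    # WordNet-based disambiguation strategy.
    def votes(category):
        return sum(any(cat == category for _, cat in senses.get(t, []))
                   for t in cooccurring_tags)

    candidates = senses.get(tag, [])
    if not candidates:
        return None
    return max(candidates, key=lambda sc: votes(sc[1]))[0]
```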


This paper investigates a strategy for guiding school-based active travel intervention. School-based active travel programs address the travel behaviors and perceptions of small target populations (i.e., at individual schools) so they can encourage people to walk or bike. Thus, planners need to know as much as possible about the behaviors and perceptions of their target populations. However, existing strategies for modeling travel behavior and segmenting audiences typically work with larger populations and may not capture the attitudinal diversity of smaller groups. This case study used Q technique to identify salient travel-related attitude types among parents at an elementary school in Denver, Colorado; 161 parents presented their perspectives about school travel by rank-ordering 36 statements from strongly disagree to strongly agree in a normalized distribution centered around "no opinion". Thirty-nine respondents' cases were selected for case-wise cluster analysis in SPSS according to criteria that made them most likely to walk: proximity to school, grade, and bus service. Analysis revealed five core perspectives that were then correlated with the larger respondent pool: optimistic walkers, fair-weather walkers, drivers of necessity, determined drivers, and fence sitters. The core perspectives, characterized by parents' opinions, personal characteristics, and reported travel behaviors, are presented, and recommendations are made for possible intervention approaches. The study concludes that Q technique provides a fine-grained assessment of travel behavior for small populations, which would benefit small-scale behavioral interventions.


From the user's point of view, handling information overload online is a big challenge, especially as the number of websites grows rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information which may be liked by a user. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, with many attributes each. Two-dimensional data analysis methods may often overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising the decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. 
A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values, extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) that are derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of their similarity of features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilizes the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
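The factor matrices that such profiles are built from can be obtained by a truncated HOSVD, i.e. the leading left singular vectors of each mode unfolding. A minimal NumPy sketch (in practice a tensor library such as TensorLy would be used):

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: bring the chosen mode to the front and flatten
    # the remaining modes into columns.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor, ranks):
    # One factor matrix per mode: the leading left singular vectors of
    # that mode's unfolding, truncated to the requested rank.
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode),
                                full_matrices=False)
        factors.append(u[:, :r])
    return factors
```

For a users x queries x items tensor of Web-log data, the rows of the first factor matrix give each user's coordinates in the reduced space, which can then be clustered to form group profiles.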


Collaborative question answering (cQA) portals such as Yahoo! Answers allow users, as askers or answer authors, to communicate and exchange information through the asking and answering of questions in the network. In their current set-up, answers to a question are arranged in chronological order. For effective information retrieval, it would be advantageous to have the users' answers ranked according to their quality. This paper proposes a novel approach for evaluating and ranking the users' answers and recommending the top-n quality answers to information seekers. The proposed approach is based on a user-reputation method which assigns a score to an answer reflecting its answer author's reputation level in the network. The proposed approach is evaluated on a dataset collected from a live cQA, namely Yahoo! Answers. To compare with the results obtained by the non-content-based user-reputation method, experiments were also conducted with several content-based methods that assign a score to an answer reflecting its content quality. Various combinations of non-content and content-based scores were also compared. Empirical analysis shows that the proposed method is able to rank the users' answers and recommend the top-n answers with good accuracy. Results of the proposed method outperform the content-based methods, the various combinations, and the results obtained by the popular link analysis method, HITS.
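The reputation-based ranking step reduces to sorting answers by their authors' reputation scores; how the reputation score itself is computed (e.g. from past best-answer history) is left out here and is an assumption.

```python
def rank_answers(answers, reputation):
    # answers: (answer_id, author) pairs in chronological order.
    # reputation: author -> reputation score; unknown authors score 0.
    return sorted(answers,
                  key=lambda a: reputation.get(a[1], 0.0), reverse=True)

def top_n_answers(answers, reputation, n):
    # Recommend the n answers whose authors have the highest reputation.
    return rank_answers(answers, reputation)[:n]
```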


Ultrafine particles (UFPs, <100 nm) are produced in large quantities by vehicular combustion and are implicated in causing several adverse human health effects. Recent work has suggested that a large proportion of daily UFP exposure may occur during commuting. However, the determinants, variability and transport-mode dependence of such exposure are not well understood. The aim of this review was to address these knowledge gaps by distilling the results of ‘in-transit’ UFP exposure studies performed to date, including studies of health effects. We identified 47 exposure studies performed across 6 transport modes: automobile, bicycle, bus, ferry, rail and walking. These encompassed approximately 3000 individual trips in which UFP concentrations were measured. After weighting mean UFP concentrations by the number of trips in which they were collected, we found overall mean UFP concentrations of 3.4, 4.2, 4.5, 4.7, 4.9 and 5.7 × 10^4 particles cm^-3 for the bicycle, bus, automobile, rail, walking and ferry modes, respectively. The mean concentration inside automobiles travelling through tunnels was 3.0 × 10^5 particles cm^-3. While the mean concentrations were indicative of general trends, we found that the determinants of exposure (meteorology, traffic parameters, route, fuel type, exhaust treatment technologies, cabin ventilation, filtration, deposition, UFP penetration) exhibited marked variability and mode-dependence, such that it is not necessarily appropriate to rank modes in order of exposure without detailed consideration of these factors. Ten in-transit health effects studies have been conducted, and their results indicate that UFP exposure during commuting can elicit acute effects in both healthy and health-compromised individuals. We suggest that future work should focus on further defining the contribution of in-transit UFP exposure to total UFP exposure, exploring its specific health effects, and investigating exposures in the developing world. 
Keywords: air pollution; transport modes; acute health effects; travel; public transport
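The trip-weighted pooling of per-mode mean concentrations described above amounts to a weighted average; the numbers below are illustrative, not the review's data.

```python
def trip_weighted_mean(mean_concentrations, trip_counts):
    # Weight each study's mean UFP concentration by the number of trips
    # in which it was measured, then pool across studies.
    total_trips = sum(trip_counts)
    return sum(c * n for c, n in
               zip(mean_concentrations, trip_counts)) / total_trips
```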