887 results for weighting triangles


Relevance:

10.00%

Publisher:

Abstract:

This paper analyses the probabilistic linear discriminant analysis (PLDA) speaker verification approach with limited development data. It investigates the use of the median as the central tendency of a speaker's i-vector representation, and the effectiveness of weighted discriminative techniques, in state-of-the-art length-normalised Gaussian PLDA (GPLDA) speaker verification systems. The analysis shows that the median (computed via a median Fisher discriminator (MFD)) provides a better representation of a speaker when the number of representative i-vectors available during development is reduced, and that pair-wise weighting in weighted LDA and weighted MFD provides further improvement in limited development conditions. Best performance is obtained with the weighted MFD approach, which shows over 10% improvement in EER over the baseline GPLDA system in mismatched and interview-interview conditions.
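
As a toy illustration of why the median can be the better central tendency under limited development data, consider a handful of one-dimensional "i-vector" components where one session is an outlier. The values and dimensionality are invented for illustration; real i-vectors are high-dimensional.

```python
from statistics import mean, median

# Toy one-dimensional "i-vector" components for a single speaker:
# three consistent sessions plus one outlier session, mimicking the
# limited-development-data condition. Values are invented.
sessions = [0.9, 1.0, 1.1, 5.0]

# The mean is dragged toward the outlier; the median is not.
print(mean(sessions))    # 2.0
print(median(sessions))  # 1.05
```

With many sessions the outlier would average out; with only a few, the median's robustness matters.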


Risk taking is central to human activity. Consequently, it lies at the focal point of behavioral sciences such as neuroscience, economics, and finance. Many influential models from these sciences assume that financial risk preferences form a stable trait. Is this assumption justified and, if not, what causes the appetite for risk to fluctuate? We have previously found that traders experience a sustained increase in the stress hormone cortisol when the amount of uncertainty, in the form of market volatility, increases. Here we ask whether these elevated cortisol levels shift risk preferences. Using a double-blind, placebo-controlled, cross-over protocol we raised cortisol levels in volunteers over eight days to the same extent previously observed in traders. We then tested for the utility and probability weighting functions underlying their risk taking, and found that participants became more risk averse. We also observed that the weighting of probabilities became more distorted among men relative to women. These results suggest that risk preferences are highly dynamic. Specifically, the stress response calibrates risk taking to our circumstances, reducing it in times of prolonged uncertainty, such as a financial crisis. Physiology-induced shifts in risk preferences may thus be an under-appreciated cause of market instability.
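
The utility and probability weighting functions are not given in this abstract; a standard parametrisation from the risk literature, the Tversky-Kahneman weighting function, sketches what "distorted weighting of probabilities" means. The gamma value below is illustrative, not the study's estimate.

```python
import math

def tk_weight(p, gamma):
    """Tversky-Kahneman probability weighting:
    w(p) = p**g / (p**g + (1 - p)**g) ** (1 / g)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# gamma < 1 bends the weighting curve: small probabilities are
# overweighted and large ones underweighted; a smaller gamma means
# a more distorted curve.
gamma = 0.65  # illustrative value
print(tk_weight(0.05, gamma) > 0.05)   # small p overweighted: True
print(tk_weight(0.95, gamma) < 0.95)   # large p underweighted: True
```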


The vast majority of current robot mapping and navigation systems require specific well-characterized sensors that may require human-supervised calibration and are applicable only in one type of environment. Furthermore, if a sensor degrades in performance, either through damage to itself or changes in environmental conditions, the effect on the mapping system is usually catastrophic. In contrast, the natural world presents robust, reasonably well-characterized solutions to these problems. Using simple movement behaviors and neural learning mechanisms, rats calibrate their sensors for mapping and navigation in an incredibly diverse range of environments and then go on to adapt to sensor damage and changes in the environment over the course of their lifetimes. In this paper, we introduce similar movement-based autonomous calibration techniques that calibrate place recognition and self-motion processes as well as methods for online multisensor weighting and fusion. We present calibration and mapping results from multiple robot platforms and multisensory configurations in an office building, university campus, and forest. With moderate assumptions and almost no prior knowledge of the robot, sensor suite, or environment, the methods enable the bio-inspired RatSLAM system to generate topologically correct maps in the majority of experiments.
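
The abstract does not specify RatSLAM's multisensor weighting rule; a minimal, generic sketch of online multisensor weighting is inverse-variance fusion, where a degraded sensor automatically contributes less. Sensor values and variances here are invented.

```python
def fuse(estimates, variances):
    """Inverse-variance weighted fusion: each sensor's weight is 1/variance,
    so a noisy (degraded) sensor contributes less to the fused estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

# Two sensors estimate the same quantity; the degraded sensor
# (variance 4.0) barely moves the result away from the healthy one.
print(fuse([10.0, 18.0], [0.25, 4.0]))  # ≈ 10.47, close to 10.0
```

Online calibration would update the variances from observed sensor error, shifting the weights as a sensor degrades.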


Background: Timely notification of cancer cases is critical to cancer monitoring and prevention, yet abstracting and classifying cancer from the free text of pathology reports and other relevant documents, such as death certificates, is complex and time-consuming. Aims: This paper investigates approaches for automatically detecting notifiable cancer cases, as the cause of death, from free-text death certificates supplied to Cancer Registries. Method: A number of machine learning classifiers were studied. Features were extracted with natural language processing techniques and the Medtex toolkit, and included stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline was a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates listing notifiable cancer as the cause of death can be effectively identified with the methods studied. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set achieved the lowest variance (0.0032) and false negative rate (0.0297) with an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs and is the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: Feature selection had the greatest influence on classifier performance, although the type of classifier employed also affects it; the feature weighting scheme had a negligible effect. Specifically, stemmed tokens, with or without SNOMED CT concepts, are the most effective features when combined with an SVM classifier.
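
A minimal sketch of the keyword-spotter baseline described above, with an illustrative keyword set rather than the actual terms extracted from the ICD-10 long descriptions:

```python
# Flag a certificate as a notifiable cancer case if any cancer-related
# keyword appears in its free text. This keyword set is illustrative
# only, not the list derived from ICD-10 codes in the paper.
CANCER_KEYWORDS = {"carcinoma", "melanoma", "leukaemia", "lymphoma", "sarcoma"}

def spot(text):
    tokens = text.lower().split()
    return any(tok.strip(".,;") in CANCER_KEYWORDS for tok in tokens)

print(spot("Metastatic carcinoma of the lung."))  # True
print(spot("Acute myocardial infarction."))       # False
```

The machine learning classifiers in the paper replace this exact-match rule with learned weights over stemmed-token and concept features.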


We present a study to understand the effect that negated terms (e.g., "no fever") and family history (e.g., "family history of diabetes") have on searching clinical records. Our analysis is aimed at devising the most effective means of handling negation and family history. In doing so, we explicitly represent a clinical record according to its different content types: negated, family history and normal content; the retrieval model weights each of these separately. Empirical evaluation shows that overall the presence of negation harms retrieval effectiveness, while family history has little effect. We show that negation is best handled by weighting negated content (rather than the common practice of removing or replacing it). However, we also show that many queries benefit from the inclusion of negated content and that negation is optimally handled on a per-query basis. Additional evaluation shows that adaptive handling of negated and family history content can have significant benefits.
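
A minimal sketch of the per-content-type weighting idea: the record is split into normal, negated and family-history fields, each scored with its own weight. The weights and the bag-of-words scoring are illustrative, not the paper's retrieval model.

```python
def score(query_terms, doc, weights):
    """Score a clinical record whose terms are split by content type,
    weighting each type (normal / negated / family history) separately."""
    return sum(weights[ctype] * sum(terms.count(t) for t in query_terms)
               for ctype, terms in doc.items())

doc = {"normal": ["fever", "cough"],
       "negated": ["fever"],
       "family": ["diabetes"]}

# Down-weighting (rather than removing) negated content lets it still
# contribute a little evidence: 1.0 * 1 + 0.3 * 1 = 1.3
print(score(["fever"], doc, {"normal": 1.0, "negated": 0.3, "family": 0.1}))
```

Setting the negated weight to zero recovers the "remove it" strategy, which the evaluation found inferior to weighting.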


1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected to it by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use can explain additional variability in indicators of stream ecosystem health. The landscape metrics included a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site, and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia, and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season.
Only one nutrient indicator was tested and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
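
A minimal sketch of an inverse-distance-weighted land-use metric, assuming a simple 1/d weighting; the paper's exact IDW and HA-IDW formulations may differ.

```python
def idw_metric(parcels, p=1.0):
    """Inverse-distance-weighted per cent land use. `parcels` is a list of
    (distance_to_stream, is_target_land_use); nearer parcels of the target
    land use count for more than distant ones."""
    weights = [1.0 / d ** p for d, _ in parcels]
    target = sum(w for w, (_, is_target) in zip(weights, parcels) if is_target)
    return 100.0 * target / sum(weights)

# Lumped land use is 50% (one parcel of two), but the target use sits
# right next to the stream, so the IDW metric is well above 50%.
print(idw_metric([(1.0, True), (10.0, False)]))  # ≈ 90.9
```

An HA-IDW variant would multiply each weight by a hydrologic-activity factor before summing.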


Data in germplasm collections contain a mixture of data types: binary, multistate and quantitative. Given the multivariate nature of these data, the pattern analysis methods of classification and ordination have been identified as suitable techniques for statistically evaluating the available diversity. The proximity (or resemblance) measure, which is in part the basis of the complementary nature of classification and ordination techniques, is often specific to particular data types. The use of a combined resemblance matrix has an advantage over data-type-specific proximity measures: it accommodates the different data types without manipulating them to be of a specific type. Descriptors are partitioned into their data types and an appropriate proximity measure is used on each. The separate proximity matrices, after range standardisation, are added as a weighted average, and the combined resemblance matrix is then used for classification and ordination. Germplasm evaluation data for 831 accessions of groundnut (Arachis hypogaea L.) from the Australian Tropical Field Crops Genetic Resource Centre, Biloela, Queensland were examined. Data for four binary, five ordered multistate and seven quantitative descriptors have been documented. The interpretative value of different weightings - equal and unequal weighting of data types to obtain a combined resemblance matrix - was investigated by using principal co-ordinate analysis (ordination) and hierarchical cluster analysis. Equal weighting of data types was found to be more valuable for these data, as the results provided a greater insight into the patterns of variability available in the Australian groundnut germplasm collection. The complementary nature of pattern analysis techniques enables plant breeders to identify relevant accessions in relation to the descriptors which distinguish amongst them.
This additional information may provide plant breeders with a more defined entry point into the germplasm collection for identifying sources of variability for their plant improvement program, thus improving the utilisation of germplasm resources.
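
A minimal sketch of the combined resemblance matrix: per-data-type distance matrices are range-standardised and then averaged with chosen weights. The toy matrices and equal weights below are illustrative.

```python
def range_standardise(matrix):
    """Scale a distance matrix so its entries lie in [0, 1]."""
    flat = [x for row in matrix for x in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(x - lo) / span for x in row] for row in matrix]

def combine(matrices, weights):
    """Weighted average of range-standardised per-data-type distance
    matrices, giving one combined resemblance matrix."""
    std = [range_standardise(m) for m in matrices]
    n = len(matrices[0])
    total = sum(weights)
    return [[sum(w * m[i][j] for w, m in zip(weights, std)) / total
             for j in range(n)] for i in range(n)]

# Binary and quantitative distances for three accessions, equal weights.
binary = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
quant  = [[0.0, 2.0, 4.0], [2.0, 0.0, 2.0], [4.0, 2.0, 0.0]]
combined = combine([binary, quant], [1.0, 1.0])
print(combined[0][1])  # 0.75
print(combined[0][2])  # 1.0
```

The combined matrix then feeds directly into clustering (classification) or principal co-ordinate analysis (ordination).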


Capacitors are widely used for power-factor correction (PFC) in power systems. When a PFC capacitor is installed with a certain load in a microgrid, it may be in parallel with the filter capacitor of the inverter interfacing the utility grid and the local distributed-generation unit and thus change the effective filter capacitance. Another complication is the possibility of resonance occurring in the microgrid. This paper conducts an in-depth investigation of the effective shunt-filter-capacitance variation and resonance phenomena in a microgrid due to the connection of a PFC capacitor. To compensate for the capacitance-parameter variation, an H∞ controller is designed for the voltage-source-inverter voltage control. By properly choosing the weighting functions, the synthesized H∞ controller exhibits high gains in the vicinity of the line frequency, similar to a traditional high-performance P+resonant controller, and thus possesses nearly zero steady-state error. However, with the robust H∞ controller it is also possible to explicitly specify the degree of robustness in the face of parameter variations. Furthermore, a thorough investigation is carried out into the performance of inner current-loop feedback variables under resonance conditions. It reveals that filter-inductor current feedback is more effective in damping the resonance. The resonance can be further attenuated by employing the dual-inverter microgrid conditioner and controlling the series inverter as a virtual resistor that affects only harmonic components without interfering with the fundamental power flow. Finally, the study has been verified experimentally on a microgrid prototype.
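
The capacitance effect can be sketched with the textbook LC resonance formula: a PFC capacitor in parallel with the filter capacitor raises the effective capacitance and lowers the resonant frequency. The component values below are invented for illustration, not taken from the paper.

```python
import math

def lc_resonance_hz(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC filter."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: a 2 mH filter inductor and a 20 uF filter
# capacitor; a 30 uF PFC capacitor connected in parallel raises the
# effective capacitance from 20 uF to 50 uF.
L_f, C_f, C_pfc = 2e-3, 20e-6, 30e-6
print(round(lc_resonance_hz(L_f, C_f)))          # without PFC capacitor
print(round(lc_resonance_hz(L_f, C_f + C_pfc)))  # with PFC capacitor: lower
```

A controller tuned for the original resonant frequency can therefore misbehave once the PFC capacitor shifts it, which motivates the robust H∞ design.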


A travel article about touring in the Rockies, Alberta. MORAINE Lake, held up by a crown of mountains near the small town of Lake Louise, is opal blue. You'd need a precious stone to cut the surface. And even on a hot morning in August, there are plenty around: diamonds of snow decorate the mountain tops and, in their reflections, slice the lake with white triangles. We are at the start of a track that climbs from the lake to Mt Temple, the tallest mountain in the area and two or three hours' walk away. When the track zigzags back, we glimpse the lake through gaps in the conifer forest...


Cane fibre content has increased over the past ten years. Some of that increase can be attributed to new varieties selected for release. This paper reviews the existing methods for quantifying the fibre characteristics of a variety, including fibre content and fibre quality measurements – shear strength, impact resistance and short fibre content. The variety selection process is presented and it is reported that fibre content has zero weighting in the current selection index. An updated variety selection approach is proposed, potentially replacing the existing selection process relating to fibre. This alternative approach involves the use of a more complex mill area level model that accounts for harvesting, transport and processing equipment, taking into account capacity, efficiency and operational impacts, along with the end use for the bagasse. The approach will ultimately determine a net economic value for the variety. The methodology lends itself to a determination of the fibre properties that have a significant impact on the economic value so that variety tests can better target the critical properties. A low-pressure compression test is proposed as a good test to provide an assessment of the impact of a variety on milling capacity. NIR methodology is proposed as a technology to lead to a more rapid assessment of fibre properties, and hence the opportunity to more comprehensively test for fibre impacts at an earlier stage of variety development.


This research falls in the area of enhancing the quality of tag-based item recommendation systems. It aims to achieve this by employing a multi-dimensional user profile approach and by analysing the semantic aspects of tags. Tag-based recommender systems have two characteristics that need to be carefully studied in order to build a reliable system. Firstly, the multi-dimensional correlation, called tag assignment, should be appropriately modelled in order to create the user profiles [1]. Secondly, the semantics behind the tags should be considered properly, as their flexible design can cause semantic problems such as synonymy and polysemy [2]. This research proposes to address these two challenges by employing tensor modelling as the multi-dimensional user profile approach, and the topic model as the semantic analysis approach. The first objective is to optimise the tensor model reconstruction and to improve the model's performance in generating quality recommendations. A novel Tensor-based Recommendation using Probabilistic Ranking (TRPR) method [3] has been developed. Results show this method to be scalable for large datasets and to outperform the benchmarking methods in terms of accuracy. The memory-efficient loop implements the n-mode block-striped (matrix) product for tensor reconstruction as an approximation of the initial tensor. The probabilistic ranking calculates the probability of users selecting candidate items using their tag preference list, based on the entries generated from the reconstructed tensor. The second objective is to analyse the tag semantics and utilise the outcome in building the tensor model. This research proposes to investigate the problem using a topic model approach, keeping the tags' nature as the "social vocabulary" [4]. For the tag assignment data, topics can be generated from the occurrences of tags given for an item. However, only a limited number of tags is available to represent items as collections of topics, since an item might have been tagged with only a few tags. Consequently, the generated topics might not represent the items appropriately. Furthermore, given that each tag can belong to any topic with various probability scores, the occurrence of tags cannot simply be mapped to topics to build the tensor model. A standard weighting technique will not appropriately calculate the value of a tagging activity, since it defines the context of an item using a tag instead of a topic.


Description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different classification families: decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, of the classification outcome. Records with a null entry in the injury description are removed. Misspellings are corrected by finding and replacing the misspelt word with a sound-alike word. Meaningful phrases are identified and kept, instead of removing parts of phrases as stop words. Abbreviations appearing in many forms of entry are manually identified, and only one form of each abbreviation is used. Clustering is utilised to discriminate between non-frequent and frequent terms; this process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse, with few irrelevant features but many correlated ones. Therefore, matrix factorisation techniques such as Singular Value Decomposition (SVD) and Non Negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on these reduced feature spaces. In experiments, a set of tests is conducted to establish which classification method is best for medical text classification. The Non Negative Matrix Factorization with Support Vector Machine method achieves 93% precision, higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long text classification, is inferior to binary weighting in short document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as this affects the classification performance.
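
The TF/IDF-versus-binary finding can be illustrated with a toy two-document collection of short injury descriptions (examples invented): when nearly every term occurs just once, term frequency carries no information, and a term shared by all documents even scores zero under TF/IDF.

```python
import math

def binary_weight(term, doc):
    """Binary weighting: 1 if the term occurs in the document, else 0."""
    return 1.0 if term in doc else 0.0

def tfidf_weight(term, doc, docs):
    """Classic TF/IDF: term frequency times log inverse document frequency."""
    tf = doc.count(term)
    df = sum(1 for d in docs if term in d)
    return tf * math.log(len(docs) / df) if df else 0.0

# Two short "injury description" documents (invented examples).
docs = [["laceration", "left", "hand"], ["burn", "right", "hand"]]

# Every term occurs at most once, so TF is uninformative; "hand",
# appearing in all documents, gets a TF/IDF weight of exactly zero.
print(binary_weight("hand", docs[0]))       # 1.0
print(tfidf_weight("hand", docs[0], docs))  # 0.0
```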


A new mesh adaptivity algorithm that combines a posteriori error estimation with a bubble-type local mesh generation (BLMG) strategy for elliptic differential equations is proposed. The size function used in the BLMG is defined on each vertex during the adaptive process, based on the obtained error estimator. In order to avoid excessive coarsening and refining in each iterative step, two factor thresholds are introduced into the size function. The advantages of the BLMG-based adaptive finite element method, compared with other known methods, are as follows: refining and coarsening are handled seamlessly within the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; and at all levels of refinement the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Several numerical examples with singularities for the elliptic problems, where explicit error estimators are used, verify the efficiency of the algorithm. The analysis of the parameters introduced in the size function shows that the algorithm has good flexibility.
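
A minimal sketch of the role of the two factor thresholds in the size function: each adaptive step clamps how much the local mesh size may grow or shrink. The threshold values below are illustrative, not the paper's.

```python
def clamp_size(target, current, coarsen=2.0, refine=0.5):
    """Limit the change in local mesh size per adaptive step: the new size
    may grow by at most `coarsen`x and shrink by at most `refine`x,
    preventing excessive coarsening or refining in one iteration."""
    return min(max(target, refine * current), coarsen * current)

print(clamp_size(10.0, 1.0))  # capped at 2.0: no excessive coarsening
print(clamp_size(0.01, 1.0))  # floored at 0.5: no excessive refining
print(clamp_size(1.5, 1.0))   # within thresholds, kept as 1.5
```

Repeating the adaptive loop lets the size field still reach a large target, just gradually, which helps keep the updated triangles well shaped.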


Background. Cause-of-death statistics are an essential component of health information. Despite improvements, underregistration and misclassification of causes make it difficult to interpret the official death statistics. Objective. To estimate consistent cause-specific death rates for the year 2000 and to identify the leading causes of death and premature mortality in the provinces. Methods. Total number of deaths and population size were estimated using the Actuarial Society of South Africa (ASSA2000) AIDS and demographic model. Cause-of-death profiles based on Statistics South Africa's 15% sample, adjusted for misclassification of deaths due to ill-defined causes and for AIDS deaths recorded under indicator conditions, were applied to the total deaths by age and sex. Age-standardised rates and years of life lost were calculated using age weighting and discounting. Results. Life expectancy in KwaZulu-Natal and Mpumalanga is about 10 years lower than that in the Western Cape, the province with the lowest mortality rate. HIV/AIDS is the leading cause of premature mortality in all provinces. Mortality due to pre-transitional causes, such as diarrhoea, is more pronounced in the poorer and more rural provinces. In contrast, non-communicable disease mortality is similar across all provinces, although the cause profiles differ. Injury mortality rates are particularly high in provinces with large metropolitan areas and in Mpumalanga. Conclusion. The quadruple burden experienced in all provinces requires a broad range of interventions, including improved access to health care; ensuring that basic needs such as those related to water and sanitation are met; disease and injury prevention; and promotion of a healthy lifestyle. High death rates as a result of HIV/AIDS highlight the urgent need to accelerate the implementation of the treatment and prevention plan.
In addition, there is an urgent need to improve the cause-of-death data system to provide reliable cause-of-death statistics at health district level.
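
Years of life lost with discounting can be sketched with the standard formula YLL = (1 - exp(-r*L)) / r; age weighting is omitted here for brevity, and the 3% discount rate and life expectancy value are illustrative rather than taken from the study.

```python
import math

def yll_discounted(life_expectancy, r=0.03):
    """Years of life lost for one death, continuously discounted at rate r
    (age weighting omitted for brevity): YLL = (1 - exp(-r * L)) / r."""
    if r == 0:
        return life_expectancy
    return (1.0 - math.exp(-r * life_expectancy)) / r

# A death with 40 years of remaining life expectancy, discounted at 3%,
# counts for noticeably fewer than 40 undiscounted years.
print(round(yll_discounted(40.0), 2))  # 23.29
```

Because discounting shrinks distant years, deaths at young ages still dominate premature-mortality rankings, but less extremely than with undiscounted YLL.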


Assessment has widely been described as being ‘at the centre of the student experience’. It would be difficult to conceive of the modern teaching university without it. Assessment is accepted as one of the most important tools that an educator can deploy to influence both what and how students learn. Evidence suggests that how students allocate time and effort to tasks and to developing an understanding of the syllabus is affected by the method of assessment utilised and the weighting it is given. This is particularly significant in law schools where law students may be more preoccupied with achieving high grades in all courses than their counterparts from other disciplines. However, well-designed assessment can be seen as more than this. It can be a vehicle for encouraging students to learn and engage more broadly than with the minimums required to complete the assessment activity. In that sense assessment need not merely ‘drive’ learning, but can instead act as a catalyst for further learning beyond what a student had anticipated. In this article we reconsider the potential roles and benefits in legal education of a form of interactive classroom learning we term assessable class participation (‘ACP’), both as part of a pedagogy grounded in assessment and learning theory, and as a platform for developing broader autonomous approaches to learning amongst students. We also consider some of the barriers students can face in ACP and the ways in which teacher approaches to ACP can positively affect the socio-emotional climates in classrooms and thus reduce those barriers. We argue that the way in which a teacher facilitates ACP is critical to the ability to develop positive emotional and learning outcomes for law students, and for teachers themselves.