257 results for trained incapacity
Abstract:
Time-expanded and heterodyned echolocation calls of the New Zealand long-tailed bat Chalinolobus tuberculatus and lesser short-tailed bat Mystacina tuberculata were recorded and digitally analysed. Temporal and spectral parameters were measured from time-expanded calls and power spectra generated for both time-expanded and heterodyned calls. Artificial neural networks were trained to classify the calls of both species using temporal and spectral parameters and power spectra as input data. Networks were then tested using data not previously seen. Calls could be unambiguously identified using parameters and power spectra from time-expanded calls. A neural network, trained and tested using power spectra of calls from both species recorded using a heterodyne detector set to 40 kHz (the frequency with the most energy in the fundamental of the C. tuberculatus call), could identify 99% and 84% of calls of C. tuberculatus and M. tuberculata, respectively. A second network, trained and tested using power spectra of calls from both species recorded using a heterodyne detector set to 27 kHz (the frequency with the most energy in the fundamental of the M. tuberculata call), could identify 34% and 100% of calls of C. tuberculatus and M. tuberculata, respectively. This study represents the first use of neural networks for the identification of bats from their echolocation calls. It is also the first study to use power spectra of time-expanded and heterodyned calls for identification of chiropteran species. The ability of neural networks to identify bats from their echolocation calls is discussed, as is the ecology of both species in relation to the design of their echolocation calls.
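The pipeline this abstract describes — a network trained on power-spectrum bins, then tested on calls it has not seen — can be sketched minimally. The code below is a hedged illustration, not the study's actual architecture: it trains a single logistic unit by gradient descent on synthetic four-bin "spectra" with peaks at different frequencies; all function names and data are hypothetical.

```python
import math
import random

def train_spectrum_classifier(spectra, labels, epochs=200, lr=0.5, seed=0):
    """Train a one-layer network (a single logistic unit) on power-spectrum
    bins by stochastic gradient descent on the log-loss. A stand-in for the
    multi-layer networks used in the study, not their architecture."""
    rng = random.Random(seed)
    n_bins = len(spectra[0])
    w = [rng.uniform(-0.01, 0.01) for _ in range(n_bins)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(spectra, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(species 1)
            g = p - y                        # d(log-loss)/dz
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify a spectrum: 1 if the unit's activation favours species 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical spectra: species 0 peaks in bin 1, species 1 peaks in bin 3,
# loosely analogous to the 40 kHz vs 27 kHz energy peaks in the abstract.
species0 = [[0.1, 1.0, 0.1, 0.0], [0.0, 0.9, 0.2, 0.1], [0.2, 1.1, 0.0, 0.1]]
species1 = [[0.1, 0.0, 0.2, 1.0], [0.0, 0.1, 0.1, 0.9], [0.1, 0.2, 0.0, 1.1]]
w, b = train_spectrum_classifier(species0 + species1, [0, 0, 0, 1, 1, 1])
```

On linearly separable toy data like this, the unit's weights move toward the bins that distinguish the two call types, so unseen spectra with the same peak structure are classified correctly.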
Abstract:
We propose expected attainable discrimination (EAD) as a measure to select discrete valued features for reliable discrimination between two classes of data. EAD is an average of the area under the ROC curves obtained when a simple histogram probability density model is trained and tested on many random partitions of a data set. EAD can be incorporated into various stepwise search methods to determine promising subsets of features, particularly when misclassification costs are difficult or impossible to specify. Experimental application to the problem of risk prediction in pregnancy is described.
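Under the definition above, EAD is the mean test-set area under the ROC curve of a histogram probability model, averaged over many random partitions of the data. A minimal sketch under that reading follows; the function names and the Laplace-smoothed likelihood-ratio score are assumptions for illustration, not the paper's exact formulation.

```python
import random

def auc_from_scores(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

def expected_attainable_discrimination(xs, ys, n_partitions=50,
                                       train_frac=0.5, seed=0):
    """EAD for one discrete feature: average test AUC of a histogram
    (frequency-count) class model over many random train/test partitions."""
    rng = random.Random(seed)
    data = list(zip(xs, ys))
    values = set(xs)
    aucs = []
    for _ in range(n_partitions):
        rng.shuffle(data)
        cut = int(len(data) * train_frac)
        train, test = data[:cut], data[cut:]
        counts = {0: {}, 1: {}}
        totals = {0: 0, 1: 0}
        for x, y in train:
            counts[y][x] = counts[y].get(x, 0) + 1
            totals[y] += 1
        def score(x):
            # Laplace-smoothed likelihood ratio P(x|1)/P(x|0)
            p1 = (counts[1].get(x, 0) + 1) / (totals[1] + len(values))
            p0 = (counts[0].get(x, 0) + 1) / (totals[0] + len(values))
            return p1 / p0
        pos = [score(x) for x, y in test if y == 1]
        neg = [score(x) for x, y in test if y == 0]
        if pos and neg:
            aucs.append(auc_from_scores(pos, neg))
    return sum(aucs) / len(aucs)
```

A feature perfectly aligned with the class labels yields an EAD near 1, while an uninformative feature hovers near 0.5, which is what makes the measure usable for stepwise feature selection without specifying misclassification costs.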
Abstract:
Structural damage detection using measured dynamic data for pattern recognition is a promising approach. These pattern recognition techniques utilize artificial neural networks and genetic algorithms to match pattern features. In this study, an artificial neural network–based damage detection method using frequency response functions is presented, which can effectively detect nonlinear damage for a given level of excitation. The main objective of this article is to present a feasible method for structural vibration–based health monitoring that reduces the dimension of the initial frequency response function data, transforms it into new damage indices, and employs an artificial neural network to detect different levels of nonlinearity using damage patterns recognized by the proposed algorithm. Experimental data from the three-story bookshelf structure at Los Alamos National Laboratory are used to validate the proposed method. Results showed that the levels of nonlinear damage can be identified precisely by the developed artificial neural networks. Moreover, artificial neural networks trained with summation frequency response functions give more precise damage detection results than those trained with individual frequency response functions. The proposed method is therefore a promising tool for structural assessment of real structures, because it shows reliable results with experimental data for nonlinear damage detection and renders the frequency response function–based method convenient for structural health monitoring.
Abstract:
Purpose To determine the prescribed drug-utilisation pattern for six common chronic conditions in adult South Africans in a cross-sectional survey. Methods 13 826 randomly selected participants, 15 years and older, were surveyed by trained fieldworkers at their homes in 1998. Questionnaires included socio-demographic, chronic-disease and drug-use data. The prescribed drugs were recorded from participants' medication containers. The Anatomical Therapeutic Classification (ATC) code of the drugs for tuberculosis (TB), diabetes, hypertension, hyperlipidaemia, other atherosclerosis-related conditions, such as heart conditions or cerebrovascular accidents (CVA), and asthma or chronic obstructive pulmonary disease (COPD), was recorded. Logistic regression analyses identified the determinants of prescription medication use for these six conditions. Results 18.4% of the women and 12.5% of the men used drugs for the six chronic conditions. Men used drugs most frequently for hypertension (50.9%) and asthma or chronic bronchitis (24.3%), while in women it was for hypertension (59.9%) and diabetes (17.5%). The logistic regression analyses showed that women, wealthier and older people, and those with medical insurance used these chronic-disease drugs more frequently compared to men, younger or poor people, or those without medical insurance. The African population group used these drugs less frequently than any other ethnic group. The inappropriate use of methyldopa was found for 14.8% of all antihypertensive drugs, while very few people used aspirin. Conclusions The methodology of this study provides a means of ascertaining the chronic-disease drug-utilisation pattern in national health surveys. The pattern described suggests an inequitable use of chronic-disease drugs and inadequate use of some effective drugs to control the burden of chronic diseases in South Africa. Copyright © 2004 John Wiley & Sons, Ltd.
Abstract:
Objectives: To identify the groups of patients with high prevalence and poor control of hypertension in South Africa. Methods: In the first national Demographic and Health Survey, 12 952 randomly selected South Africans, aged 15 years and older, were surveyed. Trained interviewers completed questionnaires on socio-demographic characteristics, lifestyle and the management of hypertension. This cross-sectional survey also included blood pressure, height and weight measurements. Logistic regression analyses identified the determinants of hypertension and the treatment status in this dataset. Results: A high risk of hypertension was associated with less than tertiary education, older age groups, overweight and obese people, using alcohol in excess, and a family history of stroke and hypertension. Rural Africans had the lowest risk of hypertension, which was significantly higher in obese African women than in women with normal body mass index. Improved hypertension control was found among the wealthy, women, older persons, Asians, and those with medical insurance. Conclusions: Rural African people had lower hypertension prevalence rates than the other groups. Poorer, younger men without health insurance had the worst level of hypertension control.
Abstract:
Given the ever increasing importance of legislation to the resolution of legal disputes, there is a concomitant need for law students to be well trained in the anatomy, identification, interpretation and application of laws made by or under parliament. This article discusses a blended learning project called Indigo’s Folly, implemented at the Queensland University of Technology Law School in 2014. Indigo’s Folly was created to increase law student competency with respect to statutory interpretation. Just as importantly, it was designed to make the teaching of statutory interpretation more interesting – to “bring the sexy” to the student statutory interpretation experience. Quantitative and qualitative empirical data will be presented as evidence to show that statutory interpretation can be taught in a way that law students find engaging.
Abstract:
Background The requirement for dual screening of titles and abstracts to select papers to examine in full text can create a huge workload, not least when the topic is complex and a broad search strategy is required, resulting in a large number of results. An automated system to reduce this burden, while still assuring high accuracy, has the potential to provide huge efficiency savings within the review process. Objectives To undertake a direct comparison of manual screening with a semi‐automated process (priority screening) using a machine classifier. The research is being carried out as part of the current update of a population‐level public health review. Methods Authors have hand selected studies for the review update, in duplicate, using the standard Cochrane Handbook methodology. A retrospective analysis, simulating a quasi‐‘active learning’ process (whereby a classifier is repeatedly trained based on ‘manually’ labelled data) will be completed, using different starting parameters. Tests will be carried out to see how far different training sets, and the size of the training set, affect the classification performance; i.e. what percentage of papers would need to be manually screened to locate 100% of those papers included as a result of the traditional manual method. Results From a search retrieval set of 9555 papers, authors excluded 9494 papers at title/abstract and 52 at full text, leaving 9 papers for inclusion in the review update. The ability of the machine classifier to reduce the percentage of papers that need to be manually screened to identify all the included studies, under different training conditions, will be reported. Conclusions The findings of this study will be presented along with an estimate of any efficiency gains for the author team if the screening process can be semi‐automated using text mining methodology, along with a discussion of the implications for text mining in screening papers within complex health reviews.
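The retrospective simulation this abstract describes — repeatedly retraining a classifier on the labels revealed so far, then screening the highest-ranked unscreened records next — can be sketched as below. The word-overlap scorer, batch size, and function names are illustrative stand-ins for this sketch, not the review team's actual text-mining classifier.

```python
import random

def simulate_priority_screening(docs, labels, batch=5, seed=1):
    """Simulate quasi-'active learning' priority screening: screen records in
    batches, retraining a trivially simple relevance scorer on the labels
    revealed so far, and return the fraction of records screened by the time
    every relevant record (label 1) has been found."""
    rng = random.Random(seed)
    n = len(docs)
    pool = list(range(n))
    rng.shuffle(pool)               # initial screening order is random
    screened, found = [], set()
    relevant = {i for i, y in enumerate(labels) if y == 1}
    while relevant - found:
        if any(labels[i] == 1 for i in screened):
            # "Train": collect words seen in relevant vs irrelevant records,
            # then rank the remaining pool by word overlap with each group.
            pos_words, neg_words = set(), set()
            for i in screened:
                (pos_words if labels[i] == 1 else neg_words).update(
                    docs[i].split())
            def score(i):
                w = set(docs[i].split())
                return len(w & pos_words) - len(w & neg_words)
            pool.sort(key=score, reverse=True)
        next_batch, pool = pool[:batch], pool[batch:]
        screened.extend(next_batch)
        found.update(i for i in next_batch if labels[i] == 1)
    return len(screened) / n
```

In a retrieval set like the one reported (9 includes among 9555 records), the quantity of interest is exactly this returned fraction: how much of the manual screening burden the classifier lets the review team avoid while still locating 100% of the included papers.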
Abstract:
Traditional text classification technology based on machine learning and data mining techniques has made great progress. However, drawing an exact decision boundary between relevant and irrelevant objects in binary classification remains difficult, owing to the considerable uncertainty produced by traditional algorithms. The proposed model CTTC (Centroid Training for Text Classification) aims to build an uncertainty boundary to absorb as many indeterminate objects as possible so as to elevate the certainty of the relevant and irrelevant groups through the centroid clustering and training process. The clustering starts from the two training subsets labelled as relevant or irrelevant respectively to create two principal centroid vectors by which all the training samples are further separated into three groups: POS, NEG and BND, with all the indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are proposed to be trained and optimized through the subsequent iterative multi-learning process, all of which are proposed to collaboratively help predict the polarities of the incoming objects thereafter. For the assessment of the proposed model, F1 and Accuracy have been chosen as the key evaluation measures. We stress the F1 measure because it can display the overall performance improvement of the final classifier better than Accuracy. A large number of experiments have been completed using the proposed model on the Reuters Corpus Volume 1 (RCV1), which is an important standard dataset in the field. The experiment results show that the proposed model has significantly improved the binary text classification performance in both F1 and Accuracy compared with three other influential baseline models.
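The three-way split at the heart of CTTC — POS, NEG, and an uncertainty boundary BND between two centroid vectors — can be illustrated as follows. The cosine-similarity margin rule is an assumption made for this sketch, not the paper's exact training and optimisation procedure.

```python
import math

def centroid(vectors):
    """Mean vector of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 for a zero vector)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def three_way_split(samples, pos_centroid, neg_centroid, margin=0.1):
    """Separate samples into POS, NEG and an uncertainty boundary BND:
    a sample whose similarities to the two centroids differ by less than
    `margin` is deferred to BND rather than forced into either class."""
    pos, neg, bnd = [], [], []
    for v in samples:
        d = cosine(v, pos_centroid) - cosine(v, neg_centroid)
        (pos if d >= margin else neg if d <= -margin else bnd).append(v)
    return pos, neg, bnd
```

In the full CTTC process, the centroids would then be re-trained iteratively on POS and NEG while BND absorbs the indeterminate objects; the sketch shows only the single boundary-forming step.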
Abstract:
Experimental studies have found that when the state-of-the-art probabilistic linear discriminant analysis (PLDA) speaker verification systems are trained using out-domain data, it significantly affects speaker verification performance due to the mismatch between development data and evaluation data. To overcome this problem we propose a novel unsupervised inter dataset variability (IDV) compensation approach to compensate the dataset mismatch. IDV-compensated PLDA system achieves over 10% relative improvement in EER values over out-domain PLDA system by effectively compensating the mismatch between in-domain and out-domain data.
Abstract:
Semantic perception and object labeling are key requirements for robots interacting with objects on a higher level. Symbolic annotation of objects allows the usage of planning algorithms for object interaction, for instance in a typical fetch-and-carry scenario. In current research, perception is usually based on 3D scene reconstruction and geometric model matching, where trained features are matched with a 3D sample point cloud. In this work we propose a semantic perception method which is based on spatio-semantic features. These features are defined in a natural, symbolic way, such as geometry and spatial relation. In contrast to point-based model matching methods, a spatial ontology is used where objects are instead described by how they "look", similar to how a human would describe unknown objects to another person. A fuzzy based reasoning approach matches perceivable features with a spatial ontology of the objects. The approach provides a method that is able to deal with sensor noise and occlusions. Another advantage is that no training phase is needed in order to learn object features. The use-case of the proposed method is the detection of soil sample containers in an outdoor environment, which have to be collected by a mobile robot. The approach is verified using real-world experiments.
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies for affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to address the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but have been seldom investigated for affective classification of facial images towards practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where a lot of "real-world" challenges are present, including pose, illumination, and size variations etc. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry. Performance comparisons of several key parameters of relevant algorithms are conducted to explore the optimum parameters for high accuracy and fast computation speed. A comprehensive set of experiments with existing and new datasets shows that the method is effective despite pose variations, fast, and appropriate for large-scale data, and as accurate as the method with state-of-the-art performance on laboratory-based data. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
Abstract:
Introduction: Training for and competing in ultraendurance exercise events is associated with an improvement in endogenous antioxidant defenses as well as increased oxidative stress. However, consequences on health are currently unclear. Purpose: We aimed to examine the impact of training- and acute exercise-induced changes in the antioxidant capacity on the oxidant/antioxidant balance after an ironman triathlon and whether there are indications for sustained oxidative damage. Methods: Blood samples were taken from 42 well-trained male triathletes 2 d before an ironman triathlon, then immediately postrace, 1, 5, and 19 d later. Blood was analyzed for conjugated dienes (CD), malondialdehyde (MDA), oxidized low-density lipoprotein (oxLDL), oxLDL:LDL ratio, advanced oxidation protein products (AOPP), AOPP:total protein (TP) ratio, Trolox equivalent antioxidant capacity (TEAC), uric acid (UA) in plasma, and activities of superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and catalase (CAT) in erythrocytes. Results: Immediately postrace, there were significant increases in CD, AOPP, TEAC, UA (for all P < 0.001), and AOPP:TP (P < 0.01). MDA rose significantly (P < 0.01) 1 d postrace, whereas CD (P < 0.01), AOPP (P = 0.01), AOPP:TP (P < 0.05), and TEAC (P < 0.001) remained elevated. OxLDL:LDL trended to increase, whereas oxLDL significantly (P < 0.01) decreased 1 d postrace. Except for GSH-Px (P = 0.08), activities of SOD (P < 0.001) and CAT (P < 0.05) significantly decreased postrace. All oxidative stress markers had returned to prerace values 5 d postrace. Furthermore, several relationships between training status and oxidative stress markers, TEAC, and antioxidant enzyme activities were noted. 
Conclusions: This study indicates that despite a temporary increase in most (but not all) oxidative stress markers, there is no persistent oxidative stress in response to an ironman triathlon, probably due to training- and exercise-induced protective alterations in the antioxidant defense system.
Abstract:
Ultra-endurance exercise, such as an Ironman triathlon, induces muscle damage and a systemic inflammatory response. As the resolution of recovery in these parameters is poorly documented, we investigated indices of muscle damage and systemic inflammation in response to an Ironman triathlon and monitored these parameters 19 days into recovery. Blood was sampled from 42 well-trained male triathletes 2 days before, immediately after, and 1, 5 and 19 days after an Ironman triathlon. Blood samples were analyzed for hematological profile, and plasma values of myeloperoxidase (MPO), polymorphonuclear (PMN) elastase, cortisol, testosterone, creatine kinase (CK) activity, myoglobin, interleukin (IL)-6, IL-10 and high-sensitive C-reactive protein (hs-CRP). Immediately post-race there were significant (P < 0.001) increases in total leukocyte counts, MPO, PMN elastase, cortisol, CK activity, myoglobin, IL-6, IL-10 and hs-CRP, while testosterone significantly (P < 0.001) decreased compared to prerace. With the exception of cortisol, which decreased below prerace values (P < 0.001), these alterations persisted 1 day post-race (P < 0.001; P < 0.01 for IL-10). Five days post-race CK activity, myoglobin, IL-6 and hs-CRP had decreased, but were still significantly (P < 0.001) elevated. Nineteen days post-race most parameters had returned to prerace values, except for MPO and PMN elastase, which had both significantly (P < 0.001) decreased below prerace concentrations, and myoglobin and hs-CRP, which were slightly, but significantly higher than prerace. Furthermore, significant relationships between leukocyte dynamics, cortisol, markers of muscle damage, cytokines and hs-CRP after the Ironman triathlon were noted. This study indicates that the pronounced initial systemic inflammatory response induced by an Ironman triathlon declines rapidly. However, a low-grade systemic inflammation persisted until at least 5 days post-race, possibly reflecting incomplete muscle recovery.
Abstract:
Antioxidant requirements have been defined neither for endurance nor for ultra-endurance athletes. To verify whether an acute bout of ultra-endurance exercise modifies the need for nutritive antioxidants, we aimed (1) to investigate the changes of endogenous and exogenous antioxidants in response to an Ironman triathlon; (2) to particularise the relevance of antioxidant responses to the indices of oxidatively damaged blood lipids, blood cell compounds and lymphocyte DNA and (3) to examine whether potential time-points of increased susceptibility to oxidative damage are associated with alterations in the antioxidant status. Blood was collected from forty-two well-trained male athletes 2 d pre-race, immediately post-race, and 1, 5 and 19 d later. The key findings of the present study are as follows: (1) Immediately post-race, vitamin C, alpha-tocopherol, and levels of the Trolox equivalent antioxidant capacity, the ferric reducing ability of plasma and the oxygen radical absorbance capacity (ORAC) assays increased significantly. Exercise-induced changes in the plasma antioxidant capacity were associated with changes in uric acid, bilirubin and vitamin C. (2) Significant inverse correlations between ORAC levels and indices of oxidatively damaged DNA immediately and 1 d post-race suggest a protective role of the acute antioxidant responses in DNA stability. (3) Significant decreases in carotenoids and gamma-tocopherol 1 d post-race indicate that the antioxidant intake during the first 24 h of recovery following an acute ultra-endurance exercise requires specific attention. Furthermore, the present study illustrates the importance of a diversified and well-balanced diet to maintain a physiological antioxidant status in ultra-endurance athletes in reference to recommendations.
Abstract:
During acute and strenuous exercise, the enhanced formation of reactive oxygen species can induce damage to lipids, proteins, and nucleic acids. The aim of this study was to investigate the effect of an Ironman triathlon (3.8 km swim, 180 km cycle, 42 km run), as a prototype of ultra-endurance exercise, on DNA stability. As biomarkers of genomic instability, the number of micronuclei, nucleoplasmic bridges, and nuclear buds were measured within the cytokinesis-block micronucleus cytome assay in once-divided peripheral lymphocytes of 20 male triathletes. Blood samples were taken 2 days before, within 20 min after the race, and 5 and 19 days post-race. Overall, the number of micronuclei decreased (P < 0.05) after the race, remained at a low level until 5 days post-race, and declined further to 19 days post-race (P < 0.01). The frequency of nucleoplasmic bridges and nuclear buds did not change immediately after the triathlon. The number of nucleoplasmic bridges declined from 2 days pre-race to 19 days post-exercise (P < 0.05). The frequency of nuclear buds increased after the triathlon, peaking 5 days post-race (P < 0.01) and decreased to basic levels 19 days after the race (P < 0.01). The results suggest that an Ironman triathlon does not cause long-lasting DNA damage in well-trained athletes.