942 results for Classification error rate


Relevance:

30.00%

Publisher:

Abstract:

Estimating glomerular filtration in the elderly, while accounting for the added difficulty of assessing their muscle mass, is challenging and particularly important for drug prescribing. The plasma creatinine level depends both on the renal and extra-renal elimination fractions and on muscle mass. Currently, various formulas based mainly on the creatinine value are used to estimate glomerular filtration. Nevertheless, because of the fraction eliminated via tubular and intestinal routes, creatinine clearance generally overestimates the glomerular filtration rate (GFR). The aim of this study is to assess the reliability of certain currently used markers and algorithms of renal function, and to evaluate the added benefit of taking into account muscle mass measured by bioimpedance, in an elderly population (> 70 years) with chronically impaired renal function based on MDRD eGFR (CKD stages III-IV). In this study we compare 5 equations developed to estimate renal function, based respectively on serum creatinine (Cockcroft and MDRD), cystatin C (Larsson), creatinine combined with beta-trace protein (White), and creatinine adjusted for muscle mass obtained by bioimpedance analysis (MacDonald). Bioimpedance is a method commonly used to estimate body composition from the passive electrical properties and the geometry of biological tissues. It allows the relative volumes of different tissues or fluids in the body to be estimated, such as total body water, muscle mass (= lean mass) and body fat mass.
In an elderly internal-medicine population, using single-shot inulin clearance as the gold standard, we evaluated the Cockcroft (GFR CKC), MDRD, Larsson (cystatin C, GFR CYS), White (beta-trace protein, GFR BTP) and MacDonald (GFR ALM, muscle mass by bioimpedance) algorithms. The results showed that GFR (mean ± SD) measured with inulin and calculated with the algorithms was respectively: 34.9±20 ml/min for inulin, 46.7±18.5 ml/min for CKC, 47.2±23 ml/min for CYS, 54.4±18.2 ml/min for BTP, 49±15.9 ml/min for MDRD and 32.9±27.2 ml/min for ALM. The ROC curves comparing sensitivity and specificity, the area under the curve (AUC) and the 95% confidence interval were respectively: CKC 0.68 (0.55-0.81), MDRD 0.76 (0.64-0.87), cystatin C 0.82 (0.72-0.92), BTP 0.75 (0.63-0.87), ALM 0.65 (0.52-0.78). In conclusion, the algorithms compared in this study overestimate GFR in this elderly, hospitalized population with polymorbidity and CKD class III-IV. Using bioelectrical impedance to reduce the error of creatinine-based GFR estimation made no significant contribution; on the contrary, it performed worse than the other equations. In fact, in this study 75% of patients changed CKD class with MacDonald (creatinine and muscle mass), versus 49% with CYS (cystatin C), 56% with MDRD, 52% with Cockcroft and 65% with BTP. The best results were obtained with Larsson (CYS C) and the Cockcroft formula.
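Of the equations compared above, Cockcroft-Gault has the simplest closed form. A minimal sketch (the published equation is standard; the function and parameter names are ours):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (ml/min) via the Cockcroft-Gault equation."""
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# An 80-year-old, 60 kg woman with serum creatinine 1.2 mg/dl:
print(round(cockcroft_gault(80, 60, 1.2, female=True), 1))  # 35.4
```

Because the estimate scales with body weight rather than measured muscle mass, one can see why the study asked whether bioimpedance-derived lean mass would improve it.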


The ratio of resting metabolic rate (RMR) to fat-free mass (FFM) is often used to compare individuals of different body sizes. Because RMR has not been well described over the full range of FFM, a literature review was conducted among groups with a wide range of FFM. It included 31 data sets comprising a total of 1111 subjects: 118 infants and preschoolers, 323 adolescents, and 670 adults; FFM ranged from 2.8 to 106 kg. The relationship of RMR to FFM was found to be nonlinear and average slopes of the regression equations of the three groups differed significantly (P less than 0.0001). For only the youngest group did the intercept approach zero. The lower slopes of RMR on FFM, at higher measures of FFM, corresponded to relatively greater proportions of less metabolically active muscle mass and to lesser proportions of more metabolically active nonmuscle organ mass. Because the contribution of FFM to RMR is not constant, an arithmetic error is introduced when the ratio of RMR to FFM is used. Hence, alternative methods should be used to compare individuals with markedly different FFM.
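The arithmetic error described above is easy to reproduce with hypothetical numbers: with a nonzero intercept, two individuals lying exactly on the same regression line still have different RMR/FFM ratios.

```python
# Illustrative regression RMR = a + b*FFM with a nonzero intercept (values hypothetical).
a, b = 500.0, 20.0   # kcal/day intercept and slope per kg FFM

def rmr(ffm_kg):
    return a + b * ffm_kg

small, large = 40.0, 80.0
ratio_small = rmr(small) / small   # (500 + 800) / 40  = 32.5
ratio_large = rmr(large) / large   # (500 + 1600) / 80 = 26.25
print(ratio_small, ratio_large)
```

Both individuals follow the same regression exactly, yet the smaller person appears to have a "higher" RMR per kg of FFM, which is why regression-based comparisons are preferable to the simple ratio.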


Purpose: Wolfram syndrome is a degenerative, recessive rare disease with an onset in childhood. It is caused by mutations in the WFS1 or CISD2 genes. More than 200 different variations in WFS1 have been described in patients with Wolfram syndrome, which complicates the establishment of a clear genotype-phenotype correlation. The purpose of this study was to elucidate the role of WFS1 mutations and update the natural history of the disease. Methods: This study analyzed clinical and genetic data of 412 patients with Wolfram syndrome published in the last 15 years. Results: (i) 15% of published patients do not fulfill the current inclusion criterion; (ii) genotypic prevalence differences may exist among countries; (iii) diabetes mellitus and optic atrophy might not be the first two clinical features in some patients; (iv) mutations are nonuniformly distributed in WFS1; (v) age at onset of diabetes mellitus, hearing defects, and diabetes insipidus may depend on the patient's genotypic class; and (vi) disease progression rate might depend on genotypic class. Conclusion: New genotype-phenotype correlations were established, the disease progression rate for the general population and for the genotypic classes has been calculated, and new diagnostic criteria have been proposed. These conclusions could be important for patient management and counseling as well as for the development of treatments for Wolfram syndrome.


This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification, and software process improvement methods. The practical part presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. With the error analysis we were able to show that there were deficiencies in the design and analysis phases, the implementation phase, and the testing phase. The work gives ideas for improving error classification and software development practices.


Our consumption of groundwater, particularly as drinking water or for irrigation, has increased considerably over the years. Many problems then arise, ranging from prospecting for new resources to the remediation of polluted aquifers. Regardless of the hydrogeological problem considered, the main challenge remains characterizing the properties of the subsurface. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then run into the main limitation of these approaches: the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to build an error model, which then serves to correct the remaining approximate responses and to predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time, making uncertainty propagation more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work presented comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set, and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saltwater intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations to optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
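The learn-then-correct idea behind the error model can be sketched in a simplified scalar setting (synthetic data and a plain least-squares fit stand in for the functional, FPCA-based regression described above):

```python
import random

random.seed(0)

# Hypothetical scalar responses: y_exact plays the costly flow model,
# y_proxy a cheap approximation with a systematic bias plus noise.
y_exact = [10 + 2.0 * i + random.gauss(0, 0.5) for i in range(50)]
y_proxy = [0.8 * y + 3.0 + random.gauss(0, 0.5) for y in y_exact]

# Training subset: realizations for which both models were actually run.
train = list(range(0, 50, 5))
xs = [y_proxy[i] for i in train]
ys = [y_exact[i] for i in train]

# Ordinary least squares y_exact ~ a + b * y_proxy on the subset.
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Correct the remaining approximate responses without any further exact runs.
corrected = [a + b * y_proxy[i] for i in range(50) if i not in train]
```

In the thesis the responses are curves, not scalars, so FPCA first reduces each curve to a few coefficients before regressing; this sketch only conveys how a small set of paired runs is leveraged to correct the whole ensemble.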


BACKGROUND: Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance. METHODS: We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis. RESULTS: Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. 
A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5-98.8), 80.8% specificity (72.6-87.1), positive likelihood ratio 4.9 (3.4-7.1), negative likelihood ratio 0.083 (0.022-0.32), and misclassification rate 0.20 (standard error 0.038). CONCLUSIONS: In Tanzanian children with WHO-defined clinical pneumonia, combinations of host biomarkers distinguished between end-point pneumonia, other infiltrates, and normal chest x-ray, whereas clinical variables did not. These findings generate pathophysiological hypotheses and may have potential research and clinical utility.
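The reported likelihood ratios follow directly from the sensitivity and specificity; a quick check:

```python
sens, spec = 0.933, 0.808   # reported sensitivity and specificity

lr_pos = sens / (1 - spec)  # positive likelihood ratio
lr_neg = (1 - sens) / spec  # negative likelihood ratio
print(round(lr_pos, 1), round(lr_neg, 3))  # 4.9 0.083, matching the abstract
```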


In the past few decades, the rise of criminal, civil and asylum cases involving young people lacking valid identification documents has generated an increase in the demand for age estimation. The chronological age, or the probability that an individual is older or younger than a given age threshold, is generally estimated by means of statistical methods based on observations of specific physical attributes. Among these statistical methods, those developed in the Bayesian framework allow users to provide coherent and transparent assignments which fulfill forensic and medico-legal purposes. The application of the Bayesian approach is facilitated by probabilistic graphical tools, such as Bayesian networks. The aim of this work is to test the performance of a Bayesian network for age estimation, recently presented in the scientific literature, in classifying individuals as older or younger than 18 years of age. For these exploratory analyses, a sample related to the ossification status of the medial clavicular epiphysis, available in the scientific literature, was used. Results obtained in the classification are promising: in the criminal context, the Bayesian network achieved, on average, a rate of correct classifications of approximately 97%, whilst in the civil context the rate is, on average, close to 88%. These results encourage the continued development and testing of the method in order to support its practical application in casework.
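The core update behind such a network is Bayes' theorem applied to the age threshold. A minimal sketch with hypothetical conditional probabilities (the real model uses the ossification-stage data and priors discussed in the paper):

```python
def posterior_over_18(p_stage_given_adult, p_stage_given_minor, prior_adult=0.5):
    """Posterior P(age >= 18 | observed ossification stage) via Bayes' theorem."""
    num = p_stage_given_adult * prior_adult
    den = num + p_stage_given_minor * (1 - prior_adult)
    return num / den

# Hypothetical probabilities of observing a fully fused medial clavicular
# epiphysis in adults vs. minors, with a neutral 50/50 prior:
print(round(posterior_over_18(0.90, 0.05), 3))  # 0.947
```

The criminal and civil contexts differ mainly in the prior and in the decision threshold applied to this posterior, which is why the two reported correct-classification rates differ.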


Objective To evaluate the performance of diagnostic centers in the classification of mammography reports from an opportunistic screening undertaken by the Brazilian public health system (SUS) in the municipality of Goiânia, GO, Brazil in 2010. Materials and Methods The present ecological study analyzed data reported to the Sistema de Informação do Controle do Câncer de Mama (SISMAMA) (Breast Cancer Management Information System) by diagnostic centers involved in the mammographic screening developed by the SUS. Based on the frequency of mammograms per BI-RADS® category and on the limits established for the present study, the authors calculated the rate of conformity for each diagnostic center. Diagnostic centers with equal rates of conformity were considered as having equal performance. Results Fifteen diagnostic centers performed mammographic studies for SUS and reported 31,198 screening mammograms. Regarding BI-RADS classification, none of the centers was in conformity for all categories: one center presented conformity in five categories; two centers in four; three centers in three; two centers in two; four centers in one; and three centers in none. Conclusion The results of the present study demonstrate unevenness in the diagnostic centers' performance in the classification of mammograms reported to SISMAMA from the opportunistic screening undertaken by SUS.


From time to time, matters of high social interest arise within the courts. In recent years, coinciding with the international financial crisis, cases concerning complex banking contracts, above all interest rate swaps, have acquired great prominence. In a context of international, and also national, financial crisis, the number of claims brought against banks and financial institutions has grown. These are claims seeking a declaration of nullity of the aforementioned contracts, based mainly on error in consent; a nullity that entails the return of the amounts invested, of the expected returns, or of the penalties applied upon the early termination of such contracts by clients whose expectations were defrauded. The present pages undertake a study of litigation on swaps, mainly interest rate swaps, identifying which contracts qualify as complex banking contracts and which rules on contractual consent govern them. As to the former, it should be noted that the high number of cases brought before our courts does not translate into as broad a casuistry as one might imagine. The great majority concern petitions for (total or partial) nullity of the contract brought by clients at the moment Euribor fell: what many had taken out as insurance-like protection against the high interest rates payable on their mortgages turned, under the terms agreed in the contract, into an obligation to pay a settlement to their counterparty (a credit institution).
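The mechanics of the disputed settlements can be illustrated with hypothetical figures: on a plain-vanilla interest rate swap the client pays a fixed rate and receives Euribor, so when Euribor falls below the fixed rate the net payment turns against the client.

```python
def swap_net_payment(notional, fixed_rate, euribor, year_fraction=0.5):
    """Amount the fixed-rate payer owes (positive) or receives (negative) at settlement."""
    return notional * (fixed_rate - euribor) * year_fraction

# Hypothetical contract: client pays 4% fixed against Euribor on a 200,000 notional;
# Euribor drops to 1%, so the client owes the bank a semi-annual settlement:
print(round(swap_net_payment(200_000, 0.04, 0.01), 2))  # 3000.0
```

Had Euribor risen above 4%, the sign would flip and the product would indeed have hedged the client's mortgage cost, which is the expectation at the heart of the consent-error claims.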


Female sexual dysfunctions, including desire, arousal, orgasm and pain problems, have been shown to be highly prevalent among women around the world. The etiology of these dysfunctions is unclear but associations with health, age, psychological problems, and relationship factors have been identified. Genetic effects explain individual variation in orgasm function to some extent but until now quantitative behavior genetic analyses have not been applied to other sexual functions. In addition, behavior genetics can be applied to exploring the cause of any observed comorbidity between the dysfunctions. Discovering more about the etiology of the dysfunctions may further improve the classification systems which are currently under intense debate. The aims of the present thesis were to evaluate the psychometric properties of a Finnish-language version of a commonly used questionnaire for measuring female sexual function, the Female Sexual Function Index (FSFI), in order to investigate prevalence, comorbidity, and classification, and to explore the balance of genetic and environmental factors in the etiology as well as the associations of a number of biopsychosocial factors with female sexual functions. Female sexual functions were studied through survey methods in a population based sample of Finnish twins and their female siblings. There were two waves of data collection. The first data collection targeted 5,000 female twins aged 33–43 years and the second 7,680 female twins aged 18–33 and their over 18–year-old female siblings (n = 3,983). There was no overlap between the data collections. The combined overall response rate for both data collections was 53% (n = 8,868), with a better response rate in the second (57%) compared to the first (45%). In order to measure female sexual function, the FSFI was used. 
It includes 19 items which measure female sexual function during the previous four weeks in six subdomains: desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain. In line with earlier research in clinical populations, a six-factor solution of the Finnish-language version of the FSFI received support. The internal consistencies of the scales were good to excellent. Some questions were raised about how to avoid overestimating the prevalence of extreme dysfunctions due to women being allocated a score of zero if they had had no sexual activity during the preceding four weeks. The prevalence of female sexual dysfunctions per se ranged from 11% for lubrication dysfunction to 55% for desire dysfunction. The prevalence rates for sexual dysfunction with concomitant sexual distress, in other words sexual disorders, were notably lower, ranging from 7% for lubrication disorder to 23% for desire disorder. The comorbidity between the dysfunctions was substantial, most notably between arousal and lubrication dysfunction, even if these two dysfunctions showed distinct patterns of associations with the other dysfunctions. Genetic influences on individual variation in the six subdomains of the FSFI were modest but significant, ranging from 3–11% for additive genetic effects and 5–18% for nonadditive genetic effects. The rest of the variation in sexual functions was explained by nonshared environmental influences. A correlated-factor model including additive and nonadditive genetic effects and nonshared environmental effects had the best fit. All in all, every correlation between the genetic factors was significant except between lubrication and pain. All correlations between the nonshared environment factors were significant, showing that there is a substantial overlap in genetic and nonshared environmental influences between the dysfunctions.
In general, psychological problems, poor satisfaction with the relationship, sexual distress, and poor partner compatibility were associated with more sexual dysfunctions. Age was confounded with relationship length but had, over and above relationship length, a negative effect on desire and sexual satisfaction and a positive effect on orgasm and pain functions. Alcohol consumption in general was associated with better desire, arousal, lubrication, and orgasm function. Women pregnant with their first child had fewer pain problems than nulliparous nonpregnant women. Multiparous pregnant women had more orgasm problems compared to multiparous nonpregnant women. Having children was associated with fewer orgasm and pain problems. The conclusions were that desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain are separate entities that have distinct associations with a number of different biopsychosocial factors. However, there is also considerable comorbidity between the dysfunctions, which is explained by overlap in additive genetic, nonadditive genetic, and nonshared environmental influences. Sexual dysfunctions are highly prevalent and are not always associated with sexual distress; this relationship might be moderated by a good relationship and compatibility with the partner. Regarding classification, the results support separate diagnoses for subjective arousal and genital arousal as well as the inclusion of pain under sexual dysfunctions.
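For intuition, the classic Falconer approximation shows how twin correlations translate into variance components. The correlations below are hypothetical, and the thesis fitted full correlated models with nonadditive effects rather than this additive shortcut; a negative shared-environment (C) estimate from the shortcut is one textbook sign that nonadditive (dominance) genetic effects are present, consistent with the model reported above.

```python
def falconer_ace(r_mz, r_dz):
    """Classic Falconer estimates: A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ."""
    a = 2 * (r_mz - r_dz)   # additive genetic variance share
    c = 2 * r_dz - r_mz     # shared environment share (negative -> dominance suspected)
    e = 1 - r_mz            # nonshared environment share
    return a, c, e

# Hypothetical MZ and DZ twin correlations for a sexual-function score:
print(falconer_ace(0.20, 0.05))
```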


The current classification system for spinal cord injury (SCI) considers only somatic information and neglects autonomic damage after injury. Heart rate variability (HRV) has the potential to be a valuable measure of cardiac autonomic control after SCI. Five individuals with tetraplegia and four able-bodied controls underwent 1 min continuous ECG recordings during rest, after Metoprolol administration (max dose = 3 x 5 mg) and after Atropine administration (0.02 mg/kg), in both supine and 40° head-up tilt. After Metoprolol administration there was a 61.8% decrease in the LF:HF ratio in the SCI participants, suggesting that the LF:HF ratio is a reflection of cardiac sympathetic outflow. After Atropine administration there was a 99.1% decrease in the HF power in the SCI participants, suggesting that HF power is highly representative of cardiac parasympathetic outflow. There were no significant differences between the SCI and able-bodied participants. Thus, HRV measures are a valid index of cardiac autonomic control after SCI.
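A minimal sketch of how LF and HF band powers (and their ratio) are obtained from a periodogram, here on a synthetic, evenly resampled heart-period signal. Real HRV analysis interpolates the RR tachogram and typically uses Welch's method, over the standard 0.04–0.15 Hz (LF) and 0.15–0.4 Hz (HF) bands:

```python
import math, cmath

fs, n = 4.0, 512                        # 4 Hz resampled series, 128 s of data
t = [i / fs for i in range(n)]
# Synthetic heart-period signal: one LF oscillation (~0.094 Hz, on an exact
# DFT bin) and one stronger HF oscillation (0.25 Hz).
x = [0.03 * math.sin(2 * math.pi * 0.09375 * ti) +
     0.05 * math.sin(2 * math.pi * 0.25 * ti) for ti in t]

def band_power(signal, fs, lo, hi):
    """Sum periodogram power over DFT bins whose frequency lies in [lo, hi)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            coef = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                       for j in range(n))
            total += abs(coef) ** 2 / n
    return total

lf = band_power(x, fs, 0.04, 0.15)
hf = band_power(x, fs, 0.15, 0.40)
print(round(lf / hf, 2))  # (0.03/0.05)**2 = 0.36, the amplitude ratio squared
```

A drop in HF power after Atropine, as reported above, would push this ratio up, while beta-blockade lowers it by suppressing the LF numerator's sympathetic contribution.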


In studies of cognitive processing, the allocation of attention has been consistently linked to subtle, phasic adjustments in autonomic control. Both autonomic control of heart rate and control of the allocation of attention are known to decline with age. It is not known, however, whether characteristic individual differences in autonomic control and the ability to control attention are closely linked. To test this, a measure of parasympathetic function, vagal tone (VT), was computed from cardiac recordings from older and younger adults taken before and during performance of two attention-demanding tasks: the Eriksen visual flanker task and the source memory task. Both tasks elicited event-related potentials (ERPs) that accompany errors, i.e., error-related negativities (ERNs) and error positivities (Pe's). The ERN is a negative deflection in the ERP signal, time-locked to responses made on incorrect trials, likely generated in the anterior cingulate. It is followed immediately by the Pe, a broad, positive deflection which may reflect conscious awareness of having committed an error. Age-attenuation of ERN amplitude has previously been found in paradigms with simple stimulus-response mappings, such as the flanker task, but has rarely been examined in more complex, conceptual tasks. Until now, there have been no reports of its being investigated in a source monitoring task. Age-attenuation of the ERN component was observed in both tasks. Results also indicated that the ERNs generated in these two tasks were generally comparable for young adults. For older adults, however, the ERN from the source monitoring task was not only shallower, but incorporated more frontal processing, apparently reflecting task demands. The error positivities elicited by the two tasks were not comparable, however, and age-attenuation of the Pe was seen only in the more perceptual flanker task.
For younger adults, it was Pe scalp topography that seemed to reflect task demands, being maximal over central parietal areas in the flanker task, but over very frontal areas in the source monitoring task. With respect to vagal tone, in the flanker task, neither the number of errors nor ERP amplitudes were predicted by baseline or on-task vagal tone measures. However, in the more difficult source memory task, lower VT was marginally associated with greater numbers of source memory errors in the older group. Thus, for older adults, relatively low levels of parasympathetic control over cardiac response coincided with poorer source memory discrimination. In both groups, lower levels of baseline VT were associated with larger amplitude ERNs, and smaller amplitude Pe's. Thus, low VT was associated in a conceptual task with a greater "emergency response" to errors, and at the same time, reduced awareness of having made them. The efficiency of an individual's complex cognitive processing was therefore associated with the flexibility of parasympathetic control of heart rate, in response to a cognitively challenging task.

Relevância:

30.00%

Publicador:

Resumo:

Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them. It is also common to see many other types of problems reduced to ordered gene style problems, since many popular heuristics and metaheuristics exist for them. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly on the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
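Ordered-gene (permutation) representations require crossover operators that preserve permutation validity. As a generic illustration of this constraint (not the thesis's Recentering-Restarting GA), here is a simplified variant of order crossover (OX), a standard operator for such encodings:

```python
import random

def order_crossover(p1, p2, rng):
    """Simplified order crossover (OX): copy a random slice from parent 1,
    then fill the remaining positions left-to-right with parent 2's genes
    in their original order. The child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child[i:j + 1]]
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

rng = random.Random(42)
p1 = [0, 1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 0, 6, 2, 1, 5, 4]
child = order_crossover(p1, p2, rng)
assert sorted(child) == sorted(p1)  # still a valid permutation of the cities
print(child)
```

For the travelling salesman problem, each gene would be a city index and fitness the tour length; for fragment assembly, each gene is a DNA fragment and fitness is typically based on overlap scores.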

Relevância:

30.00%

Publicador:

Resumo:

Employees of an organization often use a personal classification scheme to organize the electronic documents under their direct control, which suggests that other employees may have difficulty locating those documents, and that the organization may lose documentation. No empirical study has been conducted to date to verify the extent to which personal classification schemes allow, or even facilitate, the retrieval of electronic documents by third parties, for example in collaborative work or when reconstructing a file. The first objective of our research was to describe the characteristics of personal classification schemes used to organize and classify electronic administrative documents. The second objective was to verify, in a controlled environment, differences in the effectiveness of electronic document retrieval as a function of the classification scheme used. We wanted to verify whether a document could be retrieved with the same effectiveness regardless of the classification scheme used. A two-stage data collection was carried out to meet these objectives. We first identified the structural, logical, and semantic characteristics of 21 classification schemes used by employees of the Université de Montréal to organize and classify the electronic documents under their direct control. We then compared, through a controlled experiment, the ability of a group of 70 participants to retrieve electronic documents using five classification schemes with varied structural, logical, and semantic characteristics.
Three variables were used to measure retrieval effectiveness: the proportion of documents retrieved, the mean time (in seconds) required to retrieve the documents, and the proportion of documents retrieved on the first attempt. The results reveal several structural, logical, and semantic characteristics common to a majority of personal classification schemes: a broad macro-structure; a shallow, complex, and unbalanced structure; grouping by theme; alphabetical ordering of classes; etc. The results of the analysis-of-variance tests reveal significant differences in the effectiveness of electronic document retrieval as a function of the structural, logical, and semantic characteristics of the classification scheme used. A classification scheme characterized by a narrow macro-structure and a logic based partially on a division into activity classes increases the probability of retrieving documents more quickly. Semantically, explicit naming of classes (for example, through the use of definitions, or by avoiding acronyms and abbreviations) increases the probability of successful retrieval. Finally, a classification scheme characterized by a narrow macro-structure, a logic based partially on a division into activity classes, and a semantics that uses few abbreviations increases the probability of retrieving documents on the first attempt.
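The three retrieval-effectiveness measures are straightforward to compute from per-trial records. A minimal sketch with hypothetical data, assuming each trial records whether the document was found, the time taken, and the number of attempts (the denominator for the first-attempt measure is assumed here to be all trials; the study's exact operationalization may differ):

```python
# Hypothetical per-trial records: (document_found, seconds, attempts)
trials = [
    (True, 34.2, 1),
    (True, 58.9, 3),
    (False, 120.0, 4),
    (True, 21.5, 1),
]

found = [t for t in trials if t[0]]
prop_found = len(found) / len(trials)                 # proportion of documents retrieved
mean_time = sum(t[1] for t in found) / len(found)     # mean time, retrieved documents only
prop_first_try = sum(t[2] == 1 for t in found) / len(trials)  # retrieved on first attempt

print(prop_found, round(mean_time, 1), prop_first_try)  # → 0.75 38.2 0.5
```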

Relevância:

30.00%

Publicador:

Resumo:

Adolescent idiopathic scoliosis (AIS) is a deformity of the spine manifested by asymmetry and deformities of the external surface of the trunk. Classification of scoliosis deformities according to curve type is used to plan management of scoliosis patients. Currently, scoliosis curve type is determined from X-ray examination. However, cumulative exposure to X-ray radiation significantly increases the risk of certain cancers. In this paper, we propose a robust system that can classify the scoliosis curve type from a non-invasive acquisition of the 3D trunk surface of the patient. The 3D image of the trunk is divided into patches, and local geometric descriptors characterizing the surface of the back are computed from each patch to form the features. Dimensionality reduction is performed using Principal Component Analysis, and 53 components were retained. A multi-class classifier is then built with a least-squares support vector machine (LS-SVM), which is a kernel classifier. For this study, a new kernel was designed in order to achieve a classifier more robust than those based on the polynomial and Gaussian kernels. The proposed system was validated using data from 103 patients with different scoliosis curve types, diagnosed and classified by an orthopedic surgeon from the X-ray images. The average rate of successful classification was 93.3%, with better prediction rates for the major thoracic and lumbar/thoracolumbar types.
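The classification stage described above can be illustrated with a minimal binary LS-SVM. In its function-estimation form, training reduces to solving a single linear system over the kernel matrix. The sketch below uses a Gaussian kernel and synthetic two-class data; the paper's custom kernel, multi-class extension, and PCA step are omitted, and all names and parameter values here are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma=0.5, C=10.0):
    """Binary LS-SVM (function-estimation form): solve the linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y] for labels y in {-1, +1}."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=0.5):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy two-class data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, X)
print((pred == y).mean())  # training accuracy on the toy data
```

Unlike a standard SVM, the LS-SVM replaces inequality constraints with equality constraints, so all training points receive a coefficient and training amounts to one dense linear solve; a multi-class problem such as the curve-type classification would typically be handled with one-vs-one or one-vs-rest combinations of such binary machines.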