803 results for preference-based measures
Abstract:
Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
Abstract:
Interference from spatially adjacent non-target stimuli evokes ERPs during non-target sub-trials and leads to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study was to reduce adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared in terms of objective measures, such as classification accuracy and information transfer rate, as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the “flash-only” patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
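To make the bit-rate figures mentioned above concrete, the sketch below computes an information transfer rate using the standard Wolpaw definition. The abstract does not state which definition the authors used, so the formula choice, the example accuracy, the number of classes and the selection time are all illustrative assumptions, not the paper's actual numbers.

```python
import math

def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Bits conveyed per selection under the Wolpaw ITR formula.

    Assumes equiprobable classes and errors spread evenly over the remaining
    classes (an idealisation, not necessarily the authors' exact method).
    """
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 1.0 / n_classes:
        return 0.0  # clamp at chance level for this sketch
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

# Illustrative numbers only: a 12-class speller, 85% accuracy, 10 s per selection.
bits = wolpaw_itr_bits_per_selection(n_classes=12, accuracy=0.85)
print(f"{bits:.2f} bits/selection, {bits * 60 / 10:.1f} bits/min")
```

A "practical" bit rate would additionally penalise the time spent correcting wrong selections; the abstract does not specify how the authors did this, so that step is omitted here.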
Abstract:
Background: Children with callous-unemotional (CU) traits, a proposed precursor to adult psychopathy, are characterized by impaired emotion recognition, reduced responsiveness to others’ distress, and a lack of guilt or empathy. Reduced attention to faces, and more specifically to the eye region, has been proposed to underlie these difficulties, although this has never been tested longitudinally from infancy. Attention to faces occurs within the context of dyadic caregiver interactions, and early environment including parenting characteristics has been associated with CU traits. The present study tested whether infants’ preferential tracking of a face with direct gaze and levels of maternal sensitivity predict later CU traits. Methods: Data were analyzed from a stratified random sample of 213 participants drawn from a population-based sample of 1233 first-time mothers. Infants’ preferential face tracking at 5 weeks and maternal sensitivity at 29 weeks were entered into a weighted linear regression as predictors of CU traits at 2.5 years. Results: Controlling for a range of confounders (e.g., deprivation), lower preferential face tracking predicted higher CU traits (p = .001). Higher maternal sensitivity predicted lower CU traits in girls (p = .009), but not boys. No significant interaction between face tracking and maternal sensitivity was found. Conclusions: This is the first study to show that attention to social features during infancy as well as early sensitive parenting predict the subsequent development of CU traits. Identifying such early atypicalities offers the potential for developing parent-mediated interventions in children at risk for developing CU traits.
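As a rough illustration of the weighted linear regression described in the Methods, the sketch below regresses CU traits on preferential face tracking and maternal sensitivity with survey weights and a sex interaction. The variable names, weight column and confounder set are placeholders standing in for the study's data, not its actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the stratified sample: one row per child,
# with design weights from the stratified sampling.
rng = np.random.default_rng(0)
n = 213
df = pd.DataFrame({
    "cu_traits": rng.normal(size=n),             # CU traits at 2.5 years
    "face_tracking": rng.normal(size=n),         # preferential face tracking, 5 weeks
    "maternal_sensitivity": rng.normal(size=n),  # maternal sensitivity, 29 weeks
    "girl": rng.integers(0, 2, size=n),
    "deprivation": rng.normal(size=n),           # example confounder
    "weight": rng.uniform(0.5, 2.0, size=n),     # survey design weights
})

# Weighted linear regression with a sex-by-sensitivity interaction,
# mirroring the reported girls-only effect of maternal sensitivity.
model = smf.wls(
    "cu_traits ~ face_tracking + maternal_sensitivity * girl + deprivation",
    data=df,
    weights=df["weight"],
).fit()
print(model.summary())
```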
Abstract:
Researchers in the field of personalized recommendation currently pay little attention to differences in users' interest in resource attributes, although resource attributes are usually among the most important factors determining user preferences. To address this problem, the paper builds an evaluation model of user interest based on multiple resource attributes, proposes a modified Pearson-Compatibility multi-attribute group decision-making algorithm, and introduces an algorithm for the recommendation problem over the k most similar neighbouring users. Considering the characteristics of collaborative filtering recommendation, the paper addresses the preference differences among similar users, incomplete values, and the convergence of the algorithm, thereby realizing multi-attribute collaborative filtering. Finally, the effectiveness of the algorithm is demonstrated in an experiment on collaborative recommendation among multiple users in a virtual environment. The experimental results show that the algorithm predicts target users' attribute preferences with high accuracy and is robust to deviations and incomplete values.
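As a loose sketch of what multi-attribute collaborative filtering with a Pearson-style similarity can look like, the code below scores user similarity per attribute profile and fills in a target user's missing attribute preferences from the k most similar neighbours. It is not the paper's modified Pearson-Compatibility algorithm; the rating matrix, attribute set and k are invented for illustration.

```python
import numpy as np

def pearson(u: np.ndarray, v: np.ndarray) -> float:
    """Pearson correlation over positions rated by both users (NaN = missing)."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def predict_attribute_preferences(ratings: np.ndarray, target: int, k: int = 3) -> np.ndarray:
    """Fill the target user's missing attribute preferences from the k nearest users.

    ratings: users x attributes matrix of preference scores, NaN where unknown.
    """
    sims = np.array([pearson(ratings[target], ratings[u]) if u != target else -np.inf
                     for u in range(ratings.shape[0])])
    neighbours = np.argsort(sims)[::-1][:k]
    pred = ratings[target].copy()
    for j in range(ratings.shape[1]):
        if np.isnan(pred[j]):
            vals = [(sims[u], ratings[u, j]) for u in neighbours if not np.isnan(ratings[u, j])]
            if vals:
                w = np.array([s for s, _ in vals])
                r = np.array([x for _, x in vals])
                pred[j] = float(w @ r / np.abs(w).sum()) if np.abs(w).sum() > 0 else float(r.mean())
    return pred

# Invented example: 5 users x 4 resource attributes (e.g. price, quality, brand, style).
R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, 4, 1],
              [1, 1, 5, 5],
              [np.nan, 3, 4, 1],
              [5, 4, 4, np.nan]], dtype=float)
print(predict_attribute_preferences(R, target=3, k=2))
```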
Abstract:
This paper finds preference reversals in measurements of ambiguity aversion, even when psychological and informational circumstances are kept constant. The reversals are of a fundamentally different nature from the reversals found before, because they cannot be explained by context-dependent weightings of attributes. We offer an explanation based on Sugden's random-reference theory, with different elicitation methods generating different random reference points. Consequently, measurements of ambiguity aversion that use willingness to pay are confounded by loss aversion and hence overestimate ambiguity aversion.
Abstract:
The interplay between the fat mass- and obesity-associated (FTO) gene variants and diet has been implicated in the development of obesity. The aim of the present analysis was to investigate associations between FTO genotype, dietary intakes and anthropometrics among European adults. Participants in the Food4Me randomised controlled trial were genotyped for FTO genotype (rs9939609), and their dietary intakes and diet quality scores (Healthy Eating Index and PREDIMED-based Mediterranean diet score) were estimated from FFQs. Relationships between FTO genotype, diet and anthropometrics (weight, waist circumference (WC) and BMI) were evaluated at baseline. European adults with the FTO risk genotype had greater WC (AA v. TT: +1·4 cm; P=0·003) and BMI (+0·9 kg/m2; P=0·001) than individuals with no risk alleles. Subjects with the lowest fried food consumption and two copies of the FTO risk variant had on average 1·4 kg/m2 greater BMI (P trend=0·028) and 3·1 cm greater WC (P trend=0·045) compared with individuals with no copies of the risk allele and the lowest fried food consumption. However, there was no evidence of interactions between FTO genotype and dietary intakes on BMI and WC, and thus further research is required to confirm or refute these findings.
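A minimal sketch of the kind of gene-diet interaction test described above (FTO risk alleles by fried-food intake on BMI) is shown below. The data frame, the coding of genotype and intake, and the covariates are placeholders, not the Food4Me analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data: risk_alleles counts the FTO rs9939609 A alleles (0/1/2),
# fried_food is an intake tertile (0 = lowest, 2 = highest).
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "bmi": rng.normal(27, 4, size=n),
    "risk_alleles": rng.integers(0, 3, size=n),
    "fried_food": rng.integers(0, 3, size=n),
    "age": rng.uniform(20, 70, size=n),
    "sex": rng.integers(0, 2, size=n),
})

# Main effects plus the genotype-by-diet interaction term; a non-significant
# interaction coefficient corresponds to the "no evidence of interaction" finding.
fit = smf.ols("bmi ~ risk_alleles * fried_food + age + sex", data=df).fit()
print(fit.params)
print(fit.pvalues[["risk_alleles:fried_food"]])
```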
Abstract:
Past research has documented a substitution effect between real earnings management (RM) and accrual-based earnings management (AM), depending on relative costs. This study contributes to this research by examining whether levels of (and changes in) financial leverage have an impact on this empirically documented trade-off. We hypothesise that in the presence of high leverage, firms that engage in earnings manipulation tactics will exhibit a preference for RM due to a lower possibility—and subsequent costs—of getting caught. We show that leverage levels and increases positively and significantly affect upward RM, with no significant effect on income-increasing AM, while our findings point towards a complementarity effect between unexpected levels of RM and AM for firms with very high leverage levels and changes. This is interpreted as an indication that high leverage could attract heavy outsider scrutiny, making it necessary for firms to use both forms of earnings management in order to achieve earnings targets. Furthermore, we document that equity investors exhibit a significantly stronger penalising reaction to AM vs. RM, indicating that leverage-induced RM is not as easily detectable by market participants as debt-induced AM, despite the fact that the former could imply deviation from optimal business practices.
Abstract:
The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, which includes only a simple stochastic element from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
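To make the notion of "probabilistic measures of the precipitation forecasts" concrete, the sketch below scores ensemble exceedance probabilities against observations with the Brier score at a chosen threshold. The ensemble size, threshold and data are illustrative, and the paper's actual verification suite may differ (for example, it may report Brier skill scores or ROC areas instead).

```python
import numpy as np

def brier_score(ensemble: np.ndarray, obs: np.ndarray, threshold: float) -> float:
    """Brier score for the event 'precipitation exceeds threshold'.

    ensemble: (n_cases, n_members) forecast precipitation amounts.
    obs:      (n_cases,) observed amounts.
    """
    prob = (ensemble > threshold).mean(axis=1)  # forecast probability per case
    outcome = (obs > threshold).astype(float)   # 1 if the event occurred
    return float(np.mean((prob - outcome) ** 2))

# Illustrative numbers: 34 forecast cases, 24 ensemble members, 1 mm threshold.
rng = np.random.default_rng(2)
ens = rng.gamma(shape=0.8, scale=3.0, size=(34, 24))
observed = rng.gamma(shape=0.8, scale=3.0, size=34)
print(f"Brier score at 1 mm: {brier_score(ens, observed, threshold=1.0):.3f}")
```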
Abstract:
1. The use of indicators to identify areas of conservation importance has been challenged on several grounds, but nonetheless retains appeal as no more parsimonious approach exists. Among the many variants, two indicator strategies stand out: the use of indicator species and the use of metrics of landscape structure. While the first has been thoroughly studied, the same cannot be said about the latter. We aimed to contrast the relative efficacy of species-based and landscape-based indicators by: (i) comparing their ability to reflect changes in community integrity at regional and landscape spatial scales, (ii) assessing their sensitivity to changes in data resolution, and (iii) quantifying the degree to which indicators that are generated in one landscape or at one spatial scale can be transferred to additional landscapes or scales. 2. We used data from more than 7000 bird captures in 65 sites from six 10 000-ha landscapes with different proportions of forest cover in the Atlantic Forest of Brazil. Indicator species and landscape-based indicators were tested in terms of how effective they were in reflecting changes in community integrity, defined as deviations in bird community composition from control areas. 3. At the regional scale, indicator species provided more robust depictions of community integrity than landscape-based indicators. At the landscape scale, however, landscape-based indicators performed more effectively, more consistently and were also more transferable among landscapes. The effectiveness of high resolution landscape-based indicators was reduced by just 12% when these were used to explain patterns of community integrity in independent data sets. By contrast, the effectiveness of species-based indicators was reduced by 33%. 4. Synthesis and applications. The use of indicator species proved to be effective; however their results were variable and sensitive to changes in scale and resolution, and their application requires extensive and time-consuming field work. Landscape-based indicators were not only effective but were also much less context-dependent. The use of landscape-based indicators may allow the rapid identification of priority areas for conservation and restoration, and indicate which restoration strategies should be pursued, using remotely sensed imagery. We suggest that landscape-based indicators might often be a better, simpler, and cheaper strategy for informing decisions in conservation.
Abstract:
We describe the epidemiology of malaria in a frontier agricultural settlement in Brazilian Amazonia. We analysed the incidence of slide-confirmed symptomatic infections diagnosed between 2001 and 2006 in a cohort of 531 individuals (2281.53 person-years of follow-up) and parasite prevalence data derived from four cross-sectional surveys. Overall, the incidence rates of Plasmodium vivax and P. falciparum were 20.6/100 and 6.8/100 person-years at risk, respectively, with a marked decline in the incidence of both species (81.4 and 56.8%, respectively) observed between 2001 and 2006. PCR revealed 5.4-fold more infections than conventional microscopy in population-wide cross-sectional surveys carried out between 2004 and 2006 (average prevalence, 11.3 vs. 2.0%). Only 27.2% of PCR-positive (but 73.3% of slide-positive) individuals had symptoms when enrolled, indicating that asymptomatic carriage of low-grade parasitaemias is a common phenomenon in frontier settlements. A circular cluster comprising 22.3% of the households, all situated in the area of most recent occupation, accounted for 69.1% of all malaria infections diagnosed during the follow-up, with malaria incidence decreasing exponentially with distance from the cluster centre. By targeting one-quarter of the households with selective indoor spraying or other house-protection measures, malaria incidence could be reduced by more than two-thirds in this community. (C) 2010 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
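The statement that incidence decreases exponentially with distance from the cluster centre can be illustrated with a simple curve fit. The distances and incidence values below are made up, and the functional form I(d) = I0 * exp(-d/λ) is one plausible parameterisation rather than the authors' fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def incidence_model(d, i0, lam):
    """Exponential decay of malaria incidence with distance d (km) from the cluster centre."""
    return i0 * np.exp(-d / lam)

# Hypothetical household-bin data: distance from cluster centre (km) vs.
# incidence (episodes per 100 person-years).
distance = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
incidence = np.array([55.0, 40.0, 22.0, 14.0, 9.0, 4.0, 2.0])

(i0, lam), _ = curve_fit(incidence_model, distance, incidence, p0=(50.0, 1.0))
print(f"fitted I0 = {i0:.1f} per 100 person-years, decay length = {lam:.2f} km")
```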
Abstract:
A comparison of dengue virus (DENV) antibody levels in paired serum samples collected from predominantly DENV-naive residents in an agricultural settlement in Brazilian Amazonia (baseline seroprevalence, 18.3%) showed a seroconversion rate of 3.67 episodes/100 person-years at risk during 12 months of follow-up. Multivariate analysis identified male sex, poverty, and migration from extra-Amazonian states as significant predictors of baseline DENV seropositivity, whereas male sex, a history of clinical diagnosis of dengue fever, and travel to an urban area predicted subsequent seroconversion. The laboratory surveillance of acute febrile illnesses implemented at the study site and in a nearby town between 2004 and 2006 confirmed 11 DENV infections among 102 episodes studied with DENV IgM detection, reverse transcriptase-polymerase chain reaction, and virus isolation; DENV-3 was isolated. Because DENV exposure is associated with migration or travel, personal protection measures when visiting high-risk urban areas may reduce the incidence of DENV infection in this rural population.
Abstract:
Sensitivity and specificity are measures that allow us to evaluate the performance of a diagnostic test. In practice, it is common for the true disease state of a proportion of selected individuals to remain unverified, since verification may require an invasive procedure such as biopsy; this occurs, for example, in the diagnosis of prostate cancer, and in other situations where verification is risky, impracticable, unethical, or too costly. In such cases, it is common to evaluate diagnostic tests using only the information from verified individuals, a procedure that can lead to biased results, known as workup (verification) bias. In this paper, we introduce a Bayesian approach to estimate the sensitivity and specificity of two diagnostic tests considering both verified and unverified individuals, a result that generalizes the usual situation based on a single diagnostic test.
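As a rough numerical illustration of the workup (verification) bias described above, the simulation below compares sensitivity and specificity estimated from verified individuals only with the true values. The prevalence, verification probabilities and test characteristics are invented, and the snippet is not the paper's Bayesian estimator, which jointly models two tests and the unverified subjects.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_sens, true_spec, prevalence = 0.85, 0.90, 0.20

disease = rng.random(n) < prevalence
# Test result: positive with prob. true_sens if diseased, 1 - true_spec otherwise.
test_pos = np.where(disease, rng.random(n) < true_sens, rng.random(n) > true_spec)

# Verification (e.g. biopsy) is requested far more often after a positive test.
p_verify = np.where(test_pos, 0.90, 0.10)
verified = rng.random(n) < p_verify

# Naive estimates that condition on verified individuals only (workup bias).
v_dis, v_pos = disease[verified], test_pos[verified]
naive_sens = (v_pos & v_dis).sum() / v_dis.sum()
naive_spec = (~v_pos & ~v_dis).sum() / (~v_dis).sum()
print(f"true sens/spec  : {true_sens:.2f} / {true_spec:.2f}")
print(f"naive (verified): {naive_sens:.2f} / {naive_spec:.2f}")
```

Running this shows the verified-only sensitivity inflated and specificity deflated relative to the truth, which is exactly the bias the Bayesian approach is designed to correct.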
Abstract:
Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism for both enclosing interactions among system components and coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. This architecture provides built-in Communicating Sequential Processes (CSPs) and predefined channels to coordinate exception handling of the user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved by using the FDR (Failure Divergence Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and proved as users define components with normal and exceptional behaviors. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Objective: To design, develop and set up a web-based system that enables clinicians to graphically visualize the upper limb motor performance (ULMP) of Parkinson’s disease (PD) patients.

Background: Sixty-five patients diagnosed with advanced PD used a test battery, implemented on a touch-screen handheld computer, in their home environments over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP obtained through a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to tap two buttons alternately, as fast and as accurately as possible, first using the right hand and then the left hand; the test duration was 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral using the dominant hand, and the test was repeated 3 times per test occasion. In total, the study database consisted of symptom assessments from 10079 test occasions.

Methods: Visualization of ULMP: The web-based system is used by two neurologists to assess the performance of PD patients during the motor tests collected over the course of the study. The system employs animations, scatter plots and time-series graphs to visualize the patients’ ULMP for the neurologists. Performance during the spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe in real time accelerations, hesitations and sharp changes during the actual drawing process. Tapping performance is visualized by displaying different types of graphs; the information presented includes the distribution of taps over the two buttons, horizontal tap distance vs. time, vertical tap distance vs. time, and tapping reaction time over the length of the test.

Assessments: Different scales are used by the neurologists to assess the observed impairments. For spiral drawing performance, the neurologists rated, firstly, ‘impairment’ on a 0 (no impairment) to 10 (extremely severe) scale; secondly, three kinematic properties, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, on a 0 (normal) to 4 (extremely severe) scale; and thirdly, the probable ‘cause’ of the impairment, chosen from three categories: Tremor, Bradykinesia/Rigidity and Dyskinesia. For tapping performance, a 0 (normal) to 4 (extremely severe) scale is used to rate four tapping properties, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, followed by the ‘global tapping severity’ (GTS). To establish a common basis for assessment, one neurologist (DN) first performed preliminary ratings by browsing the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each ‘cause’ category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates for subsequent ratings. In a separate track, the system randomly selected one test occasion per patient and visualized its items, that is, the tapping and spiral drawings, to the two neurologists.

Statistical methods: Inter-rater agreement was assessed using the weighted Kappa coefficient. The internal consistency of the properties of the tapping and spiral drawing tests was assessed using Cronbach’s α. One-way ANOVA followed by Tukey’s multiple comparisons test was used to assess whether the mean scores of the tapping and spiral drawing properties differed among GTS and ‘cause’ categories, respectively.

Results: When rating tapping graphs, inter-rater agreement (Kappa) was as follows: GTS (0.61), ‘tapping speed’ (0.89), ‘accuracy’ (0.66), ‘fatigue’ (0.57) and ‘arrhythmia’ (0.33). The poor inter-rater agreement when assessing ‘arrhythmia’ may result from the two raters attending to different features of the graphs. When rating animated spirals, the two raters agreed very well on the severity of the spiral drawings, that is, ‘impairment’ (0.85) and ‘irregularity’ (0.72). However, agreement was poor for ‘cause’ (0.38) and for time-related properties such as ‘drawing speed’ (0.25) and ‘hesitation’ (0.21). The tapping properties, that is, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, showed satisfactory internal consistency, with a Cronbach’s α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing GTS level, and the mean scores of the four properties were significantly different from each other, albeit only at certain levels. In contrast to the tapping properties, the kinematic properties of the spirals, that is, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, showed questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and more hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals; the two ‘cause’ categories had similar mean scores for ‘impairment’ and ‘irregularity’.

Conclusions: In contrast to current approaches used in clinical settings for the assessment of PD symptoms, this system enables clinicians to animate, easily and realistically, the ULMP of patients who remain in their own homes. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In the future, we plan to use these manual ratings to develop and validate computer methods that automate the assessment of ULMP in PD patients.
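For reference, the sketch below shows one common way to compute the two reliability statistics named under Statistical methods above: a weighted Kappa for inter-rater agreement and Cronbach's α for internal consistency. The rating arrays are invented, and the weighting scheme (linear rather than quadratic) is an assumption since the abstract does not specify it.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented ordinal ratings (0-4) of the same tests by two raters.
rater_dn = np.array([0, 1, 2, 2, 3, 4, 1, 0, 3, 2])
rater_pg = np.array([0, 1, 2, 3, 3, 4, 2, 0, 2, 2])
print("weighted kappa:", cohen_kappa_score(rater_dn, rater_pg, weights="linear"))

# Invented per-test scores for four tapping properties (speed, accuracy, fatigue, arrhythmia).
scores = np.array([[1, 1, 2, 1],
                   [3, 2, 3, 2],
                   [0, 1, 1, 0],
                   [4, 3, 4, 3],
                   [2, 2, 2, 3]], dtype=float)
print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))
```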