813 results for preference-based measures


Relevance: 30.00%

Abstract:

Researchers in the field of personalized recommendation currently pay little attention to users' differing interests in resource attributes, although resource attributes are usually among the most important factors determining user preferences. To address this problem, the paper builds an evaluation model of user interest based on multiple resource attributes, proposes a modified Pearson-Compatibility multi-attribute group decision-making algorithm, and introduces an algorithm to solve the recommendation problem for the k most similar (k-neighbor) users. Considering the characteristics of collaborative filtering recommendation, the paper addresses the preference differences among similar users, incomplete values, and premature convergence of the algorithm, thereby realizing multi-attribute collaborative filtering. Finally, the effectiveness of the algorithm is demonstrated in an experiment on collaborative recommendation among multiple users in a virtual environment. The experimental results show that the algorithm predicts target users' attribute preferences with high accuracy and is strongly robust to deviating and incomplete values.
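The user-similarity step underlying such multi-attribute collaborative filtering can be sketched with a plain Pearson correlation over attribute-preference vectors (a minimal illustration only, not the paper's modified Pearson-Compatibility algorithm; the function and data are hypothetical):

```python
import math

def pearson(u, v):
    """Pearson correlation between two users' attribute-preference vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (math.sqrt(sum((a - mu) ** 2 for a in u))
           * math.sqrt(sum((b - mv) ** 2 for b in v)))
    return num / den if den else 0.0

# Hypothetical preferences over attributes (e.g. price, genre, length) on a 1-5 scale.
alice = [5, 3, 4]
bob   = [4, 2, 3]
print(round(pearson(alice, bob), 6))  # 1.0
```

The modified algorithm in the paper additionally reconciles group preferences and handles incomplete values, which this sketch omits.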

Relevance: 30.00%

Abstract:

This paper finds preference reversals in measurements of ambiguity aversion, even when psychological and informational circumstances are kept constant. The reversals are of a fundamentally different nature from those found before, because they cannot be explained by context-dependent weightings of attributes. We offer an explanation based on Sugden's random-reference theory, with different elicitation methods generating different random reference points. It follows that measurements of ambiguity aversion that use willingness to pay are confounded by loss aversion and hence overestimate ambiguity aversion.

Relevance: 30.00%

Abstract:

The interplay between the fat mass- and obesity-associated (FTO) gene variants and diet has been implicated in the development of obesity. The aim of the present analysis was to investigate associations between FTO genotype, dietary intakes and anthropometrics among European adults. Participants in the Food4Me randomised controlled trial were genotyped for FTO genotype (rs9939609) and their dietary intakes, and diet quality scores (Healthy Eating Index and PREDIMED-based Mediterranean diet score) were estimated from FFQ. Relationships between FTO genotype, diet and anthropometrics (weight, waist circumference (WC) and BMI) were evaluated at baseline. European adults with the FTO risk genotype had greater WC (AA v. TT: +1·4 cm; P=0·003) and BMI (+0·9 kg/m2; P=0·001) than individuals with no risk alleles. Subjects with the lowest fried food consumption and two copies of the FTO risk variant had on average 1·4 kg/m2 greater BMI (Ptrend=0·028) and 3·1 cm greater WC (Ptrend=0·045) compared with individuals with no copies of the risk allele and with the lowest fried food consumption. However, there was no evidence of interactions between FTO genotype and dietary intakes on BMI and WC, and thus further research is required to confirm or refute these findings.

Relevance: 30.00%

Abstract:

Past research has documented a substitution effect between real earnings management (RM) and accrual-based earnings management (AM), depending on relative costs. This study contributes to this research by examining whether levels of (and changes in) financial leverage have an impact on this empirically documented trade-off. We hypothesise that in the presence of high leverage, firms that engage in earnings manipulation tactics will exhibit a preference for RM due to a lower possibility, and lower subsequent costs, of getting caught. We show that both leverage levels and leverage increases positively and significantly affect upward RM, with no significant effect on income-increasing AM, while our findings point towards a complementarity effect between unexpected levels of RM and AM for firms with very high leverage levels and changes. This is interpreted as an indication that high leverage could attract heavy outsider scrutiny, making it necessary for firms to use both forms of earnings management in order to achieve earnings targets. Furthermore, we document that equity investors exhibit a significantly stronger penalising reaction to AM vs. RM, indicating that leverage-induced RM is not as easily detectable by market participants as debt-induced AM, despite the fact that the former could imply deviation from optimal business practices.

Relevance: 30.00%

Abstract:

The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed against the standard convection scheme, whose only stochasticity comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
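Probabilistic precipitation forecasts of this kind are commonly scored with the Brier score, the mean squared difference between forecast probability and observed binary outcome (a generic sketch; the abstract does not name its specific probabilistic measures, and the numbers below are invented):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes
    (0 is a perfect score)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Forecast probability = fraction of the 24 ensemble members exceeding a rain
# threshold (hypothetical values), versus whether rain above threshold occurred.
probs = [18 / 24, 6 / 24, 0 / 24, 24 / 24]
obs = [1, 0, 0, 1]
print(brier_score(probs, obs))  # 0.03125
```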

Relevance: 30.00%

Abstract:

1. The use of indicators to identify areas of conservation importance has been challenged on several grounds, but nonetheless retains appeal as no more parsimonious approach exists. Among the many variants, two indicator strategies stand out: the use of indicator species and the use of metrics of landscape structure. While the former has been thoroughly studied, the same cannot be said about the latter. We aimed to contrast the relative efficacy of species-based and landscape-based indicators by: (i) comparing their ability to reflect changes in community integrity at regional and landscape spatial scales, (ii) assessing their sensitivity to changes in data resolution, and (iii) quantifying the degree to which indicators that are generated in one landscape or at one spatial scale can be transferred to additional landscapes or scales. 2. We used data from more than 7000 bird captures in 65 sites from six 10 000-ha landscapes with different proportions of forest cover in the Atlantic Forest of Brazil. Indicator species and landscape-based indicators were tested in terms of how effective they were in reflecting changes in community integrity, defined as deviations in bird community composition from control areas. 3. At the regional scale, indicator species provided more robust depictions of community integrity than landscape-based indicators. At the landscape scale, however, landscape-based indicators performed more effectively, more consistently and were also more transferable among landscapes. The effectiveness of high-resolution landscape-based indicators was reduced by just 12% when these were used to explain patterns of community integrity in independent data sets. By contrast, the effectiveness of species-based indicators was reduced by 33%. 4. Synthesis and applications.
The use of indicator species proved to be effective; however, their results were variable and sensitive to changes in scale and resolution, and their application requires extensive and time-consuming field work. Landscape-based indicators were not only effective but also much less context-dependent. Their use may allow the rapid identification of priority areas for conservation and restoration, and indicate which restoration strategies should be pursued, using remotely sensed imagery. We suggest that landscape-based indicators might often be a better, simpler, and cheaper strategy for informing decisions in conservation.

Relevance: 30.00%

Abstract:

We describe the epidemiology of malaria in a frontier agricultural settlement in Brazilian Amazonia. We analysed the incidence of slide-confirmed symptomatic infections diagnosed between 2001 and 2006 in a cohort of 531 individuals (2281.53 person-years of follow-up) and parasite prevalence data derived from four cross-sectional surveys. Overall, the incidence rates of Plasmodium vivax and P. falciparum were 20.6/100 and 6.8/100 person-years at risk, respectively, with a marked decline in the incidence of both species (81.4 and 56.8%, respectively) observed between 2001 and 2006. PCR revealed 5.4-fold more infections than conventional microscopy in population-wide cross-sectional surveys carried out between 2004 and 2006 (average prevalence, 11.3 vs. 2.0%). Only 27.2% of PCR-positive (but 73.3% of slide-positive) individuals had symptoms when enrolled, indicating that asymptomatic carriage of low-grade parasitaemias is a common phenomenon in frontier settlements. A circular cluster comprising 22.3% of the households, all situated in the area of most recent occupation, accounted for 69.1% of all malaria infections diagnosed during the follow-up, with malaria incidence decreasing exponentially with distance from the cluster centre. By targeting one-quarter of the households with selective indoor spraying or other house-protection measures, malaria incidence could be reduced by more than two-thirds in this community. (C) 2010 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
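The incidence rates quoted are simple case counts scaled by person-time at risk. As a quick sanity check (the episode count below is back-calculated from the reported rate, not stated in the abstract):

```python
def incidence_per_100py(cases, person_years):
    """Incidence rate per 100 person-years at risk."""
    return 100 * cases / person_years

# Roughly 470 P. vivax episodes over 2281.53 person-years reproduces the
# reported 20.6/100 person-years.
print(round(incidence_per_100py(470, 2281.53), 1))  # 20.6
```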

Relevance: 30.00%

Abstract:

A comparison of dengue virus (DENV) antibody levels in paired serum samples collected from predominantly DENV-naive residents of an agricultural settlement in Brazilian Amazonia (baseline seroprevalence, 18.3%) showed a seroconversion rate of 3.67 episodes/100 person-years at risk during 12 months of follow-up. Multivariate analysis identified male sex, poverty, and migration from extra-Amazonian states as significant predictors of baseline DENV seropositivity, whereas male sex, a history of clinical diagnosis of dengue fever, and travel to an urban area predicted subsequent seroconversion. The laboratory surveillance of acute febrile illnesses implemented at the study site and in a nearby town between 2004 and 2006 confirmed 11 DENV infections among 102 episodes studied with DENV IgM detection, reverse transcriptase-polymerase chain reaction, and virus isolation; DENV-3 was isolated. Because DENV exposure is associated with migration or travel, personal protection measures when visiting high-risk urban areas may reduce the incidence of DENV infection in this rural population.

Relevance: 30.00%

Abstract:

Sensitivity and specificity are measures that allow us to evaluate the performance of a diagnostic test. In practice, it is common to have situations where a proportion of selected individuals cannot have the true disease state verified, since verification may require an invasive procedure, as occurs with biopsy. This happens, as a special case, in the diagnosis of prostate cancer, and in any other situation in which verification is risky, impracticable, unethical, or too costly. In such cases, it is common to use diagnostic tests based only on the information from verified individuals. This procedure can lead to biased results, known as workup bias. In this paper, we introduce a Bayesian approach to estimate the sensitivity and specificity of two diagnostic tests considering both verified and unverified individuals, a result that generalizes the usual situation based on a single diagnostic test.
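A minimal conjugate sketch of the Bayesian idea, assuming a Beta prior for sensitivity and counting only verified diseased subjects (the paper's actual model is richer, also incorporating unverified individuals and two tests):

```python
# With a Beta(a, b) prior and s true positives among n verified diseased
# subjects, the posterior for sensitivity is Beta(a + s, b + n - s).
def posterior_mean(s, n, a=1.0, b=1.0):
    """Posterior mean of sensitivity under a Beta(a, b) prior."""
    return (a + s) / (a + b + n)

# Hypothetical counts: 45 of 50 verified diseased subjects test positive.
print(round(posterior_mean(45, 50), 3))  # 0.885
```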

Relevance: 30.00%

Abstract:

Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism both for enclosing interactions among system components and for coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. The architecture provides built-in Communicating Sequential Processes (CSP) components and predefined channels to coordinate the exception handling of user-defined components. Hence some safety properties concerning action scoping and concurrent exception handling can be proved using the FDR (Failure Divergence Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used and verified as users define components with normal and exceptional behaviors. (C) 2010 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Objective To design, develop and set up a web-based system enabling clinicians to visualize graphically the upper limb motor performance (ULMP) of Parkinson's disease (PD) patients. Background Sixty-five patients diagnosed with advanced PD used a test battery, implemented in a touch-screen handheld computer, in their home environments over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP obtained from a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to tap two buttons alternately, as fast and as accurately as possible, first with the right hand and then with the left; each test lasted 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral with the dominant hand, and the test was repeated three times per test occasion. In total, the study database consisted of symptom assessments from 10079 test occasions. Methods Visualization of ULMP The web-based system is used by two neurologists to assess the performance of PD patients during the motor tests collected over the course of the study. The system employs animations, scatter plots and time-series graphs to visualize the patients' ULMP for the neurologists. Performance during spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe accelerations, hesitations and sharp changes during the actual drawing process in real time. Tapping performance is visualized by displaying different types of graphs. The information presented includes the distribution of taps over the two buttons, horizontal tap distance vs. time, vertical tap distance vs. time, and tapping reaction time over the test length. Assessments Different scales are utilized by the neurologists to assess the observed impairments.
For the spiral drawing performance, the neurologists rated, first, the ‘impairment’ on a 0 (no impairment) – 10 (extremely severe) scale; second, three kinematic properties, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, on a 0 (normal) – 4 (extremely severe) scale; and third, the probable ‘cause’ of the impairment, with three choices: Tremor, Bradykinesia/Rigidity and Dyskinesia. For the tapping performance, a 0 (normal) – 4 (extremely severe) scale was used to rate first four tapping properties, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, and then the ‘global tapping severity’ (GTS). To achieve a common basis for assessment, one neurologist (DN) initially performed preliminary ratings by browsing the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each ‘cause’ category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates for subsequent ratings. In a separate track, the system randomly selected one test occasion per patient and visualized its items, that is, tapping and spiral drawings, to the two neurologists. Statistical methods Inter-rater agreement was assessed using the weighted Kappa coefficient. The internal consistency of the tapping and spiral drawing properties was assessed using Cronbach's α. A one-way ANOVA followed by Tukey's multiple comparisons test was used to test whether the mean scores of the tapping and spiral drawing properties differed among GTS and ‘cause’ categories, respectively. Results When rating tapping graphs, inter-rater agreement (Kappa) was as follows: GTS (0.61), ‘tapping speed’ (0.89), ‘accuracy’ (0.66), ‘fatigue’ (0.57) and ‘arrhythmia’ (0.33). The poor inter-rater agreement when assessing ‘arrhythmia’ may result from the two raters attending to different features of the graphs.
When rating animated spirals, both raters agreed very well when assessing the severity of spiral drawings, that is, ‘impairment’ (0.85) and ‘irregularity’ (0.72). However, agreement was poor when assessing ‘cause’ (0.38) and time-dependent properties like ‘drawing speed’ (0.25) and ‘hesitation’ (0.21). The tapping properties, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, had satisfactory internal consistency, with a Cronbach's α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing levels of GTS. The mean scores of the four properties were significantly different from each other only at certain levels. In contrast to the tapping properties, the kinematic properties of spirals, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, had questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals. Both ‘cause’ categories had similar mean scores of ‘impairment’ and ‘irregularity’. Conclusions In contrast to current approaches used in clinical settings for the assessment of PD symptoms, this system enables clinicians to animate easily and realistically the ULMP of patients who remain at home. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In future work, we plan to use these manual ratings to develop and validate computer methods that automate the assessment of the ULMP of PD patients.
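The inter-rater agreement figures of the kind reported above can be computed with a linearly weighted Cohen's kappa; a self-contained sketch (the ratings below are invented, and the study may have used a different weighting scheme):

```python
def weighted_kappa(r1, r2, k):
    """Linearly weighted Cohen's kappa for two raters using categories 0..k-1."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    row = [sum(obs[i]) for i in range(k)]               # rater 1 marginals
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Two raters scoring 'global tapping severity' on the 0-4 scale (hypothetical).
rater1 = [0, 1, 2, 3, 4, 2, 1]
rater2 = [0, 1, 2, 4, 4, 2, 0]
print(round(weighted_kappa(rater1, rater2, 5), 2))  # 0.82
```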

Relevance: 30.00%

Abstract:

Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. Discovery of hidden patterns and relationships often goes unexploited, yet advanced data mining techniques can remedy this scenario. This thesis mainly deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output was evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Furthermore, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
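The two evaluation measures have direct confusion-matrix definitions; a small sketch with hypothetical counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Confusion-matrix definitions of the two binary-classification measures."""
    sensitivity = tp / (tp + fn)  # recall: share of actual positives correctly identified
    specificity = tn / (tn + fp)  # share of actual negatives correctly identified
    return sensitivity, specificity

# Hypothetical counts for a chronic-renal-disease classifier.
print(sensitivity_specificity(tp=80, fn=20, tn=90, fp=10))  # (0.8, 0.9)
```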

Relevance: 30.00%

Abstract:

The rapid growth of urban areas has a significant impact on traffic and transportation systems. New management policies and planning strategies are clearly necessary to cope with the more limited capacity of existing road networks. The concept of Intelligent Transportation System (ITS) arises in this scenario; rather than attempting to increase road capacity by means of physical modifications to the infrastructure, the premise of ITS relies on the use of advanced communication and computer technologies to handle today's traffic and transportation facilities. Influencing users' behaviour patterns is a challenge that has stimulated much research in the ITS field, where human factors gain great importance for modelling, simulating, and assessing such an innovative approach. This work is aimed at using Multi-agent Systems (MAS) to represent traffic and transportation systems in the light of the new performance measures brought about by ITS technologies. Agent-based approaches are well suited to representing those components of a system that are geographically and functionally distributed, as most components in traffic and transportation are. A BDI (beliefs, desires, and intentions) architecture is presented as an alternative to the traditional models used to represent driver behaviour within microscopic simulation, allowing for an explicit representation of users' mental states. Basic concepts of ITS and MAS are presented, as well as some application examples related to the subject. This has motivated the extension of an existing microscopic simulation framework to incorporate MAS features and enhance the representation of drivers. In this way, demand is generated from a population of agents as the result of their daily decisions on route and departure time.
The extended simulation model, which now supports the interaction of BDI driver agents, was effectively implemented, and different experiments were performed to test this approach in commuter scenarios. MAS provides a process-driven approach that fosters the easy construction of modular, robust, and scalable models, characteristics that former result-driven approaches lack. Its abstraction premises allow for a closer association between the model and its practical implementation. Uncertainty and variability are addressed in a straightforward manner, as cognitive architectures such as the BDI approach used in this work provide an easier representation of human-like behaviours within the driver structure. In this way, MAS extends microscopic traffic simulation to better address the complexity inherent in ITS technologies.
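A BDI agent's deliberation cycle can be caricatured in a few lines (an illustrative sketch only; the class and field names are invented, and the framework's actual driver model is far richer):

```python
class BDIDriver:
    """Toy BDI commuter agent: beliefs map routes to expected travel times."""

    def __init__(self, beliefs):
        self.beliefs = beliefs                      # e.g. {"A": 35, "B": 28} minutes
        self.desires = ["minimise travel time"]     # high-level goals
        self.intentions = {}                        # committed plan

    def deliberate(self):
        # Commit to the route currently believed to be fastest.
        route = min(self.beliefs, key=self.beliefs.get)
        self.intentions = {"route": route, "departure": "08:00"}
        return self.intentions

agent = BDIDriver({"A": 35, "B": 28})
print(agent.deliberate())  # {'route': 'B', 'departure': '08:00'}
```

In a simulation loop, beliefs would be updated daily from experienced travel times, which is how such agents generate time-varying demand.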

Relevance: 30.00%

Abstract:

This paper presents semiparametric estimators of changes in inequality measures of a dependent variable distribution, taking into account possible changes in the distributions of covariates. When we do not impose parametric assumptions on the conditional distribution of the dependent variable given covariates, this problem becomes equivalent to estimating the distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first non-parametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. In the second step, using the inverse probability weighting method to estimate parameters of the marginal distribution of potential outcomes, weighted sample versions of inequality measures are computed. Root-N consistency, asymptotic normality and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the finite-sample behavior of the estimator derived in the paper. We also apply our method to the evaluation of a job training program.
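The two-step estimator can be sketched as follows, shown here for weighted means of the potential outcomes (the paper plugs the same inverse-probability weights into inequality measures such as the Gini coefficient; the data and constant propensity score below are toy values):

```python
def ipw_means(y, d, pscore):
    """Inverse-probability-weighted means of treated (mu1) and untreated (mu0)
    potential outcomes; pscore holds the estimated propensity scores."""
    n = len(y)
    mu1 = sum(di * yi / e for yi, di, e in zip(y, d, pscore)) / n
    mu0 = sum((1 - di) * yi / (1 - e) for yi, di, e in zip(y, d, pscore)) / n
    return mu1, mu0

# Toy data: half the sample treated, constant propensity score of 0.5.
y, d, e = [10.0, 20.0, 30.0, 40.0], [1, 1, 0, 0], [0.5] * 4
print(ipw_means(y, d, e))  # (15.0, 35.0)
```

An Inequality Treatment Effect would then be the difference between an inequality measure computed on the reweighted treated outcomes and the same measure on the reweighted untreated outcomes.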

Relevance: 30.00%

Abstract:

In this paper we construct sunspot equilibria that arise from chaotic deterministic dynamics. These equilibria are robust and therefore observable. We prove that they may be learned by a simple rule based on the histograms of past state variables. This work gives the theoretical justification for deterministic models that might compete with stochastic models to explain real data.
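The learning rule can be illustrated with a chaotic deterministic map: agents form beliefs from a histogram of past states (an illustrative sketch; the logistic map and bin count are our choices, not the paper's):

```python
import collections

def histogram_rule(series, bins=10):
    """Learn beliefs as the empirical frequency of past states in each bin of [0, 1]."""
    counts = collections.Counter(min(int(x * bins), bins - 1) for x in series)
    n = len(series)
    return [counts[b] / n for b in range(bins)]

# Chaotic deterministic dynamics on [0, 1]: the logistic map x' = 4x(1 - x).
x, series = 0.2, []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

beliefs = histogram_rule(series)
print(abs(sum(beliefs) - 1.0) < 1e-9)  # True
```

Although the dynamics are fully deterministic, the learned histogram looks like a probability distribution, which is the sense in which such models can compete with stochastic ones.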