758 results for Determinant-based sparseness measure
Abstract:
Background: At present, prostate cancer screening (PCS) guidelines require a discussion of risks, benefits, alternatives, and personal values, making decision aids an important tool to help convey information and to help clarify values. Objective: The overall goal of this study is to provide evidence of the reliability and validity of a PCS anxiety measure and the Decisional Conflict Scale (DCS). Methods: Using data from a randomized, controlled PCS decision aid trial that measured PCS anxiety at baseline and the DCS at baseline (T0) and at two weeks (T2), four psychometric properties were assessed: (1) internal consistency reliability, indicated by factor analysis, intraclass correlations, and Cronbach's α; (2) construct validity, indicated by patterns of Pearson correlations among subscales; (3) discriminant validity, indicated by the measure's ability to discriminate between undecided men and those with a definite screening intention; and (4) factor validity and invariance, using confirmatory factor analyses (CFA). Results: The PCS anxiety measure had adequate internal consistency reliability and good construct and discriminant validity. CFAs indicated that the 3-factor model did not have adequate fit. CFAs for a general PCS anxiety measure and a PSA anxiety measure indicated adequate fit. The general PCS anxiety measure was invariant across clinics. The DCS had adequate internal consistency reliability except for the support subscale and had adequate discriminant validity. Good construct validity was found at the private clinic, but only for the feeling-informed subscale at the public clinic. The traditional DCS did not have adequate fit at T0 or at T2. The alternative DCS had adequate fit at T0 but was not identified at T2. Factor loadings indicated that two subscales, feeling informed and feeling clear about values, were not distinct factors. Conclusions: Our general PCS anxiety measure can be used in PCS decision aid studies. The alternative DCS may be appropriate for men eligible for PCS. Implications: More emphasis needs to be placed on the development of PCS anxiety items relating to testing procedures. We recommend that the two DCS versions be validated in other samples of men eligible for PCS and in other health care decisions that involve uncertainty.
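For reference, Cronbach's α, the internal-consistency statistic used in this validation study, can be computed from a respondents-by-items score matrix. A minimal NumPy sketch (the item data and function name are illustrative, not part of the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 5 respondents answering a 3-item subscale
scores = np.array([[3, 4, 3],
                   [2, 2, 3],
                   [5, 4, 4],
                   [1, 2, 1],
                   [4, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```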
Abstract:
Recommender systems play an important role in reducing the negative impact of information overload on those websites where users can vote for their preferences on items. The most common technique for implementing the recommendation mechanism is collaborative filtering, in which it is essential to discover the users most similar to the one for whom recommendations are to be made. The hypothesis of this paper is that the results obtained by applying traditional similarity measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calculate the singularity which exists, for each item, in the votes cast by each pair of users being compared. As such, the greater the singularity of the votes cast by two given users, the greater its impact on the similarity. The results, tested on the MovieLens, Netflix and FilmAffinity databases, corroborate the excellent behaviour of the proposed singularity measure.
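The abstract does not give the exact formulation, so the following is only a minimal sketch of the general idea, under the assumption that an item vote is the more "singular" the fewer users cast that same class of vote, and that agreement on singular votes should weigh more heavily in the user-to-user similarity. All names, the 1-5 rating scale, and the relevance threshold are illustrative:

```python
import numpy as np

def singularity_similarity(ratings: np.ndarray, u: int, v: int,
                           threshold: float = 4.0) -> float:
    """Illustrative similarity between users u and v in which per-item
    agreement is weighted by how unusual (singular) each vote class is.
    ratings: (users x items) matrix with np.nan where a user has not voted."""
    n_users = ratings.shape[0]
    rated = ~np.isnan(ratings)
    filled = np.nan_to_num(ratings, nan=-1.0)   # placeholder outside the scale

    # Fraction of users voting each item as relevant / non-relevant
    pos = rated & (filled >= threshold)
    neg = rated & (filled < threshold)
    sing_pos = 1.0 - pos.sum(axis=0) / n_users  # rarer positive votes -> more singular
    sing_neg = 1.0 - neg.sum(axis=0) / n_users

    common = rated[u] & rated[v]
    if not common.any():
        return 0.0

    sims = []
    for i in np.flatnonzero(common):
        # Normalised agreement in [0, 1], assuming a 1-5 rating scale
        agreement = 1.0 - ((ratings[u, i] - ratings[v, i]) / 4.0) ** 2
        # Weight by the singularity of the vote classes both users used
        w_u = sing_pos[i] if ratings[u, i] >= threshold else sing_neg[i]
        w_v = sing_pos[i] if ratings[v, i] >= threshold else sing_neg[i]
        sims.append(agreement * w_u * w_v)
    return float(np.mean(sims))
```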
Abstract:
Collaborative filtering recommender systems contribute to alleviating the problem of information overload that exists on the Internet as a result of the mass use of Web 2.0 applications. The use of an adequate similarity measure becomes a determining factor in the quality of the prediction and recommendation results of the recommender system, as well as in its performance. In this paper, we present a memory-based collaborative filtering similarity measure that provides extremely high-quality and balanced results; these results are complemented by a low processing time (high performance), similar to that required by traditional similarity metrics. The experiments have been carried out on the MovieLens and Netflix databases, using a representative set of information retrieval quality measures.
Abstract:
Urban mobility in Europe is a responsibility of the municipalities, which propose measures to reduce CO2 emissions from mobility, aimed mainly at reducing individual private transport (car use). The European Commission's Action Plan on Urban Mobility calls for an increase in the take-up of Sustainable Urban Mobility Plans (SUMPs) in Europe. SUMPs aim to create a sustainable urban transport system. Europe has some long-term initiatives and has been using various evaluation procedures, many of them through European projects. Nevertheless, the weak point of SUMPs in Spain has been the lack of attention to the evaluation and effectiveness of the measures implemented in a SUMP. For this reason, it is difficult to know exactly whether or not SUMPs have positively influenced the modal split of the cities and contributed to reducing CO2 levels. The case of the City of Burgos is a very illustrative example, as it developed a CiViTAS project during the years 2005-2009, with a total investment of €6 million. The results have been considered "very successful", even at the European level, and the modal split has changed considerably for the better. The cost-effectiveness of the SUMP in the city can be measured in terms of CO2 tons saved, specifically €36 per ton of CO2 saved, which is fully satisfactory and in line with calculations from other European researchers. Additionally, the authors propose a single formula to measure the effectiveness of the activities developed under the umbrella of a SUMP.
Abstract:
One of the main challenges of fuzzy community detection problems is to be able to measure the quality of a fuzzy partition. In this paper, we present an alternative way of measuring the quality of a fuzzy community detection output based on n-dimensional grouping and overlap functions. Moreover, the proposed modularity measure generalizes the classical Girvan–Newman (GN) modularity for crisp community detection problems and also for crisp overlapping community detection problems. Therefore, it can be used to compare partitions of different natures (i.e. those composed of classical, overlapping and fuzzy communities). In particular, as is usually done with the GN modularity, the proposed measure may be used to identify the optimal number of communities to be obtained by any network clustering algorithm in a given network. We illustrate this usage by adapting a well-known algorithm for fuzzy community detection problems, extending it to also deal with overlapping community detection problems and to produce a ranking of the overlapping nodes. Some computational experiments show the feasibility of the proposed approach to modularity measures through n-dimensional overlap and grouping functions.
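The n-dimensional grouping/overlap construction is not spelled out in the abstract; for orientation, a minimal sketch of the classical Girvan–Newman modularity it generalizes, with the crisp membership indicator replaced by a simple product of fuzzy memberships (a common fuzzy extension used here only as a stand-in, not the paper's measure):

```python
import numpy as np

def fuzzy_modularity(A: np.ndarray, U: np.ndarray) -> float:
    """GN-style modularity for a fuzzy partition.

    A : (n x n) symmetric adjacency matrix.
    U : (n x c) membership matrix, U[i, k] = degree to which node i
        belongs to community k (rows need not sum to 1 for overlaps).
    The crisp delta(c_i, c_j) is replaced by sum_k U[i, k] * U[j, k],
    a simple product-overlap stand-in for the paper's construction.
    """
    k = A.sum(axis=1)                # node degrees
    two_m = A.sum()                  # 2m for an undirected graph
    B = A - np.outer(k, k) / two_m   # modularity matrix
    overlap = U @ U.T                # pairwise membership overlap
    return float((B * overlap).sum() / two_m)

# Crisp partitions are the special case where each row of U is one-hot,
# in which case this reduces to the classical GN modularity.
```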
Abstract:
Brain injury is the leading cause of disability and death in children in the United States. Student re-entry into the school setting following a traumatic brain injury (TBI) is crucial to student success. Multidisciplinary teams within the school district, composed of individuals with expertise in brain injury, are ideal for implementing student-specific treatment plans given their specialized training and wide range of expertise in addressing student needs. Therefore, the purpose of this study is to develop and initially validate a quantitative instrument that school personnel can use to determine whether a student identified as having a traumatic brain injury will benefit from district-level consultation with a brain injury team. Three studies were designed to investigate the research questions. In study one, the planning and construction of the DORI-TBI was completed. Study two addressed the content validity of the DORI-TBI through a comparison analysis with other referral forms, content review with experts in the field of TBI, and cognitive interviews with professionals to test the usability of the new screening tool. In study three, a field administration was conducted using vignettes to measure construct validity. The results produced a valid and reliable new screening instrument that can help school-based teams more efficiently utilize district-level consultation with a brain injury support team.
Abstract:
We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This centrality measure, which we call ranking-betweenness centrality, combines the idea behind the random-walk betweenness centrality measure with the idea of ranking the nodes of a network produced by an adapted PageRank algorithm. We first use an adapted PageRank algorithm in which we are able to transform some information about the network under analysis into numerical values. Numerical values summarizing this information are associated with each of the nodes by means of a data matrix. After running the adapted PageRank algorithm, a ranking of the nodes is obtained according to their importance in the network. This classification is the starting point for applying an algorithm based on the random-walk betweenness centrality. A detailed example of a real urban street network is discussed in order to explain the process of evaluating the proposed ranking-betweenness centrality, together with some comparisons with other classical centrality measures.
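The abstract does not give the combination rule, so the following is only a sketch of the two ingredients it names: an adapted PageRank fed with per-node data and a random-walk (current-flow) betweenness. The example graph, the node data, and the final combination are purely illustrative:

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for an urban street network

# Adapted PageRank: per-node data (here, degree as a dummy attribute)
# is turned into a personalization vector that biases the ranking.
node_data = {n: float(G.degree(n)) for n in G}
total = sum(node_data.values())
personalization = {n: v / total for n, v in node_data.items()}
pr = nx.pagerank(G, alpha=0.85, personalization=personalization)

# Random-walk betweenness (current-flow betweenness in NetworkX).
rwb = nx.current_flow_betweenness_centrality(G)

# Ranking of nodes by the adapted PageRank
ranking = sorted(pr, key=pr.get, reverse=True)

# Illustrative combination (not the paper's actual rule): weight the
# random-walk betweenness by the PageRank score.
combined = {n: rwb[n] * pr[n] for n in G}
print("PageRank ranking (top 5):", ranking[:5])
print("Combined score (top 5):", sorted(combined, key=combined.get, reverse=True)[:5])
```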
Abstract:
Background: Celiac disease (CD) has a negative impact on the health-related quality of life (HRQL) of affected patients. Although HRQL and its determinants have been examined in Spanish CD patients specifically recruited in hospital settings, these aspects of CD have not been assessed among the general Spanish population. Methods: An observational, cross-sectional study of a non-randomized, representative sample of adult celiac patients throughout all of Spain's Autonomous Regions. Subjects were recruited through celiac patient associations. A Spanish version of the self-administered Celiac Disease-Quality of Life (CD-QOL) questionnaire was used. Determinant factors of HRQL were assessed with the aid of multivariate analysis to control for confounding factors. Results: We analyzed the responses provided by 1,230 patients, 1,092 (89.2%) of whom were women. The overall mean value for the CD-QOL index was 56.3 ± 18.27 points. The dimension with the highest score was dysphoria, with 81.3 ± 19.56 points, followed by limitations, with 52.3 ± 23.43 points; health problems, with 51.6 ± 26.08 points; and inadequate treatment, with 36.1 ± 21.18 points. Patient age and sex, along with time to diagnosis and length of time on a gluten-free diet, were all independent determinant factors of certain dimensions of HRQL: women aged 31 to 40 reported poorer HRQL, while time to diagnosis and length of time on a gluten-free diet were determinant factors for better HRQL scores. Conclusions: The HRQL of adult Spanish celiac subjects is moderate, improving with the length of time patients remain on a gluten-free diet.
Abstract:
The concept of entropy rate is well defined in dynamical systems theory, but it is impossible to apply it directly to finite real-world data sets. With this in mind, Pincus developed Approximate Entropy (ApEn), which uses ideas from Eckmann and Ruelle to create a regularity measure based on entropy rate that can be used to determine the influence of chaotic behaviour in a real-world signal. However, this measure was found not to be robust, and so an improved formulation known as Sample Entropy (SampEn) was created by Richman and Moorman to address these issues. We have developed a new, related regularity measure which is not based on the theory provided by Eckmann and Ruelle and which proves to be a better-behaved measure of complexity than the previous measures whilst still retaining a low computational cost.
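For reference, the Sample Entropy of Richman and Moorman that the abstract builds on can be written compactly; a minimal NumPy version (the authors' new regularity measure itself is not specified in the abstract):

```python
import numpy as np

def sample_entropy(x, m: int = 2, r=None) -> float:
    """Standard SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates
    within tolerance r (Chebyshev distance), A does the same for length m+1;
    self-matches are excluded and both counts use the same N-m templates."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()        # common default tolerance
    n_templates = len(x) - m

    def count_matches(length: int) -> int:
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

# Example: SampEn of a noisy sine wave (lower values indicate more regularity)
t = np.linspace(0, 10 * np.pi, 1000)
signal = np.sin(t) + 0.1 * np.random.randn(len(t))
print(sample_entropy(signal, m=2))
```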
Abstract:
Data envelopment analysis (DEA) has gained a wide range of applications in measuring the comparative efficiency of decision making units (DMUs) with multiple incommensurate inputs and outputs. The standard DEA method requires that the status of all input and output variables be known exactly. However, in many real applications, the status of some measures is not clearly known as inputs or outputs. These measures are referred to as flexible measures. This paper proposes a flexible slacks-based measure (FSBM) of efficiency in which each flexible measure can play an input role for some DMUs and an output role for others, so as to maximize the relative efficiency of the DMU under evaluation. Further, we show that when an operational unit is efficient in a specific flexible measure, this measure can play both input and output roles for this unit. In this case, the optimal input/output designation for the flexible measure is the one that optimizes the efficiency of the artificial average unit. An application in assessing UK higher education institutions is used to show the applicability of the proposed approach.
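For orientation, the standard (non-flexible) slacks-based measure that the FSBM extends evaluates DMU $o$ with $m$ inputs $x_{io}$ and $s$ outputs $y_{ro}$ as follows (Tone's SBM; the flexible variant additionally assigns each flexible measure to the input or output side, which is not shown here):

\[
\rho^{*}=\min_{\lambda,\,s^{-},\,s^{+}}
\frac{1-\frac{1}{m}\sum_{i=1}^{m}s_{i}^{-}/x_{io}}
     {1+\frac{1}{s}\sum_{r=1}^{s}s_{r}^{+}/y_{ro}}
\quad\text{s.t.}\quad
x_{io}=\sum_{j}\lambda_{j}x_{ij}+s_{i}^{-},\qquad
y_{ro}=\sum_{j}\lambda_{j}y_{rj}-s_{r}^{+},\qquad
\lambda,\,s^{-},\,s^{+}\ge 0 .
\]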
Abstract:
In the majority of production processes, noticeable amounts of bad byproducts or bad outputs are produced. The negative effects of the bad outputs on efficiency cannot be handled by the standard Malmquist index for measuring productivity change over time. To this end, the Malmquist-Luenberger index (MLI) has been introduced for settings where undesirable outputs are present. In this paper, we introduce a Data Envelopment Analysis (DEA) model as well as an algorithm which can successfully eliminate a common infeasibility problem encountered in MLI mixed-period problems. This model incorporates the best endogenous direction amongst all possible directions to increase the desirable outputs and decrease the undesirable outputs at the same time. A simple example is used to illustrate the new algorithm, and a real application to steam power plants is used to show the applicability of the proposed model.
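For reference, the output-oriented Malmquist–Luenberger index between periods $t$ and $t+1$ is commonly written with directional distance functions $\vec{D}_{o}$ evaluated in the direction $g=(y,-b)$, expanding desirable outputs $y$ while contracting undesirable outputs $b$ (the paper's contribution, the endogenous choice of direction and the infeasibility fix, is not reproduced here):

\[
ML_{t}^{t+1}=\left[
\frac{1+\vec{D}_{o}^{\,t}(x^{t},y^{t},b^{t})}
     {1+\vec{D}_{o}^{\,t}(x^{t+1},y^{t+1},b^{t+1})}\cdot
\frac{1+\vec{D}_{o}^{\,t+1}(x^{t},y^{t},b^{t})}
     {1+\vec{D}_{o}^{\,t+1}(x^{t+1},y^{t+1},b^{t+1})}
\right]^{1/2},
\]

with values greater than one indicating productivity improvement. The mixed-period terms, where period-$t$ technology is evaluated at period-$(t+1)$ data and vice versa, are the ones whose linear programs can be infeasible, which is the problem the abstract's algorithm addresses.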
Abstract:
In this paper we examine several questions concerning the intensity-based modelling of credit derivatives. We show that, with a suitable change of measure, the Laplace transform of the distribution of the compound loss and default process can be computed not only for doubly stochastic processes but also for any point process with an intensity. _____ The paper addresses questions concerning the use of intensity-based modelling in the pricing of credit derivatives. As the specification of the distribution of the loss process is a non-trivial exercise, the well-known technique for this task utilizes the inversion of the Laplace transform. A popular choice for the model is the class of doubly stochastic processes, given that their Laplace transforms can be determined easily. Unfortunately, these processes lack several key features supported by empirical observations; e.g., they cannot replicate the self-exciting nature of defaults. The aim of the paper is to show that by using an appropriate change of measure the Laplace transform can be calculated not only for a doubly stochastic process, but for an arbitrary point process with intensity as well. To support the application of the technique, we investigate the effect of the change of measure on the stochastic nature of the underlying process.
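As background for the "easily determined" doubly stochastic case mentioned in the abstract: for a Cox process $N$ with intensity $\lambda_t$ and i.i.d. losses $Z_k$ independent of the intensity, the compound loss $L_T=\sum_{k=1}^{N_T}Z_k$ has Laplace transform

\[
\mathbb{E}\!\left[e^{-sL_{T}}\right]
=\mathbb{E}\!\left[\exp\!\left(-\int_{0}^{T}\lambda_{u}\bigl(1-\varphi_{Z}(s)\bigr)\,du\right)\right],
\qquad \varphi_{Z}(s)=\mathbb{E}\!\left[e^{-sZ_{1}}\right].
\]

The paper's change-of-measure argument extends the computation beyond this doubly stochastic setting; the extension itself is not reproduced here.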
Abstract:
Not all of the relevant risk factors contributing to breast cancer etiology are fully known. Exposure to organochlorine pesticides has been linked to an increased incidence of the disease, although not all data have been consistent. Most published studies have evaluated exposure to organochlorines individually, ignoring the potential effects exerted by mixtures of chemicals.