31 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
Cannabis use among adolescents and young adults has become a major public health challenge. Several European countries are currently developing short screening instruments to identify 'problematic' forms of cannabis use in general population surveys. One such instrument is the Cannabis Use Disorders Identification Test (CUDIT), a 10-item questionnaire based on the Alcohol Use Disorders Identification Test. Previous research found that some CUDIT items did not perform well psychometrically. In the interests of improving the psychometric properties of the CUDIT, this study replaces the poorly performing items with new items that specifically address cannabis use. Analyses are based on a sub-sample of 558 recent cannabis users from a representative population sample of 5722 individuals (aged 13-32) who were surveyed in the 2007 Swiss Cannabis Monitoring Study. Four new items were added to the original CUDIT. Psychometric properties of all 14 items, as well as the dimensionality of the supplemented CUDIT, were then examined using Item Response Theory. Results indicate the unidimensionality of the CUDIT and an improvement in its psychometric performance when three original items (usual hours being stoned; injuries; guilt) are replaced by new ones (motives for using cannabis; missing out on leisure time activities; difficulties at work/school). However, improvements were limited to cannabis users with a high problem score. For epidemiological purposes, any further revision of the CUDIT should therefore include a greater number of 'easier' items.
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher-order models. Because the direct hierarchical CHC model was more adequate, we conclude that the general factor should be considered a breadth factor rather than a superordinate factor. Because BSEM made it possible to estimate the influence of each of the latent variables on the 15 subtest scores, it improved our understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
Abstract:
Reducing a test administration to standardised procedures reflects the test designers' standpoint. From the practitioners' standpoint, however, each client is unique. How do psychologists deal with both standardised test administration and clients' diversity? To answer this question, we interviewed 17 psychologists working in three public services for children and adolescents about their assessment practices. We analysed the numerous "client categorisations" they produced in their accounts. We found that they had shared perceptions of their clients' diversity, and reported various non-standard practices that complemented standardised test administration, but also departed from it or were even forbidden. They seem to experience a dilemma between: (a) prescribed and situated practices; (b) scientific and situated reliability; (c) commutative and distributive justice. For practitioners, dealing with clients' diversity is a practical problem, halfway between a problem-solving task and a moral dilemma.
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
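Below is a minimal, hypothetical sketch (using scikit-learn, not the authors' code) of the kind of simulation and nested cross-validation design described above: when the batch effect is confounded with the group labels, the cross-validated accuracy estimate is optimistic relative to performance on independent, batch-free data. The additive batch model and all parameter values are illustrative assumptions.

```python
# Illustrative simulation: batch effect either confounded with the group
# label or independent of it. Nested CV tunes and evaluates the classifier;
# an independent batch-free data set reveals the estimation bias.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p, n_informative = 100, 500, 10

def simulate(confounded, batch_effect=1.0):
    X = rng.normal(size=(n, p))
    y = rng.integers(0, 2, size=n)
    X[:, :n_informative] += 0.5 * y[:, None]          # true group signal
    batch = y if confounded else rng.integers(0, 2, size=n)
    X += batch_effect * batch[:, None]                # additive batch shift
    return X, y

for confounded in (False, True):
    X, y = simulate(confounded)
    # Nested CV: inner loop tunes C, outer loop estimates performance.
    inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
    cv_acc = cross_val_score(inner, X, y,
                             cv=StratifiedKFold(5, shuffle=True, random_state=0))
    X_new, y_new = simulate(confounded=False, batch_effect=0.0)  # independent data
    inner.fit(X, y)
    print(f"confounded={confounded}: CV accuracy {cv_acc.mean():.2f}, "
          f"independent accuracy {inner.score(X_new, y_new):.2f}")
```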
Abstract:
In Switzerland, the land management regime is characterized by a liberal attitude towards the institution of property rights, which is guaranteed by the Constitution. Under the present Swiss constitutional arrangement, authorities (municipalities) are required to take landowners' interests into account when implementing their spatial planning policy. In other words, the institution of property rights cannot easily be restricted in order to implement zoning plans and planning projects. This situation causes many problems. One of them is the gap between the way land is actually used by the landowners and the way land should be used according to zoning plans. In fact, zoning plans only describe how landowners should use their property. There is no sufficient provision for handling cases where the use is not in accordance with zoning plans. In particular, landowners may not be expropriated for a non-conforming use of the land. This situation often leads to the opening of new building areas in greenfields and to urban sprawl, which contradicts the goals set out in the Federal Law on Spatial Planning. In order to identify legal strategies of intervention to solve the problem, our paper is structured into three main parts. Firstly, we give a short description of the Swiss land management regime. Then, we focus on an innovative land management approach designed to implement zoning plans in accordance with property rights. Finally, we present a case study that shows the usefulness of the presented land management approach in practice. We develop three main results. Firstly, the land management approach provides a mechanism for involving landowners in planning projects; the principle of coordination between spatial planning goals and landowners' interests is the cornerstone of the whole process. Secondly, land use is improved both in terms of space and time. Finally, the institution of property rights is not challenged, since there is no expropriation and the market stays free.
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
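As a loose illustration of the residual-based error-control idea (a generic sketch, not the i-MSFV method itself; plain Jacobi smoothing stands in for the multiscale cycle), the following toy 1D heterogeneous pressure problem reuses the previous timestep's solution as the initial guess and spends solver iterations only while the pressure-equation residual exceeds a threshold:

```python
# Toy 1D heterogeneous "pressure" problem solved over several timesteps.
# The previous solution seeds the next solve; iterations are spent only
# while the residual exceeds the threshold (zero iterations if it already
# satisfies the tolerance).
import numpy as np

rng = np.random.default_rng(1)
n = 50
k = np.exp(rng.standard_normal(n + 1))       # heterogeneous face transmissibilities
diag = k[:-1] + k[1:]                        # diagonal of the 1D finite-volume matrix

def apply_A(p):                              # matrix-free A @ p for the tridiagonal system
    out = diag * p
    out[1:] -= k[1:-1] * p[:-1]
    out[:-1] -= k[1:-1] * p[1:]
    return out

p = np.zeros(n)
tol = 1e-5
for step in range(10):
    b = np.zeros(n)
    b[(step * n) // 10] = 1.0                # slowly moving source term
    r = b - apply_A(p)
    iters = 0
    # Jacobi smoothing as a stand-in for the (i-)MSFV cycle:
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and iters < 100_000:
        p += r / diag
        r = b - apply_A(p)
        iters += 1
    print(f"timestep {step}: {iters} iterations")
```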
Abstract:
Hematocrit (Hct) is one of the most critical issues associated with the bioanalytical methods used for dried blood spot (DBS) sample analysis. Because Hct determines the viscosity of blood, it may affect how blood spreads onto the filter paper. Hence, accurate quantitative data can only be obtained if the extracted portion of the filter paper corresponds to a fixed blood volume. We describe for the first time a microfluidic-based sampling procedure that enables accurate blood volume collection on commercially available DBS cards. The system allows the collection of a controlled volume of blood (e.g., 5 or 10 μL) within several seconds. Reproducibility of the sampling volume was examined in vivo on capillary blood by quantifying caffeine and paraxanthine on 5 different extracted DBS spots at two different time points, and in vitro with a test compound, Mavoglurant, on 10 different spots at two Hct levels. Entire spots were extracted. In addition, the accuracy and precision (n = 3) of the Mavoglurant quantitation in blood were evaluated at Hct levels between 26% and 62%. The interspot precision was below 9.0%, which is equivalent to that of a volume spotted manually with a pipet. No Hct effect was observed in the quantitative results for Hct levels from 26% to 62%. These data indicate that our microfluidic-based sampling procedure is accurate and precise, and that the analysis of Mavoglurant is not affected by Hct values. This provides a simple procedure for DBS sampling with a fixed volume of capillary blood, which could eliminate the recurrent Hct issue linked to DBS sample analysis.
Abstract:
Introduction: Growth is a central process in paediatrics. Weight and height evaluations are therefore routine exams for every child, but in some situations, particularly inflammatory bowel disease (IBD), a wider evaluation of nutritional status needs to be performed. The assessment of body composition, essential for maintaining acceptable growth, can be performed with the following techniques: dual-energy X-ray absorptiometry (DEXA), bio-impedance analysis (BIA) and anthropometric measurements (skinfold thickness), the latter being the most easily available and most cost-effective. Objectives: To assess the accuracy of skinfold equations in estimating percentage body fat (%BF) in children with IBD, compared with assessment of body fat by DEXA. Methods: Twenty-one patients (11 females, 10 males; mean age 14.3 years, range 12-16 years) with IBD (Crohn's disease n = 15, ulcerative colitis n = 6) were included. Estimated %BF was computed using 6 established equations based on the triceps, biceps, subscapular and suprailiac skinfolds (Deurenberg, Weststrate, Slaughter, Durnin & Rahaman, Johnston, Brook) and compared to DEXA. Concordance analysis was performed using Lin's concordance correlation and the Bland-Altman limits-of-agreement method. Results: Durnin & Rahaman's equation shows a higher Lin's concordance coefficient, with a small difference between raw skinfold and DEXA values, compared to the other equations. The correlation coefficient between mean and difference is close to zero, with a non-significant Bradley-Blackwood test. Conclusion: Body composition assessed in paediatric IBD patients using the Durnin & Rahaman skinfold equation adequately reflects values obtained by DEXA.
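For reference, here is a minimal sketch of the two concordance analyses named above, Lin's concordance correlation coefficient and Bland-Altman limits of agreement, on invented %BF data (not the study's measurements):

```python
# Hypothetical %BF data standing in for skinfold-equation and DEXA estimates.
import numpy as np

rng = np.random.default_rng(2)
bf_dexa = rng.uniform(15, 35, size=21)                     # %BF by DEXA (simulated)
bf_skinfold = bf_dexa + rng.normal(0.5, 2.0, size=21)      # skinfold estimate (simulated)

def lins_ccc(x, y):
    # rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

diff = bf_skinfold - bf_dexa
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"Lin's CCC = {lins_ccc(bf_dexa, bf_skinfold):.3f}")
print(f"Bland-Altman bias = {bias:.2f} %BF, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```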
Abstract:
OBJECTIVES: Reactivation of latent tuberculosis (TB) in inflammatory bowel disease (IBD) patients treated with antitumor necrosis factor-alpha medication is a serious problem. Currently, TB screening includes chest x-rays and a tuberculin skin test (TST). The interferon-gamma release assay (IGRA) QuantiFERON-TB Gold In-Tube (QFT-G-IT) shows better specificity for diagnosing TB than the skin test. This study evaluates the two test methods among IBD patients. METHODS: Both TST and IGRA were performed on 212 subjects (114 Crohn's disease, 44 ulcerative colitis, 10 indeterminate colitis, 44 controls). RESULTS: Eighty-one percent of IBD patients were under immunosuppressive therapy; 71% of all subjects were vaccinated with Bacille Calmette Guérin; 18% of IBD patients and 43% of controls tested positive with the skin test (P < 0.0001). Vaccinated controls tested positive more often with the skin test (52%) than did vaccinated IBD patients (23%) (P = 0.011). Significantly fewer immunosuppressed patients tested positive with the skin test than did patients not receiving therapy (P = 0.007); 8% of patients tested positive with the QFT-G-IT test (14/168) compared to 9% (4/44) of controls. Test agreement was significantly higher in the controls (P = 0.044) compared to the IBD group. CONCLUSIONS: Agreement between the two test methods is poor in IBD patients. In contrast to the QFT-G-IT test, the TST is negatively influenced by immunosuppressive medication and vaccination status, and should thus be replaced by the IGRA for TB screening in immunosuppressed patients having IBD.
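The abstract does not state which agreement statistic was used; Cohen's kappa is one common choice for quantifying TST/IGRA agreement beyond chance. A hedged sketch on invented 2x2 counts:

```python
# Cohen's kappa on a hypothetical TST-by-IGRA contingency table
# (counts are invented for illustration, not the study's data).
import numpy as np

# Rows: TST (+/-), columns: QFT-G-IT (+/-).
table = np.array([[10, 20],
                  [ 4, 134]], dtype=float)
n = table.sum()
p_observed = np.trace(table) / n                          # raw agreement
p_expected = (table.sum(1) * table.sum(0)).sum() / n**2   # chance agreement
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```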
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
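A toy Metropolis-Hastings sketch of the coupled-inversion idea follows, with linear stand-ins for the flow/transport and resistivity forward models and an uncertain petrophysical link; every model and number here is invented for illustration, not the authors' setup:

```python
# Toy coupled inversion: three facies log-conductivities plus an uncertain
# concentration-to-resistivity scaling are sampled by Metropolis-Hastings
# so that both data types are honored.
import numpy as np

rng = np.random.default_rng(3)
n_facies = 3
true_logK = np.array([-4.0, -6.0, -5.0])
G_hydro = rng.normal(size=(8, n_facies))     # stand-in transport forward model
G_geo = rng.normal(size=(12, n_facies))      # stand-in resistivity forward model

def forward(logK, a):
    return G_hydro @ logK, a * (G_geo @ logK)        # (tracer data, resistivity data)

d_conc, d_res = forward(true_logK, a=1.3)
d_conc = d_conc + rng.normal(0, 0.1, d_conc.shape)   # noisy observations
d_res = d_res + rng.normal(0, 0.1, d_res.shape)

def log_post(theta):
    logK, a = theta[:-1], theta[-1]
    c, r = forward(logK, a)
    ll = -0.5 * (((c - d_conc) / 0.1) ** 2).sum() - 0.5 * (((r - d_res) / 0.1) ** 2).sum()
    lp = -0.5 * (((logK + 5.0) / 2.0) ** 2).sum() - 0.5 * ((a - 1.0) / 0.5) ** 2
    return ll + lp                                   # Gaussian likelihood + priors

theta = np.append(np.full(n_facies, -5.0), 1.0)      # start at the prior means
lp_cur, samples = log_post(theta), []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:     # Metropolis acceptance
        theta, lp_cur = prop, lp_prop
    if it >= 5000 and it % 10 == 0:                  # keep post-burn-in samples
        samples.append(theta.copy())
post = np.array(samples)
print("posterior mean logK:", post[:, :n_facies].mean(0).round(2), "true:", true_logK)
```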
Abstract:
Photons participate in many atomic and molecular interactions and changes. Recent biophysical research has shown the induction of ultraweak photon emission in biological tissue. It is now established that plant, animal and human cells emit a very weak radiation which can be readily detected with an appropriate photomultiplier system. Although the emission is extremely low in mammalian cells, it can be efficiently induced by ultraviolet light. In our studies, we used the differentiation system of human skin fibroblasts from a patient with Xeroderma Pigmentosum of complementation group A in order to test the growth-stimulation efficiency of various bone growth factors at concentrations as low as 5 ng/ml of cell culture medium. In additional experiments, the cells were irradiated with a moderate fluence of ultraviolet A. The different batches of growth factors induced varying degrees of proliferation of skin fibroblasts in culture, which could be correlated with the ultraweak photon emission. The growth factors reduced the acceleration of fibroblast differentiation induced by mitomycin C by 10-30%. Given that fibroblasts play an essential role in skin aging and wound healing, the fibroblast differentiation system is a very useful tool for elucidating the efficacy of growth factors.
Abstract:
BACKGROUND: Excessive drinking is a major problem in Western countries. AUDIT (Alcohol Use Disorders Identification Test) is a 10-item questionnaire developed as a transcultural screening tool to detect excessive alcohol consumption and dependence in primary health care settings. OBJECTIVES: The aim of the study was to validate a French version of the AUDIT. METHODS: We conducted a cross-sectional validation study in three French-speaking areas (Paris, Geneva and Lausanne). We examined psychometric properties of the AUDIT such as its internal consistency, its capacity to correctly diagnose alcohol abuse or dependence as defined by DSM-IV, and its capacity to detect hazardous drinking (defined as alcohol intake >30 g of pure ethanol per day for men and >20 g of pure ethanol per day for women). We calculated sensitivity, specificity, positive and negative predictive values, and Receiver Operating Characteristic (ROC) curves. Finally, we compared the ability of the AUDIT to accurately detect alcohol abuse/dependence with that of the CAGE and MAST. RESULTS: 1207 patients presenting to outpatient clinics (Switzerland, n = 580) or general practitioners' offices (France, n = 627) successively completed the CAGE, MAST and AUDIT self-administered questionnaires, and were independently interviewed by a trained addiction specialist. The AUDIT showed a good capacity to discriminate dependent patients (with AUDIT ≥13 for males: sensitivity 70.1%, specificity 95.2%, PPV 85.7%, NPV 94.7%; for females: sensitivity 94.7%, specificity 98.2%, PPV 100%, NPV 99.8%) and hazardous drinkers (with AUDIT ≥7 for males: sensitivity 83.5%, specificity 79.9%, PPV 55.0%, NPV 82.7%; with AUDIT ≥6 for females: sensitivity 81.2%, specificity 93.7%, PPV 64.0%, NPV 72.0%). The AUDIT gave better results than the MAST and CAGE for detecting alcohol abuse/dependence, as shown by the comparative ROC curves. CONCLUSIONS: The AUDIT questionnaire remains a good screening instrument for French-speaking primary care.
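As a reminder of how the reported screening statistics are computed, here is a small sketch evaluating sensitivity, specificity, PPV and NPV of an AUDIT cut-off against a gold-standard diagnosis, on simulated scores (not the study data):

```python
# Simulated AUDIT scores against a simulated gold-standard diagnosis;
# the cut-off of 13 mirrors the male dependence threshold quoted above.
import numpy as np

rng = np.random.default_rng(4)
n = 1207
dependent = rng.random(n) < 0.15                       # gold standard (simulated)
audit = np.where(dependent, rng.normal(16, 4, n), rng.normal(6, 4, n))

def screening_stats(score, truth, cutoff):
    pred = score >= cutoff
    tp = (pred & truth).sum(); fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum(); tn = (~pred & ~truth).sum()
    return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

print(screening_stats(audit, dependent, cutoff=13))
```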
Abstract:
Animals can compete for resources by displaying various acoustic signals that may differentially affect the outcome of competition. We propose the hypothesis that the most efficient signal for deterring opponents should be the one that most honestly reveals motivation to compete. We tested this hypothesis in the barn owl (Tyto alba), in which nestlings produce more and longer calls than their siblings to compete for priority access to the indivisible prey item their parents will deliver next. Because nestlings increase call rate to a larger extent than call duration when they become hungrier, call rate should signal hunger level more accurately. This led us to propose three predictions. First, a high number of calls should be more efficient than long calls in deterring siblings from competing. Second, the rate at which an individual calls should be more sensitive than the duration of its calls to variation in the intensity of sibling vocal competition. Third, call rate should influence competitors' vocalization for a longer period of time than call duration. To test these three predictions we performed playback experiments, broadcasting calls of varying durations and at different rates to singleton nestlings. In line with the first prediction, singleton nestlings became less vocal to a larger extent when we broadcast more calls rather than longer calls. In line with the second prediction, nestlings reduced vocalization rate to a larger extent than call duration when we broadcast more or longer calls. Finally, call rate had a longer-lasting influence on opponents' vocal behavior than call duration. Young animals thus actively and differentially use multiple signaling components to compete with their siblings over parental resources.
Abstract:
The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity. We take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise interventions to reduce diffuse pollution risk for freshwater ecosystems.
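A deliberately simplified stand-in for the inverse step (not SCIMAP itself, which additionally weights sources by hydrological connectivity): given the land-use fractions draining to each monitoring point, infer per-land-use risk weights that best explain measured in-stream concentrations. The land-use names, fractions and weights below are invented:

```python
# Least-squares recovery of per-land-use risk weights from synthetic
# in-stream concentrations (a toy analogue of the Bayesian inverse step).
import numpy as np

rng = np.random.default_rng(5)
land_uses = ["arable", "improved grassland", "rough grazing", "woodland", "urban"]
n_sites = 40

F = rng.dirichlet(np.ones(len(land_uses)), size=n_sites)   # land-use fractions per site
true_risk = np.array([1.0, 0.6, 0.2, 0.05, 0.4])           # invented P-risk weights
conc = F @ true_risk + rng.normal(0, 0.02, n_sites)        # "measured" concentrations

risk, *_ = np.linalg.lstsq(F, conc, rcond=None)            # inverse step
for name, w in zip(land_uses, risk):
    print(f"{name:>18s}: inferred risk {w:.2f}")
```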
Abstract:
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. Pruning starts near the time of birth and is completed by the time of sexual maturation. Trigger signals able to induce synaptic pruning could be related to dynamic functions that depend on the timing of action potentials. Spike-timing-dependent synaptic plasticity (STDP) is a change in synaptic strength based on the ordering of pre- and postsynaptic spikes. The relation between synaptic efficacy and synaptic pruning suggests that weak synapses may be modified and removed through competitive "learning" rules. This plasticity rule might produce the strengthening of connections among neurons that belong to cell assemblies characterized by recurrent patterns of firing. Conversely, connections that are not recurrently activated might decrease in efficacy and eventually be eliminated. The main goal of our study is to determine whether, and under which conditions, such cell assemblies may emerge out of a locally connected random network of integrate-and-fire units distributed on a 2D lattice, receiving background noise and content-related input organized in both temporal and spatial dimensions. The originality of our study lies in the relatively large size of the network (10,000 units), the duration of the experiment (10^6 time units, one time unit corresponding to the duration of a spike), and the application of an original bio-inspired STDP modification rule compatible with hardware implementation. A first batch of experiments was performed to verify that the randomly generated connectivity and the STDP-driven pruning did not show any spurious bias in the absence of stimulation. Among other things, a scale factor was approximated to compensate for the effect of network size on activity. Networks were then stimulated with spatiotemporal patterns. The analysis of the connections remaining at the end of the simulations, as well as the analysis of the time series resulting from the activity of the interconnected units, suggests that feed-forward circuits emerge from the initially randomly connected networks by pruning.
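A minimal pair-based STDP-and-pruning sketch (a generic textbook rule, not the authors' hardware-oriented variant): synapses that repeatedly see pre-before-post spike pairs strengthen, the others weaken, and weights falling below a threshold are pruned:

```python
# Pair-based STDP followed by pruning of weak synapses. Spike-pair timings
# are drawn at random; half the synapses see mostly causal (pre-before-post)
# pairs and should survive, the rest drift toward zero and are pruned.
import numpy as np

rng = np.random.default_rng(6)
n_syn = 1000
w = rng.uniform(0.3, 0.7, n_syn)                       # initial synaptic weights
a_plus, a_minus, tau = 0.01, 0.012, 20.0               # STDP parameters (ms)

for _ in range(2000):
    # dt = t_post - t_pre; first half of the synapses is causal on average
    dt = rng.normal(loc=np.where(np.arange(n_syn) < n_syn // 2, 5.0, -5.0),
                    scale=10.0)
    dw = np.where(dt > 0, a_plus * np.exp(-dt / tau),  # potentiation (pre before post)
                  -a_minus * np.exp(dt / tau))         # depression (post before pre)
    w = np.clip(w + dw, 0.0, 1.0)

pruned = w < 0.05                                      # weak synapses removed
print(f"pruned {pruned.sum()} of {n_syn} synapses; "
      f"mean surviving weight = {w[~pruned].mean():.2f}")
```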