145 results for RESEARCHERS
at Université de Lausanne, Switzerland
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose levels for Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are distinct theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to treat the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
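To make the distinction concrete, here is a minimal sketch in Python; the simulated data and numbers are illustrative, not from the paper. Fisher's p value summarizes evidence against the null, while the Neyman-Pearson procedure fixes a Type I error rate in advance and rejects only when the statistic falls in the critical region.

```python
# A minimal sketch contrasting the two framings on simulated data
# (names and numbers are illustrative, not from the paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=30)  # null: no effect
treated = rng.normal(loc=0.5, scale=1.0, size=30)  # true effect of 0.5

# Fisher: the p value measures the strength of evidence against H0.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Neyman-Pearson: fix the Type I error rate alpha in advance, derive
# the critical region, and reject H0 only if the statistic falls in it.
alpha = 0.05
df = len(treated) + len(control) - 2
t_critical = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
print(f"reject H0: {abs(t_stat) > t_critical}")
```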
Abstract:
The Swiss National Science Foundation issued its first calls for National Centers of Competence in Research (NCCR) in 1999 and 2004. Together, these calls covered all disciplines and led to 126 preproposals, put forward by 2134 researchers, men and women. It can be assumed that this operation mobilised Swiss researchers who regarded themselves as particularly well qualified to conduct high-level research in their field. The article uses network analysis and regression analysis to examine to what extent women had a lower success rate than men in the two selection rounds because of their sex. On the whole, the findings attest to the gender neutrality of the National Science Foundation's selection procedures. However, they also confirm the well-known fact that women scientists are underrepresented in the higher echelons of academia and concentrated in the social sciences and humanities, and they show that this concentration reduces women's chances of success in scientific competition. The article shows that unequal gender-specific success rates prior to the NCCR funding contest play a fairly significant role.
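As a hypothetical sketch of the kind of regression analysis described (the data are simulated and the variable names invented; this is not the study's model), one can test whether a gender gap in success persists once field and rank are controlled for:

```python
# Hypothetical illustration: success is generated to depend on field
# and rank, not directly on sex, mirroring the paper's interpretation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
female = rng.integers(0, 2, n)
ssh = rng.integers(0, 2, n)      # social sciences & humanities
senior = rng.integers(0, 2, n)   # senior academic rank

logit_p = -0.5 - 0.8 * ssh + 1.0 * senior
success = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame(dict(success=success, female=female, ssh=ssh, senior=senior))

# If the coefficient on `female` is near zero once field and rank are
# controlled for, the gap reflects structural position, not the review.
model = smf.logit("success ~ female + ssh + senior", data=df).fit(disp=0)
print(model.params.round(2))
```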
Abstract:
This commentary arose from the framework of integrating the humanities into medicine and from accompanying research on disease-related issues by teams of clinicians and researchers in the medical humanities. Its purpose is to reflect on the challenges researchers face when conducting emotionally laden research and on how those challenges affect observations and subsequent research findings. The commentary is also a call to action: it promotes the institutionalization of a supportive context for medical humanities researchers who have not been trained to cope with sensitive medical topics in research. To that end, concrete recommendations regarding training and supervision are formulated.
Abstract:
Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
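As a rough illustration of replicate-based error estimation (toy genotypes below, not the Berberis alpina data): two sequencing runs of the same individual should yield identical genotypes, so discordance between them directly estimates the error rate, and assembly parameters can be tuned to minimize it.

```python
# A minimal sketch: two runs of the same sample should yield identical
# genotypes, so any mismatch at a shared locus counts as an error.
replicate_a = {"locus1": "A/A", "locus2": "A/G", "locus3": "G/G", "locus4": "A/G"}
replicate_b = {"locus1": "A/A", "locus2": "A/A", "locus3": "G/G", "locus4": "A/G"}

shared = sorted(set(replicate_a) & set(replicate_b))
mismatches = [loc for loc in shared if replicate_a[loc] != replicate_b[loc]]

# Locus-level error rate: fraction of shared loci with discordant calls.
# Repeating this across Stacks parameter settings lets one pick the
# assembly that minimizes error while maximizing retained loci.
error_rate = len(mismatches) / len(shared)
print(f"{len(mismatches)}/{len(shared)} discordant loci (rate = {error_rate:.2f})")
```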
Abstract:
This article presents a brief systemic intervention method (ISB) consisting of six sessions, developed in an ambulatory service for couples and families, and two research projects conducted in collaboration with the Institute for Psychotherapy of the University of Lausanne. The first project is quantitative and aims to evaluate the effectiveness of ISB. One of its main features is that outcomes are assessed at different levels of individual and family functioning: 1) symptoms and individual functioning; 2) quality of the marital relationship; 3) parental and co-parental relationships; 4) familial relationships. The second project is a qualitative case study of a marital therapy, which identifies and analyses significant moments of the therapeutic process from the patients' perspective. The methodology was largely inspired by Daniel Stern's work on "moments of meeting" in psychotherapy. Results show that patients' theories about relationship and change are important elements that deepen our understanding of the change process in couple and family therapy. The interest of associating clinicians and researchers in the development and validation of a new clinical model is discussed.
Abstract:
This is the third edition of the compendium. It documents the status of important projects on nanomaterial toxicity and exposure monitoring, integrated risk management, research infrastructure, and coordination and support activities. The compendium is not intended to be a guidance document for the human health and environmental safety management of nanotechnologies, as such guidance documents already exist and are widely available. Neither is it intended as a medium for the publication of scientific papers and research results, as that task is covered by scientific conferences and the peer-reviewed press. The compendium aims to bring researchers closer together and show them the potential for synergy in their work. It is a means to establish links and communication between them during the actual research phase, well before the publication of their results. It thus focuses on the communication of projects' strategic aims, extensively covers specific work objectives and the methods used in research, and documents human capacities and available laboratory infrastructure. As such, the compendium supports collaboration on common goals and the joint elaboration of future plans, while compromising neither the potential for scientific publication nor intellectual property rights.
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods depends critically on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance or spectral properties); otherwise, interpretation of the results is confounded. Researchers often circumvent this issue by including additional control conditions or tasks, both of which are flawed solutions that also prolong experiments. Here, we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level equalizes individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted so that the stimulus matches a standard value defined across the stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are used to find the assignment of stimuli to sets that minimizes this metric, reducing, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
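A minimal sketch of the two-level procedure, assuming grayscale stimuli stored as 2-D arrays; function names and parameters are illustrative, not the authors' implementation.

```python
# Level 1: equalize mean luminance. Level 2: minimize the RMS distance
# between set-averaged amplitude spectra via randomized permutations.
import numpy as np

rng = np.random.default_rng(0)
pool = [rng.random((64, 64)) for _ in range(40)]  # toy stimulus battery

# Level 1: shift each stimulus so its mean luminance equals a standard
# value defined across the whole battery.
TARGET = 0.5
pool = [img + (TARGET - img.mean()) for img in pool]

# Level 2: amplitude spectra via 2-D FFT, computed once per stimulus.
spectra = np.array([np.abs(np.fft.fft2(img)) for img in pool])

def spectral_distance(idx_a, idx_b):
    """RMS distance between the mean amplitude spectra of two sets."""
    diff = spectra[idx_a].mean(axis=0) - spectra[idx_b].mean(axis=0)
    return np.sqrt(np.mean(diff ** 2))

# Randomized permutations: reshuffle stimuli into two sets and keep
# the split that minimizes the spectral difference, fully data-driven.
best_d, best_split = np.inf, None
for _ in range(1000):
    order = rng.permutation(len(pool))
    d = spectral_distance(order[:20], order[20:])
    if d < best_d:
        best_d, best_split = d, (order[:20], order[20:])
print(f"minimal RMS spectral distance: {best_d:.4f}")
```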
Abstract:
Developments in the field of neuroscience have created a high level of interest in the subject of adolescent psychosis, particularly in relation to prediction and prevention. As the clinical presentation of adolescent psychosis and its course are heterogeneous, both symptomatically and evolutively, the somewhat poor prognosis of chronic forms justifies the research being performed: apparent indicators of schizophrenic disorders on the one hand, and specific endophenotypes on the other, are becoming increasingly important. The significant progress made on the human genome shows that genetic predetermination in common psychiatric pathologies is complex and subject to moderating effects, leaving significant room for nature-nurture (gene-environment) interactions. The road from a psychosis gene to its phenotypic expression is long and winding and subject to many external influences at various levels, with different effects. Neurobiological, neurophysiological, neuropsychological, and neuroanatomical studies help to identify endophenotypes, which allow researchers to place identifying "markers" along this winding road. Endophenotypes could make it possible to redefine the nosological categories and to enhance understanding of the physiopathology of schizophrenia. In a predictive approach, large-scale retrospective and prospective studies make it possible to identify risk factors compatible with the neurodevelopmental hypothesis of schizophrenia. However, the predictive value of such markers or risk indicators is not yet sufficient to offer reliable early detection or schizophrenia prevention measures. Nonetheless, new developments show promise against the background of a possible future nosographic revolution based on a paradigm shift. It is perhaps on the basis of homogeneous endophenotypes that we will come to understand what protects against, or indeed can trigger, psychosis irrespective of its clinical expression; attempts to isolate common genetic and biological bases according to homogeneous clinical characteristics have, to date, proved unsuccessful.
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated, yet they make implicit or explicit causal claims based on these models. When can such claims be made? We answer this question by first discussing the design and estimation conditions under which model estimates can be causally interpreted, using the randomized experiment as the gold standard. We show how endogeneity (which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error) renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or where causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in top-tier journals over the previous 10 years. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that render causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
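As a brief illustration of one remedy from this list, here is a sketch of instrumental-variable estimation on simulated data where the true effect is known by construction; the variable names are illustrative. Naive OLS is biased when the regressor is endogenous, while the IV estimator recovers the causal effect.

```python
# Instrumental variables on simulated data with a known causal effect.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)                # instrument: affects y only via x
u = rng.normal(size=n)                # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)  # true causal effect of x is 2.0

# Naive OLS slope is biased because x and the error term share u.
beta_ols = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]

# IV estimator (equivalent to 2SLS with a single instrument):
# cov(z, y) / cov(z, x) isolates the variation in x driven by z.
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {beta_ols:.2f} (biased away from 2.0)")
print(f"IV estimate:  {beta_iv:.2f} (close to the true 2.0)")
```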
Abstract:
The first scientific meeting of the newly established European SYSGENET network took place at the Helmholtz Centre for Infection Research (HZI) in Braunschweig, April 7-9, 2010. About 50 researchers working in systems genetics with mouse genetic reference populations (GRP) participated in the meeting and exchanged results, phenotyping approaches, and data analysis tools. In addition, the future of GRP resources and phenotyping in Europe was discussed.