17 results for empirical methods
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
In French, a causal relation is often conveyed by the connectives car, parce que or puisque. Since the seminal work of the Lambda-l Group (1975), it has generally been assumed that parce que, used to relate semantic content, contrasts with car and puisque, which are both used to connect either speech-act or epistemic content. However, this analysis leaves a number of questions unanswered. In this paper, I present a reanalysis of this trio, using empirical methods such as corpus analysis and constrained elicitation. Results indicate that car and parce que are interchangeable in many contexts, even if they are still prototypically used in their respective domains in writing. As for puisque, its distribution does not overlap with that of car, despite their similar domains of use. I argue that the specificity of puisque with respect to the other two connectives is that it introduces a cause with an echoic meaning.
Abstract:
Anyone surveying the humanities- and social-science-based literature in the canon of gender theory gains the impression that psychology plays no major role within this field of research. One possible reason for the lack of integration of psychological research seems to be its reliance on quantitative empirical methods, an approach that is central to psychological research oriented towards the natural sciences. In this article we make the case for a quantitative experimental psychology informed by gender theory. Using our own field of research, the psychology of language, we illustrate where newer behavioural and neuroscientific methods can make a contribution and how they complement findings from qualitative gender research. The first part deals with current studies which show, using reaction-time measurements and evoked potentials among other techniques, how strongly gender stereotypes are anchored in semantics. The second part addresses recent neuroimaging findings that call into question sex differences in the lateralization of language processing. Finally, we sketch recent research approaches and argue for a transdisciplinary combination of qualitative and quantitative methods.
Abstract:
The Business and Information Technologies (BIT) project uses empirical methods to reveal new insights into how modern IT affects organizational structures and business practices. Due to its international scope, it allows for inter-country comparison of empirical results. Germany, represented by the European School of Management and Technology (ESMT) and the Institute of Information Systems at Humboldt-Universität zu Berlin, joined the BIT project in 2006. This report presents the results of the first survey conducted in Germany during November–December 2006. The key results are as follows:
• The most widely adopted technologies and systems in Germany are websites, wireless hardware and software, groupware/productivity tools, and enterprise resource planning (ERP) systems. The biggest potential for growth exists for collaboration and portal tools, content management systems, business process modelling, and business intelligence applications. A number of technological solutions have not yet been adopted by many organizations but also bear some potential, in particular identity management solutions, Radio Frequency Identification (RFID), biometrics, and third-party authentication and verification.
• IT security remains at the top of the agenda for most enterprises: security spending has been increasing over the last three years.
• The workplace and work requirements are changing. IT is used to monitor employees' performance in Germany, though less heavily than in the United States (Karmarkar and Mangal, 2007). The demand for IT skills is increasing at all corporate levels. Executives are asking for more and better-structured information, and this, in turn, triggers the appearance of new decision-making tools and online technologies on the market.
• The internal reorganization of companies in Germany is underway: organizations are becoming flatter, even though the trend is not as pronounced as in the United States (Karmarkar and Mangal, 2007), and the geographical scope of their operations is increasing. Modern IT plays an important role in enabling this development; for example, telecommuting, teleconferencing, and other web-based collaboration formats are becoming increasingly popular in the corporate context.
• The degree to which outsourcing is being pursued is quite limited, with little change expected. IT services, payroll, and market research are the most widely outsourced business functions. This corresponds to the results from other countries.
• Up to now, the adoption of e-business technologies has had a rather limited effect on marketing functions. Companies tend to extract synergies from traditional printed media and online advertising.
• The adoption of e-business has not yet had a major impact on marketing capabilities and strategy. Traditional methods of customer segmentation still dominate. The corporate identity of most organizations does not change significantly when going online.
• Online sales channels are mainly viewed as a complement to traditional means of distribution.
• Technology adoption has caused production and organizational costs to decrease. However, the costs of technology acquisition and maintenance, as well as consultancy and internal communication costs, have increased.
Abstract:
This article presents an empirical interdisciplinary study of an extensive participatory process that was carried out in 2004 in the recently established World Natural Heritage Site "Jungfrau–Aletsch–Bietschhorn" in the Swiss Alps. The study used qualitative and quantitative empirical methods of social science to address the question of success factors in establishing and concretizing a World Heritage Site. Current international scientific and policy debates agree that the most important success factors in defining pathways for nature conservation and protection are: linking development and conservation, involving multiple stakeholders, and applying participatory approaches. The results of the study indicate that linking development and conservation implies the need to extend the reach of negotiations beyond the area of conservation, and to develop both a regional perspective and a focus on sustainable regional development. In the process, regional and local stakeholders are less concerned with defining sustainability goals than with elaborating strategies of sustainability, in particular defining the respective roles of the core sectors of society and the economy. However, the study results also show that conflicting visions and perceptions of nature and landscape are important underlying currents in such negotiations. They differ significantly between stakeholder categories and are an important cause of conflicts occurring at various stages of the participatory process.
Abstract:
The concept of theory of mind (ToM), a hot topic in cognitive psychology for the past twenty-five years, has gained increasing importance in the fields of linguistics and pragmatics. However, even though the relationship between ToM and verbal communication is now recognized, the extent, causality and full implications of this connection remain mostly to be explored. This book presents a comprehensive discussion of the interface between language, communication, and theory of mind, and puts forward an innovative proposal regarding the role of discourse connectives for this interface. The proposed analysis of connectives is tested from the perspective of their acquisition, using empirical methods such as corpus analysis and controlled experiments, thus placing the study of connectives within the emerging framework of experimental pragmatics.
Abstract:
OBJECTIVE: Meta-analysis of studies of the accuracy of diagnostic tests currently uses a variety of methods. Statistically rigorous hierarchical models require expertise and sophisticated software. We assessed whether any of the simpler methods can in practice give adequately accurate and reliable results. STUDY DESIGN AND SETTING: We reviewed six methods for meta-analysis of diagnostic accuracy: four simple commonly used methods (simple pooling, separate random-effects meta-analyses of sensitivity and specificity, separate meta-analyses of positive and negative likelihood ratios, and the Littenberg-Moses summary receiver operating characteristic [ROC] curve) and two more statistically rigorous approaches using hierarchical models (bivariate random-effects meta-analysis and hierarchical summary ROC curve analysis). We applied the methods to data from a sample of eight systematic reviews chosen to illustrate a variety of patterns of results. RESULTS: In each meta-analysis, there was substantial heterogeneity between the results of different studies. Simple pooling of results gave misleading summary estimates of sensitivity and specificity in some meta-analyses, and the Littenberg-Moses method produced summary ROC curves that diverged from those produced by more rigorous methods in some situations. CONCLUSION: The closely related hierarchical summary ROC curve or bivariate models should be used as the standard method for meta-analysis of diagnostic accuracy.
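The gap between the simple and the rigorous methods can be made concrete with a small sketch. The following Python snippet is purely illustrative and not taken from the study: the (true positive, false negative) counts are invented, and it contrasts naive pooling of sensitivity with one of the reviewed simple methods, a separate DerSimonian-Laird random-effects meta-analysis on the logit scale.

```python
import math

# Hypothetical (true positives, false negatives) counts per study.
studies = [(45, 5), (30, 20), (60, 15), (12, 8)]

# 1) Simple pooling: add up the counts as if they came from a single study.
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
pooled_sens = tp / (tp + fn)

# 2) Separate random-effects meta-analysis of logit(sensitivity),
#    with a DerSimonian-Laird estimate of the between-study variance.
logits, variances = [], []
for tp_i, fn_i in studies:
    p = (tp_i + 0.5) / (tp_i + fn_i + 1.0)          # 0.5 continuity correction
    logits.append(math.log(p / (1 - p)))
    variances.append(1 / (tp_i + 0.5) + 1 / (fn_i + 0.5))

w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)       # between-study variance

w_re = [1 / (v + tau2) for v in variances]
re_logit = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
re_sens = 1 / (1 + math.exp(-re_logit))

print(f"simple pooling:             sensitivity = {pooled_sens:.3f}")
print(f"random-effects (DL, logit): sensitivity = {re_sens:.3f}")
```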
Abstract:
Objectives To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results. Design Empirical study on a cohort of Cochrane systematic reviews. Data sources All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved the trial reports contributing to the first SMD result in each review and downloaded the review protocols. We used these SMD results, together with each protocol, to define an index outcome for each meta-analysis. Review methods Reviews were eligible if SMD results were based on two to ten randomised trials and if protocols described the outcome. We excluded reviews if they only presented results of subgroup analyses. Based on review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis. Results We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91). Conclusions Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify the time points, intervention groups, and scales of interest.
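To see how such multiplicity translates into a spread of possible effect sizes, here is a rough sketch with invented numbers (not the review's data): one hypothetical trial reports its index outcome on several scale/time-point combinations, and each choice yields a different standardised mean difference, computed here as Cohen's d with a pooled standard deviation.

```python
from math import sqrt

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) with a pooled SD."""
    pooled_sd = sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                     / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# One hypothetical trial reporting the index outcome on multiple scales and
# time points; each entry is (mean, sd, n) for treatment and for control.
reported = {
    "scale_A_week8":  ((22.1, 6.0, 40), (25.3, 6.4, 42)),
    "scale_A_week12": ((20.5, 5.8, 38), (24.9, 6.1, 40)),
    "scale_B_week8":  ((51.0, 11.2, 40), (54.2, 12.0, 42)),
}

smds = {name: smd(*t, *c) for name, (t, c) in reported.items()}
for name, d in smds.items():
    print(f"{name}: SMD = {d:.2f}")
print(f"spread between smallest and largest SMD: "
      f"{max(smds.values()) - min(smds.values()):.2f}")
```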
Abstract:
Objectives To compare the use of pair-wise meta-analysis methods with multiple treatment comparison (MTC) methods for evidence-based health-care evaluation, in order to estimate the effectiveness and cost-effectiveness of alternative health-care interventions based on the available evidence. Methods Pair-wise meta-analysis and more complex evidence syntheses incorporating an MTC component are applied to three examples: 1) the clinical effectiveness of interventions for preventing strokes in people with atrial fibrillation; 2) the clinical and cost-effectiveness of using drug-eluting stents in percutaneous coronary intervention in patients with coronary artery disease; and 3) the clinical and cost-effectiveness of using neuraminidase inhibitors in the treatment of influenza. We compare the two synthesis approaches with respect to the assumptions made, the empirical estimates produced, and the conclusions drawn. Results The difference between the point estimates of effectiveness produced by the pair-wise and MTC approaches was generally unpredictable: the two sometimes agreed closely, whereas in other instances they differed considerably. In all three examples, the MTC approach allowed the inclusion of randomized controlled trial evidence ignored in the pair-wise meta-analysis approach. This generally increased the precision of the effectiveness estimates from the MTC model. Conclusions The MTC approach to synthesis allows the evidence base on clinical effectiveness to be treated as a coherent whole, to include more data, and sometimes to relax the assumptions made in the pair-wise approaches. However, MTC models are necessarily more complex than those developed for pair-wise meta-analysis and thus could be seen as less transparent. It is therefore important that model details and the assumptions made are carefully reported alongside the results.
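The simplest building block of an MTC is an indirect comparison through a common comparator. The sketch below is illustrative only (the log odds ratios and standard errors are invented, and it is a Bucher-style adjusted indirect comparison rather than a full Bayesian network model): given direct estimates of B versus A and C versus A, it derives an indirect estimate of B versus C.

```python
from math import sqrt

# Hypothetical direct pair-wise estimates against a common comparator A,
# expressed as log odds ratios (logOR) with standard errors.
logor_AB, se_AB = -0.35, 0.12   # B vs A
logor_AC, se_AC = -0.10, 0.15   # C vs A

# Bucher-style indirect comparison of B vs C, preserving within-trial randomisation.
logor_BC = logor_AB - logor_AC
se_BC = sqrt(se_AB ** 2 + se_AC ** 2)

lo, hi = logor_BC - 1.96 * se_BC, logor_BC + 1.96 * se_BC
print(f"indirect logOR (B vs C) = {logor_BC:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```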
Abstract:
The longitudinal dimension of schizophrenia and related severe mental illness is a key component of theoretical models of recovery. However, empirical longitudinal investigations have been underrepresented in research on the psychopathology of schizophrenia. Similarly, traditional approaches to the longitudinal analysis of psychopathological data have had serious limitations. The utilization of modern longitudinal methods is necessary to capture the complexity of biopsychosocial models of treatment and recovery in schizophrenia. The present paper summarizes empirical data from traditional longitudinal research investigating recovery in symptoms, neurocognition, and social functioning. Studies conducted under treatment-as-usual conditions are compared with psychosocial intervention studies, and potential treatment mechanisms of psychosocial interventions are discussed. Investigations of rehabilitation for schizophrenia using the longitudinal analytic strategies of growth curve and time series analysis are demonstrated. The respective advantages and disadvantages of these modern methods are highlighted. Their potential use for future research on treatment effects and recovery in schizophrenia is also discussed.
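As a rough illustration of the growth-curve idea mentioned above (simulated data, not from any of the studies discussed), the sketch below uses a simple two-stage approach: an ordinary least-squares slope is fitted to each patient's simulated symptom trajectory, and the individual slopes are then summarised across the sample. A full growth-curve analysis would typically fit a mixed-effects model instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_timepoints = 30, 6
time = np.arange(n_timepoints)                   # e.g. monthly assessments

# Simulated symptom scores: individual baselines and rates of change plus noise.
baselines = rng.normal(60, 8, n_patients)
true_slopes = rng.normal(-2.0, 1.0, n_patients)  # average improvement over time
scores = (baselines[:, None] + true_slopes[:, None] * time
          + rng.normal(0, 3, (n_patients, n_timepoints)))

# Stage 1: ordinary least-squares slope for each patient's trajectory.
slopes = np.array([np.polyfit(time, scores[i], 1)[0] for i in range(n_patients)])

# Stage 2: sample-level summary of individual change.
print(f"mean slope = {slopes.mean():.2f} points per assessment "
      f"(SD {slopes.std(ddof=1):.2f})")
```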
Abstract:
BACKGROUND: This empirical study analyzes the current status of Cochrane Reviews (CRs) and their strength of recommendation for evidence-based decision making in the field of general surgery. METHODS: A systematic literature search of the Cochrane Database of Systematic Reviews and the Cochrane Collaboration's homepage was conducted to identify available CRs on surgical topics. Quantitative and qualitative characteristics, utilization, and formulated treatment recommendations were evaluated by 2 independent reviewers. The association of review characteristics with treatment recommendations was analyzed using univariate and multivariate logistic regression models. RESULTS: Ninety-three CRs, including 1,403 primary studies and 246,473 patients, were identified. The mean number of primary studies included per CR was 15.1 (standard deviation [SD] 14.5), covering 2,650 (SD 3,340) study patients. On average, 2.5 (SD 8.3) nonrandomized trials were included per analyzed CR. Seventy-two (77%) CRs were published or updated in 2005 or later. Explicit treatment recommendations were given in 45 (48%). The presence of a treatment recommendation was associated with the number of included primary studies and the proportion of randomized studies. Utilization of surgical CRs remained low and showed large inter-country differences: surgical CRs were accessed most often in the UK, USA, and Australia, followed by several Western and Eastern European countries. CONCLUSION: Only a minority of available CRs address surgical questions, and their current usage is low. Instead of unsystematically increasing the number of surgical CRs, it would be far more efficient to focus the review process on relevant surgical questions. Prioritization of CRs requires valid methods, which should be developed by the scientific surgical community.
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does reproducible results and access to methods developed in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between making a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation, such as orthographic and spelling changes, from stemmatic analysis.
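One very small example of the kind of calculation such an analysis involves: the sketch below (an invented miniature tradition, using a deliberately simplified notion of "coherence" that ignores unattested intermediate witnesses) checks whether the witnesses sharing a given reading form a connected subgraph of a hypothesised stemma, which is the basic test behind evaluating a stemma hypothesis against observed variation.

```python
from collections import deque

# Hypothetical stemma as an undirected adjacency list (archetype "O" at the top).
stemma = {
    "O": ["A", "B"],
    "A": ["O", "C", "D"],
    "B": ["O", "E"],
    "C": ["A"], "D": ["A"], "E": ["B"],
}

def is_coherent(witnesses, tree):
    """True if the witnesses attesting one reading form a connected subtree."""
    witnesses = set(witnesses)
    start = next(iter(witnesses))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in tree[node]:
            if neighbour in witnesses and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen == witnesses

print(is_coherent({"C", "D", "A"}, stemma))  # True: one branch of the tree
print(is_coherent({"C", "E"}, stemma))       # False: reading would need two origins
```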
Abstract:
OBJECTIVE: To investigate the prevalence of discontinuation and nonpublication of surgical versus medical randomized controlled trials (RCTs) and to explore risk factors for discontinuation and nonpublication of surgical RCTs. BACKGROUND: Trial discontinuation has significant scientific, ethical, and economic implications. To date, the prevalence of discontinuation of surgical RCTs is unknown. METHODS: All RCT protocols approved between 2000 and 2003 by 6 ethics committees in Canada, Germany, and Switzerland were screened. Baseline characteristics were collected and, if published, full reports were retrieved. Risk factors for early discontinuation due to slow recruitment and for nonpublication were explored using multivariable logistic regression analyses. RESULTS: In total, 863 RCT protocols involving adult patients were identified, 127 in surgery (15%) and 736 in medicine (85%). Surgical trials were discontinued for any reason more often than medical trials [43% vs 27%, risk difference 16% (95% confidence interval [CI]: 5%-26%); P = 0.001] and were more often discontinued for slow recruitment [18% vs 11%, risk difference 8% (95% CI: 0.1%-16%); P = 0.020]. The percentage of trials not published as a full journal article was similar in surgical and medical trials [44% vs 40%, risk difference 4% (95% CI: -5% to 14%); P = 0.373]. Discontinuation of surgical trials was a strong risk factor for nonpublication (odds ratio = 4.18, 95% CI: 1.45-12.06; P = 0.008). CONCLUSIONS: Discontinuation and nonpublication rates were substantial in surgical RCTs, and trial discontinuation was strongly associated with nonpublication. These findings need to be taken into account when interpreting the surgical literature. Surgical trialists should consider feasibility studies before embarking on full-scale trials.
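For readers unfamiliar with the statistics quoted above, this is roughly how a risk difference with a Wald 95% confidence interval is obtained; the event counts below are rough back-calculations from the reported percentages and are shown for illustration only (the paper's exact counts and interval method may differ).

```python
from math import sqrt

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two groups with a Wald confidence interval."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rd = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# Discontinuation for any reason: ~43% of 127 surgical vs ~27% of 736 medical trials.
rd, lo, hi = risk_difference(55, 127, 199, 736)
print(f"risk difference = {rd:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```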
Abstract:
This study compares the performance of four commonly used approaches to measure consumers’ willingness to pay with real purchase data (REAL): the open-ended (OE) question format; choice-based conjoint (CBC) analysis; Becker, DeGroot, and Marschak’s (BDM) incentive-compatible mechanism; and incentive-aligned choice-based conjoint (ICBC) analysis. With this five-in-one approach, the authors test the relative strengths of the four measurement methods, using REAL as the benchmark, on the basis of statistical criteria and decision-relevant metrics. The results indicate that the BDM and ICBC approaches can pass the statistical and decision-oriented tests. The authors find that respondents are more price sensitive in incentive-aligned settings than in non-incentive-aligned settings and the REAL setting. Furthermore, they find a larger number of “none” choices under ICBC than under hypothetical conjoint analysis. This study uncovers an intriguing possibility: even when the OE format and CBC analysis generate hypothetical bias, they may still lead to the right demand curves and the right pricing decisions.
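As a reminder of how the BDM mechanism referenced above works, here is a minimal sketch (the price range and bid are arbitrary): the respondent states a willingness to pay, a sale price is drawn at random and independently of that bid, and the purchase takes place at the drawn price only if the stated value is at least as high, which makes truthful bidding the respondent's best strategy.

```python
import random

def bdm_round(stated_wtp, price_range=(0.0, 10.0), seed=42):
    """One round of the Becker-DeGroot-Marschak mechanism."""
    rng = random.Random(seed)
    price = rng.uniform(*price_range)   # drawn independently of the stated bid
    buys = stated_wtp >= price          # purchase happens at the drawn price, not the bid
    return price, buys

price, buys = bdm_round(stated_wtp=6.50)
print(f"drawn price = {price:.2f} -> {'purchase at that price' if buys else 'no purchase'}")
```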