32 results for Statistics and probability
Abstract:
BACKGROUND: Not all clinical trials are published, which may distort the evidence that is available in the literature. We studied the publication rate of a cohort of clinical trials and identified factors associated with publication and nonpublication of results. METHODS: We analyzed the protocols of randomized clinical trials of drug interventions submitted to the research ethics committee of University Hospital (Inselspital) Bern, Switzerland from 1988 to 1998. We identified full articles published up to 2006 by searching the Cochrane CENTRAL database (issue 02/2006) and by contacting investigators. We analyzed factors associated with the publication of trials using descriptive statistics and logistic regression models. RESULTS: 451 study protocols and 375 corresponding articles were analyzed. 233 protocols resulted in at least one publication, a publication rate of 52%. A total of 366 (81%) trials were commercially funded and 47 (10%) had non-commercial funding. 346 trials (77%) were multi-centre studies and 272 of these (79%) were international collaborations. In the adjusted logistic regression model, non-commercial funding (Odds Ratio [OR] 2.42, 95% CI 1.14-5.17), multi-centre status (OR 2.09, 95% CI 1.03-4.24), international collaboration (OR 1.87, 95% CI 0.99-3.55) and a sample size above the median of 236 participants (OR 2.04, 95% CI 1.23-3.39) were associated with full publication. CONCLUSIONS: In this cohort of applications to an ethics committee in Switzerland, only about half of clinical drug trials were published. Large multi-centre trials with non-commercial funding were more likely to be published than other trials, but most trials were funded by industry.
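The adjusted odds ratios above come from a logistic regression model; the unadjusted version of such an association can be read directly off a 2×2 table. A minimal sketch of an odds ratio with a Wald 95% confidence interval, using purely illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
                        published   not published
        non-commercial      a             b
        commercial          c             d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts only
print(odds_ratio_ci(10, 5, 20, 40))
```

An OR whose confidence interval excludes 1, as for non-commercial funding above (OR 2.42, 95% CI 1.14-5.17), indicates an association at the 5% level.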
Abstract:
The aim of this study was to compare standard plaster models with their digital counterparts for the applicability of the Index of Complexity, Outcome, and Need (ICON). Study models of 30 randomly selected patients were generated: 30 pre-treatment (T0) and 30 post-treatment (T1). Two examiners, calibrated in the ICON, scored the digital and plaster models. The overall ICON scores were evaluated for reliability and reproducibility using kappa statistics and reliability coefficients. The values for reliability of the total and weighted ICON scores were generally high for the T0 sample (range 0.83-0.95) but lower for the T1 sample (range 0.55-0.85). Differences in total ICON score between plaster and digital models were mostly statistically non-significant (P values ranging from 0.07 to 0.19), except for observer 1 in the T1 sample. No statistically different values were found for the total ICON score on either plaster or digital models. ICON scores performed on computer-based models appear to be as accurate and reliable as ICON scores on plaster models.
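The inter-rater agreement reported above rests on kappa statistics. A minimal sketch of Cohen's kappa for two raters' categorical scores (the abstract does not specify the kappa variant; a weighted kappa for ordinal scores would be analogous):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)         # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for reliability studies like this one.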
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on ²¹⁰Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with the following steps: (i) sampling at irregularly-spaced intervals for ²²⁶Ra, ²¹⁰Pb and ¹³⁷Cs depending on the stratigraphy and microfacies, (ii) a systematic comparison of numerical models for the calculation of ²¹⁰Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT), (iii) numerical constraining of the CRS and SIT models with the ¹³⁷Cs chronomarker of AD 1964 and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes.
Our results show that the SIT model constrained with the ¹³⁷Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
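Of the numerical models compared above, the CRS (constant rate of supply) model has the simplest closed form: the age at depth z is t(z) = (1/λ) ln(A(0)/A(z)), where A(z) is the cumulative excess-²¹⁰Pb inventory below depth z and λ is the ²¹⁰Pb decay constant. A minimal sketch, assuming the inventories have already been integrated from measured activities:

```python
import math

PB210_LAMBDA = math.log(2) / 22.3  # 210Pb decay constant (half-life ~22.3 yr)

def crs_ages(inventories):
    """Sediment ages (years before coring) under the CRS model,
    t(z) = (1/lambda) * ln(A(0) / A(z)),
    where inventories[i] is the cumulative excess-210Pb inventory A(z)
    below depth z_i, surface value first."""
    a0 = inventories[0]
    return [math.log(a0 / az) / PB210_LAMBDA for az in inventories]

# each halving of the remaining inventory adds one 210Pb half-life (~22.3 yr)
print(crs_ages([100.0, 50.0, 25.0]))
```

Because A(z) becomes small at the bottom of the profile, the logarithm amplifies measurement error there, which is exactly why the abstract emphasizes independent constraints (the ¹³⁷Cs AD 1964 peak, ash layers, flood deposits) for the late-19th-century section.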
Abstract:
This paper describes the role of small and medium-sized urban centers in Switzerland. Switzerland is a highly urbanized country where small and medium-sized urban centers play an important role in ensuring a balanced national urban system. Besides the four largest metropolitan regions (Zurich, Geneva, Basel and Bern), small and medium-sized towns function as central places for a wider, often extensive hinterland. They provide opportunities for living and working and they connect rural and mountain regions to national and international networks. Using secondary statistics and a case study, the paper shows that small and medium-sized urban centers are home to significant concentrations of export-oriented industries. Firms in these value-adding secondary sectors are rooted in these places and benefit from strong local embeddedness while also being oriented towards global markets. Small and medium-sized urban centers also profit from their strong local identities. While these places face various challenges, they function as important pillars in creating a balanced regional development pattern. Swiss regional development policy follows the goal of polycentric spatial development and it employs various instruments that aim to ensure a balanced urban system.
Abstract:
Recently divergent species that can hybridize are ideal models for investigating the genetic exchanges that can occur while preserving the species boundaries. Petunia exserta is an endemic species from a very limited and specific area that grows exclusively in rocky shelters. These shaded spots are an inhospitable habitat for all other Petunia species, including the closely related and widely distributed species P. axillaris. Individuals with intermediate morphologic characteristics have been found near the rocky shelters and were believed to be putative hybrids between P. exserta and P. axillaris, suggesting a situation where Petunia exserta is losing its genetic identity. In the current study, we analyzed the plastid intergenic spacers trnS/trnG and trnH/psbA and six nuclear CAPS markers in a large sampling design of both species to understand the evolutionary process occurring in this biological system. Bayesian clustering methods, cpDNA haplotype networks, genetic diversity statistics, and coalescence-based analyses support a scenario where hybridization occurs while two genetic clusters corresponding to two species are maintained. Our results reinforce the importance of coupling differentially inherited markers with an extensive geographic sample to assess the evolutionary dynamics of recently diverged species that can hybridize.
Abstract:
Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that for a fixed dimension d, most d-dimensional projections of P look like a scale mixture of spherically symmetric Gaussian distributions, provided q is large. We provide sufficient conditions for this phenomenon in a suitable asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process √n(P̂ − P) under random projections, conditional on P̂.
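The Diaconis-Freedman effect can be illustrated with a toy simulation (an assumption of this sketch, not part of the paper): take P uniform on the vertices {−1, +1}^q, a distribution that is far from Gaussian but has zero mean and identity covariance, and project it onto one random direction. For large q the one-dimensional projection looks approximately standard normal:

```python
import math
import random

random.seed(1)
q, n = 200, 2000

# random unit vector u in R^q: the direction of projection
u = [random.gauss(0, 1) for _ in range(q)]
norm = math.sqrt(sum(x * x for x in u))
u = [x / norm for x in u]

# sample n points from P (random sign vectors) and project each onto u
proj = []
for _ in range(n):
    x = [random.choice((-1.0, 1.0)) for _ in range(q)]
    proj.append(sum(xi * ui for xi, ui in zip(x, u)))

mean = sum(proj) / n
var = sum((p - mean) ** 2 for p in proj) / n
# mean ~ 0 and var ~ 1: the projection resembles a standard Gaussian
print(round(mean, 3), round(var, 3))
```

A histogram of `proj` would look bell-shaped even though each coordinate of P takes only the two values ±1, which is the phenomenon the paper characterizes.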
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of a parameter space that will automatically contain points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
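The core idea can be sketched for a toy simple linear regression: draw a random finite subset of the parameter space and keep the draw that minimizes the residual sum of squares. This is a schematic illustration under assumed toy data, not the paper's actual construction (which is considerably more refined):

```python
import random

random.seed(0)

# toy data from y = a + b*x + noise (illustrative true values)
a_true, b_true = 1.0, 2.0
xs = [i / 10 for i in range(50)]
ys = [a_true + b_true * x + random.gauss(0, 0.1) for x in xs]

def rss(a, b):
    """Residual sum of squares of the line y = a + b*x."""
    return sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

# random finite subset of the parameter space [-5, 5]^2; with enough
# draws it automatically contains points close to the true parameter
candidates = [(random.uniform(-5, 5), random.uniform(-5, 5))
              for _ in range(10000)]
a_hat, b_hat = min(candidates, key=lambda p: rss(*p))
print(round(a_hat, 2), round(b_hat, 2))
```

The appeal of such a search is that it needs only function evaluations, no gradients or closed-form estimators, which matters for the non-standard likelihoods (e.g. log-concave error densities) that motivate the paper.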
Abstract:
The first section of this chapter starts with the Buffon problem, which is one of the oldest in stochastic geometry, and then continues with the definition of measures on the space of lines. The second section defines random closed sets and related measurability issues, explains how to characterize distributions of random closed sets by means of capacity functionals and introduces the concept of a selection. Based on this concept, the third section starts with the definition of the expectation and proves its convexifying effect that is related to the Lyapunov theorem for ranges of vector-valued measures. Finally, the strong law of large numbers for Minkowski sums of random sets is proved and the corresponding limit theorem is formulated. The chapter is concluded by a discussion of the union-scheme for random closed sets and a characterization of the corresponding stable laws.
Abstract:
Between 2004 and 2007, NGOs, community based organisations and private investors promoted jatropha in Kenya with the aim of generating additional income and producing biofuel for rural development. By 2008 it became gradually evident that jatropha plantations (both mono- and intercropping) are uneconomical and risky due to competition for land and labour with food crops. Cultivation of jatropha hedges was found to have better chances of economic success and to present only little risks for the adopting farmers. Still, after 2008 a number of farmers went on adopting jatropha in plots rather than as hedges. It is hypothesised that lack of awareness about the low economic prospects of jatropha plantations was the main reason for continued adoption, and that smallholder farmers with higher resource endowments mainly ventured into its cultivation. In this study we provide an empirical basis for understanding the role of households' capital assets in taking up new livelihood strategies by smallholder farmers in three rural districts in Kenya. For that purpose, we assess the motivation and enabling factors that led to the adoption of jatropha as a new livelihood strategy, as well as the context in which promotion and adoption took place. A household survey was conducted in 2010, using a structured questionnaire, to collect information on household characteristics and capital asset endowment. Data were analysed using descriptive statistics and non-parametric statistical tests. We established that access to additional income and own energy supply were the main motivation for adoption of jatropha, and that financial capital assets do not necessarily have a positive influence on adoption as hypothesised. Further, we found that the main challenges that adopting farmers faced were lack of access to information on good management practices and lack of a reliable market. 
We conclude that the continued adoption of on-farm jatropha after 2008 resulted from a lack of awareness of the low economic value of this production type. We recommend abandoning on-farm production of jatropha until improved seed material and locally adapted agronomic knowledge about jatropha cultivation become available and its production becomes economically competitive.
Abstract:
OBJECTIVE The aim of the study was to describe (a) the symptom experience of women with vulvar intraepithelial neoplasia and vulvar cancer (vulvar neoplasia) during the first week after hospital discharge, and (b) the associations between age, type of disease, stage of disease, the extent of surgical treatment and symptom experience. METHODS This cross-sectional study was conducted in eight hospitals in Germany and Switzerland (Clinical Trial ID: NCT01300663). Symptom experience after surgical treatment in women with vulvar neoplasia was measured with our newly developed WOMAN-PRO instrument. Outpatients (n=65) rated 31 items. We used descriptive statistics and regression analysis. RESULTS The average number of symptoms reported per patient was 20.2 (SD 5.77) with a range of 5 to 31 symptoms. The three most prevalent wound-related symptoms were 'swelling' (n=56), 'drainage' (n=54) and 'pain' (n=52). The three most prevalent difficulties in daily life were 'sitting' (n=63), 'wearing clothes' (n=56) and 'carrying out my daily activities' (n=51). 'Tiredness' (n=62), 'insecurity' (n=54) and 'feeling that my body has changed' (n=50) were the three most prevalent psychosocial symptoms/issues. The most distressing symptoms were 'sitting' (Mean 2.03, SD 0.88), 'open spot (e.g. opening of skin or suture)' (Mean 1.91, SD 0.93), and 'carrying out my daily activities' (Mean 1.86, SD 0.87), which were on average reported as 'quite a bit' distressing. Negative associations were found between psychosocial symptom experience and age. CONCLUSIONS WOMAN-PRO data showed high symptom prevalence and distress, calling for comprehensive symptom assessment, and may allow identification of relevant areas in symptom management.
Abstract:
Objectives Despite many reports on best practices regarding onsite psychological services, little research has attempted to systematically explore the frequency, issues, nature and client groups of onsite sport psychology consultancy at the Olympic Games. The present paper will fill this gap through a systematic analysis of the sport psychology consultancy of the Swiss team for the Olympic Games of 2006 in Turin, 2008 in Beijing and 2010 in Vancouver. Design Descriptive research design. Methods The day reports of the official sport psychologist were analysed. Intervention issues were labelled using categories derived from previous research and divided into the following four intervention-issue dimensions: “general performance”, “specific Olympic performance”, “organisational” and “personal” issues. Data were analysed using descriptive statistics, chi-square statistics and odds ratios. Results Across the Olympic Games, between 11% and 25% of the Swiss delegation used the sport psychology services. On average, the sport psychologist provided between 2.1 and 4.6 interventions per day. Around 50% of the interventions were informal interventions. Around 30% of the clients were coaches. The most commonly addressed issues were performance related. An association was observed between previous collaboration, intervention likelihood and intervention theme. Conclusions Sport psychologists working at the Olympic Games are fully engaged with daily interventions and should ideally have developed long-term relationships with clients to truly help athletes with general performance issues. Critical incidents, working with coaches, brief contact interventions and team conflicts are specific features of the onsite consultancy. Practitioners should be trained to deal with these sorts of challenges.
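The association tests mentioned in the Methods (chi-square statistics on categorical day-report data) reduce, in the simplest case, to a Pearson chi-square statistic on a 2×2 table. A minimal sketch with illustrative counts (the labels and numbers are assumptions, not the study's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. previous collaboration (yes/no) x intervention received (yes/no)."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    obs = ((a, b), (c, d))
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (obs[i][j] - expected) ** 2 / expected
    return chi2

# illustrative counts; compare against the df=1, 5% critical value of 3.84
print(round(chi_square_2x2(20, 10, 10, 20), 3))
```

A statistic above 3.84 (the 5% critical value for one degree of freedom) would indicate an association such as the one the paper reports between previous collaboration and intervention likelihood.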
Abstract:
Assessing and managing risks relating to the consumption of foodstuffs by humans and to the environment has been one of the most complex legal issues in WTO law, ever since the Agreement on Sanitary and Phytosanitary Measures was adopted at the end of the Uruguay Round and entered into force in 1995. The problem was expounded in a number of cases. Panels and the Appellate Body adopted different philosophies in interpreting the agreement and the basic concept of risk assessment as defined in Annex A para. 4 of the Agreement. Risk assessment entails fundamental questions of law and science. Different interpretations reflect different underlying perceptions of science and its relationship to the law. The present thesis, supported by the Swiss National Research Foundation, undertakes an in-depth analysis of these underlying perceptions. The author expounds the essence and differences of positivism and relativism in philosophy and the natural sciences. He clarifies the relationship of fundamental concepts such as risk, hazards and probability. This investigation is a remarkable effort on the part of a lawyer keen to learn more about the fundamentals based upon which the law – often unconsciously – is operated by the legal profession and the trade community. Based upon these insights, he turns to a critical assessment of the jurisprudence of both panels and the Appellate Body. Extensively referring to and discussing the literature, he deconstructs findings and decisions in light of implied and assumed underlying philosophies and perceptions as to the relationship of law and science, in particular in the field of food standards. Finding that neither positivism nor relativism provides adequate answers, the author turns to critical rationalism and applies the methodology of falsification developed by Karl R. Popper. Critical rationalism allows combining discourse in science and law and helps prepare the ground for a new approach to risk assessment and risk management.
Linking the problem to the doctrine of multilevel governance, the author develops a theory allocating risk assessment to international fora while leaving the matter of risk management to national and democratically accountable government. While the author throughout the thesis questions the possibility of separating risk assessment and risk management, the thesis offers new avenues which may assist in structuring a complex and difficult problem.
Abstract:
OBJECTIVE Approximately 85% of cervical cancer cases and deaths occur in resource-constrained countries where best practices for prevention, particularly for women with HIV infection, still need to be developed. The aim of this study was to assess cervical cancer prevention capacity in select HIV clinics located in resource-constrained countries. MATERIALS AND METHODS A cross-sectional survey of sub-Saharan African sites of 4 National Institutes of Health-funded HIV/AIDS networks was conducted. Sites were surveyed on the availability of cervical cancer screening and treatment among women with HIV infection and without HIV infection. Descriptive statistics and χ² or Fisher exact test were used as appropriate. RESULTS Fifty-one (65%) of 78 sites responded. Access to cervical cancer screening was reported by 49 sites (96%). Of these sites, 39 (80%) performed screening on-site. Central African sites were less likely to have screening on-site (p = .02) versus other areas. Visual inspection with acetic acid and Pap testing were the most commonly available on-site screening methods at 31 (79%) and 26 (67%) sites, respectively. High-risk HPV testing was available at 29% of sites with visual inspection with acetic acid and 50% of sites with Pap testing. Cryotherapy and radical hysterectomy were the most commonly available on-site treatment methods for premalignant and malignant lesions at 29 (74%) and 18 (46%) sites, respectively. CONCLUSIONS Despite limited resources, most sites surveyed had the capacity to perform cervical cancer screening and treatment. The existing infrastructure of HIV clinical and research sites may provide the ideal framework for scale-up of cervical cancer prevention in resource-constrained countries with a high burden of cervical dysplasia.
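The Fisher exact test mentioned in the Methods is the standard choice when expected cell counts are small, as with 51 responding sites split across regions. A minimal sketch of the two-sided test for a 2×2 table via the hypergeometric distribution (illustrative counts only, not the survey's data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the sum of hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    def prob(x):  # probability of the table whose top-left cell is x
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    x_min, x_max = max(0, c1 - r2), min(r1, c1)
    # small tolerance guards against float ties when comparing probabilities
    return sum(prob(x) for x in range(x_min, x_max + 1)
               if prob(x) <= p_obs * (1 + 1e-12))

# illustrative: 3/4 vs 1/4 sites with on-site screening -> not significant
print(round(fisher_exact_two_sided(3, 1, 1, 3), 4))
```

Unlike the χ² test, this exact enumeration remains valid however small the counts, which is why surveys of this size report it alongside χ².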
Abstract:
Any image processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image. This is realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. This approach yields as a first result a huge number of accepted answers due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, the true alarm, as well as the false alarm probability, are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware and on the image acquisition process.
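The convolve-and-threshold idea can be sketched in miniature. This toy version (an assumption of this sketch: a noise-free synthetic frame, a single horizontal filter of one fixed length, and an illustrative 3-sigma threshold) omits the filter-length/orientation sweep and the additional acceptance criteria the paper describes:

```python
import statistics

# toy 20x20 "CCD frame": flat background with one faint horizontal streak
H, W = 20, 20
img = [[10.0] * W for _ in range(H)]
for col in range(3, 17):
    img[8][col] += 2.0  # streak on row 8, columns 3..16

# spatial filter matched to a horizontal streak of length 7; convolution
# here is a sliding mean (a full detector varies length and orientation)
K = 7
resp = [[sum(img[y][x + i] for i in range(K)) / K
         for x in range(W - K + 1)]
        for y in range(H)]

# overall threshold derived from the statistics of the filter responses
flat = [r for row in resp for r in row]
threshold = statistics.mean(flat) + 3 * statistics.pstdev(flat)
hits = [(y, x) for y in range(H) for x in range(W - K + 1)
        if resp[y][x] > threshold]
# every hit lies on the streak row; the windows only partially covering
# the streak ends are exactly the kind of redundant answers that the
# paper's additional acceptance criteria are designed to prune
```

Because the threshold is expressed in units of the background spread, the false-alarm probability is directly controllable, which is the property the paper emphasizes.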