897 results for Sample size
Abstract:
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to test statistically for non-inferiority rather than for superiority. When the endpoint is binary, the two treatments are usually compared using either an odds-ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts: one first defines the non-inferiority margin using an odds-ratio and then proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known with good precision and is high (e.g. more than 56% success). The gain in power may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
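As an illustration of the margin conversion this abstract describes, the following sketch converts an odds-ratio non-inferiority margin into a difference-of-proportions margin and plugs it into the standard normal-approximation sample-size formula. All numbers, function names and the formula choice are illustrative assumptions, not taken from the paper.

```python
import math
from statistics import NormalDist

def or_margin_to_diff(p0, psi):
    """Convert an odds-ratio non-inferiority margin psi (< 1) into the
    corresponding difference-of-proportions margin at control success rate p0."""
    odds_margin = psi * p0 / (1 - p0)       # odds of the worst acceptable rate
    p_margin = odds_margin / (1 + odds_margin)
    return p0 - p_margin                    # delta > 0

def n_per_group(p0, delta, alpha=0.025, power=0.90):
    """Approximate per-group size to show non-inferiority on the
    difference-of-proportions scale, assuming both arms truly succeed at rate p0."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    return math.ceil((za + zb) ** 2 * 2 * p0 * (1 - p0) / delta ** 2)

delta = or_margin_to_diff(p0=0.80, psi=0.5)   # hypothetical 80% control efficacy
n = n_per_group(0.80, delta)
```

With a high control success rate, a fixed odds-ratio margin translates into a relatively wide proportion margin, which is the mechanism behind the power gain the abstract reports.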
Abstract:
There are suggestions of an inverse association between folate intake and serum folate levels and the risk of oral cavity and pharyngeal cancers (OPCs), but most studies are limited in sample size, with only a few reporting information on the source of dietary folate. Our study aims to investigate the association between folate intake and the risk of OPC within the International Head and Neck Cancer Epidemiology (INHANCE) Consortium. We analyzed pooled individual-level data from ten case-control studies participating in the INHANCE consortium, including 5,127 cases and 13,249 controls. Odds ratios (ORs) and the corresponding 95% confidence intervals (CIs) were estimated for the associations between total folate intake (natural, fortification and supplementation) and natural folate only, and OPC risk. We found an inverse association between total folate intake and overall OPC risk (the adjusted OR for the highest vs. the lowest quintile was 0.65, 95% CI: 0.43-0.99), with a stronger association for the oral cavity (OR = 0.57, 95% CI: 0.43-0.75). A similar, though somewhat weaker, inverse association was observed for folate intake from natural sources only in oral cavity cancer (OR = 0.64, 95% CI: 0.45-0.91). The highest OPC risk was observed in heavy alcohol drinkers with low folate intake compared to never/light drinkers with high folate intake (OR = 4.05, 95% CI: 3.43-4.79); the attributable proportion (AP) owing to interaction was 11.1% (95% CI: 1.4-20.8%). Lastly, we found an OR of 2.73 (95% CI: 2.34-3.19) for ever tobacco users with low folate intake compared with never tobacco users with high folate intake (AP of interaction = 10.6%, 95% CI: 0.41-20.8%). Our pooled analysis of a large set of case-control studies supports a protective effect of total folate intake on OPC risk.
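The attributable proportion (AP) owing to interaction quoted above is, in the Rothman convention, RERI/OR11; a minimal sketch, with the two single-exposure odds ratios chosen as hypothetical values (only OR11 = 4.05 and the AP of 11.1% appear in the abstract):

```python
def attributable_proportion(or11, or10, or01):
    """AP due to additive interaction: RERI / OR11,
    where RERI = OR11 - OR10 - OR01 + 1 (Rothman)."""
    reri = or11 - or10 - or01 + 1
    return reri / or11

# or11: heavy drinkers with low folate intake (from the abstract);
# or10, or01: hypothetical single-exposure ORs chosen for illustration,
# relative to the never/light-drinking, high-folate reference group
ap = attributable_proportion(or11=4.05, or10=3.40, or01=1.20)
```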
Abstract:
Background: Pulmonary Langerhans cell histiocytosis (PLCH) is a rare disorder characterised by granulomatous proliferation of CD1a-positive histiocytes forming granulomas within the lung parenchyma, in strong association with tobacco smoking, and which may result in chronic respiratory failure. Smoking cessation is considered critical in management, but has variable effects on outcome. No drug therapy has been validated. Cladribine (chlorodeoxyadenosine, 2-CDA) down-regulates histiocyte proliferation and has been successful in controlling multi-system Langerhans cell histiocytosis and isolated PLCH.
Methods and patients: We retrospectively studied 5 patients (aged 37-55 years, 3 females) with PLCH who received 3 to 4 courses of cladribine as a single agent (0.1 mg/kg per day for 5 consecutive days at monthly intervals). One patient was treated twice because of relapse at 1 year. Progressive pulmonary disease with an obstructive ventilatory pattern despite smoking cessation and/or corticosteroid therapy was the indication for treatment. Patients were given oral trimethoprim/sulfamethoxazole and valaciclovir to prevent opportunistic infections. They gave written consent to receive off-label cladribine in the absence of validated treatment.
Results: Functional class dyspnea improved with cladribine therapy in 4 out of 5 cases, and forced expiratory volume in 1 second (FEV1) increased in all cases by a mean of 387 ml (100-920 ml), contrasting with a steady decline prior to treatment. Chest high-resolution computed tomography (HRCT) features improved with cladribine therapy in 4 patients. Hemodynamic improvement was observed in 1 patient with pre-capillary pulmonary hypertension. The results suggested a greater treatment effect in subjects with nodular lung lesions and/or thick-walled cysts on chest HRCT, with diffuse hypermetabolism of lung lesions on positron emission tomography (PET) scan, and with progressive disease despite smoking cessation. Infectious pneumonia developed in 1 patient, with later grade 4 neutrocytopenia but without infection.
Discussion: Data interpretation was limited by the retrospective, uncontrolled study design and small sample size.
Conclusion: Cladribine as a single agent may be an effective therapy in patients with progressive PLCH.
Abstract:
Important theoretical controversies remain unresolved in the literature on occupational sex-segregation and the gender wage-gap. A useful way of summarising these controversies is to view them as a debate between cultural-socialisation accounts and human-capital (job-specialisation) accounts. The paper discusses these theories in detail and carries out a preliminary test of the relative explanatory performance of some of their most consequential predictions, drawing on the Spanish sample of the second wave of the European Social Survey (ESS). The empirical analysis of ESS data illustrates the notable analytical pay-offs that can stem from using rich individual-level indicators, but also exemplifies the statistical limitations generated by small sample size and high rates of non-response. Empirical results should therefore be taken as preliminary. They suggest that the effect of occupational sex-segregation on wages could be explained by workers' sex-role attitudes, their relative input in domestic production and the job-specific human capital requirements of their jobs. Of these three factors, job-specialisation seems clearly the most important.
Abstract:
The choice between individual, cluster and pseudo-cluster randomisation designs is often difficult. Clear methodological guidelines have been given for trials in general practice, but not for vaccine trials. This article proposes a decisional flow-chart for choosing the design best adapted to evaluating the effectiveness of a vaccine in large-scale studies. Six criteria have been identified: the importance of herd immunity or herd protection, the ability to delimit epidemiological units, the homogeneity of transmission probability across sub-populations, the population's acceptability of randomisation, the availability of logistical resources, and the estimated sample size. This easy-to-use decisional method could help sponsors, trial steering committees and ethics committees adopt the most suitable design.
Abstract:
Many regions of the world, including inland lakes, present suboptimal conditions for the remotely sensed retrieval of optical signals, thus challenging the limits of available satellite data-processing tools, such as atmospheric correction models (ACMs) and water constituent-retrieval (WCR) algorithms. Working in such regions, however, can improve our understanding of remote-sensing tools and their applicability in new contexts, in addition to potentially offering useful information about aquatic ecology. Here, we assess and compare 32 combinations of two ACMs, two WCRs, and three binary categories of data quality standards to optimize a remotely sensed proxy of plankton biomass in Lake Kivu. Each parameter set is compared against the available ground-truth match-ups using Spearman's right-tailed ρ. Focusing on the best sets from each ACM-WCR combination, their performance is discussed with regard to data distribution, sample size, spatial completeness, and seasonality. The results of this study may be of interest both for ecological studies of Lake Kivu and for epidemiological studies of diseases, such as cholera, whose dynamics have been associated with plankton biomass in other regions of the world.
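The match-up comparison relies on Spearman's rank correlation; a self-contained sketch of the coefficient itself (the right-tailed test would additionally compare ρ against its null distribution to obtain a one-sided p-value, which is omitted here):

```python
def rank(xs):
    """Average ranks, 1-based; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```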
Abstract:
Ophthalmoscopy performed for the early diagnosis of retinopathy of prematurity (ROP) is painful for preterm infants, necessitating interventions to minimize pain. The present study aimed to establish the effectiveness of human milk, compared with sucrose, for pain relief in premature infants subjected to ophthalmoscopy for the early diagnosis of ROP. This investigation was a pilot, quasi-experimental study conducted with 14 premature infants admitted to the neonatal intensive care unit (NICU) of a university hospital. Comparison between the groups did not yield a statistically significant difference in crying time, salivary cortisol, or heart rate (HR). Human milk appears to be as effective as sucrose in relieving the acute pain associated with ophthalmoscopy. The study’s limitations included its small sample size and lack of randomization. Experimental investigations with greater statistical power should be performed to reinforce the evidence found in the present study.
Abstract:
Barrels are discrete cytoarchitectonic clusters of neurons located in layer IV of the somatosensory cortex of the mouse brain. Each barrel is related to a specific whisker on the mouse snout. The whisker-to-barrel pathway is a part of the somatosensory system that is intensively used to explore sensory-activation-induced plasticity in the cerebral cortex.
Different recording methods exist to explore the cortical response induced by whisker deflection in the cortex of anesthetized mice. In this work, we used a method called Single-Unit Analysis, by which we recorded the extracellular electric signals of a single barrel neuron using a microelectrode. After recording, the signal was processed by discriminators to isolate a specific neuronal shape (action potentials).
The objective of this thesis was to become familiar with barrel cortex recording during whisker deflection and its theoretical background, and to compare two different ways of discriminating and sorting the cortical signal: the Waveform Window Discriminator (WWD) and the Spike Shape Discriminator (SSD).
The WWD is an electric module allowing the selection of a specific electric signal shape. A trigger and a window potential level are set manually. During measurements, every time the electric signal passes through the two levels a dot is generated on the time line. It was the method used in previous extracellular recording studies in the Département de Biologie Cellulaire et de Morphologie (DBCM) in Lausanne.
The SSD is a function provided by the signal analysis software Spike2 (Cambridge Electronic Design). The neuronal signal is discriminated by a complex algorithm allowing the creation of specific templates. Each of these templates is supposed to correspond to a cell response profile. The templates are saved as a number of points (62 in this study) and are set for each new cortical location. During measurements, every time the recorded cortical signal matches a defined proportion of the template points (60% in this study) a dot is generated on the time line. The advantage of the SSD is that multiple templates can be used during a single stimulation, allowing simultaneous recording of multiple signals.
There are different ways to represent the data after discrimination and sorting. The most commonly used in Single-Unit Analysis of the barrel cortex are the time between stimulation and the first cell response (the latency), the Response Magnitude (RM) after whisker deflection corrected for spontaneous activity, and the time distribution of neuronal spikes after whisker stimulation (the Peri-Stimulus Time Histogram, PSTH).
The results show that the RMs and the latencies in layer IV were significantly different between the WWD- and SSD-discriminated signals. The temporal distribution of the latencies shows that the SSD values were spread between 6 and 60 ms with no peak value, while the WWD data were all gathered around a peak of 11 ms (corresponding to previous studies). The scattered distribution of the latencies recorded with the SSD did not correspond to a cell response.
The SSD appears to be a powerful tool for signal sorting, but we did not succeed in using it for Single-Unit Analysis extracellular recordings. Further recordings with different SSD template settings and a larger sample size may help to show the utility of this tool in Single-Unit Analysis studies.
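The three representations named above (latency, RM, PSTH) can be sketched in a few lines; the bin width, window length and example spike times below are arbitrary choices for illustration:

```python
from collections import Counter

def psth(spike_times_ms, n_trials, bin_ms=1, window_ms=60):
    """Peri-Stimulus Time Histogram: mean spike count per trial in each
    time bin after the stimulus (spike_times_ms pools all trials)."""
    counts = Counter(int(t // bin_ms) for t in spike_times_ms
                     if 0 <= t < window_ms)
    return [counts.get(b, 0) / n_trials for b in range(window_ms // bin_ms)]

def first_spike_latency(spike_times_ms):
    """Latency: time from stimulation to the first recorded spike."""
    return min(spike_times_ms) if spike_times_ms else None

def response_magnitude(hist, onset_bin, offset_bin, spont_rate_per_bin):
    """RM: evoked spikes per trial in the response window,
    corrected for spontaneous activity."""
    evoked = sum(hist[onset_bin:offset_bin])
    return evoked - spont_rate_per_bin * (offset_bin - onset_bin)

# two pooled trials of hypothetical spike times (ms after whisker deflection)
hist = psth([11.2, 11.8, 12.5, 30.1], n_trials=2)
rm = response_magnitude(hist, onset_bin=10, offset_bin=20,
                        spont_rate_per_bin=0.0)
```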
Abstract:
Various test methods exist for measuring the heat of cement hydration; however, most current methods require expensive equipment, complex testing procedures, and/or extensive time, and are thus not suitable for field application. The objectives of this research are to identify, develop, and evaluate a standard test procedure for characterization and quality control of pavement concrete mixtures using a calorimetry technique. This research project has three phases. Phase I was designed to identify user needs, including performance requirements and precision and bias limits, and to synthesize existing test methods for monitoring the heat of hydration, including device types, configurations, test procedures, measurements, advantages, disadvantages, applications, and accuracy. Phase II was designed to conduct experimental work to evaluate the calorimetry equipment recommended from the Phase I study and to develop a standard test procedure for using the equipment and interpreting the test results. Phase II also includes the development of models and computer programs for the prediction of concrete pavement performance based on the characteristics of heat evolution curves. Phase III was designed to pursue the further development of a much simpler, inexpensive calorimeter for field concrete. In this report, the results from the Phase I study are presented, the plan for the Phase II study is described, and the recommendations for the Phase III study are outlined. Phase I has been completed through three major activities: (1) collecting input and advice from the members of the project Technical Working Group (TWG), (2) conducting a literature survey, and (3) performing trials at the CP Tech Center’s research lab.
The research results indicate that in addition to predicting maturity/strength, concrete heat evolution test results can also be used for (1) forecasting concrete setting time, (2) specifying the curing period, (3) estimating the risk of thermal cracking, (4) assessing pavement sawing/finishing time, (5) characterizing cement features, (6) identifying incompatibility of cementitious materials, (7) verifying concrete mix proportions, and (8) selecting materials and/or mix designs for given environmental conditions. Besides concrete materials and mix proportions, the configuration of the calorimeter device, sample size, mixing procedure, and testing environment (temperature) also significantly influence the concrete heat evolution process. The research team has found that although various calorimeter tests have been conducted for assorted purposes and the potential uses of calorimeter tests are clear, there is no consensus on how to utilize heat evolution curves to characterize concrete materials or how to effectively relate the characteristics of heat evolution curves to concrete pavement performance. The goal of the Phase II study is to close these gaps.
Abstract:
Introduction: Mantle cell lymphoma (MCL) accounts for 6% of all B-cell lymphomas and remains incurable for most patients. Those who relapse after first-line therapy or hematopoietic stem cell transplantation have a dismal prognosis, with short response duration after salvage therapy. On a molecular level, MCL is characterised by the translocation t(11;14) leading to Cyclin D1 overexpression. Cyclin D1 is downstream of the mammalian target of rapamycin (mTOR) kinase and can be effectively blocked by mTOR inhibitors such as temsirolimus. We set out to define the single-agent activity of the orally available mTOR inhibitor everolimus (RAD001) in a prospective, multi-centre trial in patients with relapsed or refractory MCL (NCT00516412). The study was performed in collaboration with the EU-MCL network. Methods: Eligible patients with histologically/cytologically confirmed relapsed (not more than 3 prior lines of systemic treatment) or refractory MCL received everolimus 10 mg orally daily on days 1-28 of each 4-week cycle for 6 cycles or until disease progression. The primary endpoint was the best objective response, with adverse reactions, time to progression (TTP), time to treatment failure, response duration and molecular response as secondary endpoints. A response rate of ≤ 10% was considered uninteresting and, conversely, promising if ≥ 30%. The required sample size was 35 patients, using Simon's optimal two-stage design with 90% power and 5% significance. Results: A total of 36 patients, with 35 evaluable, from 19 centers were enrolled between August 2007 and January 2010. The median age was 69.4 years (range 40.1 to 84.9 years), with 22 males and 13 females. Thirty patients presented with relapsed and 5 with refractory MCL, with a median of two prior therapies. Treatment was generally well tolerated, with anemia (11%), thrombocytopenia (11%), neutropenia (8%), diarrhea (3%) and fatigue (3%) being the most frequent complications of CTC grade III or higher.
Eighteen patients received 6 or more cycles of everolimus treatment. The objective response rate was 20% (95% CI: 8-37%) with 2 CR, 5 PR, 17 SD, and 11 PD. At a median follow-up of 6 months, TTP was 5.45 months (95% CI: 2.8-8.2 months) for the entire population and 10.6 months for the 18 patients receiving 6 or more cycles of treatment. Conclusion: This study demonstrates that single agent everolimus 10 mg once daily orally is well tolerated. The null hypothesis of inactivity could be rejected indicating a moderate anti-lymphoma activity in relapsed/refractory MCL. Further studies of either everolimus in combination with chemotherapy or as single agent for maintenance treatment are warranted in MCL.
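The operating characteristics of such a design follow from two binomial computations. In Simon's published tables, the optimal design for p0 = 0.10, p1 = 0.30, 5% significance and 90% power stops after 18 patients if 2 or fewer respond and declares activity if more than 6 of 35 respond; the sketch below checks that this assumed design (the stage boundaries are taken from the tables, not from the abstract) meets the stated error rates.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(i, n, p) for i in range(max(k, 0), n + 1))

def reject_prob(p, r1, n1, r, n):
    """Probability of declaring the drug active under response rate p:
    pass stage 1 (X1 > r1) and exceed r responses over both stages."""
    return sum(binom_pmf(x1, n1, p) * binom_sf(r - x1 + 1, n - n1, p)
               for x1 in range(r1 + 1, n1 + 1))

alpha = reject_prob(0.10, r1=2, n1=18, r=6, n=35)   # type I error under p0
power = reject_prob(0.30, r1=2, n1=18, r=6, n=35)   # power under p1
```

With 7 responses (2 CR + 5 PR) observed among 35 patients, the boundary of 6 is exceeded, which is consistent with the abstract's rejection of the null hypothesis of inactivity.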
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a {\sl fixed} distribution--concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any {\sl sequence} of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for {\sl infinitely many} $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
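The strengthening can be stated side by side with the classical bound; in sketch form, with $L(g_n)$ denoting the error probability of $g_n$ and $c, c' > 0$ constants (notation assumed for illustration):

```latex
% classical minimax bound: the hard pair (X, C) may change with n
\inf_{g_n}\ \sup_{(X,\,C)}\ \mathbb{E}\, L(g_n)\ \ge\ c\,\frac{V}{n}
\qquad \text{for each } n;
% strong version: one fixed pair defeats the whole sequence of rules
\forall\, \{g_n\}\ \ \exists\, (X, C):\quad
\mathbb{E}\, L(g_n)\ \ge\ c'\,\frac{k}{n}
\qquad \text{for infinitely many } n.
```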
Abstract:
Surveys are a valuable instrument for finding out about the social and political reality of our context. However, the work of researchers is often limited by two main handicaps. On the one hand, the samples are usually of low technical quality and the fieldwork is not carried out in the finest conditions. On the other hand, many surveys are not specifically designed to allow comparison, an operation particularly appreciated in political research. The article presents the European Social Survey and justifies its methodological bases. The survey, promoted by the European Science Foundation and the European Commission, is born from the collective effort of the scientific community with the explicit aim of establishing certain quality standards in the sample design and in the carrying out of the fieldwork, so as to guarantee the quality of the data and allow comparison between countries.
Abstract:
Understanding the genetic structure of human populations is of fundamental interest to medical, forensic and anthropological sciences. Advances in high-throughput genotyping technology have markedly improved our understanding of global patterns of human genetic variation and suggest the potential to use large samples to uncover variation among closely spaced populations. Here we characterize genetic variation in a sample of 3,000 European individuals genotyped at over half a million variable DNA sites in the human genome. Despite low average levels of genetic differentiation among Europeans, we find a close correspondence between genetic and geographic distances; indeed, a geographical map of Europe arises naturally as an efficient two-dimensional summary of genetic variation in Europeans. The results emphasize that when mapping the genetic basis of a disease phenotype, spurious associations can arise if genetic structure is not properly accounted for. In addition, the results are relevant to the prospects of genetic ancestry testing; an individual's DNA can be used to infer their geographic origin with surprising accuracy, often to within a few hundred kilometres.
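A two-dimensional summary of this kind is typically obtained by principal component analysis of the genotype matrix; a minimal sketch on synthetic data (random genotypes, purely illustrative; the real analysis involves far more individuals, SNPs and quality control):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic genotype matrix: 100 individuals x 500 SNPs, coded 0/1/2
G = rng.integers(0, 3, size=(100, 500)).astype(float)

# centre each SNP, then project individuals onto the top two principal axes
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pc_coords = U[:, :2] * S[:2]   # row i: individual i's (PC1, PC2) coordinates
```

On real data with geographic structure, plotting each individual at its (PC1, PC2) coordinates is what recovers the map-like arrangement.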
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Errors (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of (1) reducing sample size when precision is given, or (2) improving precision for a given sample size.
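The mixed allocation described above is straightforward to express; a sketch, with the proportional fraction left as a free tuning parameter (the article determines it from the optimality criterion, which is not reproduced here):

```python
def mixed_allocation(n, area_sizes, prop_fraction):
    """Mixed sample design: a fraction of the total sample size n is
    allocated proportionally to area size, the rest evenly across areas."""
    k = len(area_sizes)
    total = sum(area_sizes)
    even_share = (1 - prop_fraction) * n / k
    return [round(even_share + prop_fraction * n * s / total)
            for s in area_sizes]

# hypothetical population sizes of three small areas, total sample of 900
alloc = mixed_allocation(900, [200, 300, 400], prop_fraction=0.5)
```

Setting `prop_fraction=0` gives a purely equal allocation (best for the small-area estimates), `prop_fraction=1` a purely proportional one (best for the large-area estimate); the mixed design interpolates between the two.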
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and reduce it. We propose some rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from Spanish elections in recent years. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
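As a concrete instance of an electoral formula, Spanish elections allocate seats by the d'Hondt highest-averages rule; a minimal sketch (party names and vote counts hypothetical):

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties by the d'Hondt highest-averages rule:
    each seat goes to the party with the largest current quotient
    votes / (seats already won + 1)."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

result = dhondt({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, 7)
```

The discontinuity of this allocation is one source of the forecast bias the abstract discusses: a small error in the estimated vote proportions can shift a whole seat.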