893 results for variable sample size
Abstract:
Barrels are discrete cytoarchitectonic clusters of neurons located in layer IV of the somatosensory cortex of the mouse brain. Each barrel is related to a specific whisker on the mouse snout. The whisker-to-barrel pathway is a part of the somatosensory system that is intensively used to explore sensory-activation-induced plasticity in the cerebral cortex.

Different recording methods exist to explore the cortical response induced by whisker deflection in the cortex of anesthetized mice. In this work, we used a method called Single-Unit Analysis, by which we recorded the extracellular electric signals of a single barrel neuron using a microelectrode. After recording, the signal was processed by discriminators to isolate a specific neuronal waveform shape (action potentials).

The objective of this thesis was to become familiar with barrel cortex recording during whisker deflection and its theoretical background, and to compare two different ways of discriminating and sorting the cortical signal: the Waveform Window Discriminator (WWD) and the Spike Shape Discriminator (SSD).

The WWD is an electronic module allowing the selection of a specific electric signal shape. A trigger level and a window potential level are set manually; during measurements, every time the electric signal passes through the two levels a dot is generated on the time line. It was the method used in previous extracellular recording studies in the Département de Biologie Cellulaire et de Morphologie (DBCM) in Lausanne.

The SSD is a function provided by the signal analysis software Spike2 (Cambridge Electronic Design). The neuronal signal is discriminated by a complex algorithm allowing the creation of specific templates, each of which is supposed to correspond to a cell response profile. The templates are saved as a number of points (62 in this study) and are set for each new cortical location. During measurements, every time the recorded cortical signal matches a defined fraction of the template points (60% in this study) a dot is generated on the time line. The advantage of the SSD is that multiple templates can be used during a single stimulation, allowing simultaneous recording of multiple signals.

Different ways exist to represent the data after discrimination and sorting. The most commonly used in Single-Unit Analysis of the barrel cortex are the time between stimulation and the first cell response (the latency), the Response Magnitude (RM) after whisker deflection corrected for spontaneous activity, and the time distribution of neuronal spikes after whisker stimulation (the Peri-Stimulus Time Histogram, PSTH).

The results show that the RMs and the latencies in layer IV differed significantly between the WWD- and SSD-discriminated signals. The temporal distribution of the latencies shows that the SSD values were spread between 6 and 60 ms with no peak, while the WWD data were all gathered around a peak of 11 ms (consistent with previous studies). The scattered distribution of the latencies recorded with the SSD did not correspond to a cell response.

The SSD appears to be a powerful tool for signal sorting, but we did not succeed in using it for Single-Unit Analysis extracellular recordings. Further recordings with different SSD template settings and a larger sample size may help to show the utility of this tool in Single-Unit Analysis studies.
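For readers unfamiliar with these measures, the sketch below shows how a PSTH, first-spike latencies and a spontaneous-activity-corrected RM can be computed from spike and stimulus timestamps. It is a minimal illustration in Python, not the Spike2/discriminator pipeline used in the thesis; all function names and window lengths are assumptions.

```python
import numpy as np

# spike_times, stim_times: 1-D NumPy arrays of timestamps in seconds.

def psth(spike_times, stim_times, window=0.1, bin_width=0.001):
    """Peri-Stimulus Time Histogram: spike counts in bins after each stimulus."""
    edges = np.arange(0.0, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in stim_times:
        rel = spike_times[(spike_times >= t0) & (spike_times < t0 + window)] - t0
        counts += np.histogram(rel, bins=edges)[0]
    return edges[:-1], counts

def first_spike_latencies(spike_times, stim_times, window=0.1):
    """Latency: time from each stimulus to the first following spike, if any."""
    lats = []
    for t0 in stim_times:
        after = spike_times[(spike_times >= t0) & (spike_times < t0 + window)]
        if after.size:
            lats.append(after[0] - t0)
    return np.array(lats)

def response_magnitude(spike_times, stim_times, resp_win=0.05, base_win=0.05):
    """RM: mean evoked spike count per stimulus, corrected for spontaneous
    activity estimated in a pre-stimulus baseline window."""
    evoked = np.mean([np.sum((spike_times >= t) & (spike_times < t + resp_win))
                      for t in stim_times])
    spont = np.mean([np.sum((spike_times >= t - base_win) & (spike_times < t))
                     for t in stim_times])
    return evoked - spont * (resp_win / base_win)
```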
Abstract:
Various test methods exist for measuring heat of cement hydration; however, most current methods require expensive equipment, complex testing procedures, and/or extensive time, making them unsuitable for field application. The objectives of this research are to identify, develop, and evaluate a standard test procedure for characterization and quality control of pavement concrete mixtures using a calorimetry technique. This research project has three phases. Phase I was designed to identify the user needs, including performance requirements and precision and bias limits, and to synthesize existing test methods for monitoring the heat of hydration, including device types, configurations, test procedures, measurements, advantages, disadvantages, applications, and accuracy. Phase II was designed to conduct experimental work to evaluate the calorimetry equipment recommended from the Phase I study and to develop a standard test procedure for using the equipment and interpreting the test results. Phase II also includes the development of models and computer programs for prediction of concrete pavement performance based on the characteristics of heat evolution curves. Phase III was designed to pursue the further development of a much simpler, inexpensive calorimeter for field concrete. In this report, the results from the Phase I study are presented, the plan for the Phase II study is described, and the recommendations for the Phase III study are outlined. Phase I has been completed through three major activities: (1) collecting input and advice from the members of the project Technical Working Group (TWG), (2) conducting a literature survey, and (3) performing trials at the CP Tech Center's research lab. The research results indicate that in addition to predicting maturity/strength, concrete heat evolution test results can also be used for (1) forecasting concrete setting time, (2) specifying curing period, (3) estimating risk of thermal cracking, (4) assessing pavement sawing/finishing time, (5) characterizing cement features, (6) identifying incompatibility of cementitious materials, (7) verifying concrete mix proportions, and (8) selecting materials and/or mix designs for given environmental conditions. Besides concrete materials and mix proportions, the configuration of the calorimeter device, sample size, mixing procedure, and testing environment (temperature) also have significant influences on the features of the concrete heat evolution process. The research team has found that although various calorimeter tests have been conducted for assorted purposes and the potential uses of calorimeter tests are clear, there is no consensus on how to utilize the heat evolution curves to characterize concrete materials and how to effectively relate the characteristics of heat evolution curves to concrete pavement performance. The goal of the Phase II study is to close these gaps.
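As one concrete link between heat evolution data and strength prediction, the sketch below computes the classical Nurse-Saul maturity index from a temperature history. The report does not commit to this particular maturity function; the datum temperature and the readings are illustrative assumptions.

```python
# Nurse-Saul maturity index: M = sum((T_a - T_0) * dt) over the temperature
# history, with datum temperature T_0 (often taken as -10 degC).
# Illustrative only -- not a function prescribed by this report.
def nurse_saul_maturity(temps_c, dt_hours, datum_c=-10.0):
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Example: hypothetical hourly temperatures read off a heat-evolution curve.
temps = [20, 24, 31, 38, 42, 40, 36, 33]            # degC
print(nurse_saul_maturity(temps, dt_hours=1.0))      # degC-hours
```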
Abstract:
Introduction: Mantle cell lymphoma (MCL) accounts for 6% of all B-cell lymphomas and remains incurable for most patients. Those who relapse after first-line therapy or hematopoietic stem cell transplantation have a dismal prognosis, with short response duration after salvage therapy. On a molecular level, MCL is characterised by the translocation t(11;14) leading to Cyclin D1 overexpression. Cyclin D1 is downstream of the mammalian target of rapamycin (mTOR) kinase and can be effectively blocked by mTOR inhibitors such as temsirolimus. We set out to define the single-agent activity of the orally available mTOR inhibitor everolimus (RAD001) in a prospective, multi-centre trial in patients with relapsed or refractory MCL (NCT00516412). The study was performed in collaboration with the EU-MCL network. Methods: Eligible patients with histologically/cytologically confirmed relapsed (not more than 3 prior lines of systemic treatment) or refractory MCL received everolimus 10 mg orally daily on days 1-28 of each 4-week cycle, for 6 cycles or until disease progression. The primary endpoint was best objective response; adverse reactions, time to progression (TTP), time to treatment failure, response duration and molecular response were secondary endpoints. A response rate of ≤10% was considered uninteresting and, conversely, promising if ≥30%. The required sample size was 35 patients, using Simon's optimal two-stage design with 90% power and 5% significance. Results: A total of 36 patients (35 evaluable) from 19 centers were enrolled between August 2007 and January 2010. The median age was 69.4 years (range 40.1 to 84.9 years), with 22 males and 13 females. Thirty patients presented with relapsed and 5 with refractory MCL, with a median of two prior therapies. Treatment was generally well tolerated, with anemia (11%), thrombocytopenia (11%), neutropenia (8%), diarrhea (3%) and fatigue (3%) being the most frequent complications of CTC grade III or higher. Eighteen patients received 6 or more cycles of everolimus treatment. The objective response rate was 20% (95% CI: 8-37%), with 2 CR, 5 PR, 17 SD, and 11 PD. At a median follow-up of 6 months, TTP was 5.45 months (95% CI: 2.8-8.2 months) for the entire population and 10.6 months for the 18 patients receiving 6 or more cycles of treatment. Conclusion: This study demonstrates that single-agent everolimus 10 mg once daily orally is well tolerated. The null hypothesis of inactivity could be rejected, indicating moderate anti-lymphoma activity in relapsed/refractory MCL. Further studies of everolimus, either in combination with chemotherapy or as a single agent for maintenance treatment, are warranted in MCL.
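For readers unfamiliar with Simon's optimal two-stage design, the sketch below checks the operating characteristics (type I error, power) of such a design for p0 = 10% versus p1 = 30%. The stage sizes and boundaries shown (r1/n1 = 2/18, r/n = 6/35, the commonly tabulated optimal design for these error rates) are illustrative assumptions, not values taken from the trial protocol.

```python
from scipy.stats import binom

def reject_prob(p, n1, r1, n, r):
    """P(declare the drug active) under true response rate p:
    continue past stage 1 with more than r1 responses among n1 patients,
    then exceed r total responses among all n patients."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        # need more than r - x1 further responses among the n - n1
        # stage-2 patients (binom.sf(k, m, p) = P(X > k))
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return total

n1, r1, n, r = 18, 2, 35, 6  # illustrative boundaries for p0=0.10, p1=0.30
print("type I error:", reject_prob(0.10, n1, r1, n, r))  # target <= 0.05
print("power:      ", reject_prob(0.30, n1, r1, n, r))   # target >= 0.90
```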
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a {\sl fixed} distribution--concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any {\sl sequence} of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for {\sl infinitely many} $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
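Schematically, with generic constants $c, c'$ and $L(g_n)$ denoting the probability of error of the rule $g_n$ (the notation here is not the paper's own), the classical bound and the strong fixed-pair bound read:

```latex
% Classical minimax lower bound: the bad pair (X, C) may change with n.
\inf_{g_n} \sup_{(X,C)} \mathbf{E}\, L(g_n) \;\ge\; c\,\frac{V}{n}
\qquad \text{for every } n .

% Strong version proved here: one fixed pair defeats the whole sequence.
\exists\, (X,C) \ \text{such that} \quad
\mathbf{E}\, L(g_n) \;\ge\; c'\,\frac{k}{n}
\qquad \text{for infinitely many } n,
\ \text{for any sequence } \{g_n\}.
```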
Abstract:
Surveys are a valuable instrument to find out about the social and political reality of our context. However, the work of researchers is often limited by a number of handicaps, mainly two. On the one hand, the samples are usually of low technical quality and the fieldwork is not carried out in the finest conditions. On the other hand, many surveys are not especially designed to allow their comparison, an operation precisely appreciated in political research. The article presents the European Social Survey and justifies its methodological bases. The survey, promoted by the European Science Foundation and the European Commission, is born from the collective effort of the scientific community with the explicit aim to establish certain quality standards in the sample design and in the carrying out of the fieldwork, so as to guarantee the quality of the data and allow comparison between countries.
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of (1) reducing sample size when precision is given, or (2) improving precision for a given sample size.
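A minimal sketch of the mixed allocation described above, assuming a total sample size n, area population sizes, and a mixing fraction f; all names are illustrative:

```python
def mixed_allocation(n, pop_sizes, f=0.5):
    """Split total sample n among k small areas: a fraction f of n is
    allocated proportionally to area population, the remaining (1 - f)
    is spread evenly. Rounding to integers is omitted in this sketch."""
    k = len(pop_sizes)
    total_pop = sum(pop_sizes)
    return [f * n * p / total_pop + (1 - f) * n / k for p in pop_sizes]

# Example: n = 1000 over three areas of very different size; the even
# component guarantees the smallest area a workable sample.
print(mixed_allocation(1000, [50_000, 20_000, 5_000], f=0.6))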
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We here describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and to reduce it. We propose some rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods, using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
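Spanish general elections allocate seats with the d'Hondt highest-averages formula, so a short example of that rule may help illustrate why seat forecasts are more delicate than vote-share estimates; the abstract itself does not single out a formula, so treat this as a representative instance:

```python
def dhondt(votes, seats):
    """d'Hondt highest-averages allocation: repeatedly award a seat to the
    party with the largest quotient votes / (seats_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Example district: a small vote swing can move the last quotient across
# parties, which is one source of the forecasting bias discussed above.
print(dhondt({"A": 46_000, "B": 38_000, "C": 16_000}, seats=5))
# -> {'A': 2, 'B': 2, 'C': 1}
```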
Abstract:
BACKGROUND: Most scales that assess the presence and severity of psychotic symptoms measure a broad range of experiences and behaviours, which restricts the detailed measurement of specific symptoms such as delusions or hallucinations. The Psychotic Symptom Rating Scales (PSYRATS) is a clinical assessment tool that focuses on the detailed measurement of these core symptoms. The goal of this study was to examine the psychometric properties of the French version of the PSYRATS. METHODS: A sample of 103 outpatients suffering from schizophrenia or schizoaffective disorders and presenting persistent psychotic symptoms over the previous three months was assessed using the PSYRATS. Seventy-five of the participants were also assessed with the Positive And Negative Syndrome Scale (PANSS). RESULTS: ICCs were above .90 for all items of the PSYRATS. Factor analysis replicated the factorial structure of the original version of the delusions scale. As in previous replications, the factor structure of the hallucinations scale was only partially replicated. Convergent validity indicated that some specific PSYRATS items do not correlate with the PANSS delusions or hallucinations items. The distress items of the PSYRATS are negatively correlated with the grandiosity scale of the PANSS. CONCLUSIONS: The results of this study are limited by the relatively small sample size as well as the selection of participants with persistent symptoms. The French version of the PSYRATS partially replicates previously published results. Differences in the factor structure of the hallucinations scale might be explained by the greater variability of its elements. Future development of the scale should take into account the presence of grandiosity in order to better capture the details of the psychotic experience.
Abstract:
A national survey designed to estimate a specific population quantity is sometimes also used to estimate this quantity for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, so other means of improving estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation. We use the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.
The economic effects of the Protestant Reformation: Testing the Weber hypothesis in the German Lands
Abstract:
Many theories, most famously Max Weber's essay on the Protestant ethic, have hypothesized that Protestantism should have favored economic development. With their considerable religious heterogeneity and stability of denominational affiliations until the 19th century, the German Lands of the Holy Roman Empire present an ideal testing ground for this hypothesis. Using population figures in a dataset comprising 272 cities in the years 1300–1900, I find no effects of Protestantism on economic growth. The finding is robust to the inclusion of a variety of controls, and does not appear to depend on data selection or small sample size. In addition, Protestantism has no effect when interacted with other likely determinants of economic development. I also analyze the endogeneity of religious choice; instrumental variables estimates of the effects of Protestantism are similar to the OLS results.
Abstract:
Structural equation models (SEM) are commonly used to analyze the relationship between variables some of which may be latent, such as individual ``attitude'' to and ``behavior'' concerning specific issues. A number of difficulties arise when we want to compare a large number of groups, each with large sample size, and the manifest variables are distinctly non-normally distributed. Using a specific data set, we evaluate the appropriateness of the following alternative SEM approaches: multiple group versus MIMIC models, continuous versus ordinal variables estimation methods, and normal theory versus non-normal estimation methods. The approaches are applied to the ISSP-1993 Environmental data set, with the purpose of exploring variation in the mean level of variables of ``attitude'' to and ``behavior'' concerning environmental issues and their mutual relationship across countries. Issues of both theoretical and practical relevance arise in the course of this application.
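For reference, a MIMIC (multiple indicators, multiple causes) model folds group membership into the structural part instead of fitting each group separately; in generic LISREL-style notation, not the paper's own:

```latex
% Structural part: latent attitude/behavior \eta driven by observed causes x
\eta = \Gamma x + \zeta
% Measurement part: manifest indicators y load on the latent variables
y = \Lambda \eta + \varepsilon
% In a MIMIC model, group (country) dummies enter x; in a multiple-group
% SEM, (\Lambda, \Gamma, \dots) are instead estimated per group, with
% cross-group equality constraints where invariance is assumed.
```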
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel, plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
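The guarantee quoted above has the shape of an $L_1$ oracle inequality; schematically, with a generic constant $C$ and notation that is not the paper's own:

```latex
% f: true density, f_{n,\theta}: kernel estimate with parameters \theta
% (bandwidth, kernel), \hat{\theta}: the data-driven selection.
\int \lvert f_{n,\hat{\theta}} - f \rvert
  \;\le\; 3 \inf_{\theta} \int \lvert f_{n,\theta} - f \rvert
  \;+\; C \sqrt{\frac{\log n}{n}} ,
\qquad \text{for all densities } f .
```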
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's K agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
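A minimal sketch of the 'random' versus 'regular' strategies on a synthetic virtual species, assuming scikit-learn's logistic regression as the GLM; the landscape, sample size and accuracy check are illustrative assumptions, not the paper's GIS workflow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic virtual species: presence probability follows one environmental
# gradient on a 100 x 100 landscape.
grid = 100
env = np.add.outer(np.linspace(0, 1, grid), np.linspace(0, 1, grid)) / 2
suitability = 1 / (1 + np.exp(-10 * (env - 0.5)))
presence = rng.random((grid, grid)) < suitability

def sample_random(n):
    idx = rng.choice(grid * grid, size=n, replace=False)
    return np.unravel_index(idx, (grid, grid))

def sample_regular(n):
    step = int(grid / np.sqrt(n))  # systematic grid of ~n cells
    rr, cc = np.meshgrid(np.arange(0, grid, step), np.arange(0, grid, step))
    return rr.ravel(), cc.ravel()

for name, sampler in [("random", sample_random), ("regular", sample_regular)]:
    r, c = sampler(400)
    X = env[r, c].reshape(-1, 1)
    y = presence[r, c].astype(int)
    glm = LogisticRegression().fit(X, y)
    pred = glm.predict_proba(env.reshape(-1, 1))[:, 1]
    rho = np.corrcoef(pred, suitability.ravel())[0, 1]  # Pearson's r vs truth
    print(f"{name}: r = {rho:.3f}")
```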
Abstract:
Friedman et al. report that hemodialysis patients with the highest levels of n-3 fatty acids had impressively low odds of sudden cardiac death. The study is limited by a small sample size, and the analysis relies on only a single baseline measurement of blood levels. Indeed, recent randomized evidence fails to support the idea that n-3 fatty acids prevent sudden death in nonrenal patients. More evidence is needed to advocate fish oil in this setting.
Abstract:
BACKGROUND AND PURPOSE: To investigate the effect of chronic hyperglycemia on cerebral microvascular remodeling using perfusion computed tomography. METHODS: We retrospectively identified 26 patients from our registry of 2453 patients who underwent a perfusion computed tomographic study and had their hemoglobin A1c (HbA1c) measured. These 26 patients were divided into 2 groups: those with HbA1c>6.5% (n=15) and those with HbA1c≤6.5% (n=11). Perfusion computed tomographic studies were processed using delay-corrected, deconvolution-based software. Perfusion computed tomographic values, including mean transit time, which relates to the cerebral capillary architecture and length, were compared between the 2 patient groups. RESULTS: Mean transit time values in the nonischemic cerebral hemisphere were significantly longer in the patients with HbA1c>6.5% (P=0.033), especially in the white matter (P=0.005). A significant correlation (R=0.469; P=0.016) between mean transit time and HbA1c level was observed. CONCLUSIONS: Our results from a small sample suggest that chronic hyperglycemia may be associated with cerebral microvascular remodeling in humans. Additional prospective studies with a larger sample size are required to confirm this observation.
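For context, mean transit time is linked to the other perfusion parameters by the central volume principle, which is why it is read as a proxy for capillary architecture and length; in standard notation, independent of the particular software used in this study:

```latex
% Central volume principle relating the perfusion CT parameters:
\mathrm{MTT} = \frac{\mathrm{CBV}}{\mathrm{CBF}}
% MTT: mean transit time, CBV: cerebral blood volume,
% CBF: cerebral blood flow.
```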