962 results for Medicine Research Statistical methods
Abstract:
Two contrasting multivariate statistical methods, viz., principal components analysis (PCA) and cluster analysis were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
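The contrast drawn here can be seen in miniature: PCA summarises correlated variables by the proportion of variance carried by successive components, and a dominant first component with no gaps suggests continuous variation rather than discrete subtypes. A minimal sketch with invented two-variable data (not the study's 47 neuropathological measurements):

```python
import math
from statistics import mean

# Toy data: two correlated neuropathology measures (hypothetical values).
x = [2.0, 3.1, 4.2, 5.0, 6.1, 7.2]
y = [1.9, 3.0, 4.1, 5.2, 5.9, 7.1]

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / (len(a) - 1)

sxx, syy, sxy = cov(x, x), cov(y, y), cov(x, y)

# Eigenvalues of the 2x2 covariance matrix (closed form via the
# characteristic polynomial); these are the PC variances.
tr, det = sxx + syy, sxx * syy - sxy * sxy
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

# Proportion of total variance captured by the first principal component.
explained = lam1 / (lam1 + lam2)
print(f"PC1 explains {explained:.1%} of the variance")
```

With strongly correlated inputs, almost all the variance falls on PC1; real analyses would inspect the full spectrum of components for structure.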
Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software that exhibits a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented, and a review is given of some of the systems developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert-systems approach. The thesis then proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique is semantically sound, i.e., whether the results obtained will be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
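The kind of semantic soundness check described can be caricatured in a few lines: a purely syntactic check only asks whether the inputs are numbers, while a semantic check consults the recorded measurement scale of each variable. Everything here (names, the scale taxonomy, the rules) is an illustrative assumption, not the prototype's actual design:

```python
# Map each technique to the measurement scales it is semantically valid
# for (illustrative rules only).
REQUIRES = {
    "t-test": {"interval", "ratio"},        # needs genuinely numeric scales
    "chi-squared": {"nominal", "ordinal"},  # needs categorical scales
}

def semantic_check(technique, variable_scales):
    """Return the variables whose scale makes the technique unsound."""
    allowed = REQUIRES[technique]
    return [name for name, scale in variable_scales.items()
            if scale not in allowed]

# A syntactic check would accept this request, since blood groups can be
# coded as numbers; the semantic check flags it, because those numbers
# encode categories.
problems = semantic_check("t-test", {"blood_group": "nominal", "dose": "ratio"})
print(problems)
```

The point of the sketch is the division of labour: the scale metadata is part of a semantic data model of the data, and the check runs before the statistical routine is ever invoked.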
Abstract:
The last decade has seen a considerable increase in the application of quantitative methods in the study of histological sections of brain tissue and especially in the study of neurodegenerative disease. These disorders are characterised by the deposition and aggregation of abnormal or misfolded proteins in the form of extracellular protein deposits such as senile plaques (SP) and intracellular inclusions such as neurofibrillary tangles (NFT). Quantifying brain lesions and studying the relationships between lesions and normal anatomical features of the brain, including neurons, glial cells, and blood vessels, have become important methods of elucidating disease pathogenesis. This review describes measures of the abundance of a histological feature, such as density, frequency, and 'load', and the sampling methods by which quantitative measures can be obtained, including plot/quadrat sampling, transect sampling, and the point-quarter method. In addition, methods for determining the spatial pattern of a histological feature, i.e., whether the feature is distributed at random, regularly, or is aggregated into clusters, are described. These methods include the use of the Poisson and binomial distributions, pattern analysis by regression, Fourier analysis, and methods based on mapped point patterns. Finally, the statistical methods available for studying the degree of spatial correlation between pathological lesions and neurons, glial cells, and blood vessels are described.
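The simplest of the Poisson-based approaches mentioned can be sketched directly: for quadrat (plot) counts, the variance-to-mean ratio is approximately 1 under a random spatial pattern, above 1 for a clustered pattern, and below 1 for a regular one. The counts below are invented for illustration:

```python
from statistics import mean, variance

# Hypothetical quadrat counts of a lesion (e.g. SP per sample field).
counts_clustered = [0, 0, 9, 1, 0, 12, 0, 8, 0, 10]
counts_regular = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]

def vmr(counts):
    """Variance-to-mean ratio: ~1 random, >1 clustered, <1 regular."""
    return variance(counts) / mean(counts)

print(f"clustered sample: VMR = {vmr(counts_clustered):.2f}")
print(f"regular sample:   VMR = {vmr(counts_regular):.2f}")
```

In practice the departure of the ratio from 1 would be tested formally (the index is sensitive to quadrat size), but the direction of the departure already distinguishes the three patterns the review describes.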
Abstract:
A history of government drug regulation and the relationship between the pharmaceutical companies in the U.K. and the licensing authority is outlined. Phases of regulatory stringency are identified, with the formation of the Committees on Safety of Drugs and Medicines viewed as watersheds. A study of the impact of government regulation on industrial R&D activities focuses on the effects on the rate and direction of new product innovation. A literature review examines the decline in new chemical entity innovation. Regulations are cited as a major but not singular cause of the decline. Previous research attempting to determine the causes of such a decline on an empirical basis is reviewed, and the methodological problems associated with such research are identified. The U.K.-owned sector of the British pharmaceutical industry is selected for a study employing a bottom-up approach allowing disaggregation of data. A historical background to the industry is provided, with each company analysed on a case-study basis. Variations between companies regarding the policies adopted for R&D are emphasised. The process of drug innovation is described in order to determine possible indicators of the rate and direction of inventive and innovative activity. All possible indicators are considered and their suitability assessed. R&D expenditure data for the period 1960-1983 are subsequently presented as an input indicator. Intermediate output indicators are treated in a similar way, and patent data are identified as a readily available and useful source. The advantages and disadvantages of using such data are considered. Using interview material, patenting policies for most of the U.K. companies are described, providing a background for a patent-based study. Sources of patent data are examined with an emphasis on computerised systems. A number of searches using a variety of sources are presented. Patent family size is examined as a possible indicator of an invention's relative importance.
The patenting activity of the companies over the period 1960-1983 is given and the variation between companies is noted. The relationship between patent data and the other indicators used is analysed using statistical methods, resulting in an apparent lack of correlation. An alternative approach, taking into account variations in company policy and phases in research activity, indicates a stronger relationship between patenting activity, R&D expenditure and NCE output over the period. The relationship is not apparent at an aggregated company level. Some evidence is presented for a relationship between phases of regulatory stringency and inventive and innovative activity, but the importance of other factors is emphasised.
Abstract:
The fluids used in hydraulic systems inevitably contain large numbers of small, solid particles, a phenomenon known as 'fluid contamination'. Particles enter a hydraulic system from the environment, and are generated within it by processes of wear. At the same time, particles are removed from the system fluid by sedimentation and in hydraulic filters. This thesis considers the problems caused by fluid contamination, as they affect a manufacturer of axial piston pumps. The specific project aim was to investigate methods of predicting or determining the effects of fluid contamination on this type of pump. The thesis starts with a theoretical analysis of the contaminated lubrication of a slipper-pad bearing. Statistical methods are used to develop a model of the blocking, by particles, of the control capillaries used in such bearings. The results obtained are compared to published, experimental data. Poor correlation between theory and practice suggests that more research is required in this area before such theoretical analysis can be used in industry. Accelerated wear tests have been developed in the U.S.A. in an attempt to predict pump life when operating on contaminated fluids. An analysis of such tests shows that reliability data can only be obtained from extensive test programmes. The value of contamination testing is suggested to be in determining failure modes, and in identifying those pump components which are susceptible to the effects of contamination. A suitable test is described, and the results of a series of tests on axial piston pumps are presented and discussed. The thesis concludes that pump reliability data can only be obtained from field experience. The level of confidence which can be placed in results from normal laboratory testing is shown to be too low for the data to be of real value. Recommendations are therefore given for the ways in which service data should be collected and analysed.
Abstract:
This work is concerned with the development of techniques for the evaluation of large-scale highway schemes, with particular reference to the assessment of their costs and benefits in the context of the current transport planning (T.P.P.) process. It has been carried out in close cooperation with West Midlands County Council, although its application and results are applicable elsewhere. The background to highway evaluation and its development in recent years has been described and the emergence of a number of deficiencies in current planning practice noted. One deficiency in particular stood out, that stemming from inadequate methods of scheme generation, and the research has concentrated upon improving this stage of appraisal, to ensure that subsequent stages of design, assessment and implementation are based upon a consistent and responsive foundation. Deficiencies of scheme evaluation were found to stem from inadequate development of appraisal methodologies suffering from difficulties of valuation, measurement and aggregation of the disparate variables that characterise highway evaluation. A failure to respond to local policy priorities was also noted. A 'problem' rather than 'goals' based approach to scheme generation was taken, as it represented the current and foreseeable resource allocation context more realistically. Techniques with potential for highway problem-based scheme generation that would work within a series of practical and theoretical constraints were reviewed, and multivariate analysis, classical factor analysis in particular, was selected because it offered a means of addressing the existing difficulties of valuation, measurement and aggregation. Computer programs were written to adapt classical factor analysis to the requirements of T.P.P. highway evaluation, using it to derive a limited number of factors which described the extensive quantity of highway problem data.
From this, a series of composite problem scores for 1979 were derived for a case study area of south Birmingham, based upon the factorial solutions, and used to assess highway sites in terms of local policy issues. The methodology was assessed in the light of its ability to describe highway problems in both aggregate and disaggregate terms, to guide scheme design, to coordinate with current scheme evaluation methods, and in general to improve upon current appraisal. The results were analysed both in subjective, 'common-sense' terms and using statistical methods to assess the changes in problem definition, distribution and priorities that emerged. Overall, the technique was found to improve upon current scheme generation methods in all respects, and in particular in overcoming the problems of valuation, measurement and aggregation without recourse to unsubstantiated and questionable assumptions. A number of deficiencies which remained have been outlined and a series of research priorities described which need to be reviewed in the light of current and future evaluation needs.
Abstract:
This hands-on, practical guide for ESL/EFL teachers and teacher educators outlines, for those who are new to doing action research, what it is and how it works. Straightforward and reader friendly, it introduces the concepts and offers a step-by-step guide to going through an action research process, including illustrations drawn widely from international contexts. Specifically, the text addresses:
• action research and how it differs from other forms of research
• the steps involved in developing an action research project
• ways of developing a research focus
• methods of data collection
• approaches to data analysis
• making sense of action research for further classroom action
Each chapter includes a variety of pedagogical activities:
• Pre-Reading questions ask readers to consider what they already know about the topic
• Reflection Points invite readers to think about/discuss what they have read
• Action Points ask readers to carry out action-research tasks based on what they have read
• Classroom Voices illustrate aspects of action research from teachers internationally
• Summary Points provide a synopsis of the main points in the chapter
Abstract:
Objective - This study investigated and compared the prevalence of microalbuminuria and overt proteinuria and their determinants in a cohort of UK resident patients of white European or south Asian ethnicity with type 2 diabetes mellitus. Research design and methods - A total of 1978 patients, comprising 1486 of south Asian and 492 of white European ethnicity, in 25 general practices in Coventry and Birmingham inner city areas in England were studied in a cross-sectional study. Demographic and risk factor data were collected and presence of microalbuminuria and overt proteinuria assessed. Main outcome measures - Prevalences of microalbuminuria and overt proteinuria. Results - Urinary albumin:creatinine measurements were available for 1852 (94%) patients. The south Asian group had a lower prevalence of microalbuminuria, 19% vs. 23%, and a higher prevalence of overt proteinuria, 8% vs. 3%; χ² = 15.85, 2 df, P = 0.0004. In multiple logistic regression models, adjusted for confounding factors, significantly increased risk for the south Asian vs. white European patients for overt proteinuria was shown; OR (95% CI) 2.17 (1.05, 4.49), P = 0.0365. For microalbuminuria, an interaction effect for ethnicity and duration of diabetes suggested that risk for south Asian patients was lower in early years following diagnosis; ORs for SA vs. WH at durations 0 and 1 year were 0.56 (0.37, 0.86) and 0.59 (0.39, 0.89) respectively. After 20 years' duration, OR = 1.40 (0.63, 3.08). Limitations - Comparability of ethnicity defined groups; statistical methods controlled for differences between groups, but residual confounding may remain. Analyses are based on a single measure of albumin:creatinine ratio. Conclusions - There were significant differences between ethnicity groups in risk factor profiles and microalbuminuria and overt proteinuria outcomes.
Whilst south Asian patients had no excess risk of microalbuminuria, the risk of overt proteinuria was elevated significantly, which might be explained by faster progression of renal dysfunction in patients of south Asian ethnicity.
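The χ² comparison reported above is a standard test of independence on a contingency table. As a sketch of the computation only, with made-up counts (not the study's data): rows for two patient groups, columns for normal / microalbuminuria / overt proteinuria:

```python
# Hypothetical 2x3 contingency table of patient counts; the real study
# compared south Asian vs. white European groups across these categories.
observed = [
    [300, 76, 32],   # group A: normal / microalbuminuria / overt proteinuria
    [110, 34, 5],    # group B
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Pearson chi-squared: sum of (observed - expected)^2 / expected, where
# expected counts assume independence of row and column classifications.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-squared = {chi2:.2f} on {df} df")
```

A 2x3 table gives (2-1)(3-1) = 2 degrees of freedom, matching the "2 df" quoted in the abstract; the statistic is then referred to the χ² distribution for a P-value.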
Abstract:
A major challenge in text mining for biomedicine is automatically extracting protein-protein interactions from the vast amount of biomedical literature. We have constructed an information extraction system based on the Hidden Vector State (HVS) model for protein-protein interactions. The HVS model can be trained using only lightly annotated data whilst simultaneously retaining sufficient ability to capture the hierarchical structure. When applied to extracting protein-protein interactions, we found that it performed better than other established statistical methods, achieving an F-score of 61.5% with balanced recall and precision values. Moreover, the statistical nature of the pure data-driven HVS model makes it intrinsically robust, and it can be easily adapted to other domains.
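The reported figure is consistent with the definition of the F-score as the harmonic mean of precision and recall: when the two are balanced, the F-score equals their common value. A two-line check (0.615 echoes the figure quoted above; the unbalanced values in the second call are arbitrary):

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f"{f_score(0.615, 0.615):.3f}")  # balanced case: equals 0.615
print(f"{f_score(0.9, 0.4):.3f}")      # harmonic mean penalises imbalance
```

Because the harmonic mean is dominated by the smaller of the two values, a balanced 61.5%/61.5% system scores higher than one trading recall for precision around the same average.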
Abstract:
Richard Armstrong was educated at King’s College London (1968-1971) and subsequently at St. Catherine’s College Oxford (1972-1976). His early research involved the application of statistical methods to problems in botany and ecology. For the last 34 years, he has been a lecturer in Botany, Microbiology, Ecology, Neuroscience, and Optometry at the University of Aston. His current research interests include the application of quantitative methods to the study of neuropathology of neurodegenerative diseases with special reference to vision and the visual system.
Abstract:
The research presented in this thesis investigates the nature of the relationship between the development of the Knowledge-Based Economy (KBE) and Structural Funds (SF) in European regions. A particular focus is placed on the West Midlands (UK) and Silesia (Poland). The time-frame taken into account in this research is the years 1999 to 2009. This is methodologically addressed by firstly establishing a new way of calculating the General Index of the KBE for all of the EU regions; secondly, applying a number of statistical methods to measure the influence of the Funds on the changes in the regional KBE over time; and finally, by conducting a series of semi-structured stakeholder interviews in the two key case-study regions: the West Midlands and Silesia. The three main findings of the thesis are: first, over the examined time-frame, the values of the KBE General Index increased in over 66% of the EU regions; furthermore, the number of the "new" EU regions in which the KBE increased over time is far higher than in the "old" EU. Second, any impact of Structural Funds on the regional KBE occurs only in a minority of the European regions, and any form of functional dependency between the two can be observed in only 30% of the regions. Third, although the pattern of development of the regional KBE and the correlation coefficients differ in the cases of Silesia and the West Midlands, the analysis of variance carried out yields identical results for both regions. Furthermore, the results of the qualitative analysis show similarities in the approach towards the Structural Funds in the two key case-study regions.
Abstract:
Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods that are used to assign data with similar properties to several smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the variance between variables and groups them into a few uncorrelated groups, or principal components (PCs), that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use an example of human leukocyte antigen (HLA) supertype classification to demonstrate the usage of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that the methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
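The merge-similar-clusters step described for hierarchical clustering can be sketched in pure Python as single-linkage agglomeration on one-dimensional toy values; real HLA analyses in GOLPE, Sybyl, SIMCA, statistiXL, or R operate on high-dimensional descriptors, but the merging logic is the same:

```python
def single_linkage(points, n_clusters):
    """Agglomerative clustering of 1-D values: repeatedly merge the two
    closest clusters (closest-member distance) until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest members.
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

data = [1.0, 1.2, 1.1, 5.0, 5.3, 9.8, 10.1]  # three well-separated groups
print(single_linkage(data, 3))
```

Recording the distance at each merge, rather than stopping at a fixed count, yields the dendrogram from which a cut height is usually chosen.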
Abstract:
The development of new, high-quality, health-supporting foods and the optimization of food-technology processes today require the application of statistical methods of experimental design. The principles and steps of statistical planning and evaluation of experiments are explained. The application of a simplex-centroid mixture design is demonstrated using the example of the development of a gluten-free rusk (zwieback) enriched with roughage compounds. The results are illustrated with various graphics.
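A simplex-centroid design for q mixture components consists of the 2^q − 1 blends in which each chosen subset of components appears in equal proportions: the pure components, the 50/50 binary blends, and so on up to the overall centroid. A sketch for q = 3 (the mapping to actual rusk ingredients is hypothetical):

```python
from itertools import combinations
from fractions import Fraction

def simplex_centroid(q):
    """All 2**q - 1 design points of a simplex-centroid mixture design:
    every non-empty subset of components blended in equal proportions."""
    points = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            point = [Fraction(0)] * q
            for idx in subset:
                point[idx] = Fraction(1, k)
            points.append(point)
    return points

# q = 3 components, e.g. flour / starch / roughage fractions (illustrative).
for p in simplex_centroid(3):
    print([str(x) for x in p])
```

Each point sums to 1, as mixture proportions must; the design supports fitting the Scheffé-type polynomial models commonly used to evaluate mixture experiments.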
Abstract:
In Taiwan, college freshmen are recruited from graduates of both senior high schools and senior vocational schools. The Ministry of Education (MOE) of the Republic of China prescribes the standards of curriculum and equipment for schools at all levels and categories. A considerably different curriculum arrangement exists for senior high schools and vocational high schools in Taiwan at the present time. The present study used a causal-comparative research design to identify the influences of different post-secondary educational backgrounds on the specialized-course performance of college business majors. The students involved in this study were limited to students of four business-related departments at Tamsui Oxford University College in Taiwan. Students were assigned to comparison groups based on their post-secondary educational background as senior high school graduates or commercial high school graduates. The analysis of this study included a comparison of students' performance in lower-level courses and a comparison of students' performance in financial management. The analysis also considered the relationship between the students' performance in financial management and its related prerequisite courses. The Kolb Learning Style Inventory (LSI) survey was administered to categorize subjects' learning styles and to compare the learning styles between the two groups in this study. The applied statistical methods included the t-test, correlation, multiple regression, and chi-square. The findings of this study indicated that there were significant differences between the commercial high school graduates and the senior high school graduates in academic performance in specialized courses but not in general courses. There were no significant differences in learning styles between the two groups. These findings lead to the conclusion that business majors' academic performance in specialized courses was influenced by their post-secondary educational background.
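Of the methods listed, the t-test comparing the two groups' course scores is the most compact to sketch. The scores below are invented, and the Welch (unequal-variance) form shown is one common choice, not necessarily the variant used in the study:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal group variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# Hypothetical specialized-course scores for the two comparison groups.
commercial = [82, 85, 78, 90, 88, 84]   # commercial high school graduates
senior = [75, 70, 72, 78, 74, 71]       # senior high school graduates
print(f"t = {welch_t(commercial, senior):.2f}")
```

The statistic is then referred to a t distribution (with Welch-Satterthwaite degrees of freedom) to judge whether the group difference is significant.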
Abstract:
Efforts that are underway to rehabilitate the Florida Bay ecosystem to a more natural state are best guided by a comprehensive understanding of the natural versus human-induced variability that has existed within the ecosystem. Benthic foraminifera, which are well-known paleoenvironmental indicators, were identified in 203 sediment samples from six sediment cores taken from Florida Bay, and analyzed to understand the environmental variability through anthropogenically unaltered and altered periods. In this research, taxa serving as indicators of (1) seagrass abundance (which is correlated with water quality), (2) salinity, and (3) general habitat change, were studied in detail over the past 120 years, and more generally over the past ~4000 years. Historical seagrass abundance was reconstructed with the proportions of species that prefer living attached to seagrass blades over other substrates. Historical salinity trends were determined by analyzing brackish versus marine faunas, which were defined based on species' salinity preferences. Statistical methods, including cluster analysis, discriminant analysis, analysis of variance, and Fisher's α, were used to analyze trends in the data. The changes in seagrass abundance and salinity over the last ~120 years are attributed to anthropogenic activities, such as construction of the Flagler Railroad from the mainland to the Florida Keys, the Tamiami Trail that stretches from the east to west coast, and canals and levees in south Florida, as well as natural events such as droughts and increased rainfall from hurricanes. Longer-term changes (over ~4000 years) in seagrass abundance and salinity are mostly related to sea-level changes. Since seawater entered the Florida Bay area ~4000 years ago, only one probable sea-level drop, occurring around ~3000 years ago, was identified.
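Fisher's α, one of the statistics listed, is defined implicitly by S = α ln(1 + N/α) for S observed taxa among N counted individuals; since the right-hand side increases monotonically in α, a simple bisection recovers it. The counts below are illustrative, not the core data:

```python
import math

def fishers_alpha(S, N, lo=1e-6, hi=1e6):
    """Solve S = alpha * ln(1 + N/alpha) for alpha by bisection
    (the left side of the bracket undershoots S, the right overshoots)."""
    f = lambda a: a * math.log(1 + N / a)
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < S:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. 35 foraminiferal taxa identified among 500 counted tests.
alpha = fishers_alpha(S=35, N=500)
print(f"Fisher's alpha = {alpha:.2f}")
```

Unlike raw species counts, the index corrects for sample size, which is why it is favoured for comparing diversity between sediment samples of unequal abundance.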