14 results for sampling methods

in Aston University Research Archive


Relevance: 100.00%

Abstract:

This study is concerned with labour productivity in traditional house building in Scotland. Productivity is a measure of the effective use of resources and provides vital benefits that can be combined in a number of ways. The introduction gives the background to two Scottish house building sites (Blantyre and Greenfield) that were surveyed by the Building Research Establishment (BRE) activity sampling method to provide the data for the study. The study had two main objectives: (1) summary data analysis of average manhours per house across all the houses on the site, and (2) detailed data analysis of average manhours for each house block on the site. The introduction also provides a literature review related to the objectives. The method is outlined in Chapter 2, the sites are discussed in Chapter 3, and Chapter 4 covers the application of the method on each site and a method development made in the study. The summary data analysis (Chapter 5) compares Blantyre and Greenfield with two previous BRE surveys in England. The main detailed data analysis consisted of three forms (Chapters 6, 7 and 8), each applied to a set of operations. The three forms of analysis compared variations in average manhours per house for each house block on the site with: (1) block construction order, (2) the average number of separate visits per house made by operatives to each block to complete an operation, and (3) the average number of different operatives per house employed on an operation in each block. Three miscellaneous items of detailed data analysis are discussed in Chapter 9. The conclusions to the whole study state that considerable variations in manhours for repeated operations were discovered, that the numbers of visits by operatives to complete operations were large, and that the number of different operatives employed in some operations was a factor related to productivity.
A critique of the activity sampling method suggests that the data produced are reliable in summary form and can give a good context for more detailed data collection. For future work, this could take the form of selected operations, within the context of an activity sampling survey, that would be intensively surveyed by other methods.
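
For readers unfamiliar with activity sampling, the core estimator can be sketched as follows: the proportion of randomly timed observations on which operatives were engaged on an operation estimates that operation's share of total manhours, with a binomial standard error. The figures below are illustrative, not from the surveys:

```python
import math

def activity_sampling_estimate(hits, total_obs, total_manhours):
    """Estimate the manhours absorbed by one operation from activity sampling.

    hits           -- observations in which operatives were on the operation
    total_obs      -- total randomly timed observations taken
    total_manhours -- total manhours worked over the survey period
    Returns (estimated manhours, standard error of that estimate).
    """
    p = hits / total_obs                        # observed proportion
    se_p = math.sqrt(p * (1 - p) / total_obs)   # binomial standard error
    return p * total_manhours, se_p * total_manhours

# Illustrative: 120 of 1000 observations on one operation, 5000 manhours in total.
est, se = activity_sampling_estimate(120, 1000, 5000)
print(round(est), round(se, 1))  # about 600 manhours, standard error about 51
```

The standard error shrinks with the square root of the number of observations, which is why the critique finds the data reliable in summary form but weaker at fine detail.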

Relevance: 70.00%

Abstract:

The use of quantitative methods has become increasingly important in the study of neurodegenerative disease. Disorders such as Alzheimer's disease (AD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This article reviews the advantages and limitations of the different methods of quantifying the abundance of pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semiquantitative scores. The major sampling methods by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are also described. In addition, the data analysis methods commonly used to analyse quantitative data in neuropathology, including analyses of variance (ANOVA) and principal components analysis (PCA), are discussed. These methods are illustrated with reference to particular problems in the pathological diagnosis of AD and dementia with Lewy bodies (DLB).
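
As a concrete illustration of the plot/quadrat sampling described above, lesion density can be estimated as the mean count per sample field divided by the field area. A minimal sketch (the counts and field area are illustrative, not from the article):

```python
def quadrat_density(counts, field_area_mm2):
    """Estimate lesion density (per mm^2) from plot/quadrat counts:
    mean count per sample field divided by the field area."""
    return (sum(counts) / len(counts)) / field_area_mm2

# Illustrative counts of lesions in ten 0.25 mm^2 sample fields.
counts = [3, 5, 2, 4, 6, 3, 4, 5, 2, 6]
print(quadrat_density(counts, 0.25))  # 16.0 lesions per mm^2
```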

Relevance: 70.00%

Abstract:

The last decade has seen a considerable increase in the application of quantitative methods in the study of histological sections of brain tissue and especially in the study of neurodegenerative disease. These disorders are characterised by the deposition and aggregation of abnormal or misfolded proteins in the form of extracellular protein deposits such as senile plaques (SP) and intracellular inclusions such as neurofibrillary tangles (NFT). Quantification of brain lesions and studying the relationships between lesions and normal anatomical features of the brain, including neurons, glial cells, and blood vessels, has become an important method of elucidating disease pathogenesis. This review describes methods for quantifying the abundance of a histological feature such as density, frequency, and 'load' and the sampling methods by which quantitative measures can be obtained including plot/quadrat sampling, transect sampling, and the point-quarter method. In addition, methods for determining the spatial pattern of a histological feature, i.e., whether the feature is distributed at random, regularly, or is aggregated into clusters, are described. These methods include the use of the Poisson and binomial distributions, pattern analysis by regression, Fourier analysis, and methods based on mapped point patterns. Finally, the statistical methods available for studying the degree of spatial correlation between pathological lesions and neurons, glial cells, and blood vessels are described.
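
One of the simplest Poisson-based tests of spatial pattern mentioned above is the variance-to-mean ratio (index of dispersion) of quadrat counts: values near 1 are consistent with a random pattern, values below 1 with regularity, and values above 1 with aggregation into clusters. A minimal sketch with illustrative counts:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of quadrat counts.

    ~1 suggests a random (Poisson) pattern, <1 a regular pattern,
    >1 aggregation of the feature into clusters.
    """
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

print(dispersion_index([2, 6, 2, 6, 4]))   # 1.0: consistent with a random pattern
print(dispersion_index([0, 0, 12, 0, 8]))  # 8.0: strongly clustered
```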

Relevance: 60.00%

Abstract:

This article reviews methods for quantifying the abundance of histological features in thin tissue sections of brain such as neurons, glia, blood vessels, and pathological lesions. The sampling methods by which quantitative measures can be obtained are described. In addition, methods are described for determining the spatial pattern of an object and for measuring the degree of spatial correlation between two or more histological features.

Relevance: 60.00%

Abstract:

Background. Non-attendance at paediatric hospital outpatient appointments poses potential risks to children's health and welfare. Prevention and management of missed appointments depends on the perceptions of clinicians and decision makers from both primary and secondary care, including general practitioners (GPs) who are integral to non-attendance follow-up. Objectives. To examine the views of clinical, managerial and executive health care staff regarding occurrence and management of non-attendance at general paediatric outpatient clinics. Methods. A qualitative study using individual semi-structured interviews was carried out at three English Primary Care Trusts and a nearby children's hospital. Interviews were conducted with 37 staff, including GPs, hospital doctors, other health care professionals, managers, executives and commissioners. Participants were recruited through purposive and 'snowball' sampling methods. Data were analysed following a thematic framework approach. Results. GPs focused on situational difficulties for families, while hospital-based staff emphasized the influence of parents' beliefs on attendance. Managers, executives and commissioners presented a broad overview of both factors, but with less detailed views. All groups discussed sociodemographic factors, with non-attendance thought to be more likely in 'chaotic families'. Hospital interviewees emphasized child protection issues and the need for thorough follow-up of missed appointments. However, GPs were reluctant to interfere with parental responsibilities. Conclusion. Parental motivation and practical and social barriers should be considered. Responsibilities regarding missed appointments are not clear across health care sectors, but GPs are uniquely placed to address non-attendance issues and are central to child safeguarding. Primary care policies and strategies could be introduced to reduce non-attendance and ensure children receive the care they require. © The Author 2013.

Relevance: 60.00%

Abstract:

Based upon unique survey data collected using respondent driven sampling methods, we investigate whether there is a gender pay gap among social entrepreneurs in the UK. We find that women as social entrepreneurs earn 29% less than their male colleagues, above the average UK gender pay gap of 19%. We estimate the adjusted pay gap to be about 23% after controlling for a range of demographic, human capital and job characteristics, as well as personal preferences and values. These differences are hard to explain by discrimination since these CEOs set their own pay. Income may not be the only aim in an entrepreneurial career, so we also look at job satisfaction to proxy for non-monetary returns. We find female social entrepreneurs to be more satisfied with their job as a CEO of a social enterprise than their male counterparts. This result holds even when we control for the salary generated through the social enterprise. Our results extend research in labour economics on the gender pay gap as well as entrepreneurship research on women’s entrepreneurship to the novel context of social enterprise. They provide the first evidence for a “contented female social entrepreneur” paradox.

Relevance: 40.00%

Abstract:

Background. There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods. Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results. 399 (57%) of children aged 6–7 years and 669 (60%) of children aged 12–13 years participated. School participation rates did not vary statistically significantly with the size of the school, whether the school was urban or rural, or whether it was in a deprived/non-deprived area. The gender balance, ethnicity and type of schooling of participants are reflective of the Northern Ireland population. Conclusions. The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.

Relevance: 40.00%

Abstract:

Purpose - To evaluate adherence to prescribed antiepileptic drugs (AEDs) in children with epilepsy using a combination of adherence-assessment methods. Methods - A total of 100 children with epilepsy (≤17 years old) were recruited. Medication adherence was determined via parental and child self-reporting (≥9 years old), medication refill data from general practitioner (GP) prescribing records, and via AED concentrations in dried blood spot (DBS) samples obtained from children at the clinic and via self- or parental-led sampling in children's own homes. The latter were assessed using population pharmacokinetic modeling. Patients were deemed nonadherent if any of these measures were indicative of nonadherence with the prescribed treatment. In addition, beliefs about medicines, parental confidence in seizure management, and the presence of depressed mood in parents were evaluated to examine their association with nonadherence in the participating children. Key Findings - The overall rate of nonadherence in children with epilepsy was 33%. Logistic regression analysis indicated that children with generalized epilepsy (vs. focal epilepsy) were more likely (odds ratio [OR] 4.7, 95% confidence interval [CI] 1.37–15.81) to be classified as nonadherent as were children whose parents have depressed mood (OR 3.6, 95% CI 1.16–11.41). Significance - This is the first study to apply the novel methodology of determining adherence via AED concentrations in clinic and home DBS samples. The present findings show that the latter, with further development, could be a useful approach to adherence assessment when combined with other measures including parent and child self-reporting. Seizure type and parental depressed mood were strongly predictive of nonadherence.

Relevance: 30.00%

Abstract:

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
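
As a minimal illustration of the mean field idea, consider the textbook ferromagnetic Ising example (a standard statistical-physics case, not a worked example from the book): each spin feels only the average magnetization m of its neighbours, giving the self-consistency equation m = tanh(βJzm), which can be solved by fixed-point iteration:

```python
import math

def mean_field_magnetization(beta_J_z, tol=1e-10):
    """Solve the mean field self-consistency equation m = tanh(beta*J*z*m)
    for the ferromagnetic Ising model by fixed-point iteration."""
    m = 0.5  # nonzero start, so the trivial m = 0 root is reached only below the transition
    for _ in range(100000):
        m_new = math.tanh(beta_J_z * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

print(mean_field_magnetization(0.5))  # beta*J*z < 1: magnetization decays to ~0
print(mean_field_magnetization(2.0))  # beta*J*z > 1: nonzero spontaneous magnetization
```

Replacing the fluctuating environment of each variable by its mean is exactly the approximation that the variational and TAP approaches described above then systematically improve.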

Relevance: 30.00%

Abstract:

In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks as to the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
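
A double well diffusion of the kind used as the benchmark can be simulated in outline with a simple Euler-Maruyama discretisation. The drift 4x(1 − x²) and the parameter values below are a common parameterisation assumed for illustration; the abstract does not specify the exact model:

```python
import random

def simulate_double_well(x0=0.0, sigma=0.5, dt=0.01, n_steps=1000, seed=1):
    """Euler-Maruyama path of the double well SDE
    dx = 4*x*(1 - x**2) dt + sigma dW, with stable states near x = +/-1."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        drift = 4.0 * x * (1.0 - x * x)
        x += drift * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        path.append(x)
    return path

path = simulate_double_well()
print(len(path))  # 1001 points: the initial state plus 1000 steps
```

Paths like these, observed at a few noisy time points, are the inputs against which a smoother (variational or Hybrid Monte Carlo) would reconstruct the posterior mean path and variance.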

Relevance: 30.00%

Abstract:

A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow-path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow-path. The overall volumetric efficiency of the clogged system was 71% and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
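
The constant head permeameter measurement recreated by the in-situ device rests on Darcy's law, K = QL/(AΔh). A minimal sketch with illustrative numbers (not measurements from the study):

```python
def hydraulic_conductivity(Q, L, A, dh):
    """Constant head permeameter via Darcy's law: K = Q*L / (A*dh).

    Q  -- steady flow rate (m^3/day)
    L  -- flow path length (m)
    A  -- cross-sectional area (m^2)
    dh -- constant head difference (m)
    Returns K in m/day.
    """
    return Q * L / (A * dh)

# Illustrative: 0.02 m^3/day through a 0.1 m path of 0.005 m^2 area under 0.2 m head.
print(round(hydraulic_conductivity(0.02, 0.1, 0.005, 0.2), 3))  # 2.0 m/day
```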

Relevance: 30.00%

Abstract:

Objective: To characterize the population pharmacokinetics of canrenone following administration of potassium canrenoate (K-canrenoate) in paediatric patients. Methods: Data were collected prospectively from 37 paediatric patients (median weight 2.9 kg, age range 2 days–0.85 years) who received intravenous K-canrenoate for management of retained fluids, for example in heart failure and chronic lung disease. Dried blood spot (DBS) samples (n = 213) from these were analysed for canrenone content and the data subjected to pharmacokinetic analysis using nonlinear mixed-effects modelling. Another group of patients (n = 16) who had 71 matching plasma and DBS samples was analysed separately to compare canrenone pharmacokinetic parameters obtained using the two different matrices. Results: A one-compartment model best described the DBS data. Significant covariates were weight, postmenstrual age (PMA) and gestational age (GA). The final population models for canrenone clearance (CL/F) and volume of distribution (V/F) in DBS were CL/F (l/h) = 12.86 × (WT/70)^0.75 × e^(0.066 × (PMA − 40)) and V/F (l) = 603.30 × (WT/70) × (GA/40)^1.89, where weight (WT) is in kilograms. The corresponding values of CL/F and V/F in a patient with a median weight of 2.9 kg are 1.11 l/h and 20.48 l, respectively. Estimated half-life of canrenone based on DBS concentrations was similar to that based on matched plasma concentrations (19.99 and 19.37 h, respectively, in a 70 kg patient). Conclusion: The range of estimated CL/F in DBS for the study population was 0.12–9.62 l/h; hence, bodyweight-based dosage adjustment of K-canrenoate appears necessary. However, a dosing scheme that takes into consideration both weight and age (PMA/gestational age) of paediatric patients seems more appropriate.
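
The final covariate model can be coded directly. The sketch below assumes PMA and GA are expressed in weeks (suggested by the reference values of 40) and evaluates a hypothetical term infant; it is not the authors' estimation code, and the chosen covariate values are illustrative:

```python
import math

def canrenone_cl_f(wt_kg, pma_weeks):
    """Population CL/F (l/h): 12.86 * (WT/70)**0.75 * exp(0.066 * (PMA - 40))."""
    return 12.86 * (wt_kg / 70.0) ** 0.75 * math.exp(0.066 * (pma_weeks - 40.0))

def canrenone_v_f(wt_kg, ga_weeks):
    """Population V/F (l): 603.30 * (WT/70) * (GA/40)**1.89."""
    return 603.30 * (wt_kg / 70.0) * (ga_weeks / 40.0) ** 1.89

# Hypothetical term infant: 2.9 kg, PMA 40 weeks, GA 40 weeks (assumed covariates).
cl = canrenone_cl_f(2.9, 40.0)
v = canrenone_v_f(2.9, 40.0)
half_life_h = math.log(2) * v / cl  # elimination half-life from CL and V
print(round(cl, 2), round(v, 2), round(half_life_h, 1))
```

The allometric weight exponent of 0.75 on clearance is why a fixed per-kilogram dose does not scale cleanly across this weight range, supporting the conclusion that both weight and age should enter the dosing scheme.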

Relevance: 30.00%

Abstract:

One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface does not directly indicate the location of the active neuronal assemblies. This is the expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measures available. A given electric potential distribution recorded at the scalp can be explained by the activity of infinitely many different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths with known geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. The head models vary from the computationally simpler spherical models (three or four concentric spheres) to realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions of the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004).
The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided spatial distribution is uniform. The second factor is related to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
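
The ambiguity of the inverse problem described above can be made concrete with a toy linear lead-field model y = Lx: with more sources than sensors, L has a null space, and adding any null-space component to the sources leaves the scalp measurements unchanged. The numbers below are illustrative, not a real head model:

```python
def scalp_potentials(L, x):
    """Forward problem for a linear lead field model: y = L @ x."""
    return [sum(L[i][j] * x[j] for j in range(len(x))) for i in range(len(L))]

# Toy lead field: 2 sensors, 3 sources, so the inverse problem is underdetermined.
L = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]

x1 = [2.0, 3.0, 0.0]
x2 = [1.0, 2.0, 1.0]  # x1 plus the null-space vector (-1, -1, 1)

print(scalp_potentials(L, x1))  # [2.0, 3.0]
print(scalp_potentials(L, x2))  # [2.0, 3.0] -- identical scalp measurements
```

The forward map has a unique output for each source configuration, but two different configurations can produce the same measurements; source imaging resolves this only by adding the statistical, anatomical or functional assumptions mentioned above.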

Relevance: 30.00%

Abstract:

The accuracy of a map is dependent on the reference dataset used in its construction. Classification analyses used in thematic mapping can, for example, be sensitive to a range of sampling and data quality concerns. With particular focus on the latter, the effects of reference data quality on land cover classifications from airborne thematic mapper data are explored. Variations in sampling intensity and effort are highlighted in a dataset that is widely used in mapping and modelling studies; these may need accounting for in analyses. The quality of the labelling in the reference dataset was also a key variable influencing mapping accuracy. Accuracy varied with the amount and nature of mislabelled training cases, and the effects varied between classifiers. The largest impacts on accuracy occurred when mislabelling involved confusion between similar classes. Accuracy was also typically negatively related to the magnitude of mislabelled cases, and the support vector machine (SVM), which has been claimed to be relatively insensitive to training data error, was the most sensitive of the classifiers investigated, with overall classification accuracy declining by 8% (significant at the 95% level of confidence) with the use of a training set containing 20% mislabelled cases.
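
The sensitivity of a classifier to mislabelled training cases can be illustrated with a toy experiment. The sketch below uses a 1-nearest-neighbour classifier on synthetic 1-D data as a stand-in (not the authors' SVM setup or the airborne dataset): labels are flipped in a growing fraction of the training set and hold-out accuracy on clean data is recorded:

```python
import random

def noisy_training_accuracy(noise_frac, seed=42):
    """Train a 1-nearest-neighbour classifier on two well separated
    1-D Gaussian classes with a fraction of flipped training labels,
    then score it on a clean hold-out set."""
    rng = random.Random(seed)
    train = [(rng.gauss(0, 1), 0) for _ in range(200)] + \
            [(rng.gauss(3, 1), 1) for _ in range(200)]
    # Flip the labels of a random subset of training cases.
    flipped = set(rng.sample(range(len(train)), int(noise_frac * len(train))))
    train = [(x, 1 - y) if i in flipped else (x, y) for i, (x, y) in enumerate(train)]
    # Clean hold-out set from the same two classes.
    test = [(rng.gauss(0, 1), 0) for _ in range(200)] + \
           [(rng.gauss(3, 1), 1) for _ in range(200)]
    correct = sum(min(train, key=lambda t: abs(t[0] - x))[1] == y for x, y in test)
    return correct / len(test)

for frac in (0.0, 0.1, 0.2):
    print(frac, noisy_training_accuracy(frac))
```

Nearest-neighbour rules degrade roughly in proportion to the flipped fraction; the article's point is that real classifiers differ markedly in this sensitivity, and that flips between similar classes hurt most.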