935 results for Scale Factor
Abstract:
BACKGROUND: Gilles de la Tourette syndrome (GTS) is a chronic childhood-onset neuropsychiatric disorder with a significant impact on patients' health-related quality of life (HR-QOL). Cavanna et al. (Neurology 2008; 71: 1410-1416) developed and validated the first disease-specific HR-QOL assessment tool for adults with GTS (Gilles de la Tourette Syndrome-Quality of Life Scale, GTS-QOL). This paper presents the translation, adaptation and validation of the GTS-QOL for young Italian patients with GTS. METHODS: A three-stage process involving 75 patients with GTS recruited through three Departments of Child and Adolescent Neuropsychiatry in Italy led to the development of a 27-item instrument (Gilles de la Tourette Syndrome-Quality of Life Scale in children and adolescents, C&A-GTS-QOL) for the assessment of HR-QOL through a clinician-rated interview for 6-12 year-olds and a self-report questionnaire for 13-18 year-olds. RESULTS: The C&A-GTS-QOL demonstrated satisfactory scaling assumptions and acceptability. Internal consistency reliability was high (Cronbach's alpha > 0.7) and validity was supported by interscale correlations (range 0.4-0.7), principal-component factor analysis and correlations with other rating scales and clinical variables. CONCLUSIONS: The present version of the C&A-GTS-QOL is the first disease-specific HR-QOL tool for young Italian patients with GTS, satisfying criteria for acceptability, reliability and validity.
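The internal-consistency criterion cited above (Cronbach's alpha > 0.7) can be computed directly from an items-by-respondents score matrix. Below is a minimal Python sketch of the standard formula; the matrix dimensions echo the 27-item, 75-patient design, but the scores themselves are random placeholders, not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: 75 respondents answering 27 items scored 0-4.
rng = np.random.default_rng(0)
scores = rng.integers(0, 5, size=(75, 27))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```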
Abstract:
This paper analyses the efficiency of Malaysian commercial banks between 1996 and 2002 and finds that, while the East Asian financial crisis caused a short-term increase in efficiency in 1998 primarily due to cost-cutting, increases in non-performing loans after the crisis caused a more sustained decline in bank efficiency. It also finds that mergers, fully Islamic banks, and conventional banks operating Islamic banking windows are all associated with lower efficiency. The paper's estimates suggest mild decreasing returns to scale and an average productivity change of 2.37%, primarily attributable to technical change, which has nonetheless declined over time. Finally, while Islamic banks have been moderately successful in developing new products and technologies, the results suggest that the potential for Islamic banks to overcome their relative inefficiency is limited.
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes in participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in the United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: The SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5-point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission, recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even when there was more room for improvement (such as in quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time or between control and SPI1 hospitals in errors or rates of adverse events in patients in medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk-adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
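As a rough arithmetic check on the respiratory-rate result, the unadjusted "difference in difference" odds ratio can be reconstructed from the before/after percentages reported above; the published 2.1 comes from the authors' adjusted model, so the crude figure differs somewhat.

```python
# Crude ratio of odds ratios for recording of respiratory rate in the second
# six hours after admission, using the percentages reported in the abstract.
def odds(p: float) -> float:
    return p / (1 - p)

or_spi1 = odds(0.78) / odds(0.37)      # SPI1 hospitals: 37% -> 78%
or_control = odds(0.69) / odds(0.40)   # control hospitals: 40% -> 69%
print(round(or_spi1 / or_control, 2))  # about 1.8 unadjusted, versus 2.1 after risk adjustment
```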
Abstract:
Defining 'effectiveness' in the context of community mental health teams (CMHTs) has become increasingly difficult under the current pattern of provision required in National Health Service mental health services in England. The aim of this study was to establish the characteristics of multi-professional team working effectiveness in adult CMHTs to develop a new measure of CMHT effectiveness. The study was conducted between May and November 2010 and comprised two stages. Stage 1 used a formative evaluative approach based on the Productivity Measurement and Enhancement System to develop the scale with multiple stakeholder groups over a series of qualitative workshops held in various locations across England. Stage 2 analysed responses from a cross-sectional survey of 1500 members in 135 CMHTs from 11 Mental Health Trusts in England to determine the scale's psychometric properties. Based on an analysis of its structural validity and reliability, the resultant 20-item scale demonstrated good psychometric properties and captured one overall latent factor of CMHT effectiveness comprising seven dimensions: improved service user well-being, creative problem-solving, continuous care, inter-team working, respect between professionals, engagement with carers and therapeutic relationships with service users. The scale will be of significant value to CMHTs and healthcare commissioners both nationally and internationally for monitoring, evaluating and improving team functioning in practice.
Abstract:
Background: Food allergy is often a life-long condition that requires constant vigilance in order to prevent accidental exposure and avoid potentially life-threatening symptoms. Parents’ confidence in managing their child’s food allergy may relate to the poor quality of life, anxiety and worry reported by parents of food-allergic children. Objective: The aim of the current study was to develop and validate the first scale to measure parental confidence (self-efficacy) in managing food allergy in their child. Methods: The Food Allergy Self-Efficacy Scale for Parents (FASE-P) was developed through interviews with 53 parents and consultation of the literature and of experts in the area. The FASE-P was then completed by 434 parents of food-allergic children from a general population sample, in addition to the General Self-Efficacy Scale (GSES), the Food Allergy Quality of Life Parental Burden Scale (FAQL-PB), the General Health Questionnaire (GHQ12) and the Food Allergy Impact Measure (FAIM). A total of 250 parents completed the re-test of the FASE-P. Results: Factor and reliability analysis resulted in a 21-item scale with 5 sub-scales. The overall scale and sub-scales have good to excellent internal consistency (α’s of 0.63-0.89) and the scale is stable over time. There were low to moderate significant correlations with the GSES, FAIM and GHQ12 and strong correlations with the FAQL-PB, with better parental confidence relating to better general self-efficacy, better quality of life and better mental health in the parent. Poorer self-efficacy was related to egg and milk allergy; self-efficacy was not related to severity of allergy. Conclusions and clinical relevance: The FASE-P is a reliable and valid scale for use with parents from a general population. Its application within clinical settings could aid provision of advice and improve targeted interventions by identifying areas where parents have less confidence in managing their child’s food allergy.
Abstract:
The Center for Epidemiologic Studies-Depression Scale (CES-D) is the most frequently used scale for measuring depressive symptomatology in caregiving research. The aim of this study is to test its construct structure and measurement equivalence between caregivers from two Spanish-speaking countries. Face-to-face interviews were carried out with 595 female dementia caregivers from Madrid, Spain, and from Coahuila, Mexico. The structure of the CES-D was analyzed using exploratory and confirmatory factor analysis (EFA and CFA, respectively). Measurement invariance across samples was analyzed by comparing a baseline model with a more restrictive model. Significant differences between means were found for 7 items. The results of the EFA clearly supported a four-factor solution. The CFA for the whole sample with the four factors revealed high and statistically significant loading coefficients for all items (except item number 4). When equality constraints were imposed to test for invariance between countries, the change in chi-square was significant, indicating that complete invariance could not be assumed. Significant between-countries differences were found for three of the four latent factor mean scores. Although the results provide general support for the original four-factor structure, caution should be exercised when reporting comparisons of depression scores between Spanish-speaking countries.
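The invariance test described above rests on a chi-square difference test between the baseline and equality-constrained CFA models. A minimal sketch of that comparison is shown below; the fit statistics are placeholders for illustration, not values from this study.

```python
from scipy.stats import chi2

def chi_square_difference(chi2_constrained: float, df_constrained: int,
                          chi2_baseline: float, df_baseline: int):
    """Chi-square difference test for nested CFA models (constrained vs. baseline)."""
    delta_chi2 = chi2_constrained - chi2_baseline
    delta_df = df_constrained - df_baseline
    p_value = chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p_value

# Placeholder fit statistics; a significant p-value argues against full invariance.
d_chi2, d_df, p = chi_square_difference(312.4, 116, 268.9, 98)
print(f"Delta chi2 = {d_chi2:.1f}, Delta df = {d_df}, p = {p:.4f}")
```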
Abstract:
The extant literature on workplace coaching is characterised by a lack of theoretical and empirical understanding regarding the effectiveness of coaching as a learning and development tool; the types of outcomes one can expect from coaching; the tools that can be used to measure coaching outcomes; the underlying processes that explain why and how coaching works; and the factors that may impact on coaching effectiveness. This thesis sought to address these substantial gaps in the literature with three linked studies. Firstly, a meta-analysis of workplace coaching effectiveness (k = 17), synthesising the existing research, was presented. A framework of coaching outcomes was developed and utilised to code the studies. Analysis indicated that coaching had positive effects on all outcomes. Next, the framework of outcomes was utilised as the deductive start-point for the development of a scale measuring perceived coaching effectiveness. Using a multi-stage approach (n = 201), the analysis indicated that perceived coaching effectiveness may be organised into a six-factor structure: career clarity; team performance; work well-being; performance; planning and organising; and personal effectiveness and adaptability. The final study was a longitudinal field experiment to test a theoretical model of individual differences and coaching effectiveness developed in this thesis. An organisational sample of 84 employees each participated in a coaching intervention, completed self-report surveys, and had their job performance rated by peers, direct reports and supervisors (a total of 352 employees provided data on participant performance). The results demonstrate that, compared to a control group, the coaching intervention generated a number of positive outcomes. The analysis indicated that coachees’ enthusiasm, intellect and orderliness influenced the impact of coaching on outcomes. Mediation analysis suggested that mastery goal orientation, performance goal orientation and approach motivation in the form of behavioural activation system (BAS) drive were significant mediators between personality and outcomes. Overall, the findings of this thesis make an original contribution to the understanding of the types of outcomes that can be expected from coaching, and the magnitude of impact coaching has on outcomes. The thesis also provides a tool for reliably measuring coaching effectiveness and a theoretical model to understand the influence of coachee individual differences on coaching outcomes.
Abstract:
Standard economic theory suggests that capital should flow from rich countries to poor countries. However, capital has predominantly flowed to rich countries. The three essays in this dissertation attempt to explain this phenomenon. The first two essays suggest theoretical explanations for why capital has not flowed to the poor countries. The third essay empirically tests the theoretical explanations. The first essay examines the effects of increasing returns to scale on international lending and borrowing with moral hazard. Introducing increasing returns in a two-country general equilibrium model yields possible multiple equilibria and helps explain the possibility of capital flows from a poor to a rich country. I find that a borrowing country may need to borrow sufficient amounts internationally to reach a minimum investment threshold in order to invest domestically. The second essay examines how a poor country may invest in sectors with low productivity because of sovereign risk, and how collateral differences across sectors may exacerbate the problem. I model sovereign borrowing with a two-sector economy: one sector with increasing returns to scale (IRS) and one sector with diminishing returns to scale (DRS). Countries with incomes below a threshold will only invest in the DRS sector, and countries with incomes above a threshold will invest mostly in the IRS sector. The results help explain the existence of a bimodal world income distribution. The third essay empirically tests the explanations for why capital has not flowed from the rich to the poor countries, with a focus on institutions and initial capital. I find that institutional variables are a very important factor, but in contrast to other studies, I show that institutions do not account for the Lucas Paradox. Evidence of increasing returns still exists, even when controlling for institutions and other variables. In addition, I find that the determinants of capital flows may depend on whether a country is rich or poor.
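For reference, the increasing- and diminishing-returns assumptions invoked in the first two essays have the standard textbook form, stated here generically (this is not the dissertation's specific production function): with inputs K and L,

\[
F(\lambda K, \lambda L) > \lambda F(K, L) \quad \text{for all } \lambda > 1 \quad \text{(increasing returns to scale)},
\]
\[
F(\lambda K, \lambda L) < \lambda F(K, L) \quad \text{for all } \lambda > 1 \quad \text{(diminishing returns to scale)}.
\]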
Abstract:
This research sought to determine the implications of a non-traded differentiated commodity produced with increasing returns to scale for the welfare of countries that allowed free international migration. We developed two- and three-country Ricardian models in which labor was the only factor of production. The countries traded freely in homogeneous goods produced with constant returns to scale. Each also had a non-traded differentiated good sector where production took place using increasing returns to scale technology. Then we allowed for free international migration between two of the countries and observed what happened to welfare in both countries, as indicated by their per capita utilities in the new equilibrium relative to their pre-migration utilities. Preferences of consumers were represented by a two-tier utility function (Dixit and Stiglitz, 1977). As migration took place it impacted utility in two ways. The expanding country enjoyed the positive effect of increased product diversity in the non-traded good sector. However, it also suffered adverse terms-of-trade effects as its production cost declined. The converse was true for the contracting country. To determine the net impact on welfare we derived indirect per capita utility functions of the countries algebraically and graphically. Then we juxtaposed the graphs of the utility functions to obtain possible general equilibria, which we used to observe the welfare outcomes. We found that the most likely outcomes were either that both countries gained, or that one country lost while the other gained. We were, however, able to generate cases in which both countries lost as a result of allowing free inter-country migration. This was most likely to happen when the shares of income spent on each country's export good differed significantly. In the three-country world, when we allowed two of the countries to engage in preferential trading arrangements while imposing a prohibitive tariff on imports from the third country, welfare of the partner countries declined. When inter-union migration was permitted, welfare declined even further. This, we showed, was due to the presence of the non-traded good sector.
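The two-tier preferences cited above are conventionally written with an upper tier over the traded goods and a CES lower tier over varieties of the non-traded differentiated good. A generic Dixit-Stiglitz specification, given here only as orientation and not as the dissertation's exact functional form, is

\[
U = C^{1-\mu} D^{\mu}, \qquad D = \left( \sum_{i=1}^{n} d_i^{\rho} \right)^{1/\rho}, \qquad 0 < \rho < 1, \; 0 < \mu < 1,
\]

where C is consumption of the traded homogeneous goods, d_i is consumption of variety i of the non-traded good, n is the number of varieties produced locally, and \sigma = 1/(1-\rho) is the elasticity of substitution between varieties; utility rises with n, which is the product-diversity effect described above.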
Abstract:
The purpose of this study was to create a scale that could measure compartmentalization. In the first of two studies, 311 working undergraduates were asked to indicate agreement with 119 items that measured compartmentalization. The resulting scale's reliability and validity were evaluated by having a second sample of 312 working students complete the items that comprise a sphere overlap scale (SOS), two measures of spillover, and a measure of personality, coping, and demoralization. Although the study's original goal was not realized, its procedures were successful in developing a short (10-item) measure of work-to-home spillover whose items loaded on a single factor. Structural equation modeling indicated that SOS items were correlated with existing measures of spillover and could be discriminated from related concepts of personality and coping. The SOS was also more highly correlated with demoralization than existing measures of spillover in hierarchical analyses that controlled for demographic factors, personality characteristics, and coping style. It is concluded that the SOS shows enough promise to warrant the cost of its appraisal as an alternative measure of spillover in a longitudinal study.
Abstract:
This thesis extends previous research on critical decision making and problem solving by refining and validating a self-report measure designed to assess the use of critical decision making and problem solving in making life choices. The analysis was conducted across two studies, yielding two sets of data on the psychometric properties of the measure. Psychometric analyses included: item analysis, internal consistency reliability, interrater reliability, and an exploratory factor analysis. This study also included regression analysis with the Wonderlic, an established measure of general intelligence, to provide preliminary evidence for the construct validity of the measure.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the values of n typically seen in many fields grow enormously. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
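For orientation, the reduced-rank (PARAFAC-type) factorization referred to here writes the joint probability mass function of p categorical variables as a finite mixture of product-multinomial kernels. In generic notation (not the chapter's exact parameterisation),

\[
P(y_1 = c_1, \ldots, y_p = c_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\]

where \nu_h are latent class weights, \lambda^{(j)}_{hc} is the probability that variable j takes level c within class h, and the smallest k for which such a representation exists is the nonnegative rank of the probability tensor.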
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
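The approximation studied in Chapter 4 is optimal in a Kullback-Leibler sense; writing \pi for the exact posterior and \mathcal{N} for the Gaussian family, one standard formulation (assumed here purely for illustration) is

\[
\hat{\gamma} = \arg\min_{\gamma \in \mathcal{N}} D_{\mathrm{KL}}(\pi \,\|\, \gamma), \qquad D_{\mathrm{KL}}(\pi \,\|\, \gamma) = \int \pi(\theta) \log \frac{\pi(\theta)}{\gamma(\theta)} \, d\theta,
\]

and the chapter's convergence-rate and finite-sample bounds control this divergence.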
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
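The construction described here builds on de Haan's spectral representation of a max-stable process. In its usual generic form (stated without the chapter's velocity and lifetime extension), a simple max-stable process with unit Fréchet margins can be written

\[
Z(s) = \max_{i \geq 1} \zeta_i W_i(s),
\]

where \{\zeta_i\} are the points of a Poisson process on (0, \infty) with intensity \zeta^{-2} d\zeta and the W_i are independent copies of a nonnegative process with E[W(s)] = 1 at every location s; the velocities and lifetimes mentioned above are attached to the support points of this representation.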
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
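One of the approximation classes analysed in Chapter 6 replaces full-data likelihood evaluations with evaluations on a random subset of the data. The sketch below is only a generic illustration of that idea, for a random-walk Metropolis update of a Gaussian mean; it is not the chapter's algorithm, and the data, subset size, and tuning values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a unit-variance Gaussian with unknown mean (true value 2.0).
data = rng.normal(loc=2.0, scale=1.0, size=100_000)
N, m = data.size, 2_000
subset = rng.choice(data, size=m, replace=False)   # one random subset reused by the approximate kernel

def approx_loglik(theta: float) -> float:
    """Approximate log-likelihood: the subset's contribution rescaled to the full data size."""
    return (N / m) * np.sum(-0.5 * (subset - theta) ** 2)

def approx_mh(n_iter: int = 5_000, step: float = 0.01) -> np.ndarray:
    """Random-walk Metropolis whose transition kernel evaluates only the subsampled likelihood."""
    theta, ll = 0.0, approx_loglik(0.0)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = approx_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # flat prior on theta
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain

chain = approx_mh()
print(chain[2_000:].mean())   # close to 2.0; the remaining error is the cost of the approximation
```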
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
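For orientation, the truncated-normal (Albert-Chib) data augmentation sampler for a probit model alternates between drawing latent Gaussian utilities, truncated according to the observed outcomes, and drawing the coefficients from a conjugate Gaussian. Below is a minimal sketch for an intercept-only rare-event example using scipy's truncnorm; the data are synthetic placeholders, not the advertising dataset discussed above.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Illustrative rare-event data: large n, very few successes.
n_obs, n_success = 10_000, 20
y = np.zeros(n_obs)
y[:n_success] = 1.0

def probit_gibbs(y: np.ndarray, n_iter: int = 500) -> np.ndarray:
    """Truncated-normal data augmentation Gibbs sampler for an intercept-only probit model (flat prior)."""
    n = y.size
    beta = 0.0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # Step 1: latent utilities z_i ~ N(beta, 1), truncated to (0, inf) if y_i = 1 and (-inf, 0] otherwise.
        a = np.where(y == 1, -beta, -np.inf)   # truncnorm takes standardized bounds
        b = np.where(y == 1, np.inf, -beta)
        z = truncnorm.rvs(a, b, loc=beta, scale=1.0, random_state=rng)
        # Step 2: intercept given latents: beta | z ~ N(mean(z), 1/n) under a flat prior.
        beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
        draws[t] = beta
    return draws

draws = probit_gibbs(y)
# The chain drifts only gradually from its starting value towards the stationary region
# (well below zero for this success rate), mirroring the slow mixing described above.
print(draws[-100:].mean())
```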
Abstract:
A low-threshold nanolaser with all three dimensions at the subwavelength scale is proposed and investigated. The nanolaser is constructed from an asymmetric hybrid plasmonic F-P cavity with Ag-coated end facets. Lasing characteristics are calculated using the finite element method at a wavelength of 1550 nm. The results show that, owing to the low modal loss and large modal confinement factor of the asymmetric plasmonic cavity structure, in conjunction with the high reflectivity of the Ag reflectors, a minimum threshold gain of 240 cm−1 is predicted. Furthermore, a Purcell factor as large as 2518 is obtained with optimized structure parameters, enhancing the rates of spontaneous and stimulated emission.
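For reference, the Purcell factor quoted above is conventionally related to the cavity quality factor Q and effective mode volume V_eff by the textbook expression (given here for orientation, not reproduced from the paper)

\[
F_P = \frac{3}{4\pi^2} \left( \frac{\lambda}{n} \right)^{3} \frac{Q}{V_{\mathrm{eff}}},
\]

where \lambda is the free-space wavelength (1550 nm here) and n is the refractive index of the gain region; subwavelength plasmonic confinement reduces V_eff, which is what pushes F_P into the thousands.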
Abstract:
The Drive for Muscularity Scale (DMS) is a widely used measure in studies of men’s body image, but few studies have examined its psychometric properties outside English-speaking samples. Here, we assessed the factor structure of a Malay translation of the DMS. A community sample of 159 Malay men from Kuala Lumpur, Malaysia, completed the DMS, along with measures of self-esteem, body appreciation, and muscle discrepancy. Exploratory factor analysis led to the extraction of two factors, differentiating attitudes from behaviours, which mirrors the parent scale. Both factors also loaded onto a higher-order drive for muscularity factor. The subscales of the Malay DMS had adequate internal consistencies and good convergent validity, insofar as significant relationships were reported with self-esteem, body appreciation, muscle discrepancy, and body mass index. These results indicate that the Malay DMS has acceptable psychometric properties and can be used to assess body image concerns in Malay men.
Abstract:
A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centred log ratios (clr) to observe the requirements of compositional data analysis and avoid closure issues. Following this, compositional multivariate techniques, including compositional Principal Component Analysis (PCA) and minimum/maximum autocorrelation factor (MAF) analysis, were used to determine the influence of underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was captured by the first four principal components (PCs), implying "significant" structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To consider an improvement over PCA that uses the spatial relationships of the data, a classification based on MAF analysis was undertaken using the first 6 dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the 6 most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat composition and the effect of this in classification analysis.
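The centred log-ratio transform used in this workflow is straightforward to apply directly; below is a minimal Python sketch of clr followed by PCA on the transformed compositions. The compositions are random placeholders, not Tellus data, and the library calls assume numpy and scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA

def clr(compositions: np.ndarray) -> np.ndarray:
    """Centred log-ratio transform: log of each part minus the row-wise mean of the logs."""
    log_x = np.log(compositions)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Placeholder compositional data: 5 samples x 4 parts, each row closed to sum to 1.
rng = np.random.default_rng(0)
raw = rng.uniform(0.1, 10.0, size=(5, 4))
comps = raw / raw.sum(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(clr(comps))
print(scores.shape)   # (5, 2): first two principal components in clr space
```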