897 results for Sample size
Abstract:
Purpose – The UK experienced a number of Extreme Weather Events (EWEs) during recent years, and a significant number of businesses were affected as a result. With the intensity and frequency of weather extremes predicted to increase in the future, enhancing the resilience of businesses, especially of Small and Medium-sized Enterprises (SMEs), which are considered highly vulnerable, has become a necessity. However, little research has been undertaken on how construction SMEs respond to the risk of EWEs. In seeking to help address this dearth of research, this investigation sought to identify how construction SMEs were being affected by EWEs and the coping strategies being used. Design/methodology/approach – A mixed methods research design was adopted to elicit information from construction SMEs, involving a questionnaire survey and a case study approach. Findings – Results indicate a lack of coping strategies among the construction SMEs studied. Where coping strategies had been implemented, they were found to be extensions of existing risk management strategies rather than radical measures specifically addressing EWEs. Research limitations/implications – The exploratory survey focused on the Greater London area and was limited to a relatively small sample size. This limitation is addressed by conducting detailed case studies of two SMEs whose projects were located in EWE-prone localities. The mixed methods research design benefits the research by producing more robust findings. Practical implications – SMEs require a better way of integrating the potential impact of EWEs into the initial project planning stage. This could be achieved through a better risk assessment model supported by better EWE prediction data. Originality/value – The paper provides an original contribution towards the overarching agenda of SME resilience and policy making in the area of EWE risk management. It informs both policy makers and practitioners on issues of planning and preparedness against EWEs.
Abstract:
The principal theme of this thesis is the identification of additional factors affecting soft contact lens fit and, consequently, the better prediction of that fit. Various models have been put forward in an attempt to predict the parameters that influence soft contact lens fit dynamics; however, the factors that influence variation in soft lens fit are still not fully understood. The investigations in this body of work involved the use of a variety of different imaging techniques to both quantify the anterior ocular topography and assess lens fit. The use of Anterior-Segment Optical Coherence Tomography (AS-OCT) allowed for a more complete characterisation of the cornea and corneoscleral profile (CSP) than either conventional keratometry or videokeratoscopy alone, and for the collection of normative data relating to the CSP for a substantial sample size. The scleral face was identified as being rotationally asymmetric, the mean corneoscleral junction (CSJ) angle being sharpest nasally and becoming progressively flatter at the temporal, inferior and superior limbal junctions. Additionally, 77% of all CSJ angles were within ±5° of 180°, demonstrating an almost tangential extension of the cornea to form the paralimbal sclera. Use of AS-OCT allowed for a more robust determination of corneal diameter than white-to-white (WTW) measurement, which is highly variable and dependent on changes in peripheral corneal transparency. Significant differences in ocular topography were found between different ethnicities and sexes, most notably for corneal diameter and corneal sagittal height variables. Lens tightness was found to be significantly correlated with the difference between horizontal CSJ angles (r = +0.40, P = 0.0086). Modelling of the CSP data allowed for prediction of up to 24% of the variance in contact lens fit; however, stronger associations and an increase in the modelled prediction of variance in fit would likely have been achieved had an objective method of lens fit assessment been available. A subsequent investigation to determine the validity and repeatability of objective contact lens fit assessment using digital video capture showed no significant benefit over subjective evaluation. The technique, however, was employed in the ensuing investigation to show significant changes in lens fit between 8 hours (the longest duration of wear previously examined) and 16 hours, demonstrating that wearing time is an additional factor driving lens fit dynamics. The modelling of data from enhanced videokeratoscopy composite maps alone allowed for up to 77% of the variance in soft contact lens fit to be predicted, and up to almost 90% when used in conjunction with OCT. The investigations provided further insight into ocular topography and the factors affecting soft contact lens fit.
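As a rough illustration of the kind of statistics reported above (a Pearson correlation such as r = +0.40 and a "percentage of variance predicted" figure), the sketch below shows how such quantities are computed; the arrays are hypothetical placeholders, not the thesis data.

```python
# Minimal sketch, not the thesis' actual analysis: computing a Pearson correlation
# and the proportion of variance explained by a single predictor. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
csj_angle_diff = rng.normal(0.0, 5.0, size=50)                 # hypothetical horizontal CSJ angle difference (deg)
lens_tightness = 0.3 * csj_angle_diff + rng.normal(0, 3, 50)   # hypothetical lens tightness score

r, p_value = stats.pearsonr(csj_angle_diff, lens_tightness)
variance_explained = r ** 2                                    # variance predicted by a one-factor model
print(f"r = {r:+.2f}, P = {p_value:.4f}, variance explained = {variance_explained:.0%}")
```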
Abstract:
Graph embedding is a general framework for subspace learning. However, because of the well-known outlier sensitivity of the L2-norm, conventional graph embedding is not robust to the outliers that occur in many practical applications. In this paper, an improved graph embedding algorithm (termed LPP-L1) is proposed by replacing the L2-norm with the L1-norm. In addition to its robustness property, LPP-L1 avoids the small sample size problem. Experimental results on both synthetic and real-world data demonstrate these advantages. © 2009 Elsevier B.V. All rights reserved.
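For context, here is a minimal sketch of conventional (L2-norm) LPP, the baseline that LPP-L1 makes robust; the data, neighbourhood size and ridge term are illustrative assumptions, and the paper's L1-norm variant would replace this closed-form eigenproblem with an iterative procedure.

```python
# Sketch of standard L2-norm LPP (Locality Preserving Projections) on toy data.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(1).normal(size=(100, 10))       # samples x features (hypothetical)
W = kneighbors_graph(X, n_neighbors=5, mode="connectivity", include_self=False)
W = (0.5 * (W + W.T)).toarray()                            # symmetric graph weights
D = np.diag(W.sum(axis=1))                                 # degree matrix
L = D - W                                                  # graph Laplacian

# Generalized eigenproblem  X^T L X a = lambda X^T D X a ; keep the smallest eigenvectors.
A = X.T @ L @ X
B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])                # small ridge guards against singular B (small-sample issue)
vals, vecs = eigh(A, B)
projection = vecs[:, :2]                                   # two embedding directions
Y = X @ projection                                         # embedded samples
```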
Abstract:
This work considers an application of the heterogeneous variables system prediction method to the time series analysis problem with respect to sample size. A logical-and-probabilistic correlation is constructed from the class of logical decision functions. Two cases are considered: when the information about the event is retained in the process itself, and when it is retained in a dependent process.
Abstract:
* This work was financially supported by RFBR-04-01-00858.
Abstract:
Background: Despite initial concerns about the sensitivity of the proposed diagnostic criteria for DSM-5 Autism Spectrum Disorder (ASD; e.g. Gibbs et al., 2012; McPartland et al., 2012), evidence is growing that the DSM-5 criteria provide an inclusive description with both good sensitivity and specificity (e.g. Frazier et al., 2012; Kent, Carrington et al., 2013). The capacity of the criteria to provide high levels of sensitivity and specificity comparable with DSM-IV-TR, however, relies on careful measurement to ensure that appropriate items from diagnostic instruments map onto the new DSM-5 descriptions. Objectives: To use an existing DSM-5 diagnostic algorithm (Kent, Carrington et al., 2013) to identify a set of ‘essential’ behaviors sufficient to make a reliable and accurate diagnosis of DSM-5 Autism Spectrum Disorder (ASD) across age and ability level. Methods: Specific behaviors were identified and tested from the recently published DSM-5 algorithm for the Diagnostic Interview for Social and Communication Disorders (DISCO). Analyses were run on existing DISCO datasets, with a total participant sample size of 335. Three studies provided step-by-step development towards identification of a minimum set of items. Study 1 identified the most highly discriminating items (p<.0001). Study 2 used a lower selection threshold than Study 1 (p<.05) to facilitate better representation of the full DSM-5 ASD profile. Study 3 included additional items previously reported as significantly more frequent in individuals with higher ability. The discriminant validity of all three item sets was tested using Receiver Operating Characteristic curves. Finally, sensitivity across age and ability was investigated in a subset of individuals with ASD (n=190). Results: Study 1 identified an item set (14 items) with good discriminant validity, but one which predominantly measured social-communication behaviors (11/14). The Study 2 item set (48 items) better represented the DSM-5 ASD profile and had good discriminant validity, but lacked sensitivity for individuals with higher ability. The final Study 3 adjusted item set (54 items) improved sensitivity for individuals with higher ability, and performance was comparable to the published DISCO DSM-5 algorithm. Conclusions: This work represents a first attempt to derive a reduced set of behaviors for DSM-5 directly from an existing standardized ASD developmental history interview. Further work involving existing ASD diagnostic tools with community-based and well-characterized research samples will be required to replicate these findings and exploit their potential to contribute to a more efficient and focused ASD diagnostic process.
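As an illustration of the general procedure described (selecting highly discriminating items by a p-value threshold, then checking the item set's discriminant validity with an ROC curve), the sketch below uses simulated binary items and diagnoses; it is not the DISCO DSM-5 algorithm, and the thresholds and data are placeholders.

```python
# Illustrative sketch only: p-value-based item selection plus ROC check on simulated data.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, n_items = 335, 100
diagnosis = rng.integers(0, 2, n)                              # 1 = ASD, 0 = non-ASD (hypothetical)
items = (rng.random((n, n_items)) < (0.3 + 0.3 * diagnosis[:, None])).astype(int)

selected = []
for j in range(n_items):
    table = [[(items[diagnosis == 1, j] == 1).sum(), (items[diagnosis == 1, j] == 0).sum()],
             [(items[diagnosis == 0, j] == 1).sum(), (items[diagnosis == 0, j] == 0).sum()]]
    if fisher_exact(table)[1] < 0.0001:                        # Study 1 style threshold (p < .0001)
        selected.append(j)

score = items[:, selected].sum(axis=1)                         # simple item-count score for the reduced set
print(len(selected), "items selected; AUC =", round(roc_auc_score(diagnosis, score), 2))
```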
Abstract:
This thesis describes the investigation of the effects of ocular supplements with different levels of nutrients on the macular pigment optical density (MPOD) in participants with healthy eyes. A review of the literature highlighted that ocular supplements are produced in various combinations of nutrients and concentrations. The ideal concentrations of nutrients such as lutein (L) have not been established. It was unclear whether different stages of eye disease require different concentrations of key nutrients, leading to the design of this study. The primary aim was to determine the effects of ocular supplements with different concentrations of nutrients on the MPOD of healthy participants. The secondary aim was to determine L and zeaxanthin (Z) intake at the start and end of the study through completion of food diaries. The primary study was split into two experiments: Experiment 1 was an exploratory study to determine the sample size, and Experiment 2 was the main study. Statistical power was calculated and a sample size of 38 was specified. Block stratification for age, gender and smoking habit was applied, and of 101 volunteers, 42 completed the study, 31 of them with both sets of food diaries. Four confounders were accounted for in the design of the study: gender, smoking habit, age and diet. Further factors that could affect comparability of results between studies were identified during the study but were not monitored: ethnicity, gastro-intestinal health, alcohol intake, body mass index and genetics. Comparisons were made between the sample population and the Sheffield general population according to recent demographic results in the public domain. Food diaries were analysed and showed no statistically significant difference between baseline and final results. The average L and Z intake for the 31 participants who returned both sets of food diaries was 1.96 mg in the initial food diaries and 1.51 mg in the final food diaries. The effect of the two ocular supplements with different levels of xanthophyll (6 mg lutein/zeaxanthin and 10 mg lutein only) on MPOD was not significantly different over a four-month period.
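The sample size specification mentioned above would typically come from a power calculation of the following form; the effect size, alpha and power shown are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of a power-based sample size calculation for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.95,   # assumed standardized difference in MPOD change
                                    alpha=0.05,        # two-sided significance level
                                    power=0.80,        # desired statistical power
                                    ratio=1.0)         # equal group sizes
print(f"required participants per group: {n_per_group:.1f}")
```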
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires the choice of initial sample number (N0), number of replicates (M), and number of bins for probability distribution reconstruction (n). It is found that the Squared Hellinger Distance, H^2, is a useful measure of the accuracy of the Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size is preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
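A minimal sketch of the accuracy measure in question, the squared Hellinger distance H^2 between a binned MC estimate and a reference distribution, is shown below; the test distribution and the N0, M and n values are assumptions for illustration only, not the paper's PBE solver.

```python
# Squared Hellinger distance between two binned (discrete) distributions.
import numpy as np

def squared_hellinger(p, q):
    """H^2 = 0.5 * sum((sqrt(p_i) - sqrt(q_i))^2) for normalized discrete distributions."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

rng = np.random.default_rng(3)
n_bins, N0, M = 32, 1000, 10                        # bins, initial sample size, replicates (assumed values)
reference = np.histogram(rng.exponential(1.0, 10**6), bins=n_bins, range=(0, 8))[0]
mc_samples = rng.exponential(1.0, (M, N0))          # stand-in for M replicate MC populations
mc_hist = sum(np.histogram(s, bins=n_bins, range=(0, 8))[0] for s in mc_samples)
print(f"H^2 = {squared_hellinger(mc_hist, reference):.4f}")
```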
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls in within the bigness taxonomy. Large p small n data sets for instance require a different set of tools from the large n small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress the fact that simplicity in the sense of Ockham’s razor non-plurality principle of parsimony tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
Abstract:
This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
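As a sketch of the two families of methods compared (regularization and kernel algorithms, tuned by cross-validation) on a p >> n problem, the example below uses simulated data in place of the microarray sets studied; the model choices and parameter grids are illustrative assumptions.

```python
# Comparing an L1-regularized linear model with a kernel method on an HDLSS problem (p >> n),
# with cross-validation used to tune the penalty/kernel parameters.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=2000, n_informative=20,
                           random_state=0)           # n = 60 << p = 2000

regularized = GridSearchCV(LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000),
                           {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
kernel_svm = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [1, 10], "gamma": ["scale", 1e-4]}, cv=5)

for name, model in [("regularization", regularized), ("kernel", kernel_svm)]:
    acc = cross_val_score(model, X, y, cv=5).mean()   # nested CV: outer loop estimates test error
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```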
Abstract:
Medication reconciliation is an important process in reducing medication errors in many countries. Canada, the USA, and the UK have incorporated medication reconciliation as a priority area for national patient safety initiatives and goals. The UK national guidance excludes the pediatric population. The aim of this review was to explore the occurrence of medication discrepancies in the pediatric population. The primary objective was to identify studies reporting the rate and clinical significance of the discrepancies, and the secondary objective was to ascertain whether any specific interventions have been used for medication reconciliation in pediatric settings. The following electronic bibliographic databases were used to identify studies: PubMed, OVID EMBASE (1980 to 2012 week 1), ISI Web of Science, ISI Biosis, Cumulative Index to Nursing and Allied Health Literature, and OVID International Pharmaceutical Abstracts (1970 to January 2012). Primary studies were identified that observed medication discrepancies in children under 18 years of age upon hospital admission, transfer and discharge, or that reported medication reconciliation interventions. Two independent reviewers screened titles and abstracts for relevant articles and extracted data using pre-defined data fields, including risk of bias assessment. Ten studies were identified, with variation in the reported stage and rate of discrepancies. Studies were heterogeneous in definitions, methods, and patient populations. Most studies related to admissions and reported consistently high rates of discrepancies, ranging from 22 to 72.3% of patients (sample sizes ranging from 23 to 272). Seven of the studies were low-quality observational studies and three were 'grey literature' non-peer-reviewed conference abstracts. Studies involving small numbers of patients have shown that medication discrepancies occur at all transitions of care in children. Further research is required to investigate and demonstrate how implementing medication reconciliation can reduce discrepancies and potential patient harm. © 2013 Springer International Publishing Switzerland.
Abstract:
2000 Mathematics Subject Classification: Primary 60G51, secondary 60G70, 60F17.
Abstract:
INTRODUCTION: The inappropriate use of antipsychotics in people with dementia for behaviour that challenges is associated with an estimated 1800 deaths annually. However, solely focusing on antipsychotics may transfer prescribing to other equally dangerous psychotropics. Little is known about the role of pharmacists in the management of psychotropics used to treat behaviours that challenge. This research aims to determine whether it is feasible to implement and measure the effectiveness of a combined pharmacy-health psychology intervention incorporating a medication review and staff training package to limit the prescription of psychotropics to manage behaviour that challenges in care home residents with dementia. METHODS/ANALYSIS: Six care homes within the West Midlands will be recruited. People with dementia receiving medication for behaviour that challenges, or their personal consultee, will be approached regarding participation. Medication used to treat behaviour that challenges will be reviewed by the pharmacist, in collaboration with the general practitioner (GP), the person with dementia and their carer. The behavioural intervention consists of a training package for care home staff and GPs promoting person-centred care and treating behaviours that challenge as an expression of unmet need. The primary outcome measure is the Neuropsychiatric Inventory-Nursing Home version (NPI-NH). Other outcomes include quality of life (EQ-5D and DEMQoL), cognition (sMMSE), health economics (CSRI) and prescribed medication, including whether recommendations were implemented. Outcome data will be collected at 6 weeks, and at 3 and 6 months. Pre-training and post-training interviews will explore stakeholders' expectations and experiences of the intervention. Data will be used to estimate the sample size for a definitive study. ETHICS/DISSEMINATION: The project has received a favourable opinion from the East Midlands REC (15/EM/3014). If potential participants lack capacity, a personal consultee will be consulted regarding participation in line with the Mental Capacity Act. Results will be published in peer-reviewed journals and presented at conferences.
Abstract:
Privately owned water utilities typically operate under a regulated monopoly regime. Price-cap regulation has been introduced as a means to enhance efficiency and innovation. The main objective of this paper is to propose a methodology for measuring productivity change across companies and over time when the sample size is limited. An empirical application is developed for the UK water and sewerage companies (WaSCs) for the period 1991-2008. A panel index approach is applied to decompose and derive unit-specific productivity growth as a function of the productivity growth achieved by benchmark firms and the catch-up to the benchmark firm achieved by less productive firms. The results indicated that significant gains in productivity occurred after 2000, when the regulator set tighter reviews. However, the average WaSC must still improve by 2.69% over a period of five years to achieve performance comparable to the benchmark firm. This study is relevant to regulators who are interested in developing comparative performance measurement when the number of water companies that can be evaluated is limited. Moreover, setting an appropriate X factor is essential to improving the efficiency of water companies, and this study helps to address that challenge.
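The decomposition idea can be sketched with hypothetical productivity levels: each firm's productivity growth is expressed as the benchmark firm's growth plus that firm's catch-up to (or fall-back from) the benchmark. This is only an illustration of the logic, not the paper's full panel index methodology; the firms and numbers below are invented.

```python
# Toy decomposition: firm growth = benchmark growth + catch-up to the benchmark.
import numpy as np

# hypothetical total factor productivity levels for three firms in two periods
tfp = {"firm_A": (1.00, 1.08),   # benchmark firm
       "firm_B": (0.80, 0.90),
       "firm_C": (0.70, 0.72)}

benchmark_growth = np.log(tfp["firm_A"][1]) - np.log(tfp["firm_A"][0])  # log growth of the benchmark

for firm, (t0, t1) in tfp.items():
    growth = np.log(t1) - np.log(t0)
    catch_up = growth - benchmark_growth            # positive = closing the gap to the benchmark
    print(f"{firm}: growth {growth:+.3f} = benchmark {benchmark_growth:+.3f} + catch-up {catch_up:+.3f}")
```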