956 results for "Models for count data"
Abstract:
Structural health monitoring has experienced a surge of interest over the past few decades, since the cost of rehabilitating structures such as oil pipelines, bridges and tall buildings is very high. In the last two decades, many methods have been developed to identify the true state of a structure, based on both physical models and experimental data. Modal testing is the most common; through experimental modal analysis of a structure, parameters such as natural frequencies, mode shapes and damping, as well as the frequency response function of the structure, can be obtained. From these parameters, different damage indicators have been proposed. However, for large and complex structures, any frequency-domain approach that relies on estimating the frequency response function is difficult to apply, since assumptions about the input excitation would have to be made. Operational modal analysis uses only the output signals of the system to extract the structural dynamic parameters and, therefore, to assess the state of a structure; this makes transmissibility-based approaches possible. In this sense, within operational modal analysis, transmissibility has attracted considerable attention in the scientific community over the last decade. Although many works have been published on the topic, this thesis proposes several techniques for assessing the state of a structure based exclusively on transmissibility. Firstly, a damage indicator based on a new parameter, the transmissibility coherence, is proposed and validated with numerical and experimental results obtained on a three-storey frame structure. Secondly, the Mahalanobis distance is applied to transmissibility measurements as a procedure for detecting damage-induced structural changes; this method is validated experimentally on a free-free beam. Thirdly, a neural network fed with transmissibility measurements is implemented as a damage prediction methodology on a numerically simulated beam.
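As an illustration of the second approach, here is a minimal sketch, assuming baseline transmissibility magnitudes from the undamaged structure are available as feature vectors (the function name and toy data are hypothetical, not the thesis's code): the Mahalanobis distance of a new measurement from the baseline population acts as the damage indicator.

```python
import numpy as np

def mahalanobis_damage_index(baseline, measurement):
    """Mahalanobis distance of one transmissibility feature vector
    from a population of baseline (undamaged) measurements.

    baseline:    (n_samples, n_features) array, e.g. |T(w)| sampled
                 at n_features frequency lines over n_samples tests.
    measurement: (n_features,) array from the current test.
    """
    mean = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    diff = measurement - mean
    # pseudo-inverse guards against an ill-conditioned covariance
    return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

# Hypothetical usage: flag damage when the index exceeds a threshold
# taken from the baseline distribution (e.g. its 95th percentile).
rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.05, size=(50, 20))   # 50 baseline tests
threshold = np.percentile(
    [mahalanobis_damage_index(baseline, b) for b in baseline], 95)
test = baseline[0] + 0.3                           # perturbed state
print(mahalanobis_damage_index(baseline, test) > threshold)
```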
Abstract:
A simple mathematical model of bacterial transmission within a hospital was used to study the effects of measures to control nosocomial transmission of bacteria and reduce antimicrobial resistance in nosocomial pathogens. The model predicts that: (i) Use of an antibiotic for which resistance is not yet present in a hospital will be positively associated at the individual level (odds ratio) with carriage of bacteria resistant to other antibiotics, but negatively associated at the population level (prevalence). Thus inferences from individual risk factors can yield misleading conclusions about the effect of antibiotic use on resistance to another antibiotic. (ii) Nonspecific interventions that reduce transmission of all bacteria within a hospital will disproportionately reduce the prevalence of colonization with resistant bacteria. (iii) Changes in the prevalence of resistance after a successful intervention will occur on a time scale of weeks to months, considerably faster than in community-acquired infections. Moreover, resistance can decline rapidly in a hospital even if it does not carry a fitness cost. The predictions of the model are compared with those of other models and published data. The implications for resistance control and study design are discussed, along with the limitations and assumptions of the model.
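The paper's equations are not reproduced here, but a minimal sketch of a generic single-ward colonization model of this kind (all rates are illustrative assumptions) shows the fast relaxation dynamics the abstract refers to:

```python
from scipy.integrate import solve_ivp

# Illustrative single-ward model (not the paper's exact equations):
# x = fraction of patients colonized with the resistant strain.
# Patients are discharged at rate mu and replaced by admissions that
# are colonized with probability m; in-ward transmission has rate beta.
beta, mu, m = 0.08, 1 / 10, 0.01   # per day; mean stay of 10 days

def dxdt(t, x):
    return beta * x * (1 - x) + mu * m - mu * x

sol = solve_ivp(dxdt, (0, 180), [0.30])
# After an intervention that lowers beta, x relaxes to its new,
# lower equilibrium within weeks to months, as the model predicts.
print(sol.y[0][-1])
```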
Abstract:
Proportion correct in two-alternative forced-choice (2AFC) detection tasks often varies when the stimulus is presented in the first or in the second interval. Reanalysis of published data reveals that these order effects (or interval bias) are strong and prevalent, refuting the standard difference model of signal detection theory. Order effects are commonly regarded as evidence that observers use an off-center criterion under the difference model with bias. We consider an alternative difference model with indecision, whereby observers are occasionally undecided and guess with some bias toward one of the response options. Whether or not the data show order effects, the two models fit 2AFC data indistinguishably, but they yield meaningfully different estimates of sensory parameters. Under indeterminacy as to which model governs 2AFC performance, parameter estimates are suspect and potentially misleading. The indeterminacy can be circumvented by modifying the response format so that observers can express indecision when needed. Reanalysis of published data collected in this way lends support to the indecision model. We illustrate alternative approaches to fitting psychometric functions under the indecision model and discuss designs for 2AFC experiments that improve the accuracy of parameter estimates, whether or not order effects are apparent in the data.
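A minimal simulation sketch of the indecision model described above (the sensitivity, indecision half-width and guessing-bias values are illustrative assumptions, not estimates from the paper): on each trial the observer responds from the sign of the internal difference unless it falls inside an indecision region, in which case a biased guess is made.

```python
import numpy as np

rng = np.random.default_rng(1)

def pc_2afc(signal_interval, d_prime=1.0, delta=0.5, guess_p1=0.7,
            n=100_000):
    """Proportion correct under the indecision model.

    signal_interval: 1 or 2 (interval containing the signal)
    delta:           half-width of the indecision region
    guess_p1:        probability of guessing 'interval 1' when undecided
    """
    # internal difference: interval-1 evidence minus interval-2 evidence
    loc = d_prime if signal_interval == 1 else -d_prime
    d = rng.normal(loc, np.sqrt(2), n)
    decided = np.abs(d) > delta
    resp1 = np.where(decided, d > 0, rng.random(n) < guess_p1)
    correct = resp1 if signal_interval == 1 else ~resp1
    return correct.mean()

# A biased guess (guess_p1 != 0.5) produces the interval bias seen in
# published data: accuracy differs across presentation orders.
print(pc_2afc(1), pc_2afc(2))
```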
Abstract:
The diagnosis of childhood and adolescent cancer and the demands of its treatment transform family life, particularly for the mothers, who accompany their children closely and spare no effort to offer them the best. The care provided by mothers is permeated by cultural influences that may favour the introduction of complementary therapies into their children's care. The aim of this study was to analyse the meanings of the experiences with complementary therapy of a group of mothers of children and adolescents with cancer. To achieve this aim, a qualitative study was carried out, adopting the theoretical framework of Medical Anthropology and narrative as the method. After ethical approval of the research, fifteen mothers of children and adolescents with cancer, undergoing treatment at a health service located in the interior of the state of São Paulo, were invited to participate in the study. Data were collected through two semi-structured interviews with each participant, conducted on the premises of the hospital complex and at the participants' homes, between July 2014 and July 2015. From the interviews, the participants' individual narratives were constructed, and the assumptions of explanatory models were used to organise the data on the reconstruction of the mothers' experiences regarding the cause, treatment and prognosis of the disease. Inductive thematic analysis was used to analyse the data from the narratives. The similar and the particular aspects of the narratives about the mothers' experiences with complementary treatments were related and integrated into two representative themes, presented in the form of narrative syntheses or units of meaning. The results were analysed and presented through the thematic narratives: 'When a child has cancer, one cannot imagine a mother's strength', which presents the persistence, energy, enthusiasm and motivation of the mothers in dealing with the demands of the oncological diagnosis and cancer treatment, as well as their influence on the decisions that ensure the quality of their children's treatment, whether or not these include alternative practices; and 'The use of complementary therapy motivated by hope', in which the meaning attributed by the mothers to the incorporation of complementary therapies into their child's care is one of renewed hope, with the purpose of promoting the child's or adolescent's well-being and curing the disease. The meanings of these experiences were explained through concepts derived from anthropology. The interpretation of the narratives centred on the experience of a group of mothers of children and adolescents with cancer with complementary therapy, from the perspective of the cultural system, allowed us to explain comprehensively how culture influences, through these meanings, the care provided by mothers to their children. The meanings of this group of mothers' experiences constitute knowledge that can be applied in clinical practice and in future research.
Abstract:
Background: The Strengths and Difficulties Questionnaire (SDQ) is a tool to measure the risk of mental disorders in children. The aim of this study is to describe the diagnostic efficiency and internal structure of the SDQ in the sample of children studied in the Spanish National Health Survey 2006. Methods: A representative sample of 6,773 children aged 4 to 15 years was studied. The data were obtained using the Minors Questionnaire of the Spanish National Health Survey 2006. The ROC curve was constructed, and the area under the curve, sensitivity, specificity and the Youden J index were calculated. The factorial structure was studied using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Results: The prevalence of behavioural disorders varied between 0.47% and 1.18% according to the requisites of the diagnostic definition. The area under the ROC curve varied from 0.84 to 0.91 according to the diagnosis. Factor models were cross-validated by means of two different random subsamples for EFA and CFA. EFA suggested a model of three correlated factors, which CFA confirmed. A five-factor model according to EFA and the theoretical five-factor model described in the literature were also confirmed. The reliabilities of the factors of the different models were acceptable (>0.70, except for one factor with reliability 0.62). Conclusions: The diagnostic behaviour of the SDQ in the Spanish population is within the working limits described in other countries. According to the results obtained in this study, the diagnostic efficiency of the questionnaire is adequate to identify probable cases of psychiatric disorders in low-prevalence populations. Regarding the factorial structure, we found that both the five- and the three-factor models fit the data with acceptable goodness-of-fit indices, the latter comprising an externalizing and an internalizing dimension and perhaps a meaningful positive social dimension. Accordingly, we recommend studying whether these differences depend on sociocultural factors or are, in fact, due to methodological questions.
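For reference, a minimal sketch of the screening statistics mentioned above, computed on simulated scores (no SDQ data or cut-offs from the study are used): the ROC curve, its area, and the cut-off maximising the Youden J index.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# Simulated screening scores: rare cases score higher on average
y = rng.random(5000) < 0.01                      # ~1% prevalence
score = np.where(y, rng.normal(20, 5, 5000), rng.normal(8, 5, 5000))

auc = roc_auc_score(y, score)
fpr, tpr, cut = roc_curve(y, score)
youden = tpr - fpr                               # J = sens + spec - 1
best = np.argmax(youden)
print(f"AUC={auc:.2f}, best cut-off={cut[best]:.1f}, "
      f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```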
Abstract:
Based on models and proxy data it has been proposed that salinity-driven stratification weakened in the subarctic North Pacific during the last deglaciation, which potentially contributed to the deglacial rise in atmospheric carbon dioxide. We present high-resolution subsurface temperature (T_Mg/Ca) and subsurface salinity-approximating (δ18O_ivc-sw) records across the last 20,000 years from the subarctic North Pacific and its marginal seas, derived from combined stable oxygen isotopes and Mg/Ca ratios of the planktonic foraminiferal species Neogloboquadrina pachyderma (sin.). Our results indicate regionally differing changes of subsurface conditions. During the Heinrich Stadial 1 and the Younger Dryas cold phases our sites were subject to reduced thermal stratification, brine rejection due to sea-ice formation, and increased advection of low-salinity water from the Alaskan Stream. In contrast, the Bølling-Allerød warm phase was characterized by strengthened thermal stratification, stronger sea-ice melting, and influence of surface waters that were less diluted by the Alaskan Stream. From direct comparison with alkenone-based sea surface temperature estimates (SST_Uk'37), we suggest deglacial thermocline changes that were closely related to changes in seasonal contrasts and stratification of the mixed layer. The modern upper-ocean conditions seem to have developed only since the early Holocene.
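As general background, not taken from this paper: Mg/Ca palaeothermometry typically rests on an exponential calibration (the constants A and B are species- and study-specific), and the ice-volume-corrected seawater value is obtained by removing the temperature component from the measured calcite δ18O and subtracting the global ice-volume contribution, schematically:

```latex
\mathrm{Mg/Ca} = B\,e^{A\,T}
\quad\Longrightarrow\quad
T_{\mathrm{Mg/Ca}} = \frac{1}{A}\,\ln\!\left(\frac{\mathrm{Mg/Ca}}{B}\right),
\qquad
\delta^{18}O_{\mathrm{ivc\text{-}sw}}
  \approx \delta^{18}O_{\mathrm{calcite}}
  - f\!\left(T_{\mathrm{Mg/Ca}}\right)
  - \Delta\delta^{18}O_{\mathrm{ice}}
```

Here f(T) stands for a palaeotemperature equation relating calcite δ18O to calcification temperature; the particular calibrations used in the study are not reproduced.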
Abstract:
This study empirically investigates the relationship between agency costs and the internal monitoring measures available to Brazilian investors in domestic companies, using samples of listed companies between 2010 and 2014, totalling 134 companies analysed and 536 observations. To measure this relationship, the internal monitoring variables used were information on executives' variable compensation, including the use of stock option grants; the composition of the board of directors, with emphasis on the representativeness of independent directors and on Chairman-CEO duality; and the percentage of the companies' share capital owned by executives. As proxies for agency costs, the Asset Turnover Ratio and General & Administrative Expenses (G&A) as a percentage of Net Revenue were used. In this context, two research hypotheses were established and panel regression models controlled for firm and time fixed effects were estimated, employing the agency-cost proxies as dependent variables and leverage and firm size as control variables. The results show that, in the selected sample, there is a positive and significant relationship between the percentage of variable compensation and the agency-cost proxies, contrary to what was originally expected. It is thus concluded that companies with a larger variable component in total executive compensation incur higher agency costs, which leads to the conclusion that such tools are not good strategies for aligning the interests of executives and shareholders. The other internal monitoring variables were not significant.
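A minimal sketch of a two-way fixed-effects panel regression of the kind described (the column names and data file are hypothetical; this is not the authors' code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm-year, with an agency-cost proxy
# (asset turnover), the share of variable pay, leverage and size.
df = pd.read_csv("panel.csv")  # columns: firm, year, asset_turnover,
                               # var_comp_pct, leverage, log_assets

# Firm and year fixed effects via dummy variables (a within estimator
# would give identical slopes); standard errors clustered by firm.
model = smf.ols(
    "asset_turnover ~ var_comp_pct + leverage + log_assets"
    " + C(firm) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(model.summary().tables[1])
```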
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits the estimation of the rate of false-negative errors and the correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
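A minimal sketch of the ZIB idea in its simplest constant-probability form (simulated data; the paper's covariate-dependent logistic version is not reproduced): with occupancy probability psi, per-visit detection probability p and K visits, a site with zero detections is either unoccupied or occupied but always missed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def nll(theta, detections, K):
    """Negative log-likelihood of a constant-(psi, p) ZIB model.
    detections: number of visits (out of K) at which the species
    was recorded, one entry per site."""
    psi, p = expit(theta)            # map unconstrained -> (0, 1)
    d = np.asarray(detections)
    # occupied sites: Binomial(K, p); the binomial coefficient is
    # omitted since it is constant in the parameters
    lik_occ = psi * p**d * (1 - p) ** (K - d)
    lik = np.where(d == 0, lik_occ + (1 - psi), lik_occ)
    return -np.sum(np.log(lik))

# Simulated survey: 200 sites, psi = 0.6, p = 0.3, K = 3 visits
rng = np.random.default_rng(3)
K = 3
occupied = rng.random(200) < 0.6
y = rng.binomial(K, 0.3, 200) * occupied
fit = minimize(nll, x0=[0.0, 0.0], args=(y, K))
print(expit(fit.x))                  # recovers (psi, p) approximately
```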
Abstract:
Objective: Science needs to constantly match research models against the data. With respect to the epidemiology of schizophrenia, the widely held belief that the incidence of schizophrenia shows little variation may no longer be supported by the data. The aims of this paper are (i) to explore data-vs.-belief mismatch with respect to the incidence of schizophrenia, and (ii) to speculate on the causes and consequences of such discrepancies. Method: Based on a recently published systematic review of the incidence of schizophrenia, the distribution of incidence rates around the world was examined. In order to examine if the incidence of schizophrenia differed by sex, male vs. female risk ratios were generated. Results: The distribution of incidence rates for schizophrenia is asymmetrical with many high rates skewing the distribution. Based on the central 80% of rates, the incidence of schizophrenia varies in a five-fold range (between 7.7 and 43.0 per 100 000). Males have a significantly higher incidence of schizophrenia compared with females (median male to female risk ratio = 1.4), and this difference could not be accounted for by diagnostic criteria or age range. Conclusion: The beliefs that (i) the incidence of schizophrenia does not vary between sites and (ii) males and females are equally affected, may have persisted because of an unspoken deeper belief that schizophrenia is an egalitarian and exceptional disorder. Our ability to generate productive hypotheses about the aetiology of schizophrenia rests on an accurate appraisal of the data. Beliefs not supported by data should be identified and relabelled as myths.
Abstract:
Background The 2001 Australian census revealed that adults aged 65 years and over constituted 12.6% of the population, up from 12.1% in 1996. It is projected that this figure will rise to 21% or 5.1 million Australians by 2031. In 1998, 6% (134 000) of adults in Australia aged 65 years and over were residing in nursing homes or hostels and this number is also expected to rise. As skin ages, there is a decreased turnover and replacement of epidermal skin cells, a thinning subcutaneous fat layer and a reduced production of protective oils. These changes can affect the normal functions of the skin such as its role as a barrier to irritants and pathogens, temperature and water regulation. Generally, placement in a long-term care facility indicates an inability of the older person to perform all of the activities of daily living such as skin care. Therefore, skin care management protocols should be available to reduce the likelihood of skin irritation and breakdown and ultimately promote comfort of the older person. Objectives The objective of this review was to determine the best available evidence for the effectiveness and safety of topical skin care regimens for older adults residing in long-term aged care facilities. The primary outcome was the incidence of adverse skin conditions with patient satisfaction considered as a secondary outcome. Search strategy A literature search was performed using the following databases: PubMed (NLM) (1966–4/2003), Embase (1966–4/2003), CINAHL (1966–4/2003), Current Contents (1993–4/2003), Cochrane Library (1966–2/2003), Web of Science (1995–12/2002), Science Citation Index Expanded and ProceedingsFirst (1993–12/2002). Health Technology Assessment websites were also searched. No language restrictions were applied. Selection criteria Systematic reviews of randomised controlled trials, randomised and non-randomised controlled trials evaluating any non-medical intervention or program that aimed to maintain or improve the integrity of skin in older adults were considered for inclusion. Participants were 65 years of age or over and residing in an aged care facility, hospital or long-term care in the community. Studies were excluded if they evaluated pressure-relieving techniques for the prevention of skin breakdown. Data collection and analysis Two independent reviewers assessed study eligibility for inclusion. Study design and quality were tabulated and relative risks, odds ratios, mean differences and associated 95% confidence intervals were calculated from individual comparative studies containing count data. Results The resulting evidence of the effectiveness of topical skin care interventions was variable and dependent upon the skin condition outcome being assessed. The strongest evidence for maintenance of skin condition in incontinent patients found that disposable bodyworn incontinence protection reduced the odds of deterioration of skin condition compared with non-disposable bodyworns. The best evidence for non-pressure relieving topical skin care interventions on pressure sore formation found the no-rinse cleanser Clinisan to be more effective than soap and water at maintaining healthy skin (no ulcers) in elderly incontinent patients in long-term care. The quality of studies examining the effectiveness of topical skin care interventions on the incidence of skin tears was very poor and inconclusive. 
Topical skin care for prevention of dermatitis found that Sudocrem could reduce the redness of skin compared with zinc cream if applied regularly after each pad change, but not the number of lesions. Topical skin care on dry skin found the Bag Bath/Travel Bath no-rinse skin care cleanser to be more effective at preventing overall skin dryness, and most specifically flaking and scaling, when compared with the traditional soap-and-water washing method in residents of a long-term care facility. Information on the safety of topical skin care interventions is lacking; therefore, because of the lack of evidence, no recommendation can be made on the safety of any intervention included in this review.
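For reference, a minimal sketch of the count-data effect measure used throughout such reviews (the 2x2 counts are illustrative, not results from any included study): an odds ratio and its 95% confidence interval from a 2x2 table.

```python
import numpy as np

# Illustrative 2x2 table: rows = intervention/control,
# columns = skin deterioration events / non-events.
a, b = 8, 92     # intervention: events, non-events
c, d = 20, 80    # control:      events, non-events

or_ = (a * d) / (b * c)
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf's method
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```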
Abstract:
Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models "job openings" are a key component. Thus, when taking these models to the data we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this concept captures the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires on the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies on the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function vis-à-vis the traditional vacancy measure. Firm Level Evidence from Two Vacancy Measures. Using firm level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a super-set of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies. Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for slow convergence to steady state, which is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics. The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, in which a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output as workers choose not to change their hours worked in response to wage movements.
In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
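A minimal sketch of the kind of aggregate matching-function comparison the first essay performs, using a standard Cobb-Douglas specification estimated by OLS on simulated series (not the thesis's data or code):

```python
import numpy as np
import statsmodels.api as sm

# Cobb-Douglas matching function: H = mu * U^alpha * V^(1-alpha),
# log-linearised to log H = log mu + alpha*log U + (1-alpha)*log V.
rng = np.random.default_rng(4)
logU = rng.normal(11, 0.3, 144)             # e.g. 12 years of months
logV = rng.normal(9, 0.4, 144)
logH = 0.2 + 0.6 * logU + 0.4 * logV + rng.normal(0, 0.05, 144)

X = sm.add_constant(np.column_stack([logU, logV]))
fit = sm.OLS(logH, X).fit()
print(fit.params)      # recovers (log mu, alpha, 1 - alpha)
# Replacing logV with a better job-openings measure should raise the
# fit (R^2) of this regression; that is the criterion used here.
print(fit.rsquared)
```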
Abstract:
Benchmarking techniques have evolved over the years since Xerox's pioneering visits to Japan in the late 1970s, and the focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact, the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. Even if non-parametric models such as Data Envelopment Analysis were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist ensuring that important issues are not forgotten. In addition, by the use of a standardized framework, non-parametric assessments will be more reliable, more repeatable, more manageable, faster and less costly.
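As an illustration of the 'Operational models' phase, a minimal input-oriented, constant-returns-to-scale (CCR) DEA sketch solved as a linear program (toy data; the COOPER-framework itself prescribes no particular code):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0][:, None], X.T])
    # outputs: -sum_j lam_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                               # efficiency score

X = np.array([[2.0, 4.0], [4.0, 2.0], [4.0, 8.0]])   # two inputs
Y = np.array([[1.0], [1.0], [1.0]])                  # single output
# Units 0 and 1 lie on the frontier (score 1); unit 2 scores 0.5.
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```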