963 results for STATISTICAL MODELS
Abstract:
The aim of this research work was primarily to examine the relevance of patient parameters, ward structures, procedures and practices, in respect of the potential hazards of wound cross-infection and nasal colonisation with multiple resistant strains of Staphylococcus aureus, which it is thought might provide a useful indication of a patient's general susceptibility to wound infection. Information from a large cross-sectional survey involving 12,000 patients from some 41 hospitals and 375 wards was collected over a five-year period from 1967-72, and its validity checked before any subsequent analysis was carried out. Many environmental factors and procedures which had previously been thought (but never conclusively proved) to have an influence on wound infection or nasal colonisation rates, were assessed, and subsequently dismissed as not being significant, provided that the standard of the current range of practices and procedures is maintained and not allowed to deteriorate. Retrospective analysis revealed that the probability of wound infection was influenced by the patient's age, duration of pre-operative hospitalisation, sex, type of wound, presence and type of drain, number of patients in ward, and other special risk factors, whilst nasal colonisation was found to be influenced by the patient's age, total duration of hospitalisation, sex, antibiotics, proportion of occupied beds in the ward, average distance between bed centres and special risk factors. A multi-variate regression analysis technique was used to develop statistical models, consisting of variable patient and environmental factors which were found to have a significant influence on the risks pertaining to wound infection and nasal colonisation. A relationship between wound infection and nasal colonisation was then established and this led to the development of a more advanced model for predicting wound infections, taking advantage of the additional knowledge of the patient's state of nasal colonisation prior to operation.
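The thesis's fitted models are not reproduced in the abstract, but the kind of multivariate model it describes, infection risk regressed on patient and ward factors, can be sketched as follows. This is a minimal illustration with synthetic data; the variable names are illustrative stand-ins, not the survey's.

```python
# A minimal sketch of a multivariate model of wound-infection risk.
# All variables and data below are synthetic, not the thesis's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(18, 90, n),   # age (years)
    rng.integers(0, 30, n),    # pre-operative stay (days)
    rng.integers(0, 2, n),     # sex (0 = female, 1 = male)
    rng.integers(0, 2, n),     # drain present (0/1)
    rng.integers(4, 40, n),    # number of patients in ward
])
# Synthetic outcome: risk rises with age and pre-operative stay.
logit = -4 + 0.02 * X[:, 0] + 0.05 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
```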
Abstract:
This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided between (a) the testing of the research hypotheses by matching each household's waste against its demographic/socioeconomic characteristics (b) the development of statistical models capable of predicting the waste arisings from an individual household and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility. The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
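The calibrate-then-predict idea described above can be illustrated with a toy two-stage example: fit a household-level regression of waste on socio-economic variables, then scale it with census-style area counts. Every variable name, coefficient, and figure below is hypothetical, not the thesis's model.

```python
# Sketch of the two-stage method: household-level calibration, then
# area-based prediction from census-style aggregates. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1277                                  # survey size from the abstract
household_size = rng.integers(1, 7, n)
income_band = rng.integers(1, 6, n)
waste_kg_week = 3 + 2.5 * household_size + 0.8 * income_band + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([household_size, income_band]))
fit = sm.OLS(waste_kg_week, X).fit()

# Area-based estimate: plug in mean characteristics of a census area.
area_mean = np.array([1.0, 2.4, 3.1])     # [const, mean size, mean income band]
households_in_area = 4200
print("predicted tonnes/week:", fit.params @ area_mean * households_in_area / 1000)
```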
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
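The model-selection logic argued for above can be made concrete with a toy example: fit a shared-parameter account and a per-patient account to simulated accuracy data and compare them by AIC rather than a binary hypothesis test. This is only a sketch of the reasoning, not the authors' analyses.

```python
# Toy model selection over a small patient series: one shared accuracy
# parameter versus one parameter per patient, compared by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
correct = rng.binomial(40, 0.7, size=8)   # 8 patients, 40 trials each
trials = np.full(8, 40)

def aic(loglik, k):
    return 2 * k - 2 * loglik

# Model 1: one shared accuracy parameter for all patients.
p_shared = correct.sum() / trials.sum()
ll1 = stats.binom.logpmf(correct, trials, p_shared).sum()

# Model 2: one accuracy parameter per patient.
p_each = correct / trials
ll2 = stats.binom.logpmf(correct, trials, p_each).sum()

print("shared-parameter AIC:", aic(ll1, 1))
print("per-patient AIC:    ", aic(ll2, len(correct)))
```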
Abstract:
Accurate T-cell epitope prediction is a principal objective of computational vaccinology. As a service to the immunology and vaccinology communities at large, we have implemented, as a server on the World Wide Web, a partial least squares-based multivariate statistical approach to the quantitative prediction of peptide binding to major histocompatibility complexes (MHC), the key checkpoint on the antigen presentation pathway within adaptive, cellular immunity. MHCPred implements robust statistical models for both Class I alleles (HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3301, HLA-A*6801, HLA-A*6802 and HLA-B*3501) and Class II alleles (HLA-DRB1*0101, HLA-DRB1*0401 and HLA-DRB1*0701).
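MHCPred's own models are not given in code here, but the general PLS approach can be sketched: encode each peptide position-by-residue and regress onto binding affinity. The one-hot encoding, the peptide set, and the affinities below are all invented for illustration.

```python
# Sketch of a PLS regression for peptide-MHC binding prediction.
# Encoding and data are illustrative, not MHCPred's training set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    """One-hot encode a 9-mer: 9 positions x 20 residues."""
    x = np.zeros((9, 20))
    for i, aa in enumerate(peptide):
        x[i, AMINO.index(aa)] = 1.0
    return x.ravel()

rng = np.random.default_rng(3)
peptides = ["".join(rng.choice(list(AMINO), 9)) for _ in range(200)]
X = np.array([encode(p) for p in peptides])
y = rng.normal(6.0, 1.0, 200)             # synthetic pIC50 values

pls = PLSRegression(n_components=5).fit(X, y)
print("predicted pIC50:", pls.predict(X[:1])[0])
```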
Abstract:
This paper proposes a new method using radial basis function neural networks for the classification and recognition of tree species in forest inventories. The method computes wood volume from a set of easily obtained data. The results obtained improve on the classical and statistical models previously used.
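The paper's network is not specified in the abstract; a generic RBF classifier sketch (k-means centres, Gaussian features, linear readout) with invented tree features illustrates the family of models involved.

```python
# Minimal RBF classifier: Gaussian features around k-means centres,
# then a linear classifier. Features and species labels are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))             # e.g., scaled [height, crown diameter]
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # two synthetic tree species

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
width = 1.0

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

clf = LogisticRegression().fit(rbf_features(X), y)
print("training accuracy:", clf.score(rbf_features(X), y))
```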
Abstract:
We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output.
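Question 1 above, one error source or two, can be loosely illustrated as a one- versus two-component mixture comparison on overlap scores. This is an editorial sketch on simulated data, not the authors' method.

```python
# Fit one- and two-component distributions to simulated target/response
# overlap scores and let AIC arbitrate between the accounts.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
overlap = rng.beta(2, 2, size=400).reshape(-1, 1)   # proportion of shared phonemes

one = GaussianMixture(n_components=1, random_state=0).fit(overlap)
two = GaussianMixture(n_components=2, random_state=0).fit(overlap)
print("one-source AIC:", one.aic(overlap))
print("two-source AIC:", two.aic(overlap))
```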
Abstract:
Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience this loss, it could shock the entire economy. An example of such a case is the stock market crash of 1987. Furthermore, there has been a lot of recent interest regarding the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data utilized were the daily returns for the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory. We focus on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on the returns for very low probabilities on an excess. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Fréchet (thick-tailed), Gumbel (thin-tailed) or Weibull (no tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
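A minimal sketch of the block-maxima analysis behind such a study: fit a generalized extreme value (GEV) distribution to annual maxima of daily losses and inspect the shape parameter. The returns here are simulated, not the S&P 500 series; in scipy's parameterization, shape c < 0 corresponds to the heavy-tailed Fréchet case.

```python
# Fit a GEV to annual block maxima of simulated daily losses.
import numpy as np
from scipy.stats import genextreme, t as student_t

rng = np.random.default_rng(6)
returns = student_t.rvs(df=3, size=252 * 20, random_state=rng) * 0.01
losses = -returns

# Annual block maxima of daily losses (252 trading days per block).
maxima = losses.reshape(20, 252).max(axis=1)
c, loc, scale = genextreme.fit(maxima)
print("GEV shape c:", c, "-> Frechet-type (heavy tail)" if c < 0 else "")
```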
Abstract:
Run-off-road (ROR) crashes have increasingly become a serious concern for transportation officials in the State of Florida. These types of crashes have increased proportionally in recent years statewide and have been the focus of the Florida Department of Transportation. The goal of this research was to develop statistical models that can be used to investigate the possible causal relationships between roadway geometric features and ROR crashes on Florida's rural and urban principal arterials. In this research, Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) regression models were used to better model the excessive number of roadway segments with no ROR crashes. Since Florida covers a diverse area and has sixty-seven counties, it was divided into four geographical regions to minimize possible unobserved heterogeneity. Three years of crash data (2000–2002) for principal arterials on the Florida State Highway System were used. Several statistical models based on the ZIP and ZINB regression methods were fitted to predict the expected number of ROR crashes on urban and rural roads for each region. Each region was further divided into urban and rural areas, resulting in a total of eight crash models. A best-fit predictive model was identified for each of these eight models in terms of AIC values. The ZINB regression was found to be appropriate for seven of the eight models and the ZIP regression was found to be more appropriate for the remaining model. To achieve model convergence, some explanatory variables that were not statistically significant were included. Therefore, strong conclusions cannot be derived from some of these models. Given the complex nature of crashes, recommendations for additional research are made. The interaction of weather and human condition would be quite valuable in discerning additional causal relationships for these types of crashes. Additionally, roadside data should be considered and incorporated into future research of ROR crashes.
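A hedged sketch of the ZIP versus ZINB comparison, using statsmodels on simulated segment-level counts; the predictors (lane width, log AADT) and effect sizes are illustrative stand-ins, not the study's data.

```python
# Compare zero-inflated Poisson and negative binomial fits by AIC
# on simulated roadway-segment crash counts with excess zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (
    ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(7)
n = 400
lane_width = rng.normal(3.5, 0.3, n)
log_aadt = rng.normal(9.0, 0.5, n)
X = sm.add_constant(np.column_stack([lane_width, log_aadt]))

mu = np.exp(-8 + 0.9 * log_aadt)          # expected ROR crashes per segment
structural_zero = rng.random(n) < 0.4     # segments that never see ROR crashes
y = np.where(structural_zero, 0, rng.poisson(mu))

infl = np.ones((n, 1))                    # constant-only inflation model
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=infl).fit(maxiter=500, disp=False)
zinb_fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=infl).fit(maxiter=500, disp=False)
print("ZIP AIC:", zip_fit.aic, " ZINB AIC:", zinb_fit.aic)
```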
Abstract:
A major goal of the Comprehensive Everglades Restoration Plan (CERP) is to recover historical (pre-drainage) wading bird rookeries and reverse marked decreases in wading bird nesting success in Everglades National Park. To assess efforts to restore wading birds, a trophic hypothesis was developed that proposes seasonal concentrations of small fish and crustaceans (i.e., wading bird prey) were a key factor in historical wading bird success. Drainage of the Everglades has diminished these seasonal concentrations, leading to a decline in wading bird nesting and displacing the birds from their historical nesting locations. The trophic hypothesis predicts that restoring historical hydrological patterns to pre-drainage conditions will recover the timing and location of seasonally concentrated prey, ultimately restoring wading bird nesting and foraging to the southern Everglades. We identified a set of indicators based on small fish and crustaceans that can be predicted from hydrological targets and used to assess management success in regaining suitable wading bird foraging habitat. Small fish and crustaceans are key components of the Everglades food web; they are sensitive to hydrological management, track hydrological history with little time lag, and can be studied at the landscape scale. The seasonal hydrological variation of the Everglades that creates prey concentrations presents a challenge to interpreting monitoring data. To account for the variable hydrology of the Everglades in our assessment, we developed dynamic hydrological targets that respond to changes in prevailing regional rainfall. We also derived statistical relationships between density and hydrological drivers for species representing four different life-history responses to drought. Finally, we use these statistical relationships and hydrological targets to set restoration targets for prey density. We also describe a report-card methodology for communicating the results of model-based assessments to a broad audience.
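One simple way to picture the density-driver relationships and the target-setting step described above (purely illustrative, not the authors' fitted models):

```python
# Regress simulated small-fish density on a hydrological driver, then
# read a restoration target for density off the fitted curve.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
days_since_dry = rng.uniform(0, 1500, 120)        # hypothetical driver
log_density = 0.002 * days_since_dry + rng.normal(0, 0.5, 120)

fit = sm.OLS(log_density, sm.add_constant(days_since_dry)).fit()
target_days = 1000                                # hypothetical hydrological target
pred = fit.params[0] + fit.params[1] * target_days
print("restoration target, fish density (per m^2):", np.exp(pred))
```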
Abstract:
As users continually request additional functionality, software systems will continue to grow in their complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults, as a consequence, has been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates at predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, the task of predicting across projects is difficult to achieve. Prediction models obtained from one project experience are ineffective in their ability to predict fault-prone modules when applied to other projects. Hence, taking full benefit of the existing work in software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
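The cross-project difficulty can be illustrated with a toy adaptation step: a fault-prediction model trained on one project transfers better when the target project's metrics are re-standardized against their own distribution. The adaptation shown is a generic stand-in; the dissertation's actual approach is not detailed in the abstract.

```python
# Train a fault predictor on project A, then adapt to project B by
# standardizing B's metrics with B's own statistics. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

def make_project(scale, n=300):
    metrics = rng.normal(0, scale, size=(n, 3))   # e.g., LOC, complexity, churn
    faulty = (metrics.sum(axis=1) + rng.normal(0, scale, n) > 0).astype(int)
    return metrics, faulty

Xa, ya = make_project(scale=1.0)
Xb, yb = make_project(scale=5.0)                  # different metric scales

clf = LogisticRegression().fit((Xa - Xa.mean(0)) / Xa.std(0), ya)
# Adaptation: standardize project B by its own statistics before predicting.
Xb_std = (Xb - Xb.mean(0)) / Xb.std(0)
print("cross-project accuracy after adaptation:", clf.score(Xb_std, yb))
```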
Abstract:
The purpose of this study was threefold: first, to investigate variables associated with learning and performance as measured by the National Council Licensure Examination for Registered Nurses (NCLEX-RN); second, to validate the predictive value of the Assessment Technologies Institute (ATI) achievement exit exam; and lastly, to provide a model that could be used to predict performance on the NCLEX-RN, with implications for admission and curriculum development. The study was based on school learning theory, which implies that acquisition in school learning is a function of aptitude (pre-admission measures), opportunity to learn, and quality of instruction (program measures). Data utilized were from 298 graduates of an associate degree nursing program in the Southeastern United States. Of the 298 graduates, 142 were Hispanic, 87 were Black, non-Hispanic, 54 were White, non-Hispanic, and 15 were reported as Other. The graduates took the NCLEX-RN for the first time during the years 2003–2005. This study was a predictive, correlational design that relied upon retrospective data. Point biserial correlations and chi-square analyses were used to investigate relationships between 19 selected predictor variables and the dichotomous criterion variable, NCLEX-RN. The correlation and chi-square findings indicated that men did better on the NCLEX-RN than women; Blacks had the highest failure rates, followed by Hispanics; older students were more likely to pass the exam than younger students; and students who passed the exam started and completed the nursing program with a higher grade point average than those who failed the exam. Using logistic regression, five statistical models that used variables associated with learning and student performance on the NCLEX-RN were tested with a model adapted from Bloom's (1976) and Carroll's (1963) school learning theories. The derived model was: NCLEX-RN success = f(Nurse Entrance Test score, advanced medical-surgical nursing course grade). The model demonstrates that student performance on the NCLEX-RN can be predicted by one pre-admission measure and one program measure. The Assessment Technologies Institute achievement exit exam (an outcome measure) had no predictive value for student performance on the NCLEX-RN. The model developed accurately predicted 94% of students' successful performance on the NCLEX-RN.
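The reported two-predictor model can be sketched as a logistic regression; the data and coefficients below are simulated for illustration, not the study's.

```python
# Logistic model of NCLEX-RN success from a pre-admission score and a
# program course grade. All numbers are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 298                                   # cohort size from the abstract
entrance_test = rng.normal(75, 10, n)
medsurg_grade = rng.normal(3.0, 0.5, n)

logit = -16 + 0.15 * entrance_test + 2.5 * medsurg_grade
passed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([entrance_test, medsurg_grade]))
fit = sm.Logit(passed, X).fit(disp=False)
print(fit.summary())
```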
Abstract:
Rates of HIV infection continue to climb among minority populations and men who have sex with men (MSM), with African American/Black MSM being especially impacted. Numerous studies have found HIV transmission risk to be associated with many health and social disparities resulting from larger environmental and structural forces. Using anthropological and social environment-based theories of resilience that focus on individual agency and larger social and environmental structures, this dissertation employed a mixed methods design to investigate resilience processes among African American/Black MSM. Quantitative analyses compared African American/Black (N=108) and Caucasian/White (N=250) MSM who participated in a previously conducted randomized controlled trial (RCT) of sexual and substance use risk reduction interventions. At RCT study entry, using past 90 day recall periods, there were no differences in unprotected sex frequency; however, African American/Black MSM reported higher frequencies of days high (P<0.001), drugs and sex used in combination (P<0.001), and substance dependence (P<0.001), and lower levels of social support (P<0.024), compared to Caucasian/White MSM. At 12-month follow-up, multi-level statistical models found that African American/Black MSM reduced their frequencies of days high and unprotected sex at greater rates than Caucasian/White MSM (P<0.001). Qualitative data collected among a sub-sample of African American/Black MSM from the RCT (N=21) described the men's experiences of living with multiple health and social disparities and the importance of RCT study assessments in facilitating reductions in risk behaviors. A cross-case analysis showed different resilience processes undertaken by men who experienced low socioeconomic status, little family support, and homophobia (N=16) compared to those who did not (N=5). The dissertation concludes that resilience processes to HIV transmission risk and related health and social disparities among African American/Black MSM vary and are dependent on specific social environmental factors, including social relationships, structural homophobia, and access to social, economic, and cultural capital. Men define for themselves what it means to be resilient within their social environment. These conclusions suggest that both individual and structural-level resilience-based HIV prevention interventions are needed.
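The multi-level comparison described can be sketched as a mixed model with a random intercept per participant and a time-by-group interaction; all data below are simulated for illustration.

```python
# Mixed-effects sketch: change in a risk behaviour from baseline to
# 12 months, with a time-by-group interaction. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 358                                              # 108 + 250 participants
group = np.repeat(rng.integers(0, 2, n), 2)          # hypothetical group indicator
time = np.tile([0, 1], n)                            # 0 = baseline, 1 = 12 months
person = np.repeat(np.arange(n), 2)
days_high = 30 - 10 * time - 8 * time * group + rng.normal(0, 5, 2 * n)

df = pd.DataFrame(dict(days_high=days_high, time=time, group=group, person=person))
fit = smf.mixedlm("days_high ~ time * group", df, groups=df["person"]).fit()
print(fit.summary())
```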
Public Service Motivation in Public and Nonprofit Service Providers: The Cases of Belarus and Poland
Abstract:
The work motivation construct is central to the theory and practice of many social science disciplines. Yet, because validated measures suitable for deep cross-national comparison have only recently become available, studies that contrast different administrative regimes remain scarce. This study represents an initial empirical effort to validate the Public Service Motivation (PSM) instrument proposed by Kim and colleagues (2013) in a previously unstudied context. The two former communist countries analyzed in this dissertation, Belarus and Poland, followed diametrically opposite development strategies: a fully decentralized administrative regime in Poland and a highly centralized regime in Belarus. The employees (n = 677) of public and nonprofit organizations in the border regions of Podlaskie Wojewodstwo (Poland) and Hrodna Voblasc (Belarus) are the subjects of study. Confirmatory factor analysis revealed three dimensions of public service motivation in the two regions: compassion, self-sacrifice, and attraction to public service. The statistical models tested in this dissertation suggest that nonprofit sector employees exhibit higher levels of PSM than their public sector counterparts. Nonprofit sector employees also reveal a similar set of values and work attitudes across the countries. Thus, the study concludes that in terms of PSM, employees of nonprofit organizations constitute a homogenous group that exists atop the administrative regimes. However, the findings suggest significant differences between public sector agencies across the two countries. Contrary to expectations, the data suggest that organization centralization in Poland is equal to, or for some items even higher than, that of Belarus. We can conclude that the absence of administrative decentralization of service provision in a country does not necessarily undermine decentralized practices within organizations. Further analysis reveals strong correlations between organization centralization and PSM for the Polish sample. Meanwhile, in Belarus, correlations between organization centralization items and PSM are weak and mostly insignificant. The analysis indicates other factors beyond organization centralization that significantly impact PSM in both sectors. PSM of the employees in the studied region is highly correlated with their participation in religious practices, political parties, or labor unions, as well as the location of their organization in a capital city and the type of social service provided.
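The per-country correlational analysis can be illustrated with a small simulated example (the data frame and effect sizes are invented, not the dissertation's):

```python
# Pearson correlations between an organization-centralization measure and
# a PSM score, computed separately per country. Data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(12)
n = 677                                   # sample size from the abstract
country = rng.choice(["Poland", "Belarus"], n)
centralization = rng.normal(0, 1, n)
psm = np.where(country == "Poland", 0.5 * centralization, 0.05 * centralization)
psm = psm + rng.normal(0, 1, n)

df = pd.DataFrame(dict(country=country, centralization=centralization, psm=psm))
for name, sub in df.groupby("country"):
    r, p = pearsonr(sub["centralization"], sub["psm"])
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```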
Abstract:
The purpose of this study was to examine the effects of the use of technology on students' mathematics achievement, particularly the Florida Comprehensive Assessment Test (FCAT) mathematics results. Eleven schools within the Miami-Dade County Public School System participated in a pilot program on the use of Geometer's Sketchpad (GSP). Three of these schools were randomly selected for this study. Each school sent a teacher to a summer in-service training program on how to use GSP to teach geometry. In each school, the GSP class and a traditional geometry class taught by the same teacher were the study participants. Students' mathematics FCAT results were examined to determine if the GSP produced any effects. Students' scores were compared based on assignment to the control or experimental group as well as gender and SES. SES measurements were based on whether students qualified for free lunch. The findings of the study revealed a significant difference in the FCAT mathematics scores of students who were taught geometry using GSP compared to those who used the traditional method. No significant differences existed between the FCAT mathematics scores of the students based on SES. Similarly, no significant differences existed between the FCAT scores based on gender. In conclusion, the use of technology (particularly GSP) is likely to boost students' FCAT mathematics test scores. The findings also show that the use of GSP may be able to close known gender and SES related achievement gaps. The results of this study support policy changes in the way geometry is taught to 10th grade students in Florida's public schools.
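The core group comparison can be sketched as an independent-samples t-test on simulated scores; the score scale and sample sizes here are hypothetical.

```python
# Compare simulated FCAT-style scores for a GSP group versus a
# traditional-instruction group with a two-sample t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(13)
gsp_scores = rng.normal(320, 40, 90)      # hypothetical scale scores
traditional = rng.normal(300, 40, 90)

t, p = ttest_ind(gsp_scores, traditional)
print(f"t = {t:.2f}, p = {p:.4f}")
```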
Abstract:
The soursop (A. muricata) is a fruit rich in minerals, especially potassium. The commercialization of soursop, both fresh (in natura) and processed, has increased greatly in recent years. Drying fruit pulp to obtain powdered pulp has been studied as an alternative for ensuring the quality of dehydrated products at a low production cost. The high concentration of reducing sugars present in fruits causes problems of agglomeration and retention during fruit pulp drying in spouted-bed dryers. On the other hand, promising results are reported in the literature for spouted-bed drying of milk and of fruit pulp with added milk. Based on these results, this work studied the drying of soursop pulp with added milk in a spouted bed with inert particles. The tests followed a 2^4 factorial design evaluating the effects of milk concentration (30 to 50% m/m), drying air temperature (70 to 90 °C), intermittency time (10 to 14 min), and the ratio of air velocity to minimum spouting velocity (1.2 to 1.5) on the powder production rate, powder moisture, yield, drying rate and thermal efficiency of the process. Physical and chemical analyses were performed on the mixtures, the powders and the mixtures reconstituted by rehydrating the powders. First-order statistical models were fitted to the data for production rate, yield and thermal efficiency; these models were statistically significant and predictive. A thermal efficiency greater than 40% was reached under the conditions of 50% milk in the mixture, a drying air temperature of 70 °C and a ratio of 1.5 between the air velocity and the minimum spouting velocity. The intermittency time showed no significant effect on the analyzed variables. The final product had moisture in the range of 4.18% to 9.99% and water activity between 0.274 and 0.375. The mixtures reconstituted by rehydrating the powders maintained the same characteristics as the natural blends.
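The 2^4 factorial analysis can be sketched as follows: build the coded design matrix and fit a first-order model to the response. The effect sizes are invented to mirror the reported pattern (efficiency up with milk fraction and velocity ratio, down with temperature, intermittency null), not the study's data.

```python
# Coded 2^4 factorial design and a first-order OLS fit to a simulated
# thermal-efficiency response.
import itertools
import numpy as np
import statsmodels.api as sm

# Coded levels (-1, +1) for: milk %, air temperature, intermittency, velocity ratio.
design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

rng = np.random.default_rng(14)
efficiency = 35 + 4 * design[:, 0] - 3 * design[:, 1] + 3 * design[:, 3]
efficiency = efficiency + rng.normal(0, 1, len(design))

fit = sm.OLS(efficiency, sm.add_constant(design)).fit()
print(fit.params)          # first-order effect estimates
```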