20 results for GOODNESS-OF-FIT

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Goodness-of-fit tests have been studied by many researchers. Among them, an alternative statistical test for uniformity was proposed by Chen and Ye (2009). The test was used by Xiong (2010) to test normality for the case in which both the location and scale parameters of the normal distribution are known. The purpose of the present thesis is to extend the result to the case in which the parameters are unknown. A table of critical values for the test statistic is obtained using Monte Carlo simulation. The performance of the proposed test is compared with that of the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Monte Carlo simulation results show that the proposed test performs better than the Kolmogorov-Smirnov test in many cases. The Shapiro-Wilk test remains the most powerful test, although in some cases the test proposed in the present research performs better.
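
A minimal sketch of the Monte Carlo calibration step described above, assuming a generic EDF-type discrepancy as a stand-in for the Chen-Ye statistic (whose exact form is not reproduced here). Parameters are estimated from each simulated sample, mirroring the unknown-parameter case:

```python
import numpy as np
from scipy.stats import norm

def test_statistic(x):
    """Stand-in statistic: largest gap between the empirical CDF and the
    normal CDF fitted with estimated (not known) parameters."""
    mu, sigma = x.mean(), x.std(ddof=1)
    u = np.sort(norm.cdf(x, loc=mu, scale=sigma))
    ecdf = np.arange(1, len(u) + 1) / len(u)
    return np.max(np.abs(ecdf - u))

def monte_carlo_critical_value(n, alpha=0.05, reps=10_000, seed=0):
    """Approximate the (1 - alpha) quantile of the null distribution by
    simulating many normal samples of size n."""
    rng = np.random.default_rng(seed)
    stats = [test_statistic(rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(stats, 1 - alpha)

print(monte_carlo_critical_value(n=30))  # one cell of a critical-value table
```

Repeating the final call over a grid of sample sizes and significance levels yields the kind of critical-value table the thesis reports.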

Relevance:

100.00%

Publisher:

Abstract:

Five models delineating the person-situation fit controversy were developed and tested. Hypotheses were tested to determine the linkages between vision congruence, empowerment, locus of control, job satisfaction, organizational commitment, and employee performance. Vision was defined as a mental image of a possible and desirable future state of the organization. Data were collected from 213 employees in a major flower import company. Participants were from various organizational levels and ethnic backgrounds. The data collection procedure consisted of three parts. First, a profile analysis instrument, developed using a Q-sort-based technique, was used to measure the vision congruence between the CEO and each employee. Second, employees completed a survey instrument that included scales measuring empowerment, locus of control, job satisfaction, organizational commitment, and social desirability. Third, supervisor performance ratings were gathered from employee files. Data analysis used Kendall's tau to measure the correlation between the CEO's vision and each employee's vision. Path analyses were conducted using the EQS structural equation program to test the five theoretical models for goodness-of-fit. Regression analysis was employed to test whether locus of control acted as a moderator variable. The results showed that vision congruence is significantly related to job satisfaction and employee commitment, and that perceived empowerment acts as an intervening variable affecting employee outcomes. The study also found that people with an internal locus of control were more likely to feel empowered than were those with an external locus of control. Implications of these findings for both researchers and practitioners are discussed, and suggestions for future research directions are provided.
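
A minimal sketch of the congruence-measurement step, assuming each person's Q-sort reduces to a ranked ordering of the same set of vision statements (the eight-item rankings below are hypothetical):

```python
from scipy.stats import kendalltau

# Hypothetical orderings of eight vision statements from the Q-sort exercise.
ceo_ranking = [1, 2, 3, 4, 5, 6, 7, 8]
employee_ranking = [2, 1, 3, 5, 4, 6, 8, 7]

# Kendall's tau measures rank agreement: +1 is identical, -1 fully reversed.
tau, p_value = kendalltau(ceo_ranking, employee_ranking)
print(f"vision congruence (Kendall's tau) = {tau:.3f}, p = {p_value:.3f}")
```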

Relevance:

100.00%

Publisher:

Abstract:

Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), negative binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated negative binomial (ZINB), were fitted to the one-mile segment records for the individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, while the ZINB model was found to be more appropriate for six other categories. The overall results show that the negative binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, as they do for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs), which further account for the safety impacts of major geometric features, before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
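
A minimal sketch of the count-model comparison described above, fitted to hypothetical one-mile-segment data rather than the Florida records; statsmodels is assumed for the NB and ZINB fits, and the ZINB inflation component is left at its default (a constant):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
aadt = rng.uniform(1e3, 5e4, 500)            # hypothetical traffic exposure
mu = np.exp(-7 + 0.9 * np.log(aadt))         # nonlinear mean: ~ AADT^0.9
y = rng.negative_binomial(2, 2 / (2 + mu))   # over-dispersed crash counts
y[rng.random(500) < 0.2] = 0                 # inject excess zeros

X = sm.add_constant(np.log(aadt))            # log exposure as the regressor
nb = sm.NegativeBinomial(y, X).fit(disp=0)
zinb = ZeroInflatedNegativeBinomialP(y, X).fit(disp=0, maxiter=500)

# Lower AIC points to the better-fitting count model for this category.
print({"NB AIC": round(nb.aic, 1), "ZINB AIC": round(zinb.aic, 1)})
```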

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to test Lotka's law of scientific publication productivity in the field of Library and Information Studies (LIS), using the methodology outlined by Pao (1985). Lotka's law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%), respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value is 0.022758 and the calculated critical value is 0.026562. It was determined that the null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka's and Pao's procedure, could not be rejected. This study finds that literature in the field of Library and Information Studies does conform to Lotka's law with reliable results. As a result, Lotka's law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, nation). Lotka's law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and their impact on the appointment, tenure, and promotion process of academic librarians.
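
A minimal sketch of Pao's (1985) procedure applied to hypothetical author-productivity counts (not the study's 1,856-citation data set): the exponent n is estimated by least squares on the log-log data, c follows from Pao's truncated-sum formula, and Dmax is compared against the K-S critical value:

```python
import numpy as np

# Hypothetical counts: y[i] authors produced x[i] papers each.
x = np.arange(1, 11)
y = np.array([1062, 263, 120, 70, 46, 32, 24, 18, 15, 12])

# Lotka's law y_x = c / x^n: the log-log slope estimates -n.
n = -np.polyfit(np.log(x), np.log(y), 1)[0]

# Pao's closed-form estimate of c, truncating the infinite sum at P terms.
P = 20
c = 1 / (sum(1 / k**n for k in range(1, P))
         + 1 / ((n - 1) * P**(n - 1))
         + 1 / (2 * P**n)
         + n / (24 * (P - 1)**(n + 1)))

obs = np.cumsum(y) / y.sum()      # observed cumulative proportions
exp = np.cumsum(c / x**n)         # expected proportions under Lotka's law
dmax = np.max(np.abs(obs - exp))
crit = 1.22 / np.sqrt(y.sum())    # asymptotic K-S critical value, alpha = 0.10
print(f"n={n:.2f}, c={c:.4f}, Dmax={dmax:.4f}, critical={crit:.4f}")
```

The data conform to the law when Dmax stays below the critical value, which is the comparison reported above.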

Relevance:

100.00%

Publisher:

Abstract:

This research sought to understand the role that differentially assessed lands (lands in the United States given tax breaks in return for a guarantee that they remain in agriculture) play in influencing urban growth. Our method was to calibrate the SLEUTH urban growth model under two different conditions. The first used an excluded layer that ignored such lands, effectively rendering them available for development. The second treated those lands as totally excluded from development. Our hypothesis was that excluding those lands would yield better metrics of fit with past data. Our results validate the hypothesis: two different goodness-of-fit metrics both yielded higher values when differentially assessed lands were treated as excluded. This suggests that, at least in our study area, differential assessment, which protects farm and ranch lands for tenuous periods of time, has indeed allowed farmland to resist urban development. Including differentially assessed lands also yielded very different calibrated coefficients of growth, as the model tried to account for the same growth patterns over two very different excluded areas. Excluded-layer design can therefore greatly affect model behavior. Since differentially assessed lands are quite common throughout the United States and are often ignored in urban growth modeling, the findings of this research can assist other urban growth modelers in designing excluded layers that result in more accurate model calibration and thus forecasting.
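
A minimal sketch of the two excluded-layer designs compared above, using a hypothetical raster; by SLEUTH convention a cell value of 100 marks land fully excluded from development:

```python
import numpy as np

water = np.zeros((200, 200), dtype=np.uint8)        # permanent exclusions
diff_assess = np.zeros((200, 200), dtype=np.uint8)  # differentially assessed
water[:20, :] = 1
diff_assess[80:140, 50:150] = 1

# Condition 1: differentially assessed lands ignored (open to development).
excluded_v1 = np.where(water == 1, 100, 0)
# Condition 2: those lands treated as totally excluded from development.
excluded_v2 = np.where((water == 1) | (diff_assess == 1), 100, 0)

print(excluded_v1.sum(), excluded_v2.sum())  # total resistance per design
```

Calibrating the model once with each layer and comparing the resulting goodness-of-fit metrics is the experiment the paragraph describes.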

Relevance:

100.00%

Publisher:

Abstract:

In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida differ from those of the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010, for both total and fatal-and-injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and one for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each subtype of roadway segment, intersection, and ramp, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit measures. Plots of the SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of full SPFs, which include both traffic and geometric variables, in two major applications of SPFs: crash prediction and identification of high-crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop than full SPFs.
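
A minimal sketch of the three comparison measures named above, computed for hypothetical observed and SPF-predicted crash counts; the Freeman-Tukey R² here follows the common variance-stabilizing-transform definition:

```python
import numpy as np

def mad(obs, pred):
    return np.mean(np.abs(obs - pred))    # mean absolute deviance

def mspe(obs, pred):
    return np.mean((obs - pred) ** 2)     # mean square prediction error

def r2_freeman_tukey(obs, pred):
    f = np.sqrt(obs) + np.sqrt(obs + 1)   # Freeman-Tukey transform of counts
    e = f - np.sqrt(4 * pred + 1)         # transformed residuals
    return 1 - np.sum(e ** 2) / np.sum((f - f.mean()) ** 2)

obs = np.array([0, 1, 3, 2, 5, 0, 4, 2])  # hypothetical crash counts
pred = np.array([0.4, 1.2, 2.5, 2.1, 4.3, 0.6, 3.8, 1.9])
print(mad(obs, pred), mspe(obs, pred), r2_freeman_tukey(obs, pred))
```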

Relevance:

100.00%

Publisher:

Abstract:

Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concerns about the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by applying 3 molecular descriptors, namely the Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), the Number of Chlorine atoms (NCl), and the Number of Carbon atoms (NC), in Multiple Linear Regression (MLR) analysis. The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness, and predictability. The predicted Log P values of DBPs from the QSAR models were significant, with correlation coefficients R² from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R² by approximately 2% to 13% for the different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and to determine the most influential parameters in connection with Log P prediction. The QSAR models developed in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkanes, halogenated alkenes, halogenated aromatics, halogenated aldehydes, halogenated ketones, and halogenated carboxylic acids, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable for predicting similar DBP compounds within the same applicability domain. The selection and integration of the various methodologies developed in this research may also benefit future research in similar fields.
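
A minimal sketch of the MLR model form and LOO validation described above, with hypothetical descriptor values standing in for the 173-compound data set (scikit-learn is assumed):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Columns: ELUMO (eV), number of chlorine atoms, number of carbon atoms.
X = np.array([[0.5, 1, 1], [-0.2, 2, 1], [-0.8, 3, 1], [0.1, 1, 2],
              [-0.5, 2, 2], [-1.1, 3, 2], [0.3, 1, 3], [-0.9, 4, 1]])
log_p = np.array([0.9, 1.3, 1.8, 1.2, 1.7, 2.2, 1.5, 2.1])  # hypothetical

model = LinearRegression().fit(X, log_p)            # Log P ~ ELUMO + NCl + NC
loo_pred = cross_val_predict(LinearRegression(), X, log_p, cv=LeaveOneOut())

q2_loo = 1 - (np.sum((log_p - loo_pred) ** 2)
              / np.sum((log_p - log_p.mean()) ** 2))  # cross-validated Q^2
print(f"R^2 = {model.score(X, log_p):.3f}, Q^2(LOO) = {q2_loo:.3f}")
```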

Relevance:

100.00%

Publisher:

Abstract:

The importance of checking the normality assumption in most statistical procedures, especially parametric tests, cannot be overemphasized, as the validity of the inferences drawn from such procedures usually depends on the validity of this assumption. Numerous methods have been proposed by different authors over the years, some popular and frequently used, others less so. This study addresses the performance of eighteen of the available tests for different sample sizes and significance levels, and for a number of symmetric and asymmetric distributions, by conducting a Monte Carlo simulation. The results showed that considerable power is not achieved for symmetric distributions when the sample size is less than one hundred, and for such distributions the kurtosis test is most powerful, provided the distribution is leptokurtic or platykurtic. The Shapiro-Wilk test remains the most powerful test for asymmetric distributions. We conclude that different tests are suitable under different characteristics of alternative distributions.
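
A minimal sketch of the power simulation described above, comparing the Shapiro-Wilk test with a kurtosis-based test against one asymmetric and one symmetric leptokurtic alternative (only two of the eighteen tests, at a single significance level, for brevity):

```python
import numpy as np
from scipy import stats

def power(test, sampler, n=50, alpha=0.05, reps=2_000, seed=0):
    """Estimated power: the proportion of simulated samples rejected."""
    rng = np.random.default_rng(seed)
    return sum(test(sampler(rng, n)) < alpha for _ in range(reps)) / reps

shapiro = lambda x: stats.shapiro(x).pvalue
kurtosis = lambda x: stats.kurtosistest(x).pvalue

lognormal = lambda rng, n: rng.lognormal(size=n)  # asymmetric alternative
laplace = lambda rng, n: rng.laplace(size=n)      # symmetric, leptokurtic

for name, sampler in [("lognormal", lognormal), ("laplace", laplace)]:
    print(name, power(shapiro, sampler), power(kurtosis, sampler))
```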

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to test Lotka's law of scientific publication productivity in the field of Library and Information Studies (LIS), using the methodology outlined by Pao (1985). Lotka's law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations found using the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%), respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value is 0.022758 and the calculated critical value is 0.026562. It was determined that the null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka's and Pao's procedure, could not be rejected. This study finds that literature in the field of Library and Information Studies does conform to Lotka's law with reliable results. As a result, Lotka's law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, nation). Lotka's law can be employed as an empirically proven analytical tool to establish publication productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and their impact on the appointment, tenure, and promotion process of academic librarians.

Relevance:

100.00%

Publisher:

Abstract:

In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida differ from those of the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010, for both total and fatal-and-injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and one for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each subtype of roadway segment, intersection, and ramp, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit measures. Plots of the SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of full SPFs, which include both traffic and geometric variables, in two major applications of SPFs: crash prediction and identification of high-crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop than full SPFs.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of the present research is to demonstrate the influence of a fair price (independent of the subjective evaluation of the price magnitude) on buyers' willingness to purchase. The perceived fairness of a price is conceived to have three components: perceived equity, perceived need, and inferred compliance of the seller with the process rules of pricing. These components reflect the theories of Distributive Justice (as adjusted for conditions of need) and Procedural Justice. The effect of the three components of a fair price on willingness to purchase is depicted in a theoretical causal chain model. Based on the theories of Dissonance and Attribution, conditions of inequity and need activate concerns for Procedural Justice. Under conditions of inequity and need, buyers tend to infer that the seller has not complied with generally accepted pricing practices, thus violating the social norms of Procedural Justice. Inferred violations of Procedural Justice influence the buyer's attitude toward the seller. As predicted by the Theory of Reasoned Action, attitude is then positively related to willingness to purchase. The model was tested with a survey-based experiment conducted with 408 respondents. Two levels each of equity and need were manipulated with scenarios, a common research method in studies of Distributive and Procedural Justice. The data were analyzed with a structural equation model using LISREL. Although the effect of the "need" manipulation was insignificant, the results indicated a good fit of the model (Chi-square = 281, Degrees of Freedom = 104, Goodness of Fit Index = .924). The conclusion is that the fairness of a price does have a significant effect on willingness to purchase, independent of the subjective evaluation of the objective price.
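
For reference, the reported statistics correspond to a normed chi-square, a common heuristic in LISREL-era model evaluation under which values below 3 are generally read as acceptable fit:

$$\chi^2 / df = 281 / 104 \approx 2.70$$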

Relevance:

100.00%

Publisher:

Abstract:

A challenge facing nutrition care providers and the Chinese community is how to improve and maintain dietary adequacy (DA) and quality of life (QoL) in older Chinese Americans. Little is known about the factors contributing to DA and the relationships between DA and QoL among community-dwelling older Chinese adults in South Florida. A DA model and a QoL model were hypothesized. Structured interviews with 100 Chinese Floridians, ages ≥60, provided data to test the hypothesized models using structural equation modeling. Participants (mean age ± SD = 70.9 ± 6.8 years) included 59% females, 98% foreign-born, 23% non-English speakers, and 68% residents of Florida for 20 years or more. The findings supported the study hypotheses: an excellent goodness-of-fit of the DA model (χ²/DF (7) = .286; CFI = 1.000; TLI = 1.704; NFI = .934; RMSEA < .001, 90% CI < .0001 to < .001; SRMR = .033; AIC = 30.000; BIC = 66.472) and an excellent goodness-of-fit of the QoL model (χ²/DF (6) = .811; CFI = 1.000; TLI = 1.013; NFI = .979; RMSEA < .001, 90% CI < .001 to .116; SRMR = .0429; AIC = 34.869; BIC = 73.946). The DA model consisted of a structure of four indicators (Body Mass Index, food practices, diet satisfaction, and appetite) and one intervening variable (combining nutrient adequacy with nutritional risk). BMI was the strongest, most reliable indicator of DA, with the highest predictability coefficient (.63) and the ability to differentiate between participants with different DA levels. The QoL model consisted of a two-dimensional construct with one indicator (physical function) and one intervening variable (combining loneliness with social resources, depression, social function, and mental health). Physical function had the strongest predictability coefficient (.89), while other indicators contributed to QoL indirectly. When the DA model is integrated into the QoL model, DA appears to influence QoL via indirect pathways. It is necessary to include a precise measure of BMI as the basis for assessing DA in this population. Important goals of dietary interventions should be improving physical function and alleviating social and emotional isolation.
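
For reference, the RMSEA values reported above follow the standard sample definition, which bottoms out at zero whenever χ² does not exceed the model's degrees of freedom, as is the case for both models here (χ²/DF well below 1):

$$\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2 - df}{df\,(N - 1)},\; 0\right)}$$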

Relevance:

100.00%

Publisher:

Abstract:

Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: (a) increase the efficiency of the portfolio optimization process, (b) implement large-scale optimizations, and (c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulations. Instead of representing the expected returns as deterministic values, they are assigned simulated values based on their historical measures. The time series of the securities are fitted to probability distributions that match their characteristics, using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P 500 securities as the base, 2,000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Choosing Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark, the new greedy technique clearly outperforms the others on samples of the S&P 500 and the Russell 1000 securities. The resulting improvements in performance are consistent among five securities-selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
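
A minimal sketch of the distribution-fitting step described above: candidate families are fitted to a hypothetical return series and ranked by an Anderson-Darling statistic computed from each fitted CDF (scipy's built-in anderson covers only a few fixed families, so a generic A² is computed directly):

```python
import numpy as np
from scipy import stats

def anderson_darling(x, dist, params):
    """A^2 for an arbitrary fitted continuous distribution."""
    n = len(x)
    u = dist.cdf(np.sort(x), *params)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=5, size=1_000)  # hypothetical daily returns

candidates = {"normal": stats.norm, "t": stats.t, "logistic": stats.logistic}
scores = {name: anderson_darling(returns, d, d.fit(returns))
          for name, d in candidates.items()}
print(min(scores, key=scores.get), scores)  # lowest A^2 = best-fitting family
```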

Relevance:

100.00%

Publisher:

Abstract:

Anxiety sensitivity is a multifaceted cognitive risk factor currently being examined in relation to anxiety and depression. The paucity of research on the relative contributions of the facets of anxiety sensitivity to anxiety and depression, coupled with variations in existing findings, indicates that these relations remain inadequately understood. In the present study, the relations between the facets of anxiety sensitivity, anxiety, and depression were examined in 730 Hispanic-Latino and European-American youth referred to an anxiety specialty clinic. Youth completed the Childhood Anxiety Sensitivity Index, the Revised Children's Manifest Anxiety Scale, and the Children's Depression Inventory. The factor structure of the Childhood Anxiety Sensitivity Index was examined using ordered-categorical confirmatory factor analytic techniques. Goodness-of-fit criteria indicated that a two-factor model fit the data best. The identified facets of anxiety sensitivity were Physical/Mental Concerns and Social Concerns. Support was also found for cross-ethnic equivalence of the two-factor model across Hispanic-Latino and European-American youth. Structural equation modeling was used to examine models involving anxiety sensitivity, anxiety, and depression. Results indicated that an overall measure of anxiety sensitivity was positively associated with both anxiety and depression, while the facets of anxiety sensitivity showed differential relations to anxiety and depression symptoms. Both facets were related to overall anxiety and its symptom dimensions, the exception being that Social Concerns was not related to physiological anxiety symptoms. Physical/Mental Concerns was strongly associated with overall depression and with all depression symptom dimensions, whereas Social Concerns was not significantly associated with depression or its symptom dimensions. These findings highlight that anxiety sensitivity's relations to youth psychiatric symptoms are complex. The results suggest that focusing on anxiety sensitivity's facets is important to fully understand its role in psychopathology. Clinicians may want to target all facets of anxiety sensitivity when treating anxious youth. However, in the context of depression, it might be sufficient for clinicians to target Physical/Mental Incapacitation Concerns.
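
For reference, a minimal statement of the selected two-factor measurement model in standard CFA notation, where each CASI item loads on exactly one of the two correlated factors:

$$x_i = \lambda_{i1}\,\xi_{\text{phys/mental}} + \lambda_{i2}\,\xi_{\text{social}} + \delta_i, \qquad \operatorname{corr}(\xi_{\text{phys/mental}},\, \xi_{\text{social}}) = \phi$$

with exactly one of the two loadings free per item i and δᵢ the item's unique factor.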

Relevance:

100.00%

Publisher:

Abstract:

Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s, cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change. Like other models, however, it must be fed important information about which particular lands in the modeled area are available for development. Some of these lands fall into categories intended to exclude urban growth that are difficult to quantify, since their function is dictated by policy. One such category comprises voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Since the programs are voluntary, today's excluded lands may become available for development at some point in the future. Mapping the shifting mosaic of parcels enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act into SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results not only demonstrate a successful execution of this method but also yield high goodness-of-fit metrics, both for the calibration of enrollment termination and for the urban growth modeling itself.
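
A minimal sketch of the excluded-layer construction described above, using a hypothetical parcel mask: enrolled Williamson Act parcels receive a resistance value discounted by an assumed probability that enrollment terminates before the forecast horizon:

```python
import numpy as np

grid_shape = (300, 300)
enrolled = np.zeros(grid_shape, dtype=bool)   # hypothetical enrollment mask
enrolled[100:180, 60:200] = True

p_terminate = 0.15                            # assumed non-renewal probability
resistance = np.zeros(grid_shape, dtype=np.uint8)  # SLEUTH excluded layer
resistance[enrolled] = round(100 * (1 - p_terminate))

print(resistance.max(), int(enrolled.sum()))  # 85, number of enrolled cells
```

Feeding this graded layer (rather than a binary one) into SLEUTH's forecasting routine is one way to encode the assumption that today's excluded parcels may become available later.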