420 results for Goodness
Abstract:
The L-moments based index-flood procedure was successfully applied to Regional Flood Frequency Analysis (RFFA) for the Island of Newfoundland in 2002, using data up to 1998. This thesis considered both Labrador and the Island of Newfoundland, using the L-moments index-flood method with flood data up to 2013. For Labrador, the homogeneity test showed that Labrador can be treated as a single homogeneous region, and the generalized extreme value (GEV) distribution was found to be more robust than the other candidate frequency distributions. Drainage area (DA) is the only significant variable for estimating the index flood at ungauged sites in Labrador. In previous studies, the Island of Newfoundland has been treated either as four homogeneous regions (A, B, C and D) or as the two Water Survey of Canada sub-regions Y and Z. Homogeneous regions based on Y and Z were found to provide more accurate quantile estimates than those based on the four homogeneous regions. Goodness-of-fit test results showed that the GEV distribution is most suitable for the sub-regions; however, the three-parameter lognormal (LN3) distribution performed better in terms of robustness. The best-fitting regional frequency distribution from 2002 has now been updated with the latest flood data, but quantile estimates with the new data were not very different from those of the previous study. Overall, in terms of quantile estimation, in both Labrador and the Island of Newfoundland, the index-flood procedure based on L-moments is highly recommended, as it provided more consistent and accurate results than other techniques such as the regression-on-quantiles technique currently used by the government.
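The core of the index-flood computation summarized above lends itself to a short worked example. The sketch below (Python; the single-site simplification and all streamflow numbers are illustrative assumptions, not data from the thesis) estimates sample L-moments, fits a GEV by Hosking's rational approximation, and scales the growth factor by the at-site index flood. A full regional analysis would instead pool dimensionless data from all sites in the homogeneous region.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """Unbiased sample L-moments l1, l2 and L-skewness t3 (Hosking, 1990)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum(x * (i - 1)) / (n * (n - 1))
    b2 = np.sum(x * (i - 1) * (i - 2)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def gev_from_lmoments(l1, l2, t3):
    """GEV parameters via Hosking's rational approximation (his k convention)."""
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c**2
    alpha = l2 * k / ((1 - 2**(-k)) * gamma(1 + k))
    xi = l1 + alpha * (gamma(1 + k) - 1) / k
    return xi, alpha, k

def gev_quantile(F, xi, alpha, k):
    return xi + alpha * (1 - (-np.log(F))**k) / k

# Hypothetical annual-maximum series for one gauged site (m^3/s)
ams = np.array([212., 187., 340., 265., 198., 410., 231., 305., 178., 260.,
                295., 223., 388., 247., 201.])
index_flood = ams.mean()                          # at-site index flood
l1, l2, t3 = sample_lmoments(ams / index_flood)   # dimensionless series
xi, alpha, k = gev_from_lmoments(l1, l2, t3)
growth_100 = gev_quantile(1 - 1/100, xi, alpha, k)  # 100-yr growth factor
print(f"Q100 ~ {index_flood * growth_100:.0f} m^3/s")
```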
Abstract:
Recent discussion of whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance, or on reanalyzing a published data set with the aim of determining whether the question can be settled through goodness-of-fit statistics. This paper illustrates that the question cannot be solved by fitting models to data and assessing goodness-of-fit, because data on detection and discrimination performance can be indistinguishably fitted by models that assume either type of noise when each is coupled with a convenient form for the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data; it cannot disclose the nature of the noise. We also comment on some of the issues raised in the recent exchange on the topic, namely the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
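The identifiability problem the abstract describes can be made explicit with the standard Gaussian difference model (the notation below is assumed for illustration, not taken from the paper):

```latex
% Proportion correct for stimuli s1 < s2 under a Gaussian difference
% model with transducer f and noise function sigma:
\[
  P_c(s_1, s_2) \;=\; \Phi\!\left(
     \frac{f(s_2) - f(s_1)}{\sqrt{\sigma^{2}(s_1) + \sigma^{2}(s_2)}}
  \right)
\]
% A first-order expansion for nearby stimuli gives
\[
  d'(s) \;\approx\; \frac{f'(s)}{\sigma(s)}\,\Delta s ,
\]
% so the data constrain only the ratio f'/sigma: a fixed-noise model
% with a curved transducer and a variable-noise model with a linear
% transducer can generate the same predictions.
```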
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same whether the target is presented in the first or the second interval. Empirical data often show an “interval bias,” in that percentage correct differs significantly depending on whether the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
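A minimal simulation makes the mechanism concrete. The sketch below (Python; the sensitivity, indifference-zone width, and 80/20 guessing bias are illustrative assumptions, not the paper's estimates) draws interval observations, applies the difference rule outside an indifference zone, and guesses with a first-interval bias inside it; percentage correct then differs by signal interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_2afc(n_trials=200_000, d=1.0, delta=0.6, p_first=0.8):
    """Guessing-model sketch: decide on the difference D = X1 - X2 unless
    |D| falls inside the indifference zone, in which case guess with a
    bias p_first toward responding 'interval 1'."""
    pc = {}
    for signal_interval in (1, 2):
        mu = (d, 0.0) if signal_interval == 1 else (0.0, d)
        x1 = rng.normal(mu[0], 1.0, n_trials)
        x2 = rng.normal(mu[1], 1.0, n_trials)
        D = x1 - x2
        resp = np.where(D > 0, 1, 2)           # standard difference rule
        guess = np.abs(D) < delta              # trials inside the zone
        resp[guess] = np.where(rng.random(guess.sum()) < p_first, 1, 2)
        pc[signal_interval] = np.mean(resp == signal_interval)
    return pc

print(simulate_2afc())   # pc[1] > pc[2]: interval bias from biased guessing
```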
Abstract:
The purpose of this study was to test Lotka’s law of scientific publication productivity, using the methodology outlined by Pao (1985), in the field of Library and Information Studies (LIS). Lotka’s law has been sporadically tested in the field over the past 30+ years, but the results of these studies are inconclusive due to the varying methods employed by the researchers. A data set of 1,856 citations retrieved from the ISI Web of Knowledge databases was studied. The values of n and c were calculated to be 2.1 and 0.6418 (64.18%), respectively. The Kolmogorov-Smirnov (K-S) one-sample goodness-of-fit test was conducted at the 0.10 level of significance. The Dmax value was 0.022758 and the calculated critical value was 0.026562. The null hypothesis, stating that there is no difference between the observed distribution of publications and the distribution obtained using Lotka’s and Pao’s procedure, could not be rejected. This study finds that the literature in the field of Library and Information Studies does conform to Lotka’s law, with reliable results. As a result, Lotka’s law can be used in LIS as a standardized means of measuring author publication productivity, which will lead to findings that are comparable on many levels (e.g., department, institution, national). Lotka’s law can be employed as an empirically proven analytical tool to establish publication-productivity benchmarks for faculty and faculty librarians. Recommendations for further study include (a) exploring the characteristics of the high and low producers; (b) finding a way to successfully account for collaborative contributions in the formula; and (c) a detailed study of institutional policies concerning publication productivity and its impact on the appointment, tenure, and promotion process of academic librarians.
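Pao's procedure can be sketched compactly. The code below (Python) fits the exponent n by ordinary least squares on the log-log counts, recovers c from the Euler-Maclaurin closed-form sum commonly attributed to Pao, and runs the one-sample K-S comparison using the asymptotic 0.10-level critical value 1.22/sqrt(N). The productivity table and the truncation point P = 20 are illustrative assumptions; a faithful replication would weight the regression as Pao specifies.

```python
import numpy as np

def fit_lotka(x, y, P=20):
    """Fit f(x) = c / x**n to author-productivity counts (Pao-style sketch).
    x: publication counts, y: number of authors with that many papers."""
    n_exp = -np.polyfit(np.log(x), np.log(y), 1)[0]   # least-squares slope
    # Closed-form estimate of the normalizing constant c
    k = np.arange(1, P)
    inv_c = (np.sum(1.0 / k**n_exp)
             + 1.0 / ((n_exp - 1) * P**(n_exp - 1))
             + 1.0 / (2.0 * P**n_exp)
             + n_exp / (24.0 * (P - 1)**(n_exp + 1)))
    return n_exp, 1.0 / inv_c

def ks_test(x, y, n_exp, c, coeff=1.22):
    """One-sample K-S: observed vs Lotka-expected cumulative proportions.
    coeff/sqrt(N) is the asymptotic critical value (1.22 -> 0.10 level)."""
    obs = np.cumsum(y) / np.sum(y)
    exp = np.cumsum(c / x**n_exp)
    return np.max(np.abs(obs - exp)), coeff / np.sqrt(np.sum(y))

# Hypothetical table: 60 authors wrote 1 paper, 15 wrote 2, and so on
x = np.array([1, 2, 3, 4, 5])
y = np.array([60, 15, 7, 4, 2])
n_exp, c = fit_lotka(x, y)
d_max, crit = ks_test(x, y, n_exp, c)
print(f"n={n_exp:.2f}, c={c:.4f}, Dmax={d_max:.4f}, crit={crit:.4f}")
```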
Abstract:
Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of this algorithm are to (a) increase the efficiency of the portfolio optimization process, (b) enable large-scale optimizations, and (c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach to the construction of a time-varying covariance matrix: a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model is applied to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulation. Instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to the probability distribution that best matches their characteristics according to the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Using the S&P 500 securities as the base, 2,000 simulated data sets are created by Monte Carlo simulation; in addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. With Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently on the market, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P 500 and Russell 1000 securities. The resulting improvements in performance are consistent across five securities-selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
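The greedy idea can be illustrated with a toy allocator. The sketch below (Python; the scenario generator, step size, objective, and all numbers are my illustrative assumptions, not the thesis's IDCC-GARCH pipeline) repeatedly assigns the next increment of capital to whichever asset most improves a mean-return-minus-VaR score over Monte Carlo scenarios.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_var(weights, scenarios, level=0.05):
    """Monte Carlo Value-at-Risk of the portfolio (positive number = loss)."""
    return -np.quantile(scenarios @ weights, level)

def greedy_weights(scenarios, step=0.02, risk_aversion=1.0):
    """Greedy allocation sketch: give the next `step` of capital to the
    asset that most improves (mean return - lambda * VaR)."""
    n_assets = scenarios.shape[1]
    w = np.zeros(n_assets)
    while w.sum() < 1.0 - 1e-9:
        best_score, best_j = -np.inf, 0
        for j in range(n_assets):
            trial = w.copy()
            trial[j] += step
            score = (scenarios @ trial).mean() \
                    - risk_aversion * mc_var(trial, scenarios)
            if score > best_score:
                best_score, best_j = score, j
        w[best_j] += step
    return w

# Hypothetical return scenarios (e.g., simulated from fitted distributions)
scenarios = rng.multivariate_normal(
    mean=[0.06, 0.04, 0.08], cov=np.diag([0.04, 0.01, 0.09]), size=10_000)
w = greedy_weights(scenarios)
print("weights:", w.round(2), " VaR:", round(mc_var(w, scenarios), 4))
```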
Abstract:
Mineralogical, geochemical, magnetic, and siliciclastic grain-size signatures of 34 surface sediment samples from the Mackenzie-Beaufort Sea Slope and Amundsen Gulf were studied in order to better constrain the redox status, detrital particle provenance, and sediment dynamics in the western Canadian Arctic. Redox-sensitive elements (Mn, Fe, V, Cr, Zn) indicate that modern sedimentary deposition within the Mackenzie-Beaufort Sea Slope and Amundsen Gulf took place under oxic bottom-water conditions, with more turbulent mixing conditions, and thus a well-oxygenated water column, prevailing within the Amundsen Gulf. The analytical data obtained, combined with multivariate statistical analyses (notably principal component and fuzzy c-means clustering analyses) and spatial analyses, allowed the division of the study area into four provinces with distinct sedimentary compositions: (1) the Mackenzie Trough-Canadian Beaufort Shelf, with high phyllosilicate-Fe oxide-magnetite and Al-K-Ti-Fe-Cr-V-Zn-P contents; (2) Southwestern Banks Island, characterized by high dolomite-K-feldspar and Ca-Mg-LOI contents; (3) the Central Amundsen Gulf, a transitional zone typified by intermediate phyllosilicate-magnetite-K-feldspar-dolomite and Al-K-Ti-Fe-Mn-V-Zn-Sr-Ca-Mg-LOI contents; and (4) mud volcanoes on the Canadian Beaufort Shelf, distinguished by poorly sorted coarse silt with high quartz-plagioclase-authigenic carbonate and Si-Zr contents, as well as high magnetic susceptibility. Our results also confirm that present-day sedimentary dynamics on the Canadian Beaufort Shelf are mainly controlled by sediment supply from the Mackenzie River. Overall, these insights provide a basis for future studies using mineralogical, geochemical, and magnetic signatures of Canadian Arctic sediments to reconstruct past variations in sediment inputs and transport pathways related to late Quaternary climate and oceanographic changes.
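For readers unfamiliar with the clustering step, a compact sketch of a PCA-plus-fuzzy-c-means workflow is given below (Python; the data matrix is synthetic, and the component count and cluster count are illustrative assumptions rather than the study's actual processing choices).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fuzzy_cmeans(X, c=4, m=2.0, n_iter=200, seed=0):
    """Minimal fuzzy c-means (Bezdek): returns soft memberships u (n x c)
    and cluster centers, alternating the standard update equations."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(X))        # random memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return u, centers

# Hypothetical geochemical table: samples x variables (Al, K, Ti, Fe, ...)
X = np.random.default_rng(2).normal(size=(34, 8))
Xs = StandardScaler().fit_transform(X)
scores = PCA(n_components=3).fit_transform(Xs)        # reduce, then cluster
u, _ = fuzzy_cmeans(scores, c=4)
provinces = u.argmax(axis=1)                          # hard assignment
print(provinces)
```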
Abstract:
Clear and distinct perception is the element on which Descartes's metaphysical certainty rests. Nevertheless, the skeptical arguments raised in connection with the Cartesian method of doubt have made evident the need to find a justification for the criterion of clear and distinct perception itself. Against attempts based on the indubitability of perception or on the guarantee arising from divine goodness, an alternative pragmatist justification will be defended.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
We present an IP-based nonparametric (revealed preference) testing procedure for rational consumption behavior in terms of general collective models, which include consumption externalities and public consumption. An empirical application to data drawn from the Russia Longitudinal Monitoring Survey (RLMS) demonstrates the practical usefulness of the procedure. Finally, we present extensions of the testing procedure to evaluate the goodness-of-fit of the collective model subject to testing, and to quantify and improve the power of the corresponding collective rationality tests.
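As background for the revealed-preference machinery, the sketch below (Python; the price and bundle data are hypothetical) implements the standard unitary GARP check via a Warshall-style transitive closure. The paper's collective-model test, which handles externalities and public consumption, goes well beyond this unitary building block.

```python
import numpy as np

def violates_garp(prices, quantities):
    """Check GARP for a unitary consumer (Varian-style sketch).
    prices, quantities: (T, n) arrays of observed prices and bundles."""
    T = len(prices)
    cost_own = np.einsum('ti,ti->t', prices, quantities)  # p_t . x_t
    cross = prices @ quantities.T                          # p_t . x_s
    R = cost_own[:, None] >= cross                         # directly revealed preferred
    for k in range(T):                                     # transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strict = cost_own[:, None] > cross                     # strictly cheaper
    # violation: t revealed preferred to s, yet s strictly revealed preferred to t
    return bool(np.any(R & strict.T))

# Two hypothetical observations (prices, chosen bundles)
p = np.array([[1.0, 2.0], [2.0, 1.0]])
x = np.array([[3.0, 1.0], [1.0, 3.0]])
print(violates_garp(p, x))
```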
Abstract:
Hydrometallurgical process modeling is the main objective of this Master’s thesis. Three different leaching processes, namely high-pressure pyrite oxidation, direct oxidative leaching of zinc concentrate (sphalerite), and gold chloride leaching using a rotating disc electrode (RDE), are modeled and simulated using the gPROMS process simulation program in order to evaluate its model-building capabilities. The leaching mechanism in each case is described in terms of a shrinking core model. The mathematical modeling carried out included process model development based on the available literature, estimation of reaction kinetic parameters, and assessment of model reliability by checking the goodness of fit and the cross-correlation between the estimated parameters through the use of correlation matrices. The estimated parameter values in each case were compared with those obtained using the Modest simulation program. Further, based on the estimated reaction kinetic parameters, reactor simulation and modeling for direct oxidative leaching of zinc concentrate (sphalerite) was carried out in Aspen Plus V8.6. The zinc leaching autoclave is based on the Cominco reactor configuration and is modeled as a series of continuous stirred-tank reactors (CSTRs). The sphalerite conversion is calculated, and a sensitivity analysis is carried out so as to determine the optimum reactor operating temperature and oxygen mass flow rate. In this way, the implementation of reaction kinetic models in a process flowsheet simulation environment has been demonstrated.
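The parameter-estimation step can be illustrated outside gPROMS with a generic least-squares fit. The sketch below (Python/SciPy; the conversion data, the mixed-control model form, and the starting values are illustrative assumptions, not the thesis's models) fits a two-parameter shrinking core expression and reports the goodness of fit and the parameter cross-correlation drawn from the covariance matrix.

```python
import numpy as np
from scipy.optimize import curve_fit

def time_to_reach(X, a, b):
    """Mixed-control shrinking core model: total leaching time is the sum of
    a surface-reaction term a*g(X) and a product-layer diffusion term b*p(X)."""
    g = 1 - (1 - X)**(1/3)
    p = 1 - 3 * (1 - X)**(2/3) + 2 * (1 - X)
    return a * g + b * p

# Hypothetical batch leaching data: conversion X observed at times t (min)
X = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.85])
t = np.array([4.0, 14.0, 32.0, 55.0, 85.0, 128.0, 170.0])

popt, pcov = curve_fit(time_to_reach, X, t, p0=[50.0, 50.0])
resid = t - time_to_reach(X, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((t - t.mean())**2)
sd = np.sqrt(np.diag(pcov))
corr = pcov / np.outer(sd, sd)          # parameter cross-correlation matrix
print("a, b =", popt.round(1), " R^2 =", round(r2, 3))
print("correlation(a, b) =", round(corr[0, 1], 3))
```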
Abstract:
The aim of this study was to compute a swimming performance confirmatory model based on biomechanical parameters. The sample included 100 young swimmers (overall: 12.3 ± 0.74 years; 49 boys: 12.5 ± 0.76 years; 51 girls: 12.2 ± 0.71 years; both genders in Tanner stages 1-2 by self-report) participating on a regular basis in regional- and national-level events. The 100 m freestyle event was chosen as the performance indicator. Anthropometric (arm span), strength (throwing velocity), power output (power to overcome drag), kinematic (swimming velocity), and efficiency (propelling efficiency) parameters were measured and included in the model. The path-flow analysis procedure was used to design and compute the model. The anthropometric parameter (arm span) was excluded from the final model, increasing its goodness-of-fit. The final model included throwing velocity, power output, swimming velocity, and propelling efficiency. All links between the included parameters were significant, except that between throwing velocity and power output. The final model explained 69% of the variance and presented a reasonable goodness-of-fit (χ2/df = 3.89). This model shows that strength and power output parameters play a meaningful mediating role in young swimmers' performance.
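Path analysis of this kind boils down to chained regressions on standardized variables. The sketch below (Python; all data are synthetic, and the variable names and path structure are illustrative assumptions, not the study's fitted model) computes standardized path coefficients for a two-stage path, leaving full SEM fit indices such as χ2/df to a dedicated package.

```python
import numpy as np

def standardized_path_coefs(X, y):
    """OLS path coefficients on standardized variables (a path-analysis
    building block; no intercept is needed after standardization)."""
    Z = (X - X.mean(0)) / X.std(0)
    zy = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Z, zy, rcond=None)
    return beta

# Hypothetical measures for 100 swimmers (synthetic, for illustration only)
rng = np.random.default_rng(3)
n = 100
throw = rng.normal(size=n)                        # throwing velocity
power = 0.3 * throw + rng.normal(size=n)          # power to overcome drag
eff = rng.normal(size=n)                          # propelling efficiency
speed = 0.5 * power + 0.3 * eff + rng.normal(scale=0.6, size=n)
perf = 0.8 * speed + rng.normal(scale=0.4, size=n)  # 100 m performance proxy

print("paths into swimming velocity:",
      standardized_path_coefs(np.column_stack([power, eff]), speed).round(2))
print("path speed -> performance:",
      standardized_path_coefs(speed[:, None], perf).round(2))
```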
Abstract:
The Questionnaire on the Frequency of and Satisfaction with Social Support (QFSSS) was designed to assess the frequency of, and the degree of satisfaction with, perceived social support received from different sources in relation to three types of support: emotional, informational, and instrumental. This study tested the reliability of the questionnaire scores and its criterion and structural validity. The data were drawn from survey interviews of 2,042 Spanish people. The results show high internal consistency (values of Cronbach's alpha ranged from .763 to .952). The correlational analysis showed significant positive associations between QFSSS scores and measures of subjective well-being and perceived social support, as well as significant negative associations with measures of loneliness (values of Pearson's r ranged from .11 to .97). Confirmatory factor analysis using structural equation modelling verified an internal 4-factor structure corresponding to the sources of support analysed: partner, family, friends, and community (values ranged from .93 to .95 for the Goodness of Fit Index (GFI), from .95 to .98 for the Comparative Fit Index (CFI), and from .07 to .10 for the Root Mean Square Error of Approximation (RMSEA)). These results confirm the validity of the QFSSS as a versatile tool suitable for the multidimensional assessment of social support.
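The fit indices quoted here are simple functions of the model and baseline chi-squares. A minimal sketch follows (Python; the chi-square values, degrees of freedom, and sample size in the example are hypothetical):

```python
import numpy as np

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index from model and baseline (null-model) chi-squares."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_m - df_m, chi2_b - df_b, 0.0)
    return 1.0 - num / den

def rmsea(chi2_m, df_m, n):
    """Root Mean Square Error of Approximation for sample size n."""
    return np.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical CFA output: model chi2/df, baseline chi2/df, sample size
print(round(cfi(320.0, 98, 5400.0, 120), 3))   # ~0.96
print(round(rmsea(320.0, 98, 2042), 3))        # ~0.03
```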
Abstract:
Background: Appetite and symptoms, conditions generally reported by patients with cancer, are somewhat challenging for professionals to measure directly in clinical routine (latent conditions), so specific instruments are required for this purpose. This study aimed to perform a cultural adaptation of the Cancer Appetite and Symptom Questionnaire (CASQ) into Portuguese and to evaluate its psychometric properties in a sample of Brazilian cancer patients. Methods: This is a validation study with Brazilian cancer patients. The face, content, and construct (factorial and convergent) validities of the Cancer Appetite and Symptom Questionnaire, the study tool, were estimated. Further, a confirmatory factor analysis (CFA) was conducted. The ratio of chi-square to degrees of freedom (χ2/df), the comparative fit index (CFI), the goodness of fit index (GFI), and the root mean square error of approximation (RMSEA) were used for fit model assessment. In addition, the reliability of the instrument was estimated using composite reliability (CR) and Cronbach’s alpha coefficient (α), and the invariance of the model in independent samples was estimated by multigroup analysis (Δχ2). Results: Participants included 1,140 cancer patients with a mean age of 53.95 (SD = 13.25) years; 61.3% were women. After the CFA of the original CASQ structure, 2 items with inadequate factor weights were removed. Four correlations between errors were included to provide adequate fit to the sample (χ2/df = 8.532, CFI = .94, GFI = .95, and RMSEA = .08). The model exhibited low convergent validity (AVE = .32). The reliability was adequate (CR = .82, α = .82). The refined model showed strong invariance across two independent samples (Δχ2: λ: p = .855; i: p = .824; Res: p = .390). Weak stability was obtained between patients undergoing chemotherapy and radiotherapy (Δχ2: λ: p = .155; i: p < .001; Res: p < .001), and between patients undergoing chemotherapy combined with radiotherapy and palliative care (Δχ2: λ: p = .058; i: p < .001; Res: p < .001). Conclusion: The Portuguese version of the CASQ showed good face and construct validity, adequate reliability, and invariance in independent samples of Brazilian patients with cancer. However, the tool has low convergent validity and weak invariance in samples with different treatments.
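The reliability and convergent-validity statistics reported here can be computed directly from item scores and standardized loadings. A minimal sketch (Python; the loading values and item data are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def composite_reliability(lam):
    """CR from standardized loadings; error variances taken as 1 - lambda^2."""
    lam = np.asarray(lam)
    return lam.sum()**2 / (lam.sum()**2 + (1 - lam**2).sum())

def ave(lam):
    """Average variance extracted: mean squared standardized loading."""
    return (np.asarray(lam)**2).mean()

lam = [0.55, 0.60, 0.48, 0.70, 0.52]          # hypothetical loadings
rng = np.random.default_rng(4)
factor = rng.normal(size=(200, 1))            # shared latent factor
items = 0.6 * factor + rng.normal(size=(200, 5))
print(round(composite_reliability(lam), 2),
      round(ave(lam), 2),
      round(cronbach_alpha(items), 2))
```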
Abstract:
Objective: To evaluate the validity, reliability, and factorial invariance of the complete Portuguese version of the Oral Health Impact Profile (OHIP) and its short version (OHIP-14). Methods: A total of 1,162 adults enrolled at the Faculty of Dentistry of Araraquara/UNESP participated in the study; 73.1% were women, and the mean age was 40.7 ± 16.3 years. We conducted a confirmatory factor analysis in which χ2/df, the comparative fit index, the goodness of fit index, and the root mean square error of approximation were used as indices of goodness of fit. The convergent validity was judged from the average variance extracted and the composite reliability, and the internal consistency was estimated by Cronbach's standardized alpha. The stability of the models was evaluated by multigroup analysis in independent samples (test and validation) and between users and nonusers of dental prostheses. Results: We found the best-fitting models for the OHIP-14 and among dental prosthesis users. The convergent validity was below adequate values for the factors “functional limitation” and “physical pain” for the complete version, and for the factors “functional limitation” and “psychological discomfort” for the OHIP-14. Values of composite reliability and internal consistency were below adequate in the OHIP-14 for the factors “functional limitation” and “psychological discomfort.” We detected strong invariance between the test and validation samples for the full version and weak invariance for the OHIP-14. The models for users and nonusers of dental prostheses were not invariant for either version. Conclusion: The reduced version of the OHIP was parsimonious, reliable, and valid to capture the construct “impact of oral health on quality of life,” which was more pronounced in prosthesis users.
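The multigroup invariance checks reported for both OHIP versions reduce to likelihood-ratio (Δχ2) comparisons between nested multigroup fits. A minimal sketch (Python/SciPy; the chi-square values and degrees of freedom are hypothetical):

```python
from scipy.stats import chi2

def delta_chi2(chi2_free, df_free, chi2_constr, df_constr):
    """Nested multigroup comparison: constrained (parameters equal across
    groups) vs freely estimated model; a small p-value rejects invariance."""
    d, ddf = chi2_constr - chi2_free, df_constr - df_free
    return d, ddf, chi2.sf(d, ddf)

# Hypothetical fits: free model vs equal-loadings (metric invariance) model
print(delta_chi2(410.2, 196, 421.5, 208))   # p > .05 -> invariance holds
```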