940 results for Goodness of fit
Abstract:
Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s, cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change; like other models, however, it must be fed important information about which particular lands in the modeled area are available for development. Some of these lands fall into exclusion categories that are difficult to quantify because their function is dictated by policy. One such category comprises voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Because they are voluntary, today's excluded lands may become available for development at some point in the future. Mapping the shifting mosaic of parcels enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act to SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results not only demonstrate a successful execution of this method but also yield high goodness-of-fit metrics for both the calibration of enrollment termination and the urban growth modeling itself.
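As an illustration of the kind of weighted exclusion layer described above, the sketch below is not the study's code: the parcel masks, the 0-100 weighting convention, and the contract-termination probability are all assumptions made for illustration of how Williamson Act parcels could be treated as only partially resistant to development.

```python
# Hypothetical sketch of a weighted exclusion raster for an urban growth model.
import numpy as np

rows, cols = 100, 100
rng = np.random.default_rng(11)
water_or_park = rng.random((rows, cols)) < 0.10           # assumed: permanently excluded cells
williamson = (rng.random((rows, cols)) < 0.25) & ~water_or_park  # assumed: enrolled parcels
p_termination = 0.30                                       # assumed probability that enrollment ends

excluded = np.zeros((rows, cols), dtype=np.uint8)
excluded[water_or_park] = 100                              # fully excluded from growth
excluded[williamson] = round(100 * (1 - p_termination))    # partially excluded (resistance weight)
print("share of cells with some exclusion:", (excluded > 0).mean())
```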
Abstract:
In an effort to improve instruction and better accommodate the needs of students, community colleges are offering courses in a variety of delivery formats that require students to have some level of technology fluency to be successful in the course. This study was conducted to investigate the relationship of student socioeconomic status (SES), course delivery method, and course type with enrollment, final course grades, course completion status, and course passing status at a state college. A dataset for 20,456 students of low and not-low SES enrolled in science, technology, engineering, and mathematics (STEM) course types delivered using traditional, online, blended, and web-enhanced course delivery formats at Miami Dade College, a large open-access 4-year state college located in Miami-Dade County, Florida, was analyzed. A factorial ANOVA on course type, course delivery method, and student SES, used to determine whether course delivery methods were equally effective for students of low and not-low SES taking STEM course types, found no significant differences in final course grades. Additionally, three chi-square goodness-of-fit tests were used to investigate differences in enrollment, course completion, and course passing status by SES, course type, and course delivery method. The findings of the chi-square tests indicated that (a) there were significant differences in enrollment by SES and course delivery method for the Engineering/Technology, Math, and overall course types but not for the Natural Science course type, and (b) there were no significant differences in course completion status or course passing status by SES and course type overall or by SES and course delivery method overall. However, there were statistically significant but weak relationships between course passing status, SES, and the Math course type, as well as between course passing status, SES, and the online and traditional course delivery methods. The mixed findings of the study indicate that strides have been made in closing the theoretical gap in education and technology skills that may exist for students of different SES levels. MDC's course delivery and student support models may assist other institutions in addressing student success in courses that require students to have some level of technology fluency.
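A minimal illustration of the kind of chi-square goodness-of-fit comparison described above; the enrollment counts and expected proportions below are hypothetical, not the study's data.

```python
# Hypothetical sketch: test whether observed enrollment counts by course
# delivery method differ from the proportions expected under no SES effect.
from scipy.stats import chisquare

observed = [412, 287, 95, 66]                 # traditional, online, blended, web-enhanced
expected_share = [0.50, 0.30, 0.12, 0.08]     # assumed reference proportions
total = sum(observed)
expected = [p * total for p in expected_share]

result = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```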
Resumo:
Intense precipitation events (IPE) have been causing great social and economic losses in the affected regions. In the Amazon, these events can have serious impacts, primarily for populations living on the margins of its countless rivers, because when water levels are elevated, floods and/or inundations are generally observed. Thus, the main objective of this research is to study IPE, through Extreme Value Theory (EVT), to estimate return periods of these events and identify regions of the Brazilian Amazon where IPE have the largest values. The study was performed using daily rainfall data of the hydrometeorological network managed by the National Water Agency (Agência Nacional de Água) and the Meteorological Data Bank for Education and Research (Banco de Dados Meteorológicos para Ensino e Pesquisa) of the National Institute of Meteorology (Instituto Nacional de Meteorologia), covering the period 1983-2012. First, homogeneous rainfall regions were determined through cluster analysis, using the hierarchical agglomerative Ward method. Then synthetic series to represent the homogeneous regions were created. Next EVT, was applied in these series, through Generalized Extreme Value (GEV) and the Generalized Pareto Distribution (GPD). The goodness of fit of these distributions were evaluated by the application of the Kolmogorov-Smirnov test, which compares the cumulated empirical distributions with the theoretical ones. Finally, the composition technique was used to characterize the prevailing atmospheric patterns for the occurrence of IPE. The results suggest that the Brazilian Amazon has six pluvial homogeneous regions. It is expected more severe IPE to occur in the south and in the Amazon coast. More intense rainfall events are expected during the rainy or transitions seasons of each sub-region, with total daily precipitation of 146.1, 143.1 and 109.4 mm (GEV) and 201.6, 209.5 and 152.4 mm (GPD), at least once year, in the south, in the coast and in the northwest of the Brazilian Amazon, respectively. For the south Amazonia, the composition analysis revealed that IPE are associated with the configuration and formation of the South Atlantic Convergence Zone. Along the coast, intense precipitation events are associated with mesoscale systems, such Squall Lines. In Northwest Amazonia IPE are apparently associated with the Intertropical Convergence Zone and/or local convection.
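A minimal sketch of the GEV/GPD fitting and Kolmogorov-Smirnov checking workflow described above, using synthetic daily rainfall rather than the study's station data; the block-maxima and peaks-over-threshold choices below are standard conventions assumed for illustration.

```python
# Hypothetical sketch: fit GEV to annual maxima and GPD to threshold excesses,
# then check goodness of fit with the Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
daily_rain = rng.gamma(shape=0.8, scale=12.0, size=30 * 365)   # 30 synthetic "years"

# GEV on block (annual) maxima
annual_max = daily_rain.reshape(30, 365).max(axis=1)
gev_params = stats.genextreme.fit(annual_max)
print("GEV KS:", stats.kstest(annual_max, "genextreme", args=gev_params))

# GPD on excesses over a high threshold (peaks-over-threshold)
threshold = np.quantile(daily_rain, 0.95)
excesses = daily_rain[daily_rain > threshold] - threshold
gpd_params = stats.genpareto.fit(excesses, floc=0.0)
print("GPD KS:", stats.kstest(excesses, "genpareto", args=gpd_params))

# Return level for a 10-year return period from the fitted GEV
print("10-year return level:", stats.genextreme.ppf(1 - 1 / 10, *gev_params))
```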
Abstract:
The L-moments-based index-flood procedure was successfully applied for Regional Flood Frequency Analysis (RFFA) for the Island of Newfoundland in 2002, using data up to 1998. This thesis considered both Labrador and the Island of Newfoundland, using the L-moments index-flood method with flood data up to 2013. For Labrador, the homogeneity test showed that Labrador can be treated as a single homogeneous region, and the generalized extreme value (GEV) distribution was found to be more robust than the other frequency distributions. Drainage area (DA) is the only significant variable for estimating the index flood at ungauged sites in Labrador. In previous studies, the Island of Newfoundland has been considered as four homogeneous regions (A, B, C, and D) as well as the Water Survey of Canada's Y and Z sub-regions. Homogeneous regions based on Y and Z were found to provide more accurate quantile estimates than those based on the four homogeneous regions. Goodness-of-fit test results showed that the GEV distribution is most suitable for the sub-regions; however, the three-parameter lognormal (LN3) performed better in terms of robustness. The best-fitting regional frequency distribution from 2002 has now been updated with the latest flood data, but quantile estimates with the new data were not very different from those of the previous study. Overall, in terms of quantile estimation, in both Labrador and the Island of Newfoundland, the index-flood procedure based on L-moments is highly recommended, as it provided consistent and more accurate results than other techniques such as the regression-on-quantiles technique currently used by the government.
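A minimal sketch of the index-flood idea underlying the procedure above: the T-year quantile at a site is the site's index flood (for example, its mean annual flood) multiplied by a dimensionless regional growth factor. The regional GEV parameters and the index-flood value below are illustrative assumptions, not results from the thesis.

```python
# Hypothetical sketch of index-flood quantile estimation with a regional GEV growth curve.
from scipy import stats

# Regional growth curve parameters (illustrative, not fitted by L-moments here).
c, loc, scale = -0.1, 0.85, 0.35

def growth_factor(T):
    """Dimensionless T-year quantile of the regional growth curve."""
    return stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

index_flood = 320.0          # m^3/s, e.g. mean annual flood at the site of interest
for T in (10, 50, 100):
    print(f"Q_{T} ~ {index_flood * growth_factor(T):.0f} m^3/s")
```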
Abstract:
Recent discussion regarding whether the noise that limits 2AFC discrimination performance is fixed or variable has focused either on describing experimental methods that presumably dissociate the effects of response mean and variance or on reanalyzing a published data set with the aim of determining how to solve the question through goodness-of-fit statistics. This paper illustrates that the question cannot be solved by fitting models to data and assessing goodness of fit, because data on detection and discrimination performance can be indistinguishably fitted by models that assume either type of noise when each is coupled with a convenient form for the transducer function. Thus, success or failure at fitting a transducer model merely illustrates the capability (or lack thereof) of some particular combination of transducer function and variance function to account for the data, but it cannot disclose the nature of the noise. We also comment on some of the issues that have been raised in the recent exchange on the topic, namely, the existence of additional constraints for the models, the presence of asymmetric asymptotes, the likelihood of history-dependent noise, and the potential of certain experimental methods to dissociate the effects of response mean and variance.
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
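A minimal simulation sketch of the mechanism described above; the parameter values are illustrative, not estimates from the paper. Data are generated by a difference model with an indifference zone and biased guessing, then scored as proportion correct per interval, which reproduces an interval bias.

```python
# Hypothetical sketch: 2AFC difference model with an indifference zone; when
# |difference| < delta the observer guesses, with a bias toward "interval 2".
import numpy as np

rng = np.random.default_rng(0)
d_prime, delta, p_guess_2 = 1.0, 0.5, 0.7      # illustrative parameters
n = 100_000

signal_interval = rng.integers(1, 3, size=n)            # signal in interval 1 or 2
x1 = rng.normal((signal_interval == 1) * d_prime, 1.0)
x2 = rng.normal((signal_interval == 2) * d_prime, 1.0)
diff = x2 - x1

resp = np.where(diff > delta, 2, np.where(diff < -delta, 1, 0))   # 0 = undecided
guess = resp == 0
resp[guess] = np.where(rng.random(guess.sum()) < p_guess_2, 2, 1)

for k in (1, 2):
    pc = np.mean(resp[signal_interval == k] == k)
    print(f"signal in interval {k}: proportion correct = {pc:.3f}")
```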
Abstract:
Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to (a) increase the efficiency of the portfolio optimization process, (b) implement large-scale optimizations, and (c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulations: instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to the probability distribution that best matches their characteristics according to the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2,000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Using Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P500 and Russell 1000 securities. The resulting improvements in performance are consistent across five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
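A minimal sketch of the distribution-selection step described above: candidate families are compared with the Anderson-Darling statistic and the best fit is then used to simulate returns. The return series, the two candidate families, and the simple "smallest statistic wins" heuristic are all illustrative assumptions, not the study's procedure.

```python
# Hypothetical sketch: choose a distribution by Anderson-Darling fit, then
# simulate expected returns from it for a Monte Carlo step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=1_000) * 0.01         # stand-in for a return history

candidates = ["norm", "logistic"]                          # families supported by stats.anderson
ad_stats = {d: stats.anderson(returns, dist=d).statistic for d in candidates}
best = min(ad_stats, key=ad_stats.get)                     # rough heuristic: smallest A-D statistic
print("A-D statistics:", ad_stats, "-> chosen:", best)

# Fit the chosen family and draw simulated returns from it.
dist = getattr(stats, best)
params = dist.fit(returns)
simulated = dist.rvs(*params, size=10_000, random_state=rng)
print("simulated mean return:", simulated.mean())
```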
Abstract:
Background: Appetite and symptoms, conditions generally reported by patients with cancer, are somewhat challenging for professionals to measure directly in clinical routine (latent conditions). Therefore, specific instruments are required for this purpose. This study aimed to perform a cultural adaptation of the Cancer Appetite and Symptom Questionnaire (CASQ) into Portuguese and to evaluate its psychometric properties in a sample of Brazilian cancer patients. Methods: This is a validation study with Brazilian cancer patients. The face, content, and construct (factorial and convergent) validities of the Cancer Appetite and Symptom Questionnaire, the study tool, were estimated. Further, a confirmatory factor analysis (CFA) was conducted. The ratio of chi-square to degrees of freedom (χ2/df), the comparative fit index (CFI), the goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA) were used to assess model fit. In addition, the reliability of the instrument was estimated using composite reliability (CR) and Cronbach's alpha coefficient (α), and the invariance of the model in independent samples was assessed by a multigroup analysis (Δχ2). Results: Participants included 1,140 cancer patients with a mean age of 53.95 (SD = 13.25) years; 61.3% were women. After the CFA of the original CASQ structure, 2 items with inadequate factor weights were removed. Four correlations between errors were included to provide adequate fit to the sample (χ2/df = 8.532, CFI = .94, GFI = .95, and RMSEA = .08). The model exhibited low convergent validity (AVE = .32). The reliability was adequate (CR = .82, α = .82). The refined model showed strong invariance across two independent samples (Δχ2: λ: p = .855; i: p = .824; Res: p = .390). Weak stability was obtained between patients undergoing chemotherapy and radiotherapy (Δχ2: λ: p = .155; i: p < .001; Res: p < .001), and between patients undergoing chemotherapy combined with radiotherapy and palliative care (Δχ2: λ: p = .058; i: p < .001; Res: p < .001). Conclusion: The Portuguese version of the CASQ had good face and construct validity and reliability, and showed strong invariance across independent samples of Brazilian patients with cancer. However, the tool has low convergent validity and weak invariance across samples undergoing different treatments.
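For reference, the reliability and convergent-validity coefficients reported above (α, CR, and AVE) are conventionally computed from item variances and standardized factor loadings; the standard formulas, not specific to the CASQ data, are sketched below.

```latex
% Cronbach's alpha for k items, item variances \sigma_i^2, total-score variance \sigma_X^2
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)

% Composite reliability from standardized loadings \lambda_i and error variances \delta_i = 1-\lambda_i^2
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\delta_i}

% Average variance extracted
\mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{k}
```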
Abstract:
Doctorate in Management.
Abstract:
The Posttraumatic Growth Inventory (PTGI) is frequently used to assess positive changes following a traumatic event. The aim of this study is to examine the factor structure and the latent mean invariance of the PTGI. A sample of 205 women diagnosed with breast cancer (M age = 54.3, SD = 10.1) and 456 adults who had experienced a range of adverse life events (M age = 34.9, SD = 12.5) were recruited to complete the PTGI and a socio-demographic questionnaire. We used confirmatory factor analysis (CFA) to test the factor structure and multi-sample CFA to examine the invariance of the PTGI between the two groups. The goodness of fit of the five-factor model is satisfactory for the breast cancer sample (χ2(175) = 396.265; CFI = .884; NFI = .813; RMSEA [90% CI] = .079 [.068, .089]) and good for the non-clinical sample (χ2(172) = 574.329; CFI = .931; NFI = .905; RMSEA [90% CI] = .072 [.065, .078]). The results of the multi-sample CFA show that the unconstrained model fits both groups comparably, but the model with constrained factor loadings is not invariant across groups. The findings provide support for the original five-factor structure and for the multidimensional nature of posttraumatic growth (PTG). Regarding invariance between the two samples, the factor structure of the PTGI and other parameters (i.e., factor loadings, variances, and covariances) are not invariant across the sample of breast cancer patients and the non-clinical sample.
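For reference, the RMSEA values reported above follow directly from the chi-square statistics, degrees of freedom, and sample sizes via the standard point-estimate formula; a worked check against the reported values:

```latex
\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^{2} - df}{df\,(N-1)},\,0\right)}

% Breast cancer sample: \chi^{2}(175)=396.265,\; N=205
\sqrt{\frac{396.265-175}{175\times 204}} \approx 0.079

% Non-clinical sample: \chi^{2}(172)=574.329,\; N=456
\sqrt{\frac{574.329-172}{172\times 455}} \approx 0.072
```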
Abstract:
A non-linear least-squares methodology for simultaneously estimating the parameters of selectivity curves with a pre-defined functional form, across size classes and mesh sizes, using catch size-frequency distributions, was developed based on the models of Kirkwood and Walker [Kirkwood, G.P., Walker, T.I., 1986. Gill net selectivities for gummy shark, Mustelus antarcticus Gunther, taken in south-eastern Australian waters. Aust. J. Mar. Freshw. Res. 37, 689-697] and Wulff [Wulff, A., 1986. Mathematical model for selectivity of gill nets. Arch. Fish. Wiss. 37, 101-106]. Observed catches of fish of size class i in mesh m are modeled as a function of the estimated numbers of fish of that size class in the population and the corresponding selectivities. A comparison was made with the maximum likelihood methodology of Kirkwood and Walker (1986) and Wulff (1986), using simulated catch data with known selectivity curve parameters and two published data sets. The estimated parameters and selectivity curves were generally consistent between the two methods, with smaller standard errors for the parameters estimated by non-linear least squares. The proposed methodology is a useful and accessible alternative which can be used to model selectivity in situations where the parameters of a pre-defined model can be assumed to be functions of gear size, facilitating statistical evaluation of different models and of goodness of fit. (C) 1998 Elsevier Science B.V.
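A minimal non-linear least-squares sketch in the spirit of the methodology described above, on simulated data. The normal selectivity form with mode proportional to mesh size, the common spread, and all parameter values are illustrative assumptions, not the published model.

```python
# Hypothetical sketch: fit a mesh-size-dependent selectivity curve jointly
# across length classes and mesh sizes by non-linear least squares.
import numpy as np
from scipy.optimize import least_squares

lengths = np.arange(40.0, 121.0, 5.0)          # length-class midpoints (cm)
meshes = np.array([6.0, 7.0, 8.0])             # mesh sizes (arbitrary units)
true_k, true_sd = 12.0, 8.0                    # mode = k * mesh, common spread
true_N = 1000.0 * np.exp(-0.5 * ((lengths - 80.0) / 20.0) ** 2)   # population at length

def selectivity(k, sd):
    # One row per mesh, one column per length class.
    return np.exp(-0.5 * ((lengths[None, :] - k * meshes[:, None]) / sd) ** 2)

rng = np.random.default_rng(7)
catches = rng.poisson(true_N[None, :] * selectivity(true_k, true_sd))

def residuals(theta):
    k, sd = theta[:2]
    N = np.exp(theta[2:])                      # log-abundance per length class keeps N > 0
    return (N[None, :] * selectivity(k, sd) - catches).ravel()

theta0 = np.concatenate([[10.0, 5.0], np.log(catches.max(axis=0) + 1.0)])
fit = least_squares(residuals, theta0)
print("estimated k, sd:", fit.x[:2])           # compare with true 12.0, 8.0
```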
Abstract:
Introduction. Workers in automobile repair shops are exposed daily to organic solvents, an exposure that poses a risk to their health: in the short term it generally manifests as deficits in concentration, memory, and reaction time, and in the long term it produces serious clinical repercussions such as mutagenic and carcinogenic effects. Objective. To characterize the hygiene and safety conditions of workers occupationally exposed to organic solvents and to determine ambient levels of benzene, toluene, and xylene (BTX) in automotive body and paint shops in the city of Bogotá. Materials and methods. A cross-sectional study was conducted on 60 workers exposed to organic solvents in automotive repair shops in Bogotá. A survey covering sociodemographic and occupational variables was administered, and air levels of benzene, toluene, and xylene were determined. For the environmental sampling, the pumps were placed in a fixed position representative of the general environment, in order to characterize the distribution of the solvents in the work area. A descriptive analysis was carried out using frequency counts and measures of central tendency and dispersion. A goodness-of-fit test for the normal distribution (Kolmogorov-Smirnov or Shapiro-Wilk) was applied, followed by Student's t test for comparison of means or, where normality failed, the Mann-Whitney U test for comparison of medians. To identify the relationship between sociodemographic and occupational characteristics and BTX exposure, chi-square tests of association or correlation analyses were used, according to the nature of the variables. The significance level for each test was 0.05. Results. The mean age of the workers was 43 years, with a total solvent exposure time of 20 years. Regarding the use of body protection, 45 (75%) of the workers reported wearing a uniform, while 14 (23.3%) wore street clothes during the workday. 46.7% reported using respiratory protection. The benzene concentration in air ranged between 0.1 and 0.45 mg/l (median 0.31 mg/l; SD 0.13 mg/l); toluene between 8.25 and 27.22 mg/l (median 14.5 mg/l; SD 6.99 mg/l); and xylene between 19.34 and 150.15 mg/l (median 70.12 mg/l; SD 40.82 mg/l). Conclusion. Automobile painters are exposed to high levels of solvents in the workplace and do not have adequate industrial hygiene and safety conditions. A large number of painters work informally, which prevents them from accessing the benefits of the Comprehensive Social Security System (Sistema de Seguridad Social Integral).
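A minimal sketch of the test-selection logic described above: a goodness-of-fit check for normality followed by a t test or a Mann-Whitney U test. The exposure values and group split are hypothetical.

```python
# Hypothetical sketch: normality check, then parametric or non-parametric comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.lognormal(mean=2.6, sigma=0.5, size=30)   # e.g. toluene levels, group A
group_b = rng.lognormal(mean=2.8, sigma=0.5, size=30)   # e.g. toluene levels, group B

alpha = 0.05
normal = all(stats.shapiro(g).pvalue > alpha for g in (group_a, group_b))

if normal:
    test = stats.ttest_ind(group_a, group_b)             # compare means
else:
    test = stats.mannwhitneyu(group_a, group_b)          # compare medians
print("normal:", normal, "| statistic:", test.statistic, "| p:", test.pvalue)
```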
Abstract:
Despite the success of the ΛCDM model in describing the Universe, a possible tension between early- and late-Universe cosmological measurements is calling for new, independent cosmological probes. Among the most promising ones, gravitational waves (GWs) can provide a self-calibrated measurement of the luminosity distance. However, to obtain cosmological constraints, additional information is needed to break the degeneracy between parameters in the gravitational waveform. In this thesis, we exploit the latest LIGO-Virgo-KAGRA Gravitational Wave Transient Catalog (GWTC-3) of GW sources to constrain the background cosmological parameters together with the astrophysical properties of Binary Black Holes (BBHs), using information from their mass distribution. We expand the public code MGCosmoPop, previously used for the application of this technique, by implementing a state-of-the-art model for the mass distribution, needed to account for the presence of non-trivial features: a truncated power law with two additional Gaussian peaks, referred to as Multipeak. We then analyse GWTC-3, comparing this model with simpler and more commonly adopted ones, both with fixed and with varying cosmology, assess their goodness of fit with different model selection criteria, and evaluate their constraining power on the cosmological and population parameters. We also begin to explore different sampling methods, namely Markov Chain Monte Carlo and Nested Sampling, comparing their performance and evaluating the advantages of each. We find concurring evidence that the Multipeak model is favoured by the data, in line with previous results, and show that this conclusion is robust to the variation of the cosmological parameters. We find a constraint on the Hubble constant of H0 = 61.10 (+38.65/−22.43) km/s/Mpc (68% C.L.), which shows the potential of this method for providing independent constraints on cosmological parameters. The results obtained in this work have been included in [1].
Abstract:
The ecosystem services provided by bees are very important. Factors such as habitat fragmentation, intensive agriculture, and climate change are contributing to the decline of bee populations. Remote sensing could be a useful tool for identifying sites with high diversity before performing a more expensive field survey. In this study, the ability of Unmanned Aerial Vehicle (UAV) images to estimate biodiversity at the local scale was analysed by testing the concept of the Height Variation Hypothesis (HVH). This approach states that the higher the vegetation height heterogeneity (HH) measured from remote sensing information, the higher the vertical complexity and the higher the vegetation species diversity. In this thesis the concept is taken a step further, to understand whether vegetation HH can also be considered a proxy of bee species diversity and abundance. We tested this approach by collecting field data on bees and flowers and RGB images through a UAV campaign in 30 grasslands in the south of the Netherlands. Canopy Height Models (CHM) were derived through the photogrammetric "Structure from Motion" (SfM) technique at resolutions of 10 cm, 25 cm, and 50 cm. Subsequently, the HH assessed on the CHM using Rao's Q heterogeneity index was correlated with the field data (bee abundance, bee diversity, and bee/flower species richness). The correlations were all positive and significant. The highest R2 values were found when the HH was calculated at 10 cm and correlated with bee species richness (R2 = 0.41) and Shannon's H index (R2 = 0.38). At lower spatial resolutions, the goodness of fit decreases slightly. For flower species richness, the R2 ranged from 0.36 to 0.39. Our results suggest that methods based on the concept behind the HVH, in this case deriving HH information from UAV data, can be developed into valuable tools for large-scale, standardized, and cost-effective monitoring of flower diversity and of habitat quality for bees.
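A minimal sketch of computing height heterogeneity with Rao's Q on a canopy height model, where Q = Σ_i Σ_j d_ij p_i p_j with p the relative frequencies of height values in a window and d_ij their pairwise distances. The synthetic raster, the 5x5 window, and the use of absolute height differences as d_ij are illustrative assumptions, not the thesis workflow.

```python
# Hypothetical sketch: Rao's Q heterogeneity index in a moving window over a CHM.
import numpy as np

def rao_q(window):
    values, counts = np.unique(window, return_counts=True)
    p = counts / counts.sum()
    d = np.abs(values[:, None] - values[None, :])     # pairwise height distances
    return float((d * p[:, None] * p[None, :]).sum())

rng = np.random.default_rng(5)
chm = np.round(rng.gamma(shape=2.0, scale=0.15, size=(60, 60)), 2)  # canopy heights (m)

w = 5                                                   # 5x5 moving window
hh = np.zeros((chm.shape[0] - w + 1, chm.shape[1] - w + 1))
for i in range(hh.shape[0]):
    for j in range(hh.shape[1]):
        hh[i, j] = rao_q(chm[i:i + w, j:j + w])
print("mean height heterogeneity (Rao's Q):", hh.mean())
```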
Abstract:
This paper focuses on the factorial structure of an inventory designed to estimate the subjective perception of insecurity and fear of crime. Built from a review of the literature on the subject and the results obtained in previous works, this factor structure shows that the attitude towards insecurity and fear of crime is identified through a number of latent factors, which can be schematically summarized as (a) personal safety, (b) the perception of personal and social control, (c) the presence of threatening people or situations, (d) the processes of identity and space appropriation, (e) satisfaction with the environment, and (f) the environment and the use of space. Such factors are relevant dimensions for analyzing the phenomenon. Method: A sample of 571 participants in a neighborhood of Barcelona was evaluated with the proposed inventory, which yielded data on the distributions of all the items included. The administration was conducted by researchers specially trained for it, and the results were analyzed using standard procedures in confirmatory factor analysis (CFA) based on the hypothesized theoretical structure. The analysis was performed on decatypes according to the different response scales used in the inventory and their ordinal nature, estimating polychoric correlation coefficients. The results show an acceptable fit of the proposed model, appropriate behavior of the residuals, and statistically significant estimates of the factor loadings. This indicates the adequacy of the proposed factor structure.