877 results for Attitudes, Persuasion, Confidence, Voice, Elaboration Likelihood Model
Abstract:
This work comprises two parts. The first part presents the underlying theory; the second contains two articles. The first article examines two models from the class of generalized linear models for analyzing a mixture experiment that studied the effect of diets composed of fat, carbohydrate, and fiber on tumor expression in the mammary glands of female rats, with the response given by the proportion of rats that showed tumor expression under each diet. Mixture experiments are characterized by collinearity among the components and by small sample sizes, so assuming normality for the response to be maximized or minimized may be inadequate. Given this, the main characteristics of the logistic and simplex regression models are addressed. The models were compared using the AIC, BIC, and ICOMP model selection criteria, simulated envelope plots for the residuals of the fitted models, and plots of the odds ratios and their respective confidence intervals for each mixture component. The first article concluded that the simplex regression model showed better goodness of fit and narrower confidence intervals for the odds ratios. The second article presents the Boosted Simplex Regression model, the boosting version of the simplex regression model, as an alternative for increasing the precision of the confidence intervals for the odds ratio of each mixture component. For this, the Monte Carlo method was used to construct the confidence intervals. In addition, a simulated envelope plot for the residuals of a model fitted via a boosting algorithm is presented in an innovative way. It was concluded that the Boosted Simplex Regression model was fitted successfully and that the confidence intervals for the odds ratios were accurate and slightly more precise than those of its maximum likelihood counterpart.
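For readers unfamiliar with the logistic side of the comparison, the sketch below shows how a binomial GLM can be fitted to mixture data of this kind and how odds ratios with Wald confidence intervals are obtained for the components. The diet proportions and tumour counts are fabricated for illustration, and the simplex model, which has no off-the-shelf Python implementation, is not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical mixture data: number of rats with tumour expression out of
# the number tested for each diet (a mixture of fat, carbohydrate, fibre).
diets = pd.DataFrame({
    "fat":    [0.2, 0.4, 0.6, 0.2, 0.4, 0.2],
    "carb":   [0.6, 0.4, 0.2, 0.4, 0.2, 0.2],
    "fiber":  [0.2, 0.2, 0.2, 0.4, 0.4, 0.6],
    "tumour": [12, 15, 18, 9, 11, 7],        # rats with tumour expression
    "n":      [30, 30, 30, 30, 30, 30],      # rats on each diet
})

# The mixture components sum to one, so one of them (here fiber) is dropped
# before adding an intercept to avoid exact collinearity.
X = diets[["fat", "carb"]]
y = np.column_stack([diets["tumour"], diets["n"] - diets["tumour"]])

fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()

# Odds ratios and 95% Wald confidence intervals per retained component
odds = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
odds.columns = ["odds ratio", "2.5%", "97.5%"]
print(odds)
```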
Abstract:
The contemporary individual finds on the Internet, and especially on the Web, conditions that facilitate building a basic infrastructure based on the concept of the commons. He also finds favorable conditions for collaborating and sharing resources for the creation, use, reuse, access, and dissemination of information. However, he also faces obstacles such as copyright (Law 9610/98 in Brazil). An alternative is Creative Commons, which not only allows information to be produced, used, and disseminated under lawful conditions but also functions as a facilitator for the development of informational commons. This paper addresses this scenario.
Abstract:
In this action research study I focused on my eighth-grade pre-algebra students' ability to attack problems with enthusiasm and self-confidence, whether or not they completely understood the concepts. I wanted to teach them specific strategies and to introduce and use precise vocabulary as part of the problem-solving process, in the hope that I would see students' confidence improve as they worked with mathematics. I used non-routine problems and concept-related open-ended problems to teach and model problem-solving strategies. I introduced and practiced communication with specific and precise vocabulary with the goal of increasing student confidence and lowering student anxiety when students were faced with mathematical problem solving. I discovered that although students were working more willingly on problem solving and were more inclined to attempt word problems using the strategies introduced in class, they were still reluctant to use specific vocabulary as they communicated to solve problems. As a result of this research, my style of teaching problem solving will evolve so that I focus more specifically on strategies and use precise vocabulary. I will spend more time introducing strategies and the necessary vocabulary at the beginning of the year and will continue to focus on strategies and process in order to lower my students' anxiety and thus increase their self-confidence when it comes to doing mathematics, especially problem solving.
Abstract:
The purpose of this study was to investigate the effectiveness of implementing the Self-Regulated Strategy Development (SRSD) model of instruction (Graham & Harris, 2005; Harris & Graham, 1996) on the writing skills and the writing self-regulation, attitudes, self-efficacy, and knowledge of six first-grade students. A multiple-baseline design across participants with multiple probes (Kazdin, 2010) was used to test the effectiveness of the SRSD instructional intervention. Each participant was taught an SRSD story-writing strategy as well as self-regulation strategies. All students wrote stories in response to picture prompts during the baseline, instruction, independent performance, and maintenance phases. Stories were assessed for essential story components, length, and overall quality. All participants also completed a writing attitude scale and a writing self-efficacy scale and participated in brief interviews during the baseline and independent performance phases. Results indicated that SRSD can be beneficial for average first-grade writers. Participants wrote stories that contained more essential components, were longer, and were of better quality after SRSD instruction. Participants also showed some improvement in writing self-efficacy from pre- to post-instruction. All of the students maintained positive writing attitudes throughout the study.
Abstract:
Evaluations of measurement invariance provide essential construct validity evidence. However, the quality of such evidence is partly dependent upon the validity of the resulting statistical conclusions. The presence of Type I or Type II errors can render measurement invariance conclusions meaningless. The purpose of this study was to determine the effects of categorization and censoring on the behavior of the chi-square/likelihood ratio test statistic and two alternative fit indices (CFI and RMSEA) under the context of evaluating measurement invariance. Monte Carlo simulation was used to examine Type I error and power rates for the (a) overall test statistic/fit indices, and (b) change in test statistic/fit indices. Data were generated according to a multiple-group single-factor CFA model across 40 conditions that varied by sample size, strength of item factor loadings, and categorization thresholds. Seven different combinations of model estimators (ML, Yuan-Bentler scaled ML, and WLSMV) and specified measurement scales (continuous, censored, and categorical) were used to analyze each of the simulation conditions. As hypothesized, non-normality increased Type I error rates for the continuous scale of measurement and did not affect error rates for the categorical scale of measurement. Maximum likelihood estimation combined with a categorical scale of measurement resulted in more correct statistical conclusions than the other analysis combinations. For the continuous and censored scales of measurement, the Yuan-Bentler scaled ML resulted in more correct conclusions than normal-theory ML. The censored measurement scale did not offer any advantages over the continuous measurement scale. Comparing across fit statistics and indices, the chi-square-based test statistics were preferred over the alternative fit indices, and ΔRMSEA was preferred over ΔCFI. Results from this study should be used to inform the modeling decisions of applied researchers. However, no single analysis combination can be recommended for all situations. Therefore, it is essential that researchers consider the context and purpose of their analyses.
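As a concrete illustration of the data-generating step described above, the sketch below simulates item responses from a single-factor model and categorizes them at fixed thresholds. The loadings, thresholds, and sample size are arbitrary placeholders rather than the study's actual simulation conditions, and the subsequent multiple-group CFA fitting under ML, scaled ML, or WLSMV would be carried out in SEM software.

```python
import numpy as np

rng = np.random.default_rng(2024)
n, p = 500, 6                          # sample size, number of items (illustrative)
loadings = np.full(p, 0.7)             # standardized factor loadings (illustrative)
thresholds = [-0.5, 0.5, 1.5]          # cut points -> 4 ordered categories

eta = rng.standard_normal(n)                        # latent factor scores
eps = rng.standard_normal((n, p)) * np.sqrt(1 - loadings**2)
y_cont = eta[:, None] * loadings + eps              # continuous item responses
y_cat = np.digitize(y_cont, thresholds)             # categorized (0..3) responses

# In the simulation, many such replications per group would be generated and
# each dataset fit as a multiple-group one-factor CFA; Type I error is the
# proportion of replications in which an invariance test rejects a model that
# is in fact invariant.
print(np.bincount(y_cat.ravel(), minlength=4) / y_cat.size)
```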
Abstract:
The log-Burr XII regression model for grouped survival data is evaluated in the presence of many ties. The methodology for grouped survival data is based on life tables, in which the times are grouped into k intervals, and discrete lifetime regression models are fitted to the data. The model parameters are estimated by maximum likelihood and jackknife methods. To detect influential observations in the proposed model, diagnostic measures based on case deletion, the so-called global influence, and influence measures based on small perturbations of the data or of the model, referred to as local influence, are used. In addition to these measures, the total local influence and influential estimates are also considered. Monte Carlo simulation studies are conducted to assess the finite-sample behavior of the maximum likelihood estimators of the proposed model for grouped survival data. A real data set is analyzed using the regression model for grouped data.
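To make the grouped-data setup concrete, the following sketch writes down a grouped-survival likelihood under a Burr XII baseline with a log link on the scale parameter and maximizes it numerically. The data, interval cut points, and parameterization are illustrative assumptions and need not match the authors' log-Burr XII formulation or their jackknife and influence diagnostics.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated example: failure times from a Burr XII distribution whose scale
# depends on one binary covariate through a log link, then grouped into
# intervals as in a life table.
rng = np.random.default_rng(1)
n = 300
x = rng.binomial(1, 0.5, n)                      # single binary covariate
true_b0, true_b1, c, k = 1.0, 0.5, 2.0, 1.5
s = np.exp(true_b0 + true_b1 * x)                # scale per subject
u = rng.uniform(size=n)
t = s * ((1 - u) ** (-1 / k) - 1) ** (1 / c)     # Burr XII inverse CDF

cuts = np.array([0.0, 1.0, 2.0, 4.0, 8.0, np.inf])   # interval endpoints
j = np.digitize(t, cuts) - 1                     # interval containing each failure

def surv(a, s, c, k):
    """Burr XII survival function S(a) = [1 + (a/s)^c]^(-k)."""
    return np.where(np.isinf(a), 0.0, (1.0 + (a / s) ** c) ** (-k))

def neg_loglik(theta):
    b0, b1, log_c, log_k = theta
    s_i = np.exp(b0 + b1 * x)
    c_i, k_i = np.exp(log_c), np.exp(log_k)
    lower, upper = cuts[j], cuts[j + 1]
    # probability of failing inside the observed interval
    p = surv(lower, s_i, c_i, k_i) - surv(upper, s_i, c_i, k_i)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("estimates (beta0, beta1, log c, log k):", fit.x)
```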