Abstract:
This paper focuses on how elementary school students construct narratives, seeking to identify the strategies they employ when producing texts representative of the miniconto (very short story) genre. We examine a sample of forty texts produced by students in the 6th and 9th years of basic education: twenty by 6th-year students (ten from a public school and ten from a private school) and twenty by 9th-year students (distributed in the same way between public and private education). Our general aim is to understand the mechanisms by which these writers build their narratives and to provide input for the analysis of textual production in this genre. The research is based on the assumptions of North American Functional Linguistics, inspired by Givón (2001), Thompson (2005), Hopper (1987), Bybee (2010), Traugott (2003), Martelotta (2008), and Furtado da Cunha (2011), among others. In addition, building on the framework for narrative presented by Labov (1972), supplemented by the contribution of Batoréo (1998), we identify the recurring elements in the structure of the narratives under study: abstract, orientation, complication, resolution, evaluation, and coda. The notion of genre presented in Marcuschi (2002) is also addressed, in a complementary way. The research is both quantitative and qualitative, with a descriptive and analytical-interpretive orientation. In the corpus analysis we consider the following categories: the miniconto as a discourse genre; the compositional structure of the narrative; informativity (discursive progression, thematic and narrative coherence, topical-referential distribution); and informational salience (figure/ground). Our initial hypothesis, that 9th-year students would outperform 6th-year students and that private-school students would outperform public-school students, was not confirmed: the comparative study revealed that the groups perform similarly in narrative construction, making use of the same strategies.
Abstract:
This work presents a numerical analysis of nonlinear trusses subjected to thermomechanical actions using the Finite Element Method (FEM). The proposed formulation, the so-called positional FEM, is based on the minimum potential energy theorem written in terms of nodal positions instead of displacements. The study considers the effects of both geometric and material nonlinearities. For dynamic problems, a comparison between different time-integration algorithms is performed. The formulation is extended to impact problems between trusses and a rigid wall, where the nodal positions are constrained by a null-penetration condition. In addition, a thermodynamically consistent formulation is presented, based on the first and second laws of thermodynamics and on the Helmholtz free energy, for analyzing dynamic problems of truss structures with thermoelastic and thermoplastic behavior. The numerical results of the proposed formulation are compared with examples found in the literature.
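The core idea of the positional formulation lends itself to a compact illustration. The sketch below (our own minimal example, not the thesis code) writes the strain energy of each bar of a two-bar von Mises truss directly in terms of the current nodal positions via the Green-Lagrange strain, and finds static equilibrium by minimizing the total potential energy; the geometry, material values, and load are assumed for illustration only.

```python
# Minimal sketch of the positional-FEM idea for a static truss (illustrative
# example, not the thesis code): the total potential energy is written
# directly in terms of nodal positions and minimized numerically.
import numpy as np
from scipy.optimize import minimize

# Von Mises two-bar truss: supports at (0, 0) and (2, 0), apex loaded downward.
X0 = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.5]])  # reference positions
bars = [(0, 2), (1, 2)]          # element connectivity
E_mod, A = 210e9, 1e-4           # Young's modulus (Pa), cross-section (m^2)
F_ext = np.array([0.0, -1.0e5])  # load applied at the apex (node 2)

L0 = [np.linalg.norm(X0[j] - X0[i]) for i, j in bars]

def potential(x_apex):
    """Total potential energy as a function of the apex position only
    (the two support nodes are fixed)."""
    x = X0.copy()
    x[2] = x_apex
    U = 0.0
    for (i, j), l0 in zip(bars, L0):
        l = np.linalg.norm(x[j] - x[i])
        green = (l**2 - l0**2) / (2.0 * l0**2)     # Green-Lagrange strain
        U += 0.5 * E_mod * green**2 * A * l0       # Saint-Venant-Kirchhoff energy
    return U - F_ext @ x_apex                      # minus external work

res = minimize(potential, X0[2], method="BFGS")
print("Equilibrium apex position:", res.x)
```

Extending this to dynamics, contact, and thermoplasticity, as the thesis does, amounts to adding inertial, constraint, and dissipative terms to the same position-based functional.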
Abstract:
Temporal-order judgment (TOJ) and simultaneity judgment (SJ) tasks are used to study differences in speed of processing across sensory modalities, stimulus types, or experimental conditions. Matthews and Welch (2015) reported that observed performance in SJ and TOJ tasks is superior when visual stimuli are presented in the left visual field (LVF) compared to the right visual field (RVF), revealing an LVF advantage presumably reflecting attentional influences. Because observed performance reflects the interplay of the perceptual and decisional processes involved in carrying out the tasks, analyses that separate out these influences are needed to determine the origin of the LVF advantage. We re-analyzed the data of Matthews and Welch (2015) using a model of performance in SJ and TOJ tasks that separates out these influences. In these analyses, parameter estimates capturing the operation of perceptual processes did not differ between hemifields, whereas parameter estimates capturing the operation of decisional processes did. In line with other evidence, perceptual processing also did not differ between SJ and TOJ tasks. Thus, the LVF advantage occurs with identical speeds of processing in both visual hemifields. If attention is responsible for the LVF advantage, it does not exert its influence via prior entry.
Abstract:
Omnibus tests of significance in contingency tables use statistics of the chi-square type. When the null is rejected, residual analyses are conducted to identify cells in which observed frequencies differ significantly from expected frequencies. Residual analyses are thus conditioned on a significant omnibus test. Conditional approaches have been shown to substantially alter type I error rates in cases involving t tests conditional on the results of a test of equality of variances, or tests of regression coefficients conditional on the results of tests of heteroscedasticity. We show that residual analyses conditional on a significant omnibus test are also affected by this problem, yielding type I error rates that can be up to 6 times larger than nominal rates, depending on the size of the table and the form of the marginal distributions. We explored several unconditional approaches in search of a method that maintains the nominal type I error rate and found that a bootstrap correction for multiple testing achieves this goal. The validity of this approach is documented for two-way contingency tables in the contexts of tests of independence, tests of homogeneity, and fitting psychometric functions. Computer code in MATLAB and R to conduct these analyses is provided as Supplementary Material.
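To make the correction concrete, here is a minimal sketch (our own illustration; the paper's actual routines are in its Supplementary Material) of a bootstrap max-statistic correction: tables are resampled under the independence null, the maximum absolute adjusted standardized residual of each resampled table is recorded, and a high quantile of those maxima serves as the family-wise critical value for the observed residuals.

```python
# Minimal sketch of a bootstrap max-statistic correction for cell-wise
# residual tests in a two-way contingency table, under the independence null.
import numpy as np

rng = np.random.default_rng(1)

def std_residuals(table):
    """Adjusted standardized residuals under independence."""
    n = table.sum()
    r = table.sum(axis=1, keepdims=True) / n   # row proportions
    c = table.sum(axis=0, keepdims=True) / n   # column proportions
    expected = n * r * c
    return (table - expected) / np.sqrt(expected * (1 - r) * (1 - c))

def bootstrap_max_critical(table, n_boot=5000, alpha=0.05):
    """Critical value for max |residual| from tables resampled under the null."""
    n = int(table.sum())
    r = table.sum(axis=1) / n
    c = table.sum(axis=0) / n
    p_null = np.outer(r, c).ravel()            # product of observed marginals
    maxima = np.empty(n_boot)
    for b in range(n_boot):
        counts = rng.multinomial(n, p_null).reshape(table.shape)
        maxima[b] = np.abs(std_residuals(counts)).max()
    return np.quantile(maxima, 1 - alpha)

obs = np.array([[20, 5, 10], [8, 15, 12]], dtype=float)  # toy data
crit = bootstrap_max_critical(obs)
flagged = np.abs(std_residuals(obs)) > crit   # cells significant family-wise
print(crit, "\n", flagged)
```

Because the critical value comes from the distribution of the maximum across all cells, the family-wise type I error rate is controlled without conditioning on the omnibus test.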
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common case in which two or all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
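The model-based functions can be sketched compactly. The snippet below (our own illustration with assumed parameter values; the actual routines fit these functions to data and also handle response errors and bias) computes SJ3 response probabilities from an independent-channels model in which each stimulus's arrival latency is a shifted exponential and "simultaneous" is reported when the latency difference falls within a resolution window delta.

```python
# Minimal sketch of the independent-channels model: arrival latencies are
# shifted exponentials and judgments follow from the latency difference D
# and a resolution window delta. All parameter values are assumed.
import numpy as np

def diff_cdf(d, lam1, lam2):
    """CDF of Y - X with X ~ Exp(lam1), Y ~ Exp(lam2)."""
    d = np.asarray(d, dtype=float)
    pos = 1.0 - (lam1 / (lam1 + lam2)) * np.exp(-lam2 * np.clip(d, 0, None))
    neg = (lam2 / (lam1 + lam2)) * np.exp(lam1 * np.clip(d, None, 0))
    return np.where(d >= 0, pos, neg)

def sj3_probs(soa, lam1=1/40, lam2=1/40, tau=10.0, delta=50.0):
    """P('1st first'), P('simultaneous'), P('2nd first') at a given SOA (ms).
    D = T2 - T1 is the shifted difference of the two arrival latencies."""
    shift = soa + tau
    hi = diff_cdf(delta - shift, lam1, lam2)
    lo = diff_cdf(-delta - shift, lam1, lam2)
    return 1.0 - hi, hi - lo, lo

for soa in (-100, 0, 100):
    print(soa, np.round(sj3_probs(soa), 3))
```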
Abstract:
Morgan, Dillenburger, Raphael, and Solomon have shown that observers can use different response strategies when unsure of their answer and can thus voluntarily shift the location of the psychometric function estimated with the method of single stimuli (MSS; sometimes also referred to as the single-interval, two-alternative method). They wondered whether MSS could distinguish response bias from a true perceptual effect that would also shift the location of the psychometric function. We demonstrate theoretically that the inability to distinguish response bias from perceptual effects is an inherent shortcoming of MSS, although a three-response format that also includes an "undecided" response option may solve the problem under restrictive assumptions whose validity cannot be tested with MSS data. We also show that a proper two-alternative forced-choice (2AFC) task with the three-response format is free of all these problems, so that bias and perceptual effects can easily be separated out. The use of a three-response 2AFC format is essential to eliminate a confound (response bias) in studies of perceptual effects and, hence, to eliminate a threat to the internal validity of research in this area.
Abstract:
Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks, in both of which two stimuli are presented with some temporal delay and observers must judge the order of presentation. Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework in which their implications for observed performance can be studied. TOJ tasks imply specific decisional components that explain the discrepancy between results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks, isolating the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for the observed differences between them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on the perception of temporal order.
Abstract:
Recent studies have reported that flanking stimuli broaden the psychometric function and lower detection thresholds. In the present study, we measured psychometric functions for detection and discrimination with and without flankers to investigate whether these effects occur throughout the contrast continuum. Our results confirm that lower detection thresholds with flankers are accompanied by broader psychometric functions. Psychometric functions for discrimination reveal that discrimination thresholds with and without flankers are similar across standard levels, and that the broadening of psychometric functions with flankers disappears as standard contrast increases, to the point that psychometric functions at high standard levels are virtually identical with or without flankers. Threshold-versus-contrast (TvC) curves with flankers differ from TvC curves without flankers only in occasional shallower dippers and lower branches on the left of the dipper, but they run virtually superimposed at high standard levels. We discuss differences between our results and others in the literature, which are likely attributable to the differential vulnerability of alternative psychophysical procedures to the effects of presentation order. We show that different models of flanker facilitation can fit the data equally well, which stresses that succeeding at fitting a model does not validate it in any sense.
Abstract:
Lapid, Ulrich, and Rammsayer (2008) reported that estimates of the difference limen (DL) from a two-alternative forced choice (2AFC) task are higher than those obtained from a reminder task. This article reanalyzes their data in order to correct an error in their estimates of the DL from 2AFC data. We also extend the psychometric functions fitted to data from both tasks to incorporate an extra parameter that has been shown to allow obtaining accurate estimates of the DL that are unaffected by lapses. Contrary to Lapid et al.'s conclusion, our reanalysis shows that DLs estimated with the 2AFC task are only minimally (and not always significantly) larger than those estimated with the reminder task. We also show that their data are contaminated by response bias, and that the small remaining difference between DLs estimated with 2AFC and reminder tasks can be reasonably attributed to the differential effects that response bias has in either task as they were defined in Lapid et al.'s experiments. Finally, we discuss a novel approach presented by Ulrich and Vorberg (2009) for fitting psychometric functions to 2AFC discrimination data.
Abstract:
Fixed-step-size (FSS) and Bayesian staircases are widely used methods to estimate sensory thresholds in 2AFC tasks, although a direct comparison of both types of procedure under identical conditions has not previously been reported. A simulation study and an empirical test were conducted to compare the performance of optimized Bayesian staircases with that of four optimized variants of the FSS staircase differing in their up-down rule. The ultimate goal was to determine whether FSS or Bayesian staircases are the better choice in experimental psychophysics. The comparison considered the properties of the estimates (i.e. bias and standard errors) in relation to their cost (i.e. the number of trials to completion). The simulation study showed that mean estimates of Bayesian and FSS staircases are dependable when sufficient trials are given and that, in both cases, the standard deviation (SD) of the estimates decreases with the number of trials, although the SD of Bayesian estimates is always lower than that of FSS estimates (and, thus, Bayesian staircases are more efficient). The empirical test did not support these conclusions, as (1) neither procedure rendered estimates converging on some value, (2) standard deviations did not follow the expected pattern of decrease with the number of trials, and (3) both procedures appeared to be equally efficient. Potential factors explaining the discrepancies between simulation and empirical results are discussed and, all things considered, a sensible recommendation for psychophysicists is to run no fewer than 18 and no more than 30 reversals of an FSS staircase implementing the 1-up/3-down rule.
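For reference, the recommended procedure is easy to state in code. The sketch below (our illustration, with an assumed logistic observer and arbitrary parameter values) runs a 1-up/3-down FSS staircase, records reversals, and estimates the threshold as the mean level at the reversals after discarding the first few.

```python
# Minimal sketch of a fixed-step-size 1-up/3-down staircase of the kind
# recommended above, run on a simulated 2AFC observer.
import numpy as np

rng = np.random.default_rng(7)

def p_correct(level, threshold=0.0, slope=1.0):
    """Simulated 2AFC observer: logistic psychometric function from 0.5 to 1."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (level - threshold)))

def staircase_1up3down(start=3.0, step=0.5, n_reversals=24):
    level, direction = start, -1
    consecutive_correct, reversals = 0, []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(level)
        if correct:
            consecutive_correct += 1
            move = -step if consecutive_correct == 3 else 0.0
            if consecutive_correct == 3:
                consecutive_correct = 0
        else:
            consecutive_correct = 0
            move = step
        if move != 0.0:
            new_direction = 1 if move > 0 else -1
            if new_direction != direction:
                reversals.append(level)
            direction = new_direction
            level += move
    # Conventional estimate: mean level at the reversals (first few discarded)
    return np.mean(reversals[4:])

print("Threshold estimate:", staircase_1up3down())
```

The 1-up/3-down rule converges on the stimulus level yielding roughly 79% correct responses, which is why the mean of the reversal levels serves as the threshold estimate.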
Abstract:
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. The indices used in the sequential procedures were (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. a lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and of its similarity to the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates, and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
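As an illustration of what such a dynamic stopping rule looks like, the sketch below (our own; the parameter values, placement rule, and stopping criterion are all assumed) implements index (2), stopping a grid-based Bayesian procedure when the posterior standard deviation of the threshold falls below a preset value.

```python
# Minimal sketch of a Bayesian threshold procedure with a posterior-
# standard-deviation stopping rule, one of the index types above.
import numpy as np

rng = np.random.default_rng(3)

grid = np.linspace(-3, 3, 301)              # candidate thresholds
posterior = np.ones_like(grid) / grid.size  # flat prior

def psi(level, threshold, slope=2.0, gamma=0.5, lam=0.02):
    """Assumed 2AFC psychometric function."""
    return gamma + (1 - gamma - lam) / (1 + np.exp(-slope * (level - threshold)))

true_threshold = 0.4
for trial in range(200):
    # Place the next trial at the posterior mean (a simple placement rule).
    level = np.sum(grid * posterior)
    correct = rng.random() < psi(level, true_threshold)
    like = psi(level, grid) if correct else 1 - psi(level, grid)
    posterior *= like
    posterior /= posterior.sum()
    mean = np.sum(grid * posterior)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * posterior))
    if sd < 0.1:                             # dynamic stopping criterion
        break

print(f"Stopped after {trial + 1} trials; estimate = {mean:.3f}")
```

The paper's conclusion is precisely that criteria of this kind tend to stop runs early on undependable estimates, so the apparent savings over a fixed-length procedure tend to be illusory.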
Abstract:
Single-molecule manipulation experiments on molecular motors provide essential information about the rates and conformational changes of the reaction steps located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model, all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry into and exit from the long-pause state, as well as models with cycling in both directions. Additionally, assuming that detailed balance holds, which forbids cycling, reduces the range of parameter values (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry into and exit from the long-pause state.
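The two independent pause model is simple to simulate. The Gillespie-style sketch below (our own illustration; all rate constants are assumed, not the paper's fitted values) draws exponential dwell times for an active state that can enter either a short-pause or a long-pause state, each with its own exit rate.

```python
# Minimal Gillespie-style sketch of the two independent pause model: an
# active unwinding state that can enter either a short-pause or a
# long-pause state, each with its own exit rate.
import numpy as np

rng = np.random.default_rng(11)

# Assumed rate constants (1/s), chosen only for illustration.
k_to_short, k_from_short = 0.5, 2.0   # entry/exit, short pauses
k_to_long, k_from_long = 0.05, 0.2    # entry/exit, long pauses

def simulate(t_max=2000.0):
    """Return the list of (pause_type, dwell_time) events up to t_max."""
    t, state, pauses = 0.0, "active", []
    while t < t_max:
        if state == "active":
            rates = {"short": k_to_short, "long": k_to_long}
            total = sum(rates.values())
            t += rng.exponential(1.0 / total)
            state = rng.choice(list(rates), p=[r / total for r in rates.values()])
        else:
            k_exit = k_from_short if state == "short" else k_from_long
            dwell = rng.exponential(1.0 / k_exit)
            pauses.append((state, dwell))
            t += dwell
            state = "active"
    return pauses

events = simulate()
for kind in ("short", "long"):
    d = [dt for s, dt in events if s == kind]
    print(kind, len(d), "pauses, mean dwell", round(np.mean(d), 2), "s")
```

A connected variant would add transitions linking the pause states themselves, which is what leaves the range of compatible parameter values discussed above.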
Abstract:
This project consists of the development of a system for simulating rescue missions using teams of robots, where each robot has its own goals and must coordinate with its teammates to carry out the rescue mission successfully in dynamic scenarios. A scenario contains:
- Robot agents: the entities of the system in charge of rescue-related tasks, such as exploring the terrain or rescuing a victim. They are organized hierarchically; that is, there is a leader in charge of assigning tasks to the other robots, which act as subordinates.
- Victims: the targets to be rescued in the mission. Each has an identifier, a location, and a life expectancy.
- Obstacles: delimit areas the robots cannot cross. They simulate walls, rocks, trees, and so on; that is, any kind of structure found in a real scenario.
- Safe zone: marks the point on the map to which the robots move the victims during the rescue. It represents what in a real rescue would be a camp or a hospital.
The system allows users to:
- Create and manage simulation scenarios.
- Define robot teams with different members, goals, and behaviors.
- Define organizational models for the teams and coordination strategies.
- Carry out individual and group goals to save the victims, taking them to the safe zone while avoiding obstacles.
- Run simulation experiments: test different team configurations with a variable number of robots, several victims in different places, and independent scenarios.
The starting point was the ROSACE project (Robots et Systèmes AutoCommunicants Embarqués / self-communicating embedded robots and systems), which is built on the ICARO tool, a lightweight Java software-component infrastructure based on Agents, Resources, and Organizations for developing distributed applications. The starting point already implemented a preliminary version of the project, capable of distributing goals among the robots and having them reach the target location. The present project uses the architectural pattern of ROSACE and part of its infrastructure, but develops an original system with new tools for defining and managing scenarios, a more realistic model of robot behavior, and control of the simulation process, so as to include possible robot failures and to support the individual and collective study of team members.
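As a compact, language-neutral illustration of these entities and the hierarchical organization (the actual system is built in Java on the ICARO infrastructure; the names and the assignment rule below are our own assumptions), the sketch models victims, robots, and a leader role that greedily assigns the most urgent victims to the nearest free subordinates.

```python
# Illustrative sketch (not the ICARO/ROSACE Java code) of scenario entities
# and the leader's greedy assignment of victims to subordinates by distance.
from dataclasses import dataclass
import math

@dataclass
class Victim:
    vid: int
    x: float
    y: float
    life_expectancy: float  # remaining time before the victim is lost

@dataclass
class Robot:
    rid: int
    x: float
    y: float
    busy: bool = False

def assign_tasks(subordinates, victims):
    """Leader role: assign each victim to the nearest free subordinate,
    most urgent victims first."""
    assignments = {}
    for v in sorted(victims, key=lambda v: v.life_expectancy):
        free = [r for r in subordinates if not r.busy]
        if not free:
            break
        nearest = min(free, key=lambda r: math.hypot(r.x - v.x, r.y - v.y))
        nearest.busy = True
        assignments[v.vid] = nearest.rid
    return assignments

robots = [Robot(1, 0, 0), Robot(2, 10, 10)]
victims = [Victim(1, 8, 9, 30.0), Victim(2, 1, 2, 5.0)]
print(assign_tasks(robots, victims))  # urgent victim 2 -> robot 1, etc.
```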
Abstract:
Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items, usually too many to administer to every subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; each subtest is then administered to a different sample, and the estimation algorithms are run once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions for the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternative configurations of the anchor-item design. The factors defining variants of the design are (a) subtest size, (b) the balance of common and unique items per subtest, (c) the characteristics of the common items, and (d) the criteria for the distribution of unique items across subtests. The results indicate that maximizing accuracy in item parameter recovery requires subtests with the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distributing unique items do not affect accuracy.
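The design itself is easy to picture with a small sketch (our own, with assumed bank and subtest sizes): a block of anchor items appears in every subtest, the remaining items are distributed across subtests, and each sample responds only to its own subtest, which leaves structured missingness in the aggregated data array.

```python
# Minimal sketch of building a concurrent anchor-item design: a bank of
# items is split into subtests that share a block of common (anchor) items,
# and each sample sees only its own subtest.
import numpy as np

n_items, n_subtests, n_common = 60, 3, 6
n_per_sample = 100

common = list(range(n_common))                       # anchors, in every subtest
unique = np.array_split(np.arange(n_common, n_items), n_subtests)
subtests = [common + u.tolist() for u in unique]

rng = np.random.default_rng(5)
data = np.full((n_subtests * n_per_sample, n_items), np.nan)
for s, items in enumerate(subtests):
    rows = slice(s * n_per_sample, (s + 1) * n_per_sample)
    # Placeholder responses: a real study would administer Likert items here.
    data[rows, items] = rng.integers(1, 6, size=(n_per_sample, len(items)))

# Responses are missing exactly where a sample never saw an item.
print([len(t) for t in subtests], "items per subtest")
print(np.isnan(data).mean(axis=0).round(2))  # missingness per item column
```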