800 results for task performance
Abstract:
This study was designed to address questions regarding the effects of sex and leadership style on teacher perceptions of principal effectiveness. On a researcher-designed instrument, middle school teachers rated the effectiveness of a scenario principal's response in several situations. The responses reflected varying levels of Task and Relationship Behavior. The design incorporated two between-subjects factors (Teacher Sex and Principal Sex) and one within-subjects factor (Leadership Style), which was treated as a repeated measure. An analysis of variance revealed no significant effects except for Leadership Style. Overall, High Task/High Relationship behavior was rated significantly higher and Low Task/Low Relationship significantly lower than the others. The null hypothesis concerning differences could not be rejected, and the stated research hypotheses were not supported. Additional analyses of variance were conducted substituting subject demographic variables for Teacher Sex in the research design. No significant interactions or main effects other than Leadership Style were noted when either Age or Ethnicity was substituted. A significant two-way interaction was noted for Teacher Experience and Leadership Style (p = .0316). Less experienced teachers rated the principal's performance lower when exhibiting a High Task/Low Relationship style than did more experienced teachers. A significant three-way interaction was noted for Administrative Aspiration x Principal Sex x Leadership Style (p = .0294). Teachers who indicated an intent to enter administration differed more in their ratings between male and female principals exhibiting the mixed styles of High Task/Low Relationship and Low Task/High Relationship than did teachers who indicated no aspiration or were undecided. Sex of the teacher appears less important than sex of the principal in performance ratings.
Results suggest further study of the effects of teacher experience and teacher administrative aspiration on perceptions of principal effectiveness.
Abstract:
Experimental evidence suggests that derived relational responding (DRR) may provide a behavioral model of complex language phenomena. This study assigned 72 students to groups based on their performance on a complex relational task and found that DRR performance relates to scores on the WAIS-III.
Abstract:
Conceptual database design is an unusually difficult and error-prone task for novice designers. This study examined how two training approaches, rule-based and pattern-based, might improve performance on database design tasks. A rule-based approach prescribes a sequence of rules for modeling conceptual constructs and the actions to be taken at various stages of developing a conceptual model. A pattern-based approach presents data modeling structures that occur frequently in practice and prescribes guidelines on how to recognize and use these structures. This study describes the conceptual framework, experimental design, and results of a laboratory experiment that employed novice designers to compare the effectiveness of the two training approaches (between subjects) at three levels of task complexity (within subjects). Results indicate an interaction effect between treatment and task complexity. The rule-based approach was significantly better in the low-complexity and high-complexity cases; there was no statistical difference in the medium-complexity case. Designer performance fell significantly as complexity increased. Overall, although the rule-based approach was not significantly superior to the pattern-based approach in all instances, it outperformed the pattern-based approach at two of the three complexity levels. The primary contributions of the study are (1) the operationalization of the complexity construct to a degree not addressed in previous studies; (2) the development of a pattern-based instructional approach to database design; and (3) the finding that the effectiveness of a particular training approach may depend on the complexity of the task.
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of such systems' architecture is an essential prerequisite for preventing quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation in the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during software system evolution. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results across releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty.
This study analyzed 21 releases (seven from each system), totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
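The reported ROC area can be reproduced from any scored commit set with a simple rank statistic; the sketch below is a minimal, generic implementation (the commit scores and labels shown are hypothetical, not the mined data):

```python
import numpy as np

def roc_auc(scores, labels):
    # Area under the ROC curve, computed as the probability that a
    # randomly chosen positive (degrading) commit scores higher than a
    # randomly chosen negative one, counting ties as half.
    scores, labels = np.asarray(scores), np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores for four commits (label 1 = degraded performance).
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])  # 0.75
```

An AUC of 0.60, as reported above, means this probability is 60%: 10 percentage points better than the 50% achieved by a random ranking.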
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus essential to optimize the radiation dose delivered in CT examinations. The key to dose optimization is determining the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the exam began, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to translate the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, this study used the largest number of patient models to date, with a representative range of age, weight percentile, and body mass index (BMI).
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
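The convolution idea behind the radiation-field quantification can be illustrated with a toy sketch; the tube-current profile, dose-spread kernel, and organ coefficient below are invented for illustration and are not the thesis's validated values:

```python
import numpy as np

def organ_dose_estimate(tube_current_mA, kernel, organ_coeff):
    # Toy convolution-based estimate: smear the z-axis tube-current (TCM)
    # profile with a dose-spread kernel to approximate the radiation field,
    # then scale by an organ dose coefficient.
    field = np.convolve(tube_current_mA, kernel, mode="same")
    return organ_coeff * field.sum()

# Flat 100 mA profile with an illustrative 3-point kernel and coefficient.
dose = organ_dose_estimate([100.0, 100.0, 100.0], [0.25, 0.5, 0.25], 0.01)
```

In the real method, the kernel captures how dose spreads along the scan axis, so a modulated profile and a constant-current profile with the same total mAs can yield different organ doses.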
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on a commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
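A common simplified form of such image-based noise addition, assuming spatially uniform zero-mean Gaussian quantum noise whose variance scales inversely with dose (a simplification relative to the chapter's technique), is:

```python
import numpy as np

def simulate_reduced_dose(image, sigma_full, dose_fraction, seed=None):
    # At a dose fraction f, quantum noise variance grows by a factor 1/f,
    # so the extra noise injected on top of the full-dose image has
    # sigma_add = sigma_full * sqrt(1/f - 1).
    rng = np.random.default_rng(seed)
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)

# Quarter-dose simulation on a flat test image with full-dose noise of 10 HU.
low_dose = simulate_reduced_dose(np.zeros((256, 256)), 10.0, 0.25, seed=0)
```

For a quarter-dose simulation the added noise has sigma 10 * sqrt(3) ≈ 17.3 HU, so the total noise in the synthetic image is about 20 HU, twice the full-dose level.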
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
A comprehensive approach to sport expertise should consider the entire situation that is comprised of the person, the task, the environment, and the complex interplay of these components (Hackfort, 1986). Accordingly, the Developmental Model of Sport Participation (Côté, Baker, & Abernethy, 2007; Côté & Fraser-Thomas, 2007) provides a comprehensive framework for sport expertise that outlines different pathways of involvement in sport. In pathways one and two, early sampling serves as the foundation for both elite and recreational sport participation. Early sampling is based on two main elements of childhood sport participation: 1) involvement in various sports and 2) participation in deliberate play. In contrast, pathway three shows the course to elite performance through early specialization in one sport. Early specialization implies a focused involvement on one sport and a large number of deliberate practice activities with the goal of improving sport skills and performance during childhood. This paper proposes seven postulates regarding the role that sampling and deliberate play, as opposed to specialization and deliberate practice, can have during childhood in promoting continued participation and elite performance in sport.
Abstract:
The question of whether people totally blind since infancy process allocentric or ‘external’ spatial information like the sighted has caused considerable debate within the literature. Due to the extreme rarity of the population, researchers have often included individuals with Retinopathy of Prematurity (RoP, associated with over-oxygenation at birth) within the sample. However, RoP is inextricably confounded with prematurity per se, and prematurity without visual disability has been associated with spatial processing difficulties. In this experiment, blindfolded sighted participants and two groups of functionally totally blind participants heard text descriptions from a survey (allocentric) or route (egocentric) perspective. One blind group had lost their sight due to RoP and the second before 24 months of age. The accuracy of participants’ mental representations derived from the text descriptions was assessed via questions and maps. The RoP participants had lower scores than the sighted and early blind participants, who performed similarly. In other words, it was not visual impairment alone that resulted in impaired allocentric spatial performance in this task, but visual impairment together with RoP. This finding may help explain the contradictions within the existing literature on the role of vision in allocentric spatial processing.
Abstract:
The article presents a study of a CEFR B2-level reading subtest that is part of the Slovenian national secondary school leaving examination in English as a foreign language, and compares the test-takers’ actual performance (objective difficulty) with test-taker and expert perceptions of item difficulty (subjective difficulty). The study also analyses the test-takers’ comments on item difficulty obtained from a while-reading questionnaire. The results are discussed within the framework of existing research on (the assessment of) reading comprehension and are addressed with regard to their implications for item writing, FL teaching, and curriculum development.
Abstract:
Person re-identification involves recognizing a person across non-overlapping camera views, with different pose, illumination, and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person’s appearance as a low-dimensional feature vector that is invariant to the common appearance variations encountered in the re-identification problem. Specifically, a Siamese-network architecture is used to train a feature extraction network on pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification and identification and to recognise attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
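The pairwise verification part of such a Siamese objective is typically a contrastive loss; a minimal numpy sketch (the full multi-task objective would add identification and attribute-classification terms, which are omitted here):

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    # Pull features of matching pairs together in feature space; push
    # non-matching pairs at least `margin` apart (Hadsell-style
    # contrastive loss over a pair of embedding vectors).
    d = np.linalg.norm(np.asarray(f1) - np.asarray(f2))
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

For a dissimilar pair already farther apart than the margin the loss is zero, so the network spends its capacity only on pairs that are still confusable.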
Abstract:
Recent evidence has highlighted the important role that number ordering skills play in arithmetic abilities (e.g., Lyons & Beilock, 2011). In fact, Lyons et al. (2014) demonstrated that although at the start of formal mathematics education number comparison skills are the best predictors of arithmetic performance, from around the age of 10, number ordering skills become the strongest numerical predictors of arithmetic abilities. In the current study we demonstrated that number comparison and ordering skills were both significantly related to arithmetic performance in adults, and the effect size was greater in the case of ordering skills. Additionally, we found that the effect of number comparison skills on arithmetic performance was partially mediated by number ordering skills. Moreover, performance on comparison and ordering tasks involving the months of the year was also strongly correlated with arithmetic skills, and participants displayed similar (canonical or reverse) distance effects on the comparison and ordering tasks involving months as when the tasks included numbers. This suggests that the processes responsible for the link between comparison and ordering skills and arithmetic performance are not specific to the domain of numbers. Finally, a factor analysis indicated that performance on comparison and ordering tasks loaded on a factor which included performance on a number line task and self-reported spatial thinking styles. These results substantially extend previous research on the role of order processing abilities in mental arithmetic.
Abstract:
Safety on public transport is a major concern for the relevant authorities. We
address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers is modelled as a time-series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured here through a bottom-up Hierarchical Task Network planner that, along with common-sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case-study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.
Abstract:
Instrumented gait assessment is usually performed by simply asking subjects to walk (single task, ST). This condition does not reflect daily life: everyday locomotion requires adapting to individual needs and involves concurrent cognitive activities. Dual-Task (DT) paradigms are used to assess changes in the gait control strategy under everyday-life conditions. In particular, the performance, variability, and stability indices used in motor control assessment could be useful for evaluating cognitive interference during walking. The aim of this work is to evaluate how these indices change under Dual-Task conditions. Sixteen young, healthy students of the Biomedical Engineering Faculty in Cesena were recruited and asked to walk along a straight, obstacle-free 250 m outdoor path under two conditions: performing the walking task alone (ST); and adding to it a serial subtraction of 7 aloud, starting from a random number (DT). Acceleration and angular velocity signals were acquired through three tri-axial inertial sensors placed on the trunk (L5) and on the ankles. From these data, performance indices (number of steps, cadence, speed, and test execution time), variability indices (Standard Deviation, Coefficient of Variation, Index of the Variance, Nonstationary Index, Poincaré Plots), and stability indices (Harmonic Ratio and Index of Harmonicity) were computed in both conditions (ST and DT), and a statistical comparison between the two tasks was performed. The statistical analyses showed that the DT mainly affects the performance indices (number of steps, cadence, speed, and test execution time) and, to a lesser extent, the variability and stability indices.
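Two of the variability indices named above (Standard Deviation and Coefficient of Variation) reduce to simple statistics over the stride-time series; a minimal sketch, assuming stride times have already been extracted from the inertial signals (the values shown are hypothetical):

```python
import numpy as np

def stride_variability(stride_times_s):
    # Standard Deviation (s) and Coefficient of Variation (%) of the
    # stride-time series, two common gait variability indices.
    st = np.asarray(stride_times_s, dtype=float)
    sd = st.std(ddof=1)
    cv = 100.0 * sd / st.mean()
    return sd, cv

# Hypothetical stride times (s) from a single-task walk.
sd, cv = stride_variability([1.0, 1.1, 0.9, 1.0])
```

A higher CoV under DT than ST would indicate that the cognitive load makes the stride timing less regular, independently of the walker's average speed.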
Abstract:
This paper discusses areas for future research by addressing accounting issues faced by management accountants practicing in hospitality organizations. Specifically, the article focuses on the use of the uniform system of accounts by operating properties, the usefulness of allocating support costs to operated departments, extending our understanding of operating costs and performance measurement systems, and the certification of practicing accountants.