769 results for Human performance
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Federal Highway Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
The duration of movements made to intercept moving targets decreases and movement speed increases when interception requires greater temporal precision. Changes in target size and target speed can have the same effect on required temporal precision, but the response to these changes differs: changes in target speed elicit larger changes in response speed. A possible explanation is that people attempt to strike the target in a central zone that does not vary much with variation in physical target size: the effective size of the target is relatively constant over changes in physical size. Three experiments are reported that test this idea. Participants performed two tasks: (1) strike a moving target with a bat moved perpendicular to the path of the target; (2) press on a force transducer when the target was in a location where it could be struck by the bat. Target speed was varied and target size held constant in experiment 1. Target speed and size were co-varied in experiment 2, keeping the required temporal precision constant. Target size was varied and target speed held constant in experiment 3 to give the same temporal precision as experiment 1. Duration of hitting movements decreased and maximum movement speed increased with increases in target speed and/or temporal precision requirements in all experiments. The effects were largest in experiment 1 and smallest in experiment 3. Analysis of a measure of effective target size (standard deviation of strike locations on the target) failed to support the hypothesis that performance differences could be explained in terms of effective size rather than actual physical size. In the pressing task, participants produced greater peak forces and shorter force pulses when the temporal precision required was greater, showing that the response to increasing temporal precision generalizes to different responses. It is concluded that target size and target speed have independent effects on performance.
Abstract:
Air Traffic Control Laboratory Simulator (ATC-lab) is a new low- and medium-fidelity task environment that simulates air traffic control. ATC-lab allows the researcher to study human performance of tasks under tightly controlled experimental conditions in a dynamic, spatial environment. The researcher can create standardized air traffic scenarios by manipulating a wide variety of parameters. These include temporal and spatial variables. There are two main versions of ATC-lab. The medium-fidelity simulator provides a simplified version of en route air traffic control, requiring participants to visually search a screen and both recognize and resolve conflicts so that adequate separation is maintained between all aircraft. The low-fidelity simulator presents pairs of aircraft in isolation, controlling the participant's focus of attention, which provides a more systematic measurement of conflict recognition and resolution performance. Preliminary studies have demonstrated that ATC-lab is a flexible tool for applied cognition research.
Abstract:
Considering the rapid expansion of Distance Education (EaD) in Brazil and the challenges it still faces, it is important to look carefully at the issues involved. From the standpoint of Psychology, a focus on the individual who works in distance education is fundamental. In this context, the work of the tutor stands out, one slice of the complex world of distance-education work and the subject of this study. A tutor's professional performance depends on technological, environmental, and psychosocial factors. The objective of this study was to verify the impact that self-efficacy beliefs at work, perceived social support, and work engagement have on the performance of tutors of courses offered at a distance. The research was cross-sectional and was carried out at a Brazilian university headquartered in the State of São Paulo. The 227 participating tutors worked in different parts of Brazil; 62% were women, 65% were married, 66% were between 25 and 45 years old, and 97% had completed at least a specialization course. Data were collected electronically. Valid and reliable scales of self-efficacy, engagement, and perceived social support at work were applied, along with a sociodemographic questionnaire. Documentary research was also conducted to gather performance information. Means, standard deviations, medians, and quartiles revealed that the tutors have good levels of self-efficacy, work engagement, and performance. More than 75% of them perceive that they have access to sufficient and relevant information and can count on trusting, supportive relationships at work, while half perceive that they have adequate material, financial, technical, and managerial resources.
Analyses of variance revealed no performance differences between tutors who hold other professional activities besides tutoring and those who do not, nor between those with and without specific training for distance-education tutoring. Multiple linear regressions revealed that self-efficacy beliefs at work, perceived social support, and work engagement do not significantly explain the variance in tutor performance. These results were discussed mainly in light of the small variability of the performance scores, given that 98.8% of the tutors scored above the mean of the assessment instrument used by the institution, which may indicate difficulties in the evaluation process or problems with the instrument's validity. The discussion also addressed human performance as a complex, multidimensional phenomenon, examining the role of the study variables in its determination in light of the specialized literature. Finally, methodological, theoretical, and practical implications were presented, along with the study's limitations and a research agenda.
Abstract:
The overall aim of this study was to examine experimentally the effects of noise upon short-term memory tasks in the hope of shedding further light upon the apparently inconsistent results of previous research in the area. Seven experiments are presented. The first chapter of the thesis comprised a comprehensive review of the literature on noise and human performance, while in the second chapter some theoretical questions concerning the effects of noise were considered in more detail, followed by a more detailed examination of the effects of noise upon memory. Chapter 3 described an experiment which examined the effects of noise on attention allocation in short-term memory as a function of list length. The results provided only weak evidence of increased selectivity in noise. In further chapters noise effects were investigated in conjunction with various parameters of short-term memory tasks, e.g. the retention interval and presentation rate. The results suggested that noise effects were significantly affected by the length of the retention interval but not by the rate of presentation. Later chapters examined the possibility of differential noise effects on the mode of recall (recall v. recognition) and the type of presentation (sequential v. simultaneous), as well as an investigation of the effect of varying the point of introduction of the noise and the importance of individual differences in noise research. The results of this study were consistent with the hypothesis that noise at presentation facilitates phonemic coding. However, noise during recall appeared to affect the retrieval strategy adopted by the subject.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
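The three components above can be made concrete with a small simulation. The sketch below is an illustration only, not the dissertation's implementation: it assumes an arbitrary task (detect a disk of contrast 5 in white noise of standard deviation 10), uses a simple non-prewhitening matched-filter observer, and measures performance with a detectability index d′.

```python
import numpy as np

rng = np.random.default_rng(0)

# Component 1 -- the task: detect a small disk signal in a noisy 32x32 image.
# The disk radius, contrast, and noise level are made-up illustration values.
n = 32
y, x = np.mgrid[:n, :n]
signal = 5.0 * (((x - n // 2) ** 2 + (y - n // 2) ** 2) <= 4 ** 2)

# Component 2 -- the observer: a non-prewhitening matched filter, whose
# decision statistic is the correlation of the image with the known signal.
def npw_statistic(img, template):
    return float(np.sum(img * template))

noise_sigma = 10.0
trials = 2000
t_present = [npw_statistic(signal + rng.normal(0, noise_sigma, (n, n)), signal)
             for _ in range(trials)]
t_absent = [npw_statistic(rng.normal(0, noise_sigma, (n, n)), signal)
            for _ in range(trials)]

# Component 3 -- the performance measure: detectability index d' computed
# from the signal-present and signal-absent decision-statistic distributions.
d_prime = (np.mean(t_present) - np.mean(t_absent)) / np.sqrt(
    0.5 * (np.var(t_present) + np.var(t_absent)))
```

For this white-noise case, d′ has a closed form (the signal energy over the noise level), so the Monte Carlo estimate lands near 3.5 with these assumed parameters.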
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
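The finding that CNR fails to track detection performance across reconstruction algorithms can be illustrated with a toy example, not taken from the dissertation: two noise fields are scaled to identical pixel standard deviation (hence identical CNR for the same signal), but one is spatially correlated, as iterative reconstruction tends to produce, and matched-filter detectability differs between them.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma = 32, 1500, 10.0
y, x = np.mgrid[:n, :n]
signal = 5.0 * (((x - n // 2) ** 2 + (y - n // 2) ** 2) <= 4 ** 2)

def smooth(img):
    """3x3 box filter via np.roll -- a stand-in for a smoother recon kernel."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / 9.0

def noise_field(correlated):
    w = rng.normal(0, 1, (n, n))
    if correlated:
        w = smooth(w)
    return w / w.std() * sigma   # same pixel std either way => same CNR

def d_prime(correlated):
    tp = [np.sum((signal + noise_field(correlated)) * signal) for _ in range(trials)]
    ta = [np.sum(noise_field(correlated) * signal) for _ in range(trials)]
    return (np.mean(tp) - np.mean(ta)) / np.sqrt(0.5 * (np.var(tp) + np.var(ta)))

# Identical CNR by construction, yet detectability drops for correlated noise.
d_white, d_corr = d_prime(False), d_prime(True)
```

Because the matched-filter statistic sums noise over the signal region, positively correlated noise inflates its variance, so d′ falls even though the per-pixel noise (and CNR) is unchanged, which is the qualitative behavior the abstract describes.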
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
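The repeated-scan, image-subtraction route to an ensemble NPS estimate can be sketched as follows for plain rectangular ROIs (the dissertation's irregular-ROI estimator is more involved and is not reproduced here); the synthetic "scans", noise level, and pixel size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_scans, px = 64, 50, 0.5   # ROI size (pixels), repeats, pixel size in mm

# Stand-in for 50 repeated scans of a static phantom: a fixed deterministic
# "anatomy" pattern plus fresh quantum noise (sigma = 8) in each scan.
anatomy = 50.0 * np.sin(np.linspace(0, 3 * np.pi, n))[None, :] * np.ones((n, 1))
scans = [anatomy + rng.normal(0, 8.0, (n, n)) for _ in range(n_scans)]

# Pairwise subtraction cancels the deterministic background exactly, leaving
# noise-only images (variance doubles, hence the factor 1/2 below).
diffs = [scans[2 * i] - scans[2 * i + 1] for i in range(n_scans // 2)]

# Ensemble NPS: average squared DFT magnitude of the noise-only images,
# normalized by pixel area and image size.
nps = np.mean([np.abs(np.fft.fft2(d)) ** 2 for d in diffs], axis=0)
nps *= (px * px) / (n * n) / 2.0

# Sanity check via Parseval: integrating the NPS over spatial frequency
# should recover the per-pixel noise variance (64 here, since sigma = 8).
var_from_nps = nps.sum() / (px * px * n * n)
```

The Parseval check at the end is a useful habit with NPS code, since the DFT normalization conventions are an easy place to pick up a stray factor of the image size.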
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
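As one plausible instance of such an analytical lesion model (a hypothetical parameterization for illustration, not the project's actual equations), a spherical lesion with a sigmoid edge profile can be voxelized and added to a background volume to form a "hybrid" image whose ground-truth size, contrast, and location are known exactly:

```python
import numpy as np

def lesion_voxels(shape, center, radius_mm, contrast_hu, edge_mm, voxel_mm):
    """Voxelize a spherical lesion whose radial contrast profile falls off
    with a sigmoid edge: contrast / (1 + exp((r - radius) / edge_width)).
    All parameter names and the profile itself are illustrative choices."""
    idx = np.indices(shape).astype(float)
    r = np.sqrt(sum((idx[d] - center[d]) ** 2 for d in range(3))) * voxel_mm
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

# Synthetic liver-like background (60 HU) with a subtle -15 HU lesion inserted.
background = np.full((32, 32, 32), 60.0)
lesion = lesion_voxels((32, 32, 32), (16, 16, 16),
                       radius_mm=6.0, contrast_hu=-15.0, edge_mm=1.0, voxel_mm=1.0)
hybrid = background + lesion
```

At the lesion center the hybrid value sits near 45 HU (background plus almost the full -15 HU contrast), while far from the lesion it returns to the 60 HU background, so detectability or estimability tasks run on `hybrid` have exact ground truth available, which is the advantage the paragraph above describes.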
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
With increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy has implemented optimal staffing techniques before, in the 1990s and 2000s, with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric for carrier deck performance. With PMASCS I explore the effects of two variables on total launch time of 22 aircraft: 1) skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft.
An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included about how these results change if the operations are relieved of the bottleneck of installing the holdback bar at launch time.
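The diminishing-returns behavior behind such an optimal crew count can be illustrated with a toy queueing model. This is a deliberately simplified stand-in for PMASCS, with made-up failure probabilities, repair times, and launch times: aircraft launch one at a time in a fixed order, and a random fraction must first wait for one of a pool of maintenance crews.

```python
import heapq
import random

random.seed(0)

def total_launch_time(n_crews, n_aircraft=22, fail_p=0.3,
                      repair_min=15.0, launch_min=2.0):
    """Toy model of a launch sequence: each aircraft occupies the catapult
    for launch_min minutes; a fraction fail_p need a repair_min-minute fix
    by one of n_crews crews first. All parameter values are illustrative."""
    crew_free = [0.0] * n_crews        # earliest time each crew becomes free
    heapq.heapify(crew_free)
    catapult_free = 0.0
    for _ in range(n_aircraft):
        ready = 0.0
        if random.random() < fail_p:   # this aircraft needs maintenance first
            crew = heapq.heappop(crew_free)
            ready = crew + repair_min
            heapq.heappush(crew_free, ready)
        start = max(ready, catapult_free)
        catapult_free = start + launch_min
    return catapult_free               # time when the last aircraft launches

# Average total launch time for 1..6 crews over 200 simulated sorties.
avg = {k: sum(total_launch_time(k) for _ in range(200)) / 200 for k in range(1, 7)}
```

With these assumed rates, adding a second and third crew cuts the average launch time sharply, while further crews buy much less, the qualitative shape behind an "optimal manning level" knee. The toy model will not reproduce the 3-crew optimum itself, which depends on PMASCS's detailed parameters.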
Abstract:
INTRODUCTION Zero-G parabolic flight reproduces the weightlessness of space for short periods of time. However, motion sickness may affect some fliers. The aim was to assess the extent of this problem and to find possible predictors and modifying factors. METHODS Airbus Zero-G flights consist of 31 parabolas performed in blocks. Each parabola consisted of 20 s of 0 g sandwiched by 20 s of hypergravity of 1.5-1.8 g. The survey covered n=246 person-flights (193 males, 53 females), aged (M+/-SD) 36.0+/-11.3 years. An anonymous questionnaire included a motion sickness rating (1=OK to 6=Vomiting), the Motion Sickness Susceptibility Questionnaire (MSSQ), anti-motion sickness medication, prior Zero-G experience, anxiety level, and other characteristics. RESULTS Participants had lower MSSQ percentile scores (27.4+/-28.0) than the population norm of 50. Motion sickness was experienced by 33%, and 12% vomited. Less motion sickness was predicted by older age, greater prior Zero-G flight experience, medication with scopolamine, and lower MSSQ scores, but not by gender or anxiety. Sickness ratings in fliers pre-treated with scopolamine (1.81+/-1.58) were lower than for non-medicated fliers (2.93+/-2.16), and the incidence of vomiting in fliers using scopolamine treatment was reduced by half to a third. Possible confounding factors, including age, sex, flight experience, and MSSQ, could not account for this. CONCLUSION Motion sickness affected one third of Zero-G fliers, despite their being intrinsically less motion sickness susceptible compared to the general population. Susceptible individuals probably try to avoid such a provocative environment. Risk factors for motion sickness included younger age and higher MSSQ scores. Protective factors included prior Zero-G flight experience (habituation) and anti-motion sickness medication.
Abstract:
Entrepreneurship education has emerged as a popular research domain in academic fields, given its aim of enhancing and developing certain entrepreneurial qualities of undergraduates that change their behavior, and even their entrepreneurial inclination, and may finally result in the formation of new businesses as well as new job opportunities. This study attempts to investigate Colombian students' entrepreneurial qualities and the influence of entrepreneurship education during their studies.
Abstract:
This work was carried out with the objective of obtaining a complete view of leadership theories, conceiving leadership as a process, and examining the various ways it is applied in contemporary organizations. The topic is approached from the organizational perspective, an equally complex world, without disregarding its importance in other spheres such as education, politics, or the direction of the state. Its focus relates to the academic program of study of which it is the culmination, and it is framed within the constitutional perspective of the Colombian Political Charter, which recognizes the capital importance of economic activity and private initiative in the creation of companies. The various visions of leadership have been applied in different ways in contemporary organizations and have produced diverse results. Today it is not possible to think of an organization that has not defined its form of leadership; consequently, a multitude of theories converge in the business field, and it cannot be claimed that any single one of them ensures adequate management and the fulfillment of an organization's mission objectives. For this reason leadership has come to be conceived as a complex function, in a world where organizations themselves are characterized not only by the complexity of their actions and their makeup, but also because this characteristic likewise belongs to the world of globalization. Organizations can be conceived, metaphorically, as machines that reconstitute their own structures as they interact with others in the globalized world. Adapting to changing circumstances makes organizations conglomerates in permanent dynamism and evolution. In this context it can be said that leadership is also complex, and that transformational leadership is the approach that comes closest to the sense of this complexity.
Abstract:
This exploratory work studies the political movement Mesa de la Unidad Democrática (MUD), created in order to oppose the socialist government in Venezuela. The critique made in this document proceeds from the standpoint of Complexity Science. Some key concepts from complex systems have been used to explain the functioning and organization of the MUD, with the objective of producing a comprehensive diagnosis of the problems it faces and of bringing to light new insights into the harmful behaviors the party currently exhibits. The complexity approach is intended to help better understand the context surrounding the party and, finally, to contribute a series of solutions to the cohesion problems it presents.