988 results for Abstract test


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES Cognitive fluctuation (CF) is a common feature of dementia and a core diagnostic symptom for dementia with Lewy bodies (DLB). CF remains difficult to detect accurately and reliably in the clinic. This study aimed to develop a psychometric test that clinicians could use to facilitate the identification of CF, improve the recognition and diagnosis of DLB and dementia associated with Parkinson disease, and improve the differential diagnosis of other dementias. METHODS We compiled a 17-item psychometric test for identifying CF and applied this measure in a cross-sectional design. Participants were recruited from the North East of England, and assessments were made in individuals' homes. We recruited people with four subtypes of dementia and a healthy comparison group, and all subjects were administered this pilot scale together with other standard ratings. The psychometric properties of the scale were examined with exploratory factor analysis. We also examined the ability of individual CF items to discriminate between dementia subtypes. The sensitivity and specificity of the discriminating items were explored, along with validity and reliability analyses. RESULTS Participants comprised 32 comparison subjects, 30 people with Alzheimer disease, 30 with vascular dementia, 29 with DLB, and 32 with dementia associated with Parkinson disease. Four items significantly discriminated between dementia groups and showed good levels of sensitivity (range: 78.6%-80.3%) and specificity (range: 73.9%-79.3%). The scale had very good levels of test-retest (Cronbach's alpha: 0.82) and interrater (0.81) reliability. The four items loaded onto three different factors. These items were: 1) marked differences in functioning during the daytime; 2) daytime somnolence; 3) daytime drowsiness; and 4) altered levels of consciousness during the day. CONCLUSIONS We identified four items that provide valid, sensitive, and specific questions for reliably identifying CF and distinguishing the Lewy body dementias from other major causes of dementia (Alzheimer disease and vascular dementia).
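
The sensitivity and specificity figures reported above reduce to simple proportions over a 2x2 classification table. A minimal sketch with hypothetical counts (not the study's data):

```python
# Illustrative calculation of sensitivity and specificity from a 2x2 table.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of true cases detected
    specificity = tn / (tn + fp)  # proportion of non-cases correctly excluded
    return sensitivity, specificity

# Hypothetical counts for one item separating DLB from Alzheimer disease:
sens, spec = sensitivity_specificity(tp=23, fn=6, tn=23, fp=7)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```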

Relevance:

30.00%

Publisher:

Abstract:

Degradation-based models are powerful and useful tools for evaluating the reliability of devices whose failure arises from the degradation of performance parameters. This paper presents a procedure for assessing the reliability of concentrator photovoltaic (CPV) modules operating outdoors under real-time conditions. With this model, the main reliability functions are predicted. The model is applied to a real case: a module composed of GaAs single-junction solar cells and total internal reflection (TIR) optics.
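
The abstract does not specify which reliability functions are predicted; as a hedged illustration only, a Weibull survival function is a common choice in degradation-based reliability modelling and is not necessarily the model used in the paper:

```python
# A hedged sketch of the kind of reliability function such degradation
# models predict; Weibull form and parameters are illustrative assumptions.
import math

def weibull_reliability(t, eta, beta):
    """Probability that a module survives beyond time t."""
    return math.exp(-((t / eta) ** beta))

# Hypothetical parameters: characteristic life 25 years, shape 2.0.
for years in (5, 10, 20):
    print(years, "years ->", round(weibull_reliability(years, eta=25.0, beta=2.0), 3))
```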

Relevance:

30.00%

Publisher:

Abstract:

Current 3DTV broadcasting uses frame-compatible formats such as Side-by-Side or Top-and-Bottom, in which each pair of images, corresponding to the left- and right-eye views, is packed at half the spatial resolution into a single image. These images are displayed almost simultaneously, so that the human visual system composes an image with depth that resembles natural binocular vision. Over the last couple of years, the major television platforms have begun to create channels with 3D content, and 3D television (3DTV) has entered homes thanks to stereoscopic television sets. These sets, which are compatible with the formats mentioned above, extract the two views from each image, restore the original resolution, and present each view alternately on the screen while generating a synchronization signal for the active glasses, thereby creating the three-dimensional sensation. This PFC (final-year project) carries out the VHDL design of a format converter that generates, in real time, the full-resolution sequence of images corresponding to the left and right eyes from a sequence encoded in Top-and-Bottom format, together with the test bench for its verification. The circuit is implemented as a peripheral of the Altera NIOS II processor. The design could be used as part of a system that allows current 3D television broadcasts to be displayed on a conventional television set. The reference technology is FPGAs, specifically Altera's Cyclone III FPGA Starter Kit (EP3C25 FPGA), together with a Microtronix expansion card providing HDMI input and output for video and audio. The project also produces the documentation needed for future work related to 3D television.
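
As an illustration of the conversion the project implements in VHDL, here is a minimal software sketch of unpacking a Top-and-Bottom frame into two full-resolution views; the eye-to-half mapping and the line-doubling upsampler are simplifying assumptions, not the project's actual interpolation:

```python
# Software sketch (the project itself targets VHDL on an FPGA) of
# Top-and-Bottom unpacking: split the frame, then restore vertical resolution.
import numpy as np

def unpack_top_and_bottom(frame):
    """frame: (H, W) array; assume top half = left view, bottom = right view."""
    h = frame.shape[0] // 2
    left, right = frame[:h], frame[h:]
    # Recover full vertical resolution by line doubling; a real converter
    # would typically interpolate between lines instead.
    return np.repeat(left, 2, axis=0), np.repeat(right, 2, axis=0)

frame = np.arange(8 * 4).reshape(8, 4)
L, R = unpack_top_and_bottom(frame)
print(L.shape, R.shape)  # both (8, 4): full resolution for each eye
```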

Relevance:

30.00%

Publisher:

Abstract:

Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before the product can be released to a production environment. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among the wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis relies on a general constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, and the translated program is then symbolically executed using the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special operations for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which alleviates the inter-procedural path explosion by separately analyzing each component (e.g., method) of the program under test, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource-consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of the parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as oracles to steer the execution according to structural selection criteria. (4) A new heap-constraint solver is proposed which efficiently handles heap-related constraints and aliasing of references during symbolic execution, greatly outperforming the standard technique known as lazy initialization. (5) All the techniques above have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and TCG.
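
To make the core idea concrete, here is a minimal, hedged sketch of symbolic-execution-based test case generation using the z3 solver; it is a toy illustration of the technique, not the thesis's CLP-based PET engine:

```python
# A minimal sketch of symbolic execution for TCG: each program path
# contributes a path condition over symbolic inputs, and a constraint
# solver produces a concrete test input per feasible path.
# Assumes the z3-solver package is installed.
from z3 import Int, Solver, sat

x = Int("x")  # symbolic input instead of a concrete value

# Program under test (conceptually):
#   def f(x): return "pos" if x > 0 else "non-pos"
# Two feasible paths, each characterized by its path condition.
paths = [("pos", [x > 0]), ("non-pos", [x <= 0])]

for label, path_condition in paths:
    s = Solver()
    s.add(*path_condition)
    if s.check() == sat:  # path is feasible: extract a concrete test input
        print(label, "test input: x =", s.model()[x])
```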

Relevance:

30.00%

Publisher:

Abstract:

This document details the planning and development of a package that follows the S4 programming standard of the R language. The package consists of a set of classes and methods for generating multiple-choice exams and their solutions from an xls file, which plays the role of a database. The proposed design is object-oriented and develops a set of classes that represent the contents of a multiple-choice assessment: statements, questions, and answers. A simple prototype has been implemented with the basic functions needed to generate the tests. In addition, the documentation required to create the package has been produced; this means that every method has a help page that can be consulted from an R terminal, including execution examples for each method.
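
As a hedged illustration of the workflow the package implements (the package itself is written in R using S4 classes), the following Python sketch reads a question bank from a spreadsheet and assembles an exam with its solutions; the file name and column names are assumptions for illustration only:

```python
# Illustrative sketch of the exam-generation workflow: a spreadsheet acts
# as the question database, and a test plus answer key is drawn from it.
# Requires pandas with an Excel reader engine (e.g., xlrd for .xls files).
import pandas as pd

bank = pd.read_excel("questions.xls")  # assumed columns: statement, a, b, c, d, answer

def make_test(bank, n_questions, seed=0):
    picked = bank.sample(n=n_questions, random_state=seed)  # random question subset
    exam = picked[["statement", "a", "b", "c", "d"]]        # what the student sees
    solutions = picked["answer"]                            # the answer key
    return exam, solutions

exam, solutions = make_test(bank, n_questions=10)
```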

Relevance:

30.00%

Publisher:

Abstract:

This project studies image coding under the HEVC (High Efficiency Video Coding) standard. It focuses on the hybrid encoder, and in particular on the application of the inverse cosine transform, which is performed both in the encoder and in the decoder. The need to encode video arises from the appearance of image sequences as digital signals. The main problem with video is the amount of bits produced by encoding: as image quality increases, the amount of information to encode grows exponentially. The use of transforms in digital image processing has increased over the years, and the inverse cosine transform has become the most widely used method in the field of image and video coding, since it allows high compression ratios to be obtained at very low cost. Transform theory has improved image processing: in transform coding, an image is divided into blocks and each block is mapped to a set of coefficients. This coding exploits the statistical dependencies within images to reduce the amount of data. The project reviews the evolution of the various video coding standards over the years, and analyzes the hybrid encoder, as well as the HEVC standard, in greater depth. The final objective of this end-of-degree project is the implementation of the core of a specific processor for executing the inverse cosine transform in a video decoder compatible with the HEVC standard. This objective is reached through a series of stages in which requirements are added incrementally, allowing the hardware designer to acquire deeper experience and knowledge of the final architecture.
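
To illustrate the transform-coding round trip described above, here is a minimal Python sketch using a generic floating-point (I)DCT from SciPy; note that HEVC itself specifies an integer approximation of the transform, which this sketch does not reproduce:

```python
# Sketch of 2-D transform coding: forward DCT at the encoder, crude
# coefficient quantization, inverse DCT at the decoder.
import numpy as np
from scipy.fft import dctn, idctn

block = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 pixel block

coeffs = dctn(block, norm="ortho")           # forward DCT (encoder side)
coeffs[np.abs(coeffs) < 1.0] = 0.0           # drop small coefficients (toy quantizer)
reconstructed = idctn(coeffs, norm="ortho")  # inverse DCT (decoder side)

print(np.round(reconstructed, 2))  # close to the original block
```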

Relevance:

30.00%

Publisher:

Abstract:

From a set of gonioapparent automotive samples from different manufacturers, we selected 28 low-chroma color pairs with relatively small color differences, predominantly in lightness. These color pairs were visually assessed with a gray scale at six different viewing angles by a panel of 10 observers. Using the Standardized Residual Sum of Squares (STRESS) index, the results of our visual experiment were tested against predictions made by 12 modern color-difference formulas. From a weighted STRESS index accounting for the uncertainty in visual assessments, the best prediction of our whole experiment was achieved using the AUDI2000, CAM02-SCD, CAM02-UCS, and OSA-GP-Euclidean color-difference formulas, which were not statistically significantly different from one another. A two-step optimization of the original AUDI2000 color-difference formula resulted in a modified AUDI2000 formula that performed both significantly better than the original formula and below the experimental inter-observer variability. Nevertheless, the proposal of a new revised AUDI2000 color-difference formula requires additional experimental data.
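
The STRESS index mentioned above has a standard closed form; a hedged sketch of its computation follows, using the commonly cited definition (García et al., 2007) and toy data rather than the study's measurements:

```python
# STRESS compares computed color differences (dE) with visual ones (dV):
# STRESS = 100 * sqrt( sum((dE - F1*dV)^2) / sum((F1*dV)^2) ),
# with scaling factor F1 = sum(dE^2) / sum(dE*dV).
import numpy as np

def stress(dE, dV):
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    F1 = np.sum(dE**2) / np.sum(dE * dV)  # optimal scaling between dE and dV
    return 100.0 * np.sqrt(np.sum((dE - F1 * dV)**2) / np.sum((F1 * dV)**2))

# Lower STRESS means better agreement between formula and observers.
print(stress([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```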

Relevance:

30.00%

Publisher:

Abstract:

Includes abstract.

Relevance:

30.00%

Publisher:

Abstract:

Professional language assessment is a new concept with great potential to benefit Internationally Educated Professionals (IEPs) and the communities they serve. This thesis reports on a qualitative study that examined the responses of 16 Canadian English Language Benchmark Assessment for Nurses (CELBAN) test-takers on the topic of their perceptions of the CELBAN test-taking experience in Ontario in the winter of 2015. An Ontario organization involved in registering participants distributed an e-mail through its listserv. Thematic analyses of focus group and interview transcripts identified 7 themes in the data. These themes were used to inform conclusions to the following questions: (1) How do internationally educated nurses (IENs) characterize their assessment experience? (2) How do IENs describe the testing constructs measured by the CELBAN? (3) What, if any, potential sources of construct-irrelevant variance (CIV) do the test-takers describe based on their assessment experience? (4) Do IENs feel that the CELBAN tasks provide a good reflection of the types of communicative tasks required of a nurse? Overall, participants reported positive experiences with the CELBAN as an assessment of their language skills, though they noted some instances in which factors external to the assessment affected the demonstration of their knowledge and skill. Lastly, some test-takers noted the challenge of completing the CELBAN where the types of communicative nursing tasks included in the assessment differed from the nursing tasks typical of an IEN's country of origin. The findings are discussed in relation to the literature on high-stakes, large-scale assessment and IEPs, and a set of recommendations is offered for future CELBAN administration. These recommendations include: (1) the provision of a webpage listing all licensure requirements; (2) monitoring of CELBAN locations and dates in relation to the wider certification timeline for applicants; (3) the provision of additional CELBAN preparatory materials; and (4) minor changes to the CELBAN administrative protocols. Given that the CELBAN is a relatively new assessment format that is widely used for high-stakes decisions (as a component of nursing certification and licensure), research validating IEN test-takers' responses regarding construct representation and construct-irrelevant variance is critical to our understanding of the role of competency testing for IENs.

Relevance:

30.00%

Publisher:

Abstract:

DANTAS, Rodrigo Assis Neves; NÓBREGA, Walkíria Gomes da; MORAIS FILHO, Luiz Alves; MACÊDO, Eurides Araújo Bezerra de; FONSECA, Patrícia de Cássia Bezerra; ENDERS, Bertha Cruz; MENEZES, Rejane Maria Paiva de; TORRES, Gilson de Vasconcelos. Paradigms in health care and its relationship to the nursing theories: an analytical test. Revista de Enfermagem UFPE on line, v. 4, n. 2, p. 16-24, abr./jun. 2010. Available at: <http://www.ufpe.br/revistaenfermagem/index.php/revista>.

Relevance:

30.00%

Publisher:

Abstract:

Stereopsis is defined as the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, while local stereopsis depends on the perception of contours. The aim of this study is to correlate three stereopsis tests, TNO®, StereoTAB®, and Fly Stereo Acuity Test®, and to determine their sensitivity and the correlation between them, with the TNO® as the gold standard. Forty-nine students of the Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged between 18 and 26 years, were included. The variables near point of convergence (NPC), vergences, symptoms, and optical correction were correlated with the three tests. Mean (standard deviation) stereoacuity values were: TNO® = 87.04'' ±84.09''; FlyTest® = 38.18'' ±34.59''; StereoTAB® = 124.89'' ±137.38''. Coefficients of determination were R² = 0.6 for TNO® vs. StereoTAB® and R² = 0.2 for TNO® vs. FlyTest®. Pearson's correlation coefficient shows a positive correlation between the TNO® and the StereoTAB® (r = 0.784, α = 0.01). The Phi association coefficient showed a strong positive relationship between the TNO® and the StereoTAB® (Φ = 0.848, α = 0.01). In the ROC curve analysis, the StereoTAB® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, making it a sensitive test with good discriminative power.
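
A hedged sketch of the two statistics central to the study, the Pearson correlation between two tests and the ROC analysis against the gold standard, follows; the subject data below are hypothetical, not the study's measurements:

```python
# Pearson correlation between two stereoacuity tests, and ROC AUC of one
# test against a binary classification derived from the gold standard.
# Assumes scipy and scikit-learn are installed.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

# Hypothetical stereoacuity thresholds (arc seconds) for six subjects:
tno = np.array([60, 120, 30, 240, 60, 480])
stereotab = np.array([62, 180, 40, 300, 75, 500])

r, p = pearsonr(tno, stereotab)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Binary "reduced stereopsis" label per gold standard vs. StereoTAB score:
gold = (tno > 60).astype(int)
print("ROC AUC =", round(roc_auc_score(gold, stereotab), 2))
```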

Relevance:

30.00%

Publisher:

Abstract:

Objective: To estimate the prevalence of, and factors associated with, the performance of mammography and the Pap smear test in women from the city of Maringá, Paraná. Methods: Population-based cross-sectional study conducted with 345 women aged over 20 years between March 2011 and April 2012. Interviews were carried out using a questionnaire proposed by the Ministry of Health, which addressed sociodemographic characteristics, risk factors for chronic noncommunicable diseases, and issues related to mammographic and Pap screening. Data were analyzed using bivariate analysis, crude analysis with odds ratios (OR), and the chi-squared test in the Epi Info 3.5.1 program; multivariate analysis using logistic regression was performed in the software Statistica 7.1, with a significance level of 5% and a confidence interval of 95%. Results: The mean age of the women was 52.19 (±5.27) years. The majority (56.5%) had 0 to 8 years of education. In addition, 84.6% (n=266) of the women had undergone a Pap smear and 74.3% (n=169) had undergone mammography. Lower uptake of the Pap smear test was associated with women with 9-11 years of education (p=0.01), and lower uptake of mammography was associated with women without private health insurance (p<0.01). Conclusion: The coverage of mammography and the Pap smear test was satisfactory among the women from Maringá, Paraná. Uptake of mammography was lower among women with low education levels and among those who depended on the public health system.
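
The crude odds ratio used in the bivariate analysis is a simple ratio of odds from a 2x2 table; a minimal sketch with hypothetical counts (not the study's data):

```python
# Crude odds ratio from a 2x2 exposure/outcome table.
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical table: no private health insurance (exposure) vs. no mammography (outcome).
OR = odds_ratio(40, 60, 20, 80)
print(f"OR = {OR:.2f}")  # OR > 1 suggests higher odds of non-screening among the exposed
```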