980 results for Manual test
Abstract:
The aim of this study was to analyze, through Vickers hardness testing and photoelasticity analysis, pre-bent areas, manually bent areas, and areas without bends in 10-mm advancement pre-bent titanium plates (Leibinger system). The work was divided into three groups: group I, region without bends; group II, region with a 90° manual bend; and group III, region with a 90° pre-fabricated bend. All materials were evaluated through hardness analysis by the Vickers hardness test, stress analysis of residual images obtained in a polariscope, and reflection photoelastic analysis during manual bending. The hardness data were statistically analyzed using ANOVA and Tukey's tests at a significance level of 5%. The pre-bent plate (group III) showed mean hardness statistically significantly higher (P < 0.05) than that of the other groups (I, region without bends; II, 90° manually bent region). The reflection photoelastic study showed that stress increased gradually as bending was performed, reaching a pink fringe color (1.81 δ/λ). Overall, the pre-fabricated bend region of the pre-bent titanium plates presented the best results.
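For readers who want to reproduce this kind of comparison, the sketch below runs a one-way ANOVA followed by Tukey's HSD at the 5% level, as used in the study. The hardness values, group sizes, and libraries (scipy, statsmodels) are illustrative assumptions, not the study's data or tooling.

```python
# A minimal sketch (hypothetical data) of the statistical comparison above:
# one-way ANOVA followed by Tukey's HSD on Vickers hardness readings.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Vickers hardness (HV) readings per group
group_i   = np.array([210.0, 215.0, 208.0, 212.0, 214.0])  # region without bends
group_ii  = np.array([222.0, 219.0, 225.0, 221.0, 223.0])  # 90° manual bend
group_iii = np.array([240.0, 244.0, 238.0, 242.0, 245.0])  # 90° pre-fabricated bend

f_stat, p_value = stats.f_oneway(group_i, group_ii, group_iii)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD at the 5% significance level used in the study
values = np.concatenate([group_i, group_ii, group_iii])
labels = ["I"] * 5 + ["II"] * 5 + ["III"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```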
Abstract:
Intestinal parasitosis is highly prevalent worldwide and is among the main causes of illness and death in humans. Laboratory diagnosis of intestinal parasites still relies on manual technical procedures, most of which were developed decades ago, which justifies the development of more sensitive and practical techniques. The main objective of this study was therefore to develop, evaluate, and validate a new parasitological technique, referred to as TF-Test Modified, against three conventional parasitological techniques: TF-Test Conventional; Rugai, Mattos & Brisola; and Helm Test/Kato-Katz. To this end, we collected stool samples from 457 volunteers in endemic areas of Campinas, São Paulo, Brazil, and statistically compared the techniques. Intestinal protozoa and helminths were detected qualitatively in 42.23% (193/457) of the volunteers by the TF-Test Modified technique, against 36.76% (168/457) by TF-Test Conventional, 5.03% (23/457) by Helm Test/Kato-Katz, and 4.16% (19/457) by Rugai, Mattos & Brisola. Furthermore, the new technique showed almost perfect kappa agreement across all evaluated parameters at the 95% confidence level (P < 0.05). The study showed that the TF-Test Modified technique can be used comprehensively in the diagnosis of intestinal protozoa and helminths, and its greater diagnostic sensitivity should help improve the quality of laboratory diagnosis, population surveys, and control of intestinal parasites.
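The "almost perfect" agreement reported above refers to Cohen's kappa. A minimal sketch of such an agreement computation is shown below; the per-sample results and the use of scikit-learn are illustrative assumptions, not the study's records.

```python
# A minimal sketch (hypothetical data) of the agreement analysis above:
# Cohen's kappa between two parasitological techniques, each scored as
# positive (1) / negative (0) per stool sample.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-sample detection results for 12 volunteers
tf_test_modified     = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
tf_test_conventional = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

kappa = cohen_kappa_score(tf_test_modified, tf_test_conventional)
print(f"Cohen's kappa = {kappa:.2f}")  # > 0.80 is conventionally 'almost perfect'
```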
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is expected to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method have high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.

This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters can be assumed, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times were removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the cross-correlation technique) are very similar to each other. This suggests that cross-correlation did not substantially improve the precision of the manual pickings; the pickings reported by the IDC are probably good enough to make the random picking error less important than the systematic error on travel times. A further explanation for the limited benefit of cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed signal interpolation in order to improve the time resolution. The resulting algorithm was applied to data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable feature of our procedure is that it does not demand a long processing time, so the user can check the results immediately. During a field survey, this feature makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
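A minimal sketch of the cross-correlation step described above is given below, with parabolic interpolation around the correlation peak for sub-sample time resolution. The synthetic wavelet, sampling rate, and delay are assumptions for illustration; this is not the thesis code or data.

```python
# A minimal sketch: estimate the differential arrival time between two
# similar waveforms via cross-correlation, then refine the peak lag with
# parabolic interpolation to reach sub-sample precision.
import numpy as np
from scipy.signal import correlate

fs = 100.0                       # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
wavelet = np.exp(-((t - 3.0) ** 2) / 0.05) * np.sin(2 * np.pi * 5 * t)
true_delay = 0.37                # seconds; the value we try to recover
shifted = np.interp(t - true_delay, t, wavelet)

cc = correlate(shifted, wavelet, mode="full")
lags = np.arange(-len(t) + 1, len(t))
k = np.argmax(cc)                # integer-sample peak of the correlation

# Parabolic interpolation around the peak for sub-sample precision
y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
delay = (lags[k] + frac) / fs
print(f"estimated delay: {delay:.4f} s (true: {true_delay} s)")
```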
Abstract:
The characterization of contaminated sediments is a complex problem. In this work we set out to define a characterization methodology that accounts both for the characteristics of the contamination, with analyses aimed at determining the total contaminant content, and for the mobility of the pollutants themselves. An adequate characterization strategy can be applied to the evaluation of remediation treatments; to this end we evaluated a soil-washing treatment, investigating the characteristics of the dredged sediments and of the process outputs (sands and fine fraction), and comparing the characteristics of the output sand with those of sands commonly used for various applications. We considered it necessary to investigate compatibility from the chemical, granulometric, and morphological points of view. To investigate mobility, we chose to apply the leaching tests defined at both the international and the Italian (UNI) level, and we therefore developed the technology needed to carry out leaching tests effectively, automating the management of the pH-stat test UNI CEN 14997. This was necessary because of the difficulty of running the test manually, its timing being hardly feasible for a human operator. Redox conditions influence pollutant mobility; in particular, air ageing of anoxic sediments causes appreciable changes in the oxidation state of some components, increasing their mobility. This is therefore an aspect to consider when identifying adequate storage and disposal conditions, and an experimental campaign was carried out for this purpose.
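The sketch below illustrates the kind of feedback loop such an automation implies: periodically read the pH and dose acid to hold the setpoint, logging cumulative consumption. The instrument interface (read_ph, dose_acid) and all numeric settings are hypothetical placeholders; this is not the UNI CEN 14997 procedure or the equipment developed in this work.

```python
# A minimal pH-stat control-loop sketch with a hypothetical instrument API.
import time

TARGET_PH = 4.0        # setpoint required by the chosen test condition (assumed)
TOLERANCE = 0.05       # acceptable deviation from the setpoint (assumed)
DOSE_ML = 0.1          # fixed acid increment per correction (assumed)
INTERVAL_S = 10        # polling period; far faster than a human operator

def read_ph() -> float:
    """Hypothetical: query the pH electrode."""
    raise NotImplementedError

def dose_acid(volume_ml: float) -> None:
    """Hypothetical: drive the titrator pump."""
    raise NotImplementedError

def run_ph_stat(duration_s: float) -> float:
    """Hold the suspension at TARGET_PH, returning total acid added (mL)."""
    total_ml = 0.0
    start = time.time()
    while time.time() - start < duration_s:
        if read_ph() > TARGET_PH + TOLERANCE:
            dose_acid(DOSE_ML)
            total_ml += DOSE_ML
        time.sleep(INTERVAL_S)
    return total_ml
```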
Abstract:
BACKGROUND Impaired manual dexterity is frequent and disabling in patients with multiple sclerosis (MS), affecting activities of daily living (ADL) and quality of life. OBJECTIVE We aimed to evaluate the effectiveness of a standardized, home-based training program to improve manual dexterity and dexterity-related ADL in MS patients. METHODS This was a randomized, rater-blinded controlled trial. Thirty-nine MS patients reporting impaired manual dexterity and having a pathological Coin Rotation Task (CRT), Nine-Hole Peg Test (9HPT), or both were randomized 1:1 into two standardized training programs, the dexterity training program and the theraband training program. Patients in both programs trained five days per week over a period of 4 weeks. Primary outcome measures, performed at baseline and after 4 weeks, were the CRT, the 9HPT, and a dexterity-related ADL questionnaire. Secondary outcome measures were the Chedoke Arm and Hand Activity Inventory (CAHAI-8) and the JAMAR test. RESULTS The dexterity training program resulted in significant improvements in almost all outcome measures at study end compared with baseline. The theraband training program resulted in mostly non-significant improvements. CONCLUSION The home-based dexterity training program significantly improved manual dexterity and dexterity-related ADL in moderately disabled MS patients. Trial registration: NCT01507636.
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before the product can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis relies on a previously developed constraint-based TCG framework for imperative object-oriented programs (e.g., Java), in which the imperative program under test is first translated into an equivalent constraint logic program, which is then symbolically executed using the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special treatment for dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which overcomes inter-procedural path explosion by separately analyzing each component (method) of the program under test, storing the results as method summaries and incrementally reusing them to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource consumption information to drive symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of those parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as oracles to steer symbolic execution through the parts of the program under test that interest the programmer or tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the state-of-the-art standard technique known as lazy initialization. (5) All of the above techniques have been implemented in the PET system (the compositional approach also in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and TCG.
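To make the core idea concrete, the sketch below generates test inputs for a trivial two-path function by solving each path constraint with the z3 SMT solver. This illustrates symbolic-execution-based TCG in general, not the PET system or its CLP-based machinery; the function and its path constraints are assumptions chosen for brevity.

```python
# A minimal sketch of symbolic-execution-based test case generation:
# each feasible path of a small function yields a path constraint, and
# solving it produces a concrete test input covering that path.
from z3 import Int, Solver, sat

x = Int("x")  # symbolic input instead of a concrete value

# Path constraints for:  def f(x): return "A" if x > 10 else "B"
paths = {
    "A": [x > 10],
    "B": [x <= 10],
}

for label, constraints in paths.items():
    s = Solver()
    s.add(*constraints)
    if s.check() == sat:        # the path is feasible
        model = s.model()
        print(f"path {label}: test input x = {model[x]}")
```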
Abstract:
Manual for the Spanish translation of the General Aptitude Test Battery (GATB), developed by the California State Employment Service.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Office of Driver and Pedestrian Research, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.