858 results for Basophil Degranulation Test -- methods
Does the 6-minute walk test predict functional capacity in a sample of elderly women? A pilot study
Abstract:
Introduction: Functional capacity is the ability to conduct daily activities independently. It can be estimated with the 6-minute walk test (6MWT) and other validated functional tests. Objectives: To verify associations between functional capacity measured with two different instruments (the 6MWT and the Composite Physical Function (CPF) scale) and physical activity levels, and between those and characterization variables. Methods: The sample consisted of 30 apparently healthy, independent, community-dwelling elderly women from the Loures municipality. Characterization data were collected, including physical activity levels and anthropometric data. Functional capacity was assessed with the CPF scale and the distance walked in the 6MWT. Results were analysed in SPSS v21.0 using correlation tests. Results: The distance walked in the 6MWT was positively associated with height (r = 0.406; p = 0.026), physical activity level (r = 0.594; p = 0.001) and functional capacity (r = 0.682; p < 0.001). For each additional point on the CPF scale, the distance walked increases on average by 7.5 meters. Relative to sedentary participants, being insufficiently active increases the distance walked by 85.8 meters on average, and being active increases it by 108.8 meters on average. No other associations were observed in our sample. Conclusion: In this sample, the distance walked in the 6MWT correlated strongly with scores on the CPF scale, so this test can be used to predict functional capacity. More attention should be given to strategies that promote walking in older adults.
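A minimal sketch of the kind of analysis reported above: a Pearson correlation and a simple linear regression of 6MWT distance on CPF score. The data arrays and variable names are illustrative placeholders, not study data; the original analysis was run in SPSS v21.0.

```python
# Illustrative re-creation of the reported correlation/regression analysis.
# Data arrays are placeholders; the study analysed n = 30 participants in SPSS.
import numpy as np
from scipy import stats

cpf_score  = np.array([18, 21, 24, 20, 23, 19, 22, 24, 17, 21], dtype=float)            # CPF points
distance_m = np.array([420, 455, 510, 440, 495, 430, 470, 520, 400, 460], dtype=float)  # 6MWT metres

# Pearson correlation between CPF score and distance walked
r, p = stats.pearsonr(cpf_score, distance_m)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Simple linear regression: slope = metres gained per additional CPF point
slope, intercept, r_value, p_value, stderr = stats.linregress(cpf_score, distance_m)
print(f"Each extra CPF point -> {slope:.1f} m more walked (on average)")
```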
Abstract:
Purpose: Stereopsis is the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, whereas local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests, the TNO®, StereoTA B®, and Fly Stereo Acuity Test®, and to study their sensitivity and the correlation between them, using the TNO® as the gold standard. Other variables, such as near point of convergence, vergences, symptoms, and optical correction, were also correlated with the three tests. Materials and Methods: Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. Results: The mean (standard deviation, SD) stereopsis values in each test were: TNO® = 87.04” ± 84.09”; FlyTest® = 38.18” ± 34.59”; StereoTA B® = 124.89” ± 137.38”. The coefficients of determination were R² = 0.6 between the TNO® and the StereoTA B®, and R² = 0.2 between the TNO® and the FlyTest®. The Pearson correlation coefficient shows a positive correlation between the TNO® and the StereoTA B® (r = 0.784 with α = 0.01). The phi coefficient shows a strong, positive association between the TNO® and the StereoTA B® (Φ = 0.848 with α = 0.01). In the ROC analysis, the StereoTA B® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, meaning the test is sensitive and has good discriminative power. Conclusion: We conclude that stereopsis tests assessing global stereopsis are an asset for clinical use. This type of test is more sensitive, revealing changes in stereopsis when it is actually altered, unlike local stereopsis tests, which often indicate normal stereopsis and camouflage a stereopsis change. We also noted that the StereoTA B® is very sensitive and, despite being a digital application, correlated well with the TNO®.
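A hedged sketch of the agreement statistics reported above (sensitivity, specificity, and phi coefficient of a test against a gold standard). The binary labels are placeholders (1 = abnormal stereopsis, 0 = normal), not the study's data.

```python
# Illustrative computation of sensitivity, specificity, and phi from a 2x2 table,
# comparing a test under evaluation against the gold standard (TNO in the study).
import numpy as np

tno      = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])  # gold standard outcome (placeholder)
stereota = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])  # test under evaluation (placeholder)

tp = np.sum((stereota == 1) & (tno == 1))
tn = np.sum((stereota == 0) & (tno == 0))
fp = np.sum((stereota == 1) & (tno == 0))
fn = np.sum((stereota == 0) & (tno == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Phi coefficient from the 2x2 contingency table (Pearson r applied to binary data)
phi = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, phi = {phi:.2f}")
```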
Abstract:
Habitat fragmentation and the consequent loss of connectivity between populations can reduce individual interchange and gene flow, increasing the chances of inbreeding and the risk of local extinction. Landscape genetics is providing more and better tools to identify genetic barriers. To our knowledge, no comparison of methods in terms of consistency has been made with observed data for species with low dispersal ability. The aim of this study is to examine the consistency of the results of five methods for detecting barriers to gene flow in a Mediterranean pine vole (Microtus duodecimcostatus) population: F-statistics estimation, non-Bayesian clustering, Bayesian clustering, boundary detection, and simple/partial Mantel tests. All methods consistently indicated that the stream is not a genetic barrier. However, the methods did not agree on the role of the highway as a genetic barrier. Fst, the Bayesian clustering assignment test and the partial Mantel test identified the highway as a filter to individual interchange. The Mantel tests were the most sensitive method. The boundary detection method (Monmonier’s algorithm) and the non-Bayesian approaches did not detect any genetic differentiation of the pine vole due to the highway. Based on our findings, we recommend that genetic barrier detection in populations with low dispersal ability be addressed with multiple methods, such as Mantel tests and Bayesian clustering approaches, because they are more sensitive in these scenarios, complemented by boundary detection methods, which aim to detect drastic changes in a variable of interest between the closest individuals. Although simulation studies highlight the weaknesses and strengths of each method and the factors that drive particular results, tests with real data are needed to increase the effectiveness of genetic barrier detection.
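A minimal sketch of a simple Mantel test, one of the methods named above: a permutation test of the correlation between a genetic distance matrix and a geographic (or landscape-resistance) distance matrix. The matrices below are random placeholders rather than pine vole data.

```python
# Simple Mantel test: correlate two distance matrices and assess significance by
# permuting the rows/columns of one matrix. Placeholder data only.
import numpy as np

rng = np.random.default_rng(0)
n = 20
genetic = rng.random((n, n)); genetic = (genetic + genetic.T) / 2; np.fill_diagonal(genetic, 0)
geographic = rng.random((n, n)); geographic = (geographic + geographic.T) / 2; np.fill_diagonal(geographic, 0)

def mantel(a, b, permutations=999, rng=rng):
    iu = np.triu_indices_from(a, k=1)          # use upper triangle only
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]    # observed matrix correlation
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(a.shape[0])     # permute individuals in one matrix
        r_perm = np.corrcoef(a[perm][:, perm][iu], b[iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (permutations + 1)

r, p = mantel(genetic, geographic)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")
```

A partial Mantel test, as used in the study, additionally controls for a third matrix (e.g. geographic distance) when testing the barrier variable; the permutation logic is the same.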
Abstract:
With the continued miniaturization and increasing performance of electronic devices, new technical challenges have arisen. One such issue is delamination occurring at critical interfaces inside the device. This major reliability issue can occur during the manufacturing process or during normal use of the device. Proper evaluation of the adhesion strength of critical interfaces early in the product development cycle can help reduce reliability issues and the time-to-market of the product. However, conventional adhesion strength testing is inherently limited in the face of package miniaturization, which brings further technical challenges in quantifying design integrity and reliability. Although there are many different interfaces in today's advanced electronic packages, they can be generalized into two main categories: 1) rigid-to-rigid connections with a thin flexible polymeric layer in between, or 2) a thin film membrane on a rigid structure. Since every technique has its own advantages and disadvantages, multiple testing methods must be enhanced and developed to accommodate all the interfaces encountered in emerging electronic packaging technologies. For evaluating high-strength interfaces in thin multilayer structures, a novel adhesion test configuration called the “single cantilever adhesion test” (SCAT) is proposed and implemented for an epoxy molding compound (EMC) and photo solder resist (PSR) interface. The test method is then shown to be capable of comparing two potential EMC/PSR material sets and selecting the stronger one. Additionally, a theoretical approach for establishing the applicable testing domain of a four-point bending test method is presented. For evaluating polymeric films on rigid substrates, the major testing challenges are reducing testing scatter and factoring in the potentially degrading effect of environmental conditioning on the material properties of the film. An advanced blister test with a predefined area, based on an elasto-plastic analytical solution, was developed and implemented for a conformal coating used to prevent tin whisker growth. This method was then extended with a numerical approach for evaluating the adhesion strength when the polymer film's properties are unknown.
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
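A hedged sketch of the kind of classifier described above: a supervised logistic regression trained on features extracted from generated test cases (here, simple bag-of-event-ID counts) to predict feasibility. The event IDs, labels, and feature choice are illustrative, not the dissertation's actual pipeline.

```python
# Toy feasibility classifier: each test case is a sequence of event IDs from an
# MBT tool; labels come from attempting to execute each test case.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

test_cases = [
    "e1 e4 e7 e2", "e1 e4 e9", "e3 e5 e7 e8", "e2 e6 e9",
    "e1 e3 e5", "e4 e7 e8 e2", "e3 e9 e1", "e5 e6 e7",
]
feasible = [1, 0, 1, 1, 0, 1, 0, 1]  # 1 = executable, 0 = infeasible (placeholder labels)

X = CountVectorizer().fit_transform(test_cases)        # unique event IDs as features
X_tr, X_te, y_tr, y_te = train_test_split(X, feasible, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
print(f"false positive rate = {fp / max(fp + tn, 1):.2f}, "
      f"false negative rate = {fn / max(fn + tp, 1):.2f}")
```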
Abstract:
The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other. It is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI’s input space from the specification of the GUI, visible GUI behaviors, and static analysis of the GUI’s program-code. GUIs are typically dynamic in nature; their user-visible state is driven by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based application and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI testcases can be created by examining the interactions between the GUI of a GUI-based application and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed for constructing long GUI testcases. These testcases are able to drive the GUI into states that were not possible using existing models. Implementation and evaluation have been conducted using GUITAR, a fully-automated, open-source GUI testing framework.
Abstract:
Due to trends in aero-design, aeroelasticity is becoming increasingly important in modern turbomachines. Design requirements of turbomachines lead to the development of high-aspect-ratio blades and blade-integrated disc designs (blisks), which are especially prone to complex modes of vibration. Therefore, experimental investigations yielding high-quality data are required to improve the understanding of aeroelastic effects in turbomachines. One possibility for obtaining high-quality data is to excite and measure blade vibrations in turbomachines. The major requirement for blade excitation and blade vibration measurements is to minimize interference with the aeroelastic effects to be investigated. Thus, in this paper, a non-contact, and therefore low-interference, experimental set-up for exciting and measuring blade vibrations is proposed and shown to work. A novel acoustic system excites rotor blade vibrations, which are measured with an optical tip-timing system. By performing measurements in an axial compressor, the potential of the acoustic excitation method for investigating aeroelastic effects is explored. The basic principle of this method is described and demonstrated through the analysis of blade responses at different acoustic excitation frequencies and at different rotational speeds. To verify the accuracy of the tip-timing system, amplitudes measured by tip-timing are compared with strain gage measurements; they are found to agree well. Two approaches to varying the nodal diameter (ND) of the excited vibration mode by controlling the acoustic excitation are presented. By combining the different excitable acoustic modes with a phase-lag control, each ND of the investigated 30-blade rotor can be excited individually. This feature of the present acoustic excitation system is of great benefit to aeroelastic investigations and represents one of its main advantages over other excitation methods proposed in the past. In future studies, the acoustic excitation method will be used to investigate aeroelastic effects in high-speed turbomachines in detail. The results of these investigations are to be used to improve the aeroelastic design of modern turbomachines.
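A heavily hedged sketch of the phase-lag idea mentioned above, assuming S equally spaced acoustic sources around the annulus: driving source n with a phase offset of 2π·ND·n/S targets a traveling-wave excitation with nodal diameter ND. This is the generic traveling-wave phase relation, not necessarily the exact control law of the paper; the source count and target ND below are illustrative.

```python
# Generic phase-lag control sketch: phase offsets for S equally spaced exciters
# so that the circumferential excitation pattern targets nodal diameter ND.
import numpy as np

def source_phases(nd: int, n_sources: int) -> np.ndarray:
    """Phase (radians) of each of n_sources exciters to target nodal diameter nd."""
    n = np.arange(n_sources)
    return np.mod(2 * np.pi * nd * n / n_sources, 2 * np.pi)

# Example: target ND = 4 using 8 acoustic sources (illustrative numbers only).
print(np.degrees(source_phases(nd=4, n_sources=8)))
```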
Abstract:
Dinoflagellates of the genus Alexandrium are known producers of paralytic shellfish toxins that regularly impact the shellfish aquaculture industry and fisheries. Accurate detection of Alexandrium species, including A. minutum, is crucial for environmental monitoring and sanitary issues. In this study, we first developed a quantitative lateral flow immunoassay (LFIA) using super-paramagnetic nanobeads for whole A. minutum cells. This dipstick assay relies on two distinct monoclonal antibodies used in a sandwich format and directed against surface antigens of this organism. No sample preparation is required. Either frozen or live cells can be detected and quantified. The specificity and sensitivity are assessed using phytoplankton cultures and field samples spiked with a known amount of cultured A. minutum cells. This LFIA is shown to be highly specific for A. minutum and able to reproducibly detect 10⁵ cells/L within 30 min. The test is applied to environmental samples already characterized by light microscopy counting; no significant difference is observed between the cell densities obtained by these two methods. This handy super-paramagnetic lateral flow immunoassay biosensor can greatly assist water quality monitoring programs as well as ecological research.
Abstract:
Objective: To estimate the prevalence of and factors associated with the performance of mammography and the Pap smear test among women from the city of Maringá, Paraná. Methods: Population-based cross-sectional study conducted with 345 women aged over 20 years between March 2011 and April 2012. An interview was carried out using a questionnaire proposed by the Ministry of Health, which addressed sociodemographic characteristics, risk factors for chronic noncommunicable diseases, and issues related to mammographic and Pap screening. Data were analyzed with bivariate analysis and crude analysis with odds ratios (OR) and the chi-squared test in the Epi Info 3.5.1 program; multivariate analysis using logistic regression was performed in Statistica 7.1, with a 5% significance level and 95% confidence intervals. Results: The mean age of the women was 52.19 (±5.27) years. The majority (56.5%) had 0 to 8 years of education. In addition, 84.6% (n=266) of the women had undergone the Pap smear test and 74.3% (n=169) had undergone mammography. Lower uptake of the Pap smear test was associated with 9-11 years of education (p=0.01), and lower uptake of mammography was associated with not having private health insurance (p<0.01). Conclusion: The coverage of mammography and the Pap smear test was satisfactory among the women from Maringá, Paraná. Women with low education levels and those who depended on the public health system showed lower uptake of mammography.
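A hedged sketch of the crude (bivariate) analysis described above: an odds ratio with a chi-squared test on a 2x2 table, here exam uptake by private health insurance status. The counts are placeholders; the study used Epi Info 3.5.1 and Statistica 7.1.

```python
# Odds ratio and chi-squared test from a 2x2 contingency table (placeholder counts).
import numpy as np
from scipy.stats import chi2_contingency

#                 had mammography   no mammography
table = np.array([[150,             40],    # with private health insurance
                  [ 90,             65]])   # without private health insurance

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"OR = {odds_ratio:.2f}, chi2 = {chi2:.2f}, p = {p:.4f}")
```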
Abstract:
Analysis methods are presented for electrochemical etching baths consisting of various concentrations of hydrofluoric acid (HF) and an additional organic surface-wetting agent. These electrolytes are used for the formation of meso- and macroporous silicon. Monitoring the etching bath composition requires at least one method for determining the HF concentration and one for the organic content of the bath. A precondition, however, is that the analysis equipment withstands the aggressive HF. Titration and a fluoride ion-selective electrode are used to determine the HF concentration, and a cuvette test method is used to analyze the organic content. The most suitable analysis method is identified depending on the components in the electrolyte, with the focus on resistance to the aggressive HF.
Abstract:
Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated by this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test in cognitively healthy elderly (NC), patients with Mild Cognitive Impairment, subtypes amnestic (aMCI) and amnestic multiple domain (a+mdMCI), and patients with Alzheimer’s disease (AD). Sequences of words were represented as a speech graph in which every word corresponded to a node and temporal links between words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC, MCI, AD) and four (NC, aMCI, a+mdMCI, AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. The SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, they differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path were significantly different between NC and AD, and between aMCI and AD. The SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task. These findings provide support for a new methodological frame to assess the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
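A minimal sketch of the speech-graph representation described above: each word becomes a node, each temporal transition between consecutive words a directed edge, and attributes such as density, diameter, and average shortest path are computed. The word sequence is a toy example, not study data, and only three of the 13 attributes are shown.

```python
# Build a speech graph from a word sequence and compute a few graph attributes.
import networkx as nx

words = ["dog", "cat", "horse", "cow", "cat", "pig", "dog", "sheep"]  # toy fluency output

g = nx.DiGraph()
g.add_edges_from(zip(words, words[1:]))   # directed edge between consecutive words

density = nx.density(g)
# Diameter and average shortest path require a connected graph; using the
# undirected view is one common simplification for this sketch.
ug = g.to_undirected()
diameter = nx.diameter(ug)
avg_shortest_path = nx.average_shortest_path_length(ug)

print(f"nodes={g.number_of_nodes()}, edges={g.number_of_edges()}, "
      f"density={density:.2f}, diameter={diameter}, avg_sp={avg_shortest_path:.2f}")
```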
Abstract:
Quantitative Methods (QM) is a compulsory course in the Social Science program in CEGEP. Many QM instructors assign a number of homework exercises to give students the opportunity to practice the statistical methods, which enhances their learning. However, traditional written exercises have two significant disadvantages. The first is that the feedback process is often very slow. The second is that written exercises can generate a large amount of correcting for the instructor. WeBWorK is an open-source system that allows instructors to write exercises which students answer online. Although originally designed for math and science students, WeBWorK programming allows for the creation of a variety of questions which can be used in the Quantitative Methods course. Because many statistical exercises generate objective and quantitative answers, the system is able to instantly assess students’ responses and tell them whether they are right or wrong. This immediate feedback has been shown to be theoretically conducive to positive learning outcomes. In addition, the system can be set up to allow students to re-try the problem if they got it wrong. This has benefits both for student motivation and for reinforcing learning. Through a quasi-experiment, this research project measured and analysed the effects of using WeBWorK exercises in the Quantitative Methods course at Vanier College. Three specific research questions were addressed. First, we looked at whether students who did the WeBWorK exercises got better grades than students who did written exercises. Second, we looked at whether students who completed more of the WeBWorK exercises got better grades than students who completed fewer of them. Finally, we used a self-report survey to find out students’ perceptions and opinions of the WeBWorK and written exercises. For the first research question, a crossover design was used to compare whether the group that did WeBWorK problems during one unit would score significantly higher on that unit’s test than the group that did the written problems. We found no significant difference in grades between students who did the WeBWorK exercises and students who did the written exercises. The second research question looked at whether students who completed more of the WeBWorK exercises would get significantly higher grades than students who completed fewer of them. The straight-line relationship between the number of WeBWorK exercises completed and grades was positive in both groups. However, the correlation coefficients for these two variables showed no real pattern. Our third research question was investigated using a survey to elicit students’ perceptions and opinions regarding the WeBWorK and written exercises. Students reported no difference in the amount of effort put into completing each type of exercise. Students were also asked to rate each type of exercise along six dimensions, and a composite score was calculated. Overall, students gave a significantly higher score to the written exercises, and reported that the written exercises were better for understanding the basic statistical concepts and for learning the basic statistical methods. However, when presented with the choice of having only written or only WeBWorK exercises, slightly more students preferred or strongly preferred having only WeBWorK exercises.
The results of this research suggest that the advantages of using WeBWorK to teach Quantitative Methods are variable. The WeBWorK system offers immediate feedback, which often seems to motivate students to try again if they do not have the correct answer. However, this does not necessarily translate into better performance on the written tests and on the final exam. What has been learned is that the WeBWorK system can be used by interested instructors to enhance student learning in the Quantitative Methods course. Further research may examine more specifically how this system can be used more effectively.