945 results for Computer programs - Testing
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) interoperability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables interoperability between real and simulated hosts. To increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, allowing a great variety of network conditions to be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions on behalf of the user for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale, high-capacity networks, we present SymbioSim, a testbed for large-scale network experimentation embodying a novel symbiotic simulation approach, in which a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system incorporates traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system receives continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support this symbiotic approach include: 1) a model downscaling scheme that significantly reduces the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model that lets the downscaled emulation system accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
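To make the time-dilation idea concrete, here is a minimal sketch of how a real-time event loop can be synchronized with real hosts by stretching virtual time over wall-clock time. All names and the TDF value are hypothetical illustrations of the general technique, not SVEET's actual API.

```python
import heapq
import time

TDF = 10  # time dilation factor (assumed value): 1 s of virtual time
          # is stretched over 10 s of real (wall-clock) time

class DilatedClock:
    """Maps wall-clock time to dilated virtual time, illustrating
    time-dilated synchronization between real hosts and a simulator."""
    def __init__(self, tdf):
        self.tdf = tdf
        self.start = time.monotonic()

    def virtual_now(self):
        # Virtual time advances 1/TDF as fast as real time.
        return (time.monotonic() - self.start) / self.tdf

def run_realtime_simulation(events, clock):
    """Fire simulated events no earlier than their virtual timestamps.
    `events` is a list of (virtual_time, description) pairs."""
    heapq.heapify(events)
    while events:
        t, desc = heapq.heappop(events)
        # Wait until the dilated clock catches up with the event time,
        # which keeps the simulator in step with real hosts.
        while clock.virtual_now() < t:
            time.sleep(0.001)
        print(f"[virtual t={t:.3f}s] {desc}")

if __name__ == "__main__":
    clock = DilatedClock(TDF)
    run_realtime_simulation([(0.1, "TCP SYN from real host enters simulator"),
                             (0.3, "simulated router forwards packet")], clock)
```

Dilating time by a factor of 10 gives the simulator ten real seconds to process each virtual second, which is how a slower discrete-event simulator can keep pace with real, unmodified hosts.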
Abstract:
One of the current challenges in model-driven engineering is enabling effective collaborative modelling. Two common approaches are either storing the models in a central repository, or keeping them under a traditional file-based version control system and building a centralized index for model-wide queries. Either way, special attention must be paid to the nature of these repositories and indexes as networked services: they should remain responsive even with an increasing number of concurrent clients. This paper presents an empirical study on the impact of certain key decisions on the scalability of concurrent model queries, using an Eclipse Connected Data Objects model repository and a Hawk model index. The study evaluates the impact of the network protocol, the API design, and the internal caching mechanisms, and analyzes the reasons for their varying performance.
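A minimal sketch of the kind of measurement such a study involves: issuing queries from a growing number of concurrent clients, with and without a cache, and timing throughput. The query function and workload here are hypothetical stand-ins, not the CDO or Hawk APIs.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

# Hypothetical stand-in for a model-wide query against a repository or index.
def run_query(query_id: int) -> int:
    time.sleep(0.01)  # simulated network + evaluation latency
    return query_id * 2

@lru_cache(maxsize=1024)
def run_query_cached(query_id: int) -> int:
    return run_query(query_id)

def benchmark(fn, clients: int, requests: int) -> float:
    """Total wall time for `requests` queries spread over `clients`
    concurrent workers, repeating a small set of query ids so the
    cache has a chance to hit."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        list(pool.map(fn, [i % 50 for i in range(requests)]))
    return time.perf_counter() - start

if __name__ == "__main__":
    for clients in (1, 4, 16, 64):
        t_plain = benchmark(run_query, clients, 500)
        run_query_cached.cache_clear()  # fresh cache per run
        t_cache = benchmark(run_query_cached, clients, 500)
        print(f"{clients:>3} clients: no cache {t_plain:.2f}s, cache {t_cache:.2f}s")
```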
Abstract:
Background: Diagnostic decision-making combines System 1 (intuitive or pattern-recognition) and System 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of System 1 and System 2 thinking among medical students in pre-clinical and clinical programs. Methods: The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 & 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Results: Ten percent (13/128) of students gave the intuitive answers to all three questions (suggesting they generally relied on System 1 thinking), while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3-13% gave incorrect answers that were neither the analytical nor the intuitive responses. Non-native English-speaking students (n = 11) had a lower mean number of correct answers than native English speakers (n = 117; 1.0 vs 2.12, respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions: Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While CRT performance makes no claims about students' future expertise as clinicians, the test may help students understand the importance of awareness and regulation of their thinking processes in clinical practice.
Abstract:
The paper describes the experimental genesis of a series of prints that examines how sight influences perception, designed to engage an audience of varying visual abilities.
Abstract:
Objectives: to evaluate the cognitive learning of nursing students in neonatal clinical evaluation through a blended course using computer and laboratory simulation; to compare the cognitive learning of students in a control and an experimental group testing the laboratory simulation; and to assess, according to the students, the extracurricular blended course offered on the clinical assessment of preterm infants. Method: a quasi-experimental study with 14 Portuguese students, comprising a pretest, a midterm test, and a post-test. The technologies offered in the course were the serious game e-Baby, instructional software on semiology and semiotechnique, and laboratory simulation. Data collection tools developed for this study were used for the course evaluation and the characterization of the students. Nonparametric statistics were used: Mann-Whitney and Wilcoxon. Results: the use of validated digital technologies and laboratory simulation produced a statistically significant difference (p = 0.001) in the participants' learning. The students rated the course as very satisfactory. The laboratory simulation alone did not produce a significant difference in learning. Conclusions: the cognitive learning of participants increased significantly. The use of technology may be partly responsible for the course's success, showing it to be an important teaching tool for innovation and motivation of learning in healthcare.
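For readers unfamiliar with the tests named, here is a sketch of the kind of nonparametric analysis described, using scipy. The score arrays are hypothetical; the study's actual data are not reproduced in the abstract.

```python
from scipy.stats import wilcoxon, mannwhitneyu

# Hypothetical pretest and post-test scores for 14 students.
pretest  = [55, 60, 48, 62, 50, 58, 45, 52, 61, 49, 57, 53, 47, 59]
posttest = [70, 74, 65, 80, 68, 75, 60, 71, 79, 66, 72, 69, 64, 77]

# Wilcoxon signed-rank test: paired, nonparametric, as used for the
# pretest vs. post-test comparison.
stat, p = wilcoxon(pretest, posttest)
print(f"Wilcoxon: statistic={stat}, p={p:.4f}")

# Mann-Whitney U for an unpaired control vs. experimental comparison
# (illustrative split of the hypothetical scores).
control, experimental = posttest[:7], posttest[7:]
stat, p = mannwhitneyu(control, experimental)
print(f"Mann-Whitney U: statistic={stat}, p={p:.4f}")
```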
Abstract:
Context: Even though dry-land strength and conditioning (S&C) training is common practice in swimming, there are countless uncertainties over its effects on the performance of age-group swimmers. Objective: To investigate the effects of dry-land S&C programs on the swimming performance of age-group swimmers. Participants: A total of 21 male competitive swimmers (12.7±0.7 years) were randomly assigned to a control group (n=7) and experimental groups GR1 and GR2 (n=7 each). Intervention: The control group performed a 10-week period of swim training alone; GR1 followed a 6-week dry-land S&C program based on sets/repetitions plus 4 weeks of swim training alone; and GR2 followed a 6-week dry-land S&C program focused on explosiveness, plus 4 weeks of swim training alone. Results: For the dry-land tests, a time effect was observed between week 0 and week 6 for the vertical jump (p<0.01) in both experimental groups, and for ball throwing in GR2 (p<0.01), with moderate-to-strong effect sizes. The time*group analyses showed that differences in 50 m performance were significant, with GR2 presenting greater improvements than their counterparts (F=4.156; p=0.007; η²=0.316) at week 10. Conclusions: The results suggest that 6 weeks of complementary dry-land S&C training may lead to improvements in dry-land strength. Furthermore, a 4-week adaptation period was necessary to achieve a beneficial transfer to aquatic performance. Additional benefits may occur if coaches plan the dry-land S&C training with a focus on explosiveness.
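As an illustration of how a week-0 vs. week-6 time effect and its effect size can be computed, here is a sketch with a paired t-test and Cohen's d for paired samples. The jump heights are invented for demonstration; the abstract reports only p-values and effect-size magnitudes.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical vertical-jump heights (cm) at week 0 and week 6 for one
# experimental group of seven swimmers.
week0 = np.array([28.1, 30.4, 26.7, 29.9, 31.2, 27.5, 30.0])
week6 = np.array([31.0, 33.1, 29.5, 32.4, 33.8, 30.2, 32.6])

t, p = ttest_rel(week0, week6)

# Cohen's d for paired samples: mean difference over SD of differences.
diff = week6 - week0
d = diff.mean() / diff.std(ddof=1)
print(f"t={t:.2f}, p={p:.4f}, Cohen's d={d:.2f}")
```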
Abstract:
Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes, or for setting price differences in the market. To measure it empirically, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. The CNA's main hypothesis is that the more translation options (a group of) translators have to render a given source-text stretch, the higher the difficulty of that stretch will be. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it actually measures difficulty. Data collection. Two groups of participants (n=29) of different profiles, from two universities in different countries, had three translation tasks keylogged with Inputlog and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian) and worked, first in class and then at home, using their own computers, on texts ca. 800-1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than the CNA hypothesis would predict: there was no prevalence of disfluent task segments where there were many translation options, nor a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many vs. few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The theoretical flaws discussed and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.
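The correlation test at the heart of this result can be sketched as follows: a rank correlation between the number of translation options per segment and a fluency measure derived from the keylog. Both arrays below are hypothetical; under the CNA hypothesis one would expect a strong negative correlation, which the study did not find.

```python
from scipy.stats import spearmanr

# Hypothetical per-segment data: number of translation options observed
# across participants, and a fluency score for the same segment
# (e.g., the inverse of within-segment pause time from the keylog).
options = [2, 7, 3, 9, 4, 6, 1, 8, 5, 10]
fluency = [0.82, 0.79, 0.85, 0.80, 0.78, 0.83, 0.81, 0.84, 0.77, 0.80]

rho, p = spearmanr(options, fluency)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```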
Abstract:
Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of the knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process consisting of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial and occasionally overlooked step, owing to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires considerable time and effort from ontology engineers. The lack of tool support is also evident in the requirement elicitation process. Here, the rise in the adoption and accessibility of knowledge graphs allows for the development of automated tools that assist with eliciting requirements from such a complementary source of data. This doctoral research is therefore focused on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods have been evaluated through their implementation, and the results are promising.
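To illustrate what automated ontology testing can look like in general (this is not XDTesting's actual interface), here is a sketch using rdflib: a competency question is encoded as a SPARQL query whose results are the failures, so an empty result set means the test passes. The ontology path is a placeholder.

```python
from rdflib import Graph

# A check such as "every class must have a label" can be expressed as a
# SPARQL query that returns the offending classes.
CQ_UNLABELED_CLASSES = """
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls WHERE {
    ?cls a owl:Class .
    FILTER NOT EXISTS { ?cls rdfs:label ?label }
}
"""

def test_all_classes_labeled(ontology_path: str) -> bool:
    """The test passes when the query encoding the failure condition
    returns no results."""
    g = Graph()
    g.parse(ontology_path)  # rdflib guesses the format from the extension
    offenders = list(g.query(CQ_UNLABELED_CLASSES))
    for (cls,) in offenders:
        print(f"Unlabeled class: {cls}")
    return not offenders

if __name__ == "__main__":
    # 'ontology.ttl' is a hypothetical path to the ontology under test.
    print("PASS" if test_all_classes_labeled("ontology.ttl") else "FAIL")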
Abstract:
Vision systems are powerful tools that play an increasingly important role in modern industry, detecting errors and maintaining product standards. With the growing availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to reach the final performance. Transfer learning, which alleviates the need for large amounts of training data, combined with data augmentation methods consisting of the generation of synthetic images, was used to increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed for vial counting and discrepancy detection respectively. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
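A minimal sketch of the transfer-learning-plus-augmentation recipe described, using a pretrained torchvision backbone with a frozen feature extractor and a new two-class head (e.g., pack OK vs. discrepancy). The model choice, class count, and dummy batch are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation of the kind described: synthetic variation of the
# training images (applied via a Dataset in a real pipeline).
train_transforms = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Transfer learning: pretrained ImageNet backbone, frozen, with a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # only the head is trained

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for real images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Freezing the backbone is what makes the small training sets reported (hundreds of images rather than millions) workable: only the final linear layer's weights are fitted to the new task.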
Abstract:
Caffeine has already been used as an indicator of anthropogenic impacts, especially those related to the disposal of sewage in water bodies. In this work, the presence of caffeine was correlated with the estrogenic activity of water samples measured using the BLYES assay. After testing 96 surface water samples, it was concluded that caffeine can be used to prioritize samples to be tested for estrogenic activity in water quality programs that evaluate emerging contaminants with endocrine-disrupting activity.
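The proposed prioritization amounts to ranking samples by caffeine concentration and sending the highest-ranked ones to the costlier bioassay first. A sketch with invented sample IDs and concentrations:

```python
# Hypothetical (sample_id, caffeine concentration in ng/L) measurements.
samples = [("P01", 310.0), ("P02", 12.5), ("P03", 870.2),
           ("P04", 45.8), ("P05", 1530.6), ("P06", 5.1)]

# Prioritize the most caffeine-laden samples for the BLYES
# estrogenic-activity assay.
ranked = sorted(samples, key=lambda s: s[1], reverse=True)
top_n = 3
print("Samples prioritized for BLYES testing:")
for sample_id, caffeine in ranked[:top_n]:
    print(f"  {sample_id}: {caffeine:.1f} ng/L caffeine")
```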
Abstract:
Losses of horticultural products in Brazil are significant, and among the main causes are the use of inappropriate boxes and the absence of a cold chain. A box design is proposed based on computer simulation, optimization, and experimental validation, seeking to minimize the amount of wood while meeting structural and ergonomic requirements and a target effective vent-opening area. Three box prototypes were designed and built using straight laths with different configurations and opening areas (54% and 36%). Cooling efficiency for Tommy Atkins mango (Mangifera indica L.) was evaluated by determining the cooling time for fruit packed in the wooden models and in the commercially used cardboard boxes, submitted to cooling in a forced-air system at a temperature of 6°C and an average relative humidity of 85.4±2.1%. The Finite Element Method was applied for the dimensioning and structural optimization of the model with the best cooling behavior. All wooden boxes with fruit underwent vibration testing for two hours (20 Hz). There was no significant difference in average cooling time among the wooden boxes (36.08±1.44 min); however, the difference was significant in comparison with the cardboard boxes (82.63±29.64 min). In the model chosen for structural optimization (36% effective opening area and two side laths), the total volume of material was reduced by 60%, and the cross section of the columns by 83%. There was no indication of mechanical damage to the fruit after the vibration test. Computer simulation and structural analysis may thus serve as support tools for box design projects with geometric, ergonomic, and thermal criteria.
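The reported cooling-time comparison can be checked approximately from the summary statistics alone with Welch's t-test. The per-group sample size is not given in the abstract, so n=9 below is an assumption for illustration.

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics from the abstract: wooden boxes 36.08±1.44 min,
# cardboard boxes 82.63±29.64 min. n=9 per group is hypothetical.
n = 9
t, p = ttest_ind_from_stats(mean1=36.08, std1=1.44, nobs1=n,
                            mean2=82.63, std2=29.64, nobs2=n,
                            equal_var=False)  # Welch's t-test
print(f"Welch's t={t:.2f}, p={p:.4f}")
```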
Abstract:
In about 50% of first-trimester spontaneous abortions, the cause remains undetermined after standard cytogenetic investigation. We evaluated the usefulness of array-CGH in diagnosing chromosome abnormalities in products of conception from first-trimester spontaneous abortions. Cell culture was carried out in short- and long-term cultures of 54 specimens, and cytogenetic analysis was successful in 49 of them. Cytogenetic abnormalities (numerical and structural) were detected in 22 (44.89%) specimens. Subsequently, array-CGH based on large insert clones spaced at ~1 Mb intervals over the whole genome was used in 17 cases with a normal G-banding karyotype. This revealed chromosome aneuploidies in three additional cases, giving a final total of 51% of cases in which an abnormal karyotype was detected. In keeping with other recently published work, this study shows that array-CGH detects abnormalities in a further ~10% of spontaneous abortion specimens considered normal by standard cytogenetic methods. As such, the array-CGH technique may be a suitable complementary test to cytogenetic analysis in cases with a normal karyotype.