957 results for Test Set
Abstract:
The study of short implants is relevant to the biomechanics of dental implants, and research on crown height increase has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis with different crown heights under vertical and oblique forces, using the 3-D finite element method. Six 3-D models were designed with InVesalius 3.0, Rhinoceros 3D 4.0, and SolidWorks 2010 software. Each model was constructed with a mandibular segment of bone block including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, or 15 mm. The applied force was 200 N (axial) and 100 N (oblique). We performed ANOVA and Tukey tests; p < 0.05 was considered statistically significant. The increase in crown height did not influence the stress distribution on the prosthetic screw (p > 0.05) under axial load. However, crown heights of 12.5 and 15 mm significantly worsened the stress distribution in the screws and in the cortical bone (p < 0.001) under oblique load. A high crown-to-implant (C/I) ratio impaired the microstrain distribution in bone tissue under both axial and oblique loads (p < 0.001). Increased crown height was a possible deleterious factor for the screws and for the different regions of bone tissue. (C) 2014 Elsevier Ltd. All rights reserved.
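To illustrate the statistical workflow described above (one-way ANOVA followed by Tukey's test), here is a minimal sketch with made-up stress values, not the study's finite element output; the group means and sample sizes are hypothetical:

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Hypothetical stress readings (MPa) for the three crown heights
    h10, h125, h15 = (rng.normal(mu, 8, 10) for mu in (100, 112, 120))

    # One-way ANOVA across the three crown-height groups
    f, p = stats.f_oneway(h10, h125, h15)
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

    # Tukey's test identifies which crown heights differ pairwise
    values = np.concatenate([h10, h125, h15])
    groups = ["10 mm"] * 10 + ["12.5 mm"] * 10 + ["15 mm"] * 10
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))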
Abstract:
Graduate Program in Physical Therapy - FCT
Abstract:
Effects of roads on wildlife and its habitat have been measured using metrics such as the nearest road distance, road density, and effective mesh size. In this work we introduce two new indices: (1) the Integral Road Effect (IRE), which measures the summed effects of the points of a road at a fixed point in the forest; and (2) the Average Value of the Infinitesimal Road Effect (AVIRE), which measures the average of the effects of roads at this point. IRE is formally defined as the line integral of a special function (the infinitesimal road effect) along the curves that model the roads, whereas AVIRE is the quotient of IRE by the length of the roads. Combining tools of ArcGIS software with a numerical algorithm, we calculated these and other road and habitat cover indices at a sample of points in a human-modified landscape in the Brazilian Atlantic Forest, where data on the abundance of two groups of small mammals (forest specialists and habitat generalists) were collected in the field. We then compared, using the Akaike Information Criterion (AIC), a set of candidate regression models to explain the variation in small mammal abundance, including models with our two new road indices (AVIRE and IRE), models with other road effect indices (nearest road distance, mesh size, and road density), and reference models (containing only habitat indices, or only the intercept without the effect of any variable). Among the road effect indices, AVIRE performed best in explaining the abundance of forest specialist species, whereas the nearest road distance performed best for generalist species. AVIRE and habitat together were included in the best model for both small mammal groups; that is, higher abundance of specialist and generalist small mammals occurred where there is a lower average road effect (lower AVIRE) and more habitat. Moreover, unlike the other road effect indices (except mesh size), AVIRE was not significantly correlated with the habitat cover of specialists and generalists, which allows the effect of roads to be separated from the effect of habitat on small mammal communities. We suggest that the proposed indices and GIS procedures could also be useful for describing other spatial ecological phenomena, such as edge effects in habitat fragments. (C) 2012 Elsevier B.V. All rights reserved.
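In symbols (a sketch consistent with the verbal definitions above, where f is the infinitesimal road effect, C the union of curves modelling the roads, p the evaluation point in the forest, and L(C) the total road length):

    \mathrm{IRE}(p) = \int_{C} f(p, s)\,\mathrm{d}s,
    \qquad
    \mathrm{AVIRE}(p) = \frac{\mathrm{IRE}(p)}{L(C)},
    \qquad
    L(C) = \int_{C} \mathrm{d}s .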
Abstract:
The objective of the present work was to propose a method for testing the contribution of each level of the factors in a genotypes x environments (GxE) interaction in multi-environment trial analyses by means of an F test. The study evaluated a data set with twenty genotypes and thirty-four environments, in a block design with four replications. The sums of squares within rows (genotypes) and columns (environments) of the GxE matrix were simulated, generating 10,000 experiments to verify the empirical distribution. Results indicate a noncentral chi-square distribution for the rows and columns of the GxE interaction matrix, which was also verified by the Kolmogorov-Smirnov test and a Q-Q plot. Application of the F test identified the genotypes and environments that contributed the most to the GxE interaction. In this way, geneticists can select good genotypes in their studies.
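The Monte Carlo idea can be sketched as follows, under simplifying assumptions (i.i.d. standard normal interaction effects and no centring, in which case each row's sum of squares is exactly chi-square with E degrees of freedom; the paper's actual construction leads to the noncentral case):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    G, E, N_SIM = 20, 34, 10_000  # genotypes, environments, simulated experiments

    # Sum of squares within each row (genotype) of simulated GxE matrices
    ge = rng.standard_normal((N_SIM, G, E))
    row_ss = (ge ** 2).sum(axis=2)          # shape (N_SIM, G)

    # Kolmogorov-Smirnov check of the empirical distribution of one row's SS
    d, p = stats.kstest(row_ss[:, 0], "chi2", args=(E,))
    print(f"KS statistic = {d:.4f}, p = {p:.4f}")

    # F-type statistic for each genotype's contribution to the interaction:
    # its row mean square relative to the mean square pooled over genotypes
    ms_row = row_ss[0] / E                   # one simulated experiment
    f_stat = ms_row / ms_row.mean()
    print("largest contributor:", int(np.argmax(f_stat)))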
Abstract:
A long-standing problem when testing from a deterministic finite state machine is to guarantee full fault coverage even if the faults introduce extra states in the implementation. It is well known that such tests should include the sequences in a traversal set, which contains all input sequences of a length defined by the number of extra states. This paper suggests the SPY method, which helps reduce the length of tests by distributing the sequences of the traversal set and reducing test branching. It is also demonstrated that an additional assumption about the implementation under test relaxes the requirement of the complete traversal set. The results of an experimental comparison of the proposed method with an existing method indicate that the resulting reduction can reach 40%. Experimental results suggest that the additional assumption about the implementation can help in further reducing the test suite length. Copyright (C) 2011 John Wiley & Sons, Ltd.
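As a rough illustration of why such test suites grow quickly, the following hypothetical helper enumerates a traversal set as all input sequences up to a length given by the number of suspected extra states (the SPY method itself distributes these sequences more cleverly to shorten the suite):

    from itertools import product

    def traversal_set(alphabet, extra_states):
        # All input sequences of length 1 .. extra_states over the alphabet;
        # the set size grows as |alphabet| ** extra_states.
        seqs = []
        for length in range(1, extra_states + 1):
            seqs.extend("".join(s) for s in product(alphabet, repeat=length))
        return seqs

    # Example: binary input alphabet, up to 2 suspected extra states
    print(traversal_set("ab", 2))  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']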
Abstract:
This paper addresses the m-machine no-wait flow shop problem where the set-up time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the test problem set with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
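For context, the total flowtime objective in a no-wait flow shop can be computed as below; this is a generic sketch that ignores the separated set-up times the paper addresses, with made-up processing times:

    def total_flowtime(perm, p):
        # p[j][k]: processing time of job j on machine k; each job flows
        # through all machines without waiting, so consecutive jobs are
        # offset by the minimum delay that avoids intermediate queueing.
        m = len(p[0])
        start, flowtime, prev = 0, 0, None
        for j in perm:
            if prev is not None:
                start += max(
                    sum(p[prev][:k + 1]) - sum(p[j][:k]) for k in range(m)
                )
            flowtime += start + sum(p[j])   # no-wait completion time of j
            prev = j
        return flowtime

    # Example: 3 jobs on 2 machines
    p = [[2, 3], [1, 2], [4, 1]]
    print(total_flowtime([0, 1, 2], p))     # 5 + 7 + 10 = 22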
Abstract:
The objective of this study was to evaluate the push-out bond strength of fiberglass posts bonded with five ionomer cements. The interface between cement and dentin was also inspected by means of SEM. Fifty human canines were chosen after a rigorous screening process, endodontically treated, and divided randomly into five groups (n = 3) according to the cement tested: Group I – Ionoseal (VOCO), Group II – Fuji I (GC), Group III – Fuji II Improved (GC), Group IV – Rely X Luting 2 (3M ESPE), Group V – Ketac Cem (3M ESPE). The post space was prepared to receive a fiberglass post, which was tried in before the cementation process. No dentin or post surface pretreatment was carried out. After post bonding, all roots were cross-sectioned to obtain three thin slices (1 mm) from three specific regions of the tooth (cervical, middle, and apical). A universal testing machine was used to carry out the push-out test with the cross-head speed set to 0.5 mm/min. All failed specimens were observed under an optical microscope to identify the failure mode. Representative specimens from each group were inspected under SEM. The data were analyzed by the Kolmogorov-Smirnov and Levene's tests and by two-way ANOVA with Tukey's post hoc test at a significance level of 5%. The images obtained were compared to determine the most frequent failure types at the different levels. SEM inspection showed that all cements filled the space between post and dentin; however, some imperfections such as bubbles and voids were noticed in all groups to some extent. The push-out test showed that Ketac Cem presented significantly higher bond strength than Ionoseal (P = 0.02). There were no statistically significant differences among the other cements.
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spread noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times were removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that cross-correlation did not substantially improve the precision of the manual picks; probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error in the travel times. A further explanation for the modest quality of the cross-correlation results is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another notable feature of our procedure is that it does not demand a long processing time, so the user can check the results immediately. During a field survey, this feature makes a quasi-real-time check possible, allowing the immediate optimization of the array geometry if suggested by the results at an early stage.
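The lag estimation at the core of both applications can be sketched generically as follows (this is not the thesis code; the parabolic refinement stands in for the signal interpolation mentioned above):

    import numpy as np

    def cc_lag(a, b, dt):
        # Cross-correlate two waveforms and refine the peak position to
        # sub-sample precision with a parabolic fit around the maximum.
        cc = np.correlate(a, b, mode="full")
        k = int(np.argmax(cc))
        frac = 0.0
        if 0 < k < len(cc) - 1:
            y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
            denom = y0 - 2 * y1 + y2
            if denom != 0:
                frac = 0.5 * (y0 - y2) / denom
        return (k - (len(b) - 1) + frac) * dt

    # Example: b is a copy of a delayed by 3 samples (0.03 s at dt = 0.01)
    rng = np.random.default_rng(0)
    t = np.arange(200)
    a = np.exp(-0.5 * ((t - 100) / 5) ** 2)
    b = np.roll(a, 3) + 0.01 * rng.standard_normal(t.size)
    print(cc_lag(a, b, dt=0.01))  # approximately -0.03 (b lags a)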
Abstract:
The Time-Of-Flight (TOF) detector of ALICE is designed to identify charged particles produced in Pb-Pb collisions at the LHC, to address the physics of strongly interacting matter and the Quark-Gluon Plasma (QGP). The detector is based on the Multigap Resistive Plate Chamber (MRPC) technology, which guarantees the excellent performance required for a large time-of-flight array. The construction and installation of the apparatus at the experimental site have been completed, and the detector is presently fully operational. All the steps leading to the construction of the TOF detector were strictly accompanied by a set of quality assurance procedures to ensure high and uniform performance, and the detector was eventually commissioned with cosmic rays. This work aims at giving a detailed overview of the ALICE TOF detector, focusing also on the tests performed during the construction phase. The first data-taking experience and the first results obtained with cosmic rays during the commissioning phase are presented as well, and confirm the readiness of the TOF detector for LHC collisions.
Abstract:
In recent years, locating people and objects and communicating with them in real time has become a common part of everyday life. Nowadays, the state of the art of location systems for indoor environments has no dominant technology, unlike location systems for outdoor environments, where GPS dominates. In fact, each indoor location technology presents a set of features that prevents its use across all application scenarios, but thanks to its characteristics it can coexist well with other similar technologies, without being dominant or more widely adopted than the other indoor location systems. In this context, the European project SELECT studies the opportunity of combining all these different features in an innovative system that can be used in a large number of application scenarios. The goal of this project is to realize a wireless system in which a network of fixed readers is able to query one or more tags attached to the objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which handle the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project has highly innovative content, one of the key stages of the system design is the debug phase. This work aims to study and develop tools and techniques for debugging the firmware of the baseband receiving section of the readers.
Abstract:
The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ⁷Li⁺ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts, both the time dilation and the ion velocity can be extracted with high accuracy, allowing for a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb was developed. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam, as well as a fluorescence detection system, were set up. During the first experiments, the production, acceleration, and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam allowed, for the first time, its velocity to be measured directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8% of the speed of light could be recorded. The unprecedented accuracy of these experiments allowed a new upper bound to be derived for possible higher-order deviations from special relativity. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus further contribute to the test of the standard model.
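The principle behind the simultaneous parallel/antiparallel measurement is captured by the textbook relativistic Doppler formulas (a standard Ives-Stilwell-type relation, stated here for context rather than quoted from the thesis); for an ion moving with velocity \beta = v/c and rest-frame transition frequency \nu_0:

    \nu_p = \nu_0\,\gamma\,(1 + \beta), \qquad
    \nu_a = \nu_0\,\gamma\,(1 - \beta), \qquad
    \nu_p\,\nu_a = \nu_0^2\,\gamma^2\,(1 - \beta^2) = \nu_0^2 ,

so any deviation of \nu_p \nu_a / \nu_0^2 from unity would signal a violation of relativistic time dilation, independently of the exact value of \beta.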
Abstract:
Among the many applications of microarray technology, one of the most popular is the identification of genes that are differentially expressed in two conditions. A common statistical approach is to quantify the interest of each gene with a p-value, adjust these p-values for multiple comparisons, choose an appropriate cut-off, and create a list of candidate genes. This approach has been criticized for ignoring biological knowledge regarding how genes work together. Recently, a series of methods that do incorporate biological knowledge has been proposed. However, many of these methods seem overly complicated. Furthermore, the most popular method, Gene Set Enrichment Analysis (GSEA), is based on a statistical test known for its lack of sensitivity. In this paper we compare the performance of a simple alternative to GSEA. We find that this simple solution clearly outperforms GSEA. We demonstrate this with eight different microarray datasets.
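The paper's simple alternative is not spelled out in this abstract; a minimal gene-set test of the kind it alludes to could look as follows (hypothetical helper, comparing per-gene scores inside and outside the set with a rank-sum test rather than GSEA's Kolmogorov-Smirnov-style statistic):

    import numpy as np
    from scipy import stats

    def simple_set_test(gene_scores, in_set):
        # Compare the scores (e.g. per-gene t-statistics between the two
        # conditions) of genes in the set against all remaining genes.
        in_set = np.asarray(in_set, dtype=bool)
        return stats.ranksums(gene_scores[in_set], gene_scores[~in_set])

    # Example: 1000 genes; the first 50 form the set and are mildly shifted
    rng = np.random.default_rng(1)
    scores = rng.standard_normal(1000)
    scores[:50] += 0.5
    mask = np.zeros(1000, dtype=bool)
    mask[:50] = True
    print(simple_set_test(scores, mask))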
Abstract:
Physical fitness can be evaluated in competitive and school sports with different field tests under different conditions and goals. To produce valid results, a field test must be practical and reach high standards on the test criteria (objectivity, reliability, validity). The purpose of this study was to investigate the test criteria and the practicability of a group of field tests called «SUISSE Sport Test Konzept Basis Feldtestbatterie». Test quality and practicability were evaluated for the 20-m sprint, ventral trunk muscle test, standing long jump, 2-kg medicine ball shot, obstacle course, and Cooper test. 221 children and adolescents from competitive sports and different school levels took part in the study. According to school level, they were divided into 3 groups (P: 7–11.5 y, S1: 11.6–15.5 y, S2: 15.6–21.8 y). Objectivity was tested for the time or distance measurement in all tests as well as for the error rating in the obstacle test. For the reliability measurement, 162 subjects performed the field tests twice within a few weeks. For validity, the results of the standing long jump were compared with countermovement jump performance on a force plate. Correlation analysis was performed and the level of significance was set at p < 0.05. For accuracy, the standard error was calculated. All tests achieved sufficient to excellent objectivity, with correlation coefficients (r) between 0.85 and 0.99. Reliability was very good (r = 0.84–0.97). In the Cooper and trunk tests, reliability was higher for athletes than for pupils (trunk test: r = 0.95 vs. r = 0.62; Cooper test: r = 0.90 vs. r = 0.78). In those tests, reliability decreased with increasing age (Cooper test: P: r = 0.84, S1: r = 0.69, S2: r = 0.52; trunk test: P: r = 0.69, S1: r = 0.71, S2: r = 0.39). Validity for the standing long jump was good (r = 0.75–0.86). The standard error of the mean was between 4 and 8%, with the exception of the Cooper test (athletes: 6%, pupils: 11%) and the trunk test (athletes: 14%, pupils: 46%). The results show that the evaluated group of field tests is a practicable, objective, and reliable tool to determine physical skills in young athletes as well as in a school setting over a broad age range. Most of the tests met the test criteria with grades of good to excellent. The lower reliability coefficients for the Cooper and trunk tests among pupils could be explained by motivational problems in this setting. For up to 20 subjects, a tester can accomplish the tests within 3 h. Finally, age-dependent grades were elaborated.
Abstract:
Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify the required fixture and the assertions needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based on tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
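The fixture-and-assertion structure that such a blueprint prescribes can be illustrated with a toy example (generic Python, not the paper's tool or its target system):

    import unittest

    class Account:
        # Toy 'legacy' class whose methods have observable side effects
        def __init__(self):
            self.balance = 0
            self.log = []

        def deposit(self, amount):
            self.balance += amount                 # side effect on state
            self.log.append(("deposit", amount))   # side effect on a collaborator

    class DepositTest(unittest.TestCase):
        def test_deposit_side_effects(self):
            account = Account()        # fixture: objects set up beforehand
            account.deposit(40)        # execute the unit under test
            # assertions derived from the side effects observed in example runs
            self.assertEqual(account.balance, 40)
            self.assertEqual(account.log, [("deposit", 40)])

    if __name__ == "__main__":
        unittest.main()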
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.