996 results for FILE SIZE DETERMINATION
Abstract:
PURPOSE: The aim of this study was to investigate the influence of cervical preflaring on the determination of the initial apical file (IAF) in the palatal roots of maxillary molars, and to determine the morphologic shape of the canal 1 mm short of the apex. METHODS: After standard access cavities were prepared, group 1 received the IAF without cervical preflaring (WCP). In groups 2 to 5, preflaring was performed with Gates-Glidden drills (GG), Anatomic Endodontics Technology (AET), GT Rotary Files (GT), and LA Axxess burs (LA), respectively. Each canal was sized using manual K-files, starting with size 08 files and advancing with passive movements until the working length (WL) was reached. File sizes were increased until a binding sensation was felt at the WL. The IAF area and the root canal area were measured by SEM to determine the percentage of the canal occupied by the IAF in each sample. The morphologic shape of the root canal was classified as circular, oval, or flattened. Statistical analysis was performed with ANOVA and Tukey's test (P < 0.05). RESULTS: The percentages of the canal occupied by the IAF, in decreasing order, were LA > GT = AET > GG > WCP. The morphologic shape was predominantly oval. CONCLUSION: The type of cervical preflaring used interferes with the determination of the IAF.
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when a substantial proportion of the response values are below the detection limits (censored), because the traditional procedures require strong distributional assumptions to be made about the censored observations. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
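A minimal sketch of how a quantile-based sample size might be estimated by simulation when part of the data falls below a detection limit (the distributions, detection limit, effect size, and target power below are invented for illustration; this is not the authors' procedure). Because every censored value is replaced by the same smallest value, a rank-based comparison of the two groups requires no imputation of the censored observations.

# Monte Carlo sample-size estimation for a median comparison with
# below-detection-limit (left-censored) observations.  Illustrative only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
DL = 0.5            # detection limit: values below it are reported as "<DL"
ALPHA = 0.05
TARGET_POWER = 0.80

def one_trial(n, shift):
    """Simulate one study with n observations per site; return True if significant."""
    ref = rng.lognormal(mean=0.0, sigma=1.0, size=n)      # reference site
    imp = rng.lognormal(mean=shift, sigma=1.0, size=n)    # impacted site
    # Left-censor: every value below DL is set to DL itself.  All censored
    # values share the same (smallest) value, so the ranks used by the
    # Mann-Whitney test are unaffected apart from ties -- no imputation needed.
    ref = np.maximum(ref, DL)
    imp = np.maximum(imp, DL)
    return mannwhitneyu(ref, imp, alternative="two-sided").pvalue < ALPHA

def power(n, shift, reps=1000):
    return np.mean([one_trial(n, shift) for _ in range(reps)])

n = 10
while power(n, shift=0.6) < TARGET_POWER:
    n += 5
print("estimated sample size per group:", n)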
Abstract:
Stallard (1998, Biometrics 54, 279-294) recently used Bayesian decision theory for sample-size determination in phase II trials. His design maximizes the expected financial gains in the development of a new treatment. However, it results in a very high probability (0.65) of recommending an ineffective treatment for phase III testing. On the other hand, the expected gain using his design is more than 10 times that of a design that tightly controls the false positive error (Thall and Simon, 1994, Biometrics 50, 337-349). Stallard's design maximizes the expected gain per phase II trial, but it does not maximize the rate of gain, or the total gain over a fixed length of time, because the rate of gain depends on the proportion of treatments forwarded to the phase III study. We suggest maximizing the rate of gain, and the resulting optimal one-stage design is twice as efficient as Stallard's one-stage design. Furthermore, the new design has a probability of only 0.12 of passing an ineffective treatment to the phase III study.
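A toy numerical illustration of the distinction drawn above (all numbers are hypothetical and come from neither paper): if each phase II trial occupies the pipeline for t2 time units and every forwarded treatment adds a further t3 units for its phase III study, then the long-run rate of gain is the expected gain per cycle divided by the expected cycle length, so a design that earns more per trial but forwards many treatments can still accumulate gain more slowly.

# "Gain per phase II trial" versus "rate of gain" when the time cost of a
# phase III study is charged against each forwarded treatment.  Hypothetical
# numbers; only illustrates why the two criteria can rank designs differently.
def rate_of_gain(gain_per_trial, p_forward, t_phase2=1.0, t_phase3=4.0):
    """Expected gain per unit time = gain per cycle / expected cycle length."""
    expected_cycle_length = t_phase2 + p_forward * t_phase3
    return gain_per_trial / expected_cycle_length

design_A = dict(gain_per_trial=10.0, p_forward=0.65)   # high gain, many forwards
design_B = dict(gain_per_trial=6.0,  p_forward=0.12)   # lower gain, few forwards

for name, d in [("A", design_A), ("B", design_B)]:
    print(name, "gain/trial =", d["gain_per_trial"],
          " rate of gain =", round(rate_of_gain(**d), 3))
# Design A earns more per trial, but design B achieves the higher rate of gain.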
Abstract:
Objective: This ex vivo study evaluated the effect of pre-flaring and file size on the accuracy of the Root ZX and Novapex electronic apex locators (EALs). Material and methods: The actual working length (WL) was set 1 mm short of the apical foramen in the palatal root canals of 24 extracted maxillary molars. The teeth were embedded in an alginate mold, and two examiners performed the electronic measurements using #10, #15, and #20 K-files. The files were inserted into the root canals until the "0.0" or "APEX" signals were observed on the LED or display screens for the Novapex and Root ZX, respectively, and were then retracted to the 1.0 mark. The measurements were repeated after pre-flaring with the S1 and SX ProTaper instruments. Two measurements were performed for each condition and the means were used. Intra-class correlation coefficients (ICCs) were calculated to verify intra- and inter-examiner agreement. The mean differences between the WL and the electronic length values were analyzed by three-way ANOVA (p<0.05). Results: ICCs were high (>0.8) and the results demonstrated similar accuracy for both EALs (p>0.05). Significantly more accurate measurements were obtained in the pre-flared canals, except for the Novapex with a #20 K-file. Conclusions: The tested EALs showed acceptable accuracy, and the pre-flaring procedure had a more significant effect than the file size used.
Abstract:
Background: Myocardial contrast echocardiography has been used for determination of infarct size (IS) in experimental models. However, with intermittent harmonic imaging, IS seems to be underestimated immediately after reperfusion due to areas with preserved, yet dysfunctional, microvasculature. The use of exogenous vasodilators has been shown to be useful for unmasking these infarcted areas with depressed coronary flow reserve. This study was undertaken to assess the value of adenosine for IS determination in an open-chest canine model of coronary occlusion and reperfusion, using real-time myocardial contrast echocardiography (RTMCE). Methods: Nine dogs underwent 180 minutes of coronary occlusion followed by reperfusion. PESDA (Perfluorocarbon-Exposed Sonicated Dextrose Albumin) was used as the contrast agent. IS was determined by RTMCE before and during adenosine infusion at a rate of 140 mcg·kg-1·min-1. The post-mortem necrotic area was determined by triphenyl-tetrazolium chloride (TTC) staining. Results: IS determined by RTMCE was 1.98 ± 1.30 cm2 and increased to 2.58 ± 1.53 cm2 during adenosine infusion (p = 0.004), with good correlation between measurements (r = 0.91; p < 0.01). The necrotic area determined by TTC was 2.29 ± 1.36 cm2 and showed no significant difference from the IS determined by RTMCE before or during hyperemia. A slightly better correlation between the RTMCE and TTC measurements was observed during adenosine infusion (r = 0.99; p < 0.001) than before it (r = 0.92; p = 0.0013). Conclusion: RTMCE can accurately determine IS in the immediate period after acute myocardial infarction. Adenosine infusion results in slightly better detection of the actual extent of myocardial damage.
Abstract:
Since instrumentation of the apical foramen has been suggested for cleaning and disinfection of the cemental canal, selection of the file size and location of the apical foramen are challenging steps. This study analyzed the influence that lateral opening of the apical foramen and file size can exert on cemental canal instrumentation. Thirty-four human maxillary central incisors were divided into two groups: Group 1 (n=17), without flaring, and Group 2 (n=17), with flaring with LA Axxess burs. K-files of increasing diameter were progressively inserted into the canal until binding at the apical foramen was achieved and the tips were visible; the files were then bonded in place with ethyl cyanoacrylate adhesive. The root/file sets were cross-sectioned 5 mm from the apex. The apices were examined by scanning electron microscopy at ×140 and digital images were captured. Data were analyzed statistically by Student's t-test and Fisher's exact test at a 5% significance level. SEM micrographs showed that 19 (56%) apical foramina emerged laterally to the root apex, whereas 15 (44%) coincided with it. Significantly greater difficulty in reaching the apical foramen was noted in Group 2. The results suggest that the larger the foraminal file size, the more difficult apical foramen instrumentation may be in laterally emerging cemental canals.
Abstract:
Combinatorial chemistry is gaining wide appeal as a technique for generating molecular diversity. Among the many combinatorial protocols, the split/recombine method is quite popular and particularly efficient at generating large libraries of compounds. In this process, polymer beads are equally divided into a series of pools and each pool is treated with a unique fragment; the beads are then recombined, mixed to uniformity, and redivided equally into a new series of pools for the subsequent couplings. The deviation from the ideal equimolar distribution of the final products is assessed by a special overall relative error, which is shown to be related to the Pearson statistic. Although the split/recombine sampling scheme is quite different from those used in the analysis of categorical data, the Pearson statistic is shown to still follow a chi-square distribution. This result allows us to derive the number of beads required so that, with 99% confidence, the overall relative error is controlled to be less than a pre-specified tolerable limit L1. In this paper, we also discuss another criterion, which determines the number of beads required so that, with 99% confidence, all individual relative errors are controlled to be less than a pre-specified tolerable limit L2 (0 < L2 < 1).
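A minimal sketch of the bead-count calculation that such a chi-square result yields, under the assumption (not stated in the abstract) that the overall relative error is the root-mean-square relative deviation of the K pool counts; in that case it equals sqrt(X^2 / N) for Pearson statistic X^2 and total bead count N, so the 99% requirement becomes N >= chi-square quantile(0.99, K-1) / L1^2. The pool count and tolerance below are arbitrary examples.

# Number of beads N so that, with 99% confidence, the overall (RMS) relative
# error of the K pool counts stays below a tolerance L1.  Sketch only: it
# assumes the overall relative error equals sqrt(X^2 / N), where X^2 is the
# Pearson statistic, approximately chi-square with K-1 degrees of freedom.
import math
from scipy.stats import chi2

def required_beads(K, L1, confidence=0.99):
    """Smallest N such that P(RMS relative pool error <= L1) >= confidence."""
    quantile = chi2.ppf(confidence, df=K - 1)   # upper quantile of chi2_{K-1}
    return math.ceil(quantile / L1 ** 2)

# Example: 20 pools, overall relative error to be kept below 10%.
print(required_beads(K=20, L1=0.10))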
Abstract:
Aim: To assess the influence of cervical preparation on the fracture susceptibility of roots. Material and methods: During root canal instrumentation, the cervical portions were prepared with instruments of different tapers: I: no cervical preparation; II: #30/.08; III: #30/.10; IV: #70/.12. The specimens were filled with the following materials (n = 8): A: unfilled; B: Endofill/gutta-percha; C: AH Plus/gutta-percha; D: Epiphany SE/Resilon. For the fracture resistance test, a universal testing machine was used at a crosshead speed of 1 mm per minute. Results: ANOVA demonstrated a difference (P < 0.05) between taper instruments, with a higher value for group I (205.3 +/- 77.5 N), followed by II (185.2 +/- 70.8 N), III (164.8 +/- 48.9 N), and IV (156.7 +/- 41.4 N). There was no difference (P > 0.05) between filling materials A (189.1 +/- 66.3 N), B (186.3 +/- 61.0 N), C (159.7 +/- 69.9 N), and D (176.9 +/- 55.2 N). Conclusions: Greater cervical wear using a #70/.12 instrument increased root fracture susceptibility, and the tested filling materials were not able to restore resistance.
Abstract:
Digital forensic examiners often need to identify the type of a file or file fragment based only on the content of the file. Content-based file type identification schemes typically use a byte frequency distribution with statistical machine learning to classify file types. Most algorithms analyze the entire file content to obtain the byte frequency distribution, a technique that is inefficient and time-consuming. This paper proposes two techniques for reducing the classification time. The first technique selects a subset of features based on their frequency of occurrence. The second speeds up classification by sampling several blocks from the file. Experimental results demonstrate that up to a fifteen-fold reduction in file analysis time can be achieved with limited impact on accuracy.
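A rough sketch of the two speed-ups just described (block size, number of blocks, and feature count are hypothetical choices, and this is not the paper's implementation): the byte-frequency distribution is estimated from a few sampled blocks instead of the whole file, and only the most frequent byte values are retained as features for the classifier.

# Byte-frequency features from sampled blocks rather than the whole file,
# keeping only the top-k most frequent byte values as features.
import os
import numpy as np

def sampled_byte_histogram(path, block_size=4096, n_blocks=8):
    """256-bin byte-frequency distribution estimated from a few random blocks."""
    size = os.path.getsize(path)
    counts = np.zeros(256, dtype=np.int64)
    rng = np.random.default_rng(0)
    with open(path, "rb") as f:
        for _ in range(n_blocks):
            offset = int(rng.integers(0, max(size - block_size, 1)))
            f.seek(offset)
            block = f.read(block_size)
            counts += np.bincount(np.frombuffer(block, dtype=np.uint8),
                                  minlength=256)
    return counts / counts.sum()                 # normalised distribution

def top_k_features(histogram, k=40):
    """Keep only the k most frequent byte values as the classifier's features."""
    idx = np.argsort(histogram)[::-1][:k]
    return idx, histogram[idx]

# The (byte value, frequency) pairs can then be fed to any statistical classifier.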
Abstract:
The present study reports an application of the searching combination moving window partial least squares (SCMWPLS) algorithm to the determination of ethenzamide and acetaminophen in quaternary powdered samples by near-infrared (NIR) spectroscopy. Another purpose of the study was to examine the instrumental effects of the spectral resolution and signal-to-noise ratio of the Buchi NIRLab N-200 FT-NIR spectrometer equipped with an InGaAs detector. The informative spectral intervals of the NIR spectra of a series of quaternary powdered mixture samples were first located for ethenzamide and acetaminophen by use of moving window partial least squares regression (MWPLSR). These located spectral intervals were then further optimised by SCMWPLS for subsequent partial least squares (PLS) model development. The improved results are attributed both to the less complex PLS models and to the higher accuracy of the predicted concentrations of ethenzamide and acetaminophen in the optimised informative spectral intervals, which are characterized by NIR bands. At the same time, SCMWPLS is demonstrated to be a viable route for wavelength selection.
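A condensed sketch of the moving-window idea behind MWPLSR (this is not the authors' SCMWPLS code; window width, component count, and the synthetic data shapes are placeholders): a fixed-width window is slid along the wavelength axis, a PLS model is fitted inside each window, and the windows with the lowest cross-validated error flag the informative spectral intervals.

# Moving-window PLS regression: slide a fixed-width window over the wavelength
# axis, fit a PLS model on each window, and record the cross-validated RMSE so
# that low-error windows mark informative spectral intervals.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def moving_window_pls(X, y, window=20, n_components=3, cv=5):
    """Return one cross-validated RMSE per window start position."""
    n_wavelengths = X.shape[1]
    rmse = []
    for start in range(n_wavelengths - window + 1):
        Xw = X[:, start:start + window]
        pls = PLSRegression(n_components=min(n_components, window))
        y_hat = cross_val_predict(pls, Xw, y, cv=cv)
        rmse.append(np.sqrt(np.mean((y - y_hat.ravel()) ** 2)))
    return np.array(rmse)

# Synthetic example: 60 spectra x 200 wavelength channels, signal in 80-100.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
y = X[:, 80:100].sum(axis=1) + rng.normal(scale=0.1, size=60)
errors = moving_window_pls(X, y)
print("best window starts near channel", int(errors.argmin()))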
Abstract:
Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability over a wide range of time scales, a property known as self-similarity. In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose sizes are drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity. We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degradation of performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity as long as a reliable, flow-controlled transport protocol is used.
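A small sketch of the key ingredient described above, namely heavy-tailed file sizes (the parameters are invented for illustration): drawing sizes from a Pareto distribution with shape alpha < 2 gives infinite variance, so a tiny fraction of very large files carries most of the bytes transferred, which is what induces long-range dependence when many such transfers are multiplexed. The snippet contrasts this with an exponential size distribution of the same mean.

# Heavy-tailed versus light-tailed file-size distributions.  With Pareto shape
# alpha < 2 the variance is infinite, so a few enormous files dominate the
# total bytes transferred.  Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
mean_size = 10_000                     # bytes; same mean for both distributions

alpha = 1.2                            # shape in (1, 2): finite mean, infinite variance
xm = mean_size * (alpha - 1) / alpha   # scale chosen so the Pareto mean is mean_size
pareto_sizes = xm * (1 + rng.pareto(alpha, n))   # classical Pareto(alpha, xm)
exp_sizes = rng.exponential(mean_size, n)

for name, sizes in [("pareto", pareto_sizes), ("exponential", exp_sizes)]:
    top1 = np.sort(sizes)[-n // 100:].sum() / sizes.sum()
    print(f"{name:12s} mean={sizes.mean():10.0f}  "
          f"max={sizes.max():14.0f}  top 1% of files carries {top1:.0%} of bytes")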