938 results for Point method


Relevance: 30.00%

Abstract:

Due to rapid and continuous deforestation, recent bird surveys in the Atlantic Forest have followed rapid assessment programs to accumulate significant amounts of data during short periods of time. In this study, two survey methods were used to evaluate which technique accumulated the most species most rapidly (> 90% of the empirically estimated value) in lowland Atlantic Forests in the state of São Paulo, southeastern Brazil. Birds were counted during the 2008-2010 breeding seasons using 10-minute point counts and 10-species lists. Overall, point counting detected as many species as lists (79 vs. 83, respectively), and 88 points (14.7 h) detected 90% of the estimated species richness. Forty-one lists were insufficient to detect 90% of all species. However, lists accumulated species faster over a shorter time period, probably because species detected while moving between points are not counted in the point count method. Rapid assessment programs in these forests will therefore detect more species more rapidly using 10-species lists. Both methods shared 63% of all forest species, but this may be due to spatial and temporal mismatch between the samplings of each method.
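The comparison above rests on species accumulation curves: the cumulative number of distinct species after each successive sample. A minimal sketch of that bookkeeping, using invented detections rather than the study's data, is:

```python
# Toy illustration (not the study's actual data): how quickly two survey
# methods accumulate species. Each sample is the set of species detected in
# one point count or one 10-species list.

def accumulation_curve(samples):
    """Cumulative number of distinct species after each successive sample."""
    seen = set()
    curve = []
    for sample in samples:
        seen.update(sample)
        curve.append(len(seen))
    return curve

# Hypothetical detections: lists tend to add new species faster because
# birds seen while moving between stations also count.
point_counts = [{"A", "B"}, {"B", "C"}, {"C"}, {"D"}]
species_lists = [{"A", "B", "C"}, {"C", "D"}, {"E"}, {"E", "F"}]

print(accumulation_curve(point_counts))   # [2, 3, 3, 4]
print(accumulation_curve(species_lists))  # [3, 4, 5, 6]
```

Plotting such curves against sampling effort, and reading off where each reaches 90% of the estimated richness, is the comparison the abstract describes.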

Relevance: 30.00%

Abstract:

Boiling points (T_B) of acyclic alkynes are predicted from their boiling point numbers (Y_BP) with the relationship T_B(K) = -16.802·Y_BP^(2/3) + 337.377·Y_BP^(1/3) - 437.883. In turn, Y_BP values are calculated from structure using the equation Y_BP = 1.726 + A_i + 2.779C + 1.716M_3 + 1.564M + 4.204E_3 + 3.905E + 5.007P - 0.329D + 0.241G + 0.479V + 0.967T + 0.574S. Here A_i depends on the substitution pattern of the alkyne, and the remainder of the equation is the same as that reported earlier for alkanes. For a data set consisting of 76 acyclic alkynes, the correlation of predicted and literature T_B values had an average absolute deviation of 1.46 K, and the R² of the correlation was 0.999. In addition, the calculated Y_BP values can be used to predict the flash points of alkynes.
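The T_B correlation is a simple closed-form expression, so it can be sketched directly from the coefficients quoted in the abstract. The Y_BP value itself would come from the structural equation above; the input value below is hypothetical, for illustration only.

```python
# Predicted boiling point T_B (in K) from a boiling point number Y_BP,
# using the coefficients quoted in the abstract.

def boiling_point_kelvin(y_bp):
    """T_B(K) = -16.802*Y_BP^(2/3) + 337.377*Y_BP^(1/3) - 437.883"""
    return -16.802 * y_bp ** (2 / 3) + 337.377 * y_bp ** (1 / 3) - 437.883

# Hypothetical Y_BP for illustration only:
print(round(boiling_point_kelvin(8.0), 1))  # 169.7
```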

Relevance: 30.00%

Abstract:

The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We showed that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
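The epsilon expansion described above is expressed in multiple zeta values (MZVs). As a small numerical illustration of such a quantity (not tied to the thesis itself), this sketch approximates the depth-two MZV zeta(2,1) = sum over n > m >= 1 of 1/(n² m) by a truncated double sum and compares it with zeta(3), since zeta(2,1) = zeta(3) is a classical Euler identity.

```python
# Truncated numerical check of the Euler identity zeta(2,1) = zeta(3).

def zeta21(N):
    """Partial sum of zeta(2,1) = sum_{n>m>=1} 1/(n^2 * m) up to n = N."""
    total = 0.0
    harmonic = 0.0  # running value of sum_{m<n} 1/m
    for n in range(1, N + 1):
        if n > 1:
            harmonic += 1.0 / (n - 1)
        total += harmonic / n**2
    return total

def zeta3(N):
    """Partial sum of zeta(3) up to n = N."""
    return sum(1.0 / n**3 for n in range(1, N + 1))

# The two truncated sums agree to within the truncation error (~ln N / N):
print(abs(zeta21(20000) - zeta3(20000)))
```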

Relevance: 30.00%

Abstract:

In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.

Relevance: 30.00%

Abstract:

Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown, and several researchers have contributed significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is to solve ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. As a result it has been designated the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been shown to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method overcomes these issues because it subdivides the computational domain into a number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be referred to here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
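The core DQ approximation described above, f'(x_i) ≈ Σ_j w_ij f(x_j), can be sketched with the standard Lagrange-polynomial (Shu-type) weighting coefficients; the grid and test function below are illustrative, not from any particular GDQ paper.

```python
# Minimal DQ sketch: first-derivative weights from Lagrange interpolation.
# For nodes x_0..x_{n-1}, w[i][j] = a_i / ((x_i - x_j) * a_j) for i != j,
# where a_i = prod_{k != i} (x_i - x_k), and each row sums to zero
# (the derivative of a constant is zero).

def dq_weights(x):
    """First-derivative DQ weight matrix for distinct nodes x."""
    n = len(x)
    a = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                a[i] *= x[i] - x[k]
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] = a[i] / ((x[i] - x[j]) * a[j])
        w[i][i] = -sum(w[i][j] for j in range(n) if j != i)
    return w

# The rule is exact for polynomials up to degree n-1, so with 4 nodes the
# derivative of f(x) = x^2 is reproduced essentially exactly:
x = [0.0, 0.3, 0.7, 1.0]
w = dq_weights(x)
f = [xi**2 for xi in x]
df = [sum(w[i][j] * f[j] for j in range(4)) for i in range(4)]
print([round(v, 10) for v in df])  # ≈ [0.0, 0.6, 1.4, 2.0], i.e. 2x at each node
```

GDQ generalizes exactly this step, computing the weights for arbitrary node distributions and higher derivatives instead of relying on fixed closed-form coefficients.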

Relevance: 30.00%

Abstract:

Since its discovery, the top quark has been one of the most intensively investigated topics in particle physics. The aim of this thesis is the reconstruction of hadronic top quarks with high transverse momentum (boosted tops) using the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops partially or totally overlap and are thus contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of tops obtained from MC simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing an efficient discrimination of signal from background contributions. A working point was chosen to obtain a signal efficiency close to 90% with a corresponding background rejection of 70%. TOM performance has been tested on MC samples in the muon channel and compared with the previous methods in the literature. All the methods will be merged into a multivariate analysis to give a global top tagging, which will be included in a ttbar differential production cross-section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in the high-momentum region of phase space, where new physics processes could appear. Because its performance improves with increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all top quarks will be produced at high energy, making the standard reconstruction methods inefficient.
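The overlap-function idea can be illustrated with a toy sketch (this is not the actual TOM implementation): the fat-jet energy deposits are compared with a template's expected energies region by region, and the agreement is turned into a score between 0 and 1 by a Gaussian-weighted functional. The region definitions, energies and resolutions below are invented for illustration.

```python
import math

def overlap(jet_energies, template_energies, sigma):
    """Toy overlap score: Ov = exp(-sum_a (E_jet,a - E_tmpl,a)^2 / (2 sigma_a^2))."""
    chi2 = sum((ej - et) ** 2 / (2.0 * s**2)
               for ej, et, s in zip(jet_energies, template_energies, sigma))
    return math.exp(-chi2)

template = [120.0, 80.0, 60.0]        # hypothetical energy per template region (GeV)
sigma = [0.33 * e for e in template]  # one common choice: resolution scaled to energy

# A top-like fat-jet scores near 1; a mismatched background jet scores near 0:
print(round(overlap([118.0, 85.0, 55.0], template, sigma), 3))  # close to 1
print(round(overlap([30.0, 20.0, 10.0], template, sigma), 3))   # close to 0
```

Cutting on such a score at a chosen working point is what trades signal efficiency against background rejection, as in the 90%/70% working point quoted above.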

Relevance: 30.00%

Abstract:

It is well known that early initiation of a specific anti-infective therapy is crucial to reduce mortality in severe infection. Culture-based pathogen detection is the diagnostic gold standard in such diseases. However, these methods yield results after 24 to 48 hours at the earliest. Therefore, severe infections such as sepsis must be treated with an empirical antimicrobial therapy, which is ineffective in an unknown fraction of these patients. Today's microbiological point-of-care tests are pathogen-specific and therefore not appropriate for an infection with a variety of possible pathogens. Molecular nucleic acid diagnostics such as the polymerase chain reaction (PCR) allow the identification of pathogens and resistances. These methods are used routinely to speed up the analysis of positive blood cultures. The newest PCR-based system allows the identification of the 25 most frequent sepsis pathogens in parallel, without previous culture, in less than 6 hours. Thereby, these systems might shorten the time of possibly insufficient anti-infective therapy. However, these extensive tools are not suitable as point-of-care diagnostics. Miniaturization and automation of the nucleic acid based methods are pending, as is an increase in the number of pathogens and resistance genes detectable by these methods. It is expected that molecular PCR techniques will have an increasing impact on microbiological diagnostics in the future.

Relevance: 30.00%

Abstract:

By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices in ordnance gelatine blocks is unreliable due to the poor visibility of cracks and the likely introduction of secondary cracks resulting from slicing. Furthermore, gelatine TCL patterns are difficult to preserve because the internal structures of gelatine deteriorate with age and the gelatine tends to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks in 1-cm-thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks with three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves reflected the three projectiles' capacity to transfer energy to the surrounding tissue very accurately and clearly showed the typical differences in energy transfer. We believe that CT is a useful tool for evaluating gunshot wound profiles with the TCL method and is indeed superior to conventional methods based on physical slicing of the gelatine.

Relevance: 30.00%

Abstract:

This paper presents a kernel density correlation based non-rigid point set matching method and shows its application in statistical model based 2D/3D reconstruction of a scaled, patient-specific model from an uncalibrated X-ray radiograph. In this method, both the reference point set and the floating point set are first represented by kernel density estimates. A correlation measure between these two kernel density estimates is then optimized to find a displacement field that moves the floating point set onto the reference point set. Regularizations based on the overall deformation energy and the motion smoothness energy are used to constrain the displacement field for a robust point set matching. Incorporating this non-rigid point set matching method into a statistical model based 2D/3D reconstruction framework, we can reconstruct a scaled, patient-specific model from noisy edge points that are extracted directly from the X-ray radiograph by an edge detector. Our experiment, conducted on datasets of two patients and six cadavers, demonstrates a mean reconstruction error of 1.9 mm.
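The kernel density correlation measure has a convenient closed form when Gaussian kernels are used, because the integral of the product of two Gaussians is again a Gaussian in the separation of their centers. A minimal 2-D sketch, with illustrative points and bandwidth (the paper's regularized displacement-field optimization is omitted here), is:

```python
import math

def kde_correlation(reference, floating, sigma):
    """Correlation of two Gaussian KDEs over 2-D point sets (up to a constant):
    the closed-form pairwise sum exp(-||a - b||^2 / (4 sigma^2))."""
    total = 0.0
    for ax, ay in reference:
        for bx, by in floating:
            d2 = (ax - bx) ** 2 + (ay - by) ** 2
            total += math.exp(-d2 / (4.0 * sigma**2))
    return total / (len(reference) * len(floating))

ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shifted = [(x + 0.1, y) for x, y in ref]  # small displacement of the floating set
far = [(x + 3.0, y) for x, y in ref]      # large displacement

# Moving the floating set closer to the reference raises the correlation,
# which is the quantity the matching method maximizes:
print(kde_correlation(ref, shifted, 0.5) > kde_correlation(ref, far, 0.5))  # True
```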

Relevance: 30.00%

Abstract:

Iterative Closest Point (ICP) is a widely used method for point registration that is based on binary point-to-point assignments, whereas the Expectation Conditional Maximization (ECM) algorithm solves the point registration problem within a maximum likelihood framework with point-to-cluster matching. In this paper, by implementing both algorithms and conducting experiments in a scenario where dozens of model points must be registered with thousands of observation points on a pelvis model, we investigated and compared the performance (e.g. accuracy and robustness) of both ICP and ECM for point registration in cases without noise and with Gaussian white noise. The experimental results reveal that the ECM method is much less sensitive to initialization and achieves more consistent estimates of the transformation parameters than the ICP algorithm, since the latter easily sinks into local minima and leads to quite different registration results under different initializations. Both algorithms can reach comparably high registration accuracy; however, the ICP method usually requires an appropriate initialization to converge globally. In the presence of Gaussian white noise, the experiments show that ECM is less efficient but more robust than ICP.
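The ICP iteration referred to above can be illustrated with a toy, translation-only sketch (real ICP also estimates rotation, and this is not the paper's implementation): each iteration matches every model point to its nearest observation point (the binary point-to-point assignment) and then moves the model by the mean residual. Data are invented; with a small offset relative to the point spacing the nearest-neighbour assignment is correct from the start, which is exactly why ICP depends on a good initialization.

```python
def icp_translation(model, observed, iterations=20):
    """Toy 2-D ICP estimating only a translation (tx, ty)."""
    tx, ty = 0.0, 0.0
    for _ in range(iterations):
        moved = [(x + tx, y + ty) for x, y in model]
        # Binary nearest-neighbour assignment:
        matches = [min(observed,
                       key=lambda p, m=m: (p[0] - m[0])**2 + (p[1] - m[1])**2)
                   for m in moved]
        # Least-squares translation update = mean residual:
        dx = sum(p[0] - m[0] for p, m in zip(matches, moved)) / len(moved)
        dy = sum(p[1] - m[1] for p, m in zip(matches, moved)) / len(moved)
        tx, ty = tx + dx, ty + dy
    return tx, ty

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
observed = [(x + 0.4, y - 0.3) for x, y in model]  # model shifted by (0.4, -0.3)

print(tuple(round(v, 6) for v in icp_translation(model, observed)))  # (0.4, -0.3)
```

ECM replaces the hard nearest-neighbour assignment with soft point-to-cluster responsibilities, which is what makes it less sensitive to the starting guess.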

Relevance: 30.00%

Abstract:

Over the past 7 years, the enediyne anticancer antibiotics have been widely studied due to their DNA-cleaving ability. The focus of interest in these antibiotics, represented by kedarcidin chromophore, neocarzinostatin chromophore, calicheamicin, esperamicin A, and dynemicin A, is the enediyne moiety contained within each of them. In its inactive form, the moiety is benign to its environment. Upon suitable activation, the system undergoes a Bergman cycloaromatization proceeding through a 1,4-dehydrobenzene diradical intermediate. It is this diradical intermediate that is thought to cleave double-stranded DNA through hydrogen atom abstraction. Semiempirical, semiempirical CI, Hartree–Fock ab initio, and MP2 electron correlation methods have been used to investigate the inactive hex-3-ene-1,5-diyne reactant, the 1,4-dehydrobenzene diradical, and a transition state structure of the Bergman reaction. Geometries calculated with different basis sets and by semiempirical methods have been used for single-point calculations using electron correlation methods. These results are compared with the best experimental and theoretical results reported in the literature. Implications of these results for computational studies of the enediyne anticancer antibiotics are discussed.

Relevance: 30.00%

Abstract:

Engineering students continue to develop and show misconceptions due to prior knowledge and experiences (Miller, Streveler, Olds, Chi, Nelson, & Geist, 2007). Misconceptions have been documented in students' understanding of heat transfer (Krause, Decker, Niska, Alford, & Griffin, 2003) by concept inventories (e.g., Jacobi, Martin, Mitchell, & Newell, 2003; Nottis, Prince, Vigeant, Nelson, & Hartsock, 2009). Students' conceptual understanding has also been shown to vary by grade point average (Nottis et al., 2009). Inquiry-based activities (Nottis, Prince, & Vigeant, 2010) have shown some success over traditional instructional methods (Tasoglu & Bakac, 2010) in altering misconceptions. The purpose of the current study was to determine whether undergraduate engineering students' understanding of heat transfer concepts significantly changed after instruction with eight inquiry-based activities (Prince & Felder, 2007) supplementing instruction, and whether students' self-reported GPA and prior knowledge, as measured by completion of specific engineering courses, affected these changes. The Heat and Energy Concept Inventory (Prince, Vigeant, & Nottis, 2010) was used to assess conceptual understanding. It was found that conceptual understanding significantly increased from pre- to post-test. It was also found that GPA had an effect on conceptual understanding of heat transfer; significant differences were found in post-test scores on the concept inventory between GPA groups. However, there were mixed results when courses previously taken were analyzed. Future research should strive to analyze how prior knowledge affects conceptual understanding and aim to reduce the limitations of the current study, such as the sampling method and the methods of measuring GPA and prior knowledge.

Relevance: 30.00%

Abstract:

BACKGROUND: The Roche CARDIAC proBNP point-of-care (POC) test is the first test intended for the quantitative determination of N-terminal pro-brain natriuretic peptide (NT-proBNP) in whole blood as an aid in the diagnosis of suspected congestive heart failure, in the monitoring of patients with compensated left-ventricular dysfunction and in the risk stratification of patients with acute coronary syndromes. METHODS: A multicentre evaluation was carried out to assess the analytical performance of the POC NT-proBNP test at seven different sites. RESULTS: The majority of the coefficients of variation (CVs) obtained for within-series imprecision using native blood samples were below 10%, both for 52 samples measured ten times and for 674 samples measured in duplicate. Using quality control material, the majority of CV values for day-to-day imprecision were below 14% for the low control level and below 13% for the high control level. In method comparisons of four lots of the POC NT-proBNP test with the laboratory reference method (Elecsys proBNP), the slope ranged from 0.93 to 1.10 and the intercept ranged from 1.8 to 6.9. The bias found between venous and arterial blood with the POC NT-proBNP method was ≤5%. All four lots of the POC NT-proBNP test investigated showed excellent agreement, with mean differences of between -5% and +4%. No significant interference was observed with lipaemic blood (triglyceride concentrations up to 6.3 mmol/L), icteric blood (bilirubin concentrations up to 582 µmol/L), haemolytic blood (haemoglobin concentrations up to 62 mg/L), biotin (up to 10 mg/L), rheumatoid factor (up to 42 IU/mL), or with 50 out of 52 standard or cardiological drugs in therapeutic concentrations. With bisoprolol and BNP, somewhat higher bias was found in the low NT-proBNP concentration range (<175 ng/L). Haematocrit values between 28% and 58% had no influence on the test result. Interference may be caused by human anti-mouse antibodies (HAMA) types 1 and 2.
No significant influence on the results with POC NT-proBNP was found using volumes of 140-165 µL. High NT-proBNP concentrations above the measuring range of the POC NT-proBNP test did not lead to falsely low results due to a potential high-dose hook effect. CONCLUSIONS: The POC NT-proBNP test showed good analytical performance and excellent agreement with the laboratory method. The POC NT-proBNP assay is therefore suitable for the POC setting.

Relevance: 30.00%

Abstract:

Ensuring water is safe at the source and at the point of use is important in areas of the world where drinking water is collected from communal supplies. This report describes a study in rural Mali to determine the appropriateness of an assumption common among development organizations: that drinking water will remain safe at the point of use if collected from a safe (improved) source. Water was collected from ten sources (borehole wells with hand pumps, and hand-dug wells) and from forty-five households using water from each source type. Water quality was evaluated seasonally (quarterly) for levels of total coliform, E. coli, and turbidity. Microbial testing was done using the 3M Petrifilm™ method, and turbidity testing with a turbidity tube. Microbial testing results were analyzed using statistical tests including Kruskal-Wallis, Mann-Whitney, and analysis of variance. Results show that water from hand pumps did not contain total coliform or E. coli and had turbidity under 5 NTU, whereas water from dug wells had high levels of bacteria and turbidity. However, water at the point of use (household) from hand pumps showed microbial contamination, at times indistinguishable from that of households using dug wells, indicating a decline in water quality from source to point of use. Chemical treatment at the point of use is suggested as an appropriate solution for eliminating post-source contamination. Additionally, it is recommended that future work be done to modify existing water development strategies to consider water quality at the point of use.

Relevance: 30.00%

Abstract:

PURPOSE: To correlate the dimension of the visual field (VF) tested by Goldmann kinetic perimetry with the extent of visibility of the highly reflective layer between the inner and outer segments of the photoreceptors (IOS) seen in optical coherence tomography (OCT) images in patients with retinitis pigmentosa (RP). METHODS: In a retrospectively designed cross-sectional study, 18 eyes of 18 patients with RP were examined with OCT and Goldmann perimetry using test target I4e and compared with 18 eyes of 18 control subjects. A-scans of raw scan data of Stratus OCT images (Carl Zeiss Meditec AG, Oberkochen, Germany) were quantitatively analyzed for the presence of the signal generated by the highly reflective layer between the IOS. Starting in the fovea, the distance to which this signal was detectable was measured. Visual fields were analyzed by measuring the distance from the center point to isopter I4e. OCT and visual field data were analyzed in a clockwise fashion every 30 degrees, and corresponding measures were correlated. RESULTS: In corresponding alignments, the distance from the center point to isopter I4e and the distance to which the highly reflective signal from the IOS could be detected correlate significantly (r = 0.75, P < 0.0001). The greater the distance in the VF, the greater the distance measured in OCT. CONCLUSIONS: The authors hypothesize that the retinal structure from which the highly reflective layer between the IOS emanates is of critical importance for visual and photoreceptor function. Further research is warranted to determine whether this may be useful as an objective marker of progression of retinal degeneration in patients with RP.