860 results for "constant comparative method"


Relevance: 30.00%

Abstract:

A fast and automatic method for radiocarbon analysis of aerosol samples is presented. This type of analysis requires a high number of measurements on samples with low carbon masses, but tolerates lower precision than radiocarbon dating. The method is based on trapping CO2 online, coupling an elemental analyzer to a MICADAS AMS by means of a gas interface. It gives results similar to a previously validated reference method on the same set of samples. The method is fast and automatic and typically provides uncertainties of 1.5–5% for representative aerosol samples. It proves robust and reliable and allows overnight, unattended measurements. A constant and cross-contamination correction is included, which indicates a constant contamination of 1.4 ± 0.2 μg C with 70 ± 7 pMC and a cross contamination of (0.2 ± 0.1)% from the previous sample. A real-time online coupling version of the method was also investigated; it shows promising results for standard materials, with slightly higher uncertainties than the trapping approach.
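The constant-contamination correction mentioned above is, in essence, a two-component mass balance. As a hedged illustration (the exact formulation used by the authors is not given in the abstract; the function names and the forward-simulated values below are ours), it can be sketched as:

```python
def correct_constant_contamination(f_meas, m_meas, f_const=0.70, m_const=1.4):
    """Two-component mass balance: the measured carbon mass m_meas (ug C)
    and fraction modern f_meas include a constant contaminant of mass
    m_const (ug C) with fraction modern f_const (70 pMC = 0.70)."""
    m_sample = m_meas - m_const
    return (f_meas * m_meas - f_const * m_const) / m_sample

def correct_cross_contamination(f_meas, f_previous, r=0.002):
    """Remove carry-over of a fraction r (here 0.2 %) of the previous
    sample from the measured fraction modern."""
    return (f_meas - r * f_previous) / (1.0 - r)
```

Inverting the forward mass balance on synthetic values recovers the assumed sample composition, which is a quick self-consistency check for such corrections.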

Relevance: 30.00%

Abstract:

Background: The aim of this study was to evaluate the validity and the inter- and intra-examiner reliability of panoramic-radiograph findings of different maxillary sinus anatomic variations and pathologies that had initially been prediagnosed by cone beam computed tomography (CBCT). Methods: After pairs of two-dimensional (2D) panoramic and three-dimensional (3D) CBCT images of patients treated at the outpatient department had been screened, 54 selected maxillary sinus conditions were first predefined on the CBCT images by two blinded consultants, each using a questionnaire covering ten clinically relevant findings. Using the identical questionnaire, the consultants evaluated the panoramic radiographs at a later time point. The results were analyzed for inter-imaging differences in the evaluation of the maxillary sinus between the 2D and 3D methods. Additionally, two resident groups (first year and last year of training) performed two diagnostic runs on the panoramic radiographs, and the results were analyzed for inter- and intra-observer reliability. Results: There is a moderate risk of false diagnosis of maxillary sinus findings if only panoramic radiography is used. Of the ten predefined conditions, only maxillary bone cysts penetrating into the sinus were frequently rated differently between 2D and 3D diagnostics. Additionally, on panoramic radiographs, the inter-observer comparison showed that basal septa were rated differently significantly more often, and the intra-observer comparison showed a significant lack of reliability in detecting maxillary bone cysts penetrating into the sinus. Conclusions: Panoramic radiography provides most of the information on the maxillary sinus, and it may be an adequate imaging method. However, particular maxillary sinus findings in panoramic imaging may rest on a rather examiner-dependent assessment. 
Therefore, a consistent and precise evaluation of specific maxillary sinus conditions may only be possible with CBCT, because it provides additional information compared to panoramic radiography. This can be relevant for subsequent surgical procedures; consequently, we recommend CBCT whenever a precise preoperative evaluation is mandatory. However, the higher radiation dose and costs of 3D imaging need to be considered. Keywords: Panoramic radiography; Cone beam computed tomography; Maxillary sinus; Inter-imaging method differences; Inter-examiner reliability; Intra-examiner reliability
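Inter- and intra-examiner reliability of categorical radiographic ratings is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa; the abstract does not name the statistic actually used, so the following is only an illustrative sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from the marginal rating frequencies of each rater
    expected = sum((count_a[c] / n) * (count_b[c] / n)
                   for c in set(count_a) | set(count_b))
    return (observed - expected) / (1.0 - expected)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which makes it more informative than raw percent agreement for rare findings.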

Relevance: 30.00%

Abstract:

Interaction effects are an important scientific interest in many areas of research. A common approach to investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect, replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has only addressed each general method, not the interaction effect specifically. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived from the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. In a simulation study, the proposed method was more powerful than least-squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. 
The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial; a baseline age-by-weight interaction effect was found significant in predicting the change from baseline in NIHSS at month 3 among patients who received t-PA therapy.
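The cross-product regression approach that the dissertation takes as its starting point can be sketched as follows (synthetic data; a large-sample normal approximation replaces the exact t distribution; this is not the author's proposed method):

```python
import math
import numpy as np

def interaction_pvalue(x1, x2, y):
    """Two-sided p-value of the cross-product coefficient b3 in
    y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + error, fitted by ordinary
    least squares (normal approximation to the t distribution)."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                 # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)        # coefficient covariance
    t = beta[3] / math.sqrt(cov[3, 3])
    return math.erfc(abs(t) / math.sqrt(2.0))    # two-sided normal tail

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y_with = 1 + x1 + x2 + 0.8 * x1 * x2 + rng.normal(size=200)     # interaction
y_without = 1 + x1 + x2 + rng.normal(size=200)                  # none
```

With a genuine interaction the cross-product term is detected easily; without one, its p-value is essentially a uniform draw.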

Relevance: 30.00%

Abstract:

The operator effect is a well-known methodological bias already quantified in some taphonomic studies. However, the replicability effect, i.e., the use of taphonomic attributes as a replicable scientific method, has not been taken into account until now. Here we quantify this replicability bias for the first time using different multivariate statistical techniques, testing whether the operator effect is related to the replicability effect. We analyzed the results reported by 15 operators working on the same dataset. Each operator analyzed 30 biological remains (bivalve shells) from five different sites, considering the attributes fragmentation, edge rounding, corrasion, bioerosion and secondary color. The operator effect followed the same pattern reported in previous studies, characterized by worse correspondence for attributes having more than two levels of damage categories. However, the operator effect did not appear to be related to the replicability effect, because nearly all operators found differences among sites. Although the binary attribute bioerosion exhibited 83% correspondence among operators, it was the taphonomic attribute showing the highest dispersion among operators (28%). Therefore, we conclude that binary attributes, despite reducing the operator effect, diminish replicability, resulting in different interpretations of concordant data. We found that a variance of nearly 8% among operators was enough to generate a different taphonomic interpretation in a Q-mode cluster analysis. The results reported here show that the statistical method employed influences the level of replicability and comparability of a study, and that the availability of results may be a valid alternative to reduce bias.

Relevance: 30.00%

Abstract:

In the last decade, the aquatic eddy correlation (EC) technique has proven to be a powerful approach for non-invasive measurements of oxygen fluxes across the sediment-water interface. Fundamental to the EC approach is the correlation of turbulent velocity and oxygen concentration fluctuations measured at high frequency in the same sampling volume. Oxygen concentrations are commonly measured with fast-responding electrochemical microsensors. However, due to their own oxygen consumption, electrochemical microsensors are sensitive to changes in the diffusive boundary layer surrounding the probe and thus to changes in the ambient flow velocity. This so-called stirring sensitivity of microsensors constitutes an inherent correlation of flow velocity and oxygen sensing, and thus an artificial flux that can confound the benthic flux determination. To assess the artificial flux, we measured the correlation between the turbulent flow velocity and the signal of oxygen microsensors in a sealed annular flume without any oxygen sinks or sources. The experiments revealed significant correlations, even for sensors designed to have low stirring sensitivities of ~0.7%. The artificial fluxes depended on the ambient flow conditions and, counterintuitively, increased at higher velocities because of the nonlinear contribution of turbulent velocity fluctuations. The measured artificial fluxes ranged from 2 mmol m^-2 d^-1 for weak turbulent flow to 70 mmol m^-2 d^-1 for very strong turbulent flow. Furthermore, the stirring sensitivity depended on the orientation of the sensor towards the flow. Optical microsensors (optodes), which should not exhibit a stirring sensitivity, were tested in parallel and did not show any significant correlation between O2 signals and turbulent flow. In conclusion, EC data obtained with electrochemical sensors can be affected by artificial flux, and we recommend using optical microsensors in future EC studies. 
Flume experiments were conducted in February 2013 at the Institute for Environmental Sciences, University of Koblenz-Landau in Landau. Experiments were performed in a closed, oval-shaped acrylic glass flume with a cross-sectional width of 4 cm, a height of 10 cm and a total length of 54 cm. The fluid flow was induced by a motor-driven propeller, and mean flow velocities of up to 20 cm s^-1 were generated by applying voltages between 0 V and 4 V DC. The flume was completely sealed with an acrylic glass cover. Oxygen sensors were inserted through rubber seal fittings, which allowed positioning the sensors at inclinations to the main flow direction of ~60°, ~95° and ~135°. A Clark-type electrochemical O2 microsensor with a low stirring sensitivity (0.7%) was tested, and a fast-responding needle-type O2 optode (PyroScience GmbH, Germany) was used as a reference, as optodes should not be stirring sensitive. Instantaneous three-dimensional flow velocities were measured at 7.4 Hz using stereoscopic particle image velocimetry (PIV), and the velocity at the sensor tip was extracted. The correlation of the fluctuating O2 sensor signals and the fluctuating velocities was quantified by cross-correlation analysis; a significant cross-correlation is equivalent to a significant artificial flux. For a total of 18 experiments, the flow velocity was adjusted between 1.7 and 19.2 cm s^-1, and three orientations of the electrochemical sensor were tested, with inclination angles of ~60°, ~95° and ~135° with respect to the main flow direction. In experiments 16-18, wavelike flow was induced, whereas in all other experiments the motor was driven at constant voltage. In 7 experiments, O2 was additionally measured by optodes. 
Although performed simultaneously with the electrochemical sensor measurements, the optode measurements are listed as separate experiments (denoted by the suffix 'op' in the filename), because the velocity time series was extracted at the optode tip, located at a different position in the flume.
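The core of the EC artificial-flux computation, correlating velocity and O2 fluctuations, can be sketched on synthetic data (the coefficients below are illustrative, not the flume results):

```python
import numpy as np

def covariance_flux(w, c):
    """Eddy-correlation flux estimate: mean product of the velocity and
    concentration fluctuations around their respective means."""
    return np.mean((w - w.mean()) * (c - c.mean()))

rng = np.random.default_rng(1)
w = rng.normal(size=5000)                                  # turbulent velocity
c_electrode = 0.05 * w + rng.normal(scale=0.1, size=5000)  # stirring-sensitive
c_optode = rng.normal(scale=0.1, size=5000)                # flow-insensitive
```

A stirring-sensitive sensor whose signal partly follows the flow produces a spurious covariance (an artificial flux) even with no real oxygen sink or source, whereas a flow-insensitive sensor does not.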

Relevance: 30.00%

Abstract:

Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, the other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines indicating which method is best in each case. Aim: Compile a set of rules that software engineering researchers can use to ascertain which aggregation method is best for the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and the effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable at other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
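For orientation, the fixed-effect inverse-variance form of the weighted mean difference, the method the SE community has focused on, can be sketched as follows (a simplified textbook form; the simulation in the paper is more elaborate):

```python
def weighted_mean_difference(experiments):
    """Fixed-effect inverse-variance pooling of mean differences.
    Each experiment is (mean_treatment, mean_control, se_of_difference);
    weights are the reciprocal squared standard errors."""
    num = den = 0.0
    for m_t, m_c, se in experiments:
        w = 1.0 / se ** 2
        num += w * (m_t - m_c)
        den += w
    return num / den, (1.0 / den) ** 0.5  # pooled WMD and its standard error
```

This is why WMD needs reported variances: without a standard error per experiment the weights cannot be formed, which is exactly the situation where the response-ratio variants become attractive.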

Relevance: 30.00%

Abstract:

Eutectic rods of Al2O3–Er3Al5O12 were grown by directional solidification using the laser-heated floating zone method at rates in the range 25–1500 mm/h. Their microstructure and mechanical properties (hardness, toughness and strength) were investigated as a function of the growth rate. A homogeneous, interpenetrated microstructure was found in most cases, and the interphase spacing decreased with growth rate following the Hunt–Jackson law. Hardness increased slightly as the interphase spacing decreased, while toughness was low and independent of the microstructure. The rods presented very high bending strength as a result of the homogeneous microstructure, and their strength increased rapidly as the interphase spacing decreased, reaching a maximum of 2.7 GPa for the rods grown at 750 mm/h. The bending strength remained constant up to 1300 K and decreased above this temperature. The relationship between the microstructure and the mechanical properties was established from the analysis of the microstructure and of the fracture mechanisms.
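The Hunt–Jackson law cited above states that the squared interphase spacing times the growth rate is constant, i.e. spacing scales as rate^(-1/2). A sketch with hypothetical values (the constant C below is illustrative, not the measured one):

```python
import math

# Hunt-Jackson relation: spacing**2 * rate = C  ->  spacing = sqrt(C / rate)
C = 4.0                                        # um^2 * mm/h, hypothetical
rates = [25.0, 100.0, 400.0, 1500.0]           # growth rates, mm/h
spacings = [math.sqrt(C / v) for v in rates]   # interphase spacings, um

def fitted_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))
```

A log-log fit of spacing against growth rate recovering a slope of -1/2 is the usual check that a measured microstructure follows the law.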

Relevance: 30.00%

Abstract:

Biometrics applied to mobile devices are of great interest for security applications. Everyday scenarios can benefit from a combination of the most secure systems and the most simple and widespread devices. This document presents a hand biometric system oriented to mobile devices, proposing a non-intrusive, contact-less acquisition process in which final users take a picture of their hand in free space with a mobile device, without removing rings, bracelets or watches. The contribution of this paper is threefold: first, a feature extraction method is proposed, providing hand measurements invariant to the aforementioned changes; second, a template creation procedure based on hand geometric distances is provided, requiring information from only one individual, without considering data from the rest of the individuals in the database; finally, a template matching scheme is proposed, maximizing the intra-class similarity and minimizing the inter-class likeness. The proposed method is evaluated using three publicly available contact-less, platform-free databases. In addition, the results obtained with these databases are compared to those provided by two competitive pattern recognition techniques, Support Vector Machines (SVM) and k-Nearest Neighbour, often employed in the literature. The approach thus provides an appropriate solution for adapting hand biometrics to mobile devices, with accurate results and a non-intrusive acquisition procedure that increases overall acceptance by the final user.

Relevance: 30.00%

Abstract:

An analytical solution of the two body problem perturbed by a constant tangential acceleration is derived with the aid of perturbation theory. The solution, which is valid for circular and elliptic orbits with generic eccentricity, describes the instantaneous time variation of all orbital elements. A comparison with high-accuracy numerical results shows that the analytical method can be effectively applied to multiple-revolution low-thrust orbit transfer around planets and in interplanetary space with negligible error.
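A minimal numerical check of the secular effect described above can be sketched in normalized units (mu = 1), comparing an RK4 integration of the perturbed two-body problem with the well-known first-order rate da/dt = 2*f*a^(3/2)/sqrt(mu) for a near-circular orbit; the thrust level and integration span are arbitrary illustrative choices, not those of the paper:

```python
import math

MU = 1.0  # gravitational parameter, normalized units

def accel(state, f_t):
    """Two-body gravity plus a constant tangential acceleration f_t."""
    x, y, vx, vy = state
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    return (-MU * x / r**3 + f_t * vx / v,
            -MU * y / r**3 + f_t * vy / v)

def rk4_step(state, f_t, dt):
    def deriv(s):
        ax, ay = accel(s, f_t)
        return (s[2], s[3], ax, ay)
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def semi_major_axis(state):
    x, y, vx, vy = state
    r = math.hypot(x, y)
    v2 = vx**2 + vy**2
    return 1.0 / (2.0 / r - v2 / MU)  # vis-viva

f_t, dt, t_end = 1e-4, 1e-3, 20.0
state = (1.0, 0.0, 0.0, 1.0)          # circular orbit, a = 1
for _ in range(int(round(t_end / dt))):
    state = rk4_step(state, f_t, dt)

# integrating da/dt = 2 f a^(3/2) with a(0) = 1 gives a = (1 - f t)^-2
a_secular = (1.0 - f_t * t_end) ** -2
```

Over a few revolutions the numerically propagated semi-major axis tracks the first-order secular prediction closely, which is the kind of agreement the abstract reports for its full analytical solution.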

Relevance: 30.00%

Abstract:

An application of the Finite Element Method (FEM) to the solution of a geometric problem is shown. The problem is related to curve fitting, i.e., passing a curve through a set of given points even if they are irregularly spaced. Curves with cusps can be encountered in practice, and smooth interpolating curves may then be unsuitable. In this paper the possibilities of the FEM for dealing with this type of problem are shown. A particular example of application to road planning is discussed; in this case the functional to be minimized should express the unpleasant effects on the road traveller. Some comparative numerical examples are also given.

Relevance: 30.00%

Abstract:

Structural concrete remains one of the most widely used materials in construction owing to its strength, stiffness and flexibility in structural design. The calculation of concrete structures using beams and beam-columns is complex as a consequence of the coupling between stresses and the nonlinear behaviour of the material. The models most commonly used for analysis are Bernoulli-Euler and Timoshenko; the literature recommends the latter when the depth-to-span ratio is not small or the elements are heavily reinforced. The main objective of this thesis is the analysis of beam and beam-column elements in the nonlinear regime with shear deformation, applying the concept of the Equivalent Linear Structural Element (ELSE). This concept basically consists of solving the problem of a structural element in the nonlinear regime by transforming it into an equivalent linear one, such that both elements have the same deflections and the same internal forces. First, a comparative study was carried out of the various proposals for including shear deformation, of the constitutive and sectional models of structural concrete, and of nonlinear calculation methods based on the finite element method (FEM). Since the nonlinear problem is solved through a succession of linear problems using a homotopy process, the linear Timoshenko beam and beam-column problems are solved by FEM using nodally exact solutions (ENS) and an equivalent distributed load of any order. An excellent approximation of the solution is thus obtained with very few finite elements, not only at the nodes but also inside the elements. 
The ELSE concept is introduced for the analysis of a bar of nonlinear material subjected to axial actions, and is then extended to the nonlinear analysis of beams and beam-columns with shear deformation. Notably, for the latter, the solution of an element in the nonlinear regime equals that of one in the linear regime whose stiffnesses are piecewise constant, provided fictitious moments and point loads are added at the nodes, together with a fictitious distributed moment along the whole element. 
Two analysis methods have been developed: one for isostatic problems and a general one, applicable to both isostatic and hyperstatic problems. The first determines the ELSE at the outset; this element, now in the linear regime, is then analysed by FEM-ENS. The general method uses a homotopy that iteratively transforms linear constitutive laws into the nonlinear laws of the material; when combined with the FEM, the equivalent linear element and the solution of the original problem are determined at the end of the whole process. Although the general method is a procedure close to Newton-Raphson, it has the advantage of allowing the deformations of the element in the nonlinear regime to be visualized both qualitatively and quantitatively, since at each step of the process the modification of the flexural and shear stiffnesses and the evolution of the fictitious actions can be observed. Moreover, the results obtained, compared with those published in the literature, indicate that the ELSE concept offers a direct and efficient way to analyse, with very good accuracy, problems involving beams and beam-columns whose typology does not allow shear effects to be neglected.
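The equivalent-linear idea, replacing the nonlinear element by a linear one with the same displacement and force, can be illustrated in its simplest form on an axial bar with a secant-stiffness iteration (the constitutive law, dimensions and load below are hypothetical, and the thesis procedure itself is more general):

```python
def sigma(eps, E=30e3, eps0=0.002):
    """Hypothetical nonlinear (softening) stress-strain law."""
    return E * eps / (1.0 + eps / eps0)

def equivalent_linear_bar(P, L=1.0, A=0.01, E=30e3, eps0=0.002, tol=1e-12):
    """Axial bar under load P: iterate on the secant modulus until the
    linear bar (the 'equivalent' one) reproduces the nonlinear response."""
    u = P * L / (E * A)  # first guess from the initial stiffness
    for _ in range(500):
        eps = u / L
        E_sec = sigma(eps, E, eps0) / eps   # secant modulus of the law
        u_new = P * L / (E_sec * A)         # displacement of the linear bar
        if abs(u_new - u) < tol:
            break
        u = u_new
    return u_new
```

At convergence the linear bar with the secant modulus carries the same force at the same displacement as the nonlinear bar, which is the one-dimensional analogue of the equivalence the thesis exploits.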

Relevance: 30.00%

Abstract:

This paper presents an extensive and useful comparison of existing formulas for estimating wave forces on crown walls. The paper also provides valuable insights into crown wall behaviour, suggesting the use of the formulas for preliminary sizing and recommending, in any case, tests on a physical model to confirm the final design. The authors helpfully advise using more than one method to obtain results closer to reality, always taking into account the test conditions under which each formula was developed.

Relevance: 30.00%

Abstract:

The ballast pick-up (or ballast train-induced-wind erosion, BTE) phenomenon is a limiting factor for the maximum allowed operational train speed. Determining the conditions for the initiation of motion of the ballast stones due to the wind gust created by high-speed trains is critical for predicting the start of ballast pick-up because, once motion is initiated, a saltation-like chain reaction can take place. The aim of this paper is to present a model for evaluating the effect of a random aerodynamic impulse on the initiation of stone motion, and an experimental study performed to check the capability of the proposed model to classify trains by their effect on the ballast due to the flow they generate. A measurement campaign was performed at kp 69 + 500 on the Madrid–Barcelona High Speed Line. The results show the feasibility of the proposed method and contribute to a technique for BTE characterization, which can be relevant for the development of train interoperability standards.

Relevance: 30.00%

Abstract:

In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion, different objective functions have been chosen (total potential energy and average quadratic error), and the number of nodes and dofs of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest-descent gradient technique has been applied as the optimization algorithm. However, this efficient technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonable initial regular meshes used in standard practice. This conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e., curves tangent at each point to the principal direction lines of the elastic problem to be solved; the nodes should be regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is first carried out with the initial FE mesh; from its results the net of isostatic lines can be drawn, and in a first trial an ii-mesh can be built. This first ii-mesh can be improved, if necessary, by analyzing the problem again and generating, after the FE analysis, a new and improved ii-mesh. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.

Relevance: 30.00%

Abstract:

Transformed-rule up and down psychophysical methods have gained great popularity, mainly because they combine criterion-free responses with an adaptive procedure allowing rapid determination of an average stimulus threshold at various criterion levels of correct responses. The statistical theory underlying the methods now in routine use is based on sets of consecutive responses with assumed constant probabilities of occurrence. Response rules requiring consecutive responses prevent the use of the most desirable response criterion, that of 75% correct responses. The earliest transformed-rule up and down method, whose rules included nonconsecutive responses, did not have this limitation but failed to become generally accepted for lack of a published theoretical foundation. Such a foundation is provided in this article and is validated empirically with the help of experiments on human subjects and a computer simulation. In addition to allowing the criterion of 75% correct responses, the method is more efficient than the methods excluding nonconsecutive responses from their rules.
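For orientation, the classic consecutive-response transformed rule (2-down/1-up, which tracks the level where p^2 = 0.5, i.e. about 70.7% correct, and hence cannot reach the 75% criterion discussed above) can be simulated as follows; the psychometric function and all parameters are illustrative assumptions:

```python
import math
import random

def p_correct(x, mu=0.0, s=1.0):
    """Hypothetical 2AFC psychometric function with a 50 % guessing floor."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(x - mu) / s))

def two_down_one_up(n_trials=4000, x0=3.0, step=0.25, seed=7):
    """Classic transformed up-down rule with consecutive responses:
    two correct in a row -> stimulus down (harder), one wrong -> up.
    The track equilibrates where p(correct)**2 = 0.5."""
    rng = random.Random(seed)
    x, streak, levels = x0, 0, []
    for _ in range(n_trials):
        levels.append(x)
        if rng.random() < p_correct(x):     # simulated observer responds
            streak += 1
            if streak == 2:
                x -= step
                streak = 0
        else:
            x += step
            streak = 0
    # average the second half of the track, after convergence
    return sum(levels[n_trials // 2:]) / (n_trials // 2)

threshold = two_down_one_up()
```

Averaging the converged track gives a stimulus level at which the simulated observer is correct close to 70.7% of the time, illustrating why rules of this family are tied to fixed criterion levels.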