946 results for Numerical error


Relevance: 20.00%

Abstract:

This paper proposes an architecture for machining process and production monitoring to be applied to machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the use of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
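To make the monitoring idea concrete, below is a minimal Python sketch of the lowest monitoring level such an architecture enables: polling internal CNC signals through the PC-based interface and flagging deviations from a moving baseline. The function read_cnc_sample and all thresholds are hypothetical stand-ins, since the paper does not specify an API.

```python
import random
import statistics

def read_cnc_sample():
    """Stand-in for the open-CNC data interface (hypothetical).

    A real implementation would query the PC-based HMI / CNC kernel
    for internal signals such as spindle load and axis feed rate.
    """
    return {"spindle_load_pct": random.gauss(40.0, 3.0),
            "feed_mm_min": random.gauss(800.0, 10.0)}

def monitor(n_samples=200, window=30, n_sigmas=3.0):
    """Flag spindle-load samples deviating from a moving baseline."""
    history, alarms = [], []
    for i in range(n_samples):
        s = read_cnc_sample()
        if len(history) >= window:
            mu = statistics.fmean(history[-window:])
            sd = statistics.stdev(history[-window:])
            if abs(s["spindle_load_pct"] - mu) > n_sigmas * sd:
                alarms.append((i, s["spindle_load_pct"]))
        history.append(s["spindle_load_pct"])
    return alarms

if __name__ == "__main__":
    print(monitor())
```

Because only CNC-internal data is used, a sketch like this needs no added sensors, which is the low-intrusiveness point made above.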

Relevance: 20.00%

Abstract:

In this paper we discuss the use of photonic crystal fibers (PCFs) as discrete devices for simultaneous wideband dispersion compensation and Raman amplification. The performance of the PCFs in terms of gain, ripple, optical signal-to-noise ratio (OSNR), and required fiber length for complete dispersion compensation is compared with that of conventional dispersion compensating fibers (DCFs). The main goal is to determine the maximum PCF loss below which its performance surpasses that of a state-of-the-art DCF and justifies practical use in telecommunication systems.
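As a worked example of the underlying dispersion-compensation arithmetic (with illustrative values not taken from the paper): complete compensation requires D_comp·L_comp = −D_span·L_span, and the break-even PCF attenuation follows from equating the total insertion losses of the two compensating fibers.

```python
# Worked example: fiber length required for complete dispersion
# compensation, D_comp * L_comp = -D_span * L_span.
# All numbers below are illustrative, not from the paper.

D_smf = 17.0     # ps/(nm km), standard single-mode fiber dispersion
L_smf = 80.0     # km, transmission span
D_pcf = -120.0   # ps/(nm km), assumed PCF dispersion (hypothetical)
D_dcf = -100.0   # ps/(nm km), typical DCF dispersion

accumulated = D_smf * L_smf            # ps/nm to be compensated
L_pcf = -accumulated / D_pcf           # km of PCF needed
L_dcf = -accumulated / D_dcf           # km of DCF needed

# Loss budget: the PCF pays off if its total insertion loss is below
# the DCF's, i.e. alpha_pcf * L_pcf < alpha_dcf * L_dcf.
alpha_dcf = 0.5                        # dB/km, typical DCF attenuation
alpha_pcf_max = alpha_dcf * L_dcf / L_pcf   # break-even PCF loss

print(f"PCF length: {L_pcf:.2f} km, DCF length: {L_dcf:.2f} km")
print(f"Break-even PCF attenuation: {alpha_pcf_max:.2f} dB/km")
```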

Relevance: 20.00%

Abstract:

Shot peening is a cold-working mechanical process in which a shot stream is propelled against a component surface. Its purpose is to introduce compressive residual stresses on component surfaces to increase fatigue resistance. This process is widely applied to springs because of their cyclic loading requirements. This paper presents a numerical model of the shot peening process using the finite element method. The results are compared with experimental residual stress measurements, obtained by the X-ray diffraction technique, in leaf springs submitted to this process. Furthermore, the results are compared with empirical and numerical correlations developed by other authors.
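A minimal sketch of the kind of comparison reported above, assuming hypothetical depth-resolved residual stress profiles from the FEM model and from X-ray diffraction:

```python
import math

# Illustrative comparison of a simulated residual stress profile with
# X-ray diffraction measurements (all numbers hypothetical).
depth_mm  = [0.00, 0.05, 0.10, 0.15, 0.20, 0.30]
sigma_fem = [-420, -510, -460, -300, -120,   20]   # MPa, FEM prediction
sigma_xrd = [-400, -530, -440, -310, -100,   35]   # MPa, measured

residuals = [f - m for f, m in zip(sigma_fem, sigma_xrd)]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Two quantities usually compared in shot peening studies:
peak_fem = min(sigma_fem)                 # maximum compressive stress
depth_peak = depth_mm[sigma_fem.index(peak_fem)]

print(f"RMSE(FEM vs XRD): {rmse:.1f} MPa")
print(f"Peak compressive stress: {peak_fem} MPa at {depth_peak} mm")
```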

Relevance: 20.00%

Abstract:

Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant in 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific minor allele frequency (MAF) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple criteria for selecting the best imputed markers for follow-up genotyping studies.
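The pairwise comparison described above can be sketched as follows: for each marker, an allelic chi-square test is computed once from empirical allele counts and once from imputed ones, and markers whose significance calls disagree are counted as discordant. All counts below are synthetic placeholders.

```python
# Minimal sketch of a pairwise imputed-vs-empirical association check.
from scipy.stats import chi2_contingency

def allelic_p(case_a, case_b, ctrl_a, ctrl_b):
    """P-value of a 2x2 allelic association test."""
    _, p, _, _ = chi2_contingency([[case_a, case_b], [ctrl_a, ctrl_b]])
    return p

markers = [
    # (empirical, imputed) allele counts: (caseA, caseB, ctrlA, ctrlB)
    ((620, 380, 500, 500), (640, 360, 495, 505)),
    ((510, 490, 505, 495), (700, 300, 500, 500)),  # badly imputed marker
]

ALPHA = 1e-5
discordant = 0
for emp, imp in markers:
    p_emp, p_imp = allelic_p(*emp), allelic_p(*imp)
    if (p_emp < ALPHA) != (p_imp < ALPHA):
        discordant += 1
print(f"{discordant} of {len(markers)} markers discordant at P < {ALPHA}")
```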

Relevance: 20.00%

Abstract:

The Perseus galaxy cluster is known to present multiple and misaligned pairs of cavities seen in X-rays, as well as twisted kiloparsec-scale jets at radio wavelengths; both morphologies suggest that the active galactic nucleus (AGN) jet is subject to precession. In this work, we performed three-dimensional hydrodynamical simulations of the interaction between a precessing AGN jet and the warm intracluster medium plasma, whose dynamics are coupled to a Navarro-Frenk-White dark matter gravitational potential. The AGN jet inflates cavities that become buoyantly unstable and rise up out of the cluster core. We found that under certain circumstances precession can give rise to multiple pairs of bubbles. For the physical conditions in the Perseus cluster, multiple pairs of bubbles are obtained for a jet precession opening angle >40 degrees acting for at least three precession periods, reproducing both radio and X-ray maps well. Based on such conditions, and assuming that the Bardeen-Petterson effect is dominant, we studied the evolution of the precession opening angle of this system. We were able to constrain the ratio between the accretion disk and black hole angular momenta to 0.7-1.4. We were also able to constrain the present precession angle to 30-40 degrees, as well as the approximate age of the inflated bubbles to 100-150 Myr.
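The jet kinematics assumed in such precession studies reduce to a unit vector tracing a cone about the precession axis. A minimal sketch with illustrative parameters (the 40 degree opening angle echoes the constraint above; the period is hypothetical):

```python
import math

# Jet axis tracing a cone of half-opening angle theta about the
# precession axis (taken as z), with period T. Values are illustrative.
def jet_direction(t, theta_deg=40.0, period_myr=50.0):
    """Unit vector of a precessing jet at time t (Myr)."""
    theta = math.radians(theta_deg)
    phi = 2.0 * math.pi * t / period_myr        # precession phase
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# Sampling three precession periods, as the simulations above require:
for t in range(0, 151, 30):   # Myr
    x, y, z = jet_direction(t)
    print(f"t = {t:3d} Myr: jet axis = ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```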

Relevance: 20.00%

Abstract:

Context. Cluster properties can be more distinctly studied in pairs of clusters, where we expect the effects of interactions to be strong. Aims. We discuss here the properties of the double cluster Abell 1758 at a redshift z ≈ 0.279. These clusters show strong evidence for merging. Methods. We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 × 1.16 deg², or 16.1 × 17.6 Mpc². Our X-ray analysis is based on archival XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared with the results of our numerical simulations. Results. The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands, and the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions. Comparing observational results with those derived from numerical simulations, we could mimic the most prominent features present in the metallicity map and propose an explanation for the dynamical history of the cluster. We found in particular that in the metal-rich elongated regions of the North cluster, winds had been more efficient than ram-pressure stripping in transporting metal-enriched gas to the outskirts. Conclusions. We confirm the merging structure of the North and South clusters, at both optical and X-ray wavelengths.
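A minimal sketch of the Schechter-function fitting used for the GLFs, on synthetic binned counts (the parameter values are placeholders, not those measured for Abell 1758):

```python
# Fitting a Schechter galaxy luminosity function (magnitude form)
# to binned counts. Data points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter LF in magnitude form, phi(M) dM."""
    x = 10.0 ** (0.4 * (M_star - M))           # L / L*
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Synthetic binned GLF (absolute magnitude vs counts per mag per Mpc^3):
M = np.linspace(-23.0, -17.0, 13)
true = schechter_mag(M, 3e-3, -21.0, -1.1)
rng = np.random.default_rng(0)
counts = true * rng.normal(1.0, 0.1, M.size)

popt, pcov = curve_fit(schechter_mag, M, counts, p0=[1e-3, -20.5, -1.0])
phi_star, M_star, alpha = popt
print(f"phi* = {phi_star:.2e}, M* = {M_star:.2f}, alpha = {alpha:.2f}")
```

An excess of bright galaxies, as reported for both clusters above, would show up as data points lying above this fitted form at the bright (most negative magnitude) end.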

Relevance: 20.00%

Abstract:

We have numerically solved the Heisenberg-Langevin equations describing the propagation of quantized fields through an optically thick sample of atoms. Two orthogonal polarization components are considered for the field, and the complete Zeeman sublevel structure of the atomic transition is taken into account. Quantum fluctuations of atomic operators are included through appropriate Langevin forces. We have considered an incident field in a linearly polarized coherent state (driving field) and vacuum in the perpendicular polarization and calculated the noise spectra of the amplitude and phase quadratures of the output field for two orthogonal polarizations. We analyze different configurations depending on the total angular momentum of the ground and excited atomic states. We examine the generation of squeezing for the driving-field polarization component and vacuum squeezing of the orthogonal polarization. Entanglement of orthogonally polarized modes is predicted. Noise spectral features specific to (Zeeman) multilevel configurations are identified.

Relevance: 20.00%

Abstract:

We revisit the scaling properties of a model for nonequilibrium wetting [Phys. Rev. Lett. 79, 2710 (1997)], correcting previous estimates of the critical exponents and providing a complete scaling scheme. Moreover, we investigate a special point in the phase diagram, where the model exhibits a roughening transition related to directed percolation. We argue that in the vicinity of this point evaporation from the middle of plateaus can be interpreted as an external field in the language of directed percolation. This analogy allows us to compute the crossover exponent and to predict the form of the phase transition line close to its terminal point.

Relevance: 20.00%

Abstract:

We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time τ(G_N) of a planar graph G_N of N vertices and maximal degree d is bounded from below by τ(G_N) ≥ C_d N (ln N)², with C_d = (d/4π) tan(π/d), and with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
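A small Monte Carlo sketch of this kind of measurement for the square lattice (d = 4), comparing the empirical mean cover time of a random walk on an L × L torus with the conjectured bound C_d N (ln N)². Periodic boundaries are assumed here for simplicity, and the lattice size and run count are kept small for illustration:

```python
import math
import random

def cover_time(L, rng):
    """Steps for one random walk to visit every site of an LxL torus."""
    n = L * L
    visited = [False] * n
    x = y = 0
    visited[0], seen, steps = True, 1, 0
    while seen < n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % L, (y + dy) % L
        steps += 1
        idx = x * L + y
        if not visited[idx]:
            visited[idx] = True
            seen += 1
    return steps

rng = random.Random(1)
L, runs = 16, 50
N = L * L
tau = sum(cover_time(L, rng) for _ in range(runs)) / runs
C_d = (4 / (4 * math.pi)) * math.tan(math.pi / 4)     # d = 4: C_4 = 1/pi
bound = C_d * N * math.log(N) ** 2
print(f"mean cover time {tau:.0f} vs conjectured bound {bound:.0f}")
```

At such small N the finite-size corrections are still large, so the empirical mean sits above the asymptotic bound; the conjecture concerns the N → ∞ limit.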

Relevance: 20.00%

Abstract:

In this study, the innovation approach is used to estimate the measurement total error associated with power system state estimation. This is required because the power system equations are strongly correlated with one another and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies how much new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
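For reference, here is a sketch of the classical normalised-residual test that the composed-residual test extends, on a toy linear measurement model. The innovation-index machinery itself is the paper's contribution and is not reproduced here; all numbers are synthetic.

```python
# Classical normalized-residual gross error test on z = H x + e.
import numpy as np

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])  # m x n
R = np.diag([2.5e-5] * 4)                 # measurement covariance
x_true = np.array([1.02, 0.98])
rng = np.random.default_rng(2)
z = H @ x_true + rng.normal(0.0, 5e-3, 4)
z[2] += 0.08                              # inject a gross error

W = np.linalg.inv(R)
G = H.T @ W @ H                           # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)   # WLS state estimate
r = z - H @ x_hat                         # residual vector
Omega = R - H @ np.linalg.solve(G, H.T)   # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))

print("normalized residuals:", np.round(r_norm, 2))
print("flagged (> 3):", np.where(r_norm > 3.0)[0])
# The largest normalized residual identifies the suspect measurement:
print("prime suspect:", int(np.argmax(r_norm)))
```

A critical measurement in this framework has a zero diagonal entry in Omega, so its normalized residual is undefined and its error undetectable, which is exactly the zero-II limit described above.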

Relevance: 20.00%

Abstract:

Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the steady laminar flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, and the concentration at the tube wall was regarded as varying with axial position. A computational methodology using global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature.
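Although the GITT solution itself is beyond a short sketch, the classical film-theory estimate of concentration polarization conveys the quantities involved: the wall concentration follows from the permeate flux J and the mass transfer coefficient k via C_w = C_b·exp(J/k). Illustrative numbers:

```python
import math

# Film-theory estimate of concentration polarization at a permeable
# tube wall (classical approximation, not the GITT solution; all
# values are illustrative).
J = 2.0e-5        # m/s, transmembrane permeate flux
k = 4.0e-5        # m/s, mass transfer coefficient in the boundary layer
C_b = 10.0        # kg/m^3, bulk concentration

C_w = C_b * math.exp(J / k)          # wall concentration
polarization = C_w / C_b             # polarization modulus

print(f"wall concentration: {C_w:.2f} kg/m^3")
print(f"polarization modulus C_w/C_b: {polarization:.2f}")
```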

Relevance: 20.00%

Abstract:

The dynamic behavior of composite laminates is very complex because many concurrent phenomena occur during composite laminate failure under impact load. Fiber breakage, delamination, matrix cracking, plastic deformation due to contact, and large displacements are some of the effects that should be considered when a structure made of composite material is impacted by a foreign object. Thus, an investigation of low velocity impact on laminated composite thin disks of epoxy resin reinforced by carbon fiber is presented. The influence of stacking sequence and impact energy was investigated using load-time, displacement-time, and energy-time histories, as well as images from NDE. Indentation test results were compared with dynamic results, verifying the inertia effects when a thin composite laminate is impacted by a foreign object at low velocity. A finite element analysis (FEA) was developed, using Hill's model and material models implemented via a UMAT (User Material Subroutine) in the software ABAQUS, in order to simulate the failure mechanisms under indentation tests.
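The load-time, displacement-time, and energy-time histories mentioned above are related by straightforward data reduction: integrating the contact force once for the impactor velocity, again for displacement, and over displacement for absorbed energy. A sketch on a synthetic force pulse, with assumed impactor parameters:

```python
# Reduction of an impact load-time history into displacement- and
# energy-time histories (trapezoidal integration; synthetic data,
# illustrative impactor parameters).
import numpy as np

m = 2.5          # kg, impactor mass (assumed)
v0 = 2.0         # m/s, impact velocity (assumed)
t = np.linspace(0.0, 5e-3, 200)                   # s
F = 1500.0 * np.sin(np.pi * t / 5e-3) ** 2        # N, synthetic contact force

a = F / m                                         # impactor deceleration
v = v0 - np.concatenate(([0.0], np.cumsum(np.diff(t) * (a[1:] + a[:-1]) / 2)))
x = np.concatenate(([0.0], np.cumsum(np.diff(t) * (v[1:] + v[:-1]) / 2)))
E = np.concatenate(([0.0], np.cumsum(np.diff(x) * (F[1:] + F[:-1]) / 2)))

print(f"peak force      : {F.max():.0f} N")
print(f"max displacement: {x.max() * 1e3:.2f} mm")
print(f"absorbed energy : {E[-1]:.2f} J")
```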

Relevance: 20.00%

Abstract:

With the relentless quest for improved performance driving ever tighter manufacturing tolerances, machine tools are sometimes unable to meet the desired requirements. One option for improving the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the thermal error behaviour of a CNC cylindrical grinding machine during its warm-up period.
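Warm-up thermal drift is often well described by a first-order exponential law; a minimal sketch of fitting such a model to drift measurements (synthetic data; the paper's actual model form is not reproduced here):

```python
# First-order exponential warm-up model fitted to thermal drift data.
import numpy as np
from scipy.optimize import curve_fit

def warmup(t, e_max, tau):
    """Thermal error model E(t) = E_max * (1 - exp(-t / tau))."""
    return e_max * (1.0 - np.exp(-t / tau))

t_min = np.array([0, 10, 20, 30, 45, 60, 90, 120])        # minutes
drift_um = np.array([0.0, 4.1, 7.2, 9.4, 11.6, 13.0, 14.2, 14.8])

(e_max, tau), _ = curve_fit(warmup, t_min, drift_um, p0=[15.0, 30.0])
print(f"E_max = {e_max:.1f} um, tau = {tau:.1f} min")
# The fitted model can then be inverted for compensation: subtract
# warmup(t, e_max, tau) from the commanded position at time t.
```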

Relevance: 20.00%

Abstract:

A methodology for the identification and characterization of coherent structures, commonly known as clusters, is applied to hydrodynamic results of a numerical simulation of the riser of a circulating fluidized bed. The numerical simulation is performed using the MICEFLOW code, which includes IIT's two-fluid hydrodynamic model B. The methodology for cluster characterization is based on the determination of four characteristics: average lifetime, average solids volume fraction, existence time fraction, and frequency of occurrence. Clusters are identified by applying a criterion related to the time-averaged value of the solids volume fraction. A qualitative rather than quantitative analysis is performed, mainly owing to the unavailability of the operational data used in the considered experiments. Concerning the qualitative analysis, the simulation results are in good agreement with the literature. Some quantitative comparisons between predictions and experiments are also presented to emphasize the capability of the modeling procedure regarding the analysis of macroscopic-scale coherent structures.
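A minimal sketch of such a threshold-based identification on a synthetic solids fraction time series; the factor applied to the time average is an assumption, as the paper's exact criterion value is not given here:

```python
# Flag intervals where the local solids volume fraction exceeds a
# threshold tied to its time average, then extract average lifetime,
# existence time fraction and occurrence frequency. Synthetic signal.
import random

random.seed(3)
dt = 1e-3                                     # s, sampling step
eps_s = [max(0.0, random.gauss(0.02, 0.01)) for _ in range(20000)]

mean_eps = sum(eps_s) / len(eps_s)
threshold = 2.0 * mean_eps                    # assumed criterion factor

# Collect contiguous runs above the threshold as individual clusters.
clusters, run = [], 0
for v in eps_s:
    if v > threshold:
        run += 1
    elif run:
        clusters.append(run)
        run = 0
if run:
    clusters.append(run)

life = sum(clusters) / len(clusters) * dt if clusters else 0.0
frac = sum(clusters) * dt / (len(eps_s) * dt)
freq = len(clusters) / (len(eps_s) * dt)
print(f"{len(clusters)} clusters, mean lifetime {life * 1e3:.1f} ms, "
      f"existence fraction {frac:.3f}, frequency {freq:.1f} Hz")
```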

Relevance: 20.00%

Abstract:

An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and also helps production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This time estimate differs drastically from the real process time because the feed rate is not always constant due to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for estimating milling time when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature that can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines. A practical industrial case study was also carried out to validate the method. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's capabilities; furthermore, the greater the MRT, the larger the difference between predicted and real milling times. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates had errors ranging from 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
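A simplified reading of the MRT idea can be sketched by giving each short NC segment a minimum execution time: the naive CAM estimate divides path length by feed rate, while the MRT-aware estimate floors each segment's time at the machine response time (all parameters illustrative):

```python
# Naive CAM time estimate vs an MRT-style estimate in which each short
# NC segment cannot execute faster than the machine response time.
# This per-segment floor is a simplified reading of the MRT idea;
# segment lengths and parameters are illustrative.
feed_mm_min = 5000.0                 # programmed feed rate
mrt_s = 0.012                        # s, assumed machine response time

# Free-form toolpaths are made of many short linear segments:
segments_mm = [0.15, 0.2, 0.1, 0.4, 0.25, 0.3] * 2000

feed_mm_s = feed_mm_min / 60.0
naive = sum(segments_mm) / feed_mm_s
mrt_aware = sum(max(seg / feed_mm_s, mrt_s) for seg in segments_mm)

print(f"CAM estimate      : {naive:6.1f} s")
print(f"MRT-aware estimate: {mrt_aware:6.1f} s "
      f"({(mrt_aware / naive - 1) * 100:.0f}% longer)")
```

With short segments like these, the floored estimate is several times the naive one, which is consistent with the order-of-magnitude CAM errors reported above.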