913 results for numerical calculation
Abstract:
Background: There are several numerical investigations of bone remodelling after total hip arthroplasty (THA) based on finite element analysis (FEA). For such computations, certain boundary conditions have to be defined. Authors typically choose at most three static load situations, usually taken from the gait cycle, because this is the most frequent dynamic activity of a patient after THA. Materials and methods: The numerical study presented here investigates whether it is sufficient to consider only one static load situation of the gait cycle in the FE calculation of bone remodelling. For this purpose, five different loading cases were examined in order to determine their influence on the change in the physiological load distribution within the femur and on the resulting strain-adaptive bone remodelling. First, four different static loading cases at 25%, 45%, 65% and 85% of the gait cycle were examined, and then the whole gait cycle was applied as a loading regime in order to include all the different loadings of the cycle in the simulation. Results: The computed evolution of the apparent bone density (ABD) and the calculated mass losses in the periprosthetic femur show that the simulation results depend strongly on the chosen boundary conditions. Conclusion: These numerical investigations show that a single static load situation is insufficient to represent the whole gait cycle and causes severe deviations in the FE calculation of bone remodelling. Accompanying clinical examinations are nevertheless necessary to calibrate the bone adaptation law and thus to validate the FE calculations.
Abstract:
We study a reaction–diffusion mathematical model for the evolution of atherosclerosis as an inflammation process by combining analytical tools with computer-intensive numerical calculations. The computational work involved the calculation of more than sixty thousand solutions of the full reaction–diffusion system and led to the complete characterisation of the ω-limit for every initial condition. Qualitative properties of the solution are rigorously proved, some of them hinted at by the numerical study.
Abstract:
Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that, even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, regardless of whether the original problem was presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research only uses nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance remain unclear to date, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to foster performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provided evidence that symbolic approximation might be equally and in some cases even better suited to generate predictions and to foster more formal math skills, independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and among both approximate tasks it was the better predictor of math achievement at the end of first grade.
In part, the strong connection seems to come about through mediation by ordinal skills. In two further experiments, we tested the suitability of both approximation formats to induce an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task. Nonsymbolic approximation, on the other hand, had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age. However, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, and that beyond this they have their own specific value with regard to didactic and educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot be disentangled easily in first graders' numerical understanding, and that children's SES also influences the interrelations between the different abilities tested here.
Abstract:
A three-dimensional Direct Finite Element procedure is presented here that takes into account most of the factors affecting the interaction problem of the dam-water-foundation system, whilst keeping the computational cost at a reasonable level by introducing some simplifying hypotheses. A truncated domain is defined, and the dynamic behaviour of the system is treated as a wave-scattering problem in which the presence of the dam perturbs an original free-field system. The truncated boundaries of the rock foundation are enclosed by a set of free-field one-dimensional and two-dimensional systems which transmit the effective forces to the main model and apply absorbing viscous boundaries to ensure radiation damping. The water domain is treated as an added mass moving with the dam. A strategy is proposed to keep the viscous dampers at the boundaries unloaded during the initial phases of the analysis, when the static loads are initialised, and thus to avoid spurious displacements. Particular attention is given to the nonlinear behaviour of the rock foundation, with concentrated plasticity along the natural discontinuities of the rock mass, immersed in an otherwise linear elastic medium with Rayleigh damping. The entire procedure is implemented in the commercial software Abaqus®, whose base code is enriched with specific user subroutines where needed. All the extra coding is attached to the Thesis and tested against analytical results and simple examples. Possible rock-wedge instabilities induced by intense ground motion, which are not easily investigated within a comprehensive model of the dam-water-foundation system, are treated separately with a simplified decoupled dynamic approach derived from the classical Newmark method, integrated with an FE calculation of the dam thrust on the wedges during the earthquake.
Both the described approaches are applied to the case study of the Ridracoli arch-gravity dam (Italy) in order to investigate its seismic response to the Maximum Credible Earthquake (MCE) in a full reservoir condition.
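The decoupled wedge analysis mentioned above builds on the classical Newmark rigid-block idea: the block accumulates permanent displacement whenever the driving acceleration exceeds a yield threshold, and the relative velocity is integrated until it returns to zero. A minimal one-way-sliding sketch (the function name, pulse input and yield value are illustrative assumptions, not the thesis implementation):

```python
def newmark_sliding_displacement(accel, dt, a_yield):
    """Classical Newmark rigid-block analysis, one-way sliding.

    The block starts to slide when the driving acceleration exceeds the
    yield acceleration a_yield; while sliding, the relative velocity is
    integrated until it drops back to zero, accumulating displacement.
    """
    v = 0.0  # relative sliding velocity
    d = 0.0  # accumulated permanent displacement
    for a in accel:
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt  # net acceleration while sliding
            if v < 0.0:
                v = 0.0              # block re-sticks
            d += v * dt
    return d

# Rectangular pulse of 2.0 m/s^2 for 1 s against a 1.0 m/s^2 yield level;
# the analytic rigid-block displacement for this pulse is 1.0 m.
dt = 1e-4
record = [2.0] * 10000 + [0.0] * 20000
print(newmark_sliding_displacement(record, dt, 1.0))  # ~1.0
```

The same loop can be fed a digitised accelerogram directly; coupling to the FE-computed dam thrust, as in the thesis, would add a time-varying driving term.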
Abstract:
In this thesis, we perform a next-to-leading-order calculation of the impact of primordial magnetic fields (PMFs) on the evolution of scalar cosmological perturbations and the cosmic microwave background (CMB) anisotropy. Magnetic fields are everywhere in the Universe at all scales probed so far, but their origin is still under debate. The current standard picture is that they originate from the amplification of initial seed fields, which could have been generated as PMFs in the early Universe. The most robust way to test their presence and constrain their features is to study how they impact key cosmological observables, in particular the CMB anisotropies. The standard way to model a PMF is to consider its contribution (quadratic in the magnetic field) on the same footing as first-order perturbations, under the assumptions of ideal magnetohydrodynamics and compensated initial conditions. In the perspective of the ever-increasing precision of CMB anisotropy measurements and of possibly unaccounted-for non-linear effects, in this thesis we study effects which go beyond the standard assumptions. We study the impact of PMFs on cosmological perturbations and CMB anisotropies with adiabatic initial conditions, the effect of Alfvén waves on the speed of sound of perturbations, and the possible non-linear behaviour of the baryon overdensity for PMFs with a blue spectral index, by modifying and improving the publicly available Einstein-Boltzmann code SONG, which was written to take into account all second-order contributions in cosmological perturbation theory. One of the objectives of this thesis is to lay the groundwork for verifying, by an independent and fully numerical analysis, the possibility that PMFs affect recombination and the Hubble constant.
Abstract:
Below-cloud scavenging processes have been investigated by means of a numerical simulation, local atmospheric conditions and particulate matter (PM) concentrations at different sites in Germany. The below-cloud scavenging model has been coupled with a bulk particulate matter counter TSI (Trust Portacounter) dataset, consisting of the variability prediction of the particulate air concentrations during chosen rain events. The TSI samples and meteorological parameters were obtained during three winter campaigns: at Deuselbach, March 1994, consisting of three different events; Sylt, April 1994; and Freiburg, March 1995. The results show a good agreement between modeled and observed air concentrations, emphasizing the quality of the conceptual model used in the below-cloud scavenging numerical modeling. The comparisons between modeled and observed data also yielded squared Pearson correlation coefficients above 0.7 and statistically significant, except for the Freiburg campaign event. The differences between the numerical simulations and the observed dataset are explained by wind direction changes and, perhaps, by the absence of mass advection terms in the model. These results validate previous works based on the same conceptual model.
Abstract:
Mixing layers are present in very different types of physical situations such as atmospheric flows, aerodynamics and combustion. It is, therefore, a well-researched subject, but there are aspects that require further study. Here the instability of two- and three-dimensional perturbations in the compressible mixing layer was investigated by numerical simulation. In the numerical code, the derivatives were discretized using high-order compact finite-difference schemes. A stretching in the normal direction was implemented with the twofold objective of reducing the sound waves generated by the shear region and improving the resolution near the center. The compact schemes were modified to work with non-uniform grids. Numerical tests started with an analysis of the growth rate in the linear regime to verify the code implementation. Tests were also performed in the non-linear regime, and it was possible to reproduce the vortex roll-up and pairing in both two- and three-dimensional situations. An amplification-rate analysis was also performed for the secondary instability of this flow. It was found that, for essentially incompressible flow, maximum growth rates occurred for a spanwise wavelength of approximately 2/3 of the streamwise spacing of the vortices. This result demonstrates the applicability of the theory developed by Pierrehumbert and Widnall. Compressibility effects were then considered, and the maximum growth rates obtained for relatively high Mach numbers (typically under 0.8) were also presented.
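The abstract mentions compact finite-difference schemes modified for non-uniform grids. As a baseline illustration, here is the standard fourth-order Padé compact scheme for the first derivative on a uniform periodic grid (a sketch, not the thesis code; a dense linear solve stands in for the usual cyclic tridiagonal algorithm):

```python
import numpy as np

def compact_first_derivative(f, h):
    """Fourth-order Pade compact scheme on a uniform periodic grid:
       (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4 h)
    """
    n = len(f)
    A = np.eye(n)
    idx = np.arange(n)
    A[idx, (idx - 1) % n] = 0.25  # cyclic tridiagonal left-hand side
    A[idx, (idx + 1) % n] = 0.25
    rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / (4.0 * h)
    return np.linalg.solve(A, rhs)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact_first_derivative(np.sin(x), h) - np.cos(x)))
print(err)  # ~1e-6, far below the ~1e-3 error of a 2nd-order central difference here
```

Extending such schemes to stretched grids, as done in the thesis, changes the coefficients point by point so that the formal order is retained on the non-uniform spacing.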
Abstract:
In recent years, there has been increasing interest in understanding the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium, ICM) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations that solve the double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that subsonic and supersonic CGL-MHD turbulence shows only small differences from the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines.
The results indicate that in some cases the instabilities significantly increase the anisotropy of the fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been generally believed.
Abstract:
Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimension-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.
Abstract:
This paper proposes an architecture for machining process and production monitoring to be applied in machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the utilization of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
Abstract:
In this paper we discuss the use of photonic crystal fibers (PCFs) as discrete devices for simultaneous wideband dispersion compensation and Raman amplification. The performance of the PCFs in terms of gain, ripple, optical signal-to-noise ratio (OSNR) and required fiber length for complete dispersion compensation is compared with conventional dispersion compensating fibers (DCFs). The main goal is to determine the minimum PCF loss beyond which its performance surpasses a state-of-the-art DCF and justifies practical use in telecommunication systems. (C) 2009 Optical Society of America
Abstract:
Shot peening is a cold-working mechanical process in which a shot stream is propelled against a component surface. Its purpose is to introduce compressive residual stresses in component surfaces to increase fatigue resistance. The process is widely applied to springs because of their cyclic load requirements. This paper presents a numerical model of the shot peening process using the finite element method. The results are compared with experimental measurements of the residual stresses, obtained by the X-ray diffraction technique, in leaf springs submitted to this process. Furthermore, the results are compared with empirical and numerical correlations developed by other authors.
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last mu steps (the memory), producing a deterministic, partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by the walker with memory mu = 2, as well as the joint distribution of transient and period. This result enables us to explain the abrupt change in the exploratory behavior between the cases mu = 1 (memoryless walker, driven by extreme-value statistics) and mu = 2 (walker with memory, driven by combinatorial statistics). In the mu = 1 case, the mean number of newly visited points in the thermodynamic limit (N >> 1) is just <n> = e = 2.72..., while in the mu = 2 case the mean number <n> of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with mu = 2 and the random map (uncorrelated back-and-forth distances) with mu = 0, and to explain the abrupt change between the probabilities for null transient time and subsequent ones.
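The mu = 1 result is easy to check by direct simulation: draw a symmetric matrix of iid distances (one random link instance), repeatedly move to the nearest point other than the current one, and stop when the walker is trapped by a pair of mutually nearest neighbours. A minimal Monte Carlo sketch (the point and trial counts are arbitrary illustrative choices):

```python
import numpy as np

def tourist_walk_n(n_points, rng):
    """One tourist walk with memory mu = 1 on a random link instance.

    Returns the number of distinct points explored before the walker is
    trapped oscillating between two mutually nearest neighbours.
    """
    # symmetric iid distance matrix (random link model); no self-moves
    d = rng.random((n_points, n_points))
    d = np.triu(d, 1)
    d = d + d.T
    np.fill_diagonal(d, np.inf)
    prev, cur = -1, 0
    visited = {0}
    while True:
        nxt = int(np.argmin(d[cur]))  # nearest point other than the current one
        if nxt == prev:               # returned to the previous point: trapped
            return len(visited)
        visited.add(nxt)
        prev, cur = cur, nxt

rng = np.random.default_rng(12345)
mean_n = np.mean([tourist_walk_n(300, rng) for _ in range(1500)])
print(mean_n)  # close to e = 2.718... for large N
```

Replacing the exclusion rule by "the last two visited points" gives the mu = 2 walker, whose mean exploration instead grows like N^(1/2) as stated above.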
Abstract:
The Perseus galaxy cluster is known to present multiple, misaligned pairs of cavities seen in X-rays, as well as twisted kiloparsec-scale jets at radio wavelengths; both morphologies suggest that the active galactic nucleus (AGN) jet is subject to precession. In this work, we performed three-dimensional hydrodynamical simulations of the interaction between a precessing AGN jet and the warm intracluster medium plasma, whose dynamics are coupled to a Navarro-Frenk-White dark-matter gravitational potential. The AGN jet inflates cavities that become buoyantly unstable and rise out of the cluster core. We found that under certain circumstances precession can give rise to multiple pairs of bubbles. For the physical conditions in the Perseus cluster, multiple pairs of bubbles are obtained for a jet precession opening angle >40 degrees acting for at least three precession periods, reproducing both the radio and X-ray maps well. Based on such conditions, and assuming that the Bardeen-Petterson effect is dominant, we studied the evolution of the precession opening angle of this system. We were able to constrain the ratio between the accretion disk and black hole angular momenta to 0.7-1.4, the present precession angle to 30-40 degrees, and the approximate age of the inflated bubbles to 100-150 Myr.
Abstract:
Context. Cluster properties can be studied more distinctly in pairs of clusters, where we expect the effects of interactions to be strong. Aims. We here discuss the properties of the double cluster Abell 1758 at a redshift z ≈ 0.279. These clusters show strong evidence for merging. Methods. We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 x 1.16 deg^2, or 16.1 x 17.6 Mpc^2. Our X-ray analysis is based on archive XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results. The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands, and the faint end of the GLF is not very well defined in g'. The GLF computed from our numerical simulations assuming a halo mass-luminosity relation agrees with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions.
Comparing observational results to those derived from numerical simulations, we could mimic the most prominent features present in the metallicity map and propose an explanation for the dynamical history of the cluster. We found in particular that in the metal-rich elongated regions of the North cluster, winds had been more efficient than ram-pressure stripping in transporting metal-enriched gas to the outskirts. Conclusions. We confirm the merging structure of the North and South clusters, both at optical and X-ray wavelengths.
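The Schechter fits mentioned above use the standard form n(l) dl = phi* l^alpha e^(-l) dl with l = L/L*. A quick numerical sanity check (the parameter values below are illustrative, not the fitted Abell 1758 values) is that the total luminosity density integrates to phi* L* Gamma(alpha + 2):

```python
import math
import numpy as np

def schechter(l, phi_star, alpha):
    """Schechter luminosity function per unit l, with l = L / L*."""
    return phi_star * l**alpha * np.exp(-l)

phi_star, alpha, l_star = 1.0, -1.2, 1.0  # illustrative values only

# Total luminosity density: integrate L * n(L) over L.  Substituting
# u = ln l gives an integrand l^(alpha + 2) e^(-l), which stays regular
# at l -> 0 even for the divergent faint-end slopes alpha < -1.
u = np.linspace(np.log(1e-9), np.log(60.0), 20001)
l = np.exp(u)
integrand = l**2 * schechter(l, phi_star, alpha)  # extra factor l from d(ln l)
lum_density = l_star * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
print(lum_density)  # phi* L* Gamma(alpha + 2) = Gamma(0.8) ≈ 1.1642
```

The same substitution is the usual way to tabulate Schechter integrals when the number density itself diverges at the faint end.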