992 results for iterative method
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, for which we obtain a rigorous error estimate. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. © 2009 Elsevier Inc. All rights reserved.
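To fix ideas, the lagged-operators construction can be written schematically as follows (generic notation assumed here, not necessarily the paper's). With the spreading and interpolation operators frozen at the current configuration, \(\mathcal{S}^n = \mathcal{S}[X^n]\) and \(\mathcal{J}^n = \mathcal{J}[X^n]\), and a linear interfacial force \(F(X) = AX\), the interface update

\[
X^{n+1} = X^n + \Delta t\,\mathcal{J}^n u^{n+1}, \qquad
u^{n+1} = \mathcal{L}\big(\mathcal{S}^n A X^{n+1}\big) + \text{explicit terms},
\]

where \(\mathcal{L}\) denotes the (linear) discrete fluid solve, yields the dense linear system

\[
\big(I - \Delta t\,\mathcal{J}^n \mathcal{L}\,\mathcal{S}^n A\big)\, X^{n+1}
= X^n + \Delta t\,\mathcal{J}^n \cdot (\text{explicit terms}),
\]

whose efficient approximation and iterative solution are the subject of the paper.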
Abstract:
The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton's method and can be applied to a wider variety of problems. It also converges when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, a simple algorithm that delivers high-quality solutions is always preferable. In this study, a series of step-size parameters in the subgradient update is examined. The performance is compared for a general piecewise function and a specific p-median problem. We examine how the quality of the solution changes under five forms of the step-size parameter.
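Since the abstract does not list the five step-size forms, the sketch below uses three classical rules as illustrative stand-ins (a constant step, a divergent-series rule, and a Polyak-like rule); the objective and all parameter values are likewise assumptions for demonstration only.

```python
import numpy as np

def subgradient_descent(f, subgrad, x0, step_rule, n_iter=500):
    """Minimize a convex, possibly non-differentiable f by subgradient steps.
    step_rule(k, g) returns the step size for iteration k and subgradient g.
    The best iterate is tracked because the method is not monotone."""
    x, best_x, best_f = x0.copy(), x0.copy(), f(x0)
    for k in range(1, n_iter + 1):
        g = subgrad(x)
        x = x - step_rule(k, g) * g
        if f(x) < best_f:
            best_f, best_x = f(x), x.copy()
    return best_x, best_f

# Illustrative piecewise-linear objective: f(x) = max_i|x_i| + sum_i|x_i|.
f = lambda x: np.max(np.abs(x)) + np.sum(np.abs(x))
def subgrad(x):
    g = np.sign(x)                      # subgradient of the l1 term
    i = np.argmax(np.abs(x))            # active piece of the max term
    g[i] += np.sign(x[i])
    return g

x0 = np.array([3.0, -2.0, 1.0])
rules = [("constant",    lambda k, g: 0.05),
         ("1/k series",  lambda k, g: 1.0 / k),
         ("Polyak-like", lambda k, g: 0.5 / (g @ g + 1e-12))]
for name, rule in rules:
    _, best = subgradient_descent(f, subgrad, x0, rule)
    print(f"{name:>12s}: best objective value {best:.4f}")
```

The interplay visible here, a rule that shrinks too fast stalls early, while a constant step oscillates near the optimum, is exactly what a comparison of step-size forms quantifies.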
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
We propose a method for accelerating iterative algorithms for solving symmetric linear complementarity problems. The method consists of performing a one-dimensional optimization along the direction generated by a splitting method, even when it is not a descent direction. We give strong convergence proofs and present numerical experiments that justify using this acceleration.
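The following sketch illustrates the idea under stated assumptions (projected Jacobi as the splitting, an exact quadratic line search along the resulting direction, and a fallback to the plain splitting step when the search rejects the direction); it is a conceptual rendering, not the authors' exact scheme.

```python
import numpy as np

def accelerated_splitting_lcp(M, q, x0, n_iter=200, tol=1e-10):
    """Projected-Jacobi splitting for the symmetric LCP
        x >= 0,  Mx + q >= 0,  x^T (Mx + q) = 0,
    accelerated by a one-dimensional minimization of the equivalent
    quadratic f(x) = 0.5 x^T M x + q^T x along the splitting direction."""
    d_inv = 1.0 / np.diag(M)
    x = np.maximum(x0, 0.0)
    for _ in range(n_iter):
        g = M @ x + q                              # gradient of f at x
        x_split = np.maximum(x - d_inv * g, 0.0)   # splitting (proj. Jacobi) step
        d = x_split - x                            # direction (may not descend)
        dMd = d @ (M @ d)
        if dMd <= tol:
            break
        alpha = -(g @ d) / dMd                     # exact minimizer along x + a*d
        neg = d < 0                                # largest feasible step length
        a_max = np.min(x[neg] / -d[neg]) if neg.any() else np.inf
        alpha = np.clip(alpha, 0.0, a_max)
        x = x_split if alpha == 0.0 else x + alpha * d
        if np.linalg.norm(np.minimum(x, M @ x + q)) < 1e-8:
            break                                  # complementarity satisfied
    return x

# Small random test with a symmetric positive definite M.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + np.eye(5)
q = rng.standard_normal(5)
x = accelerated_splitting_lcp(M, q, np.zeros(5))
print("complementarity residual:", np.minimum(x, M @ x + q))
```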
Abstract:
A fourth-order numerical method for solving the Navier-Stokes equations in streamfunction/vorticity formulation on a two-dimensional non-uniform orthogonal grid has been tested on the fluid flow in a constricted symmetric channel. The family of grids is generated algebraically using a conformal transformation followed by a non-uniform stretching of the mesh cells, in which the shape of the channel boundary can vary from a smooth constriction to one which possesses a very sharp but smooth corner. The generality of the grids allows the use of long channels upstream and downstream as well as a refined grid near the sharp corner. Derivatives in the governing equations are replaced by fourth-order central differences, and the vorticity is eliminated, either before or after the discretization, to form a wide difference molecule for the streamfunction. Extra boundary conditions, necessary for wide-molecule methods, are supplied by a procedure proposed by Henshaw et al. The ensuing set of non-linear equations is solved using Newton iteration. Results have been obtained for Reynolds numbers up to 250 for three constrictions, the first being smooth, the second having a moderately sharp corner and the third a very sharp corner. Estimates of the error incurred show that the results are very accurate and substantially better than those of the corresponding second-order method. The observed order of the method has been shown to be close to four, demonstrating that the method is genuinely fourth-order. © 1977 John Wiley & Sons, Ltd.
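For reference, the standard fourth-order central differences on a uniform grid with spacing \(h\) (to which the non-uniform physical grid is mapped) are

\[
f'_i \approx \frac{-f_{i+2} + 8f_{i+1} - 8f_{i-1} + f_{i-2}}{12h}, \qquad
f''_i \approx \frac{-f_{i+2} + 16f_{i+1} - 30f_i + 16f_{i-1} - f_{i-2}}{12h^2},
\]

both with \(O(h^4)\) truncation error. Eliminating the vorticity from such five-point stencils is what produces the wide difference molecule for the streamfunction mentioned above. (Textbook formulas shown for orientation; the paper's operators act on the transformed grid.)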
Abstract:
In this work, the analysis of electroosmotic pumping mechanisms in microchannels is performed through the solution of the Poisson-Boltzmann and Navier-Stokes equations by the Finite Element Method. This approach is combined with a Newton-Raphson iterative scheme, allowing a full treatment of the non-linear Poisson-Boltzmann source term, which is normally approximated by linearizations in other methods.
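To make the distinction concrete, for a symmetric z:z electrolyte the Poisson-Boltzmann equation reads (generic notation assumed here, not necessarily the paper's)

\[
\nabla^2 \psi = \frac{2 z e n_0}{\varepsilon} \sinh\!\left(\frac{z e \psi}{k_B T}\right),
\]

whose source term is often replaced by the Debye-Hückel linearization \(\sinh u \approx u\). A full Newton-Raphson treatment instead solves, at each iteration \(k\),

\[
\nabla^2 \delta - \frac{2 z^2 e^2 n_0}{\varepsilon k_B T}\cosh\!\left(\frac{z e \psi_k}{k_B T}\right)\delta
= -\nabla^2 \psi_k + \frac{2 z e n_0}{\varepsilon}\sinh\!\left(\frac{z e \psi_k}{k_B T}\right),
\qquad \psi_{k+1} = \psi_k + \delta,
\]

a linear problem per step that is directly amenable to the Finite Element Method.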
Abstract:
The importance of mechanical aspects related to cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its lack may be related to the formation of cancer, which shows a higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are really essential and which have a marginal effect. The development of a systematic method to reconstruct the elements involved in the perception of mechanical aspects by the cell may substantially accelerate the validation of these models. This work proposes the development of a routine capable of reconstructing the topology of focal adhesions and of the actomyosin portion of the cytoskeleton from the displacement field generated by the cell on a flexible substrate. Another way to think of this problem is to develop an algorithm that reconstructs the forces applied by the cell from measurements of the substrate displacement, which characterizes it as an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is suitable for finding a solution. TOM consists of the iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to experimentally obtain the substrate displacement is through Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Along with systematically generating the distributions of focal adhesions and actin-myosin for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented through a two-dimensional linear Finite Element Method. Actin contraction is modeled as an initial stress of the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data obtained from experiments regarding cytoskeletal prestress and micropatterning, comparing the numerical results to the experimental ones.
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort has a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work focuses on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) to retrieve HNO3 spectra observed by means of the GBMS. I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM, however, providing the additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by the GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step has been the validation of GBMS measurements of tropospheric opacity, which is a necessary tool in the calibration of any GBMS spectra. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV) since, in the spectral region observed by the GBMS, the tropospheric opacity is almost entirely due to water vapor absorption. In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, GBMS HNO3 retrievals have been compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare Aura MLS observations with GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W), during the polar winter of 2008/09. A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out from the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels. Results show that, except for measurements carried out at Thule, ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
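For orientation, the Optimal Estimation Method of Rodgers (2000) referenced above retrieves the profile by combining the measurement vector \(y\) with an a priori profile \(x_a\); in its standard Gauss-Newton form (textbook notation, not necessarily the exact implementation developed in the thesis) the iteration reads

\[
x_{i+1} = x_a + \left(K_i^T S_\varepsilon^{-1} K_i + S_a^{-1}\right)^{-1} K_i^T S_\varepsilon^{-1}
\left[\, y - F(x_i) + K_i (x_i - x_a) \,\right],
\]

where \(F\) is the forward model, \(K_i = \partial F/\partial x\,\big|_{x_i}\) its Jacobian, and \(S_\varepsilon\), \(S_a\) the measurement-noise and a priori covariance matrices. The associated averaging kernel \(A = (K^T S_\varepsilon^{-1} K + S_a^{-1})^{-1} K^T S_\varepsilon^{-1} K\) provides the "additional information needed to better characterize the retrievals" mentioned above.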
Abstract:
This Ph.D. thesis focuses on iterative regularization methods for linear and nonlinear ill-posed problems. Regarding linear problems, three new stopping rules for the Conjugate Gradient method applied to the normal equations are proposed and tested in many numerical simulations, including some tomographic image reconstruction problems. Regarding nonlinear problems, convergence and convergence rate results are provided for a Newton-type method with a modified version of the Landweber iteration as an inner iteration, in a Banach space setting.
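To make the setting concrete, here is a minimal CGLS loop (Conjugate Gradients applied to the normal equations \(A^T A x = A^T b\)) stopped by the classical discrepancy principle; this standard rule is a stand-in for the three new stopping rules proposed in the thesis, which are not reproduced here.

```python
import numpy as np

def cgls(A, b, delta, tau=1.01, max_iter=200):
    """CGLS for min ||Ax - b||, stopped by the discrepancy principle:
    iterate while ||A x_k - b|| > tau * delta, where delta bounds the
    noise level ||b - b_exact|| (stopping too late fits the noise)."""
    x = np.zeros(A.shape[1])
    r = b.copy()                      # residual b - A x
    s = A.T @ r                       # gradient of the normal equations
    p = s.copy()
    s_norm2 = s @ s
    for k in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:
            break                     # discrepancy principle satisfied
        Ap = A @ p
        alpha = s_norm2 / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        s = A.T @ r
        s_norm2_new = s @ s
        p = s + (s_norm2_new / s_norm2) * p
        s_norm2 = s_norm2_new
    return x, k

# Ill-posed toy problem: a discretized Gaussian blur plus noise.
rng = np.random.default_rng(1)
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :])**2) / n
x_true = np.sin(2 * np.pi * t)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
x_rec, its = cgls(A, b, delta=np.linalg.norm(noise))
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"stopped after {its} iterations, relative error {rel_err:.3f}")
```

For CG-type methods the iteration index itself acts as the regularization parameter, which is why the choice of stopping rule is the central design question.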
Abstract:
PURPOSE To investigate whether the effects of hybrid iterative reconstruction (HIR) on coronary artery calcium (CAC) measurements using the Agatston score lead to changes in the assignment of patients to cardiovascular risk groups compared to filtered back projection (FBP). MATERIALS AND METHODS 68 patients (mean age 61.5 years; 48 male, 20 female) underwent prospectively ECG-gated, non-enhanced, cardiac 256-MSCT for coronary calcium scoring. Scanning parameters were as follows: tube voltage, 120 kV; mean tube current-time product, 63.67 mAs (50 - 150 mAs); collimation, 2 × 128 × 0.625 mm. Images were reconstructed with FBP and with HIR at all levels (L1 to L7). Two independent readers measured Agatston scores of all reconstructions and assigned patients to cardiovascular risk groups. Scores of HIR and FBP reconstructions were correlated (Spearman). Interobserver agreement and variability were assessed with κ-statistics and Bland-Altman plots. RESULTS Agatston scores of HIR reconstructions were closely correlated with FBP reconstructions (L1, R = 0.9996; L2, R = 0.9995; L3, R = 0.9991; L4, R = 0.986; L5, R = 0.9986; L6, R = 0.9987; and L7, R = 0.9986). In comparison to FBP, HIR reduced Agatston scores to between 97 % (L1) and 87.4 % (L7) of the FBP values. Using HIR iteration levels L1 - L3, all patients were assigned to the same risk groups as after FBP reconstruction. In 5.4 % of patients, the risk group after HIR at the maximum iteration level differed from the group after FBP reconstruction. CONCLUSION There was an excellent correlation of Agatston scores after HIR and FBP, with identical risk group assignment at levels 1 - 3 for all patients. Hence it appears that the application of HIR in routine calcium scoring does not entail any disadvantages. Future studies are needed to demonstrate whether HIR is a reliable method for reducing radiation dose in coronary calcium scoring.
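For readers unfamiliar with the score itself, the sketch below computes a simplified per-slice Agatston score following its classical definition (130 HU threshold, density weights 1-4); the minimum lesion area and the risk-group cut-offs are common conventions that vary between implementations, and real scoring also accounts for slice spacing.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
    """Simplified Agatston score for one axial CT slice: lesions are
    connected regions >= 130 HU, each contributing area (mm^2) times a
    density weight set by the lesion's peak HU."""
    labels, n = ndimage.label(hu_slice >= 130)
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        area = region.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                       # ignore sub-threshold specks (noise)
        peak = hu_slice[region].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score

def risk_group(total_score):
    """One common categorization; exact cut-offs vary between studies."""
    for limit, name in [(0, "no calcification"), (10, "minimal"),
                        (100, "mild"), (400, "moderate")]:
        if total_score <= limit:
            return name
    return "severe"

# Toy example: one dense 3x3-pixel lesion on a slice with 0.25 mm^2 pixels.
slice_hu = np.full((64, 64), -50.0)
slice_hu[30:33, 30:33] = 450.0
total = agatston_slice_score(slice_hu, pixel_area_mm2=0.25)
print(total, risk_group(total))            # area 2.25 mm^2 * weight 4 = 9.0
```

The weight thresholds explain the study's finding: because HIR smooths peak HU values, lesions can slip into a lower weight class, systematically reducing the score relative to FBP.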
Abstract:
Researchers in ecology commonly use multivariate analyses (e.g. redundancy analysis, canonical correspondence analysis, Mantel correlation, multivariate analysis of variance) to interpret patterns in biological data and relate these patterns to environmental predictors. There has been, however, little recognition of the errors associated with biological data and the influence that these may have on predictions derived from ecological hypotheses. We present a permutational method that assesses the effects of taxonomic uncertainty on the multivariate analyses typically used in the analysis of ecological data. The procedure is based on iterative randomizations that randomly re-assign unidentified species in each site to any of the other species found in the remaining sites. After each re-assignment of species identities, the multivariate method in question is run and a parameter of interest is calculated. Consequently, one can estimate a range of plausible values for the parameter of interest under different scenarios of re-assigned species identities. We demonstrate the use of our approach in the calculation of two parameters with an example involving tropical tree species from western Amazonia: 1) the Mantel correlation between compositional similarity and environmental distances between pairs of sites; and 2) the variance explained by environmental predictors in redundancy analysis (RDA). We also investigated the effects of increasing taxonomic uncertainty (i.e. the number of unidentified species) and of the taxonomic resolution at which morphospecies are determined (genus resolution, family resolution, or fully undetermined species) on the uncertainty range of these parameters. To achieve this, we performed simulations on a tree dataset from southern Mexico by randomly selecting a portion of the species contained in the dataset and classifying them as unidentified at each level of decreasing taxonomic resolution. An analysis of covariance showed that both taxonomic uncertainty and resolution significantly influence the uncertainty range of the resulting parameters. Increasing taxonomic uncertainty expands the uncertainty of the parameters estimated in both the Mantel test and RDA. The effects of increasing taxonomic resolution, however, are not as evident. The method presented in this study improves on traditional approaches to studying compositional change in ecological communities by accounting for some of the uncertainty inherent to biological data. We hope that this approach can be routinely used to estimate any parameter of interest obtained from compositional data tables when faced with taxonomic uncertainty.
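A schematic version of the randomization loop might look as follows; the statistic is a simple placeholder (the paper uses the Mantel correlation and RDA explained variance), and the table, re-assignment details and percentile summary are illustrative assumptions.

```python
import numpy as np

def uncertainty_range(table, unidentified, statistic, n_rand=999, rng=None):
    """Schematic permutational procedure: each randomization re-assigns
    every unidentified record in a site to one of the species found in
    the remaining sites, recomputes the statistic of interest, and the
    spread of the values quantifies the effect of taxonomic uncertainty."""
    if rng is None:
        rng = np.random.default_rng()
    identified = [j for j in range(table.shape[1]) if j not in set(unidentified)]
    values = []
    for _ in range(n_rand):
        t = table[:, identified].astype(float)   # fresh copy per randomization
        for i in range(table.shape[0]):          # sites
            for j in unidentified:               # unidentified morphospecies
                if table[i, j] > 0:
                    others = np.delete(np.arange(table.shape[0]), i)
                    present = np.flatnonzero(t[others].sum(axis=0) > 0)
                    if present.size == 0:        # degenerate case: allow any
                        present = np.arange(t.shape[1])
                    t[i, rng.choice(present)] += table[i, j]
        values.append(statistic(t))
    return np.percentile(values, [2.5, 97.5])

# Placeholder statistic: Shannon-type diversity of the pooled table.
def diversity(t):
    p = t.sum(axis=0) / t.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(42)
table = rng.integers(0, 5, size=(6, 10))         # 6 sites x 10 species
low, high = uncertainty_range(table, unidentified=[8, 9],
                              statistic=diversity, rng=rng)
print(f"95% plausible range for the parameter: [{low:.3f}, {high:.3f}]")
```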
Abstract:
Finite element hp-adaptivity is a technology that allows for very accurate numerical solutions. When applied to open-region problems such as radar cross section prediction or antenna analysis, a mesh truncation method needs to be used. This paper compares the following mesh truncation methods in the context of hp-adaptive methods: Infinite Elements, Perfectly Matched Layers and an iterative boundary-element-based methodology. These methods have been selected because they are exact at the continuous level (a desirable feature required by the extreme accuracy delivered by the hp-adaptive strategy) and they are easy to integrate with the logic of hp-adaptivity. The comparison is mainly based on the number of degrees of freedom needed for each method to achieve a given level of accuracy. Computational times are also included. Two-dimensional examples are used, but the conclusions extrapolate directly to the three-dimensional case.
Abstract:
The Monte Carlo (MC) method can accurately compute the dose produced by medical linear accelerators. However, these calculations require a reliable description of the electron and/or photon beams delivering the dose, the phase space (PHSP), which is not usually available. A method is presented to derive a phase-space model from reference measurements that does not rely heavily on a detailed model of the accelerator head. The iterative optimization process extracts the characteristics of the particle beams that best explain the reference dose measurements in water and air, given a set of constraints.
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for the development of iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered to be the reference values as the mesh quality is much better. The entire 3D model comprises more than 35000 dofs, the biggest single region being a soil region with 21168 dofs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
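Although the original implementation is in Fortran 90, the coefficient-reuse idea behind a SCIA-type scheme can be sketched compactly in Python: element/field-point pairs that coincide up to a rigid translation share their influence coefficient, so a cache keyed on the relative geometry avoids recomputing and re-storing it. The signature and the toy kernel below are illustrative stand-ins, not the algorithm of the paper.

```python
import numpy as np

def relative_signature(src_nodes, fld_point, tol=1e-9):
    """Key describing a source element / field point pair up to a rigid
    translation: equally shaped, equally oriented pairs with the same
    relative position share their influence coefficient."""
    rel = np.asarray(src_nodes) - np.asarray(fld_point)  # translation-invariant
    return tuple(np.round(rel / tol).astype(np.int64).ravel())

class CoefficientCache:
    """Compute each distinct influence coefficient only once."""
    def __init__(self, integrate):
        self.integrate = integrate   # expensive element integration routine
        self.cache = {}
        self.hits = 0

    def coefficient(self, src_nodes, fld_point):
        key = relative_signature(src_nodes, fld_point)
        if key in self.cache:
            self.hits += 1
        else:
            self.cache[key] = self.integrate(src_nodes, fld_point)
        return self.cache[key]

# Toy kernel: one-point rule on a 1/r kernel (stand-in for the actual
# elastodynamic fundamental solution integrated over the element).
def integrate(src_nodes, fld_point):
    centroid = np.mean(src_nodes, axis=0)
    return 1.0 / np.linalg.norm(centroid - fld_point)

cc = CoefficientCache(integrate)
# A regular strip of equally sized square elements: coefficients repeat.
elems = [np.array([[i, 0, 0], [i + 1, 0, 0], [i + 1, 1, 0], [i, 1, 0]], float)
         for i in range(50)]
points = [e.mean(axis=0) + [0, 0, 1] for e in elems]
H = np.array([[cc.coefficient(e, p) for e in elems] for p in points])
print(f"{H.size} coefficients, {len(cc.cache)} actually integrated, "
      f"{cc.hits} cache hits")
```

On this regular strip only the relative offset between element and collocation point matters, so 2500 matrix entries reduce to 99 distinct integrations, which mirrors the memory savings reported above.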
Abstract:
Electric probes are objects immersed in the plasma with sharp boundaries which collect or emit charged particles. Consequently, the nearby plasma evolves under abrupt imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of the plasma species, charge separation and absorbing or emitting walls. Traditional numerical schemes based on differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used in both fluid and kinetic descriptions to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method grounded on an integral advancing scheme that uses approximate Green's functions, also called short-time propagators. All the integrals, as a path-integration process, are calculated numerically, which yields a robust, grid-free computational integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as current emission from a wall, can be treated during the short-time regime, providing solutions that behave as if they were known analytically at each time step. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. In this work a procedure is proposed to incorporate, for the first time, the boundary conditions into the numerical integral scheme. This numerical scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, dealing with fluid diffusion equations combined self-consistently with the Poisson equation. The stability of this computational method has been verified for any number of iterations, even when advancing electrons and ions with different time scales. This work establishes the basis for dealing in future work with problems related to plasma thrusters or emissive probes in electromagnetic fields.
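As a pointer to what a short-time propagator looks like, for a one-dimensional advection-diffusion equation \(\partial_t f = D\,\partial_x^2 f - v\,\partial_x f\) with locally constant coefficients, one step of the integral scheme takes the form

\[
f(x, t+\Delta t) = \int G(x, x'; \Delta t)\, f(x', t)\, dx', \qquad
G(x, x'; \Delta t) = \frac{1}{\sqrt{4\pi D \Delta t}}
\exp\!\left[-\frac{(x - x' - v\,\Delta t)^2}{4 D \Delta t}\right],
\]

i.e. a Gaussian propagator whose drift and width are set by the local coefficients; for slowly varying \(v\) and \(D\) this is exact to first order in \(\Delta t\), which is what makes the propagator approximate and short-time. (Textbook form shown for illustration; the propagators of the paper additionally handle sources, sinks and the boundary conditions discussed above.)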