114 results for Shishkin mesh
Abstract:
Microwave (MW) thawing of 2D frozen cylinders exposed to uniform plane waves from one face is modeled using the effective heat capacity formulation, with the MW power obtained from the electric field equations. Computations are illustrated for tylose (23% methyl cellulose gel), which melts over a range of temperatures, giving rise to a mushy zone. Within the mushy region the dielectric properties are functions of the liquid volume fraction. The resulting coupled, time-dependent, non-linear equations are solved using the Galerkin finite element method on a fixed mesh. Our method efficiently captures the multiple connected thawed domains that arise due to the penetration of MWs into the sample. For a cylinder of diameter D, the two length scales that control the thawing dynamics are D/D_p and D/λ_m, where D_p and λ_m are the penetration depth and wavelength of the radiation in the sample, respectively. For D/D_p, D/λ_m ≪ 1, power absorption is uniform and thawing occurs almost simultaneously across the sample (Regime I). For D/D_p ≫ 1, thawing occurs from the incident face, since the power decays exponentially into the sample (Regime III). At intermediate values, 0.2 < D/D_p, D/λ_m < 2.0 (Regime II), thawing occurs from the unexposed face at smaller diameters, from both faces at intermediate diameters, and from the exposed and central regions at larger diameters. The average power absorption during thawing rises monotonically in Regime I and decreases monotonically in Regime III. Local maxima in the average power observed for samples in Regime II are due to internal resonances within the sample. Thawing time increases monotonically with sample diameter, and temperature gradients in the sample generally increase from Regime I to Regime III. (C) 2002 Elsevier Science Ltd. All rights reserved.
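The regime boundaries can be made concrete with a small sketch that classifies a sample using the thresholds quoted in the abstract; the cut-offs 0.2 and 2.0 come from the stated ranges, and the function itself is purely illustrative, not from the paper.

```python
def thawing_regime(D, D_p, lam_m):
    """Classify the MW thawing regime of a cylinder of diameter D, given
    penetration depth D_p and in-sample wavelength lam_m (thresholds follow
    the ranges quoted in the abstract)."""
    r_p, r_lam = D / D_p, D / lam_m
    if r_p < 0.2 and r_lam < 0.2:
        return "Regime I: near-uniform absorption, thawing throughout the sample"
    if r_p > 2.0:
        return "Regime III: exponential power decay, thawing from the exposed face"
    return "Regime II: internal resonances, thawing pattern depends on diameter"

print(thawing_regime(D=0.005, D_p=0.1, lam_m=0.05))   # illustrative values (m)
```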
Abstract:
We describe a SystemC-based framework we are developing to explore the impact of various architectural and microarchitectural parameters of the on-chip interconnection network elements on its power and performance. The framework enables one to choose from a variety of architectural options such as topology and routing policy, and allows experimentation with various microarchitectural options for the individual links, such as length, wire width, pitch, pipelining, supply voltage and frequency. The framework also supports a flexible traffic generation and communication model. We provide preliminary results of using this framework to study the power, latency and throughput of a 4x4 multi-core processing array using mesh, torus and folded torus topologies, for two different communication patterns drawn from dense and sparse linear algebra. The traffic consists of both Request-Response messages (mimicking cache accesses) and One-Way messages. We find that the average latency can be reduced by increasing the pipeline depth, as it enables higher link frequencies. We also find that there exists an optimum degree of pipelining which minimizes the energy-delay product.
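The energy-delay-product criterion mentioned at the end can be illustrated with a trivial sketch: given energy and average latency for a sweep of link pipeline depths, pick the depth minimizing their product. All numbers below are placeholders, not results from the framework.

```python
# Placeholder sweep: (energy in nJ, average latency in ns) per pipeline depth.
measurements = {
    1: (10.0, 40.0),
    2: (11.0, 28.0),
    3: (12.5, 24.0),
    4: (14.5, 23.0),
}
edp = {depth: energy * latency for depth, (energy, latency) in measurements.items()}
best = min(edp, key=edp.get)
print(f"optimum pipeline depth = {best}, EDP = {edp[best]:.1f} nJ*ns")
```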
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c√(ln n / n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), hr(n)], for ε > 0, with probability approaching unity as n → ∞. This result shows that a node with hop distance h from the anchor is more likely to lie within this annulus, centred at the anchor location and of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
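The scaling of r(n) and the annulus bound can be checked numerically. Below is a minimal simulation sketch (not the authors' code; c = 2 and ε = 0.1 are arbitrary choices) that deploys n uniform points on the unit square, builds the geometric graph of radius r(n) = c√(ln n / n), computes hop counts from an anchor by BFS, and measures how often the Euclidean distance falls inside [(1−ε)(h−1)r(n), h·r(n)].

```python
import numpy as np
from collections import deque

def hop_counts(points, r, anchor=0):
    """BFS hop distances on the geometric graph of radius r."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    adj = d2 <= r * r
    hops = np.full(len(points), -1)
    hops[anchor] = 0
    q = deque([anchor])
    while q:
        u = q.popleft()
        for v in np.nonzero(adj[u])[0]:
            if hops[v] < 0:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

n, c, eps = 2000, 2.0, 0.1                     # illustrative choices
rng = np.random.default_rng(0)
pts = rng.random((n, 2))                       # uniform i.i.d. on the unit square
r = c * np.sqrt(np.log(n) / n)
h = hop_counts(pts, r)
d = np.linalg.norm(pts - pts[0], axis=1)       # Euclidean distance to the anchor
inside = (d >= (1 - eps) * (h - 1) * r) & (d <= h * r)
print("fraction of nodes inside the annulus:", inside[h > 0].mean())
```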
Abstract:
Cargo transport through the nuclear pore complex continues to be a subject of considerable interest to experimentalists and theorists alike. Several recent studies have revealed details of the process that have still to be fully understood, among them the apparent nonlinearity between cargo size and the pore crossing time, the skewed, asymmetric nature of the distribution of such crossing times, and the non-exponentiality in the decay profile of the dynamic autocorrelation function of cargo positions. In this paper, we show that a model of pore transport based on subdiffusive particle motion is in qualitative agreement with many of these observations. The model corresponds to a process of stochastic binding and release of the particle as it moves through the channel. It suggests that the phenylalanine-glycine repeat units that form an entangled polymer mesh across the channel may be involved in translocation, since these units have the potential to intermittently bind to hydrophobic receptor sites on the transporter protein. (C) 2011 American Institute of Physics. [doi:10.1063/1.3651100]
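For illustration, subdiffusive motion of the kind invoked here can be generated by a continuous-time random walk with heavy-tailed binding (waiting) times; the sketch below is a generic toy model, not the authors' transport model, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def ctrw_trajectory(n_jumps, alpha=0.7, step=1.0):
    """Times and positions of a walker that waits a heavy-tailed (Pareto)
    time between unbiased jumps, mimicking intermittent binding/release."""
    waits = rng.pareto(alpha, n_jumps) + 1.0    # heavy-tailed waiting times
    jumps = rng.choice([-step, step], n_jumps)  # unbiased jump on each release
    return np.cumsum(waits), np.cumsum(jumps)

t, x = ctrw_trajectory(10_000)
print(f"elapsed time {t[-1]:.1f}, net displacement {x[-1]:.1f}")
```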
Abstract:
The Morse-Smale complex is a useful topological data structure for the analysis and visualization of scalar data. This paper describes an algorithm that processes all mesh elements of the domain in parallel to compute the Morse-Smale complex of large two-dimensional data sets at interactive speeds. We employ a reformulation of the Morse-Smale complex using Forman's Discrete Morse Theory and achieve scalability by computing the discrete gradient using local accesses only. We also introduce a novel approach to merge gradient paths that ensures accurate geometry of the computed complex. We demonstrate that our algorithm performs well on both multicore environments and on massively parallel architectures such as the GPU.
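As a much-simplified illustration of the "local accesses only" idea (per-vertex steepest descent on a regular 2-D grid, rather than Forman's cell-based discrete gradient), each vertex in the sketch below inspects only its own neighbourhood, so the loop could be executed independently per vertex on a GPU.

```python
import numpy as np

def descent_directions(f):
    """For each interior grid vertex, return the offset of its lowest
    4-neighbour, or (0, 0) at a local minimum. Each vertex reads only its
    own neighbourhood, so the loop parallelizes trivially."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    grad = np.zeros(f.shape + (2,), dtype=int)
    for i in range(1, f.shape[0] - 1):
        for j in range(1, f.shape[1] - 1):
            di, dj = min(offsets, key=lambda o: f[i + o[0], j + o[1]])
            if f[i + di, j + dj] < f[i, j]:
                grad[i, j] = (di, dj)
    return grad

f = np.random.default_rng(0).random((6, 6))     # toy scalar field
print(descent_directions(f)[3, 3])
```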
Abstract:
Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
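As one concrete (hypothetical) way to turn hop counts into positions, the sketch below approximates each anchor distance by hop_count · r and solves the resulting least-squares multilateration problem; this is in the spirit of the analysis above, not the authors' proposed algorithms.

```python
import numpy as np

def localize(anchors, hops, r):
    """Least-squares position estimate from hop counts to known anchors,
    using hop_count * r as the distance proxy."""
    d = hops * r
    a_ref, d_ref = anchors[-1], d[-1]          # linearize against the last anchor
    A = 2 * (anchors[:-1] - a_ref)
    b = (d_ref ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(a_ref ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.1, 0.1], [0.9, 0.2], [0.5, 0.9], [0.2, 0.8]])
print(localize(anchors, hops=np.array([4, 5, 3, 3]), r=0.12))
```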
Abstract:
CAELinux is a Linux distribution bundled with free software packages related to Computer Aided Engineering (CAE). The free software packages include software that can build a three-dimensional solid model, programs that can mesh a geometry, software for carrying out Finite Element Analysis (FEA), programs that can carry out image processing, etc. The present work has two goals: 1) to give a brief description of CAELinux, and 2) to demonstrate that CAELinux can be useful for Computer Aided Engineering, using as an example the three-dimensional reconstruction of a pig liver from a stack of CT-scan images. Reconstructing the liver with commercial software instead of CAELinux would cost a great deal of money, whereas CAELinux is a free and open source operating system and all of the software packages included in it are also free. Hence CAELinux can be a very useful tool in application areas such as surgical simulation, which require three-dimensional reconstructions of biological organs, and for Computer Aided Engineering in general.
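A generic flavour of such a reconstruction workflow (not tied to the specific packages bundled with CAELinux) is to segment the image volume and extract a triangle surface mesh with marching cubes; the sketch below uses a synthetic sphere in place of the CT-scan stack.

```python
import numpy as np
from skimage import measure

# Synthetic sphere standing in for a segmented CT-scan stack.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x ** 2 + y ** 2 + z ** 2) < 20).astype(float)

# Extract a triangle surface mesh from the segmented volume.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

The resulting surface mesh could then be exported for volume meshing and FEA with the tools included in the distribution.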
Abstract:
This paper presents a singular edge-based smoothed finite element method (sES-FEM) for mechanics problems with singular stress fields of arbitrary order. The sES-FEM uses a basic mesh of three-noded linear triangular (T3) elements and a special layer of five-noded singular triangular elements (sT5) connected to the singular point of the stress field. The sT5 element has an additional node on each of the two edges connected to the singular point. This allows a simple and efficient enrichment of the displacement field near the singular point with the desired terms, while satisfying the partition-of-unity property. The stiffness matrix of the discretized system is then obtained using the assumed displacement values (not the derivatives) over smoothing domains associated with the edges of elements. An adaptive procedure for the sES-FEM is proposed to enhance the quality of the solution with a minimal number of nodes. Several numerical examples are provided to validate the reliability of the present sES-FEM. (C) 2012 Elsevier B.V. All rights reserved.
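For context, a commonly used edge interpolation near a singular point of order λ (stress varying like r^(λ−1)) has the form below; this is an assumed, generic form consistent with the abstract's description of three nodes along each edge attached to the singular point, not necessarily the exact enrichment used in the paper.

```latex
% Assumed displacement interpolation along an edge attached to the singular
% point, with r the distance from the singular point:
u(r) \approx c_0 + c_1\, r + c_2\, r^{\lambda}, \qquad 0 < \lambda < 1
```

Under this assumption, the extra mid-edge node of the sT5 element supplies the third nodal value needed to determine c_0, c_1 and c_2.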
Abstract:
The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from the calibrated measurement U_measured^cal. In the first approach, the measurement data with a homogeneous phantom (U_measured^homo) is used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the necessity of a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing it to the corresponding detector positions. The experiments carried out on tissue mimicking phantom with single and multiple inhomogeneities, human hand, and a pork tissue phantom demonstrate the robustness of the approach. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.18.2.026023]
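The second, reference-free scheme can be sketched as follows; this is a rough interpretation only, in which the array shapes, the averaging rule and the scaling step are all assumptions rather than the paper's exact procedure.

```python
import numpy as np

def surrogate_homogeneous(U_hetero):
    """U_hetero[s, k]: measurement at the detector that is k positions away
    (circularly) from source s. Averaging over sources suppresses the
    signature of a localized inhomogeneity (assumed interpretation)."""
    return np.tile(U_hetero.mean(axis=0), (U_hetero.shape[0], 1))

def calibrate(U_hetero, Phi_model_homo):
    """Scale the heterogeneous data by the model-to-surrogate ratio to
    estimate the true flux density (assumed calibration rule)."""
    return U_hetero * (Phi_model_homo / surrogate_homogeneous(U_hetero))

rng = np.random.default_rng(0)
U = rng.random((16, 15)) + 1.0                 # placeholder 16-source, 15-detector data
Phi_model = np.ones((16, 15))                  # placeholder homogeneous model flux
print(calibrate(U, Phi_model).shape)
```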
Abstract:
We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the information dissemination context. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that will ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message. Under the linear threshold model, the decision to forward the information depends on comparing the total influence of the nodes from which a node has received the packet with the node's own threshold of influence. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set of nodes, for a generic network. We show that the problem can be recast in the framework of Markov chains. We then use the analytical expression to gain insights into information dissemination in some simple network topologies such as the star, ring and mesh, and on acyclic graphs. We also derive the optimal initial set in the above networks, and hint at general heuristics for picking a good initial set.
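A minimal simulation of the linear threshold dynamics described here (a generic implementation, not the paper's analytical machinery) is sketched below: a node forwards the message once the total influence weight of its informed neighbours reaches its threshold.

```python
import numpy as np

def linear_threshold_spread(W, thresholds, seeds):
    """W[u, v]: influence of u on v. A node forwards the message once the
    total influence of its informed neighbours reaches its threshold.
    Returns the indices of the finally informed nodes."""
    informed = np.zeros(W.shape[0], dtype=bool)
    informed[list(seeds)] = True
    while True:
        pressure = informed @ W                # total influence on each node
        newly = (~informed) & (pressure >= thresholds)
        if not newly.any():
            return np.nonzero(informed)[0]
        informed |= newly

# Toy example: a 5-node ring with symmetric weights and random thresholds.
rng = np.random.default_rng(0)
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 0.5
print(linear_threshold_spread(W, rng.random(n), seeds={0}))
```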
Abstract:
Tissue injury during therapeutic ultrasound or lithotripsy is thought, in some cases, to be due to the action of cavitation bubbles. Assessing and mitigating this is challenging, since bubble dynamics in the complex confinement of tissues or in small blood vessels are difficult to predict. Simulation tools require specialized algorithms to simultaneously represent strong acoustic waves and shocks, topologically complex liquid-vapor phase boundaries, and the complex viscoelastic material dynamics of tissue. We discuss advances in a simulation tool for such situations. A single-mesh Eulerian solver is used to solve the governing equations. Special sharpening terms maintain the liquid-vapor interface in the face of the finite numerical dissipation included in the scheme to accurately capture shocks. A recent enhancement to this formulation has significantly improved this interface-capturing procedure, which is demonstrated for simulation of the Rayleigh collapse of a bubble. The solver also transports elastic stresses and can thus be used to assess the effects of elastic properties on bubble dynamics. A shock-induced bubble collapse adjacent to a model elastic tissue is used to demonstrate this and draw some conclusions regarding the injury-suppressing role that tissue elasticity might play.
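As a reference time scale for the Rayleigh collapse test mentioned above, the classical Rayleigh estimate (a textbook result, not a result of this work) for an empty bubble of initial radius R_0 in a liquid of density ρ driven by a pressure difference Δp is:

```latex
t_c \approx 0.915\, R_0 \sqrt{\rho / \Delta p}
```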
Abstract:
Algorithms for adaptive mesh refinement using a residual error estimator are proposed for fluid flow problems in a finite volume framework. The residual error estimator, referred to as the R-parameter is used to derive refinement and coarsening criteria for the adaptive algorithms. An adaptive strategy based on the R-parameter is proposed for continuous flows, while a hybrid adaptive algorithm employing a combination of error indicators and the R-parameter is developed for discontinuous flows. Numerical experiments for inviscid and viscous flows on different grid topologies demonstrate the effectiveness of the proposed algorithms on arbitrary polygonal grids.
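A minimal sketch of the flagging step in such an adaptive loop is given below; the R-parameter itself is the paper's estimator and enters here only as an input array, and the refine/coarsen fractions are arbitrary illustrative values.

```python
import numpy as np

def flag_cells(R, refine_frac=0.5, coarsen_frac=0.05):
    """Return boolean refine/coarsen flags from a per-cell error estimate R."""
    r_max = np.max(np.abs(R))
    refine = np.abs(R) > refine_frac * r_max
    coarsen = np.abs(R) < coarsen_frac * r_max
    return refine, coarsen

R = np.abs(np.random.default_rng(0).normal(size=1000))   # stand-in error field
refine, coarsen = flag_cells(R)
print(refine.sum(), "cells to refine,", coarsen.sum(), "cells to coarsen")
```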
Abstract:
The classical Chapman-Enskog expansion is performed for the recently proposed finite-volume formulation of the lattice Boltzmann equation (LBE) method [D.V. Patil, K.N. Lakshmisha, Finite volume TVD formulation of lattice Boltzmann simulation on unstructured mesh, J. Comput. Phys. 228 (2009) 5262-5279]. First, a modified partial differential equation is derived from a numerical approximation of the discrete Boltzmann equation. Then, the multi-scale, small parameter expansion is followed to recover the continuity and the Navier-Stokes (NS) equations with additional error terms. The expression for the apparent value of the kinematic viscosity is derived for the finite-volume formulation under certain assumptions. The attenuation of a shear wave, Taylor-Green vortex flow and driven channel flow are studied to analyze the apparent viscosity relation.
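For reference, the standard BGK lattice Boltzmann relation between the relaxation time and the kinematic viscosity, which the finite-volume "apparent" viscosity modifies with additional discretization-dependent terms, is:

```latex
\nu = c_s^{2} \left( \tau - \frac{\Delta t}{2} \right)
```

where c_s is the lattice sound speed and τ the BGK relaxation time; the paper's expression for the apparent viscosity is specific to the finite-volume scheme and is not reproduced here.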
Abstract:
Adaptive Mesh Refinement (AMR) is a method which dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. In-situ visualization plays an important role in analyzing the time-evolving characteristics of the domain structures. Continuous visualization of the output data over the timesteps gives a better understanding of the underlying domain and of the model used to simulate it. In this paper, we develop strategies for continuous online visualization of time-evolving data for AMR applications executed on GPUs. We reorder the meshes for computations on the GPU based on the user's input about the subdomain to be visualized. This makes the data available for visualization at a faster rate. We then perform asynchronous execution of the visualization steps and fix-up operations on the CPUs while the GPU advances the solution. Through experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies yield a 60% improvement in response time and a 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the timesteps.
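The overlap of solution advance and visualization can be illustrated with a generic producer-consumer sketch (plain Python threading, not the authors' CUDA implementation): the solver keeps producing timestep data while a separate thread renders already-produced data.

```python
import queue
import threading

def solver(out_q, n_steps):
    """Stand-in for the GPU time-stepping loop: produce data and keep going."""
    for t in range(n_steps):
        out_q.put(f"mesh state at step {t}")   # hand off for asynchronous viz
    out_q.put(None)                            # sentinel: no more timesteps

def visualizer(in_q):
    """Stand-in for CPU-side fix-up and rendering of already-produced data."""
    while (data := in_q.get()) is not None:
        print("rendering", data)

q = queue.Queue(maxsize=4)                     # bounded buffer between the two
viz = threading.Thread(target=visualizer, args=(q,))
viz.start()
solver(q, n_steps=8)
viz.join()
```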
Abstract:
The contour tree is a topological abstraction of a scalar field that captures the evolution in level set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output-sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain that is represented using either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds, with a speedup factor of up to 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets.
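For background, the serial building block that contour-tree algorithms typically start from is a union-find sweep that constructs the join tree; the sketch below is this textbook serial version, not the paper's parallel algorithm.

```python
import numpy as np

def join_tree_events(values, edges):
    """Sweep vertices from high to low value, merging components with
    union-find; each merge event (component_representative, vertex) marks an
    arc endpoint of the join tree."""
    adj = {v: [] for v in range(len(values))}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    events = []
    for v in np.argsort(-np.asarray(values)):
        parent[v] = v
        for u in adj[v]:
            if u in parent:                    # neighbour already swept
                ru, rv = find(u), find(v)
                if ru != rv:
                    events.append((ru, v))
                    parent[ru] = rv
    return events

vals = [0.0, 2.0, 1.0, 3.0, 0.5]               # toy scalar field on a path graph
print(join_tree_events(vals, edges=[(0, 1), (1, 2), (2, 3), (3, 4)]))
```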