201 results for Computers.


Relevance: 10.00%

Publisher:

Abstract:

In this work, we present finite element formulations for the Saint-Venant torsion and bending problems for prismatic beams. The torsion formulation is based on the warping function and can handle multiply-connected regions (including thin-walled structures) as well as compound and anisotropic bars. Similarly, the bending formulation, which is based on linearized elasticity theory, can handle multiply-connected domains including thin-walled sections. The torsional rigidity and shear centers are obtained as special cases of these formulations. Numerical results are presented to show the good coarse-mesh accuracy of both formulations for both the displacement and stress fields. The stiffness matrices and load vectors (which are similar to those for a variable body force in a conventional structural mechanics problem) in both formulations involve only domain integrals, which makes them simple to implement and computationally efficient. (C) 2014 Elsevier Ltd. All rights reserved.

Real-world biological systems such as the human brain are inherently nonlinear and difficult to model. However, most previous studies have employed either linear models or parametric nonlinear models for investigating brain function. In this paper, a novel application of a nonlinear, recurrence-based measure of phase synchronization, the correlation between probabilities of recurrence (CPR), to the study of connectivity in the brain is proposed. Being non-parametric, the method makes very few assumptions, which makes it suitable for investigating brain function in a data-driven way. The utility of CPR is demonstrated on multichannel electroencephalographic (EEG) signals. Brain connectivity obtained from the thresholded CPR matrix of multichannel EEG signals showed clear differences in the number and pattern of connections between (a) epileptic seizure and pre-seizure states and (b) eyes-open and eyes-closed states. The corresponding brain headmaps provide meaningful insights about synchronization in the brain in those states. K-means clustering of connectivity parameters obtained from CPR and from linear correlation for global epileptic seizure and pre-seizure showed significantly larger cluster centroid distances for CPR, demonstrating its superior ability to discriminate seizure from pre-seizure. In the case of focal epilepsy, the headmap clearly enables identification of the focus of the epilepsy, which has diagnostic value. (C) 2013 Elsevier Ltd. All rights reserved.
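As a rough illustration of the CPR measure described above: for each signal one computes the probability of recurrence p(τ), the fraction of states that return to within a threshold ε of themselves after a lag τ, and CPR is the correlation coefficient between the two p(τ) curves. The sketch below is a simplified scalar-state version (no phase-space embedding) with illustrative parameter choices, not the authors' implementation:

```python
import numpy as np

def recurrence_probability(x, eps, max_lag):
    # p(tau): fraction of states that recur to within eps of themselves
    # after a lag of tau samples (scalar states, no embedding)
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.mean(np.abs(x[:n - tau] - x[tau:]) < eps)
                     for tau in range(1, max_lag + 1)])

def cpr(x, y, eps, max_lag):
    # Correlation between the probabilities of recurrence of two signals
    px = recurrence_probability(x, eps, max_lag)
    py = recurrence_probability(y, eps, max_lag)
    return np.corrcoef(px, py)[0, 1]

t = np.linspace(0, 20 * np.pi, 2000)
a = np.sin(t)
b = np.sin(t + 0.3)                                   # phase-locked to a
c = np.random.default_rng(0).standard_normal(t.size)  # unrelated noise
print(cpr(a, b, 0.1, 200))   # close to 1: strong phase synchronization
print(cpr(a, c, 0.1, 200))   # much smaller in magnitude
```

In actual recurrence analysis the states would be delay-embedded vectors and ε chosen per signal (for example, to fix a common recurrence rate).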

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing the leakage power dissipated in an over-allocated cache by switching it off. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application because of 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch off over-allocated cache ways in Static and Dynamic NonUniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, the approach scales better with the number of cores on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared to the AMAL and CMR heuristics on SNUCA, respectively.
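The notion of working set size used above can be illustrated with a toy software sketch: the WSS over a window of accesses is the number of distinct cache lines touched in that window. This shows only the definition, not the paper's hardware tag-based TWSS mechanism; all names and values are illustrative:

```python
def working_set_size(addresses, window, line_size=64):
    # Count distinct cache lines touched in each consecutive window
    lines = [a // line_size for a in addresses]
    return [len(set(lines[s:s + window]))
            for s in range(0, len(lines), window)]

# 1000 accesses confined to 16 lines, then 1000 spread over 256 lines
trace = [(i * 64) % (16 * 64) for i in range(1000)] + \
        [(i * 64) % (256 * 64) for i in range(1000)]
print(working_set_size(trace, 1000))  # [16, 256]
```

A miss-ratio heuristic would report near-zero misses for both phases once the cache exceeds 256 lines, which is exactly the poor correlation with WSS that the abstract points out.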

In Incompressible Smoothed Particle Hydrodynamics (ISPH), a pressure Poisson equation (PPE) is solved to obtain a divergence-free velocity field. When free surfaces are simulated using this method, a Dirichlet boundary condition for pressure has to be applied at the free surface. In existing ISPH methods this is achieved by identifying free-surface particles using a heuristically chosen threshold of a parameter such as the kernel sum, the density or the divergence of position, and explicitly setting their pressure values. This often leads to clumping of particles near the free surface and spraying off of surface particles during splashes. Moreover, surface pressure gradients in flows where surface tension is important are not captured well using this approach. We propose a more accurate semi-analytical approach to impose Dirichlet boundary conditions on the free surface. We show the efficacy of the proposed algorithm using test cases of the elongation of a droplet and dam break. We perform two-dimensional simulations of water entry and validate the proposed algorithm against experimental results. Further, a three-dimensional simulation of droplet splash is shown to compare well with Volume-of-Fluid simulations. (C) 2014 Elsevier Ltd. All rights reserved.
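The kernel-sum heuristic that existing ISPH methods use to flag free-surface particles, and which the abstract argues is error-prone, can be sketched as follows: a particle whose kernel sum is deficient relative to interior particles has an incomplete neighborhood and is tagged as a surface particle. The spacing, smoothing length and threshold below are illustrative assumptions:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 2-D cubic spline SPH kernel
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.zeros_like(q)
    m = q < 1.0
    w[m] = sigma * (1.0 - 1.5 * q[m] ** 2 + 0.75 * q[m] ** 3)
    m = (q >= 1.0) & (q < 2.0)
    w[m] = sigma * 0.25 * (2.0 - q[m]) ** 3
    return w

def surface_particles(pos, h, threshold):
    # Flag particles whose kernel sum falls below a fraction of the
    # typical (median) interior value
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    kernel_sum = cubic_spline_kernel(d, h).sum(axis=1)
    return kernel_sum < threshold * np.median(kernel_sum)

# Regular 10x10 block of fluid particles: the boundary ring gets flagged
dx = 0.1
xs, ys = np.meshgrid(np.arange(10) * dx, np.arange(10) * dx)
pos = np.column_stack([xs.ravel(), ys.ravel()])
flags = surface_particles(pos, h=1.3 * dx, threshold=0.9)
print(int(flags.sum()))  # 36: the perimeter particles
```

The fragility the paper addresses is visible here: the outcome depends entirely on the ad hoc threshold, and splashing particles with few neighbors are misclassified the same way as true surface particles.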

The occurrence of spurious solutions is a well-known limitation of the standard nodal finite element method when applied to electromagnetic problems. The two remedies commonly used to address this problem are (i) the addition of a penalty term, with the penalty factor based on the local dielectric constant, which reduces to a Helmholtz form on homogeneous domains (the regularized formulation); and (ii) a formulation based on a vector and a scalar potential. Both strategies have shortcomings: the penalty method does not completely eliminate the spurious modes, and both methods are incapable of predicting singular eigenvalues in non-convex domains. Some non-zero spurious eigenvalues are also predicted by these methods on non-convex domains. In this work, we develop mixed finite element formulations which predict the eigenfrequencies (including their multiplicities) accurately, even for non-convex domains. The main feature of the proposed mixed finite element formulation is that no ad hoc terms are added, as is done in the penalty formulation; the improvement is achieved purely by an appropriate choice of finite element spaces for the different variables. We show that the formulation works even for inhomogeneous domains, where 'double noding' is used to enforce the appropriate continuity requirements at an interface. For two-dimensional problems the shape of the domain can be arbitrary, while for three-dimensional ones, with our current formulation, only regular domains (which can be non-convex) can be modeled. Since eigenfrequencies are modeled accurately, these elements also yield accurate results for driven problems. (C) 2014 Elsevier Ltd. All rights reserved.

A new C-0 composite plate finite element based on Reddy's third-order theory is used for large-deformation dynamic analysis of delaminated composite plates. The inter-laminar contact is modeled with an augmented Lagrangian approach. Numerical results show that the widely used "unconditionally stable" beta-Newmark method presents instability problems in the transient simulation of delaminated composite plate structures with large deformation. To overcome this instability issue, the energy- and momentum-conserving composite implicit time integration scheme presented by Bathe and Baig is used. It is found that a proper selection of the penalty parameter is crucial in the contact simulation. (C) 2014 Elsevier Ltd. All rights reserved.
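The composite scheme of Bathe and Baig referred to above splits each time step into a trapezoidal sub-step over Δt/2 followed by a three-point backward Euler sub-step over the full step. A minimal sketch for an undamped linear oscillator m·ü + k·u = 0, used here as a scalar stand-in for the FE system (contact and large deformation omitted):

```python
import numpy as np

def bathe_step(m, k, dt, u0, v0, a0):
    # Sub-step 1: trapezoidal rule over dt/2
    h = dt / 2.0
    u1 = (u0 + h * v0 + h * h / 4.0 * a0) / (1.0 + h * h / 4.0 * k / m)
    a1 = -k / m * u1
    v1 = v0 + h / 2.0 * (a0 + a1)
    # Sub-step 2: three-point backward Euler over the full step,
    # using values at t, t + dt/2 and t + dt
    rhs = -m * ((v0 - 4.0 * v1) / dt + 3.0 * (u0 - 4.0 * u1) / dt ** 2)
    u2 = rhs / (9.0 * m / dt ** 2 + k)
    v2 = (u0 - 4.0 * u1 + 3.0 * u2) / dt
    a2 = (v0 - 4.0 * v1 + 3.0 * v2) / dt
    return u2, v2, a2

m, k = 1.0, (2.0 * np.pi) ** 2        # natural period T = 1
dt, nsteps = 0.01, 200                # integrate over two full periods
u, v = 1.0, 0.0
a = -k / m * u
for _ in range(nsteps):
    u, v, a = bathe_step(m, k, dt, u, v, a)
energy = 0.5 * m * v * v + 0.5 * k * u * u
print(u, energy)  # u returns to ~1.0; energy stays ~0.5*k
```

For well-resolved modes the scheme conserves energy and momentum to high accuracy, while its second sub-step damps the spurious high-frequency response that destabilizes Newmark-type runs in contact problems.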

Background: Haemophilus influenzae (H. influenzae) is the causative agent of pneumonia, bacteraemia and meningitis. The organism is responsible for a large number of deaths in both developed and developing countries. Even though the first bacterial genome to be sequenced was that of H. influenzae, there is no exclusive database dedicated to H. influenzae. This prompted us to develop the Haemophilus influenzae Genome Database (HIGDB). Methods: All HIGDB data are stored and managed in a MySQL database. The HIGDB is hosted on a Solaris server and developed using Perl modules; Ajax and JavaScript are used for the interface. Results: The HIGDB contains detailed information on 42,741 proteins and 18,077 genes, including 10 whole genome sequences, as well as 284 three-dimensional structures of H. influenzae proteins. In addition, the database provides "Motif search" and "GBrowse" tools. The HIGDB is freely accessible at http://bioserver1.physics.iisc.ernet.in/HIGDB/. Discussion: The HIGDB will be a single point of access for bacteriological, clinical, genomic and proteomic information on H. influenzae. The database can also be used to identify DNA motifs within H. influenzae genomes and to compare gene or protein sequences of a particular strain with those of other strains. (C) 2014 Elsevier Ltd. All rights reserved.
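As a small illustration of the kind of DNA motif lookup a genome database like HIGDB supports, a naive both-strand scan (not HIGDB's actual implementation; sequence and motif are illustrative) might look like:

```python
def reverse_complement(seq):
    # Complement each base and reverse the sequence
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_motif(genome, motif):
    # Return 0-based (position, strand) hits on both strands
    hits = []
    for pattern, strand in ((motif, "+"), (reverse_complement(motif), "-")):
        start = genome.find(pattern)
        while start != -1:
            hits.append((start, strand))
            start = genome.find(pattern, start + 1)
    return sorted(hits)

genome = "ACGTTTGACGGATCCAAATTTGGATCCACGT"
# GGATCC (the BamHI site) is palindromic, so each site hits both strands
print(find_motif(genome, "GGATCC"))
```

A production motif search would additionally handle IUPAC ambiguity codes and position-weight matrices rather than exact strings.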

In this paper, a C-0 interior penalty method is proposed and analyzed for distributed optimal control problems governed by the biharmonic operator. The state and adjoint variables are discretized using continuous piecewise quadratic finite elements, while the control variable is discretized using piecewise constant approximations. A priori and a posteriori error estimates are derived for the state, adjoint and control variables under minimal regularity assumptions. Numerical results corroborate the theoretical results. The a posteriori error estimators are useful in adaptive finite element approximation, and the numerical results indicate that the sharp error estimators work efficiently in guiding mesh refinement. (C) 2014 Elsevier Ltd. All rights reserved.

A fractal dimension based damage detection method is investigated for a composite plate with random material properties. Composite materials show spatially varying random material properties because of complex manufacturing processes. Matrix cracks are considered as the damage in the composite plate; such cracks are often the initial damage mechanism in composites under fatigue loading and also occur due to low-velocity impact. The static deflection of the cantilevered composite plate under uniform loading is calculated using the finite element method, and damage detection is carried out by applying a sliding-window fractal dimension operator to the static deflection. A two-dimensional homogeneous Gaussian random field, generated using the Karhunen-Loeve (KL) expansion, represents the spatial variation of the composite material properties. The robustness of the fractal dimension based damage detection method is demonstrated with the composite material properties treated as a two-dimensional random field.
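The sliding-window fractal dimension operator mentioned above can be sketched in one dimension with the Katz estimator, FD = log10(n) / (log10(n) + log10(d/L)), where L is the curve length inside the window and d its maximal distance from the window's first point. The deflection curve below is a synthetic stand-in (a smooth curve with a small jagged perturbation playing the role of the damage signature), not the paper's finite element solution:

```python
import numpy as np

def katz_fd(y):
    # Katz fractal dimension of a curve sampled at unit steps
    n = len(y) - 1                                   # number of segments
    dy = np.diff(y)
    L = np.sum(np.sqrt(1.0 + dy * dy))               # total curve length
    d = np.max(np.hypot(np.arange(1, n + 1), y[1:] - y[0]))  # "diameter"
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def sliding_fd(y, win):
    # Fractal dimension profile over a sliding window
    return np.array([katz_fd(y[i:i + win]) for i in range(len(y) - win)])

# Smooth deflection-like curve with a localized jagged perturbation
# standing in for the damage signature near index 300
y = 10.0 * np.sin(np.linspace(0.0, np.pi, 600))
y[300:309] += 0.8 * np.array([1, -1, 1, -1, 1, -1, 1, -1, 1])
fd = sliding_fd(y, 50)
print(int(np.argmax(fd)))  # a window start near the perturbation
```

On smooth stretches FD stays near 1, and windows that contain the local irregularity show an elevated FD, which is what localizes the damage.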

A fractal dimension based damage detection method is studied for a composite structure with random material properties, considering a composite plate with a localized matrix crack. Matrix cracks are often the initial damage mechanism in composites. The fractal dimension based method is applied to the static deformation curve of the structure to detect localized damage. The static deflection of a cantilevered composite plate under uniform loading is calculated using the finite element method. Composite materials show spatially varying random material properties because of complex manufacturing processes; this spatial variation is represented as a two-dimensional homogeneous Gaussian random field generated using the Karhunen-Loeve (KL) expansion. The robustness of the fractal dimension based damage detection method is studied for the composite plate with spatially varying material properties.

A direct discretization approach and an operator-splitting scheme are applied to the numerical simulation of a population balance system which models the synthesis of urea with a univariate population. The problem is formulated in axisymmetric form, and the setup is chosen such that a steady state is reached. Both solvers are assessed with respect to the accuracy of the results, using experimental data for comparison, and with respect to the efficiency of the simulations. Depending on the goal of the simulations, whether to track the evolution of the process accurately or to reach the steady state quickly, recommendations for the choice of solver are given. (C) 2015 Elsevier Ltd. All rights reserved.

The history of computing in India is inextricably intertwined with two interacting forces: the political climate (determined by the political party in power) and government policies, mainly driven by the technocrats and bureaucrats who acted within the boundaries drawn by the political party in power. There were four break points (in 1970, 1978, 1991 and 1998) that changed the direction of the development of computers and their applications. This article explains why these breaks occurred and how they affected the history of computing in India.

Prime movers and refrigerators based on thermoacoustics have gained considerable importance for practical applications in view of the absence of moving components, reasonable efficiency, use of environmentally friendly working fluids, etc. Devices such as the twin Standing Wave ThermoAcoustic Prime Mover (SWTAPM), the Traveling Wave ThermoAcoustic Prime Mover (TWTAPM) and the thermoacoustically driven Standing Wave ThermoAcoustic Refrigerator (SWTAR) have been studied by researchers, and numerical modeling and simulation play a vital role in their development. In our efforts to build the above thermoacoustic systems, we have carried out numerical analysis of them using CFD procedures. The results of the analysis are compared with those of DeltaEC (freeware from LANL, USA) simulations and with experimental results wherever possible. For the CFD analysis, the commercial code Fluent 6.3.26 was used, along with the necessary boundary conditions, for different working fluids at various average pressures. The simulation results indicate that the choice of working fluid and the average pressure are critical to the performance of these thermoacoustic devices. It is also observed that the predictions of the CFD analysis are closer to the experimental results in most cases than those of the DeltaEC simulations. (C) 2015 Elsevier Ltd. All rights reserved.

Several papers have studied fault attacks on computing a pairing value e(P, Q), where P is a public point and Q is a secret point. In this paper, we observe that these attacks are in fact effective only on a small number of pairing-based protocols, and only when the protocols are implemented with specific symmetric pairings. We demonstrate the effectiveness of the fault attacks on a public-key encryption scheme, an identity-based encryption scheme, and an oblivious transfer protocol when implemented with a symmetric pairing derived from a supersingular elliptic curve with embedding degree 2.

Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. At such low MTBFs, employing periodic checkpointing alone will result in low efficiency because of the high number of application failures and the large amount of work lost to rollbacks. In such scenarios, proactive fault tolerance mechanisms that help avoid a significant number of failures are highly necessary. In this work, we have developed a mechanism for proactive fault tolerance using partial replication of a set of application processes. Our fault tolerance framework adaptively changes the set of replicated processes periodically, based on failure predictions, to avoid failures. We have developed an MPI prototype implementation, PAREP-MPI, that allows changing the replica set. We have shown that our strategy involving adaptive process replication significantly outperforms existing mechanisms, providing up to 20 percent improvement in application efficiency even for exascale systems.
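The adaptive replica-set selection described above can be caricatured in a few lines: given per-process failure predictions for the next interval, replicate the processes most likely to fail, within a fixed replication budget. This is an illustrative sketch of the idea, not PAREP-MPI's actual policy; all names and numbers are assumptions:

```python
def choose_replicas(failure_prob, budget):
    # Replicate the processes predicted most likely to fail next interval,
    # within a fixed replication budget
    ranked = sorted(failure_prob, key=failure_prob.get, reverse=True)
    return set(ranked[:budget])

# Hypothetical per-process failure probabilities for the next interval
pred = {0: 0.02, 1: 0.40, 2: 0.05, 3: 0.75, 4: 0.10}
print(choose_replicas(pred, 2))  # {1, 3}
```

Re-running this selection at each interval is what makes the replica set adaptive: as predictions change, replication moves to the processes currently at risk instead of staying fixed for the whole run.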