204 results for Conjugate gradient methods
Abstract:
Laminar natural convection between two coaxial vertical rectangular cylinders is numerically studied in this work. The outer cylinder is connected with vertical rectangular inlet and outlet pipes. The inner cylinder dissipates volumetric heat. The fluid flow and heat transfer characteristics between the cylinders are analyzed in detail for various Grashof numbers. The heat transfer rates on the individual faces of the inner cylinder are reported. The bottom face of the inner cylinder is found to be associated with much higher heat transfer rates than the other faces. The average Nusselt number on the bottom face is more than 2.5 times the Nusselt number averaged over all the faces. At a given elevation, the local Nusselt number on the inner cylinder faces increases towards the cylinder edges. The effect of the thermal condition of the outer cylinder, inlet and outlet walls on the natural convection is analyzed. The thermal condition shows a strong qualitative and quantitative impact on the fluid flow and heat transfer. The variation of the induced flow rate, dimensionless maximum temperature and average Nusselt numbers with Grashof number is studied. Correlations for the dimensionless buoyancy-induced mass flow rate and maximum temperature are presented. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
A computational tool called the "Directional Diffusion Regulator (DDR)" is proposed to bring real multidimensional physics into the upwind discretization of some numerical schemes for hyperbolic conservation laws. The direction-based regulator, when used with dimension-splitting solvers, moderates the excess multidimensional diffusion and hence produces a genuine multidimensional upwinding-like effect. The basic idea of this regulator-driven method is to retain a full upwind scheme across local discontinuities, with the upwind bias decreasing smoothly to a minimum in the farthest direction. The discontinuous solutions are quantified as gradients, and the regulator parameter across a typical finite volume interface or a finite difference interpolation point is formulated based on the fractional local maximum gradient in any of the weak-solution flow variables (density, pressure, temperature, Mach number or even wave velocity, etc.). DDR is applied to both the non-convective and the whole unsplit dissipative flux terms of some numerical schemes, mainly Local Lax-Friedrichs, to solve benchmark problems in inviscid compressible flow, shallow water dynamics and magneto-hydrodynamics. The first-order solutions consistently improved depending on the extent of grid non-alignment with discontinuities, with the major influence due to the regulation of non-convective diffusion. The application is also tested on schemes such as Roe, Jameson-Schmidt-Turkel and some second-order accurate methods. The consistent improvement in accuracy, at either moderate or marked levels, for a variety of problems and with increasing grid size reasonably indicates scope for DDR as a regular tool to impart a genuine multidimensional upwinding effect in a simpler framework. (C) 2012 Elsevier Inc. All rights reserved.
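As an illustration of the regulator idea (a minimal sketch for a 1D scalar conservation law with hypothetical function names; the actual DDR formulation is multidimensional and scheme-specific), the dissipative part of a Local Lax-Friedrichs interface flux can be scaled by a gradient-based parameter:

    import numpy as np

    def llf_flux(uL, uR, f, max_wave_speed, theta=1.0):
        # Local Lax-Friedrichs flux; theta = 1 recovers the standard scheme,
        # smaller theta reduces the dissipative (non-convective) term.
        alpha = max_wave_speed(uL, uR)
        return 0.5 * (f(uL) + f(uR)) - 0.5 * theta * alpha * (uR - uL)

    def regulator(local_gradient, max_local_gradient, theta_min=0.1):
        # Hypothetical regulator: full upwinding (theta -> 1) across strong local
        # gradients, smoothly decreasing to theta_min where the solution is smooth.
        frac = abs(local_gradient) / (abs(max_local_gradient) + 1e-14)
        return theta_min + (1.0 - theta_min) * min(frac, 1.0)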
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress has been assumed to be a temperature-dependent inverse power law model.
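A common form of such a life model (a sketch with hypothetical fitted parameters; the abstract does not give the exact parameterisation) combines an inverse power law in the electric stress with an Arrhenius-type temperature factor:

    import math

    def life_estimate(E, T, k, n, B):
        # Illustrative temperature-dependent inverse power law: nominal life falls
        # as a power of the electric stress E and depends on the absolute
        # temperature T through an Arrhenius factor; k, n, B are fitted constants.
        return k * E ** (-n) * math.exp(B / T)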
Abstract:
Nonlinear equations in mathematical physics and engineering are solved by linearizing the equations, forming various iterative procedures and then executing the numerical simulation. For strongly nonlinear problems, the solution obtained in the iterative process can diverge due to numerical instability. As a result, the application of numerical simulation to strongly nonlinear problems is limited. Helicopter aeroelasticity involves the solution of systems of nonlinear equations in a computationally expensive environment. Reliable solution methods that do not need a Jacobian calculation at each iteration are needed for this problem. In this paper, a comparative study is done by incorporating different methods for solving the nonlinear equations in helicopter trim. Three different methods based on calculating the Jacobian at the initial guess are investigated. (C) 2011 Elsevier Masson SAS. All rights reserved.
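As an illustration of the idea of computing the Jacobian only at the initial guess (a minimal sketch, not any of the paper's three specific methods):

    import numpy as np

    def frozen_jacobian_newton(F, J, x0, tol=1e-10, max_iter=100):
        # Modified-Newton (chord) iteration: the Jacobian is evaluated once, at the
        # initial guess, and reused, trading convergence speed for a much lower
        # cost per iteration.
        x = np.asarray(x0, dtype=float)
        J0 = J(x)
        for _ in range(max_iter):
            dx = np.linalg.solve(J0, -F(x))
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x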
Abstract:
The present study performs a spatial and temporal trend analysis of annual, monthly and seasonal maximum and minimum temperatures (t_max, t_min) in India. Recent trends in annual, monthly, winter, pre-monsoon, monsoon and post-monsoon extreme temperatures (t_max, t_min) have been analyzed for three time slots, viz. 1901-2003, 1948-2003 and 1970-2003. For this purpose, time series of extreme temperatures for India as a whole and for seven homogeneous regions, viz. Western Himalaya (WH), Northwest (NW), Northeast (NE), North Central (NC), East coast (EC), West coast (WC) and Interior Peninsula (IP), are considered. Rigorous trend detection has been carried out using a variety of non-parametric methods that account for the effect of serial correlation in the analysis. During the last three decades a minimum temperature trend is present in All India as well as in all temperature-homogeneous regions of India, either at the annual level or at some seasonal level (winter, pre-monsoon, monsoon, post-monsoon). The results agree with the earlier observation that the trend in minimum temperature is significant in the last three decades over India (Kothawale et al., 2010). The sequential MK test reveals that most of the trends in both maximum and minimum temperature began after 1970, at either annual or seasonal levels. (C) 2012 Elsevier B.V. All rights reserved.
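For reference, the core of the (unmodified) Mann-Kendall test used in such trend analyses can be sketched as follows; the serial-correlation corrections mentioned above (commonly pre-whitening or a variance correction) are not shown:

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        # Basic Mann-Kendall trend test: S counts concordant minus discordant pairs;
        # the p-value uses the normal approximation without a tie correction.
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p_value = 2 * (1 - norm.cdf(abs(z)))
        return s, p_value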
Abstract:
In many real-world prediction problems the output is a structured object such as a sequence, a tree or a graph. Such problems range from natural language processing to computational biology and computer vision, and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and the stochastic gradient descent (SGD) method. Further, SDM reaches steady-state generalization performance faster than the SGD method. The proposed SDM is thus a useful alternative for large scale structured output learning.
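As an illustration of optimizing one example's dual variable at a time (a sketch for a plain binary linear SVM, not the structural SVM dual addressed in the paper):

    import numpy as np

    def dual_coordinate_ascent_svm(X, y, C=1.0, epochs=10):
        # Repeated passes over the training set; at each step the dual variable of a
        # single example is updated in closed form and the primal weights follow.
        n, d = X.shape
        alpha = np.zeros(n)
        w = np.zeros(d)
        for _ in range(epochs):
            for i in np.random.permutation(n):
                g = y[i] * X[i].dot(w) - 1.0
                q = X[i].dot(X[i])
                if q > 0:
                    alpha_new = np.clip(alpha[i] - g / q, 0.0, C)
                    w += (alpha_new - alpha[i]) * y[i] * X[i]
                    alpha[i] = alpha_new
        return w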
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity was evaluated by means of the degree of novelty and the level of resolution of the designed solutions using the Creative Product Semantic Scale (CPSS) questionnaire. The results show that there are significant differences between the amount of time devoted to activities related to understanding the problem and the typology of the design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small in intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It has also been found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to determine the extent of this effect. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. DOI: 10.1115/1.4007362
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
Abstract:
Analyses of the invariants of the velocity gradient tensor were performed on flow fields obtained by DNS of compressible plane mixing layers at convective Mach numbers Mc = 0.15 and 1.1. Joint pdfs of the 2nd and 3rd invariants were examined at turbulent/nonturbulent (T/NT) boundaries, defined as surfaces where the local vorticity first exceeds a threshold fraction of the maximum of the mean vorticity. By increasing the threshold from very small levels, the boundary points were moved closer into the turbulent region, and the effects on the pdfs of the invariants were observed. Generally, T/NT boundaries lie in sheet-like regions at both Mach numbers. At the higher Mach number a distinct lobe appears in the joint pdf isolines which has not been observed or reported before. A connection to the delayed entrainment and reduced growth rate of the higher Mach number flow is proposed.
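For reference (standard definitions, not specific to this paper), the invariants of the velocity gradient tensor are the coefficients of its characteristic polynomial lambda^3 + P lambda^2 + Q lambda + R = 0:

    import numpy as np

    def velocity_gradient_invariants(A):
        # First, second and third invariants of a 3x3 velocity gradient tensor A.
        A = np.asarray(A, dtype=float)
        P = -np.trace(A)
        Q = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
        R = -np.linalg.det(A)
        return P, Q, R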
Abstract:
Analysis of high resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. From an original image it is difficult and computationally expensive to extract roads due to the presence of other road-like features with straight edges. The image is preprocessed to improve the tolerance by reducing noise (buildings, parking lots, vegetation regions and other open spaces); roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data have been used for the experiment.
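As an illustration of the noise-suppression step (a sketch with a hypothetical window size; the paper's exact preprocessing pipeline is more involved):

    import numpy as np
    from scipy.ndimage import median_filter

    def suppress_nonlinear_noise(candidate_road_mask, size=5):
        # A median filter removes small nonlinear noise segments from a candidate
        # road mask while preserving long, thin (road-like) linear structures.
        return median_filter(candidate_road_mask.astype(np.uint8), size=size)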
Abstract:
Background: Immunotherapy is fast emerging as one of the leading modes of treatment of cancer, in combination with chemotherapy and radiation. The use of immunotoxins, proteins bearing a cell-surface receptor-specific antibody conjugated to a toxin, enhances the efficacy of cancer treatment. The toxin abrin, isolated from the Abrus precatorius plant, is a type II ribosome-inactivating protein with a catalytic efficiency higher than any other toxin belonging to this class of proteins, but it has not been exploited much for use in targeted therapy. Methods: Protein synthesis assay using [3H] L-leucine incorporation; construction and purification of the immunotoxin; study of cell death using flow cytometry; confocal scanning microscopy and sub-cellular fractionation with immunoblot analysis of the localization of proteins. Results: We used the recombinant A chain of abrin to conjugate to antibodies raised against the human gonadotropin releasing hormone receptor. The conjugate inhibited protein synthesis and also induced cell death specifically in cells expressing the receptor. The conjugate exhibited differences in the kinetics of inhibition of protein synthesis in comparison to abrin, and this was attributed to differences in internalization and trafficking of the conjugate within the cells. Moreover, observations of sequestration of the A chain into the nucleus of cells treated with abrin, but not in cells treated with the conjugate, reveal a novel pathway for the movement of the conjugate in the cells. Conclusions: This is one of the first reports of nuclear localization of abrin, a type II RIP. The immunotoxin mAb F1G4-rABRa-A, generated in our laboratory, inhibits protein synthesis specifically in cells expressing the gonadotropin releasing hormone receptor, and the pathway of internalization of the protein is distinct from that seen for abrin.
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed non-iteratively. The quality of image segmentation turned out to be on a par with that obtained using the GVF and in some cases better.
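A minimal sketch of bilateral smoothing of a gradient field (fx, fy) is given below; a Gaussian range kernel on the gradient magnitude is used here for simplicity, whereas the BVF formalism employs a different range kernel:

    import numpy as np

    def bilateral_filter_vector_field(fx, fy, radius=3, sigma_s=2.0, sigma_r=0.1):
        # Each gradient vector is replaced by a weighted average of its neighbours,
        # with weights combining spatial closeness and similarity of gradient magnitude.
        h, w = fx.shape
        mag = np.hypot(fx, fy)
        out_x, out_y = np.zeros_like(fx, dtype=float), np.zeros_like(fy, dtype=float)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
        pad = lambda a: np.pad(a, radius, mode='edge')
        fxp, fyp, magp = pad(fx), pad(fy), pad(mag)
        for i in range(h):
            for j in range(w):
                win = (slice(i, i + 2 * radius + 1), slice(j, j + 2 * radius + 1))
                rng = np.exp(-((magp[win] - mag[i, j]) ** 2) / (2 * sigma_r ** 2))
                wgt = spatial * rng
                out_x[i, j] = (wgt * fxp[win]).sum() / wgt.sum()
                out_y[i, j] = (wgt * fyp[win]).sum() / wgt.sum()
        return out_x, out_y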
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
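For reference, the RILEM work-of-fracture estimate divides the measured work of fracture (the area under the load-deflection curve) by the ligament area of the notched beam; in sketch form (self-weight correction omitted):

    def rilem_specific_fracture_energy(work_of_fracture, depth, notch_depth, thickness):
        # Size-dependent specific fracture energy G_f = W_f / (B * (D - a)),
        # where B is the specimen thickness, D the depth and a the notch depth.
        ligament_area = thickness * (depth - notch_depth)
        return work_of_fracture / ligament_area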
Abstract:
Flood is one of the most detrimental hydro-meteorological threats to mankind, which calls for very efficient flood assessment models. In this paper, we propose remote-sensing-based flood assessment using Synthetic Aperture Radar (SAR) images because of their imperviousness to unfavourable weather conditions. However, SAR images suffer from speckle noise. Hence, the SAR image is processed in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise has been reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The selected Gamma MAP filtered image is segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and it is concluded that MSS is more efficient for flood mapping.
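As an illustration of the despeckling stage (a common textbook form of the Lee filter, not the exact implementation used in the paper):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7, noise_var=None):
        # Each pixel is pulled towards the local mean by a weight that is small in
        # homogeneous (speckle-dominated) regions and close to 1 near real structure.
        img = img.astype(float)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        var = np.maximum(mean_sq - mean ** 2, 0.0)
        if noise_var is None:
            noise_var = np.mean(var)  # crude global estimate of the speckle variance
        weight = var / (var + noise_var + 1e-12)
        return mean + weight * (img - mean)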