186 results for Gradient methods
Abstract:
This paper presents an experimental study conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities carried out in the design process, depending on the method employed; in other words, whether the design methods used make a difference in the profile of time spent across the design activities. The second was to analyze whether there is any relationship between the time spent on design process activities and the degree of creativity in the solutions obtained. Creativity was evaluated by means of the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show significant differences between the amount of time devoted to activities related to understanding the problem and the typology of the design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small in intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It has also been found that the amount of time spent in each design phase influences the results in terms of creativity, but the results are not strong enough to determine the extent of this influence. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integration by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of the non-wetting droplets. In computations of the wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value for the slip number has to be used in computations of the wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement. Further, the equilibrium model also incorporates the contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
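For reference, a common form of the Navier-slip with friction condition on the liquid-solid interface (notation assumed here, not quoted from the paper; sign conventions vary between formulations) prescribes no penetration together with a tangential stress proportional to the slip velocity:
\[ \mathbf{u}\cdot\boldsymbol{\nu} = 0, \qquad \beta\,\mathbf{u}\cdot\boldsymbol{\tau}_i = -\,\boldsymbol{\tau}_i\cdot\big(\mathbb{S}(\mathbf{u})\,\boldsymbol{\nu}\big), \quad i = 1,2, \]
where \(\boldsymbol{\nu}\) and \(\boldsymbol{\tau}_i\) are the unit normal and tangent vectors at the wall, \(\mathbb{S}(\mathbf{u})\) is the rate-of-deformation tensor and \(\beta\) a friction coefficient; the slip number mentioned in the abstract scales this coefficient, so a large slip number weakens the wall friction at the contact line.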
Abstract:
Analyses of the invariants of the velocity gradient tensor were performed on flow fields obtained by DNS of compressible plane mixing layers at convective Mach numbers Mc = 0.15 and 1.1. Joint pdfs of the second and third invariants were examined at turbulent/nonturbulent (T/NT) boundaries, defined as surfaces where the local vorticity first exceeds a threshold fraction of the maximum of the mean vorticity. By increasing the threshold from very small levels, the boundary points were moved closer into the turbulent region, and the effects on the pdfs of the invariants were observed. Generally, T/NT boundaries are in sheet-like regions at both Mach numbers. At the higher Mach number a distinct lobe appears in the joint pdf isolines which has not been observed or reported before. A connection to the delayed entrainment and reduced growth rate of the higher Mach number flow is proposed.
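For context, the invariants in question are the coefficients of the characteristic polynomial of the velocity gradient tensor \(A = \nabla\mathbf{u}\) (standard definitions, not specific to this paper):
\[ P = -\operatorname{tr}A, \qquad Q = \tfrac{1}{2}\big((\operatorname{tr}A)^2 - \operatorname{tr}(A^2)\big), \qquad R = -\det A, \]
so that for incompressible flow (\(\operatorname{tr}A = 0\)) the joint pdf of \((Q, R)\) distinguishes sheet-like, tube-like and dissipation-dominated regions of the turbulence, while in the compressible case the first invariant \(P\) is nonzero and the pdf topology can change with Mach number.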
Abstract:
Analysis of high resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its important tasks. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data were used for the experiment.
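As background, the two segmentation engines named above can be summarized by their standard governing equations (textbook formulations, not quoted from the paper). The level set method evolves a curve implicitly via
\[ \frac{\partial \phi}{\partial t} + F\,\lvert \nabla\phi \rvert = 0, \]
where the zero level set of \(\phi\) tracks the road boundary and \(F\) is a speed function derived from image gradients, while the mean shift iteration repeatedly moves each point by the mean shift vector
\[ \mathbf{m}(\mathbf{x}) = \frac{\sum_i \mathbf{x}_i\, K\!\left(\tfrac{\mathbf{x}-\mathbf{x}_i}{h}\right)}{\sum_i K\!\left(\tfrac{\mathbf{x}-\mathbf{x}_i}{h}\right)} - \mathbf{x}, \]
with kernel \(K\) and bandwidth \(h\), converging to modes of the feature density that define the segments.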
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF, and in some cases better.
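For comparison, the iterative GVF of Xu and Prince is defined as the minimizer of the energy (their standard formulation)
\[ E(\mathbf{v}) = \iint \mu\left(u_x^2 + u_y^2 + v_x^2 + v_y^2\right) + \lvert\nabla f\rvert^2\,\lvert \mathbf{v} - \nabla f \rvert^2 \, dx\, dy, \]
where \(f\) is the edge map, \(\mathbf{v} = (u, v)\) the sought vector field and \(\mu\) a regularization parameter: the field is smoothed where \(\lvert\nabla f\rvert\) is small and pinned to \(\nabla f\) near edges. The BVF proposed here replaces this iterative minimization with a noniterative bilateral filtering of the gradient field.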
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G(f)) in order to obtain a size-independent value (G(F)). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G(f) measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G(F) is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
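For reference, the RILEM work-of-fracture measure divides the total work of fracture by the ligament area of the notched three-point bend specimen (standard definition):
\[ G_f = \frac{W_0 + mg\,\delta_0}{B\,(D - a_0)}, \]
where \(W_0\) is the area under the measured load-deflection curve, \(mg\,\delta_0\) the self-weight correction, \(B\) the specimen thickness, \(D\) its depth and \(a_0\) the notch depth. It is this \(G_f\) that varies with specimen size and shape and must be corrected to obtain the size-independent \(G_F\).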
Abstract:
Floods are among the most detrimental hydro-meteorological threats to mankind, which compels the development of highly efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) images because of their imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise. Hence, the SAR image is processed in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The selected Gamma MAP filtered image is segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). The GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. The Kosi river flood is considered as a test case. The segmentation results of both methods are comprehensively analysed, and it is concluded that MSS is efficient for flood mapping.
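To illustrate the first stage, the Lee filter is a local-statistics estimator of the schematic form (standard formulation, not necessarily the authors' exact parameterization):
\[ \hat{I}(\mathbf{x}) = \bar{I}(\mathbf{x}) + k(\mathbf{x})\,\big(I(\mathbf{x}) - \bar{I}(\mathbf{x})\big), \qquad k = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_n^2}, \]
where \(\bar{I}\) and \(\sigma_x^2\) are the local window mean and signal variance and \(\sigma_n^2\) the speckle noise variance, so homogeneous regions are smoothed (\(k \to 0\)) while edges are preserved (\(k \to 1\)). The Frost and Gamma MAP filters replace this linear estimate with an exponentially damped convolution kernel and a maximum a posteriori estimate under a Gamma-distributed scene prior, respectively.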
Abstract:
Fast curve-fitting procedures are proposed for vertical and radial consolidation for rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and ultimate settlement are minimized in both cases. A quick procedure is proposed in vertical consolidation that determines how far the calculated value is from the true value of the coefficient of consolidation. In radial consolidation no such procedure is required because at 10-15% consolidation the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or time of load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, load increment ratio, and compression index. (C) 2013 American Society of Civil Engineers.
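For context, the consolidation percentages above refer to the average degree of consolidation \(U\) in Terzaghi's one-dimensional theory, where the standard relations (not taken from the paper itself) are
\[ T_v = \frac{c_v\, t}{d^2}, \qquad T_v = \frac{\pi}{4}\,U^2 \quad (U \le 0.6), \]
with \(c_v\) the coefficient of vertical consolidation, \(t\) the elapsed time, \(d\) the drainage path length and \(T_v\) the dimensionless time factor; curve-fitting procedures estimate \(c_v\) by matching measured settlement-time data to these relations.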
Abstract:
The hydrogeological and climatic effects on the chemical behavior of groundwater are studied along a climatic gradient in a river basin. 'Semi-arid' (500-800 mm of mean annual rainfall), 'sub-humid' (800-1,200 mm/year) and 'humid' (1,200-1,500 mm/year) are the climatic zones chosen along the granito-gneissic plains of the Kabini basin in South India for the present analysis. Data on groundwater chemistry are initially checked for quality using the NICB ratio (< +/- 5 %), EC versus TZ+ (~0.85 correlation), EC versus TDS and EC versus TH analyses. Groundwater in the three climatic zones is 'hard' to 'very hard' in terms of Ca-Mg hardness. Polluted wells are identified (> 40 % pollution) and eliminated from the characterization. Piper's diagram with mean concentrations indicates the evolution of CaNaHCO3 (semi-arid) from CaHCO3 (humid zone) along the climatic gradient. Carbonates dominate the other anions, and strong acids exceed weak acids in the region. Mule Hole SEW, an experimental watershed in the sub-humid zone, is characterized initially using hydrogeochemistry and is observed to be a replica of the entire sub-humid zone (with 25 wells). Extension of the studies to the entire basin (120 wells) showed a chemical gradient along the climatic gradient, with the sub-humid zone bridging the semi-arid and humid zones. The Ca/Na molar ratio varies by more than 100 times from the semi-arid to the humid zone. The semi-arid zone is more silicaceous than the sub-humid zone, while the humid zone is more carbonaceous (Ca/Cl ~ 14). Along the climatic gradient, groundwater is undersaturated (humid), saturated (sub-humid) and slightly supersaturated (semi-arid) with respect to calcite and dolomite. Concentration-depth profiles support the geological stratification, i.e., approximately 18 m of saprolite and ~25 m of fractured rock with parent gneiss beneath. All the wells are classified into four groups based on groundwater fluctuations, and further into 'deep' and 'shallow' based on the depth to groundwater. The higher the fluctuations, the larger the impact on groundwater chemistry. Actual seasonal patterns are identified using a 'recharge-discharge' concept based on rainfall intensity instead of the traditional monsoon-non-monsoon concept. Non-pumped wells have lower Na/Cl and Ca/Cl ratios in the recharge period than in the discharge period (dilution). A few other wells, which are subjected to pumping, still exhibit dilution chemistry even though water level fluctuations are high, owing to annual recharge. Other wells, which do not receive sufficient rainfall and are constantly pumped, show higher concentrations in the recharge period than in the discharge period (anti-dilution). In summary, the recharge-discharge concept demarcates the pumped wells from the natural deep wells, thus characterizing the basin.
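The NICB quality check cited above is the normalized inorganic charge balance (a standard hydrochemical definition, assumed here):
\[ \mathrm{NICB} = \frac{\mathrm{TZ}^{+} - \mathrm{TZ}^{-}}{\mathrm{TZ}^{+} + \mathrm{TZ}^{-}} \times 100\,\%, \]
where \(\mathrm{TZ}^{+}\) and \(\mathrm{TZ}^{-}\) are the total cation and anion charges in meq/L; analyses are accepted when the imbalance lies within the \(\pm 5\,\%\) window quoted in the abstract.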
Abstract:
We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in which each party learns a local classifier from its own data, and a third party then aggregates these classifiers in a privacy-preserving manner using a cryptographic scheme. The generalization performance of their algorithm is sensitive to the number of parties and the relative fractions of data owned by the different parties. In this paper, we describe a new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective rather than combining classifiers learned from optimizing local objectives. The algorithm achieves a slightly weaker form of differential privacy than that of [7], but provides improved generalization guarantees that do not depend on the number of parties or the relative sizes of the individual data sets. Experimental results corroborate our theoretical findings.
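A minimal sketch of the kind of noisy stochastic gradient step the abstract describes, assuming per-example gradient clipping and Laplace noise calibrated to a privacy parameter eps (illustrative only; the paper's exact mechanism, sensitivity analysis and multiparty aggregation protocol are not reproduced here):

```python
import numpy as np

def dp_sgd(grad_fn, w0, data, epochs=10, lr=0.1, clip=1.0, eps=0.5, rng=None):
    """Differentially private SGD sketch: clip each per-example gradient
    to bound its sensitivity, then add Laplace noise scaled by clip/eps."""
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(epochs):
        for x, y in data:
            g = grad_fn(w, x, y)                                # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip to norm <= clip
            noise = rng.laplace(0.0, clip / eps, size=g.shape)  # privacy noise
            w -= lr * (g + noise)                               # noisy descent step
    return w

# Hypothetical usage with a logistic-loss gradient (labels y in {-1, +1}):
def logistic_grad(w, x, y):
    return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

# w = dp_sgd(logistic_grad, np.zeros(5), [(np.ones(5), 1.0)], eps=0.5)
```

The design point the abstract makes is that the noise is injected while optimizing the single overall objective over all parties' data, rather than perturbing separately trained local classifiers and aggregating them afterwards.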
Abstract:
Thyroid hormones are essential for the development and differentiation of all cells of the human body. They regulate protein, fat, and carbohydrate metabolism. In this Account, we discuss the synthesis, structure, and mechanism of action of thyroid hormones and their analogues. The prohormone thyroxine (T4) is synthesized on thyroglobulin by thyroid peroxidase (TPO), a heme enzyme that uses iodide and hydrogen peroxide to perform iodination and phenolic coupling reactions. The monodeiodination of T4 to 3,3',5-triiodothyronine (T3) by selenium-containing deiodinases (ID-1, ID-2) is a key step in the activation of thyroid hormones. The type 3 deiodinase (ID-3) catalyzes the deactivation of thyroid hormone in a process that removes iodine selectively from the tyrosyl ring of T4 to produce 3,3',5'-triiodothyronine (rT3). Several physiological and pathological stimuli influence thyroid hormone synthesis. The overproduction of thyroid hormones leads to hyperthyroidism, which is treated by antithyroid drugs that either inhibit the thyroid hormone biosynthesis and/or decrease the conversion of T4 to T3. Antithyroid drugs are thiourea-based compounds, which include propylthiouracil (PTU), methimazole (MMI), and carbimazole (CBZ). The thyroid gland actively concentrates these heterocyclic compounds against a concentration gradient. Recently, the selenium analogues of PTU, MMI, and CBZ attracted significant attention because the selenium moiety in these compounds has a higher nucleophilicity than that of the sulfur moiety. Researchers have developed new methods for the synthesis of the selenium compounds. Several experimental and theoretical investigations revealed that the selone (C=Se) in the selenium analogues is more polarized than the thione (C=S) in the sulfur compounds, and the selones exist predominantly in their zwitterionic forms. Although the thionamide-based antithyroid drugs have been used for almost 70 years, the mechanism of their action is not completely understood. Most investigations have revealed that MMI and PTU irreversibly inhibit TPO. PTU, MTU, and their selenium analogues also inhibit ID-1, most likely by reacting with the selenenyl iodide intermediate. The good ID-1 inhibitory activity of PTU and its analogues can be ascribed to the presence of the -N(H)-C(=O)- functionality that can form hydrogen bonds with nearby amino acid residues in the selenenyl sulfide state. In addition to the TPO and ID-1 inhibition, the selenium analogues are very good antioxidants. In the presence of cellular reducing agents such as GSH, these compounds catalytically reduce hydrogen peroxide. They can also efficiently scavenge peroxynitrite, a potent biological oxidant and nitrating agent.
Abstract:
A review of high operating temperature (HOT) infrared (IR) photon detector technology vis-a-vis material requirements, device design and state of the art achieved is presented in this article. The HOT photon detector concept offers the promise of operation at temperatures above 120 K to near room temperature. Advantages are reduction in system size, weight, cost and increase in system reliability. A theoretical study of the thermal generation-recombination (g-r) processes such as Auger and defect related Shockley Read Hall (SRH) recombination responsible for increasing dark current in HgCdTe detectors is presented. Results of theoretical analysis are used to evaluate performance of long wavelength (LW) and mid wavelength (MW) IR detectors at high operating temperatures. (C) 2013 Elsevier B.V. All rights reserved.
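For background, the SRH net recombination rate mentioned above has the standard form (a general semiconductor result, not the paper's specific HgCdTe model):
\[ U_{\mathrm{SRH}} = \frac{pn - n_i^2}{\tau_p\,(n + n_1) + \tau_n\,(p + p_1)}, \]
where \(n\) and \(p\) are the carrier concentrations, \(n_i\) the intrinsic concentration, \(\tau_n\), \(\tau_p\) the capture lifetimes and \(n_1\), \(p_1\) the trap-level occupancy factors. Together with the intrinsic Auger rate, this thermal generation sets the dark current floor that ultimately limits how high the operating temperature of an IR photon detector can be pushed.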
Abstract:
In this article, we derive an a posteriori error estimator for various discontinuous Galerkin (DG) methods that are proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator we have obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. In the analysis, we construct a non-linear smoothing function mapping DG finite element space to CG finite element space and use it as a key tool. The error estimator consists of a discrete Lagrange multiplier associated with the obstacle constraint. It is shown for non-over-penalized DG methods that the discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.
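For context, the elliptic obstacle problem is the variational inequality (standard formulation): find
\[ u \in K := \{\, v \in H_0^1(\Omega) : v \ge \psi \ \text{a.e. in } \Omega \,\}, \qquad a(u, v-u) \ge (f, v-u) \quad \forall\, v \in K, \]
where \(\psi\) is the obstacle and \(a(\cdot,\cdot)\) the elliptic bilinear form; the discrete Lagrange multiplier appearing in the estimator encodes the contact force associated with the constraint \(u \ge \psi\).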
Abstract:
Energy research is to a large extent materials research, encompassing the physics and chemistry of materials, including their synthesis, processing toward components and design toward architectures, allowing for their functionality as energy devices, extending toward their operation parameters and environment, and including also their degradation, limited life, ultimate failure and potential recycling. In all these stages, X-ray and electron spectroscopy are helpful methods for analysis, characterization and diagnostics for the engineer and for the researcher working in basic science. This paper gives a short overview of experiments with X-ray and electron spectroscopy for solar energy and water splitting materials and also addresses the issue of solar fuel, a relatively new topic in energy research. The featured systems are iron oxide and tungsten oxide as photoanodes, and hydrogenases as molecular systems. We present surface and subsurface studies with ambient pressure XPS and hard X-ray XPS, resonant photoemission, light induced effects in resonant photoemission experiments and a photo-electrochemical in situ/operando NEXAFS experiment in a liquid cell, and nuclear resonant vibrational spectroscopy (NRVS). (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties with scattering, which results in a low signal-to-noise ratio and limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons with longer wavelengths minimize scattering and allow access to deeper brain regions at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few of the key studies in the field of learning and memory that would not have been possible without the advent of MpM.
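The localization advantage follows from the intensity dependence of multiphoton absorption (a standard result, stated here for completeness): the two-photon excitation rate scales as
\[ R_{2p} \propto I^2, \]
so appreciable fluorescence is generated only near the focal volume, where the intensity is highest. This confines excitation in all three dimensions without a confocal pinhole and permits the longer, less-scattered excitation wavelengths noted above.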