999 results for Methods : Miscellaneous
Abstract:
Effects of dynamic contact angle models on the flow dynamics of an impinging droplet in sharp interface simulations are presented in this article. In the considered finite element scheme, the free surface is tracked using the arbitrary Lagrangian-Eulerian approach. The contact angle is incorporated into the model by replacing the curvature with the Laplace-Beltrami operator and integrating by parts. Further, the Navier-slip with friction boundary condition is used to avoid stress singularities at the contact line. Our study demonstrates that the contact angle models have almost no influence on the flow dynamics of non-wetting droplets. In computations of wetting and partially wetting droplets, different contact angle models induce different flow dynamics, especially during recoiling. It is shown that a large value of the slip number has to be used in computations of wetting and partially wetting droplets in order to reduce the effects of the contact angle models. Among all models, the equilibrium model is simple and easy to implement, and it also incorporates contact angle hysteresis. Thus, the equilibrium contact angle model is preferred in sharp interface numerical schemes.
Abstract:
Analysis of high resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its key tasks. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance to noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. 1 m resolution IKONOS data were used for the experiments.
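The median-filtering step described above can be sketched in a few lines. This is a toy illustration (the array sizes, filter size and the `suppress_nonlinear_noise` name are ours, not the paper's), assuming a binary mask in which candidate roads appear as elongated regions:

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_nonlinear_noise(mask, size=3):
    """Median-filter a binary mask of candidate road pixels: small
    isolated noise segments (buildings, parking lots, open spaces)
    are removed, while elongated road-like structures survive."""
    return median_filter(mask.astype(np.uint8), size=size)

# toy mask: a 3-pixel-wide "road" plus one isolated noise pixel
mask = np.zeros((20, 20), dtype=np.uint8)
mask[9:12, :] = 1      # elongated region (road candidate)
mask[3, 5] = 1         # small nonlinear noise segment
filtered = suppress_nonlinear_noise(mask, size=3)
# the elongated structure survives; the isolated pixel is removed
```

The filter size trades off noise suppression against the minimum road width preserved; real imagery would of course need a segmentation step before this.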
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Fast curve-fitting procedures are proposed for vertical and radial consolidation under rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and the ultimate settlement are minimized in both cases. For vertical consolidation, a quick procedure is proposed that determines how far the calculated coefficient of consolidation is from its true value. In radial consolidation no such procedure is required because at 10-15% consolidation the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or the duration of a load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, the load increment ratio, and the compression index. (C) 2013 American Society of Civil Engineers.
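As a worked illustration of why applying the next increment at 50% rather than 90% consolidation saves time, Terzaghi's classical time-factor relations (standard 1-D consolidation theory, not this paper's new procedures) give the waiting time for a target degree of consolidation; the function name and the numerical values of c_v and drainage path below are illustrative:

```python
import math

def terzaghi_time(U, cv, H_dr):
    """Time to reach average degree of consolidation U (as a fraction)
    in Terzaghi 1-D vertical consolidation, with coefficient of
    consolidation cv and drainage path length H_dr.
    T_v = (pi/4)*U^2                      for U <= 0.60
    T_v = 1.781 - 0.933*log10(100*(1-U))  for U >  0.60
    """
    if U <= 0.60:
        Tv = math.pi / 4.0 * U * U
    else:
        Tv = 1.781 - 0.933 * math.log10(100.0 * (1.0 - U))
    return Tv * H_dr ** 2 / cv

cv, H = 2.0, 1.0                    # e.g. cv in m^2/year, H in m (illustrative)
t50 = terzaghi_time(0.50, cv, H)    # rapid loading: next increment at 50%
t90 = terzaghi_time(0.90, cv, H)    # conventional waiting point
```

Since T_v(50%) is about 0.197 and T_v(90%) about 0.848, each waiting period under the rapid loading scheme is shorter by a factor of roughly four.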
Abstract:
A review of high operating temperature (HOT) infrared (IR) photon detector technology vis-a-vis material requirements, device design and the state of the art achieved is presented in this article. The HOT photon detector concept offers the promise of operation at temperatures above 120 K to near room temperature. Advantages include reductions in system size, weight and cost, and an increase in system reliability. A theoretical study of the thermal generation-recombination (g-r) processes, such as Auger and defect-related Shockley-Read-Hall (SRH) recombination, responsible for increasing dark current in HgCdTe detectors is presented. Results of the theoretical analysis are used to evaluate the performance of long wavelength (LW) and mid wavelength (MW) IR detectors at high operating temperatures. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In this article, we derive an a posteriori error estimator for various discontinuous Galerkin (DG) methods proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator obtained for DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. A key tool in the analysis is a non-linear smoothing function mapping the DG finite element space to the CG finite element space. The error estimator involves a discrete Lagrange multiplier associated with the obstacle constraint, and it is shown for non-over-penalized DG methods that this discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.
Abstract:
Energy research is to a large extent materials research, encompassing the physics and chemistry of materials: their synthesis, their processing into components and design into architectures, their functionality as energy devices, their operating parameters and environment, and also their degradation, limited life, ultimate failure and potential recycling. In all these stages, X-ray and electron spectroscopy are helpful methods of analysis, characterization and diagnostics for the engineer and for the researcher working in basic science. This paper gives a short overview of experiments with X-ray and electron spectroscopy on solar energy and water splitting materials, and also addresses solar fuel, a relatively new topic in energy research. The featured systems are iron oxide and tungsten oxide as photoanodes, and hydrogenases as molecular systems. We present surface and subsurface studies with ambient pressure XPS and hard X-ray XPS, resonant photoemission, light-induced effects in resonant photoemission experiments, a photo-electrochemical in situ/operando NEXAFS experiment in a liquid cell, and nuclear resonant vibrational spectroscopy (NRVS). (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Structural dynamics of dendritic spines is one of the key correlates of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy suffer from scattering, which lowers the signal-to-noise ratio and limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons at longer wavelengths minimize scattering and allow access to deeper brain regions, several hundred microns below the surface. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few key studies in the field of learning and memory that would not have been possible without the advent of MpM.
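The localized excitation mentioned above follows from the quadratic intensity dependence of two-photon absorption: per transverse plane of a focused beam, one-photon excitation is depth-independent, while two-photon excitation is confined to the focus. A minimal numeric sketch (normalized Gaussian-beam quantities; our notation, not from the article):

```python
import numpy as np

z_R = 1.0                            # Rayleigh range (normalized)
z = np.linspace(-5.0, 5.0, 201)      # axial positions around the focus
w2 = 1.0 + (z / z_R) ** 2            # relative beam area ~ w(z)^2
I = 1.0 / w2                         # peak intensity falls as 1/w^2
one_photon = I * w2                  # signal per plane ~ I * area = constant
two_photon = I ** 2 * w2             # signal per plane ~ I^2 * area = 1/w^2
# one-photon excitation occurs equally in every plane (no sectioning);
# two-photon excitation is confined to the focal region.
```

This is why MpM achieves intrinsic optical sectioning without a confocal pinhole.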
Abstract:
Sparse recovery methods utilize l_p-norm-based regularization in the estimation problem, with 0 <= p <= 1. These methods are particularly useful when the number of independent measurements is limited, which is typical of the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation of the l_0-norm, have been deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases to show that these methods hold promise for improving reconstructed image quality.
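A common way to realize l_p-regularized recovery for p = 1 is iterative soft-thresholding (ISTA). The sketch below is a generic textbook version on a synthetic problem, not the authors' reconstruction code; the problem sizes and parameters are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=3000):
    """Iterative soft-thresholding (ISTA) for the l1-regularized
    least-squares problem  min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))       # gradient step on data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# synthetic sparse-recovery problem: 40 measurements, 100 unknowns, 3 nonzeros
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y)
```

The soft-thresholding step is what produces sparse solutions even though the number of measurements (40) is well below the number of unknowns (100).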
Abstract:
Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods used for classifying structured and complex objects like parse trees, image segments and part-of-speech tags. The datasets involved are very high-dimensional, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparse nature of the models results in slow inference. Thus, there is a need to devise new algorithms for sparse SSVM and CRF classifier design. Use of the elastic net and the L1-regularizer has already been explored for solving primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic net regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve these dual problems. This algorithm works by sequentially visiting each training set example and solving a simple subproblem restricted to a small subset of variables associated with that example. Numerical experiments on various benchmark sequence labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems and demonstrate comparable generalization performance. Thus, the proposed SAP algorithm is a useful alternative for sparse SSVM and CRF classifier design.
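Proximal methods of this kind are built around the proximal operator of the elastic-net penalty, which combines l1 shrinkage (sparsity) with l2 scaling. Below is a minimal sketch of that operator in our own notation; it is a generic building block, not the paper's SAP implementation:

```python
import numpy as np

def prox_elastic_net(v, step, l1, l2):
    """Proximal operator of the elastic-net penalty
    P(w) = l1*||w||_1 + (l2/2)*||w||_2^2, i.e. the minimizer of
    (1/(2*step))*||w - v||^2 + P(w), which in closed form is
    soft_threshold(v, step*l1) / (1 + step*l2)."""
    shrunk = np.sign(v) * np.maximum(np.abs(v) - step * l1, 0.0)
    return shrunk / (1.0 + step * l2)

v = np.array([3.0, -0.5, 1.0])
w = prox_elastic_net(v, step=1.0, l1=1.0, l2=1.0)
# small entries are zeroed (l1 part); the rest are scaled down (l2 part)
```

In a sequential scheme, an update of this form is applied to the small block of dual variables associated with each visited training example.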
Abstract:
The electronic structure of Nd1-xYxMnO3 (x = 0-0.5) is studied using x-ray absorption near-edge structure (XANES) spectroscopy at the Mn K-edge, along with DFT-based LSDA+U and real-space cluster calculations. The main edge of the spectra does not show any variation with doping. The pre-edge shows two distinct features which become well separated with doping, and its intensity decreases with doping. The theoretical XANES spectra were calculated using real-space multiple scattering methods, which reproduce the entire experimental spectra at the main edge as well as the pre-edge. Density functional theory calculations are used to obtain the Mn 4p, Mn 3d and O 2p density of states. For x = 0, the site-projected density of states at 1.7 eV above the Fermi energy shows a single peak of unoccupied e_g (spin-up) states hybridized with Mn 4p and O 2p states. For x = 0.5, this feature develops at a higher energy, is highly delocalized, and overlaps with the 3d spin-down states, which changes the pre-edge intensity. The Mn 4p DOS for both compositions shows considerable differences between the individual p_x, p_y and p_z states. For x = 0.5, there is a considerable change in the 4p orbital polarization, suggesting changes in the Jahn-Teller effect with doping. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
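A toy version of the MFCC-plus-distance comparison described above can be built from a mel filterbank, log energies and a DCT. Everything below (filter counts, sampling rate, the synthetic "calls") is illustrative, not the study's actual pipeline:

```python
import numpy as np
from scipy.fft import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filterbank on the rFFT bins (toy version)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc_frame(frame, sr, n_filters=20, n_coeffs=12):
    """MFCCs of one frame: power spectrum -> mel energies -> log -> DCT."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    fb = mel_filterbank(n_filters, len(frame), sr)
    log_energy = np.log(fb @ spec + 1e-10)
    return dct(log_energy, norm='ortho')[:n_coeffs]

sr = 8000
t = np.arange(512) / sr
call_a = np.sin(2 * np.pi * 1000 * t)   # putative model call
call_b = np.sin(2 * np.pi * 1000 * t)   # a "mimic" at the same pitch
call_c = np.sin(2 * np.pi * 3000 * t)   # an unrelated call
d = lambda x, y: np.linalg.norm(mfcc_frame(x, sr) - mfcc_frame(y, sr))
```

Pairs with small MFCC distance are candidate mimic/model matches; the study combines such automated scores with human judgments.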
Abstract:
A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in spatial characteristics of the system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology together with the computer codes will stimulate the application and testing of spatial early warning signals on real spatial data.
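One of the simplest spatial indicators of this kind is rising spatial autocorrelation. The sketch below (a lag-1, 4-neighbour estimator in the spirit of Moran's I, on synthetic fields of our own making) shows how a spatially correlated snapshot scores higher than an uncorrelated one:

```python
import numpy as np

def lag1_spatial_autocorr(grid):
    """Lag-1 spatial autocorrelation (a Moran's-I-style indicator)
    over horizontal and vertical neighbour pairs on a lattice; a rise
    over time is a candidate early warning of an approaching transition."""
    z = grid - grid.mean()
    num = (z[:-1, :] * z[1:, :]).sum() + (z[:, :-1] * z[:, 1:]).sum()
    n_pairs = z[:-1, :].size + z[:, :-1].size
    return float(num / n_pairs / z.var())

rng = np.random.default_rng(1)
noise = rng.standard_normal((50, 50))     # uncorrelated field (far from transition)
smooth = (noise + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
          + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5.0  # correlated field
```

In practice the indicator is tracked through time on successive snapshots and compared against a null model, as the abstract describes.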
Abstract:
We develop a communication theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel considering several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio for maximizing SNR. The read channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer-detector indicates an SNR gain of about 5.5 dB over uncoded data.
Abstract:
In this article, we analyse several discontinuous Galerkin (DG) methods for the Stokes problem under minimal regularity on the solution. We assume that the velocity u belongs to [H_0^1(Ω)]^d and the pressure p to L_0^2(Ω). First, we analyse standard DG methods assuming that the right-hand side f belongs to [H^{-1}(Ω) ∩ L^1(Ω)]^d. A DG method that is well defined for f belonging to [H^{-1}(Ω)]^d is then investigated. The methods under study include stabilized DG methods using equal-order spaces and inf-sup stable ones where the pressure space is one polynomial degree less than the velocity space.