53 results for Topology-based methods
Abstract:
The study extends the first order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
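For orientation, the FORM step referred to above can be illustrated with the standard Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space. The sketch below is a minimal, generic version with an analytical limit state; the names form_hlrf, g and grad_g are illustrative assumptions, not the study's implementation, which couples Matlab reliability modules with Abaqus finite element models.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad_g, n_dim, tol=1e-8, max_iter=100):
    """Standard Hasofer-Lind / Rackwitz-Fiessler FORM iteration in standard
    normal space: finds the most probable point and the reliability index."""
    u = np.zeros(n_dim)                       # start at the origin
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        # HL-RF update: project onto the linearized limit state g(u) = 0
        u_new = (dg @ u - gu) / (dg @ dg) * dg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)                  # reliability index
    return beta, norm.cdf(-beta)              # beta and failure probability

# Linear Gaussian case (exactly solvable): g(u) = 3 - u1 - u2,
# exact failure probability is Phi(-3/sqrt(2)).
beta, pf = form_hlrf(lambda u: 3 - u[0] - u[1],
                     lambda u: np.array([-1.0, -1.0]), n_dim=2)
print(beta, pf)
```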
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) offers a huge potential for designing trade-offs involving energy, power, temperature and performance of computing systems. In this paper, we evaluate three different DVFS schemes - our enhancement of a Petri net performance model based DVFS method for sequential programs to stream programs, a simple profile based Linear Scaling method, and an existing hardware based DVFS method for multithreaded applications - using multithreaded stream applications, in a full system Chip Multiprocessor (CMP) simulator. From our evaluation, we find that the software based methods achieve significant Energy/Throughput² (ET⁻²) improvements. The hardware based scheme degrades performance heavily and suffers ET⁻² loss. Our results indicate that the simple profile based scheme achieves the benefits of the complex Petri net based scheme for stream programs, and present a strong case for the need for independent voltage/frequency control for different cores of CMPs, which is lacking in most of the state-of-the-art CMPs. This is in contrast to the conclusions of a recent evaluation of per-core DVFS schemes for multithreaded applications for CMPs.
Abstract:
Purpose: Developing a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method was able to overcome the inherent limitation of the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making this method more suitable for deployment in real time. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
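A minimal sketch of the general idea, damped LSQR reconstruction combined with a simplex (Nelder-Mead) search over the regularization parameter, is given below. It uses a small synthetic problem and, purely as a stand-in for the paper's actual selection criterion, scores each candidate parameter against a known ground truth; the functions reconstruct and objective are hypothetical and not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic ill-posed problem: smooth forward operator A, noisy data b.
n = 60
A = np.exp(-0.1 * (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)

def reconstruct(lam):
    # Damped LSQR solves min ||A x - b||^2 + lam^2 ||x||^2 via bidiagonalization
    return lsqr(A, b, damp=lam)[0]

def objective(log_lam):
    # Illustrative criterion only: error against the known synthetic truth
    x = reconstruct(np.exp(log_lam[0]))
    return np.linalg.norm(x - x_true)

res = minimize(objective, x0=[np.log(1e-2)], method='Nelder-Mead')
print("selected regularization parameter:", np.exp(res.x[0]))
```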
Abstract:
N-gram language models and lexicon-based word recognition are popular methods in the literature for improving recognition accuracies of online and offline handwritten data. However, there are very few works that deal with the application of these techniques to online Tamil handwritten data. In this paper, we explore methods of developing symbol-level language models and a lexicon from a large Tamil text corpus and their application to improving symbol and word recognition accuracies. On a test database of around 2000 words, we find that bigram language models improve symbol (3%) and word recognition (8%) accuracies, and while lexicon-based methods offer much greater improvements (30%) in terms of word recognition, there is a large dependency on choosing the right lexicon. For comparison with lexicon- and language-model-based methods, we have also explored re-evaluation techniques which involve the use of expert classifiers to improve symbol and word recognition accuracies.
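As a rough illustration of bigram-language-model rescoring of classifier outputs, the toy sketch below combines per-symbol classifier probabilities with bigram probabilities over a tiny hand-made lattice; all symbols, probabilities and the lm_weight parameter are invented for illustration and do not correspond to the Tamil symbol set or the models used in the paper.

```python
import math

# All symbols and probabilities below are invented for illustration only.
bigram = {('<s>', 'a'): 0.6, ('<s>', 'o'): 0.4,
          ('a', 'm'): 0.7, ('a', 'n'): 0.3,
          ('o', 'm'): 0.2, ('o', 'n'): 0.8}

candidates = [                      # per-position (symbol, classifier score)
    [('a', 0.55), ('o', 0.45)],
    [('n', 0.50), ('m', 0.50)],
]

def rescore(candidates, bigram, lm_weight=1.0):
    """Exhaustively rescore a small candidate lattice: classifier log-score
    plus a weighted bigram language-model log-score."""
    hyps = [([], 0.0)]              # (partial sequence, log-score)
    for position in candidates:
        new_hyps = []
        for seq, score in hyps:
            prev = seq[-1] if seq else '<s>'
            for sym, p_clf in position:
                p_lm = bigram.get((prev, sym), 1e-6)   # floor unseen bigrams
                new_hyps.append((seq + [sym],
                                 score + math.log(p_clf)
                                 + lm_weight * math.log(p_lm)))
        hyps = new_hyps
    return max(hyps, key=lambda h: h[1])

print(rescore(candidates, bigram))  # best joint classifier + LM hypothesis
```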
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots, to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period-based methods in terms of its throughput performance.
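The transmission-probability calculation builds on Bianchi's saturated DCF model. A minimal sketch of the standard Bianchi fixed point, without the paper's extension for scan duration and physical slots, is shown below; W and m are assumed backoff parameters chosen only for illustration.

```python
from scipy.optimize import brentq

def bianchi_tau(n, W=32, m=5):
    """Per-station transmission probability in a saturated 802.11 DCF network
    (standard Bianchi model): solve the fixed point relating tau and the
    conditional collision probability p."""
    def fixed_point(tau):
        p = 1.0 - (1.0 - tau) ** (n - 1)          # conditional collision prob.
        rhs = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                 + p * W * (1 - (2 * p) ** m))
        return tau - rhs
    return brentq(fixed_point, 1e-9, 1 - 1e-9)

for n in (5, 10, 20):                             # n contending stations
    print(n, bianchi_tau(n))
```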
Abstract:
A circuit topology based on an accumulate-and-use philosophy has been developed to harvest RF energy from ambient radiation such as that from cellular towers. The main functional units of this system are an antenna, a tuned rectifier, a supercapacitor, a gated boost converter and the necessary power management circuits. Various RF aspects of the design philosophy for maximizing the conversion efficiency at an input power level of 15 μW are presented here. The system is characterized in an anechoic chamber, and it has been established that this topology can harvest RF power densities as low as 180 μW/m² and can adaptively operate the load depending on the incident radiation levels. The output of this system can be easily configured at a desired voltage in the range 2.2-4.5 V. A practical CMOS load - a low-power wireless radio module - has been demonstrated to operate intermittently by this approach. This topology can be easily modified for driving other practical loads from harvested RF energy at different frequencies and power levels.
Abstract:
This article presents the investigation of the coordination behavior of a newly synthesized tricarboxylate ligand, obtained by joining imidazole dicarboxylic acid and 4-carboxybenzyl moieties [cbimdaH3; 1-(4-carboxybenzyl)-1H-imidazole-4,5-dicarboxylic acid]. Two novel coordination polymers were obtained through solvothermal reactions under similar conditions, namely [Sr(cbimdaH)(H2O)]n (1) and [Cd2(cbimdaH)2(H2O)6]n·(DMF)3n(H2O)3n (2), with the ligand behaving as a dianionic tricarboxylate linker. The single crystal X-ray structures show that while 1 forms a 3D coordination polymer, 2 forms a 1D polymer which is further assembled in three dimensions through supramolecular interactions (H-bonding). Complex 1 consists of Sr2+ ions in a distorted dodecahedral coordination geometry, while 2 consists of Cd2+ ions in distorted pentagonal bipyramidal geometries. A topological study reveals that 1 has a new topology based on a 5,6-coordinated 3D net architecture. The luminescence properties of the complexes in the solid state and their thermal stabilities were studied.
Abstract:
Compliant mechanisms are elastic continua used to transmit or transform force and motion mechanically. The topology optimization methods developed for compliant mechanisms also give the shape for a chosen parameterization of the design domain with a fixed mesh. However, in these methods, the shapes of the flexible segments in the resulting optimal solutions are restricted either by the type or the resolution of the design parameterization. This limitation is overcome in this paper by focusing on optimizing the skeletal shape of the compliant segments in a given topology. It is accomplished by identifying such segments in the topology and representing them using Bezier curves. The vertices of the Bezier control polygon are used to parameterize the shape-design space. Uniform parameter steps of the Bezier curves naturally enable adaptive finite element discretization of the segments as their shapes change. Practical constraints such as avoiding intersections with other segments, self-intersections, and restrictions on the available space and material, are incorporated into the formulation. A multi-criteria function from our prior work is used as the objective. Analytical sensitivity analysis for the objective and constraints is presented and is used in the numerical optimization. Examples are included to illustrate the shape optimization method.
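A small sketch of the underlying shape parameterization, evaluating a Bezier curve from its control polygon at uniform parameter steps to obtain nodes along a segment's skeleton, is given below; the control points and node count are hypothetical, and the finite element and optimization machinery of the paper is not included.

```python
import numpy as np

def bezier_points(control_points, n_nodes=21):
    """Evaluate a Bezier curve (de Casteljau recursion) at uniform parameter
    steps; the points can serve as finite element nodes along a segment."""
    P = np.asarray(control_points, dtype=float)
    pts = []
    for t in np.linspace(0.0, 1.0, n_nodes):
        Q = P.copy()
        while len(Q) > 1:                     # repeated linear interpolation
            Q = (1 - t) * Q[:-1] + t * Q[1:]
        pts.append(Q[0])
    return np.array(pts)

# Hypothetical control polygon for one flexible segment (design variables)
ctrl = [(0.0, 0.0), (0.3, 0.4), (0.7, 0.1), (1.0, 0.5)]
nodes = bezier_points(ctrl)
print(nodes[:3])
```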
Abstract:
Layered transition metal dichalcogenides (TMDs), such as MoS2, are candidate materials for next generation 2-D electronic and optoelectronic devices. The ability to grow uniform, crystalline, atomic layers over large areas is the key to developing such technology. We report a chemical vapor deposition (CVD) technique which yields n-layered MoS2 on a variety of substrates. A generic approach suitable to all TMDs, involving thermodynamic modeling to identify the appropriate CVD process window, and quantitative control of the vapor phase supersaturation, is demonstrated. All reactant sources in our method are outside the growth chamber, a significant improvement over vapor-based methods for atomic layers reported to date. The as-deposited layers are p-type, due to Mo deficiency, with field effect and Hall hole mobilities of up to 2.4 cm² V⁻¹ s⁻¹ and 44 cm² V⁻¹ s⁻¹ respectively. These are among the best reported yet for CVD MoS2.
Abstract:
In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super resolution methods are learning-based methods, where a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, thus reducing the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms and show that we are able to achieve better and faster reconstruction without any training phase.
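A brute-force stand-in for the nearest-neighbour-field idea is sketched below: every patch of the (already upsampled) input is replaced by the high-resolution counterpart of its nearest patch in a single exemplar pair assumed to live on the same pixel grid. This is not the FeatureMatch ANNF algorithm, only a simplified illustration of single-image, dictionary-free patch replacement.

```python
import numpy as np
from scipy.spatial import cKDTree

def patchify(img, p):
    """All p x p patches of a 2-D image, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def single_image_sr(low, exemplar_low, exemplar_high, p=5):
    """Replace each patch of `low` by the high-resolution counterpart of its
    nearest neighbour in a single exemplar pair (assumed pixel-aligned);
    overlapping patch contributions are averaged."""
    tree = cKDTree(patchify(exemplar_low, p))
    hi_patches = patchify(exemplar_high, p)
    out = np.zeros_like(low, dtype=float)
    weight = np.zeros_like(low, dtype=float)
    H, W = low.shape
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            _, idx = tree.query(low[i:i + p, j:j + p].ravel())
            out[i:i + p, j:j + p] += hi_patches[idx].reshape(p, p)
            weight[i:i + p, j:j + p] += 1.0
    return out / np.maximum(weight, 1.0)

rng = np.random.default_rng(0)
exemplar_high = rng.random((20, 20))
exemplar_low = exemplar_high.round(1)   # crude stand-in for a degraded version
print(single_image_sr(exemplar_low.copy(), exemplar_low, exemplar_high).shape)
```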
Abstract:
Morphological changes in cells associated with disease states are often assessed using clinical microscopy. However, changes in the chemical composition of cells can also be used to detect disease conditions. Optical absorption measurements carried out on single cells using inexpensive sources and detectors can help assess the chemical composition of cells and thereby enable detection of diseases. In this article, we present a novel technique capable of simultaneously detecting changes in the morphology and chemical composition of cells. The presented technique enables characterization of optical absorbance-based methods against microscopy for detection of disease states. Using the technique, we have been able to achieve a throughput of about 1000 cells per second. We demonstrate the proof of principle by detecting malaria in a given blood sample. The presented technique is capable of detecting very low levels of parasitemia within time scales comparable to antigen-based rapid diagnostic tests.
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices obtained based on the L2-norm involve second order moments of the response, and, when applied to the case of an independent and identically distributed sequence of input random variables, they are shown to be related to the classical Sobol' response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. The measure is shown to be conceptually a more satisfying alternative to the Kullback-Leibler divergence based analysis which has been reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of the natural frequencies of a random multi-degree of freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
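Since the L2-norm indices reduce to the classical Sobol' indices for independent and identically distributed inputs, the Monte Carlo sketch below estimates first-order Sobol' indices with the standard pick-freeze (Saltelli) estimator for independent standard-normal inputs; it does not implement the paper's Hellinger-metric measure or its treatment of dependent inputs.

```python
import numpy as np

def sobol_first_order(model, n_vars, n_samples=200_000, seed=0):
    """Pick-freeze (Saltelli) Monte Carlo estimate of classical first-order
    Sobol' indices for independent standard-normal inputs."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_samples, n_vars))
    B = rng.standard_normal((n_samples, n_vars))
    yA, yB = model(A), model(B)
    var_y = np.var(yA, ddof=1)
    S = np.empty(n_vars)
    for i in range(n_vars):
        A_Bi = A.copy()
        A_Bi[:, i] = B[:, i]        # A_Bi and B now share only the input x_i
        S[i] = np.mean(yB * (model(A_Bi) - yA)) / var_y
    return S

# Toy model y = x1 + 2*x2: exact first-order indices are 0.2 and 0.8
print(sobol_first_order(lambda X: X[:, 0] + 2 * X[:, 1], n_vars=2))
```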
Abstract:
Further improvement in performance, to achieve near-transparent-quality LSF quantization, is shown to be possible by using higher order two-dimensional (2-D) prediction in the coefficient domain. The prediction is performed in a closed-loop manner so that the LSF reconstruction error is the same as the quantization error of the prediction residual. We show that an optimum 2-D predictor, exploiting both inter-frame and intra-frame correlations, performs better than existing predictive methods. A computationally efficient split vector quantization technique is used to implement the proposed 2-D prediction based method. We show further improvement in performance by using a weighted Euclidean distance.
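A minimal sketch of closed-loop 2-D (intra-frame plus inter-frame) predictive quantization is given below, with a uniform scalar quantizer standing in for the paper's split vector quantizer and made-up predictor coefficients; it only illustrates why the reconstruction error equals the quantization error of the residual.

```python
import numpy as np

def quantize(x, step=0.02):
    """Uniform scalar quantizer, standing in for a split vector quantizer."""
    return step * np.round(x / step)

def closed_loop_2d_dpcm(lsf, a_intra=0.5, a_inter=0.4):
    """Closed-loop 2-D prediction: each coefficient is predicted from the
    previously reconstructed coefficient in the same frame (intra-frame) and
    from the reconstructed coefficient of the previous frame (inter-frame);
    only the prediction residual is quantized."""
    n_frames, order = lsf.shape
    rec = np.zeros_like(lsf)
    for t in range(n_frames):
        for k in range(order):
            pred = ((a_intra * rec[t, k - 1] if k > 0 else 0.0)
                    + (a_inter * rec[t - 1, k] if t > 0 else 0.0))
            rec[t, k] = pred + quantize(lsf[t, k] - pred)
    return rec

# Toy LSF-like data: sorted frequencies in (0, pi). The reconstruction error
# stays within half the quantizer step, illustrating the closed-loop property.
rng = np.random.default_rng(1)
frames = np.sort(rng.uniform(0.05, np.pi, (4, 10)), axis=1)
print(np.max(np.abs(frames - closed_loop_2d_dpcm(frames))))
```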
Abstract:
NDDO-based (AM1) configuration interaction (CI) calculations have been used to calculate the wavelengths and oscillator strengths of electronic absorptions in organic molecules, and the results have been used in a sum-over-states treatment to calculate second-order hyperpolarizabilities. The results for both spectra and hyperpolarizabilities are of acceptable quality as long as a suitable CI expansion is used. We have found that using an active space of eight electrons in eight orbitals and including all single and pair-double excitations in the CI leads to results that agree well with experiment and that do not change significantly with increasing active space for most organic molecules. Calculated second-order hyperpolarizabilities using this type of CI within a sum-over-states calculation appear to be of useful accuracy.
Abstract:
The present paper develops a family of explicit algorithms for rotational dynamics and presents their comparison with several existing methods. For rotational motion, the configuration space is a non-linear manifold, not a Euclidean vector space. As a consequence, the rotation vector and its time derivatives correspond to different tangent spaces of the rotation manifold at different time instants. This renders the usual integration algorithms for Euclidean space inapplicable to rotation. In the present algorithms, this problem is circumvented by relating the equation of motion to a particular tangent space. This is accomplished with the help of an already existing relation between rotation increments belonging to two different tangent spaces. The suggested method could in principle make any integration algorithm on Euclidean space applicable to rotation; however, the present paper is restricted to explicit Runge-Kutta schemes adapted to handle rotation. The algorithms developed here are explicit and hence computationally cheaper than implicit methods. Moreover, they appear to have much higher local accuracy and are hence accurate in predicting any constants of motion for reasonably longer times. The numerical results for the solutions as well as the constants of motion indicate superior performance by most of our algorithms when compared to some of the currently known algorithms, namely ALGO-C1, STW, LIEMID[EA], MCG, and SUBCYC-M.
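As a generic illustration of explicit Runge-Kutta integration on the rotation manifold, the sketch below implements a Munthe-Kaas-style RK4 step for attitude kinematics with a prescribed body angular velocity, using the so(3) exponential map and a truncated inverse dexp map; this is not one of the paper's algorithms (nor ALGO-C1, STW, etc.), only a standard Lie-group RK sketch under those assumptions.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (so(3) hat map)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def dexpinv(u, w):
    """Inverse differential of exp on so(3), truncated for 4th-order accuracy."""
    return w - 0.5 * np.cross(u, w) + np.cross(u, np.cross(u, w)) / 12.0

def rkmk4_step(R, omega, t, h):
    """One Munthe-Kaas RK4 step for R' = R * hat(omega(t)) (body-frame rates).
    All stage increments live in a single tangent space (the Lie algebra)."""
    k1 = dexpinv(np.zeros(3), omega(t))
    k2 = dexpinv(0.5 * h * k1, omega(t + 0.5 * h))
    k3 = dexpinv(0.5 * h * k2, omega(t + 0.5 * h))
    k4 = dexpinv(h * k3, omega(t + h))
    u = h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return R @ expm_so3(u)

# Prescribed body angular velocity; R stays on SO(3) by construction.
omega = lambda t: np.array([0.3, np.sin(t), 0.1 * t])
R, t, h = np.eye(3), 0.0, 0.01
for _ in range(1000):
    R = rkmk4_step(R, omega, t, h)
    t += h
print(np.linalg.norm(R.T @ R - np.eye(3)))   # orthogonality is preserved
```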