85 results for Operation based method


Relevance:

90.00%

Publisher:

Abstract:

Purpose: To develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. Here, it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality and are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4792459]
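
As a rough illustration of the approach, the sketch below pairs SciPy's damped LSQR solver with a Nelder-Mead (simplex) search over the regularization parameter. The forward matrix, data, and the discrepancy-style selection criterion are placeholders for illustration only; the paper's actual objective and Jacobian come from the diffuse optical tomography forward model.

import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 100))        # stand-in for the DOT Jacobian
x_true = rng.standard_normal(100)
b = A @ x_true + 0.05 * rng.standard_normal(200)
noise_norm = 0.05 * np.sqrt(200)           # assumed known noise level

def criterion(log_lam):
    lam = np.exp(log_lam[0])
    # damped LSQR solves min ||A x - b||^2 + lam^2 ||x||^2 via Lanczos bidiagonalization
    x = lsqr(A, b, damp=lam)[0]
    # placeholder selection criterion: discrepancy between residual and noise norms
    return abs(np.linalg.norm(A @ x - b) - noise_norm)

res = minimize(criterion, x0=[np.log(1e-2)], method="Nelder-Mead")
lam_opt = np.exp(res.x[0])
x_rec = lsqr(A, b, damp=lam_opt)[0]
print(f"selected regularization parameter: {lam_opt:.3e}")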

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The detection algorithm alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy and a multiple-restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and to scale well to large dimensions.
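
A minimal sketch of the mixed Gibbs sampling (MGS) coordinate update is given below: with a small mixing probability each coordinate is drawn uniformly from the constellation instead of from its Gibbs conditional, which is what counters stalling at high SNR. The system size, mixing probability, real-valued 4-PAM alphabet, and iteration count are illustrative assumptions; the multiple-restart logic is not shown.

import numpy as np

rng = np.random.default_rng(1)
K, N = 16, 32                                     # users, BS antennas (assumed)
alphabet = np.array([-3.0, -1.0, 1.0, 3.0])       # real-valued 4-PAM per dimension
sigma2 = 0.1
q = 1.0 / (2 * K)                                 # mixing probability (assumed)

H = rng.standard_normal((N, K)) / np.sqrt(N)
x_true = rng.choice(alphabet, K)
y = H @ x_true + np.sqrt(sigma2) * rng.standard_normal(N)

x = rng.choice(alphabet, K)                       # random initial vector
for _ in range(200):                              # sweeps over all coordinates
    for k in range(K):
        if rng.random() < q:                      # random (uniform) update
            x[k] = rng.choice(alphabet)
            continue
        # Gibbs update: sample x_k from p(x_k | x_{-k}, y)
        costs = np.empty(len(alphabet))
        for i, s in enumerate(alphabet):
            x[k] = s
            costs[i] = np.linalg.norm(y - H @ x) ** 2
        p = np.exp(-(costs - costs.min()) / (2 * sigma2))
        x[k] = rng.choice(alphabet, p=p / p.sum())

print("symbol errors:", int(np.sum(x != x_true)))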

Relevance:

90.00%

Publisher:

Abstract:

Research has been undertaken to ascertain the predictability of non-stationary time series using wavelet and Empirical Mode Decomposition (EMD) based time series models. Methods have been developed in the past to decompose a time series into components, and forecasts of these components, combined with the random component, can yield predictions of the original series. Using this approach, wavelet and EMD analyses are incorporated separately; each decomposes a time series into independent orthogonal components with both time and frequency localization. The component series are fitted with specific autoregressive models to obtain forecasts, which are then combined to obtain the final predictions. Four non-stationary streamflow sites (USGS data resources) with monthly total volumes and two non-stationary gridded rainfall sites (IMD) with monthly total rainfall are considered for the study. Predictability is checked for six- and twelve-month-ahead forecasts across both methodologies. Based on performance measures, it is observed that the wavelet-based method has better prediction capability than the EMD-based method, despite some of the limitations of time series methods and the manner in which the decomposition takes place. Finally, the study concludes that the wavelet-based time series algorithm can be used to model events such as droughts with reasonable accuracy. Some modifications that could extend the scope of applicability to other areas of hydrology are also discussed. (C) 2013 Elsevier B.V. All rights reserved.
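
The following sketch shows the wavelet-plus-AR idea on synthetic monthly data: decompose the training series into wavelet components, fit an autoregressive model to each component, forecast every component over the horizon, and sum the component forecasts. The wavelet ('db4'), decomposition level, AR order, and synthetic series are assumptions, not the settings of the study.

import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
t = np.arange(240)
series = 50 + 20 * np.sin(2 * np.pi * t / 12) + 5 * rng.standard_normal(240)

horizon = 12
train, test = series[:-horizon], series[-horizon:]

# build full-length components by zeroing all but one coefficient band
coeffs = pywt.wavedec(train, "db4", level=3)
components = []
for i in range(len(coeffs)):
    cs = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(cs, "db4")[: len(train)])

# forecast each component with an AR model and sum the component forecasts
forecast = np.zeros(horizon)
for comp in components:
    fit = AutoReg(comp, lags=12).fit()
    forecast += fit.predict(start=len(comp), end=len(comp) + horizon - 1)

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"12-month-ahead RMSE: {rmse:.2f}")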

Relevance:

90.00%

Publisher:

Abstract:

The paper discusses a wave propagation based method for identifying damages, such as delamination and skin-stiffener debonding, in built-up aircraft structural components. First, a spectral finite element model (SFEM) is developed for modeling wave propagation in general built-up structures by assembling 2-D spectral plate elements. The developed numerical model is validated against conventional 2-D FEM. Studies are performed to capture the mode coupling, that is, the flexural-axial coupling present in the wave responses. Lastly, damages in these built-up structures are identified from the developed SFEM model and the measured responses using the Damage Force Indicator (DFI) technique.
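
A minimal sketch of the Damage Force Indicator idea follows: applying the healthy structure's dynamic stiffness to the measured responses of the damaged structure leaves a residual force vector that is nonzero only at degrees of freedom adjacent to the damage. The toy spring-mass chain, damping, and frequency sweep are illustrative assumptions; the paper works with spectral plate elements of built-up structures instead.

import numpy as np

n = 20
def dynamic_stiffness(k_vec, omega, m=1.0, eta=0.01):
    """Assemble D(w) = K(1 + i*eta) - w^2 M for a spring-mass chain."""
    K = np.zeros((n, n), dtype=complex)
    for e, k in enumerate(k_vec):              # spring between nodes e and e+1
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += k * np.array([[1, -1], [-1, 1]])
    return K * (1 + 1j * eta) - omega**2 * m * np.eye(n)

k_healthy = np.ones(n - 1)
k_damaged = k_healthy.copy()
k_damaged[7] *= 0.6                            # 40% stiffness loss in element 7

f = np.zeros(n, dtype=complex)
f[0] = 1.0                                     # unit harmonic load at node 0

dfi = np.zeros(n)
for omega in np.linspace(0.2, 2.0, 40):        # sweep over excitation frequencies
    D_h = dynamic_stiffness(k_healthy, omega)
    D_d = dynamic_stiffness(k_damaged, omega)
    u_meas = np.linalg.solve(D_d, f)           # "measured" damaged response
    delta_f = D_h @ u_meas - f                 # damage force vector
    dfi += np.abs(delta_f) ** 2

print("DOF with largest DFI:", int(np.argmax(dfi)))   # expect node 7 or 8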

Relevance:

90.00%

Publisher:

Abstract:

This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude and thus fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a given acceleration value receives contributions from different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. The article explains the performance-based methodology for the liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, followed by the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out using 450 borehole records obtained in the study area. The results from the CPT data match well with those obtained from the corresponding analysis with SPT data.
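
The performance-based calculation can be sketched as a double sum over the deaggregated hazard: the annual rate of liquefaction is the probability of triggering in each (PGA, magnitude) bin times that bin's annual rate. The hazard matrix, CRR value, stresses, magnitude scaling factor, and lognormal fragility below are illustrative assumptions, not the site data or triggering models used in the article.

import numpy as np
from scipy.stats import norm

pga_bins = np.array([0.05, 0.10, 0.15, 0.20, 0.30])      # PGA (g)
mag_bins = np.array([5.0, 5.5, 6.0, 6.5, 7.0])
# assumed annual rates of (PGA, magnitude) pairs from PSHA deaggregation
rate = np.array([
    [4e-2, 2e-2, 8e-3, 3e-3, 1e-3],
    [1e-2, 6e-3, 3e-3, 1e-3, 4e-4],
    [4e-3, 2e-3, 1e-3, 5e-4, 2e-4],
    [1e-3, 8e-4, 5e-4, 2e-4, 1e-4],
    [3e-4, 2e-4, 1e-4, 6e-5, 3e-5],
])

crr_75 = 0.20            # cyclic resistance ratio for M7.5 (assumed from SPT)
sigma_v, sigma_v_eff, rd = 100.0, 60.0, 0.95   # stresses (kPa), depth factor
beta = 0.5               # assumed lognormal fragility dispersion

annual_rate = 0.0
for i, a in enumerate(pga_bins):
    for j, m in enumerate(mag_bins):
        csr = 0.65 * a * (sigma_v / sigma_v_eff) * rd
        msf = (7.5 / m) ** 2.56                 # simple magnitude scaling factor
        fs = crr_75 * msf / csr
        p_liq = norm.cdf(-np.log(fs) / beta)    # P[liquefaction | a, m]
        annual_rate += p_liq * rate[i, j]

print(f"annual rate of liquefaction: {annual_rate:.3e}")
print(f"return period: {1.0 / annual_rate:.0f} years")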

Relevance:

90.00%

Publisher:

Abstract:

Multilevel inverters with dodecagonal (12-sided polygon) voltage space vector (SV) structures have advantages such as extension of the linear modulation range, elimination of the fifth and seventh harmonics in phase voltages and currents over the full modulation range including extreme 12-step operation, reduced device voltage ratings, lower dv/dt stresses on devices and motor phase windings resulting in fewer EMI/EMC problems, and lower switching frequency, making them more suitable for high-power drive applications. This paper proposes a simple method to obtain pulsewidth modulation (PWM) timings for a dodecagonal voltage SV structure using only sampled reference voltages. In addition, a carrier-based method for obtaining the PWM timings for a general N-level dodecagonal structure is proposed for the first time. The algorithm outputs the triangle information and the PWM timing values, which can be set as the compare values of any carrier-based hardware PWM module to obtain SV-PWM-like switching sequences. The proposed method eliminates the need for angle estimation, computation of modulation indices, and the iterative search algorithms that are typical of multilevel dodecagonal SV systems. The proposed PWM scheme was implemented on a five-level dodecagonal SV structure, and exhaustive simulation and experimental results for steady-state and transient conditions are presented to validate it.
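
For contrast, the sketch below shows the conventional angle-based dwell-time calculation on a dodecagonal (12-sector) space-vector structure, i.e., the kind of angle estimation and trigonometric computation that the proposed sampled-reference, carrier-based scheme avoids. The vector magnitude and sampling period are illustrative assumptions.

import numpy as np

def dodecagonal_dwell_times(v_alpha, v_beta, v_d=1.0, t_s=1.0):
    """Return (sector, t1, t2, t0) for reference vector (v_alpha, v_beta)."""
    v_ref = np.hypot(v_alpha, v_beta)
    theta = np.arctan2(v_beta, v_alpha) % (2 * np.pi)
    sector = int(theta // (np.pi / 6))            # 12 sectors of 30 degrees
    alpha = theta - sector * (np.pi / 6)          # angle inside the sector
    k = v_ref / (v_d * np.sin(np.pi / 6))
    t1 = t_s * k * np.sin(np.pi / 6 - alpha)      # dwell on vector at 30*sector deg
    t2 = t_s * k * np.sin(alpha)                  # dwell on the next vector
    return sector, t1, t2, t_s - t1 - t2          # remainder on the zero vector

# example: reference vector of magnitude 0.8 at 40 degrees
s, t1, t2, t0 = dodecagonal_dwell_times(0.8 * np.cos(np.radians(40)),
                                        0.8 * np.sin(np.radians(40)))
print(s, round(t1, 3), round(t2, 3), round(t0, 3))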

Relevance:

80.00%

Publisher:

Abstract:

Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes in which an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs are assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational and experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
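
One common way to pose l1-constrained sparse regression as a linear program is sketched below: minimize the l1 residual of a target gene regressed on the other genes, subject to an l1 budget on the weights, using slack variables for the absolute values. The synthetic data, the l1 data-fit term, and the budget are assumptions; the paper's exact LP formulation may differ.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, p = 40, 15                       # samples (profiles) x candidate regulators
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[2], w_true[7] = 1.5, -2.0    # sparse true weights
y = X @ w_true + 0.05 * rng.standard_normal(n)
t = 5.0                             # l1 budget on the regression weights

# variables z = [w (p), u (p), r (n)]; minimize sum(r)
c = np.concatenate([np.zeros(2 * p), np.ones(n)])
I_n, I_p, Zp = np.eye(n), np.eye(p), np.zeros((n, p))
A_ub = np.block([
    [X,    Zp,   -I_n],                                   #  Xw - r <= y
    [-X,   Zp,   -I_n],                                   # -Xw - r <= -y
    [I_p,  -I_p, np.zeros((p, n))],                       #  w - u <= 0
    [-I_p, -I_p, np.zeros((p, n))],                       # -w - u <= 0
    [np.zeros((1, p)), np.ones((1, p)), np.zeros((1, n))] # sum(u) <= t
])
b_ub = np.concatenate([y, -y, np.zeros(2 * p), [t]])
bounds = [(None, None)] * p + [(0, None)] * p + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w_hat = res.x[:p]
print("recovered weights:", np.round(w_hat, 2))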

Relevance:

80.00%

Publisher:

Abstract:

The problem of identifying parameters of time-invariant linear dynamical systems with fractional derivative damping models, based on a spatially incomplete set of measured frequency response functions and experimentally determined eigensolutions, is considered. Methods based on inverse sensitivity analysis of damped eigensolutions and frequency response functions are developed. It is shown that the eigensensitivity method requires the development of derivatives of solutions of an asymmetric generalized eigenvalue problem. Both first- and second-order inverse sensitivity analyses are considered. The study demonstrates the successful performance of the identification algorithms using synthetic data on one-, two-, and 33-degree-of-freedom vibrating systems with fractional dampers. Limited studies have also been conducted by combining finite element modeling with experimental data on accelerances measured in laboratory conditions on a system consisting of two steel beams rigidly joined by a rubber hose. The method based on the sensitivity of frequency response functions is shown to be more efficient than the eigensensitivity-based method in identifying system parameters, especially for large-scale systems.
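
A first-order inverse sensitivity update can be sketched as a Gauss-Newton iteration: at each step the parameter correction is the pseudo-inverse of the frequency response function sensitivity matrix applied to the response residual. The single-degree-of-freedom fractional-damping model, starting guess, finite-difference sensitivities, and step factor below are illustrative assumptions.

import numpy as np

def frf(params, omega, m=1.0):
    """FRF of a 1-DOF oscillator with fractional-derivative damping:
    H(w) = 1 / (k + c*(i*w)**alpha - m*w**2)."""
    k, c, alpha = params
    return 1.0 / (k + c * (1j * omega) ** alpha - m * omega**2)

omega = np.linspace(0.5, 5.0, 60)
p_true = np.array([4.0, 0.4, 0.5])            # k, c, alpha
h_meas = frf(p_true, omega)                   # "measured" FRF (synthetic)

p = np.array([3.0, 0.6, 0.8])                 # initial guess
for _ in range(50):
    r = h_meas - frf(p, omega)                # complex response residual
    # finite-difference sensitivity matrix dH/dp, stacked real/imag parts
    S = np.empty((2 * omega.size, p.size))
    for j in range(p.size):
        dp = np.zeros(p.size)
        dp[j] = 1e-6 * max(abs(p[j]), 1e-3)
        dh = (frf(p + dp, omega) - frf(p, omega)) / dp[j]
        S[:, j] = np.concatenate([dh.real, dh.imag])
    rhs = np.concatenate([r.real, r.imag])
    p = p + 0.5 * (np.linalg.pinv(S) @ rhs)   # damped first-order update

print("identified [k, c, alpha]:", np.round(p, 3))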

Relevance:

80.00%

Publisher:

Abstract:

A lack of information on protein-protein interactions at the host-pathogen interface is impeding the understanding of the pathogenesis process. A recently developed, homology search-based method to predict protein-protein interactions is applied to the gastric pathogen Helicobacter pylori to predict the interactions between H. pylori and human proteins in vitro. Many of the predicted interactions could potentially occur between the pathogen and its human host during pathogenesis, as we focused mainly on H. pylori proteins that have a transmembrane region or are encoded in the pathogenicity island, and on those known to be secreted into the human host. By applying the homology search approach to the protein-protein interaction databases DIP and iPfam, we predict in vitro interactions between a total of 623 H. pylori proteins and 6559 human proteins. The predicted interactions involve 549 hypothetical H. pylori proteins of as yet unknown function and 13 experimentally verified secreted proteins. We have recognized 833 interactions involving the extracellular domains of transmembrane proteins of H. pylori. Structural analysis of some of the examples reveals that the predicted interactions are consistent with the structural compatibility of the binding partners. Examples of interactions with discernible biological relevance are discussed.
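
The homology (interolog) mapping idea can be sketched as follows: if a pathogen protein and a human protein are homologous to two template proteins that are known to interact (for example in DIP or iPfam), an interaction between the pathogen and human proteins is predicted. The identifiers and mappings below are purely illustrative, not real database entries.

from itertools import product

# known template interactions (stand-ins for DIP / iPfam entries)
known_interactions = {("tmplA", "tmplB"), ("tmplC", "tmplD")}

# homology hits: query protein -> set of template homologs (e.g., from a BLAST search)
pylori_homologs = {"HP_query1": {"tmplA"}, "HP_query2": {"tmplC", "tmplX"}}
human_homologs = {"HS_query1": {"tmplB"}, "HS_query2": {"tmplD"}, "HS_query3": {"tmplY"}}

def predict_interactions(pathogen_map, host_map, templates):
    predictions = set()
    for (p, p_hits), (h, h_hits) in product(pathogen_map.items(),
                                            host_map.items()):
        # does any homolog pair match a known template interaction?
        if any((a, b) in templates or (b, a) in templates
               for a in p_hits for b in h_hits):
            predictions.add((p, h))
    return predictions

print(predict_interactions(pylori_homologs, human_homologs, known_interactions))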

Relevance:

80.00%

Publisher:

Abstract:

The paper presents a novel slicing-based method for computing volume fractions in multi-material solids given as a B-rep whose faces are triangulated and shared by either one or two materials. Such objects occur naturally in geoscience applications, where this computation is needed for property estimation problems and iterative forward modeling. Each facet in the model is cut by the planes delineating the given grid structure or grid cells. Instead of classifying points or cells with respect to the solid, the method exploits the convexity of the triangles and the simple axis-oriented disposition of the cutting surfaces to construct a novel intermediate space enumeration representation, called the slice representation, from which both the cell containment test and the volume-fraction computation are carried out easily. Cartesian and cylindrical grids with uniform and non-uniform spacings are dealt with in this paper. After slicing, each triangle contributes polygonal facets, possibly with elliptical edges, to the grid cells through which it passes. The volume fractions of the different materials in a grid cell that interacts with the material interfaces are obtained by accumulating the volume contributions computed from each facet in the grid cell. The method is fast, accurate, robust, and memory efficient. Examples illustrating the method and its performance are included in the paper.
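
One building block of the slicing approach, clipping a triangular facet against the axis-aligned planes bounding a grid cell, is sketched below using Sutherland-Hodgman clipping against each half-space; the result is the polygonal piece of the facet inside that cell. The cell bounds and triangle are illustrative; the slice representation and the volume-fraction accumulation of the paper are not shown.

import numpy as np

def clip_polygon_halfspace(poly, axis, value, keep_below):
    """Clip a polygon (list of 3-D points) against the plane x[axis] = value."""
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        side_p = (p[axis] <= value) if keep_below else (p[axis] >= value)
        side_q = (q[axis] <= value) if keep_below else (q[axis] >= value)
        if side_p:
            out.append(p)
        if side_p != side_q:                       # edge crosses the plane
            t = (value - p[axis]) / (q[axis] - p[axis])
            out.append(p + t * (q - p))
    return out

def clip_triangle_to_cell(tri, lo, hi):
    poly = [np.asarray(v, float) for v in tri]
    for axis in range(3):
        poly = clip_polygon_halfspace(poly, axis, lo[axis], keep_below=False)
        if not poly:
            return []
        poly = clip_polygon_halfspace(poly, axis, hi[axis], keep_below=True)
        if not poly:
            return []
    return poly

tri = [(0.2, 0.2, 0.5), (1.8, 0.3, 0.5), (0.3, 1.7, 0.6)]
piece = clip_triangle_to_cell(tri, lo=(0, 0, 0), hi=(1, 1, 1))
print(np.round(np.array(piece), 3))                # facet piece inside the unit cell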

Relevance:

80.00%

Publisher:

Abstract:

This work addresses the optimum design of a composite box-beam structure subject to strength constraints. Such box-beams are used as the main load-carrying members of helicopter rotor blades. A computationally efficient analytical model of the box-beam is used. Optimal ply orientation angles are sought that maximize the failure margins with respect to the applied loading. The Tsai-Wu-Hahn failure criterion is used to calculate the reserve factor for each wall and ply, and the minimum reserve factor is maximized. Ply angles are used as design variables, and various starting designs and loading cases are investigated. Both gradient-based and particle swarm optimization (PSO) methods are used. It is found that the optimization approach leads to the design of a box-beam with greatly improved reserve factors, which can be useful for helicopter rotor structures. While PSO yields globally best designs, the gradient-based method can also be used with appropriate starting designs to obtain useful designs efficiently. (C) 2006 Elsevier Ltd. All rights reserved.
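
A particle swarm optimization loop over ply angles that maximizes the minimum reserve factor is sketched below. The reserve-factor function is a smooth stand-in, not the Tsai-Wu-Hahn box-beam analysis of the paper, and the swarm size, inertia, and acceleration coefficients are typical assumed values.

import numpy as np

rng = np.random.default_rng(4)
n_plies, n_particles, n_iter = 6, 20, 100

def min_reserve_factor(angles_deg):
    """Hypothetical stand-in objective over the ply angles (NOT Tsai-Wu-Hahn)."""
    a = np.radians(angles_deg)
    per_ply = 1.0 + np.sin(2 * a) * np.cos(a) ** 2
    return per_ply.min()

lo, hi = -90.0, 90.0
x = rng.uniform(lo, hi, (n_particles, n_plies))      # positions (ply angles)
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([min_reserve_factor(p) for p in x])
gbest = pbest[np.argmax(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration terms
for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([min_reserve_factor(p) for p in x])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best ply angles (deg):", np.round(gbest, 1))
print("max of min reserve factor:", round(pbest_val.max(), 3))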

Relevance:

80.00%

Publisher:

Abstract:

In this paper, direct numerical simulation of autoignition in an initially non-premixed medium under isotropic, homogeneous, and decaying turbulence is presented. The pressure-based method developed herein is a spectral implementation of the sequential steps followed in predictor-corrector type algorithms; it includes the effects of density fluctuations caused by spatial inhomogeneities in temperature and species. The velocity and pressure fields are solved in spectral space, while the scalar and density fields are solved in physical space. The results reveal that the autoignition spots originate and evolve at locations where (1) the composition corresponds to a small range around a specific mixture fraction and (2) the conditional scalar dissipation rate is low. A careful examination of the data indicates that the autoignition spots originate in the vortex cores and that the hot gases travel outward as combustion progresses. Hence, the applicability of the transient laminar flamelet model to this problem is questioned. The dependence of the autoignition characteristics on parameters such as (1) the initial eddy-turnover time and (2) the initial ratio of the length scale of the scalars to that of the velocities is investigated. Certain implications of the new results for conditional moment closure modeling are discussed.
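
The core of a spectral predictor-corrector step, the pressure Poisson solve and velocity projection in Fourier space, is sketched below for the constant-density case. It only illustrates the spectral pressure correction; the paper's method additionally carries the density variations arising from temperature and species inhomogeneities.

import numpy as np

n, L = 64, 2 * np.pi
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                  # avoid division by zero at the mean mode

rng = np.random.default_rng(5)
u_star = rng.standard_normal((n, n))            # predicted (non-solenoidal) u
v_star = rng.standard_normal((n, n))            # predicted v

u_hat, v_hat = np.fft.fft2(u_star), np.fft.fft2(v_star)
div_hat = 1j * kx * u_hat + 1j * ky * v_hat     # divergence of the predictor field
phi_hat = div_hat / (-k2)                       # Poisson solve: -k^2 * phi = div
u_hat -= 1j * kx * phi_hat                      # corrector: subtract grad(phi)
v_hat -= 1j * ky * phi_hat

u, v = np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real
div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)).real
print("max divergence after projection:", np.abs(div).max())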

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes a differential evolution-based method for improving the performance of conventional guidance laws at high heading errors, without resorting to techniques from optimal control theory, which are complicated and suffer from several limitations. The basic guidance law is augmented with a term that is a polynomial function of the heading error, and the coefficients of the polynomial are found by applying the differential evolution algorithm. The results are compared with the basic guidance law and with the all-aspect proportional navigation laws in the literature. A scheme for online implementation of the proposed law in practice is also given. (c) 2010 Elsevier Ltd. All rights reserved.
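
The tuning idea can be sketched with a planar point-mass engagement: the proportional navigation command is scaled by a polynomial in heading error, and differential evolution searches for the polynomial coefficients that minimize the miss distance. The engagement geometry, speeds, and the form of the augmentation are illustrative assumptions, not the models of the paper.

import numpy as np
from scipy.optimize import differential_evolution

def miss_distance(coeffs, N=3.0, dt=0.05, steps=1200):
    vm, vt = 300.0, 200.0                          # missile / target speeds (m/s)
    m, tgt = np.zeros(2), np.array([5000.0, 0.0])  # initial positions
    gm, gt = np.radians(60.0), np.radians(120.0)   # flight-path angles (large heading error)
    miss = np.inf
    for _ in range(steps):
        r = tgt - m
        dist = np.hypot(r[0], r[1])
        miss = min(miss, dist)
        if dist < 1.0:
            break
        vrel = (vt * np.array([np.cos(gt), np.sin(gt)])
                - vm * np.array([np.cos(gm), np.sin(gm)]))
        los_rate = (r[0] * vrel[1] - r[1] * vrel[0]) / dist**2
        herr = np.arctan2(r[1], r[0]) - gm         # heading error proxy
        a_pn = N * vm * los_rate                   # basic PN lateral acceleration
        a_cmd = a_pn * (1.0 + coeffs[0] * herr + coeffs[1] * herr**2)
        gm += (a_cmd / vm) * dt                    # turn rate = lateral accel / speed
        m = m + vm * np.array([np.cos(gm), np.sin(gm)]) * dt
        tgt = tgt + vt * np.array([np.cos(gt), np.sin(gt)]) * dt
    return miss

res = differential_evolution(miss_distance, bounds=[(-2, 2), (-2, 2)],
                             popsize=8, maxiter=10, seed=6, polish=False)
print("coefficients:", np.round(res.x, 3), "miss distance:", round(res.fun, 2))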

Relevance:

80.00%

Publisher:

Abstract:

The kinetics of the oxidation of electrodeposited boron powder and of boron powder produced by the reduction process were studied using thermogravimetry (TG). The oxidation was carried out by heating the boron powder in a stream of oxygen. Both isothermal and non-isothermal methods were used to study the kinetics, and a model-free isoconversional method was used to derive the kinetic parameters. A two-step, exothermic oxidation reaction was observed. The oxidation could not be completed because of the formation of a glassy layer of boric oxide on the surface of the boron powder, which acts as a barrier to further diffusion of oxygen into the particle. The activation energy obtained using the model-free method is 122 +/- 7 kJ mol(-1) for electrodeposited boron, whereas a value of 205 +/- 9 kJ mol(-1) was obtained for boron produced by the reduction process (commercially procured boron). Mechanistic interpretation of the oxidation reaction was carried out using a model-based method. The activation energy was found to depend on the size distribution of the particles and the specific surface area of the powder. (C) 2010 Elsevier B.V. All rights reserved.
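
A model-free (Friedman-type) isoconversional analysis is sketched below: synthetic non-isothermal TG curves are generated at several heating rates from assumed first-order kinetics, and at each conversion level ln(da/dt) is regressed against 1/T across heating rates, the slope giving -Ea/R. The kinetic parameters and heating rates are assumptions, not the boron data of the study.

import numpy as np

R = 8.314
E_true, A = 150e3, 1e10                        # J/mol and 1/s (assumed kinetics)
betas = np.array([5.0, 10.0, 20.0]) / 60.0     # heating rates, K/min converted to K/s
T = np.arange(500.0, 800.0, 0.5)

curves = []
for beta in betas:
    alpha = np.zeros_like(T)
    for i in range(1, T.size):
        k = A * np.exp(-E_true / (R * T[i]))
        dadT = k * (1.0 - alpha[i - 1]) / beta          # first-order model
        alpha[i] = min(alpha[i - 1] + dadT * (T[i] - T[i - 1]), 0.999999)
    curves.append(alpha)

# Friedman analysis at selected conversion levels
for a_level in [0.2, 0.5, 0.8]:
    inv_T, ln_rate = [], []
    for beta, alpha in zip(betas, curves):
        i = np.searchsorted(alpha, a_level)             # first index with alpha >= level
        rate = beta * np.gradient(alpha, T)[i]          # da/dt = beta * da/dT
        inv_T.append(1.0 / T[i])
        ln_rate.append(np.log(rate))
    slope = np.polyfit(inv_T, ln_rate, 1)[0]
    print(f"alpha = {a_level:.1f}:  Ea approx {-slope * R / 1000:.0f} kJ/mol")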

Relevance:

80.00%

Publisher:

Abstract:

A generalized enthalpy update scheme is presented for evaluating solid and liquid fractions during the solidification of binary alloys, taking solid movement into consideration. A fixed-grid, enthalpy-based method is developed that accounts for equilibrium as well as nonequilibrium solidification phenomena, along with solid-phase movement. The effect of solid movement on the shape of the solidification interface and on macrosegregation is highlighted.
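
A minimal sketch of a fixed-grid enthalpy update for a binary alloy is given below: from the nodal mixture enthalpy, temperature and liquid fraction are recovered using H = c*T + L*f_l with an equilibrium liquid fraction assumed to vary linearly between solidus and liquidus. The property values are illustrative assumptions; the paper's scheme additionally handles nonequilibrium solidification and solid movement.

c, L = 1000.0, 3.0e5            # specific heat (J/kg/K), latent heat (J/kg), assumed
T_sol, T_liq = 850.0, 900.0     # solidus and liquidus temperatures (K), assumed

def update_from_enthalpy(H):
    """Return (temperature, liquid fraction) for mixture enthalpy H."""
    H_sol = c * T_sol                       # enthalpy at fully solid (f_l = 0)
    H_liq = c * T_liq + L                   # enthalpy at fully liquid (f_l = 1)
    if H <= H_sol:                          # solid region
        return H / c, 0.0
    if H >= H_liq:                          # liquid region
        return (H - L) / c, 1.0
    # mushy zone: T = T_sol + f_l*(T_liq - T_sol) and H = c*T + L*f_l
    f_l = (H - H_sol) / (c * (T_liq - T_sol) + L)
    return T_sol + f_l * (T_liq - T_sol), f_l

for H in [8.0e5, 9.0e5, 1.25e6]:
    T, fl = update_from_enthalpy(H)
    print(f"H = {H:.2e} J/kg -> T = {T:.1f} K, liquid fraction = {fl:.2f}")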