978 results for Isomorphic coordinate projections


Relevance: 10.00%

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) equations are solved using a third-order upwind-biased Roe scheme for the inviscid fluxes and a second-order central-difference scheme for the viscous fluxes. The Baldwin-Lomax turbulence model is employed for the Reynolds stresses. The governing equations are solved with a finite-volume implicit scheme on a body-fitted curvilinear O-grid coordinate system. Computations are reported for a flat plate as well as for the RAE 2822 and NACA 0012 airfoils. Results for the flat plate at M = 0.3 and Re_c = 4.0 x 10^6 compare favourably with the analytical solution. Results for the two airfoils are compared with experiment: there is good agreement in the C_p distribution between experiment and computation for both airfoils, and the comparison of the C_f distribution with experiment for the RAE 2822 airfoil is reasonable.
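
As a minimal sketch of the kind of analytical flat-plate comparison mentioned above: the abstract does not name the analytical solution used, so the classical Blasius (laminar) and 1/7-power-law (turbulent) local skin-friction correlations are assumed here purely for illustration.

```python
import numpy as np

# Hedged sketch: compare flat-plate skin-friction estimates at Re_c = 4.0e6.
# The correlations below are standard textbook formulas, assumed here because
# the abstract does not state which analytical solution was used.
Re_c = 4.0e6
x_over_c = np.linspace(0.05, 1.0, 20)      # stations along the plate (fraction of chord)
Re_x = Re_c * x_over_c

cf_laminar = 0.664 / np.sqrt(Re_x)         # Blasius: C_f = 0.664 / sqrt(Re_x)
cf_turbulent = 0.0592 * Re_x ** (-0.2)     # 1/7-power law: C_f = 0.0592 Re_x^(-1/5)

for x, cl, ct in zip(x_over_c, cf_laminar, cf_turbulent):
    print(f"x/c = {x:4.2f}  C_f(laminar) = {cl:.5f}  C_f(turbulent) = {ct:.5f}")
```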

Relevance: 10.00%

Abstract:

Polynomial chaos expansion (PCE) with Latin hypercube sampling (LHS) is employed for calculating the vibration frequencies of an inviscid incompressible fluid partially filling a rectangular tank, with and without a baffle. Vibration frequencies of the coupled system are described through their projections on the PCE, which uses orthogonal basis functions, and the PCE coefficients are evaluated using LHS. Convergence of the coefficient of variation is used to select the order of the orthogonal polynomial basis employed in the PCE. It is observed that the dispersion in the eigenvalues is larger for the rectangular tank with a baffle. The accuracy of the PCE method is verified against standard Monte Carlo simulation (MCS) results, and the PCE approach is found to be more efficient.
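
The following is a minimal sketch of the PCE-with-LHS idea described above, not the paper's solver: the coupled fluid-tank eigenvalue problem is replaced by a toy frequency(xi) function of a single uncertain parameter, a one-dimensional Latin hypercube design is drawn, and Legendre PCE coefficients are fitted by least squares and checked against brute-force Monte Carlo.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Toy stand-in for a coupled fluid-tank eigenvalue solver (assumption: the real
# mapping from the uncertain parameter to the frequency is not given in the abstract).
def frequency(xi):               # xi in [-1, 1]
    return 1.0 + 0.3 * xi + 0.05 * xi**2

def latin_hypercube(n, rng):
    """One-dimensional Latin hypercube sample on [-1, 1]: one point per stratum."""
    u = (np.arange(n) + rng.random(n)) / n
    return 2.0 * rng.permutation(u) - 1.0

n_samples, order = 200, 3
xi = latin_hypercube(n_samples, rng)
y = frequency(xi)

# PCE: project the response onto Legendre polynomials (orthogonal on [-1, 1])
# by least squares on the LHS design.
Phi = legendre.legvander(xi, order)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Mean and variance follow from the coefficients and the basis norms E[P_k^2] = 1/(2k+1).
norms = 1.0 / (2.0 * np.arange(order + 1) + 1.0)
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * norms[1:])
print(f"PCE mean = {mean:.4f}, PCE std = {np.sqrt(var):.4f}")

# Brute-force Monte Carlo check
xi_mc = rng.uniform(-1, 1, 100_000)
print(f"MC  mean = {frequency(xi_mc).mean():.4f}, MC  std = {frequency(xi_mc).std():.4f}")
```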

Relevance: 10.00%

Abstract:

An assessment of the impact of projected climate change on forest ecosystems in India, based on climate projections from the Hadley Centre Regional Climate Model (HadRM3) and the dynamic global vegetation model IBIS under the A1B scenario, is conducted for the short term (2021-2050) and the long term (2071-2100). Based on the dynamic global vegetation modelling, vulnerable forested regions of India have been identified to assist in planning adaptation interventions. The assessment of climate impacts shows that, at the national level, about 45% of the forested grid cells are projected to undergo change. The vulnerability assessment shows that these vulnerable forested grid cells are spread across India, but their concentration is higher in the upper Himalayan stretches, parts of Central India, the northern Western Ghats and the Eastern Ghats. In contrast, the northeastern forests, the southern Western Ghats and the forested regions of eastern India are estimated to be the least vulnerable. Low tree density and low biodiversity status, as well as higher levels of fragmentation, contribute to the vulnerability of these forests in addition to climate change. The mountainous forests (sub-alpine and alpine forest, the Himalayan dry temperate forest and the Himalayan moist temperate forest) are susceptible to the adverse effects of climate change, because climate change is predicted to be larger at higher elevations.

Relevance: 10.00%

Abstract:

A construction of a new family of distributed space-time codes (DSTCs) having full diversity and low maximum-likelihood (ML) decoding complexity is provided for the two-phase cooperative diversity protocols of Jing-Hassibi and the recently proposed Generalized Non-orthogonal Amplify and Forward (GNAF) protocol of Rajan et al. The salient feature of the proposed DSTCs is that they satisfy the extra constraints imposed by the protocols and are also four-group ML decodable, which leads to a significant reduction in ML decoding complexity compared with all existing DSTC constructions. Moreover, these codes have a uniform distribution of power among the relays as well as in time. Simulation results indicate that these codes perform better than the only known DSTC with the same rate and decoding complexity, namely the Coordinate Interleaved Orthogonal Design (CIOD), and perform very close to DSTCs from field extensions, which have the same rate but higher decoding complexity.

Relevance: 10.00%

Abstract:

We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the scalability of the algorithm is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
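
A minimal sketch of the core idea (not the paper's full iterative algorithm): train a linear SVM on a random subset of roughly c * log n points and compare its test accuracy with the full-data model. The constant c, the synthetic dataset, and the use of scikit-learn's LinearSVC are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for a text-classification task.
X, y = make_classification(n_samples=20_000, n_features=50, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

n = len(X_tr)
c = 100                                   # assumed constant in the O(log n) sample size
m = min(n, int(c * np.log(n)))
idx = rng.choice(n, size=m, replace=False)

full = LinearSVC(dual=False).fit(X_tr, y_tr)          # baseline on all n points
sub = LinearSVC(dual=False).fit(X_tr[idx], y_tr[idx]) # SVM on the small random subset

print(f"subset size       : {m} of {n}")
print(f"full-data accuracy: {full.score(X_te, y_te):.3f}")
print(f"subset accuracy   : {sub.score(X_te, y_te):.3f}")
```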

Relevance: 10.00%

Abstract:

In general, the objective of accurately encoding the input data and the objective of extracting good features to facilitate classification are not consistent with each other. As a result, good encoding methods may not be effective mechanisms for classification. In this paper, an earlier proposed unsupervised feature extraction mechanism for pattern classification is extended to obtain an invertible map. The method of bimodal projection-based features was inspired by the general class of methods called projection pursuit, whose principle is to concentrate on projections that discriminate between clusters rather than on faithful representations. The basic feature map obtained by the method of bimodal projections has been extended to overcome this limitation. The extended feature map is an embedding of the input space in the feature space; as a result, the inverse map exists and the representation of the input space in the feature space is exact. This map can be naturally expressed as a feedforward neural network.
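
To make the projection-pursuit flavour concrete, here is a small sketch that searches random unit directions and keeps the one whose one-dimensional projection looks most bimodal. Sarle's bimodality coefficient is used as the index purely as an assumption; the paper's actual bimodal projection index may differ.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)

def bimodality_coefficient(z):
    """Sarle's bimodality coefficient; values well above 5/9 hint at bimodality."""
    g, k = skew(z), kurtosis(z, fisher=False)   # k is the non-excess kurtosis
    return (g**2 + 1.0) / k

# Two Gaussian clusters, separated only along the first coordinate.
X = np.vstack([rng.normal(-3, 1, (500, 5)), rng.normal(+3, 1, (500, 5))])
X[:, 1:] = rng.normal(0, 1, (1000, 4))

best_dir, best_score = None, -np.inf
for _ in range(2000):                           # crude random search over directions
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    score = bimodality_coefficient(X @ w)
    if score > best_score:
        best_dir, best_score = w, score

print("best score :", round(best_score, 3))
print("best dir   :", np.round(best_dir, 2))    # should load mostly on coordinate 0
```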

Relevance: 10.00%

Abstract:

In this paper, we address the reconstruction problem from laterally truncated helical cone-beam projections. The reconstruction problem under lateral truncation, though similar to the interior Radon problem, differs from it, as well as from local (lambda) tomography and pseudo-local tomography, in that we aim to reconstruct the entire object being scanned from region-of-interest (ROI) scan data. The method proposed in this paper is a projection-data completion approach followed by the use of any standard accurate FBP-type reconstruction algorithm. In particular, we explore a windowed linear prediction (WLP) approach for data completion and compare the quality of reconstruction with the linear prediction (LP) technique proposed earlier.
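
A minimal sketch of linear-prediction extrapolation for truncated data, assuming a simple least-squares fit of an order-p predictor; the windowing that distinguishes WLP in the paper is not reproduced here.

```python
import numpy as np

def lp_extrapolate(signal, n_missing, order=10):
    """Fit an order-p linear predictor to the known samples and run it forward."""
    s = np.asarray(signal, dtype=float)
    # Least-squares system s[t] ~ sum_k a[k] * s[t-k-1].
    rows = [s[t - order:t][::-1] for t in range(order, len(s))]
    A, b = np.vstack(rows), s[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(s)
    for _ in range(n_missing):
        out.append(np.dot(a, out[-1:-order - 1:-1]))   # predict the next sample
    return np.array(out)

# Example: a smooth projection profile truncated at the right edge.
t = np.linspace(0, np.pi, 200)
full = np.sin(t) ** 2
known, missing = full[:160], 40
completed = lp_extrapolate(known, missing, order=12)
print("max extrapolation error:", np.max(np.abs(completed[160:] - full[160:])))
```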

Relevance: 10.00%

Abstract:

We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms and present detailed numerical experiments on a problem of admission control with dependent service times, considering two settings in which the parameter sets have moderate and large sizes, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and the equal allocation algorithm. Note to practitioners: even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8, with SFA showing the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
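
A minimal sketch of SPSA adapted to a discrete parameter set, under the assumption of a synthetic noisy cost and standard gain schedules: gradient estimates are formed from two noisy evaluations per iteration and the iterate is projected back onto an integer grid. This mirrors the flavour of the algorithms above, not their exact construction (random projections, admission-control simulator, etc.).

```python
import numpy as np

rng = np.random.default_rng(0)

grid = np.arange(0, 51)                         # admissible integer values per component

def noisy_cost(theta):
    """Stand-in 'simulation': a quadratic long-run cost plus observation noise."""
    target = np.array([12.0, 37.0])
    return np.sum((theta - target) ** 2) + rng.normal(0, 5.0)

def project(theta):
    """Round to the nearest admissible grid point."""
    return np.clip(np.rint(theta), grid[0], grid[-1])

theta = np.array([40.0, 5.0])                   # initial discrete guess
for k in range(1, 501):
    a_k, c_k = 2.0 / k ** 0.602, 2.0 / k ** 0.101       # standard SPSA gain schedules
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    y_plus = noisy_cost(project(theta + c_k * delta))
    y_minus = noisy_cost(project(theta - c_k * delta))
    grad = (y_plus - y_minus) / (2.0 * c_k * delta)     # two-sided SPSA gradient estimate
    theta = project(theta - a_k * grad)

print("estimated optimum:", theta)              # should settle near [12, 37]
```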

Relevance: 10.00%

Abstract:

Distributed space-time block codes (DSTBCs) from complex orthogonal designs (CODs) (both square and nonsquare), coordinate interleaved orthogonal designs (CIODs), and Clifford unitary weight designs (CUWDs) are known to lose their single-symbol ML decodable (SSD) property when used in two-hop wireless relay networks based on the amplify-and-forward protocol. For such networks, in this paper, three new classes of high-rate, training-symbol embedded (TSE) SSD DSTBCs are constructed: TSE-CODs, TSE-CIODs, and TSE-CUWDs. The proposed codes include the training symbols within the structure of the code, which is shown to be the key to obtaining the SSD property along with the channel estimation capability. TSE-CODs are shown to offer full diversity for arbitrary complex constellations, and the constellations for which TSE-CIODs and TSE-CUWDs offer full diversity are characterized. It is shown that DSTBCs from nonsquare TSE-CODs provide better rates (in symbols per channel use) than the known SSD DSTBCs for relay networks. Importantly from a practical point of view, the proposed DSTBCs do not contain any zeros in their codewords; as a result, the antennas of the relay nodes do not undergo a sequence of on/off switching transitions within every codeword, thus avoiding the antenna switching problem.

Relevance: 10.00%

Abstract:

With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography.

In many practical situations, the available projection data is incomplete. One such case arises when the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when the data is reconstructed using the convolution backprojection (CBP) method under the assumption that the projection data is complete. A number of techniques exist which complete the missing data and then apply CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions on the nature of the data and producing smooth extensions of the missing projection data.

In this paper, we propose to extend the LP approach to extrapolating truncated helical cone-beam data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns towards either end in the lateral direction. The available data from each detector row is modeled using a linear predictor and extrapolated, and the completed projection data is backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
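
The sketch below illustrates the row-wise completion step on a toy 2D detector array with columns missing on both lateral sides, reusing the same least-squares linear-prediction idea as the earlier LP sketch so the example stays self-contained. The profile, predictor order, and padding width are assumptions, and the Katsevich backprojection step is not shown.

```python
import numpy as np

def lp_extend(row, n_pad, order=8):
    """Fit an order-p linear predictor to one detector row and extend it by n_pad samples."""
    s = np.asarray(row, dtype=float)
    A = np.vstack([s[t - order:t][::-1] for t in range(order, len(s))])
    a, *_ = np.linalg.lstsq(A, s[order:], rcond=None)
    out = list(s)
    for _ in range(n_pad):
        out.append(np.dot(a, out[-1:-order - 1:-1]))
    return np.array(out)

def complete_rows(truncated, n_pad):
    """Extend every detector row by n_pad columns on the left and on the right."""
    completed = []
    for row in truncated:
        right = lp_extend(row, n_pad)                 # extrapolate to the right
        left = lp_extend(row[::-1], n_pad)[::-1]      # extrapolate to the left (reverse trick)
        completed.append(np.concatenate([left[:n_pad], right]))
    return np.array(completed)

# Toy "projection" array: 16 detector rows, 30 columns truncated on each side.
u = np.linspace(-1.2, 1.2, 160)
full = (1.0 - (u / 1.2) ** 2)[None, :] * np.ones((16, 1))
truncated = full[:, 30:-30]
completed = complete_rows(truncated, n_pad=30)
print("completed shape:", completed.shape)            # (16, 160)
print("max completion error:", np.max(np.abs(completed - full)))
```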

Relevance: 10.00%

Abstract:

Many downscaling techniques have been developed in the past few years for projecting station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry and wet spell length distributions, the daily rainfall distribution, and intersite correlations, is examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for the projection of future precipitation. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
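
A minimal sketch of the KNN analogue idea described above: for each "future" day, find the K most similar training days in predictor space and sample one of their observed station rainfalls with rank-based weights. The predictors, rainfall values, K, and weighting scheme are synthetic stand-ins, not GCM or station data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_future, n_pred, K = 2000, 5, 6, 10
X_train = rng.normal(size=(n_train, n_pred))             # large-scale predictors (synthetic)
rain_train = rng.gamma(2.0, 4.0, size=n_train) * (rng.random(n_train) < 0.4)  # mostly dry days
X_future = rng.normal(size=(n_future, n_pred))            # "future" predictor vectors

weights = 1.0 / np.arange(1, K + 1)                        # rank-based resampling weights
weights /= weights.sum()

for x in X_future:
    d = np.linalg.norm(X_train - x, axis=1)                # distance to every training day
    nearest = np.argsort(d)[:K]                            # K closest analogue days
    day = rng.choice(nearest, p=weights)                   # sample one analogue
    print(f"downscaled rainfall: {rain_train[day]:6.2f} mm")
```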

Relevance: 10.00%

Abstract:

In this work, using 3-D device simulation, we perform an extensive gate-to-source/drain underlap optimization for the recently proposed hybrid transistor, the HFinFET, and show that the underlap lengths can be tuned to improve the ON-OFF ratio as well as the subthreshold characteristics of an ultrashort-channel n-type device without significant ON-performance degradation. We also show that the underlap knob can be tuned to mitigate device quality degradation in the presence of interface traps. The obtained results are promising when compared against ITRS 2009 performance projections, as well as against published state-of-the-art planar and nonplanar silicon MOSFET data of comparable gate lengths, using standard benchmarking techniques.

Relevance: 10.00%

Abstract:

This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Owing to the incomplete projection data, the reconstructed image contains truncation artifacts which extend into the region of interest (ROI), making the reconstructed image unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion, apply the fan-beam FBP algorithm with no backprojection weight for the 2-D image reconstruction, and evaluate the quality of the images reconstructed in this way.

Relevance: 10.00%

Abstract:

This report describes some preliminary experiments on the use of the relaxation technique for the reconstruction of the elements of a matrix given their various directional sums (or projections).
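
A minimal sketch of one such relaxation scheme, assuming the simplest case of two projection directions (row and column sums): each sum constraint is enforced in turn by spreading the residual uniformly over the corresponding line, damped by a relaxation factor. The report's exact scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown matrix and its two directional sums ("projections").
true = rng.integers(0, 10, size=(5, 7)).astype(float)
row_sums, col_sums = true.sum(axis=1), true.sum(axis=0)

est = np.zeros_like(true)
lam = 0.9                                  # relaxation factor in (0, 1]
for sweep in range(200):
    for i in range(est.shape[0]):          # enforce each row-sum constraint
        est[i, :] += lam * (row_sums[i] - est[i, :].sum()) / est.shape[1]
    for j in range(est.shape[1]):          # enforce each column-sum constraint
        est[:, j] += lam * (col_sums[j] - est[:, j].sum()) / est.shape[0]

print("row-sum error:", np.abs(est.sum(axis=1) - row_sums).max())
print("col-sum error:", np.abs(est.sum(axis=0) - col_sums).max())
# With only two projection directions the solution is not unique, so 'est' matches
# the sums but generally not the original entries.
```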

Relevance: 10.00%

Abstract:

A new and efficient approach to constructing a 3D wire-frame of an object from its orthographic projections is described. Two or more input projections may be used, and they can include regular and complete auxiliary views. Each view may contain linear, circular and other conic sections. The output is a 3D wire-frame that is consistent with the input views. The approach can handle auxiliary views containing curved edges. This generality derives from a new technique that constructs 3D vertices from the input 2D vertices, as opposed to the coordinate matching prevalent in current art: 3D vertices are constructed by projecting the 2D vertices in a pair of views onto the common line of the two views. The construction of 3D edges also does not require adding silhouette and tangential vertices and subsequently splitting edges in the views; the concepts of complete edges and n-tuples are introduced to obviate this need. Entities corresponding to a 3D edge in each view are first identified, and the 3D edges are then constructed from the information available with the matching 2D edges. This allows the algorithm to handle conic sections that are not parallel to any of the viewing directions. The localization of effort in constructing 3D edges is the source of the algorithm's efficiency, as it does not process all potential 3D edges. The working of the algorithm on typical drawings is illustrated. (C) 2011 Elsevier Ltd. All rights reserved.
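
For context, here is a small sketch of the conventional coordinate-matching step that the abstract contrasts with its common-line construction: candidate 3D vertices are formed by pairing front-view points (x, z) and top-view points (x, y) that share the same x coordinate. This illustrates the baseline only, not the paper's algorithm, and the cube example is an assumption.

```python
import numpy as np

def match_vertices(front_xz, top_xy, tol=1e-6):
    """Conventional baseline: pair view points that agree on the shared x coordinate."""
    candidates = []
    for fx, fz in front_xz:
        for tx, ty in top_xy:
            if abs(fx - tx) < tol:           # same x in both views => candidate 3D vertex
                candidates.append((fx, ty, fz))
    return np.array(candidates)

# Views of a unit cube (vertices projected onto the front and top planes).
cube = np.array([(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
front = {(x, z) for x, y, z in cube}         # front view drops y
top = {(x, y) for x, y, z in cube}           # top view drops z

candidates = match_vertices(sorted(front), sorted(top))
print(f"{len(candidates)} candidate 3D vertices (the 8 true cube corners among them)")
```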