945 results for piece-wise polynomials
Abstract:
Simplified equations are derived for granular flow in the 'dense' limit, where the volume fraction is close to that for dynamical arrest, and the 'shallow' limit, where the stream-wise length for flow development ($L$) is large compared with the cross-stream height ($h$). The mass and diameter of the particles are set equal to 1 in the analysis without loss of generality. In the dense limit, the equations are simplified by taking advantage of the power-law divergence of the pair distribution function, $\chi \propto (\phi_{ad} - \phi)^{-\alpha}$, and the faster divergence of the derivative $\rho (d\chi/d\rho) \sim \phi (d\chi/d\phi)$, where $\rho$ and $\phi$ are the density and volume fraction, and $\phi_{ad}$ is the volume fraction for arrested dynamics. When the height $h$ is much larger than the conduction length, the energy equation reduces to an algebraic balance between the rates of production and dissipation of energy, and the stress is proportional to the square of the strain rate (Bagnold law). In the shallow limit, the stress reduces to a simplified Bagnold stress, in which all components of the stress are proportional to $(\partial u_x/\partial y)^2$, the square of the cross-stream ($y$) derivative of the stream-wise ($x$) velocity. In the simplified equations for dense shallow flows, the inertial terms in the $y$ momentum equation are neglected because they are $O(h/L)$ smaller than the divergence of the stress. The resulting model contains two equations: a mass conservation equation, which reduces to a solenoidal condition on the velocity in the incompressible limit, and a stream-wise momentum equation, which contains just one parameter $B$, a combination of the Bagnold coefficients and their derivatives with respect to volume fraction. The leading-order dense shallow flow equations, as well as the first correction due to density variations, are analysed for two representative flows.
The first is the development from a plug flow to a fully developed Bagnold profile for flow down an inclined plane. The analysis shows that the flow development length is $\bar{\rho} h^3 / B$, where $\bar{\rho}$ is the mean density, and this length is numerically estimated from previous simulation results. The second example is the development of the boundary layer at the base of the flow when a plug flow (with a slip condition at the base) encounters a rough base, in the limit where the momentum boundary layer thickness is small compared with the flow height. Analytical solutions can be found only when the stream-wise velocity far from the surface varies as $x^F$, where $x$ is the stream-wise distance from the start of the rough base and $F$ is an exponent. The boundary layer thickness increases as $(l^2 x)^{1/3}$ for all values of $F$, where the length scale $l = \sqrt{2B/\bar{\rho}}$. The analysis reveals important differences between granular flows and the flows of Newtonian fluids. The Reynolds number (the ratio of inertial and viscous terms) turns out to depend only on the layer height and the Bagnold coefficients, and is independent of the flow velocity, because both the inertial terms in the conservation equations and the divergence of the stress depend on the square of the velocity or velocity gradients. The compressibility number (the ratio of the variation in volume fraction to the mean volume fraction) is independent of the flow velocity and layer height, and depends only on the volume fraction and the Bagnold coefficients.
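The two scalings quoted above are simple enough to evaluate numerically. A minimal sketch, with illustrative (assumed) values of $\bar{\rho}$, $h$ and $B$ in the particle units of the abstract (mass and diameter set to 1):

```python
from math import sqrt

def development_length(rho_bar, h, B):
    """Flow development length, scaling as rho_bar * h**3 / B."""
    return rho_bar * h**3 / B

def boundary_layer_thickness(x, rho_bar, B):
    """delta(x) ~ (l**2 * x)**(1/3) with length scale l = sqrt(2*B/rho_bar)."""
    l = sqrt(2.0 * B / rho_bar)
    return (l**2 * x) ** (1.0 / 3.0)

# Illustrative (assumed) values, not taken from the paper
rho_bar, h, B = 1.0, 20.0, 50.0
print(development_length(rho_bar, h, B))       # 160.0 (scaling estimate)
print(boundary_layer_thickness(5.0, rho_bar, B))
```

Both quantities are scaling estimates, so the absolute numbers only matter relative to one another.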
Abstract:
Let $Z_n$ denote the ring of integers modulo $n$. A permutation of $Z_n$ is a sequence of $n$ distinct elements of $Z_n$. Addition and subtraction of two permutations are defined element-wise. In this paper we consider two extremal problems on permutations of $Z_n$: the maximum size of a collection of permutations such that the sum of any two distinct permutations in the collection is again a permutation, and the maximum size of a collection of permutations such that no sum of two distinct permutations in the collection is a permutation. Let these sizes be denoted by $s(n)$ and $t(n)$, respectively. The case when $n$ is even is trivial in both cases, with $s(n) = 1$ and $t(n) = n!$. For $n$ odd, we prove $n\phi(n)/2^k \le s(n) \le n! \cdot 2^{-(n-1)/2}/((n-1)/2)!$ and $2^{(n-1)/2} \cdot ((n-1)/2)! \le t(n) \le 2^k \cdot (n-1)!/\phi(n)$, where $k$ is the number of distinct prime divisors of $n$ and $\phi$ is Euler's totient function.
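The even-$n$ case described above can be verified by brute force. A small sketch (exhaustive search, feasible only for tiny $n$) checking that for $n = 4$ no two distinct permutations sum to a permutation, while for $n = 3$ such pairs exist:

```python
from itertools import permutations, combinations

def sums_to_permutation(p, q, n):
    """True if the element-wise sum (mod n) of p and q is again a permutation of Z_n."""
    return len({(a + b) % n for a, b in zip(p, q)}) == n

def good_pairs(n):
    """All unordered pairs of distinct permutations of Z_n whose sum is a permutation."""
    perms = list(permutations(range(n)))
    return [(p, q) for p, q in combinations(perms, 2) if sums_to_permutation(p, q, n)]

print(len(good_pairs(4)))       # 0: for even n the sum is never a permutation
print(len(good_pairs(3)) > 0)   # True: such pairs exist for odd n
```

The even case follows from a sum argument: a permutation of $Z_n$ sums to $n(n-1)/2$, while the element-wise sum of two permutations sums to $n(n-1) \equiv 0 \pmod n$, and these disagree when $n$ is even.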
Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP for its commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize the major independent parameters of the process, such as plunge depth, tool rotation speed and traverse speed. Tensile properties of the optimum friction stir processed sample were correlated with microstructural characterization carried out using Scanning Electron Microscopy (SEM) and Electron Back-Scattered Diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which was attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between workpiece, tool and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone that provided better mechanical properties than the base metal. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Fix a prime $p$. Given a positive integer $k$, a vector of positive integers $\Delta = (\Delta_1, \Delta_2, \ldots, \Delta_k)$ and a function $\Gamma : F_p^k \to F_p$, we say that a function $P : F_p^n \to F_p$ is $(k, \Delta, \Gamma)$-structured if there exist polynomials $P_1, P_2, \ldots, P_k : F_p^n \to F_p$ with each $\deg(P_i) \le \Delta_i$ such that for all $x \in F_p^n$, $P(x) = \Gamma(P_1(x), P_2(x), \ldots, P_k(x))$. For instance, an $n$-variate polynomial over the field $F_p$ of total degree $d$ factors nontrivially exactly when it is $(2, (d-1, d-1), \mathrm{prod})$-structured, where $\mathrm{prod}(a, b) = a \cdot b$. We show that if $p > d$, then for any fixed $k$, $\Delta$, $\Gamma$, we can decide whether a given polynomial $P(x_1, x_2, \ldots, x_n)$ of degree $d$ is $(k, \Delta, \Gamma)$-structured and, if so, find a witnessing decomposition. The algorithm takes $\mathrm{poly}(n)$ time. Our approach is based on higher-order Fourier analysis.
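The definition can be checked directly in small cases. A toy example (just the notion of being $(2, (d-1, d-1), \mathrm{prod})$-structured, not the paper's algorithm) over $F_5$, verifying a factorization pointwise:

```python
p = 5  # the prime; all arithmetic is over F_5

# Candidate decomposition P = prod(P1, P2) with deg(P1), deg(P2) <= d - 1 = 1
def P1(x, y): return (x + y) % p
def P2(x, y): return (x + 2 * y) % p
def P(x, y):  return (x * x + 3 * x * y + 2 * y * y) % p

prod = lambda a, b: (a * b) % p

# (k, Delta, Gamma)-structured means P(x) = Gamma(P1(x), ..., Pk(x)) at ALL points
structured = all(P(x, y) == prod(P1(x, y), P2(x, y))
                 for x in range(p) for y in range(p))
print(structured)  # True: P factors as (x + y)(x + 2y) over F_5
```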
Abstract:
We present a physics-based closed-form small-signal non-quasi-static (NQS) model for a long-channel common double-gate MOSFET (CDG), taking into account the asymmetry that may prevail between the gate oxide thicknesses. We use the unique quasi-linear relationship between the surface potentials along the channel to solve the governing continuity equation (CE) and develop analytical expressions for the Y parameters. The Bessel-function-based solution of the CE is simplified in the form of polynomials so that it can be easily implemented in any circuit simulator. The model shows good agreement with TCAD simulation up to at least four times the cut-off frequency for different device geometries and bias conditions.
Abstract:
The concentration of greenhouse gases (GHG) in the atmosphere has been increasing rapidly during the last century due to ever-increasing anthropogenic activities, resulting in significant increases in the temperature of the Earth and causing global warming. Major sources of GHG are forests (due to human-induced land cover changes leading to deforestation), power generation (burning of fossil fuels), transportation (burning of fossil fuels), agriculture (livestock, farming, rice cultivation and burning of crop residues), water bodies (wetlands), industry and urban activities (building, construction, transport, solid and liquid waste). Aggregation of GHG (CO2 and non-CO2 gases), in terms of carbon dioxide equivalent (CO2e), indicates the GHG footprint. The GHG footprint is thus a measure of the impact of human activities on the environment in terms of the amount of greenhouse gases produced. This study focuses on accounting for the amounts of three important greenhouse gases, namely carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), and thereby developing the GHG footprint of the major cities in India. National GHG inventories have been used for quantification of sector-wise greenhouse gas emissions. Country-specific emission factors are used wherever they are available; default emission factors from IPCC guidelines are used where no country-specific emission factors exist. The emission of each greenhouse gas is estimated by multiplying fuel consumption by the corresponding emission factor. The current study estimates the GHG footprint, or GHG emissions in terms of CO2 equivalent, for major Indian cities and explores the linkages with population and GDP. The GHG footprints (aggregation of carbon dioxide equivalent emissions of GHGs) of Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad are found to be 38,633.2 Gg, 22,783.08 Gg, 14,812.10 Gg, 22,090.55 Gg, 19,796.5 Gg, 13,734.59 Gg and 9,124.45 Gg CO2 eq., respectively.
The major contributing sectors are transportation (contributing 32%, 17.4%, 13.3%, 19.5%, 43.5%, 56.86% and 25%), the domestic sector (contributing 30.26%, 37.2%, 42.78%, 39%, 21.6%, 17.05% and 27.9%) and industry (contributing 7.9%, 7.9%, 17.66%, 20.25%, 12.31%, 11.38% and 22.41%) of the total emissions in Delhi, Greater Mumbai, Kolkata, Chennai, Greater Bangalore, Hyderabad and Ahmedabad, respectively. Chennai emits 4.79 t of CO2 equivalent emissions per capita, the highest among all the cities, followed by Kolkata, which emits 3.29 t of CO2 equivalent emissions per capita. Chennai also emits the highest CO2 equivalent emissions per unit GDP (2.55 t CO2 eq./Lakh Rs.), followed by Greater Bangalore, which emits 2.18 t CO2 eq./Lakh Rs. (C) 2015 Elsevier Ltd. All rights reserved.
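The accounting step described above (fuel consumption multiplied by an emission factor, then aggregation to CO2 equivalent) can be sketched as follows; the GWP100 weights are the IPCC AR4 values, and the inventory numbers are purely illustrative, not taken from the study:

```python
# GWP100 weights (IPCC AR4); the study may use different aggregation factors
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

def gas_emission(fuel_consumption, emission_factor):
    """Emission of one gas = activity (fuel consumed) x emission factor."""
    return fuel_consumption * emission_factor

def ghg_footprint(emissions_gg):
    """Aggregate per-gas emissions (Gg) into a CO2-equivalent footprint (Gg CO2e)."""
    return sum(mass * GWP[gas] for gas, mass in emissions_gg.items())

# Hypothetical sector inventory for one city, Gg per year
sector = {"CO2": 1000.0, "CH4": 12.0, "N2O": 0.5}
print(ghg_footprint(sector))  # 1000 + 12*25 + 0.5*298 = 1449.0 Gg CO2e
```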
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Abstract:
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, such as selective plane illumination microscopy, localization microscopy and STED. (C) 2015 Author(s).
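Since the potential function is built from B-splines, the underlying piecewise-polynomial basis is easy to reproduce. A minimal sketch of the standard Cox-de Boor recursion (a textbook construction, not the paper's code), checking the partition-of-unity property of the cubic basis:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the degree-k B-spline basis function N_{i,k}(t)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = list(range(8))  # uniform knot vector 0..7
# The cubic (degree-3) basis functions form a partition of unity on [3, 4)
t = 3.5
s = sum(bspline_basis(i, 3, t, knots) for i in range(len(knots) - 4))
print(round(s, 10))  # 1.0
```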
Abstract:
A silver ion (Ag+)-triggered thixotropic metallo(organo)gel of p-pyridyl-appended oligo(p-phenylenevinylene) derivatives (OPVs) is reported for the first time. Solubilization of single-walled carbon nanohorns (SWCNHs) in solutions of the pure OPVs, as well as in the metallogels, mediated by pi-pi interactions has also been achieved. In situ fabrication of silver nanoparticles (AgNPs) in the SWCNH-doped dihybrid gel leads to the formation of a trihybrid metallogel. The mechanical strength of the metallogels could be increased step-wise in the order: freshly prepared gel
Abstract:
This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades, and how it affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage, such as matrix micro-cracking, delamination and fiber breakage. There is a need to study the behavior of the composite rotor system under the key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into a 2-D analysis of cross-sections and a non-linear 1-D analysis along the beam reference curve. Two different tools are used here for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, matrix failure in compression and matrix failure in tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking modeling is performed in two different approaches: (i) element-wise, and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model, which illustrate the effect of matrix cracking on cross-sectional stiffness by varying the applied cyclic
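The damage-variable idea above (a scalar $d$ degrading stiffness) can be illustrated with the simplest continuum-damage scaling; this is a generic sketch, not the VABS recovery procedure, and the matrix values are hypothetical:

```python
def degraded_stiffness(K, d):
    """Scale a stiffness matrix by (1 - d) for a scalar damage variable d in [0, 1].
    Generic continuum-damage sketch; the paper's VABS module recovers the actual
    reduced cross-sectional stiffness from the damage field."""
    return [[(1.0 - d) * k for k in row] for row in K]

# Hypothetical 2x2 sub-block of a cross-sectional stiffness matrix
K = [[4.0e6, 1.0e5],
     [1.0e5, 2.0e6]]
print(degraded_stiffness(K, 0.3))  # entries reduced to 70% of their pristine values
```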
Abstract:
This study concerns the relationship between the power-law recession coefficient $k$ (in $-dQ/dt = kQ^{\alpha}$, $Q$ being discharge at the basin outlet) and past average discharge $Q_N$ (where $N$ is the temporal distance from the center of the selected time span in the past to the recession peak), which serves as a proxy for the past storage state of the basin. The strength of the $k$-$Q_N$ relationship is characterized by the coefficient of determination $R_N^2$, which is expected to indicate the basin's ability to hold water for $N$ days. The main objective of this study is to examine how the $R_N^2$ value of a basin is related to its physical characteristics. For this purpose, we use streamflow data from 358 basins in the United States and select 18 physical parameters for each basin. First, we transform the physical parameters into mutually independent principal components. Then we employ the multiple linear regression method to construct a model of $R_N^2$ in terms of the principal components. Furthermore, we employ the step-wise multiple linear regression method to identify the dominant catchment characteristics that influence $R_N^2$ and their directions of influence. Our results indicate that $R_N^2$ is appreciably related to catchment characteristics. In particular, it is noteworthy that the coefficient of determination of the relationship between $R_N^2$ and the catchment characteristics is 0.643 for $N = 45$. We found that the topographical characteristics of a basin are the most dominant factors controlling the value of $R_N^2$. Our results suggest that it may be possible to assess the water holding capacity of a basin from just a few of its physical characteristics. (C) 2015 Elsevier B.V. All rights reserved.
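The recession relation $-dQ/dt = kQ^{\alpha}$ is linear in log space, so $k$ and $\alpha$ can be recovered by least squares. A minimal sketch on synthetic data with known (assumed) $k$ and $\alpha$; the study's own estimation procedure may differ:

```python
from math import log, exp

def recession_fit(Q, dt):
    """Estimate k, alpha in -dQ/dt = k*Q**alpha by least squares on
    log(-dQ/dt) vs log(Q), using finite differences and midpoint discharge."""
    xs, ys = [], []
    for i in range(len(Q) - 1):
        dQdt = (Q[i + 1] - Q[i]) / dt
        Qmid = 0.5 * (Q[i] + Q[i + 1])
        if dQdt < 0:
            xs.append(log(Qmid))
            ys.append(log(-dQdt))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    k = exp(my - alpha * mx)
    return k, alpha

# Synthetic recession from the exact solution of -dQ/dt = k*Q**alpha (alpha != 1)
k_true, a_true, Q0, dt = 0.05, 1.5, 10.0, 0.01
Q = [(Q0 ** (1 - a_true) + k_true * (a_true - 1) * i * dt) ** (1 / (1 - a_true))
     for i in range(200)]
k_est, a_est = recession_fit(Q, dt)
print(k_est, a_est)  # close to 0.05 and 1.5
```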
Abstract:
Clustering techniques which can handle incomplete data have become increasingly important due to varied applications in marketing research, medical diagnosis and survey data analysis. Existing techniques cope with missing values either by data modification/imputation or by partial distance computation, which is often unreliable depending on the number of features available. In this paper, we propose a novel approach for clustering data with missing values, which performs the task by Symmetric Non-negative Matrix Factorization (SNMF) of a complete pair-wise similarity matrix computed from the given incomplete data. To accomplish this, we define a novel similarity measure based on the Average Overlap similarity metric, which can effectively handle missing values without modification of the data. Further, the similarity measure is more reliable than partial distances and inherently possesses the properties required to perform SNMF. Experimental evaluation on real-world datasets demonstrates that the proposed approach is efficient and scalable, and shows significantly better performance than the existing techniques.
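The SNMF step can be sketched with damped multiplicative updates on a small hand-built similarity matrix. This is a generic symmetric-NMF sketch; the paper's Average-Overlap-based similarity for incomplete data is not reproduced here:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def snmf(S, r, iters=1000, seed=0):
    """Symmetric NMF: S ~ H H^T via the damped multiplicative update
    H <- H * (1/2 + (S H) / (2 H H^T H))."""
    rng = random.Random(seed)
    n = len(S)
    H = [[rng.random() for _ in range(r)] for _ in range(n)]
    for _ in range(iters):
        Ht = [list(c) for c in zip(*H)]
        num = matmul(S, H)
        den = matmul(matmul(H, Ht), H)
        H = [[H[i][j] * (0.5 + 0.5 * num[i][j] / (den[i][j] + 1e-12))
              for j in range(r)] for i in range(n)]
    return H

# Hand-built pairwise similarity: two clusters, {0, 1} and {2, 3}
S = [[1.0, 0.9, 0.0, 0.0],
     [0.9, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.8],
     [0.0, 0.0, 0.8, 1.0]]
H = snmf(S, r=2)
R = matmul(H, [list(c) for c in zip(*H)])
err = sum((S[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(4)) ** 0.5
print(err)  # small residual: H H^T recovers the block structure
```

Cluster memberships are then read off from the dominant column of each row of H.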
Abstract:
The standard procedure for groundwater resource estimation in India to date is based on the specific yield parameters of each rock type (lithology) derived through pumping test analysis. Using the change in groundwater level, the specific yield, and the area of influence, the groundwater storage change can be estimated. However, terrain conditions in the form of geomorphological variations have an important bearing on the net groundwater recharge. In this study, an attempt was made to use both lithology and geomorphology as input variables to estimate the recharge from different sources in each lithology unit influenced by the geomorphic conditions (lith-geom), separately for each season. The study provides a methodological approach for the evaluation of groundwater in a semi-arid hard rock terrain in Tirunelveli, Tamil Nadu, India. While characterizing the gneissic rock, it was found that the geomorphological variations in the gneissic rock due to weathering and deposition behaved differently with respect to aquifer recharge. The three different geomorphic units identified in gneissic rock (pediplain shallow weathered (PPS), pediplain moderately weathered (PPM), and buried pediplain moderate (BPM)) showed a significant variation in recharge conditions among themselves. It was found that Peninsular gneiss gives a net recharge value of 0.13 m/year/unit area when considered as a single unit with respect to lithology, whereas the same area considered with lith-geom classes gives recharge values between 0.1 and 0.41 m/year, a markedly different assessment. It is also found that the stage of development (SOD) for each lith-geom unit in Peninsular gneiss varies from 168 to 230 %, whereas the SOD is 223 % for the lithology as a single unit.
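The storage-change arithmetic underlying the specific-yield method can be written down directly. A minimal sketch with hypothetical numbers, not values from the Tirunelveli study:

```python
def storage_change_m3(water_level_rise_m, specific_yield, area_m2):
    """Groundwater storage change = water-level change x specific yield x area."""
    return water_level_rise_m * specific_yield * area_m2

def recharge_per_unit_area(storage_change, area_m2):
    """Recharge expressed per unit area (m), as in the abstract's m/year figures."""
    return storage_change / area_m2

# Hypothetical: 2 m seasonal water-level rise, Sy = 0.02 (hard-rock aquifer), 5 km^2
dS = storage_change_m3(2.0, 0.02, 5e6)
print(dS)                                # 200000.0 m^3
print(recharge_per_unit_area(dS, 5e6))   # 0.04 m per unit area
```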
Abstract:
Let $R$ be a (commutative) local principal ideal ring of length two, for example the ring $R = Z/p^2Z$ with $p$ prime. In this paper, we develop a theory of normal forms for similarity classes in the matrix rings $M_n(R)$ by interpreting them in terms of extensions of $R[t]$-modules. Using this theory, we describe the similarity classes in $M_n(R)$ for $n \le 4$, along with their centralizers. Among these, we characterize those classes which are similar to their transposes. Non-self-transpose classes are shown to exist for all $n > 3$. When $R$ has finite residue field of order $q$, we enumerate the similarity classes and the cardinalities of their centralizers as polynomials in $q$. Surprisingly, the polynomials representing the number of similarity classes in $M_n(R)$ turn out to have non-negative integer coefficients.
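The smallest instance, $M_2(Z/4Z)$, is tiny enough to enumerate by brute force. A sketch (exhaustive conjugation, unrelated to the paper's normal-form theory) partitioning all 256 matrices into similarity classes:

```python
from itertools import product

MOD = 4  # R = Z/4Z, a local principal ideal ring of length two

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % MOD
                       for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % MOD

mats = [((a, b), (c, d)) for a, b, c, d in product(range(MOD), repeat=4)]
units = [g for g in mats if det(g) % 2 == 1]  # invertible iff det is a unit (odd)

I = ((1, 0), (0, 1))
inv = {g: next(h for h in units if mat_mul(g, h) == I) for g in units}

seen, orbit_sizes = set(), []
for A in mats:
    if A in seen:
        continue
    orbit = {mat_mul(mat_mul(g, A), inv[g]) for g in units}  # conjugacy orbit
    seen |= orbit
    orbit_sizes.append(len(orbit))

print(len(orbit_sizes))  # number of similarity classes in M_2(Z/4Z)
print(sum(orbit_sizes))  # 256: the orbits partition all of M_2(Z/4Z)
```

By orbit-stabilizer, each orbit size divides $|GL_2(Z/4Z)| = 96$.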
Abstract:
We consider the problem of representing a univariate polynomial $f(x)$ as a sum of powers of low-degree polynomials. We prove a lower bound of $\Omega(\sqrt{d/t})$ for writing an explicit univariate degree-$d$ polynomial $f(x)$ as a sum of powers of degree-$t$ polynomials.
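A concrete instance of such a representation is easy to verify with coefficient arithmetic. A small sketch (an example of the representation itself, not the paper's lower-bound argument) writing $2x^4 + 2$ as a sum of two squares of degree-2 polynomials:

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_pow(a, e):
    out = [1]
    for _ in range(e):
        out = poly_mul(out, a)
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

# f(x) = 2x^4 + 2 as a sum of powers of degree-2 polynomials:
# (x^2 + 1)^2 + (x^2 - 1)^2
f = poly_add(poly_pow([1, 0, 1], 2), poly_pow([-1, 0, 1], 2))
print(f)  # [2, 0, 0, 0, 2], i.e. 2 + 2x^4
```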
Abstract:
In this paper, we present solutions of 1-D and 2-D non-linear partial differential equations with initial conditions. We obtain the solutions in the time domain using two methods: we first solve the equations using a Fourier spectral approximation in the spatial domain, and we then compare the results with a spatial approximation using orthogonal functions, such as Legendre or Chebyshev polynomials, as basis functions. The advantages and the applicability of the two methods for different types of problems are brought out by considering 1-D and 2-D nonlinear partial differential equations, namely the Korteweg-de Vries and nonlinear Schrodinger equations with different potential functions. (C) 2015 Elsevier Ltd. All rights reserved.
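The Fourier spectral approximation rests on differentiating in coefficient space. A minimal pure-Python sketch (naive DFT, fine for small N; a practical solver would use an FFT) showing spectral accuracy of the derivative of a periodic function:

```python
import cmath
from math import pi, sin, cos

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def spectral_derivative(u):
    """Differentiate a periodic sample on [0, 2*pi) by multiplying
    each Fourier mode by i*k (with signed wavenumbers)."""
    N = len(u)
    U = dft(u)
    ik = [1j * (k if k <= N // 2 else k - N) for k in range(N)]
    ik[N // 2] = 0  # zero the Nyquist mode for even N
    return idft([c * f for c, f in zip(U, ik)])

N = 16
x = [2 * pi * n / N for n in range(N)]
u = [sin(xi) for xi in x]
du = spectral_derivative(u)
err = max(abs(d - cos(xi)) for d, xi in zip(du, x))
print(err)  # near machine precision: the derivative of sin is resolved exactly
```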