959 results for Finite size scaling
Abstract:
In this paper, we consider the optimization of the cross-section profile of a cantilever beam under deformation-dependent loads. Such loads are encountered in plants and trees, particularly in cereal crops such as wheat and corn. The wind loads acting on the grain-bearing spike of a wheat stalk vary with the orientation of the spike as the stalk bends; this bending, and the ensuing change in orientation, depend on the deformation of the plant under the same load. The uprooting of wheat stalks under wind loads is an unresolved problem in genetically modified dwarf wheat. Although it was thought that the dwarf varieties would acquire increased resistance to uprooting, it was found that the dwarf wheat plants selectively decreased their Young's modulus in order to become compliant. The motivation of this study is to investigate why wheat plants prefer compliant stems. We analyze this by seeking an optimal shape of the wheat plant's stem, modeled as a cantilever beam, taking the large deflection of the stem into account with the help of co-rotational finite element beam modeling. The criteria considered here include minimum moment at the fixed ground support, adequate stiffness and strength, and the volume of material. The result reported here is an example of flexibility, rather than stiffness, leading to increased strength.
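For illustration, the deformation-dependent loading described above can be reproduced in a minimal large-deflection model: a tip load whose direction follows the rotating tip of an elastica cantilever. The sketch below is not the paper's co-rotational finite element formulation, and EI, L and P are placeholder values; it only shows how the load orientation and the deformation must be solved together.

```python
# Minimal sketch: large-deflection cantilever with a deformation-dependent
# (follower) tip load. NOT the paper's co-rotational FE model; EI, L and P
# are placeholder values chosen for illustration only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

EI, L, P = 1.0, 1.0, 1.0   # bending stiffness, length, tip-load magnitude

def integrate(M0, F):
    """Integrate theta' = M/EI, M' = Fx*sin(theta) - Fy*cos(theta) from s = 0 to L."""
    def rhs(s, y):
        theta, M = y
        return [M / EI, F[0] * np.sin(theta) - F[1] * np.cos(theta)]
    return solve_ivp(rhs, (0.0, L), [0.0, M0], rtol=1e-8)

def tip_angle(F):
    """Shooting: find the base moment M0 so the bending moment vanishes at the tip."""
    resid = lambda M0: integrate(M0, F).y[1, -1]
    M0 = brentq(resid, -10 * P * L, 10 * P * L)
    return integrate(M0, F).y[0, -1]            # tip rotation theta(L)

# Fixed-point iteration: the load direction depends on the (unknown) tip rotation.
theta_L = 0.0
for _ in range(50):
    F = P * np.array([-np.sin(theta_L), np.cos(theta_L)])  # force normal to tip tangent
    theta_new = tip_angle(F)
    if abs(theta_new - theta_L) < 1e-8:
        break
    theta_L = theta_new

print(f"converged tip rotation: {np.degrees(theta_L):.2f} deg")
```

The outer loop is the essential point: because the load follows the deformation, equilibrium and load orientation have to be iterated to a self-consistent state, which is exactly what the co-rotational formulation handles within each load step.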
Abstract:
Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (In press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of species ranging from endangered to commercially harvested. However, pitfalls are predicted when the effective size is large, as estimates require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes around 5,000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low-frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method was sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that could have been inadvertently sampled from cryptic species or from non-breeding immigrants belonging to genetically separate populations. Simulations demonstrated the value of this approach in the Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.
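As background, the linkage-disequilibrium approach rests on the approximation that, for unlinked loci in a sample of S individuals, E[r^2] is roughly 1/(3*Ne) + 1/S, so Ne can be backed out from the mean squared inter-locus correlation. The sketch below applies this simplified textbook form to toy genotype dosages; published estimators (e.g. LDNe/NeEstimator) add bias corrections and the low-frequency-allele screening discussed above, which are omitted here.

```python
# Minimal sketch of the linkage-disequilibrium (LD) principle behind Ne estimation,
# using the simplified approximation E[r^2] ~ 1/(3*Ne) + 1/S for unlinked loci.
# Real estimators add bias corrections and allele-frequency screening omitted here.
import numpy as np

def mean_r2(dosage):
    """Mean squared correlation of allele dosages over all locus pairs.
    dosage: (S individuals x L loci) array of 0/1/2 reference-allele counts."""
    r = np.corrcoef(dosage.T)            # L x L correlation matrix
    iu = np.triu_indices_from(r, k=1)
    return np.nanmean(r[iu] ** 2)

def ne_from_ld(dosage):
    S = dosage.shape[0]
    r2_adj = mean_r2(dosage) - 1.0 / S   # subtract the sampling-noise expectation
    return np.inf if r2_adj <= 0 else 1.0 / (3.0 * r2_adj)

# Toy data: S diploid individuals at L unlinked biallelic loci drawn from idealized
# allele frequencies (placeholder values). Because the loci are truly independent,
# the estimate should be very large or infinite -- illustrating why large samples
# and many loci are needed when Ne is big.
rng = np.random.default_rng(0)
S, L = 5000, 7
freqs = rng.uniform(0.2, 0.8, size=L)
dosage = rng.binomial(2, freqs, size=(S, L))

print(f"Ne estimate (toy data): {ne_from_ld(dosage):.0f}")
```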
Abstract:
The behaviour of the slotted ALOHA satellite channel with a finite buffer at each of the user terminals is studied. Approximate relationships between the queuing delay, overflow probabilities and buffer size are derived as functions of the system input parameters (i.e. the number of users, the traffic intensity, the transmission and the retransmission probabilities) for two cases found in the literature: the symmetric case (same transmission and retransmission probabilities), and the asymmetric case (transmission probability far greater than the retransmission probability). For comparison, the channel performance with an infinite buffer is also derived. Additionally, the stability condition for the system is defined in the latter case. The analysis carried out in the paper reveals that the queuing delays are quite significant, especially under high traffic conditions.
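The finite-buffer model analysed above can also be explored by simulation. The sketch below is a Monte Carlo stand-in for the symmetric case (equal transmission and retransmission probabilities) with placeholder parameter values; the paper itself derives analytical approximations rather than simulating.

```python
# Monte Carlo sketch of slotted ALOHA with a finite buffer at each terminal
# (symmetric case: the same probability is used for new and backlogged packets).
# Parameter values are placeholders; the paper's results are analytical.
import numpy as np

rng = np.random.default_rng(1)
N, B = 20, 5            # number of users, buffer size (packets)
lam, p = 0.02, 0.1      # per-slot arrival probability, (re)transmission probability
T = 200_000             # simulated slots

queue = np.zeros(N, dtype=int)
fifo = [[] for _ in range(N)]           # arrival slot of each buffered packet (FIFO)
arrived = delivered = overflow = 0
delay_sum = 0

for t in range(T):
    # Arrivals: at most one new packet per user per slot.
    for i in np.flatnonzero(rng.random(N) < lam):
        arrived += 1
        if queue[i] < B:
            queue[i] += 1
            fifo[i].append(t)
        else:
            overflow += 1               # buffer full: packet lost
    # Every non-empty terminal transmits with probability p; success only if exactly one.
    attempts = np.flatnonzero((queue > 0) & (rng.random(N) < p))
    if attempts.size == 1:
        i = attempts[0]
        queue[i] -= 1
        delay_sum += t - fifo[i].pop(0)  # delay from arrival to successful delivery
        delivered += 1

print(f"throughput         : {delivered / T:.4f} packets/slot")
print(f"overflow probability: {overflow / max(arrived, 1):.4f}")
print(f"mean delay          : {delay_sum / max(delivered, 1):.1f} slots")
```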
Abstract:
Reverse osmosis (RO) brine produced at a full-scale coal seam gas (CSG) water treatment facility was characterized with spectroscopic and other analytical techniques. A number of potential scalants, including silica, calcium, magnesium, sulphates and carbonates, all of which were present in dissolved and non-dissolved forms, were characterized. The presence of spherical particles with a size range of 10–1000 nm and aggregates of 1–10 microns was confirmed by transmission electron microscopy (TEM). Those particulates contained the following metals in decreasing order: K, Si, Sr, Ca, B, Ba, Mg, P, and S. Characterization showed that nearly one-third of the total silicon in the brine was present in the particulates. Further, analysis of the RO brine suggested that supersaturation and precipitation of metal carbonates and sulphates should take place during the RO process and could be responsible for subsequently capturing silica in the solid phase. However, the precipitation of crystalline carbonates and sulphates is complex. X-ray diffraction analysis did not confirm the presence of common calcium carbonates or sulphates but instead showed the presence of a suite of complex minerals, to which amorphous silica and/or silica-rich compounds could have adhered. A filtration study showed that the majority of the siliceous particles were less than 220 nm in size, but could still potentially be captured using a low molecular weight ultrafiltration membrane.
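The supersaturation argument above is conventionally expressed through a saturation index, SI = log10(IAP/Ksp), with SI > 0 indicating a thermodynamic driving force for precipitation. The sketch below shows the calculation for calcite with assumed free-ion activities; the numbers are placeholders, not values from the study.

```python
# Minimal sketch of a saturation-index check of the kind used to argue that
# carbonates/sulphates should precipitate in RO brine: SI = log10(IAP / Ksp).
# The activities below are illustrative placeholders, not values from the study.
import math

def saturation_index(activity_cation, activity_anion, log_ksp):
    iap = activity_cation * activity_anion        # ion activity product
    return math.log10(iap) - log_ksp

# Example: calcite, CaCO3 (log Ksp ~ -8.48 at 25 C), with assumed free-ion activities.
si = saturation_index(activity_cation=2e-3, activity_anion=1e-4, log_ksp=-8.48)
print(f"SI(calcite) = {si:.2f}  ->  {'supersaturated' if si > 0 else 'undersaturated'}")
```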
An FETI-preconditioned conjugate gradient method for large-scale stochastic finite element problems
Abstract:
In the spectral stochastic finite element method for analyzing an uncertain system, the uncertainty is represented by a set of random variables, and a quantity of interest such as the system response is considered as a function of these random variables. Consequently, the underlying Galerkin projection yields a block system of deterministic equations where the blocks are sparse but coupled. The solution of this algebraic system of equations rapidly becomes challenging when the size of the physical system and/or the level of uncertainty is increased. This paper addresses this challenge by presenting a preconditioned conjugate gradient method for such block systems, where the preconditioning step is based on the dual-primal finite element tearing and interconnecting (FETI-DP) method equipped with a Krylov subspace reuse technique for accelerating the iterative solution of systems with multiple and repeated right-hand sides. Preliminary performance results on a Linux cluster suggest that the proposed solution method is numerically scalable and demonstrate its potential for making the uncertainty quantification of realistic systems tractable.
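The overall iteration can be pictured with a generic preconditioned conjugate gradient skeleton. In the sketch below a simple Jacobi preconditioner stands in for the FETI-DP solve, and the Krylov-reuse machinery for repeated right-hand sides is omitted; it only shows where those ingredients plug in, not the paper's solver.

```python
# Generic preconditioned conjugate gradient (PCG) skeleton. In the paper the
# preconditioner application is a FETI-DP solve and Krylov information is
# recycled across the repeated right-hand sides of the stochastic block system;
# here the preconditioner is just an opaque callable.
import numpy as np

def pcg(apply_A, apply_M, b, x0=None, tol=1e-8, maxit=500):
    """Solve A x = b by CG, preconditioned by the action apply_M ~ M^{-1}."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Toy usage: a small SPD system with a Jacobi preconditioner standing in for FETI-DP.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, its = pcg(lambda v: A @ v, lambda v: v / np.diag(A), b)
print(f"converged in {its} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
```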
Abstract:
Support Vector Machines (SVMs) are hyperplane classifiers defined in a kernel-induced feature space. The data-size-dependent training time complexity of SVMs usually prohibits their use in applications involving more than a few thousand data points. In this paper we propose a novel kernel-based incremental data clustering approach and its use for scaling non-linear Support Vector Machines to handle large data sets. The clustering method introduced can find cluster abstractions of the training data in a kernel-induced feature space. These cluster abstractions are then used for selective-sampling-based training of Support Vector Machines to reduce the training time without compromising the generalization performance. Experiments done with real-world datasets show that this approach gives good generalization performance at reasonable computational expense.
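A rough stand-in for this idea (not the authors' incremental kernel clustering algorithm) can be assembled from off-the-shelf scikit-learn components: cluster the data in an approximate kernel feature space, keep a small number of representatives per cluster, and train a nonlinear SVM on the reduced set. The "farthest from centre" selection rule below is a heuristic assumption, used only to suggest how cluster abstractions can drive selective sampling.

```python
# Rough stand-in for cluster-abstraction-based selective sampling (NOT the
# authors' algorithm): cluster in an approximate RBF feature space, keep a few
# representatives per cluster, train a nonlinear SVM on the reduced set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# Approximate kernel-induced feature space, then cluster within each class.
phi = Nystroem(gamma=0.05, n_components=200, random_state=0)
Ztr = phi.fit_transform(Xtr)

keep, per_cluster = [], 20
for label in np.unique(ytr):
    idx = np.flatnonzero(ytr == label)
    km = KMeans(n_clusters=30, n_init=4, random_state=0).fit(Ztr[idx])
    d = np.linalg.norm(Ztr[idx] - km.cluster_centers_[km.labels_], axis=1)
    for c in range(km.n_clusters):
        members = idx[km.labels_ == c]
        # Heuristic: keep points farthest from their centre (likely near boundaries).
        order = np.argsort(-d[km.labels_ == c])
        keep.extend(members[order[:per_cluster]])

keep = np.array(keep)
svm = SVC(kernel="rbf", gamma=0.05, C=1.0).fit(Xtr[keep], ytr[keep])
print(f"trained on {keep.size} of {len(Xtr)} points, test accuracy {svm.score(Xte, yte):.3f}")
```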
Abstract:
We report numerical and analytic results for the spatial survival probability for fluctuating one-dimensional interfaces with Edwards-Wilkinson or Kardar-Parisi-Zhang dynamics in the steady state. Our numerical results are obtained from analysis of steady-state profiles generated by integrating a spatially discretized form of the Edwards-Wilkinson equation to long times. We show that the survival probability exhibits scaling behavior in its dependence on the system size and the "sampling interval" used in the measurement for both "steady-state" and "finite" initial conditions. Analytic results for the scaling functions are obtained from a path-integral treatment of a formulation of the problem in terms of one-dimensional Brownian motion. A "deterministic approximation" is used to obtain closed-form expressions for survival probabilities from the formally exact analytic treatment. The resulting approximate analytic results provide a fairly good description of the numerical data.
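The numerical part of such a study can be sketched as follows: integrate a spatially discretized Edwards-Wilkinson equation to a steady state and estimate the spatial survival probability, i.e. the probability that the height difference h(x0 + x) - h(x0) has not changed sign up to distance x. The discretization and parameter values below are illustrative, not those used in the paper.

```python
# Sketch: integrate a spatially discretized Edwards-Wilkinson (EW) equation to a
# steady state, then measure the spatial survival probability P(x) from the profile.
# Discretization and parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
Lsys, dt, nu, D = 512, 0.05, 1.0, 1.0
h = np.zeros(Lsys)

# Euler integration of dh/dt = nu * laplacian(h) + noise (periodic boundaries).
for _ in range(200_000):
    lap = np.roll(h, 1) + np.roll(h, -1) - 2.0 * h
    h += dt * nu * lap + np.sqrt(2.0 * D * dt) * rng.standard_normal(Lsys)

# Spatial survival probability: fraction of starting points x0 for which
# h(x0 + x) - h(x0) keeps its initial sign for all distances up to x.
max_x = Lsys // 4
surv = np.zeros(max_x)
for x0 in range(Lsys):
    diff = h[(x0 + 1 + np.arange(max_x)) % Lsys] - h[x0]
    sign0 = np.sign(diff[0]) if diff[0] != 0 else 1.0
    crossed = np.flatnonzero(np.sign(diff) != sign0)
    first = crossed[0] if crossed.size else max_x
    surv[:first] += 1.0
surv /= Lsys

print("P(x) at x = 1, 5, 20:", surv[0], surv[4], surv[19])
```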
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the beginning of the 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by the French scientist P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774 which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly. This was depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample would be a miniature of the population. It is still prevailing. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations, which was based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. The essential idea is to draw samples repeatedly from the same population, under the assumption that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
Accurate, reliable and economical methods of determining stress distributions are important for fastener joints. In the past, the contact stress problems in these mechanically fastened joints, using interference-, push- or clearance-fit pins, were solved using both inverse and iterative techniques. Inverse techniques were found to be the most efficient, but at times inadequate in the presence of asymmetries. Iterative techniques based on the finite element method of analysis have wider applications, but they have the major drawbacks of being expensive and time-consuming. In this paper an improved finite element technique for iteration is presented to overcome these drawbacks. The improved iterative technique employs a frontal solver for elimination of the variables not requiring iteration, by creation of a dummy element. This automatically results in a large reduction in computer time and in the size of the problem to be handled during iteration. Numerical results are compared with those available in the literature. The method is used to study an eccentrically located pin in a quasi-isotropic laminated plate under uniform tension.
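The elimination of variables not requiring iteration, which the paper realises through a frontal solver and a dummy element, amounts to condensing the stiffness equations onto the contact degrees of freedom. The dense-matrix sketch below illustrates that condensation abstractly as a Schur complement; it is not the frontal-solver implementation, and the matrices are toy placeholders.

```python
# Sketch of the idea behind eliminating the non-iterated unknowns: condense the
# stiffness system onto the contact degrees of freedom (a Schur complement), so
# that only the small condensed system is re-solved in each contact iteration.
# The paper achieves this with a frontal solver and a dummy element; the dense
# linear algebra below is only an illustration.
import numpy as np

def condense(K, f, contact_dofs):
    """Return the condensed stiffness/load on contact_dofs and a back-substitution map."""
    n = K.shape[0]
    c = np.asarray(contact_dofs)
    r = np.setdiff1d(np.arange(n), c)       # dofs eliminated once, before iterating
    Krr_inv_Krc = np.linalg.solve(K[np.ix_(r, r)], K[np.ix_(r, c)])
    Krr_inv_fr = np.linalg.solve(K[np.ix_(r, r)], f[r])
    Kcc_star = K[np.ix_(c, c)] - K[np.ix_(c, r)] @ Krr_inv_Krc
    fc_star = f[c] - K[np.ix_(c, r)] @ Krr_inv_fr

    def recover(u_c):
        u = np.empty(n)
        u[c] = u_c
        u[r] = Krr_inv_fr - Krr_inv_Krc @ u_c
        return u

    return Kcc_star, fc_star, recover

# Toy usage with a random SPD "stiffness" matrix and three contact dofs.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8)); K = A @ A.T + 8 * np.eye(8); f = rng.standard_normal(8)
Kc, fc, recover = condense(K, f, contact_dofs=[5, 6, 7])
u = recover(np.linalg.solve(Kc, fc))
print("residual:", np.linalg.norm(K @ u - f))
```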
Abstract:
A systematic procedure is outlined for scaling analysis of momentum and heat transfer in gas tungsten arc weld pools. With suitable selections of non-dimensionalising parameters, the governing equations coupled with appropriate boundary conditions are first scaled, and the relative significance of the various terms appearing in them is analysed accordingly. The analysis is then used to predict the orders of magnitude of some important quantities, such as the velocity scale at the top surface, velocity boundary layer thickness, maximum temperature increase in the pool, and time required for initiation of melting. Some of the quantities predicted from the scaling analysis can also be used for optimised selection of appropriate grid size and time steps for full numerical simulation of the process. The scaling predictions are finally assessed by comparison with numerical results quoted in the literature, and a good qualitative agreement is observed.
Abstract:
In this paper, we outline a systematic procedure for scaling analysis of momentum and heat transfer in laser-melted pools. With suitable choices of non-dimensionalising parameters, the governing equations coupled with appropriate boundary conditions are first scaled, and the relative significance of the various terms appearing in them is analysed accordingly. The analysis is then utilised to predict the orders of magnitude of some important quantities, such as the velocity scale at the top surface, velocity boundary layer thickness, maximum temperature rise in the pool, fully developed pool depth, and time required for initiation of melting. Using the scaling predictions, the influence of various processing parameters on the system variables can be readily recognised, which enables us to develop a deeper insight into the physical problem of interest. Moreover, some of the quantities predicted from the scaling analysis can be utilised for optimised selection of appropriate grid size and time steps for full numerical simulation of the process. The scaling predictions are finally assessed by comparison with experimental and numerical results quoted in the literature, and an excellent qualitative agreement is observed.
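As a concrete example of the kind of order-of-magnitude estimate such an analysis produces, the free-surface velocity in a surface-tension-(Marangoni-)driven pool can be obtained by balancing viscous shear against the surface-tension gradient across a thin boundary layer. The balance below is a generic textbook one, not necessarily the specific scaling adopted in these papers.

```latex
% Illustrative scaling estimate for the free-surface velocity U_s in a
% Marangoni-driven pool (generic balance, not necessarily the papers' own).
\[
  \mu \frac{U_s}{\delta} \sim \left|\frac{d\gamma}{dT}\right| \frac{\Delta T}{L},
  \qquad
  \delta \sim \frac{L}{\sqrt{Re}}, \qquad Re = \frac{\rho\, U_s\, L}{\mu},
\]
\[
  \Rightarrow\quad
  U_s \sim \left[ \frac{\bigl(\,|d\gamma/dT|\,\Delta T\,\bigr)^{2}}{\rho\,\mu\, L} \right]^{1/3}.
\]
```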
Abstract:
For studying systems with a cubic anisotropy in interfacial energy $\sigma$, we extend the Cahn-Hilliard model by including in it a fourth-rank term, namely, $\gamma_{ijlm}\,[\partial^{2}c/(\partial x_i\,\partial x_j)]\,[\partial^{2}c/(\partial x_l\,\partial x_m)]$. This term leads to an additional linear term in the evolution equation for the composition parameter field. It also leads to an orientation-dependent effective fourth-rank coefficient $\gamma_{[hkl]}$ in the governing equation for the one-dimensional composition profile across a planar interface. The main effect of a non-negative $\gamma_{[hkl]}$ is to increase both $\sigma$ and the interfacial width $w$, each of which, upon suitable scaling, is related to $\gamma_{[hkl]}$ through a universal scaling function. In this model, $\sigma$ is a differentiable function of the interface orientation $\hat{n}$, and does not exhibit cusps; therefore, the equilibrium particle shapes (Wulff shapes) do not contain planar facets. However, the anisotropy in the interfacial energy can be large enough to give rise to corners in the Wulff shapes in two dimensions. In particles of finite size, the corners become rounded, and their shapes tend towards the Wulff shape with increasing particle size.
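In standard notation, the extension described above can be written out as follows. This is a sketch consistent with the abstract, not a transcription of the paper's equations, and the factors of two depend on the convention chosen for the gradient-energy coefficients.

```latex
% Sketch of the extended Cahn-Hilliard model (standard notation; conventions assumed).
% The free energy gains a fourth-rank gradient term,
\[
  F[c] = \int_V \Big[ f_0(c) + \kappa\,|\nabla c|^{2}
         + \gamma_{ijlm}\,
           \frac{\partial^{2} c}{\partial x_i \partial x_j}\,
           \frac{\partial^{2} c}{\partial x_l \partial x_m} \Big]\, dV ,
\]
% whose variational derivative adds a term linear in c to the chemical potential
% and hence to the evolution equation:
\[
  \frac{\partial c}{\partial t}
  = M\, \nabla^{2} \frac{\delta F}{\delta c}
  = M\, \nabla^{2}\!\left( \frac{\partial f_0}{\partial c}
    - 2\kappa\,\nabla^{2} c
    + 2\,\gamma_{ijlm}\,
      \frac{\partial^{4} c}{\partial x_i \partial x_j \partial x_l \partial x_m} \right).
\]
```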
Abstract:
Size and strain-rate effects are among several factors which play an important role in determining the response of nanostructures, such as their deformations, to mechanical loadings. Mechanical deformations in nanostructure systems at finite temperatures are intrinsically dynamic processes. Most of the recent work in this context has focused on nanowires [1, 2], but very little attention has been paid to such low-dimensional nanostructures as quantum dots (QDs). In this contribution, molecular dynamics (MD) simulations with an embedded atom method (EAM) potential are carried out to analyse the size and strain-rate effects in silicon (Si) QDs, as an example. We consider various geometries of QDs, such as spherical, cylindrical and cubic. We choose Si QDs as an example due to their major applications in solar cells and biosensing. The analysis has also focused on the variation in the deformation mechanisms with the size and strain rate for a Si QD embedded in a matrix of SiO2 [3] (other cases include SiN and SiC matrices). It is observed that the mechanical properties are functions of the QD size, shape and strain rate, as is the case for nanowires [2]. We also present a comparative study resulting from the application of different interatomic potentials, in particular the Stillinger-Weber (SW) potential, the Tersoff potentials and the environment-dependent interatomic potential (EDIP) [1]. Finally, based on the stabilized structural properties, we compute electronic band structures of our nanostructures using an envelope function approach and its finite element implementation.
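The strain-rate-controlled deformation loop at the heart of such simulations can be sketched with ASE, using a Lennard-Jones argon crystal purely as a stand-in for the Si/SiO2 systems and the EAM/SW/Tersoff/EDIP potentials of the study; the loop structure (thermalize, integrate, apply a small affine strain increment each step, monitor stress) is the point, not the material model.

```python
# Sketch of a strain-rate-controlled MD "tensile test". ASE's Lennard-Jones
# calculator on an Ar fcc cell is a stand-in for the EAM/SW/Tersoff/EDIP
# potentials and Si/SiO2 systems of the study; parameter values are placeholders.
import numpy as np
from ase.build import bulk
from ase.calculators.lj import LennardJones
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.verlet import VelocityVerlet
from ase import units

atoms = bulk("Ar", "fcc", a=5.26, cubic=True).repeat((4, 4, 4))
atoms.calc = LennardJones(epsilon=0.0104, sigma=3.40, rc=8.5)   # eV, Angstrom
MaxwellBoltzmannDistribution(atoms, temperature_K=60)

dt = 2.0 * units.fs
dyn = VelocityVerlet(atoms, timestep=dt)

cell0 = np.array(atoms.get_cell())
strain, dstrain = 0.0, 2e-6        # engineering strain increment per 2 fs step (placeholder rate)

for step in range(5000):
    dyn.run(1)                                    # one velocity-Verlet step
    strain += dstrain
    cell = cell0.copy()
    cell[2, 2] = cell0[2, 2] * (1.0 + strain)     # uniaxial stretch along z
    atoms.set_cell(cell, scale_atoms=True)        # affine remap of atomic positions
    if step % 500 == 0:
        szz = atoms.get_stress(voigt=True)[2] / units.GPa
        print(f"strain {strain:.4f}  sigma_zz {szz:.3f} GPa")
```

Size and strain-rate studies of the kind described above would repeat such a run for different system sizes and values of the strain increment per step.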
Abstract:
In this paper, a model for a composite beam with an embedded delamination is developed using the wavelet-based spectral finite element (WSFE) method, particularly for damage detection using wave propagation analysis. The simulated responses are used as surrogate experimental results for the inverse problem of detecting damage using wavelet filtering. The WSFE technique is very similar to the fast Fourier transform (FFT) based spectral finite element (FSFE) method, except that it uses a compactly supported Daubechies scaling function approximation in time. Unlike the FSFE formulation with its periodicity assumption, the wavelet-based method allows imposition of initial values and is thus free from wrap-around problems. This helps in the analysis of finite-length undamped structures, where the FSFE method fails to simulate an accurate response. First, numerical experiments are performed to study the effect of delamination on the wave propagation characteristics. The responses are simulated for different delamination configurations for both broad-band and narrow-band excitations. Next, the simulated responses are used for damage detection using wavelet analysis.
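The damage-detection step, wavelet filtering of the simulated responses, can be illustrated with PyWavelets on a synthetic signal: a small localized irregularity in an otherwise smooth response shows up as a burst in the fine-scale detail coefficients of a Daubechies decomposition. This is only the post-processing idea, not the WSFE formulation, and the toy signal below merely stands in for a simulated wave response.

```python
# Illustration of the wavelet-filtering step used for damage detection: a small,
# localized irregularity in an otherwise smooth response produces a burst in the
# fine-scale detail coefficients of a Daubechies decomposition. The signal below
# is a synthetic toy stand-in for a WSFE-simulated response, with an artificial
# jump playing the role of the delamination signature.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 2048)
response = np.sin(2 * np.pi * 5 * t) * np.exp(-2.0 * t)   # smooth baseline response
response[t > 0.62] += 0.02                                 # toy damage signature
response += 0.001 * rng.standard_normal(t.size)            # measurement noise

coeffs = pywt.wavedec(response, "db4", level=5)            # Daubechies-4 decomposition
detail = coeffs[-1]                                        # finest-scale detail coefficients
x = np.linspace(0.0, 1.0, detail.size)

threshold = 5.0 * np.median(np.abs(detail))
hits = x[np.abs(detail) > threshold]
print("suspected damage location(s) near t =", np.round(hits, 2))
```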