969 results for "finite size scaling"
Abstract:
The size tolerance of a 4X4 general interference tapered multimode interference (MMI) coupler in a silicon-on-insulator (SOI) structure is investigated by means of a 2-D finite difference beam propagation method (2D-FDBPM) together with the effective refractive index method (EIM). The results show that the tapered MMI coupler exhibits a relatively larger size tolerance when light is launched from the edge port than from the mid port, although output power uniformity is much better when light is launched from the mid port. Moreover, the tapered design greatly reduces the device length. The 4X4 general interference tapered MMI coupler has a slightly larger size tolerance than a conventional straight multimode interference coupler. (C) 2003 Society of Photo-Optical Instrumentation Engineers.
Abstract:
The generation of models and counterexamples is an important form of reasoning. In this paper, we give a formal account of a system, called FALCON, for constructing finite algebras from given equational axioms. The abstract algorithms, as well as some implementation details and sample applications, are presented. The generation of finite models is viewed as a constraint satisfaction problem, with ground instances of the axioms as constraints. One feature of the system is that it employs a very simple technique, called the least number heuristic, to eliminate isomorphic (partial) models, thus reducing the size of the search space. The correctness of the heuristic is proved. Some experimental data are given to show the performance and applications of the system.
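The least number heuristic can be illustrated on a toy finite-model search. The sketch below is a hypothetical illustration, not FALCON's actual code: it enumerates unary functions f on an n-element domain satisfying the ground instances of the axiom f(f(x)) = f(x) by backtracking, treating each cell assignment as a constraint-satisfaction choice. With the heuristic enabled, a cell may only take a value at most one larger than the largest element used so far, since any larger value would yield a model isomorphic to one already reachable.

```python
def search(n, use_lnh):
    """Backtracking search for unary functions f on {0,...,n-1}
    satisfying every ground instance of f(f(x)) = f(x)."""
    f = {}
    models = []

    def consistent():
        # Check every ground instance whose cells are all assigned.
        for x in range(n):
            if x in f and f[x] in f and f[f[x]] != f[x]:
                return False
        return True

    def extend(x):
        if x == n:
            models.append(dict(f))
            return
        if use_lnh:
            # Least number heuristic: values beyond max_used + 1 would only
            # produce models isomorphic to ones found with smaller values.
            max_used = max([x] + list(f.values()))
            candidates = range(min(max_used + 2, n))
        else:
            candidates = range(n)
        for v in candidates:
            f[x] = v
            if consistent():
                extend(x + 1)
            del f[x]

    extend(0)
    return models
```

On a 3-element domain the unrestricted search returns all 10 idempotent functions, while the heuristic returns 8, having pruned assignments isomorphic to retained ones; the saving grows quickly with domain size.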
Abstract:
A general numerical algorithm in the context of a finite element scheme is developed to solve Richards' equation, in which a mass-conservative, modified head-based (MHB) scheme is proposed to approximate the governing equation, and mass-lumping techniques are used to keep the numerical simulation stable. The MHB scheme is compared with the modified Picard iteration (MPI) scheme in a ponding infiltration example. Although the MHB scheme is slightly inferior to the MPI scheme in terms of mass balance, it is superior in convergence behaviour and simplicity. Fully implicit, explicit, and geometric-average conductivity methods are evaluated and compared: the fully implicit method is superior in simulation accuracy and permits a large time-step size, while the other two are superior in iteration efficiency. The algorithm works well over a wide variety of problems, such as infiltration fronts, steady-state and transient water tables, and transient seepage faces, as demonstrated by its performance against published experimental data. The algorithm is presented in sufficient detail to facilitate its implementation.
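The mass-conservative linearization underlying schemes of this kind can be sketched in one dimension. The toy solver below is an illustration in the spirit of the modified Picard iteration, not the paper's MHB scheme: it assumes exponential Gardner-type constitutive relations (hypothetical parameter values), ignores gravity, and takes one backward-Euler step of theta(h)_t = (K h_z)_z, iterating on the head increment delta; writing the residual in terms of theta is what keeps the step mass-conservative.

```python
import numpy as np

# Gardner-type constitutive relations (illustrative parameter values)
alpha, theta_s, Ks = 0.1, 0.4, 1e-2
theta = lambda h: theta_s * np.exp(alpha * h)           # water content
cap   = lambda h: alpha * theta_s * np.exp(alpha * h)   # moisture capacity d(theta)/dh
K     = lambda h: Ks * np.exp(alpha * h)                # hydraulic conductivity

def picard_step(h, dt, dz, tol=1e-10, max_iter=100):
    """One backward-Euler step of theta(h)_t = (K h_z)_z (gravity ignored),
    with modified Picard iteration on the increment delta.
    Dirichlet boundary conditions at both ends of the column."""
    h = h.copy()
    theta_old = theta(h)
    n = len(h)
    for it in range(max_iter):
        Kh, C, th = K(h), cap(h), theta(h)
        kp = 0.5 * (Kh[1:-1] + Kh[2:])    # upper-face conductivities (arithmetic mean)
        km = 0.5 * (Kh[1:-1] + Kh[:-2])   # lower-face conductivities
        # mass-conservative residual, written in terms of theta
        R = -(th[1:-1] - theta_old[1:-1]) / dt \
            + (kp * (h[2:] - h[1:-1]) - km * (h[1:-1] - h[:-2])) / dz**2
        m = n - 2
        A = np.zeros((m, m))
        idx = np.arange(m)
        A[idx, idx] = C[1:-1] / dt + (kp + km) / dz**2
        A[idx[:-1], idx[:-1] + 1] = -kp[:-1] / dz**2
        A[idx[1:], idx[1:] - 1] = -km[1:] / dz**2
        delta = np.linalg.solve(A, R)
        h[1:-1] += delta
        if np.max(np.abs(delta)) < tol:
            break
    return h, it + 1

# ponding-style example: wet top boundary over an initially dry column
nz, dz, dt = 21, 0.05, 0.01
h0 = np.full(nz, -10.0); h0[0] = 0.0
h1, iters = picard_step(h0, dt, dz)
```

At convergence the discrete water-content change in the column telescopes exactly to the net boundary flux, which is the mass-balance property the abstract emphasizes.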
Abstract:
The microregion-approximation explicit finite difference method is used to simulate cyclic voltammetry of an electrochemically reversible system in a three-dimensional thin-layer cell with a platinum minigrid electrode. The simulated CV curve and potential scan-absorbance curve agreed very well with the experimental results, which differed from those at a plate electrode. The influences of sweep rate, thin-layer thickness, and mesh size on the peak current and peak separation were also studied by numerical analysis, giving guidance for choosing experimental conditions or designing a thin-layer cell. A critical value (1.33) of the ratio of the diffusion path inside the mesh hole to that across the thin layer was also obtained: if this ratio is driven above 1.33 by reducing the thin-layer thickness, the cell's electrochemical behaviour departs substantially from ideal thin-layer behaviour.
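An explicit finite-difference CV simulation of this kind can be sketched in dimensionless form. The toy model below is a 1-D illustration, not the paper's 3-D microregion code: a reversible couple in a thin layer of unit thickness, equal diffusion coefficients (so a single concentration profile suffices), a Nernstian surface concentration at the electrode, a blocking far wall, and the current taken as the surface concentration gradient. For a reversible thin-layer cell the forward and reverse peaks should be nearly symmetric about the formal potential.

```python
import numpy as np

def thin_layer_cv(nx=15, dt=2e-3, sweep_rate=0.25, theta_start=8.0):
    """Explicit FTCS simulation of cyclic voltammetry in a thin-layer cell.
    Dimensionless units: D = 1, layer thickness = 1, theta = F(E - E0)/RT."""
    dx = 1.0 / (nx - 1)
    lam = dt / dx**2                      # explicit stability requires lam <= 0.5
    assert lam <= 0.5
    c = np.ones(nx)                       # oxidized species, fully oxidized at start
    n_steps = int(2 * theta_start / sweep_rate / dt)
    thetas, currents = [], []
    for sweep in (-1, +1):                # reduction sweep, then oxidation sweep
        theta0 = theta_start if sweep < 0 else -theta_start
        for k in range(n_steps):
            theta_e = theta0 + sweep * sweep_rate * k * dt
            c[0] = 1.0 / (1.0 + np.exp(-theta_e))    # Nernstian surface concentration
            c[1:-1] += lam * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = c[-2]                 # blocking far wall: zero flux
            thetas.append(theta_e)
            currents.append((c[1] - c[0]) / dx)      # reduction current > 0
    return np.array(thetas), np.array(currents)
```

Because the layer holds a finite amount of material, the current returns to zero at the sweep ends and the two peaks mirror each other, the signature thin-layer response the abstract contrasts with a plate electrode.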
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and for hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions, which are claims that particular sequences of components will be causally indistinguishable from other shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems which employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
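A reduction claim of this kind can be checked exhaustively for a toy component. The sketch below is an illustrative stand-in, not the paper's framework: each component is a lossy channel whose observable outputs are all subsequences of its input, and enumerating the outputs shows that a chain of two or three channels is indistinguishable from a single one, so properties verified for the one-channel composition carry over to chains of any length.

```python
from itertools import combinations

def channel(word):
    """Observable outputs of one lossy channel: every subsequence of the input."""
    return {''.join(sub) for r in range(len(word) + 1)
            for sub in combinations(word, r)}

def chain(word, n):
    """Observable outputs of n lossy channels composed in sequence."""
    outs = {word}
    for _ in range(n):
        outs = {o for w in outs for o in channel(w)}
    return outs
```

Here the "reduction" holds because taking subsequences is idempotent; in general such claims must be proven for the components at hand, which is exactly the role of the paper's sufficient conditions.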
Abstract:
A monotone scheme for finite volume simulation of magnetohydrodynamic internal flows at high Hartmann number is presented. The numerical stability is analysed with respect to the electromagnetic force. Standard central finite differences applied to finite volumes can only be numerically stable if the vector products involved in this force are computed with a scheme using a fully staggered grid. The electromagnetic quantities (electric currents and electric potential) must be shifted by half the grid size from the mechanical ones (velocity and pressure). An integral treatment of the boundary layers is used in conjunction with boundary conditions for electrically conducting walls. The simulations are performed with inhomogeneous electrical conductivities of the walls and reach high Hartmann numbers in three-dimensional simulations, even though a non-adaptive grid is used.
Abstract:
In recognition of the differences of scale between the welding pool and the heat affected zone along the welding line on one hand, and the overall size of the components being welded on the other, a local-global finite element approach was developed for the evaluation of distortions in laser welded shipbuilding parts. The approach involves the tandem use of a 'local' and a 'global' step. The local step involves a three-dimensional finite element model for the simulation of the laser welding process using the Sysweld finite element code, which takes into account thermal, metallurgical, and mechanical aspects. The simulation of the laser welding process was performed using a non-linear heat transfer analysis, based on a keyhole formation model, and a coupled transient thermomechanical analysis, which takes into account metallurgical transformations using the temperature dependent material properties and the continuous cooling transformation diagram. The size and shape of the keyhole used in the local finite element analysis were evaluated using a keyhole formation model and the Physica finite volume code. The global step involves the transfer of residual plastic strains and the stiffness of the weld obtained from the local model to the global analysis, which then provides the predicted distortions for the whole part. This newly developed methodology was applied to the evaluation of global distortions due to laser welding of stiffeners on a shipbuilding part. The approach has proved reliable in comparison with experiments and of practical industrial use in terms of computing time and storage.
Abstract:
This paper derives optimal life histories for fishes or other animals in relation to the size spectrum of the ecological community in which they are both predators and prey. Assuming log-linear size-spectra and well known scaling laws for feeding and mortality, we first construct the energetics of the individual. From these we find, using dynamic programming, the optimal allocation of energy between growth and reproduction as well as the trade-off between offspring size and numbers. Optimal strategies were found to be strongly dependent on size spectrum slope. For steep size spectra (numbers declining rapidly with size), determinate growth was optimal and allocation to somatic growth increased rapidly with increasing slope. However, restricting reproduction to a fixed mating season changed optimal allocations to give indeterminate growth approximating a von Bertalanffy trajectory. The optimal offspring size was as small as possible given other restrictions such as newborn starvation mortality. For shallow size spectra, finite optimal maturity size required a decline in fitness for large size or age. All the results are compared with observed size spectra of fish communities to show their consistency and relevance.
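The dynamic-programming step can be illustrated with a stripped-down model. The sketch below uses illustrative assumptions that are not the paper's (linear production P(w) = w, constant seasonal survival s, fitness = expected lifetime offspring mass, no size spectrum): backward induction over a size grid yields the fraction u of production allocated to growth, and with this linear trade-off the optimal schedule is bang-bang, switching from pure growth early in life to pure reproduction near the horizon.

```python
import numpy as np

def optimal_allocation(T=6, s=0.9, w_max=64.0, n_w=127, u_grid=(0.0, 0.5, 1.0)):
    """Backward induction for energy allocation between growth and reproduction.
    State: size w; control: fraction u of production P(w) = w put into growth."""
    w = np.linspace(1.0, w_max, n_w)
    V = np.zeros(n_w)                    # terminal value: no fitness after horizon
    policy = np.zeros((T, n_w))
    for t in range(T - 1, -1, -1):
        vals = []
        for u in u_grid:
            w_next = np.minimum(w + u * w, w_max)   # grow by u*P(w), capped at grid
            # fitness now + survival-weighted future fitness (linear interpolation)
            vals.append((1 - u) * w + s * np.interp(w_next, w, V))
        vals = np.array(vals)
        best = np.argmax(vals, axis=0)
        V = vals[best, np.arange(n_w)]
        policy[t] = np.array(u_grid)[best]
    return w, V, policy
```

For these parameters the smallest individual grows for the first four seasons and then reproduces, and its fitness at size 1 matches the closed-form backward recursion (0.9^4 x 1.9 x 16 = 19.94544).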
Abstract:
Lap joints are widely used in the manufacture of stiffened panels and influence local panel sub-component stability, defining buckling unit dimensions and boundary conditions. Using the Finite Element method it is possible to model joints in great detail and predict panel buckling behaviour with accuracy. However, when modelling large panel structures such detailed analysis becomes computationally expensive. Moreover, the impact of local behaviour on global panel performance may reduce as the scale of the modelled structure increases. Thus this study presents coupled computational and experimental analysis, aimed at developing relationships between modelling fidelity and the size of the modelled structure, when the global static load to cause initial buckling is the required analysis output. Small, medium and large specimens representing welded lap-joined fuselage panel structure are examined. Two element types, shell and solid-shell, are employed to model each specimen, highlighting the impact of idealisation on the prediction of welded stiffened panel initial skin buckling.
Abstract:
Dissertation for the degree of Master in Electrical Engineering, Energy branch
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
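The core of the Monte Carlo test technique is simple when the statistic is pivotal under the null. The sketch below is an illustration with a single split-sample variance-ratio statistic, not the paper's full battery of tests: it draws N - 1 = 19 statistics under the Gaussian null and computes the exact p-value from the rank of the observed statistic; with N = 20, rejecting when p <= 0.05 has size exactly 5% regardless of sample size, by exchangeability.

```python
import numpy as np

def variance_ratio(resid):
    """Goldfeld-Quandt-style statistic: variance of the second half of the
    residuals over that of the first half (pivotal under i.i.d. normal errors)."""
    h = len(resid) // 2
    return np.var(resid[h:], ddof=1) / np.var(resid[:h], ddof=1)

def mc_pvalue(stat_obs, n, n_sim, rng):
    """Exact Monte Carlo p-value (rank of the observed statistic among
    statistics simulated under the null hypothesis)."""
    sims = np.array([variance_ratio(rng.standard_normal(n)) for _ in range(n_sim)])
    return (1 + np.sum(sims >= stat_obs)) / (n_sim + 1)

# Size check: under the null, rejecting at p <= 0.05 with 19 simulations
# should reject about 5% of the time, even for a tiny sample (n = 12).
rng = np.random.default_rng(0)
n, reps = 12, 2000
rejections = sum(
    mc_pvalue(variance_ratio(rng.standard_normal(n)), n, 19, rng) <= 0.05
    for _ in range(reps)
)
rate = rejections / reps
```

No asymptotic approximation enters at any point, which is what distinguishes the MC test from bootstrap corrections.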
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model, so reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
Abstract:
Warships are generally sleek and slender, with V-shaped sections and block coefficients below 0.5, compared to the fuller forms and higher values of commercial ships. They normally operate in a higher Froude number regime, and the hydrodynamic design is primarily aimed at achieving higher speeds with minimum power. The structural design and analysis methods therefore differ from those for commercial ships. Certain design guidelines are given in documents such as the Naval Engineering Standards, and one of the new developments in this regard is the introduction of classification society rules for the design of warships.

The marine environment imposes subjective and objective uncertainties on ship structure. The uncertainties in loads, material properties, etc., make reliable prediction of ship structural response a difficult task. Strength, stiffness and durability criteria for warship structures can be established by investigations based on elastic analysis, ultimate strength analysis and reliability analysis. For the analysis of complicated warship structures, special means and valid approximations are required.

Preliminary structural design of a frigate-size ship has been carried out. A finite element model of the hold, representative of the complexities of the geometric configuration, has been created using the finite element software NISA. Two other models representing the geometry to a limited extent have also been created: one with two transverse frames and the attached plating along with the longitudinal members, and the other representing the plating and longitudinal stiffeners between two transverse frames. Linear static analysis of the three models has been carried out, each with three different boundary conditions. The structural responses have been checked for deflections and stresses against permissible values, and the structure has been found adequate in all cases.
The stresses and deflections predicted by the frame model are comparable with those of the hold model, but no such agreement has been obtained between the inter-stiffener plating model and the other two models.

Progressive collapse analyses of the models have been conducted for the three boundary conditions, considering geometric nonlinearity and then combined geometric and material nonlinearity for the hold and frame models. The von Mises-Ilyushin yield criterion with an elastic-perfectly plastic stress-strain curve has been chosen. In each case, P-Delta curves have been generated and the ultimate load causing failure (ultimate load factor) has been identified as a multiple of the design load specified by NES.

Reliability analysis of the hull module under combined geometric and material nonlinearities has been conducted. The Young's modulus and the shell thickness have been chosen as the random variables, and randomly generated values have been used in the analysis. The First Order Second Moment method has been used to predict the reliability index and, thereafter, the probability of failure. The values have been compared against standard values published in the literature.
Abstract:
Pontryagin's maximum principle from optimal control theory is used to find the optimal allocation of energy between growth and reproduction when lifespan may be finite and the trade-off between growth and reproduction is linear. Analyses of the optimal allocation problem to date have generally yielded bang-bang solutions, i.e. determinate growth: life histories in which growth is followed by reproduction, with no intermediate phase of simultaneous reproduction and growth. Here we show that an intermediate strategy (indeterminate growth) can be selected for if the rates of production and mortality either both increase or both decrease with increasing body size; this arises as a singular solution to the problem. Our conclusion is that indeterminate growth is optimal in more cases than was previously realized. The relevance of our results to natural situations is discussed.
Abstract:
We derive energy-norm a posteriori error bounds, using gradient recovery (ZZ) estimators to control the spatial error, for fully discrete schemes for the linear heat equation. This appears to be the first completely rigorous derivation of ZZ estimators for fully discrete schemes for evolution problems, without any restrictive assumption on the time-step size. An essential tool for the analysis is the elliptic reconstruction technique. Our theoretical results are backed by extensive numerical experimentation aimed at (a) testing the practical sharpness and asymptotic behaviour of the error estimator against the error, and (b) deriving an adaptive method based on our estimators. An extra novelty provided is an implementation of a coarsening error "preindicator", with a complete implementation guide in ALBERTA in the appendix.
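The spatial part of such a gradient-recovery estimator is easy to illustrate for P1 elements in one dimension. The sketch below is a stationary toy, not the paper's fully discrete parabolic estimator: the discrete solution is taken to be the nodal interpolant of u(x) = sin(pi x), the recovered gradient averages the two neighbouring element gradients at each interior node, and the estimator is the L2 distance between recovered and raw gradients, which tracks the true H1-seminorm error and halves under uniform refinement.

```python
import numpy as np

def zz_estimate(n):
    """ZZ-type gradient recovery on a uniform P1 mesh for u(x) = sin(pi x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    u = np.sin(np.pi * x)
    g = np.diff(u) / h                    # element-wise (constant) gradients
    # recovered nodal gradient: average of neighbouring element gradients,
    # one-sided at the two boundary nodes
    G = np.empty(n + 1)
    G[1:-1] = 0.5 * (g[:-1] + g[1:])
    G[0], G[-1] = g[0], g[-1]
    # estimator: ||G - g||_{L2}; G is linear and g constant on each element,
    # and for a linear function from a to b, the squared L2 norm over an
    # element of length h is h*(a^2 + a*b + b^2)/3
    a, b = G[:-1] - g, G[1:] - g
    eta = np.sqrt(np.sum(h * (a * a + a * b + b * b) / 3.0))
    # exact H1-seminorm error of the interpolant: since g is the element
    # mean of u', ||u' - g||^2 = ||u'||^2 - h*sum(g^2) with ||u'||^2 = pi^2/2
    err = np.sqrt(np.pi**2 / 2 - h * np.sum(g**2))
    return eta, err
```

The effectivity index eta/err sits near 1 for this smooth problem, reflecting the superconvergence that makes ZZ estimators sharp in practice.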