27 results for Irreducible polynomial
Abstract:
This paper describes a unified approach to modelling the polysilicon thin film transistor (TFT) for the purposes of circuit design. The approach uses accurate methods of predicting the channel conductance and then fits the resulting data with a polynomial. Two methods are proposed to find the channel conductance: a device model and measurement. The approach is suitable because the TFT does not have a well-defined threshold voltage. The polynomial conductance is then integrated to obtain the drain current and channel charge needed for a complete circuit model. © 1991 The Japan Society of Applied Physics.
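The integration step described above can be sketched as follows; this is a minimal illustration, and the polynomial conductance coefficients are hypothetical placeholders, not values fitted in the paper:

```python
# Sketch: assume the channel conductance has already been fitted by a
# polynomial g(v) = c0 + c1*v + c2*v**2. The coefficient values below are
# hypothetical (nominal units: S, S/V, S/V^2).
g_coeffs = [1e-6, 2e-6, 0.5e-6]

def drain_current(v, coeffs):
    # Term-by-term integration: the integral of c_k * v**k from 0 to V
    # is c_k * V**(k+1) / (k+1), giving I_D(V) = integral of g.
    return sum(c * v ** (k + 1) / (k + 1) for k, c in enumerate(coeffs))

i_d = drain_current(2.0, g_coeffs)  # drain current at V = 2 V
```

The channel charge follows the same pattern, integrating the fitted polynomial a second time.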
Abstract:
Multiwalled carbon nanotubes display dielectric properties similar to those of graphite, which can be calculated using the well known Drude-Lorentz model. However, most computational software packages lack the capacity to incorporate this model directly into the simulations. We present the finite element modeling of optical propagation through periodic arrays of multiwalled carbon nanotubes. The dielectric function of the nanotubes was incorporated into the model using a polynomial curve-fitting technique. The computational analysis revealed interesting metamaterial filtering effects displayed by the highly dense square lattice arrays of carbon nanotubes, with lattice constants of the order of a few hundred nanometers. The curve-fitting results for the dielectric function can also be used to simulate other interesting optical applications based on nanotube arrays.
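A minimal sketch of the Drude-Lorentz dielectric function that the paper starts from; all parameter values below are illustrative placeholders, not fitted graphite or nanotube data:

```python
# Drude-Lorentz dielectric function sketch: a free-carrier (Drude) term plus
# one Lorentz oscillator. Parameters are in arbitrary frequency units and are
# assumptions for illustration only.
def drude_lorentz(omega, eps_inf=1.0, omega_p=2.0, gamma=0.1,
                  f=0.5, omega_0=1.5, gamma_0=0.2):
    drude = -omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)
    lorentz = f * omega_p ** 2 / (omega_0 ** 2 - omega ** 2 - 1j * gamma_0 * omega)
    return eps_inf + drude + lorentz

eps = drude_lorentz(1.0)  # complex permittivity at one frequency
```

In a solver that only accepts polynomial material models, the real and imaginary parts of this function would each be fitted by a polynomial over the frequency band of interest.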
Abstract:
We aim to design an airfoil whose maximum lift coefficient (CL max) is not sensitive to the location of forced upper-surface boundary layer transition. Taking the maximization of the mean value of CL max and the minimization of its standard deviation as the two objectives; the leading-edge radius, maximum thickness and its location, and maximum camber and its location as deterministic design variables; the location of forced upper-surface transition as a stochastic variable; XFOIL as the deterministic CFD solver; and non-intrusive polynomial chaos as a substitute for the Monte Carlo method, we complete a robust airfoil design problem. Results demonstrate that the performance of the initial airfoil is enhanced by reducing the standard deviation of CL max. We also find that the maximum thickness has the most dominant effect on the mean value of CL max, the location of maximum thickness has the most dominant effect on its standard deviation, maximum camber has little effect on either quantity, and maximum camber is the only variable whose increase raises both the mean value and the standard deviation at the same time. Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc.
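The role of non-intrusive polynomial chaos as a Monte Carlo substitute can be sketched in one stochastic dimension. The expensive solver (XFOIL in the paper) is replaced here by a toy function of a standard normal variable; the projection uses a 3-point Gauss-Hermite rule, and mean and variance are read off the chaos coefficients:

```python
import math

# Non-intrusive polynomial chaos sketch, 1-D: project f(X), X ~ N(0,1), onto
# probabilists' Hermite polynomials He_0..He_2 with a 3-point Gauss-Hermite
# quadrature rule (nodes 0, +/-sqrt(3); weights 2/3, 1/6, 1/6).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]
hermite = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]

def pce_mean_var(f):
    # Chaos coefficients a_k = E[f(X) He_k(X)] / k!
    coeffs = [sum(w * f(x) * hermite[k](x) for x, w in zip(nodes, weights))
              / math.factorial(k) for k in range(3)]
    # Mean is a_0; variance is sum over k >= 1 of k! * a_k**2.
    variance = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, 3))
    return coeffs[0], variance

mean, var = pce_mean_var(lambda x: x * x)  # exact: E[X^2] = 1, Var[X^2] = 2
```

Three solver evaluations replace the thousands of samples a Monte Carlo estimate of the same moments would need.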
Abstract:
This article introduces Periodically Controlled Hybrid Automata (PCHA) for modular specification of embedded control systems. In a PCHA, control actions that change the control input to the plant occur roughly periodically, while other actions that update the state of the controller may occur in the interim. Such actions could model, for example, sensor updates and information received from higher-level planning modules that change the set point of the controller. Based on periodicity and subtangential conditions, a new sufficient condition for verifying invariant properties of PCHAs is presented. For PCHAs with polynomial continuous vector fields, it is possible to check these conditions automatically using, for example, quantifier elimination or sum of squares decomposition. We examine the feasibility of this automatic approach on a small example. The proposed technique is also used to manually verify safety and progress properties of a fairly complex planner-controller subsystem of an autonomous ground vehicle. Geometric properties of planner-generated paths are derived which guarantee that such paths can be safely followed by the controller. © 2012 ACM.
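The invariance condition for polynomial vector fields can be illustrated with a toy example. A real tool would discharge the condition by quantifier elimination or a sum-of-squares decomposition, as the abstract notes; the sketch below (an assumption-laden stand-in) only samples it numerically for a hand-chosen system:

```python
import random

# Toy invariance check: for the polynomial vector field
#   xdot = -x + y,  ydot = -x - y,
# the function V(x, y) = x**2 + y**2 satisfies dV/dt = -2*(x**2 + y**2) <= 0
# everywhere, so its sublevel sets are invariant. We sample this condition
# instead of proving it symbolically.
def vdot(x, y):
    fx, fy = -x + y, -x - y            # the vector field
    return 2 * x * fx + 2 * y * fy     # Lie derivative of V along the flow

random.seed(0)
ok = all(vdot(random.uniform(-5, 5), random.uniform(-5, 5)) <= 1e-9
         for _ in range(1000))
```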
Abstract:
There is an increasing demand for optimising complete systems and the devices within them, including capturing the interactions between the various multi-disciplinary (MD) components involved. Furthermore, confidence in robust solutions is essential. As a consequence, the computational cost rapidly increases, and in many cases it becomes infeasible to perform such conceptual designs. A coherent design methodology is proposed, with the aim of improving the design process by effectively exploiting the potential of computational synthesis, search and optimisation, and conventional simulation, while reducing the computational cost. The optimisation framework consists of a hybrid optimisation algorithm that handles multi-fidelity simulations. Simultaneously, in order to handle uncertainty without recasting the model and at affordable computational cost, a stochastic modelling method known as non-intrusive polynomial chaos is introduced. The effectiveness of the design methodology is demonstrated with the optimisation of a submarine propulsion system.
Abstract:
The design of a deployable structure which deploys from a compact bundle of six parallel bars to a rectangular ring is considered. The structure is a plane symmetric Bricard linkage. The internal mechanism is described in terms of its Denavit-Hartenberg parameters; the nature of its single degree of freedom is examined in detail by determining the exact structure of the system of equations governing its movement; a range of design parameters for building feasible mechanisms is determined numerically; and polynomial continuation is used to design rings with certain specified desirable properties. © 2013 Elsevier Ltd.
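The Denavit-Hartenberg description mentioned above reduces each joint to four parameters. A minimal sketch of the corresponding homogeneous transform, with illustrative parameter values rather than the Bricard linkage's actual geometry:

```python
import math

# Denavit-Hartenberg transform sketch: homogeneous 4x4 matrix for one joint
# from its DH parameters (theta, d, a, alpha). Values used below are
# illustrative, not the paper's linkage parameters.
def dh_matrix(theta, d, a, alpha):
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0, sa,       ca,      d],
            [0.0, 0.0,      0.0,     1.0]]

T = dh_matrix(math.pi / 2, 0.0, 1.0, 0.0)  # quarter-turn joint, unit link
```

Chaining one such matrix per joint and requiring the product to equal the identity yields the closure equations whose solution set polynomial continuation explores.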
Abstract:
Reconstruction of biochemical reaction networks (BRN) and genetic regulatory networks (GRN) in particular is a central topic in systems biology which raises crucial theoretical challenges in system identification. Nonlinear Ordinary Differential Equations (ODEs) that involve polynomial and rational functions are typically used to model biochemical reaction networks. Such nonlinear models make the problem of determining the connectivity of biochemical networks from time-series experimental data quite difficult. In this paper, we present a network reconstruction algorithm that can deal with ODE model descriptions containing polynomial and rational functions. Rather than identifying the parameters of linear or nonlinear ODEs characterised by pre-defined equation structures, our methodology allows us to determine the nonlinear ODE structure together with the associated parameters. To solve the network reconstruction problem, we cast it as a compressive sensing (CS) problem and use sparse Bayesian learning (SBL) algorithms as a computationally efficient and robust way to obtain its solution. © 2012 IEEE.
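The structure-selection idea can be sketched on a toy one-state system. Here a single step of matching pursuit stands in for the paper's sparse Bayesian learning: the candidate term best correlated with the measured derivative is selected from a small dictionary of monomials, and its coefficient is fitted by least squares. The data and dictionary are invented for illustration:

```python
import math

# Toy sparse reconstruction: the "true" model is xdot = -2 * x**3, and we try
# to recover which dictionary term drives xdot (one greedy selection step,
# standing in for sparse Bayesian learning).
xs = [i / 10.0 for i in range(-10, 11)]
xdot = [-2.0 * x ** 3 for x in xs]                 # "measured" derivatives
library = {"x":   [x for x in xs],
           "x^2": [x ** 2 for x in xs],
           "x^3": [x ** 3 for x in xs]}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Pick the term with the largest normalised correlation with the data,
# then fit its coefficient by one-dimensional least squares.
best = max(library, key=lambda k: abs(dot(library[k], xdot))
           / math.sqrt(dot(library[k], library[k])))
coeff = dot(library[best], xdot) / dot(library[best], library[best])
```

SBL additionally returns posterior uncertainty over the coefficients and handles noise and rational terms, which this greedy sketch does not.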
Abstract:
We offer a solution to the problem of efficiently translating algorithms between different types of discrete statistical model. We investigate the expressive power of three classes of model-those with binary variables, with pairwise factors, and with planar topology-as well as their four intersections. We formalize a notion of "simple reduction" for the problem of inferring marginal probabilities and consider whether it is possible to "simply reduce" marginal inference from general discrete factor graphs to factor graphs in each of these seven subclasses. We characterize the reducibility of each class, showing in particular that the class of binary pairwise factor graphs is able to simply reduce only positive models. We also exhibit a continuous "spectral reduction" based on polynomial interpolation, which overcomes this limitation. Experiments assess the performance of standard approximate inference algorithms on the outputs of our reductions.
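The interpolation primitive behind such a "spectral reduction" can be sketched simply: if a quantity of interest is a polynomial in a model parameter, evaluating it at several strictly positive parameter values lets Lagrange interpolation recover the value at a parameter setting that could not be handled directly. The setup below is a generic illustration, not the paper's construction:

```python
# Lagrange interpolation sketch: recover p(0) exactly from samples of a
# quadratic at strictly positive points (a stand-in for inference results
# computed on perturbed, positive models).
def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [1.0, 2.0, 3.0]
ys = [x * x for x in xs]                     # samples of p(x) = x**2
value_at_zero = lagrange_eval(xs, ys, 0.0)   # extrapolate back to x = 0
```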
Abstract:
Biofuels are increasingly promoted worldwide as a means for reducing greenhouse gas (GHG) emissions from transport. However, current regulatory frameworks and most academic life cycle analyses adopt a deterministic approach in determining the GHG intensities of biofuels and thus ignore the inherent risk associated with biofuel production. This study aims to develop a transparent stochastic method for evaluating UK biofuels that determines both the magnitude and uncertainty of GHG intensity on the basis of current industry practices. Using wheat ethanol as a case study, we show that the GHG intensity could span a range of 40-110 gCO2e MJ-1 when land use change (LUC) emissions and various sources of uncertainty are taken into account, as compared with a regulatory default value of 44 gCO2e MJ-1. This suggests that the current deterministic regulatory framework underestimates wheat ethanol GHG intensity and thus may not be effective in evaluating transport fuels. Uncertainties in determining the GHG intensity of UK wheat ethanol include limitations of available data at a localized scale, and significant scientific uncertainty of parameters such as soil N2O and LUC emissions. Biofuel policies should be robust enough to incorporate the currently irreducible uncertainties and flexible enough to be readily revised when better science is available. © 2013 IOP Publishing Ltd.
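The stochastic approach can be sketched as a Monte Carlo sum of uncertain contributions. The distributions and numbers below are illustrative assumptions, not the study's data:

```python
import random

# Stochastic GHG-intensity sketch: total intensity as a sum of uncertain
# contributions (cultivation, processing, land-use change). All distributions
# and parameters are hypothetical placeholders in gCO2e per MJ.
random.seed(1)

def sample_intensity():
    cultivation = random.gauss(30.0, 5.0)   # assumed normal uncertainty
    processing = random.gauss(20.0, 3.0)
    luc = random.uniform(0.0, 40.0)         # assumed wide LUC range
    return cultivation + processing + luc

samples = sorted(sample_intensity() for _ in range(10000))
low, high = samples[250], samples[-251]     # empirical 95% interval
```

Reporting the interval (low, high) rather than a single point estimate is what distinguishes this approach from a deterministic default value.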
Abstract:
This work employed a clayey, silty, sandy gravel contaminated with a mixture of metals (Cd, Cu, Pb, Ni and Zn) and diesel. The contaminated soil was treated with 5 and 10% dosages of different cementitious binders. The binders include Portland cement, cement-fly ash, cement-slag and lime-slag mixtures. Monolithic leaching from the treated soils was evaluated over a 64-day period alongside granular leachability of 49- and 84-day old samples. Surface wash-off was the predominant leaching mechanism for monolithic samples. In this condition, with data from different binders and curing ages combined, granular leachability as a function of monolithic leaching generally followed degree-4 and degree-6 polynomial functions. The only exception was Cu, which followed the multistage dose-response model. The relationship between the two leaching tests varied with the type of metal, the curing age/residence time of monolithic samples in the leachant, and the binder formulation. The results provide useful design information on the relationship between leachability of metals from monolithic forms of S/S treated soils and the ultimate leachability in the eventual breakdown of the stabilized/solidified soil.
Abstract:
A multivariate, robust, rational interpolation method for propagating uncertainties in several dimensions is presented. The algorithm for selecting numerator and denominator polynomial orders is based on recent work that uses a singular value decomposition approach. In this paper we extend this algorithm to higher dimensions and demonstrate its efficacy in terms of convergence and accuracy, both as a method for response surface generation and for interpolation. To obtain stable approximants for continuous functions, we use an L2 error norm indicator to rank optimal numerator and denominator solutions. For discontinuous functions, a second criterion setting an upper limit on the approximant value is employed. Analytical examples demonstrate that, for the same stencil, rational methods can yield more rapid convergence compared to pseudospectral or collocation approaches for certain problems. © 2012 AIAA.
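A rational interpolant can be fitted by linearising the interpolation conditions. The toy below fits a [1/1] approximant r(x) = (a0 + a1*x) / (1 + b1*x) through three samples of 1/(1+x); this one-dimensional sketch stands in for the paper's SVD-based multivariate method:

```python
# Rational interpolation sketch: impose y*(1 + b1*x) = a0 + a1*x at three
# sample points, i.e. a0 + a1*x - y*b1*x = y, and solve the 3x3 linear system.
def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

xs, ys = [0.0, 1.0, 2.0], [1.0, 0.5, 1.0 / 3.0]   # samples of 1/(1+x)
A = [[1.0, x, -y * x] for x, y in zip(xs, ys)]    # unknowns (a0, a1, b1)
a0, a1, b1 = solve3(A, ys)
```

Here the fit recovers the underlying function exactly (a0 = 1, a1 = 0, b1 = 1); in higher dimensions the linear system becomes overdetermined, which is where the SVD-based order selection enters.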
Abstract:
Flow measurement data at the district meter area (DMA) level has the potential for burst detection in water distribution systems. This work investigates using a polynomial function fitted to the historic flow measurements with a weighted least-squares method for automatic burst detection in U.K. water distribution networks. This approach, when used in conjunction with an expectation-maximization (EM) algorithm, can automatically select useful data from the historic flow measurements, which may contain normal and abnormal operating conditions in the distribution network, e.g., water bursts. Thus, the model can estimate the normal water flow (nonburst condition), and hence the burst size on the water distribution system can be calculated from the difference between the measured flow and the estimated flow. The distinguishing feature of this method is that the burst detection is fully unsupervised, and burst events that have occurred in the historic data do not bias the burst detection algorithm. Experimental validation of the method has been carried out using a series of flushing events that simulate burst conditions, confirming that the simulated burst sizes can be estimated correctly. The method was also applied to eight DMAs with known real burst events, and the resulting burst detections are shown to relate to the water company's records of pipeline repair work. © 2014 American Society of Civil Engineers.
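The core estimate can be sketched with a degree-1 weighted least-squares fit. The flow data, weights, and the idea that the outlying sample is down-weighted (which the EM step would do automatically) are all illustrative assumptions:

```python
# Burst-detection sketch: fit a weighted least-squares line to historic flow,
# then compute the burst size as measured flow minus estimated normal flow.
def wls_line(ts, ys, ws):
    sw = sum(ws)
    mt = sum(w * t for w, t in zip(ws, ts)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (t - mt) * (y - my) for w, t, y in zip(ws, ts, ys))
             / sum(w * (t - mt) ** 2 for w, t in zip(ws, ts)))
    return my - slope * mt, slope            # intercept, slope

ts = [0.0, 1.0, 2.0, 3.0, 4.0]               # time (arbitrary units)
ys = [10.0, 10.1, 9.9, 10.0, 13.0]           # flow; last sample looks like a burst
ws = [1.0, 1.0, 1.0, 1.0, 0.1]               # EM would down-weight the anomaly
b0, b1 = wls_line(ts, ys, ws)
burst_size = ys[-1] - (b0 + b1 * ts[-1])     # measured minus estimated flow
```

The paper fits a higher-order polynomial to a full diurnal flow profile; the line here only illustrates how the weighting keeps anomalous samples from biasing the normal-flow estimate.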