Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, a single network can feasibly contain many of them. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as the outer-product rule, learning algorithms, or energy functions, suffer from the following deficiencies: long training or specification times; no guarantee of working on all inputs; and a requirement for full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since it is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definitions, we can construct efficient networks. To demonstrate the methods, we consider three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
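As a point of reference only (generic Hopfield-style mutual-inhibition dynamics with illustrative parameters, not the circuits defined in the thesis), a Winner-Take-All network can be sketched as follows: each unit excites itself, inhibits all others, and the unit with the largest external input saturates high while the rest are suppressed.

    import numpy as np

    def winner_take_all(inputs, steps=2000, dt=0.01, tau=1.0, alpha=1.0, beta=2.0):
        """Continuous-time Winner-Take-All sketch: each unit has self-excitation
        (alpha) and lateral inhibition from every other unit (beta). All
        parameters are illustrative assumptions, not values from the thesis."""
        u = np.zeros_like(inputs, dtype=float)            # internal states
        for _ in range(steps):
            v = 1.0 / (1.0 + np.exp(-4.0 * u))            # sigmoid activations
            # self-excitation minus lateral inhibition plus external input
            du = (-u + alpha * v - beta * (v.sum() - v) + inputs) / tau
            u += dt * du
        return 1.0 / (1.0 + np.exp(-4.0 * u))

    print(winner_take_all(np.array([0.2, 0.9, 0.5])))     # the 0.9 unit wins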
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
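A toy software stand-in (an illustrative assumption, not the neural architecture of the thesis) for the constraint that contention arbitration enforces: in each time slot every output grants at most one requesting input, and here each input also receives at most one grant, as in a crossbar.

    import numpy as np

    def arbitrate(requests, seed=0):
        """requests[i, j] = 1 if input i has a cell destined for output j.
        Grant at most one input per output and one output per input,
        greedily in random order; a sequential stand-in for the parallel
        Winner-Take-All arbitration discussed in the text."""
        rng = np.random.default_rng(seed)
        grants = np.zeros_like(requests)
        free_input = np.ones(requests.shape[0], dtype=bool)
        for j in rng.permutation(requests.shape[1]):       # outputs in random order
            candidates = np.flatnonzero(requests[:, j].astype(bool) & free_input)
            if candidates.size:
                i = rng.choice(candidates)
                grants[i, j] = 1
                free_input[i] = False
        return grants

    R = np.array([[1, 1, 0],
                  [1, 0, 0],
                  [0, 1, 1]])
    print(arbitrate(R))    # at most one grant per row and per column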
Abstract:
Methods that exploit the intrinsic locality of molecular interactions show significant promise for making electronic structure calculations of large-scale systems tractable. In particular, embedded density functional theory (e-DFT) offers a formally exact approach to electronic structure calculations in which the interactions between subsystems are evaluated in terms of their electronic densities. In this dissertation, methodological advances in embedded density functional theory are described, numerically tested, and applied to real chemical systems.
First, we describe an e-DFT protocol in which the non-additive kinetic energy component of the embedding potential is treated exactly. Then, we present a general implementation of the exact calculation of the non-additive kinetic potential (NAKP) and apply it to molecular systems. We demonstrate that the implementation using the exact NAKP is in excellent agreement with reference Kohn-Sham calculations, whereas the approximate functionals lead to qualitative failures in the calculated energies and equilibrium structures.
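For reference, the standard e-DFT definitions of the non-additive kinetic energy and the corresponding potential for a two-subsystem partition rho = rho_A + rho_B are (textbook relations, not results specific to this dissertation):

    T_s^{\mathrm{nad}}[\rho_A,\rho_B] \;=\; T_s[\rho_A+\rho_B] - T_s[\rho_A] - T_s[\rho_B],
    \qquad
    v^{A}_{\mathrm{NAKP}}(\mathbf{r}) \;=\; \frac{\delta T_s^{\mathrm{nad}}}{\delta\rho_A(\mathbf{r})}
    \;=\; \left.\frac{\delta T_s[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho_A+\rho_B}
    \;-\; \left.\frac{\delta T_s[\rho]}{\delta\rho(\mathbf{r})}\right|_{\rho_A}.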
Next, we introduce density-embedding techniques that enable the accurate and stable calculation of correlated wavefunctions (CWs) in complex environments. Embedding potentials calculated using e-DFT introduce the effect of the environment on a subsystem in CW calculations (WFT-in-DFT). We demonstrate that WFT-in-DFT calculations are in good agreement with CW calculations performed on the full complex.
We significantly improve the numerics of the algorithm by enforcing orthogonality between subsystems through the introduction of a projection operator. Utilizing the projection-based embedding scheme, we rigorously analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using CWs and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the non-additive exchange-correlation energy dominates. We develop an algorithm that corrects this term and demonstrate the accuracy of this corrected embedding scheme.
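A minimal numpy sketch of the level-shift form of projection-based embedding (an illustration of the general technique under stated assumptions, not code from the dissertation; the matrix names F_A_in_B, S, D_B and the shift mu are hypothetical): subsystem B's occupied space is shifted up in energy so that subsystem A's orbitals remain orthogonal to it.

    import numpy as np

    def levelshift_projected_fock(F_A_in_B, S, D_B, mu=1.0e6):
        """Add a level-shift projection term mu * S @ D_B @ S to the embedded
        Fock matrix of subsystem A (AO basis). Orbitals overlapping B's
        occupied space are pushed up by roughly mu, enforcing inter-subsystem
        orthogonality. Illustrative sketch; names and mu are assumptions."""
        P_B = S @ D_B @ S        # projector-like operator onto B's occupied space
        return F_A_in_B + mu * P_B

In practice the shift is taken large enough (a nominal 1e6 hartree here) that results are insensitive to its exact value.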
Abstract:
This thesis develops a simple one-parameter macroscopic model of distributed damage and fracture in polymers that is amenable to a straightforward and efficient numerical implementation. The failure model is motivated by post-mortem fractographic observations of void nucleation, growth and coalescence in polyurea stretched to failure, and accounts for the specific fracture energy per unit area attendant to rupture of the material.
Furthermore, it is shown that the macroscopic model can be rigorously derived, in the sense of optimal scaling, from a micromechanical model of chain elasticity and failure regularized by means of fractional strain-gradient elasticity. Optimal scaling laws are derived that link the single parameter of the macroscopic model, namely the critical energy-release rate of the material, to micromechanical parameters pertaining to the elasticity and strength of the polymer chains and to the strain-gradient elasticity regularization. Based on these optimal scaling laws, it is shown how the critical energy-release rate of specific materials can be determined from test data. In addition, the scope and fidelity of the model are demonstrated by means of an example of application, namely Taylor-impact experiments on polyurea rods. In these simulations, optimal-transportation meshfree approximation schemes using maximum-entropy interpolation functions are employed.
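As an illustration of the interpolation ingredient only (a sketch of local maximum-entropy approximants in one dimension; the locality parameter beta, the node set, and the iteration count are assumptions, and this is not the thesis implementation):

    import numpy as np

    def maxent_shape_functions(x, nodes, beta=4.0, iters=50):
        """Local maximum-entropy shape functions in 1D. The Lagrange multiplier
        lam is found by Newton iteration so that the functions sum to one and
        reproduce linear fields. Illustrative sketch only."""
        dx = nodes - x
        lam = 0.0
        for _ in range(iters):
            w = np.exp(-beta * dx**2 + lam * dx)
            N = w / w.sum()
            r = np.sum(N * dx)                    # first-order consistency residual
            J = np.sum(N * dx**2) - r**2          # derivative of r w.r.t. lam
            lam -= r / J                          # Newton step
        w = np.exp(-beta * dx**2 + lam * dx)
        return w / w.sum()

    nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    N = maxent_shape_functions(0.3, nodes)
    print(N.sum(), np.sum(N * nodes))             # 1.0 and ~0.3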
Finally, a different crazing model using full derivatives of the deformation gradient and a core cut-off is presented, along with a numerical non-local regularization model. The numerical model takes higher-order deformation gradients into account in a finite element framework. It is shown how the introduction of non-locality stabilizes the model against the localization of strain to small volumes in materials undergoing softening. From an investigation of craze formation in the limit of large deformations, convergence studies verifying scaling properties of both the local and non-local energy contributions are presented.
Abstract:
The Water Framework Directive (WFD; European Commission 2000) is a framework for European environmental legislation that aims at improving water quality by using an integrated approach to implement the necessary societal and technical measures. Assessments to guide, support, monitor and evaluate policies, such as the WFD, require scientific approaches which integrate biophysical and human aspects of ecological systems and their interactions, as outlined by the International Council for Science (2002). These assessments need to be based on sound scientific principles and address the environmental problems in a holistic way. End-users need help to select the most appropriate methods and models. Advice on the selection and use of a wide range of water quality models has been developed within the project Benchmark Models for the Water Framework Directive (BMW). In this article, the authors summarise the role of benchmarking in the modelling process and explain how such an archive of validated models can be used to support the implementation of the WFD.
Abstract:
Quantum mechanics places limits on the minimum energy of a harmonic oscillator via the ever-present "zero-point" fluctuations of the quantum ground state. Through squeezing, however, it is possible to decrease the noise of a single motional quadrature below the zero-point level as long as noise is added to the orthogonal quadrature. While squeezing below the quantum noise level was achieved decades ago with light, quantum squeezing of the motion of a mechanical resonator is a more difficult prospect due to the large thermal occupations of megahertz-frequency mechanical devices even at typical dilution refrigerator temperatures of ~ 10 mK.
Kronwald, Marquardt, and Clerk (2013) propose a method of squeezing a single quadrature of mechanical motion below the level of its zero-point fluctuations, even when the mechanics starts out with a large thermal occupation. The scheme operates within the framework of cavity optomechanics, where an optical or microwave cavity is coupled to the mechanics in order to control and read out the mechanical state. In the proposal, two pump tones are applied to the cavity, each detuned from the cavity resonance by the mechanical frequency. The pump tones create an effective squeezed reservoir and couple the mechanics to it, producing arbitrarily large, steady-state squeezing of the mechanical motion. In this dissertation, I describe two experiments related to the implementation of this proposal in an electromechanical system. I also expand on the theory presented in Kronwald et al. to include the effects of squeezing in the presence of classical microwave noise, and without assumptions of perfect alignment of the pump frequencies.
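For orientation, the key relations of the two-tone scheme, restated from Kronwald, Marquardt, and Clerk (2013) in one common convention (the symbols below are standard notation, not the dissertation's own):

    H_{\mathrm{int}} \simeq \hat a^{\dagger}\!\left(G_-\,\hat b + G_+\,\hat b^{\dagger}\right) + \mathrm{h.c.},
    \qquad
    \hat\beta = \cosh r\,\hat b + \sinh r\,\hat b^{\dagger},
    \quad \tanh r = G_+/G_-,

so that H_int = \tilde G (\hat a^{\dagger}\hat\beta + \hat a\,\hat\beta^{\dagger}) with \tilde G = \sqrt{G_-^2 - G_+^2}. Cavity cooling of the Bogoliubov mode \hat\beta then squeezes the quadrature \hat X_1 = (\hat b + \hat b^{\dagger})/\sqrt{2} toward \langle \hat X_1^2 \rangle \approx e^{-2r}/2 in the ideal limit, where G_- and G_+ are the linearized couplings of the red- and blue-detuned pumps.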
In the first experiment, we produce a squeezed thermal state using the method of Kronwald et al. We perform back-action-evading measurements of the mechanical squeezed state in order to probe the noise in both quadratures of the mechanics. Using this method, we detect single-quadrature fluctuations at the level of 1.09 +/- 0.06 times the quantum zero-point motion.
In the second experiment, we measure the spectral noise of the microwave cavity in the presence of the squeezing tones and fit a full model to the spectrum in order to deduce a quadrature variance of 0.80 +/- 0.03 times the zero-point level. These measurements provide the first evidence of quantum squeezing of motion in a mechanical resonator.
Abstract:
This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.
Both of these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.
Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.
Abstract:
We demonstrate that a pattern spectrum can be decomposed into the union of hit-or-miss transforms with respect to a series of structuring-element pairs. Moreover, we use a Boolean-logic function to express the pattern spectrum and show that the Boolean-logic representation of a pattern spectrum is composed of hit-or-miss minterms. The optical implementation of the pattern spectrum is based on an incoherent optical correlator with a feedback operation. (C) 1996 Optical Society of America
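A digital sketch of the two ingredients (standard morphological definitions implemented with scipy, shown for intuition only; this is not the incoherent optical correlator of the paper, and the square structuring elements are illustrative):

    import numpy as np
    from scipy import ndimage

    # Hit-or-miss transform: a pixel fires only where a foreground template
    # matches the object AND a background template matches its complement.
    # Here the templates pick out the upper-left corner of a small square.
    X = np.zeros((7, 7), dtype=bool)
    X[2:5, 2:5] = True
    hit  = np.array([[0, 0, 0],
                     [0, 1, 1],
                     [0, 1, 1]], dtype=bool)    # required foreground pixels
    miss = np.array([[1, 1, 1],
                     [1, 0, 0],
                     [1, 0, 0]], dtype=bool)    # required background pixels
    corner = ndimage.binary_hit_or_miss(X, structure1=hit, structure2=miss)

    # Pattern spectrum (size distribution): area removed between successive
    # openings with structuring elements of growing size.
    areas, prev = [], X.copy()
    for n in range(1, 4):
        opened = ndimage.binary_opening(X, np.ones((2 * n + 1, 2 * n + 1)))
        areas.append(int(prev.sum()) - int(opened.sum()))
        prev = opened

    print(corner.astype(int))
    print(areas)    # [0, 9, 0]: the 3x3 square vanishes at size n = 2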
Abstract:
Fuzzy-reasoning theory is widely used in industrial control. Mathematical morphology is a powerful tool for image processing. We apply fuzzy-reasoning theory to morphology and propose a scheme of fuzzy-reasoning morphology, including fuzzy-reasoning dilation and erosion functions. These functions preserve more fine detail than the corresponding conventional morphological operators with the same structuring element. An optical implementation has been developed with area-coding and thresholding methods. (C) 1997 Optical Society of America.
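One common fuzzification of the basic operators, shown only to fix ideas (generic min/max fuzzy morphology on [0,1]-valued signals; this is not necessarily the paper's fuzzy-reasoning scheme, and the signal and structuring element below are made up):

    import numpy as np

    def fuzzy_dilate(f, g):
        """Fuzzy dilation: (f (+) g)(x) = max_y min(f(x - y), g(y))."""
        out = np.zeros(len(f))
        for x in range(len(f)):
            vals = [min(f[x - y], g[y]) for y in range(len(g)) if 0 <= x - y < len(f)]
            out[x] = max(vals)
        return out

    def fuzzy_erode(f, g):
        """Dual fuzzy erosion: (f (-) g)(x) = min_y max(f(x + y), 1 - g(y))."""
        out = np.zeros(len(f))
        for x in range(len(f)):
            vals = [max(f[x + y], 1.0 - g[y]) for y in range(len(g)) if 0 <= x + y < len(f)]
            out[x] = min(vals)
        return out

    f = np.array([0.1, 0.2, 0.9, 0.3, 0.1])     # fuzzy image (one row)
    g = np.array([0.5, 1.0, 0.5])               # fuzzy structuring element
    print(fuzzy_dilate(f, g), fuzzy_erode(f, g))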
Abstract:
The scaled fractional Fourier transform (FRT) is proposed and implemented optically with a single lens for different values of the fractional angle phi and of the output scale. In addition, it physically relates the FRT to the general lens transform, i.e., the optical diffraction between two asymmetrically positioned planes before and after a lens. (C) 1997 Optical Society of America.
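For reference, one standard definition of the FRT of angle phi (a common convention; the paper's scaled transform additionally rescales the output coordinate, and its exact normalization is not reproduced here):

    \mathcal{F}^{\phi}[f](u) \;=\; \sqrt{\frac{1 - i\cot\phi}{2\pi}}
    \int_{-\infty}^{\infty} f(x)\,
    \exp\!\left[\, i\left(\frac{x^{2}+u^{2}}{2}\cot\phi \;-\; \frac{x\,u}{\sin\phi}\right)\right] dx,

which reduces to the ordinary Fourier transform at phi = pi/2.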
Abstract:
A more powerful tool for binary image processing, logic-operated mathematical morphology (LOMM), is proposed. With LOMM, the image and the structuring element (SE) are treated as binary logical variables, and the MULTIPLY between the image and the SE in the correlation is replaced with 16 logical operations. A total of 12 LOMM operations are obtained. The optical implementation of LOMM is described. Applications of LOMM and experimental results are also presented. (C) 1999 Optical Society of America.
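A minimal digital sketch of the LOMM idea (illustrative only; the choice of logic operation, window indexing, and threshold below are assumptions, not the optical implementation or the paper's exact definitions): the pixelwise product inside a binary correlation is replaced by an arbitrary two-input logic operation.

    import numpy as np

    def logic_correlation(image, se, op):
        """Correlation in which the pixelwise MULTIPLY is replaced by the
        two-input logic operation `op` (there are 16 such operations).
        Output is indexed by the window's top-left corner."""
        H, W = image.shape
        h, w = se.shape
        out = np.zeros((H - h + 1, W - w + 1), dtype=int)
        se_b = se.astype(bool)
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                window = image[i:i + h, j:j + w].astype(bool)
                out[i, j] = int(np.sum(op(window, se_b)))
        return out

    img = np.zeros((6, 6), dtype=int)
    img[1:4, 1:4] = 1
    se = np.ones((3, 3), dtype=int)

    # With AND, thresholding the count at the full SE area recovers binary
    # erosion; other logic operations yield different LOMM-style operators.
    and_corr = logic_correlation(img, se, np.logical_and)
    print((and_corr == se.sum()).astype(int))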