Abstract:
In this thesis, the reduced Navier-Stokes (RNS) equations are numerically integrated. This formulation describes the flow in a three-dimensional boundary layer that also presents a short characteristic spatial scale in the spanwise direction. The RNS equations are used to calculate nonlinear finite-amplitude "streaks", and the results agree with those reported in the literature, typically obtained using direct numerical simulation (DNS) or nonlinear parabolized stability equations (PSE). Streak computations via RNS integration are much cheaper than DNS, and avoid the stability problems that appear in the PSE when the streak amplitude is not small. The RNS integration code is also used to calculate the streaks that emerge naturally at the leading edge of a flat-plate boundary layer in the absence of any free-stream perturbations. Until now, these streaks had been calculated only in the linear limit (small amplitude); in this thesis their calculation is carried out in the fully nonlinear regime (finite amplitude). In the second part of the thesis, the RNS code is generalized to allow a non-flat plate, curved in the spanwise direction and slowly varying in the streamwise direction. This is achieved by applying a change of coordinates that transforms the physical domain into a rectangular one, and the RNS formulation expressed in the corresponding curvilinear coordinates is also numerically integrated. This generalized RNS code is finally used to study the boundary-layer flow over a plate with grooves that vary slowly in the streamwise direction, including grooves that grow in that direction.
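The change of coordinates is not spelled out in the abstract; a minimal sketch of one standard mapping of this kind, assuming the grooved wall is described by a slowly varying shape function y = h(x, z), is a Prandtl-type transposition:

```latex
\xi = x, \qquad \eta = y - h(x, z), \qquad \zeta = z,
```

which flattens the wall onto η = 0 and turns the physical domain into a rectangular one, at the cost of extra metric terms in the transformed RNS equations.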
Abstract:
In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error not globally, for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, the field radiated by an antenna, or the radar cross section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. Results include a comparison with the previously developed energy-norm based hp-adaptivity.
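As a rough illustration of the strategy described (not the authors' implementation; every helper below is a hypothetical placeholder), a goal-oriented hp loop alternates a primal solve, an adjoint solve for the quantity of interest, and refinement of the elements that contribute most to the goal error:

```python
# Hedged sketch of a goal-oriented hp-adaptive loop; every helper is a
# hypothetical placeholder, not the authors' code or a real FE library API.
def goal_oriented_hp_loop(mesh, qoi, tol):
    while True:
        u = solve_primal(mesh)                     # direct EM problem
        z = solve_dual(mesh, qoi)                  # adjoint problem for the goal
        eta = {elem: error_indicator(elem, u, z)   # local residual x dual weight
               for elem in mesh.elements}
        if sum(eta.values()) < tol:                # bound on the goal error
            return u
        for elem in dominant(eta):                 # mark largest contributors
            refine_hp(mesh, elem)                  # h-split or p-enrich
```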
Abstract:
The security of a passive plug-and-play QKD arrangement in the case of finite key lengths (finite resources) is analysed. The eavesdropper is assumed to have full access to the channel, so the source is taken to be unknown and untrusted. The security of the BB84 protocol under collective attacks is treated within the framework of quantum adversaries, and the full treatment yields the well-known equations for the secure key rate. A numerical simulation is carried out in which a minimum number of initial parameters, such as the target total error and the number of pulses, are kept constant; the remaining parameters are optimized to produce the maximum secure key rate. Two main strategies are addressed: with and without two decoy states, including the optimization of the signal-to-decoy ratio.
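The abstract does not reproduce the key-rate equations; one standard form of the "well-known equations" for decoy-state BB84 (the GLLP-style lower bound, stated here as a reference point, not necessarily the exact expression used in the paper) is

```latex
R \;\ge\; q\left\{ -\,Q_\mu f(E_\mu)\,H_2(E_\mu) \;+\; Q_1\bigl[\,1 - H_2(e_1)\,\bigr] \right\},
\qquad
H_2(x) = -x\log_2 x - (1-x)\log_2(1-x),
```

where Q_μ and E_μ are the gain and error rate of the signal states, Q₁ and e₁ the (decoy-estimated) single-photon gain and error rate, and f(E_μ) the error-correction inefficiency.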
Abstract:
In this work, the reduced Navier-Stokes (RNS) equations are numerically integrated and used to calculate nonlinear finite-amplitude streaks. These structures are interesting because they can have a stabilizing effect and delay the transition to the turbulent regime. The RNS formulation is also used to compute the family of nonlinear intrinsic streaks that emerge from the leading edge in the absence of any external perturbation. Finally, this formulation is generalized to include the possibility of having a curved bottom wall.
Abstract:
Civil buildings are not specifically designed to support blast loads, but it is important to take these potential scenarios into account because of their catastrophic effects on persons and structures. A practical way to consider explosions on reinforced concrete structures is necessary. With this objective we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using continuum 3D element models subjected to blast loads. In these models reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model and segregated rebars constrained within the continuum mesh. Unfortunately, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements are developed, using shell and beam elements. In these models concrete is represented with the CONCRETE_EC2 model and segregated rebars with an offset formulation, calibrated against the continuum element models from step one to obtain the same structural response: displacement, velocity, acceleration, damage and erosion. Third, the models based on structural elements are used to develop large models of complete buildings, which are used to study the global response of buildings subjected to blast loads and progressive collapse. This article describes the different techniques needed to properly calibrate the models based on shell and beam elements, in order to provide results of sufficient accuracy at moderate computational cost.
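A hedged sketch of the step-two calibration idea (illustrative only; the actual work drives LS-DYNA models, and the runner functions below are hypothetical placeholders passed in by the caller):

```python
# Illustrative calibration loop: tune shell/beam model parameters so the
# response histories match the continuum 3D reference from step one.
import numpy as np
from scipy.optimize import minimize

def calibrate(run_structural_model, reference_history, x0):
    # run_structural_model(params) is a placeholder that would launch the
    # shell/beam model and return a response history (e.g. displacement).
    def mismatch(params):
        history = run_structural_model(params)
        return np.sum((history - reference_history) ** 2)
    # derivative-free search, since each evaluation is a full FE run
    return minimize(mismatch, x0=x0, method="Nelder-Mead")
```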
Abstract:
In a finite element (FE) analysis of elastic solids several items are usually considered, namely the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), while the number of nodes and dofs of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest-descent gradient technique has been applied as the optimization algorithm. However, this technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonable initial regular meshes used in practice, and this conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. curves tangent at each point to the principal direction lines of the elastic problem to be solved, regularly spaced in order to build regular elements. In practice ii-meshes are obtained by iteration: the elastic analysis is carried out with the initial FE mesh, the net of isostatic lines is drawn from the results, and a first ii-mesh is built. This first ii-mesh can be improved, if necessary, by analysing the problem again and generating a new, improved ii-mesh after the FE analysis. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
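A minimal sketch of the steepest-descent node-position update described above (illustrative only; `objective` would evaluate, e.g., the total potential energy of the FE solution on the given node layout):

```python
import numpy as np

def improve_mesh(nodes, objective, step=1e-3, h=1e-6, iters=50):
    # Steepest-descent update of the free node coordinates; the gradient of
    # the objective (e.g. total potential energy) is taken by finite
    # differences, which is what makes the technique computationally heavy.
    x = np.array(nodes, dtype=float)
    for _ in range(iters):
        f0 = objective(x)
        grad = np.zeros_like(x)
        for idx in np.ndindex(*x.shape):
            x[idx] += h
            grad[idx] = (objective(x) - f0) / h
            x[idx] -= h
        x -= step * grad          # move every node downhill
    return x
```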
Abstract:
Paper submitted to the XVIII Conference on Design of Circuits and Integrated Systems (DCIS), Ciudad Real, Spain, 2003.
Abstract:
Paper submitted to the 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Sharjah, United Arab Emirates, 2003.
Abstract:
We review progress at the Australian Centre for Quantum Computer Technology towards the fabrication and demonstration of spin qubits and charge qubits based on phosphorus donor atoms embedded in intrinsic silicon. Fabrication is being pursued via two complementary pathways: a 'top-down' approach for near-term production of few-qubit demonstration devices and a 'bottom-up' approach for large-scale qubit arrays with sub-nanometre precision. The 'top-down' approach employs a low-energy (keV) ion beam to implant the phosphorus atoms. Single-atom control during implantation is achieved by monitoring on-chip detector electrodes, integrated within the device structure. In contrast, the 'bottom-up' approach uses scanning tunnelling microscope lithography and epitaxial silicon overgrowth to construct devices at an atomic scale. In both cases, surface electrodes control the qubit using voltage pulses, and dual single-electron transistors operating near the quantum limit provide fast read-out with spurious-signal rejection.
Abstract:
The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with canonical ensemble (NVT) and Gibbs ensemble Monte Carlo (GEMC) simulations. The canonical ensemble was a collection of cubic simulation boxes in each of which a finite pore resides, while the Gibbs ensemble was that of the pore space of the finite pore. Argon was used as a model Lennard-Jones fluid, while the adsorbent was modelled as a finite carbon slit pore whose two walls were composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both ensembles, and these results were compared with isotherms obtained for the corresponding infinite pores using grand canonical ensembles. The effects of the number of cycles necessary to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the canonical ensemble Monte Carlo simulations. Of these parameters, the displacement step had the most significant effect on the performance of the simulation. The simulation box size was also important, especially at low pressures, at which the box must be sufficiently large to contain a statistically acceptable number of particles in the bulk phase. Finally, it was found that the canonical and Gibbs ensembles yielded the same isotherm (within statistical error); the computation time for GEMC was shorter than that for the canonical ensemble simulation, although the latter method described the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
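A toy sketch of two ingredients named in the abstract, the LJ 12-6 pair potential and a canonical (NVT) displacement trial whose step size `delta` was found to matter most (illustrative only, not the study's code; `total_energy` is a placeholder supplied by the caller):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def lj_12_6(r, eps, sigma):
    # Lennard-Jones 12-6 pair potential: 4*eps*((sigma/r)^12 - (sigma/r)^6)
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def nvt_displacement_move(pos, total_energy, delta, T, rng):
    # One Metropolis trial: displace a random particle by up to +/- delta
    # in each direction and accept with probability min(1, exp(-dE / kT)).
    i = rng.integers(len(pos))
    old = pos[i].copy()
    e_old = total_energy(pos)
    pos[i] = old + rng.uniform(-delta, delta, size=3)
    e_new = total_energy(pos)
    if rng.random() >= np.exp(-(e_new - e_old) / (KB * T)):
        pos[i] = old  # reject the move and restore the old position
```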
Abstract:
What is the computational power of a quantum computer? We show that determining the output of a quantum computation is equivalent to counting the number of solutions to an easily computed set of polynomials defined over the finite field Z_2. This connection allows simple proofs to be given for two known relationships between quantum and classical complexity classes, namely BQP ⊆ P^#P and BQP ⊆ PP.
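To make the counting connection concrete, here is a toy brute-force counter of solutions to a polynomial system over Z_2 (purely illustrative; the paper's reduction produces such systems from quantum circuits):

```python
from itertools import product

def count_solutions_mod2(polys, n):
    # Count assignments x in {0,1}^n with p(x) = 0 (mod 2) for every p.
    return sum(all(p(x) % 2 == 0 for p in polys)
               for x in product((0, 1), repeat=n))

# Example system over Z_2: x0*x1 + x2 = 0 and x0 + x1 = 0.
polys = [lambda x: x[0] * x[1] + x[2],
         lambda x: x[0] + x[1]]
print(count_solutions_mod2(polys, 3))  # 2 solutions: (0,0,0) and (1,1,1)
```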
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
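The O(n³) cost comes from solving a linear system with the n×n covariance matrix; a minimal sketch of standard Gaussian process regression (assuming a precomputed kernel matrix K and cross-covariances K_star):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def posterior_mean(K, K_star, y, noise=1e-6):
    # Posterior mean m = K_star^T (K + noise*I)^{-1} y; the Cholesky
    # factorization of the n x n matrix is the O(n^3) step.
    factor = cho_factor(K + noise * np.eye(len(y)))
    return K_star.T @ cho_solve(factor, y)
```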
Abstract:
For neural networks with a wide class of weight priors, it can be shown that in the limit of an infinite number of hidden units the prior over functions tends to a Gaussian process. In this article, analytic forms are derived for the covariance functions of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units, and shows, somewhat paradoxically, that it may be easier to carry out Bayesian prediction with infinite networks than with finite ones.
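As a reference point (stated from the literature on infinite networks, not quoted from this article), the covariance that arises for erf-sigmoidal hidden units under a zero-mean Gaussian weight prior with covariance Σ takes the arcsine form

```latex
k(\mathbf{x}, \mathbf{x}') = \frac{2}{\pi}
\sin^{-1}\!\left(
\frac{2\,\tilde{\mathbf{x}}^{\top} \Sigma\, \tilde{\mathbf{x}}'}
{\sqrt{\bigl(1 + 2\,\tilde{\mathbf{x}}^{\top} \Sigma\, \tilde{\mathbf{x}}\bigr)
       \bigl(1 + 2\,\tilde{\mathbf{x}}'^{\top} \Sigma\, \tilde{\mathbf{x}}'\bigr)}}
\right),
```

where \tilde{x} = (1, x_1, ..., x_d)^T is the input augmented with a bias entry.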
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone: they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to the more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
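A one-line statement of the underlying idea, in generic weighted-residual form (a sketch, not the thesis' particular elements): within each time element [t_n, t_{n+1}] the response of a single-degree-of-freedom system is interpolated by temporal shape functions and the equation of motion is enforced in the Galerkin sense,

```latex
u(t) \approx \sum_j N_j(t)\, u_j, \qquad
\int_{t_n}^{t_{n+1}} N_i(t)\,\bigl(m\ddot{u} + c\dot{u} + ku - f(t)\bigr)\, dt = 0,
```

which yields a step-by-step recurrence for the nodal values u_j.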
Abstract:
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main objects of study are minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely-specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
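A toy check of the inductive-basis property for finite language fragments (illustrative only; it assumes quotients by single symbols, which extend to quotients by arbitrary strings inductively since quotients distribute over unions):

```python
from itertools import combinations

def left_quotient(lang, a):
    # a^{-1}L: the strings w such that a + w is in lang
    return frozenset(w[1:] for w in lang if w and w[0] == a)

def all_unions(basis):
    # every language expressible as a union of basis elements
    out = {frozenset()}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            out.add(frozenset().union(*combo))
    return out

def is_inductive_basis(basis, alphabet):
    # each one-symbol left quotient of each element must be a union of elements
    expressible = all_unions(basis)
    return all(left_quotient(b, a) in expressible
               for b in basis for a in alphabet)
```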