990 results for Optimal Testing
Abstract:
A parallel strategy for solving multidimensional tridiagonal equations is investigated in this paper. We present in detail an improved version of the single parallel partition (SPP) algorithm combined with message vectorization, which aggregates several communication messages into one to reduce the communication cost. We show that the resulting block SPP algorithm achieves good speedup over a wide range of message vector lengths (MVL), especially when the number of grid points in the divided direction is large. Instead of simply using the largest possible MVL, we use numerical tests and modeling analysis to determine an optimal MVL, yielding a significant improvement in speedup.
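The benefit of aggregation is visible in the standard latency/bandwidth communication model (a textbook sketch, not necessarily the model calibrated in the paper): sending $m$ separate messages of $n$ bytes each costs

$$T_{\mathrm{separate}} = m(\alpha + \beta n), \qquad T_{\mathrm{vectorized}} = \alpha + \beta m n,$$

where $\alpha$ is the per-message latency and $\beta$ the per-byte transfer cost, so vectorizing $m$ messages into one saves $(m-1)\alpha$ at the price of larger communication buffers.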
Abstract:
It has long been recognized that many direct parallel tridiagonal solvers are efficient only when solving a single tridiagonal equation of large size, and they become inefficient when naively used in a three-dimensional ADI solver. To improve the parallel efficiency of an ADI solver built on a direct parallel solver, we implement the single parallel partition (SPP) algorithm in conjunction with message vectorization, which aggregates several communication messages into one to reduce the communication cost. The measured performance shows that the longest allowable message vector length (MVL) is not necessarily the best choice. To understand this observation and optimize the performance, we propose an improved model that takes the cache effect into consideration. The optimal MVL for achieving the best performance is shown to depend on the number of processors and the grid size. A similar dependence of the optimal MVL is also found for the popular block pipelined method.
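A minimal sketch of how such a model can be used to pick the MVL, assuming an illustrative latency/bandwidth model with an ad hoc cache penalty (the parameter values and the penalty form are assumptions, not the paper's calibrated model):

```python
# Sketch: choosing an optimal message vector length (MVL) from a simple
# latency/bandwidth model with a crude cache penalty. All parameters are
# illustrative assumptions, not the paper's measured values.

def modeled_time(mvl, n_messages, alpha=5e-6, beta=1e-9,
                 msg_bytes=8, cache_bytes=256 * 1024, cache_slow=4.0):
    """Estimated time to send n_messages of msg_bytes each, aggregated
    into batches of `mvl` messages per send."""
    batches = -(-n_messages // mvl)  # ceiling division
    batch_bytes = mvl * msg_bytes
    # Per-byte cost is assumed to jump once a batch spills out of cache.
    eff_beta = beta * (cache_slow if batch_bytes > cache_bytes else 1.0)
    return batches * (alpha + eff_beta * batch_bytes)

def optimal_mvl(n_messages, candidates):
    return min(candidates, key=lambda v: modeled_time(v, n_messages))

if __name__ == "__main__":
    candidates = [2 ** k for k in range(1, 18)]
    print("modeled optimal MVL:", optimal_mvl(1_000_000, candidates))
```

Under such a model the optimum sits where the latency savings of longer vectors are overtaken by the cache penalty, which is consistent with the observation that the longest allowable MVL is not necessarily the best.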
Abstract:
Smart and mobile environments require seamless connections. However, because mobile devices repeatedly go through discovery and disconnection while data interchange is happening, wireless connections are often interrupted. To minimize this drawback, a protocol that enables easy and fast synchronization is crucial. With this in mind, Bluetooth technology appears to be a suitable way to carry out such connections, thanks to the discovery and pairing capabilities it provides. Nonetheless, the time and energy spent when several devices are being discovered and used at the same time still need to be managed properly; it is essential that discovery take as little time and energy as possible. In addition, it is believed that communication performance does not remain constant as transmission speed and throughput increase, but this has not been demonstrated formally. The purpose of this project is therefore twofold: first, to design and build a framework-system capable of performing controlled Bluetooth device discovery, pairing, and communication; second, to analyze and test the scalability and performance of the classic Bluetooth standard under different scenarios, with various sensors and devices, using the framework developed. To achieve the first goal, a generic Bluetooth platform will be used to control the test conditions and to form a ubiquitous wireless system connected to an Android smartphone. For the second goal, various stress tests will be carried out to measure the rate of battery consumption as well as the quality of the communications between the devices involved.
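As a flavor of what the discovery stage of such a framework looks like, a minimal sketch using the PyBluez library (the thesis's own framework is not public; device addresses and names will vary):

```python
# Minimal timed Bluetooth discovery sketch using PyBluez (pip install pybluez).
# This is a stand-in for the thesis's custom framework, not its actual code.
import time
import bluetooth  # PyBluez

start = time.monotonic()
devices = bluetooth.discover_devices(duration=8, lookup_names=True)
elapsed = time.monotonic() - start

print(f"discovery took {elapsed:.1f} s, found {len(devices)} device(s)")
for addr, name in devices:
    print(f"  {addr}  {name}")
```

Timing the discovery loop in this way is the basic building block for the stress tests described above.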
Abstract:
Experimental research was conducted on a 150 kW arc-heated plasma testing facility. Stable plasma jets with different gas compositions, temperatures, and velocities were obtained at chamber pressures between 400 Pa and 100 kPa. Stagnation ablation experiments were conducted on samples of typical superalloys used in thermal protection systems. The microstructure and hardness of the alloys before and after ablation were compared.
Abstract:
Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity-even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. In contrast, the B-mode (or parity-odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe's history.
In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. This work also describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04 ± 0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
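To illustrate how an upper credible limit follows from the quoted estimate, here is a toy reconstruction assuming a Gaussian likelihood and a flat prior restricted to r >= 0 (the actual Bicep1 analysis uses the full posterior, so its quoted limit of r < 0.70 differs slightly from this approximation):

```python
# Toy 95% upper credible limit on r from r = 0.04 +/- 0.32, assuming a
# Gaussian likelihood and a flat prior on r >= 0. The real analysis uses
# the full posterior, hence its quoted r < 0.70.
from scipy.stats import norm

r_hat, sigma = 0.04, 0.32
dist = norm(loc=r_hat, scale=sigma)

# Posterior = Gaussian truncated to r >= 0; find its 95% quantile.
mass_below_zero = dist.cdf(0.0)
target = mass_below_zero + 0.95 * (1.0 - mass_below_zero)
print(f"95% upper limit: r < {dist.ppf(target):.2f}")
```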
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
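In its generic form (as stated in the OUQ literature), the sharpest such bound is itself an optimization problem over all scenarios consistent with the available information: given an admissible set $\mathcal{A}$ of pairs of response functions and input distributions,

$$\mathcal{U}(\mathcal{A}) = \sup_{(f,\mu) \in \mathcal{A}} \mathbb{E}_{\mu}[f(X)],$$

and it is this supremum that is generally non-convex and expensive to compute.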
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems, presenting conditions on the objective function and the information constraints under which a convex formulation exists. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the ability to analyze a practical system through such convex formulations is demonstrated by a numerical example of energy-storage placement in power grids.
When an equivalent convex formulation is unavailable, it is sometimes possible to find a convex problem, known as a convex relaxation, whose solution provides a meaningful bound for the original problem. As an example, the thesis investigates the setting of Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. By exploiting structure such as symmetry, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its sums-of-squares convex relaxation. These bounds are tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
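For reference, the classical baseline being improved upon: for independent random variables $X_1, \dots, X_n$ with $X_i \in [a_i, b_i]$ and $S = \sum_{i} X_i$, Hoeffding's inequality states

$$\Pr\big(S - \mathbb{E}[S] \ge t\big) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),$$

a bound that uses only the ranges $[a_i, b_i]$ and is therefore generally loose when more distributional information is available.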
Abstract:
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation-theory plasticity and, to simplify the analysis further, we assume isotropic rigid-plastic deformations with zero plastic spin. When the hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material. Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in the manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, with allowances made for experimental scatter, ostensibly collapse onto a master curve that depends on the hardening exponent but is otherwise material independent.
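Schematically (and only schematically; the paper's precise functional is of deformation-theoretic plasticity type), the competition described above is that of an energy of the form

$$E(u) = \int_\Omega W(\nabla u)\,dx + \ell \int_\Omega |\nabla^2 u|\,dx, \qquad W(F) \sim |F|^p, \quad 0 < p < 1,$$

in which the sublinear local term favors concentrating deformation on thin bands, while the second-gradient term, carrying the intrinsic length $\ell$ and growing linearly, penalizes such concentration; matching upper and lower bounds on the infimum of such an energy are what yield the scaling laws.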
Abstract:
The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
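A minimal Monte Carlo sketch of the Ornstein-Uhlenbeck disturbance model (parameter values are illustrative assumptions; the controller itself is omitted):

```python
# Monte Carlo sketch of the Ornstein-Uhlenbeck random-input model,
# dx = -theta*x dt + sigma dW, via Euler-Maruyama. The terminal variance
# printed here is the quantity an MTV controller would act to minimize.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 0.5, 0.2        # mean-reversion rate, noise strength (assumed)
dt, n_steps, n_paths = 0.01, 1000, 5000

x = np.zeros(n_paths)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += -theta * x * dt + sigma * dw

print("terminal variance:", x.var())
# For reference, the stationary variance of an OU process is
# sigma**2 / (2 * theta) = 0.04 for these values.
```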
Abstract:
A general framework for multi-criteria optimal design, well suited to the automated design of structural systems, is presented. A systematic computer-aided optimal design decision process is developed that allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires selecting the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, so they relate to member sizes, structural configuration, etc. The evaluation of a design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form: they give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the power needed to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively; a generic sketch of the approach follows below.
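A generic real-coded genetic algorithm maximizing an overall preference measure might look as follows (a minimal stand-in: the hGA and vGA operators are the thesis's own, and the preference functions below are illustrative):

```python
# Minimal real-coded GA maximizing an overall preference measure built
# from "soft" per-criterion preference functions. Illustrative only;
# not the thesis's hGA/vGA.
import random

def preference(x, lo, hi):
    """Degree of satisfaction: 1 inside [lo, hi], decaying linearly outside."""
    if lo <= x <= hi:
        return 1.0
    return max(0.0, 1.0 - min(abs(x - lo), abs(x - hi)))

def overall(design):
    weight, deflection = design
    # Combine individual criterion measures (here, by multiplication).
    return preference(weight, 0.0, 2.0) * preference(deflection, 0.0, 1.0)

def ga(pop_size=50, generations=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=overall, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0.0, 0.1)  # crossover + mutation
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=overall)

print(ga())
```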
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
Optimal feedback control of two-photon fluorescence in an ethanol solution of 4-dicyanomethylene-2-methyl-6-p-dimethylaminostyryl-4H-pyran (DCM), using a pulse-shaping technique based on a genetic algorithm, is demonstrated experimentally. The two-photon fluorescence intensity of the DCM ethanol solution is enhanced by about 23%. The second-harmonic-generation frequency-resolved optical gating (SHG-FROG) trace indicates that the effective population transfer arises from the positively chirped pulse. The experimental results suggest potential applications of coherent control to complicated molecular systems.
Abstract:
Optimal feedback control of two-photon fluorescence in a Coumarin 515 ethanol solution excited by shaped femtosecond laser pulses, based on a genetic algorithm, is demonstrated experimentally. The two-photon fluorescence intensity can be enhanced by roughly 20%. Second-harmonic-generation frequency-resolved optical gating traces indicate that the optimal laser pulses are positively chirped, which favors the effective population transfer of two-photon transitions. The dependence of the two-photon fluorescence signal on the laser pulse chirp is investigated to validate the theoretical model for the effective population transfer of two-photon transitions. The experimental results suggest potential applications in nonlinear spectroscopy and molecular physics.
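For context on the chirp terminology (a standard definition, not specific to these experiments), a linearly chirped Gaussian pulse can be written as

$$E(t) = E_0 \exp\!\left(-\frac{t^2}{2\tau^2}\right) \cos\!\left(\omega_0 t + a t^2\right),$$

whose instantaneous frequency is $\omega(t) = \omega_0 + 2at$; a positive chirp ($a > 0$) sweeps the frequency upward across the pulse, which is the property both experiments find favorable for population transfer.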