994 results for Computer Experiments
Abstract:
This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, with gains in accuracy coming at the price of computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for allocating the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
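For context, the classical expected improvement criterion that the proposed quantile-based criterion is modelled on has a closed form under a Gaussian process (kriging) posterior. The sketch below illustrates only that classical criterion, not the paper's quantile-based variant or the DiceOptim implementation; the function name and inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Classical EI for minimization, given the GP posterior mean `mu`,
    posterior standard deviation `sigma`, and current best observation `f_min`."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)  # EI is zero where the prediction is deterministic

# Example: the next design point would be the maximizer of EI over a candidate set
mu = np.array([0.2, 0.0, -0.1])
sigma = np.array([0.3, 0.05, 0.2])
print(expected_improvement(mu, sigma, f_min=0.05))
```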
Abstract:
The paper presents basic notions and scientific achievements in the field of program transformations, describes the use of these achievements both in professional practice (when developing optimizing and parallelizing compilers) and in higher education, and analyzes the main problems in this area. The concept of managing program transformation information is introduced in the form of a specialized knowledge bank on computer program transformations to support scientific research, education and professional activity in the field. The tasks that are solved by the knowledge bank are formulated. The paper is intended for experts in artificial intelligence and optimizing compilation, and for postgraduates and senior students of corresponding specialties; it may also be of interest to university lecturers and instructors.
Abstract:
Computer simulators of real-world processes are often computationally expensive and require many inputs. The problem of computational expense can be handled using emulation technology; however, highly multidimensional input spaces may require more simulator runs to train and validate the emulator. We aim to reduce the dimensionality of the problem by screening the simulator's inputs for nonlinear effects on the output, rather than distinguishing between negligible and active effects. Our proposed method is built upon the elementary effects (EE) method for screening and uses a threshold value to separate the inputs with linear and nonlinear effects. The technique is simple to implement and acts in a sequential way to keep the number of simulator runs to a minimum, while identifying the inputs that have nonlinear effects. The algorithm is applied to a set of simulated examples and a rabies disease simulator, where we observe run savings ranging between 28% and 63% compared with the batch EE method. Supplementary materials for this article are available online.
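The elementary effects this screening method builds on are simple finite differences: a purely linear input produces the same elementary effect at every base point, so variation across base points signals a nonlinear (or interacting) effect. The sketch below illustrates only that underlying idea with a simple threshold rule; the paper's sequential design and stopping rules are not reproduced, and all names and the toy simulator are illustrative.

```python
import numpy as np

def elementary_effect(f, x, i, delta):
    """One elementary effect of input i at base point x (Morris-style finite difference)."""
    x_pert = x.copy()
    x_pert[i] += delta
    return (f(x_pert) - f(x)) / delta

def flag_nonlinear(f, base_points, delta, threshold):
    """Flag inputs whose elementary effects vary across base points.
    For a purely linear effect the EE is constant, so a spread above
    `threshold` suggests a nonlinear (or interacting) effect."""
    d = base_points.shape[1]
    flags = []
    for i in range(d):
        ees = [elementary_effect(f, x, i, delta) for x in base_points]
        flags.append(np.ptp(ees) > threshold)
    return np.array(flags)

# Toy simulator: linear in x0, quadratic in x1
f = lambda x: 2.0 * x[0] + x[1] ** 2
rng = np.random.default_rng(0)
base = rng.uniform(0.0, 1.0, size=(4, 2))
print(flag_nonlinear(f, base, delta=0.1, threshold=1e-6))  # -> [False  True]
```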
Abstract:
The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall amounts. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Explicitly accounting for those errors, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is taken as the reference and the other sources of data, i.e. RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimates from RCs with those from rain gauges and the reference data allows the benefit of the RCs to be assessed. The value of this additional source of data is assessed not only for areal rainfall estimation performance, but also for use in hydrological modeling. The results show that RCs with the measurement errors derived from laboratory experiments provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Even when uncertainties higher than those obtained in the laboratory are assumed for the RCs, their use remains practical up to a certain level.
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the sheer size of the information. The focus of this thesis is to provide statistical models that scale to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification of the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another massive-data problem, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the interpolation of methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 proposes a new robustness criterion for parameter estimation and shows that several inference approaches satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.
Abstract:
We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended as a simple representation of a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime to the concentrated nematic regime.
Abstract:
Computer experiments of interstellar cloud collisions were performed with a new smoothed-particle-hydrodynamics (SPH) code. The SPH quantities were calculated using spatially adaptive smoothing lengths, and the SPH fluid equations of motion were solved by means of a hierarchical multiple time-scale leapfrog. Such a combination of methods allows the code to deal with a large range of hydrodynamic quantities. A careful treatment of gas cooling by H, H2, CO and H II, as well as heating by cosmic rays and by H2 production on grain surfaces, was also included in the code. The gas model approximately reproduces the typical environment of dark molecular clouds. The experiments were performed by impinging two dynamically identical spherical clouds onto each other with a relative velocity of 10 km s^-1 but with a different impact parameter for each case. Each object has an initial density profile obeying an r^-1 law with a cutoff radius of 10 pc and an initial temperature of 20 K. As a main result, cloud-cloud collision triggers fragmentation, but at the expense of a large amount of dissipated energy, and this occurred in the head-on case only. The off-center collision did not allow the remnants to fragment over the simulated time span (~6 Myr); however, it dissipated a considerable amount of orbital energy. Structures as small as 0.1 pc, with densities of ~10^4 cm^-3, were observed in the more energetic collision.
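The leapfrog scheme mentioned above is a standard symplectic integrator; a minimal kick-drift-kick step is sketched below for context. This is not the code described in the abstract: the hierarchical multiple time-scale machinery, adaptive smoothing lengths and SPH force evaluation are omitted, and `accel_func` stands in for whatever acceleration (pressure gradients, gravity, etc.) the simulator would supply.

```python
import numpy as np

def leapfrog_step(pos, vel, accel_func, dt):
    """One kick-drift-kick leapfrog step; `accel_func(pos)` returns the acceleration."""
    acc = accel_func(pos)
    vel_half = vel + 0.5 * dt * acc                        # kick to the half step
    pos_new = pos + dt * vel_half                          # drift a full step
    vel_new = vel_half + 0.5 * dt * accel_func(pos_new)    # kick to the full step
    return pos_new, vel_new

# Toy check on a harmonic oscillator (a = -x): the orbit stays bounded
pos, vel = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    pos, vel = leapfrog_step(pos, vel, lambda x: -x, dt=0.01)
print(pos, vel)
```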
Abstract:
Understanding consciousness is one of the most fascinating challenges of our time. From ancient civilizations to modern philosophers, questions have been asked about how one is conscious of one's own existence and of the surrounding world. Although there is no precise definition of consciousness, there is agreement that it is strongly related to human cognitive processes such as thinking, reasoning, emotions and wishes. One of the key processes underlying the emergence of consciousness is attention, a process capable of selecting a few stimuli from the huge amount of information that constantly reaches us. Machine consciousness is the field of artificial intelligence that investigates the possibility of producing conscious processes in artificial devices. This work presents a review of consciousness, in both its natural and artificial aspects, discussing the theme from philosophical and computational perspectives, and investigates the feasibility of adopting an attentional schema as the basis for cognitive processing. A formal computational model is proposed for conscious agents that integrates short- and long-term memories, reasoning, planning, emotion, decision making, learning, motivation and volition. Computer experiments in a mobile robotics domain under the USARSim simulation environment, proposed by RoboCup, suggest that the agent is able to use these elements to acquire experiences based on environmental stimuli. The adoption of the cognitive architecture built on the attentional model has the potential to allow the emergence of behaviours usually associated with consciousness in the simulated mobile robots. Further implementation under this model could potentially allow the agent to express sentience, self-awareness, self-consciousness, autonoetic consciousness, mineness and perspectivalness. By performing computation over an attentional space, the model also allows the ...
Abstract:
This thesis reports on research into the progressive development of fibrous aggregates, e.g. calcite, quartz and mica crystals in veins and strain fringes. The study is based on microstructural analysis of natural examples and on computer experiments. Investigation of fibrous-looking elongate crystals in striped bedding-veins from the Orobic Alps, Italy, indicates that these crystals do not track the opening trajectory of the veins but are oriented at an angle of up to 80° to the opening direction. Microstructural analysis of quartz, calcite and chlorite fibres in antitaxial strain fringes indicates that most strain fringes contain complex intergrowths of tracking (displacement-controlled) and non-tracking (face-controlled) fibres. To explain these growth features the computer program
Abstract:
Direct imaging of extra-solar planets in the visible and infrared region has generated great interest among scientists and the general public alike. However, this is a challenging problem. Difficulties in detecting a planet (a faint source) are caused mostly by two factors: sidelobes caused by starlight diffraction from the edge of the pupil, and randomly scattered starlight caused by phase errors from imperfections in the optical system. While the latter difficulty can be corrected by high-density active deformable mirrors with advanced phase sensing and control technology, the optimal strategy for suppressing the diffraction sidelobes is still an open question. In this thesis, I present a new approach to the sidelobe reduction problem: pupil phase apodization. It is based on the discovery that an anti-symmetric spatial phase modulation pattern imposed over a pupil or a relay plane causes diffracted-starlight suppression sufficient for imaging of extra-solar planets. Numerical simulations with specific square pupil (side D) phase functions, such as ..., demonstrate annulling in at least one quadrant of the diffraction plane to a contrast level of better than 10^12, with an inner working angle down to 3.5 L/D (with a = 3 and e = 10^3). Furthermore, our computer experiments show that phase apodization remains effective throughout a broad spectrum (60% of the central wavelength) covering the entire visible light range. In addition to the specific phase functions that can yield deep sidelobe reduction in one quadrant, we also found that a modified Gerchberg-Saxton algorithm can help to find small-sized (101 x 101 element) discrete phase functions if regional sidelobe reduction is desired. Our simulation shows that a 101 x 101 segmented but gapless active mirror can also generate a dark region with an inner working distance of about 2.8 L/D in one quadrant. Phase-only modulation has the additional appeal of potential implementation via active segmented or deformable mirrors, thereby combining compensation of random phase aberrations and removal of the diffraction halo in a single optical element.
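For context, the unmodified Gerchberg-Saxton iteration referred to above alternates between the pupil plane and the focal plane, enforcing the known amplitude in each plane and keeping only the phase. The sketch below is that textbook iteration, not the modified version developed in the thesis; the pupil, target amplitude and grid size are illustrative (the 101 x 101 grid simply echoes the element count mentioned in the abstract).

```python
import numpy as np

def gerchberg_saxton(pupil_amp, target_focal_amp, n_iter=200, seed=0):
    """Standard Gerchberg-Saxton iteration between a pupil plane and a focal
    plane: keep the known amplitudes in each plane, retain only the phase.
    Returns the recovered pupil-plane phase."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        pupil_field = pupil_amp * np.exp(1j * phase)
        focal_field = np.fft.fft2(pupil_field)
        focal_field = target_focal_amp * np.exp(1j * np.angle(focal_field))
        pupil_field = np.fft.ifft2(focal_field)
        phase = np.angle(pupil_field)
    return phase

# Toy example on a 101 x 101 grid: recover a phase screen consistent with a target amplitude
n = 101
pupil = np.ones((n, n))                                  # uniform square pupil
yy, xx = np.mgrid[0:n, 0:n]
true_phase = 0.3 * np.sin(2 * np.pi * xx / n)            # phase screen used to build the target
target = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
phi = gerchberg_saxton(pupil, target, n_iter=50)
print(phi.shape)
```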
Abstract:
We extend the concept of eigenvector centrality to multiplex networks, and introduce several alternative parameters that quantify the importance of nodes in a multi-layered networked system, including the definition of vectorial-type centralities. In addition, we rigorously show that, under reasonable conditions, such centrality measures exist and are unique. Computer experiments and simulations demonstrate that the proposed measures provide substantially different results when applied to the same multiplex structure, and highlight the non-trivial relationships between the different measures of centrality introduced.
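For reference, the single-layer eigenvector centrality that these multiplex measures generalize is the leading eigenvector of the adjacency matrix, computable by power iteration. The sketch below covers only that single-layer case; the vectorial and other multiplex centralities introduced in the paper are not reproduced, and the toy network is illustrative.

```python
import numpy as np

def eigenvector_centrality(adj, n_iter=1000, tol=1e-10):
    """Eigenvector centrality of a single-layer network via power iteration:
    the leading eigenvector of the adjacency matrix, normalized to unit sum."""
    n = adj.shape[0]
    x = np.ones(n) / n
    for _ in range(n_iter):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x / x.sum()

# Toy network: a triangle (nodes 0-1-2) with a pendant node 3 attached to node 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(eigenvector_centrality(A))  # node 0, the best connected, receives the largest score
```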
Abstract:
A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
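As a concrete point of reference, a minimal sketch of the usual symmetric (Hadamard-coin) discrete-time quantum walk is given below, together with the Shannon entropy of its position distribution. This is a generic illustration, not the authors' code; it does not implement the coin-tracing variants, the Lorenz curves or the majorization analysis described in the abstract.

```python
import numpy as np

def hadamard_walk(n_steps):
    """Discrete-time quantum walk on the integers with a Hadamard coin and the
    symmetric initial coin state (|0> + i|1>)/sqrt(2). Returns the position
    probability distribution after n_steps."""
    size = 2 * n_steps + 1                       # positions -n_steps .. n_steps
    amp = np.zeros((size, 2), dtype=complex)     # coin state 0 moves left, 1 moves right
    amp[n_steps, 0] = 1 / np.sqrt(2)
    amp[n_steps, 1] = 1j / np.sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(n_steps):
        amp = amp @ H.T                          # apply the coin at every position
        shifted = np.zeros_like(amp)
        shifted[:-1, 0] = amp[1:, 0]             # coin 0 shifts one site to the left
        shifted[1:, 1] = amp[:-1, 1]             # coin 1 shifts one site to the right
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)

def shannon_entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Entropy of the position distribution for increasing numbers of steps
for n in (10, 50, 100):
    print(n, shannon_entropy(hadamard_walk(n)))
```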