997 results for adaptive optics
Abstract:
The shallow water equations are solved using a mesh of polygons on the sphere that adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load balancing and thus allows more accurate mapping when the mesh is adapted. We simulate the growth of a barotropically unstable jet, adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This proportion is similar to that found in previous studies of the same test case with mesh adaptation every 1–20 min. Predicting the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh, in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: one resolved on the coarse mesh, and one that is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
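To illustrate the gradient-of-vorticity part of the adaptation criterion described above, here is a minimal sketch, not the paper's code: cells are flagged for refinement where the magnitude of the vorticity gradient exceeds a fraction of its maximum, with a simple structured grid standing in for the spherical polygonal mesh. The field, grid and threshold are illustrative assumptions.

```python
import numpy as np

def refinement_flags(vorticity, dx, dy, fraction=0.2):
    """Boolean mask of cells whose vorticity-gradient magnitude is large."""
    dzeta_dy, dzeta_dx = np.gradient(vorticity, dy, dx)   # axis 0 = y, axis 1 = x
    grad_mag = np.hypot(dzeta_dx, dzeta_dy)
    return grad_mag >= fraction * grad_mag.max()

# Example: an idealised jet produces a band of flagged cells along the shear layer.
x = np.linspace(0.0, 2.0 * np.pi, 128)
y = np.linspace(-1.0, 1.0, 64)
_, Y = np.meshgrid(x, y)
zeta = np.tanh(Y / 0.1)                                   # idealised vorticity field
flags = refinement_flags(zeta, x[1] - x[0], y[1] - y[0])
print(f"{flags.mean():.0%} of cells flagged for refinement")
```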
Abstract:
The Iowa gambling task (IGT) is one of the most influential behavioral paradigms in reward-related decision making and has most notably been associated with ventromedial prefrontal cortex function. However, performance in the IGT relies on a complex set of cognitive subprocesses, in particular integrating information about the outcome of choices into a continuously updated decision strategy under ambiguous conditions. The complexity of the task has made it difficult for neuroimaging studies to disentangle the underlying neurocognitive processes. In this study, we used functional magnetic resonance imaging in combination with a novel adaptation of the task, which allowed us to separately examine activation associated with the moment of decision and with the evaluation of decision outcomes. Importantly, using whole-brain regression analyses with individual performance, in combination with the choice/outcome history of individual subjects, we aimed to identify the neural overlap between areas that are involved in the evaluation of outcomes and in the progressive discrimination of the relative value of available choice options, thus mapping the two fundamental cognitive processes that lead to adaptive decision making. We show that activation in right ventromedial and dorsolateral prefrontal cortex was predictive of adaptive performance, in both discriminating disadvantageous from advantageous decisions and confirming negative decision outcomes. We propose that these two prefrontal areas mediate shifting away from disadvantageous choices through their sensitivity to accumulating negative outcomes. These findings provide functional evidence of the underlying processes by which these prefrontal subregions drive adaptive choice in the task, namely through contingency-sensitive outcome evaluation.
Abstract:
In this article, we examine the case of a system that cooperates with a "direct" user to plan an activity that some "indirect" user, who does not interact with the system, should perform. The specific application we consider is the prescription of drugs. In this case, the direct user is the prescriber and the indirect user is the person responsible for carrying out the therapy. Relevant characteristics of the two users are represented in two user models. Explanation strategies are represented in planning operators whose preconditions encode the cognitive state of the indirect user; this allows the message to be tailored to the indirect user's characteristics. Expansion of optional subgoals and selection among candidate operators are performed by applying decision criteria represented as metarules that negotiate between the direct and indirect users' views, also taking into account the context in which the explanation is provided. After the message has been generated, the direct user may ask to add or remove some items, or to change the message style. The system defends the indirect user's needs as far as possible by mentioning the rationale behind the generated message. If needed, the plan is repaired and the direct user model is revised accordingly, so that the system progressively learns to generate messages suited to the preferences of the people with whom it interacts.
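A hypothetical sketch of the operator-selection idea described above, with invented operator names, user attributes and a stand-in "metarule"; the actual system's representation and decision criteria are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    preconditions: dict          # required attributes of the indirect user
    detail_level: int            # used by the stand-in selection criterion below

@dataclass
class UserModel:
    attributes: dict = field(default_factory=dict)

def applicable(op, user):
    """An operator applies only if the user model satisfies its preconditions."""
    return all(user.attributes.get(k) == v for k, v in op.preconditions.items())

def select(operators, indirect_user, prefer_concise=True):
    """Stand-in 'metarule': pick the least (or most) detailed applicable operator."""
    candidates = [op for op in operators if applicable(op, indirect_user)]
    if not candidates:
        return None
    pick = min if prefer_concise else max
    return pick(candidates, key=lambda op: op.detail_level)

ops = [
    Operator("explain-dose-simply", {"medical_knowledge": "low"}, detail_level=1),
    Operator("explain-dose-technically", {"medical_knowledge": "high"}, detail_level=3),
]
patient = UserModel({"medical_knowledge": "low", "age_group": "elderly"})
print(select(ops, patient).name)   # -> explain-dose-simply
```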
Abstract:
A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given and adaptive mesh refinement strategies are presented. The refinement criterion is an a posteriori error estimator based on stratification, shear and distance to the surface. The model's performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the model to follow internal structures of the flow and paves the way for truly generalized vertical coordinates.
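As a rough illustration of an indicator built from the three ingredients named above (stratification, shear and distance to the surface), the sketch below uses an assumed functional form and weights; it is not the paper's estimator or its finite element implementation.

```python
import numpy as np

def refinement_indicator(z, density, u, v, rho0=1025.0, g=9.81,
                         w_strat=1.0, w_shear=1.0, w_surf=1.0):
    """z: depths (negative downward); all fields defined at the points z."""
    n2 = -(g / rho0) * np.gradient(density, z)              # buoyancy frequency squared
    s2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2    # vertical shear squared
    surf = 1.0 / (1.0 + np.abs(z))                          # decays with distance to surface
    return w_strat * np.abs(n2) + w_shear * s2 + w_surf * surf

z = np.linspace(0.0, -50.0, 51)                    # 1 m spacing over 50 m of depth
rho = 1025.0 + 0.02 * np.maximum(0.0, -z - 10.0)   # mixed layer over a stratified interior
u = 0.1 * np.exp(z / 10.0)                         # stress-driven shear near the surface
indicator = refinement_indicator(z, rho, u, np.zeros_like(z))
print("strongest refinement demand at depth", z[indicator.argmax()], "m")
```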
Abstract:
Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised, for example western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically adaptive meshes has many potential advantages but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilising an adjoint or goal-based method, is described here. This method is based upon a functional encompassing important features of the flow structure. The sensitivity of this functional with respect to the solution variables is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure that areas of fine mesh resolution are used only where and when they are required.
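The weighting of a local error estimate by the sensitivity of a goal functional can be sketched as follows; the one-dimensional setting, field names and threshold are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def goal_based_indicator(local_residual, adjoint_sensitivity):
    """Element-wise indicator: local residual weighted by |dJ/du|."""
    return np.abs(adjoint_sensitivity) * np.abs(local_residual)

# A residual of the same size everywhere is only flagged where the functional
# (for example, kinetic energy in a western boundary current) is sensitive.
residual = np.full(100, 1.0e-3)
sensitivity = np.exp(-np.linspace(0.0, 10.0, 100))   # sensitive near the boundary
eta = goal_based_indicator(residual, sensitivity)
refine = eta > 0.1 * eta.max()
print("elements flagged for refinement:", int(refine.sum()))
```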
Abstract:
New conceptual ideas on network architectures have been proposed in the recent past. Current store-and-forward routers are replaced by active intermediate systems, which are able to perform computations on transient packets, in a way that proves very helpful for developing and deploying new protocols in a short time. This paper introduces a new routing algorithm, based on a congestion metric and inspired by the behavior of ants in nature. The use of the Active Networks paradigm, combined with a cooperative learning environment, produces a robust, decentralized algorithm capable of adapting quickly to changing conditions.
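A much-simplified sketch of the ant-inspired, congestion-aware routing idea (not the paper's algorithm): per-destination pheromone tables are evaporated and then reinforced in inverse proportion to the congestion an ant observed, and next hops are chosen with probability proportional to pheromone. Node names and parameter values are invented.

```python
import random

def reinforce(pheromone, path, congestion, evaporation=0.1):
    """Backward update along a completed path: pheromone[node][dest][next_hop] -> float."""
    dest = path[-1]
    reward = 1.0 / (1.0 + congestion)                # congested paths earn less reinforcement
    for node, next_hop in zip(path, path[1:]):
        table = pheromone.get(node, {}).get(dest)
        if table is None:                            # this node keeps no table for dest
            continue
        for hop in table:                            # evaporate all entries ...
            table[hop] *= (1.0 - evaporation)
        table[next_hop] += reward                    # ... then deposit on the used link

def choose_next_hop(pheromone, node, dest):
    """Forward ants pick the next hop with probability proportional to pheromone."""
    table = pheromone[node][dest]
    total = sum(table.values())
    return random.choices(list(table), weights=[v / total for v in table.values()])[0]

# Two candidate next hops from node "A" towards destination "D":
pheromone = {"A": {"D": {"B": 1.0, "C": 1.0}}}
reinforce(pheromone, ["A", "B", "D"], congestion=0.2)   # lightly congested route via B
reinforce(pheromone, ["A", "C", "D"], congestion=5.0)   # heavily congested route via C
print(pheromone["A"]["D"])                              # B now attracts most of the traffic
print(choose_next_hop(pheromone, "A", "D"))
```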
Abstract:
Periods between predator detection and an escape response (escape delays) by prey upon attack by a predator often arise because animals trade off the benefits such a delay gives for assessing risk accurately against the costs of not escaping as quickly as possible. We tested whether freezing behaviour (complete immobility in a previously foraging bird) observed in chaffinches before escaping from an approaching potential threat functions as a period of risk assessment, and whether information on predator identity is gained even when the time available is very short. We flew either a model of a sparrowhawk (predator) or a woodpigeon (no threat) at single chaffinches. Escape delays were significantly shorter with the hawk, except when a model first appeared close to the chaffinch. Chaffinches were significantly more vigilant when they resumed feeding after exposure to the sparrowhawk compared to the woodpigeon, showing that they were able to distinguish between threats, and this applied even when the time available for assessment was short (an average of 0.29 s). Our results show that freezing in chaffinches functions as an effective, economic risk-assessment period, and that threat information is gained even when very short periods of time are available during an attack.
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between the test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of the correlation from data collected as part of the trial. An adaptive approach that makes use of these formulas is proposed and evaluated, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
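The paper's formulas for the correlation between the monitored test statistics are not reproduced here; the sketch below only illustrates the generic step of estimating a correlation from paired efficacy and safety responses accumulated by an interim analysis, on simulated data.

```python
import numpy as np

def interim_correlation(efficacy, safety):
    """Sample correlation of the paired responses observed so far."""
    efficacy = np.asarray(efficacy, dtype=float)
    safety = np.asarray(safety, dtype=float)
    return float(np.corrcoef(efficacy, safety)[0, 1])

# Simulated paired efficacy/safety responses available at an interim analysis.
rng = np.random.default_rng(0)
true_rho = 0.6
data = rng.multivariate_normal([0.0, 0.0], [[1.0, true_rho], [true_rho, 1.0]], size=80)
rho_hat = interim_correlation(data[:, 0], data[:, 1])
print(f"estimated correlation at interim: {rho_hat:.2f}")
# The estimate would then replace a conservatively low prespecified value when
# the joint stopping boundaries for the remaining analyses are recomputed.
```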
Abstract:
There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.
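A rough sketch of the treatment-selection idea only, on simulated normal data with invented effect sizes; the design's efficient-score statistics and adaptive group-sequential boundaries for the monitored order statistic are not reproduced.

```python
import numpy as np

def stage_statistics(arms, control_mean, rng, n_per_arm=50):
    """Z-type statistics comparing each surviving experimental arm with control."""
    x0 = rng.normal(control_mean, 1.0, n_per_arm)
    stats = {}
    for name, mu in arms.items():
        x = rng.normal(mu, 1.0, n_per_arm)
        se = np.sqrt(x.var(ddof=1) / n_per_arm + x0.var(ddof=1) / n_per_arm)
        stats[name] = (x.mean() - x0.mean()) / se
    return stats

rng = np.random.default_rng(1)
arms = {"A": 0.1, "B": 0.3, "C": 0.5}                  # invented true effects vs control
for k in range(1, 4):                                  # three stages
    stats = stage_statistics(arms, 0.0, rng)
    ranked = sorted(stats.items(), key=lambda kv: kv[1], reverse=True)
    print(f"stage {k}: " + ", ".join(f"{a}={z:.2f}" for a, z in ranked))
    if len(arms) > 1:                                  # drop the least promising arm
        arms.pop(ranked[-1][0])
```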
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
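As an illustration of why an adaptive combination of stage-wise evidence need not be based on a sufficient statistic, here is a sketch of one commonly used device, the weighted inverse-normal combination of stage-wise p-values; the specific adaptive and group-sequential designs compared in the paper are not reproduced.

```python
from scipy.stats import norm

def inverse_normal_combination(p_values, weights):
    """Combine independent stage-wise one-sided p-values into one Z statistic."""
    assert abs(sum(w * w for w in weights) - 1.0) < 1e-9, "weights must satisfy sum(w^2) = 1"
    return sum(w * norm.isf(p) for p, w in zip(p_values, weights))

# Equal-information weights for two stages; the prespecified weights are kept
# even if the second-stage sample size is modified at the interim analysis.
z = inverse_normal_combination([0.04, 0.03], [0.5 ** 0.5, 0.5 ** 0.5])
print(f"combined Z = {z:.2f}, one-sided p = {norm.sf(z):.4f}")
```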
Abstract:
Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of Cappe et al. (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm.
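A minimal sketch of the ABC population Monte Carlo idea on a toy one-dimensional Gaussian-mean problem, with an adaptively scaled Gaussian perturbation kernel and importance weights; the population-genetics application and the paper's exact algorithm are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
data = rng.normal(2.0, 1.0, 50)                      # observed data, true mean = 2
obs = data.mean()                                    # summary statistic

def simulate(theta):
    """Simulate a data set of the same size and return its summary statistic."""
    return rng.normal(theta, 1.0, data.size).mean()

prior = norm(0.0, 10.0)                              # wide prior on the mean
tolerances = [1.0, 0.5, 0.2]
n = 300

# Stage 1: plain rejection ABC from the prior.
draws = rng.normal(0.0, 10.0, 50 * n)
theta = np.array([t for t in draws if abs(simulate(t) - obs) < tolerances[0]])[:n]
w = np.full(theta.size, 1.0 / theta.size)

# Later stages: perturb with an adaptively scaled kernel and reweight.
for eps in tolerances[1:]:
    tau = np.sqrt(2.0 * np.cov(theta, aweights=w))   # adaptive kernel scale
    accepted = []
    while len(accepted) < n:
        cand = rng.normal(rng.choice(theta, p=w), tau)
        if abs(simulate(cand) - obs) < eps:
            accepted.append(cand)
    new_theta = np.array(accepted)
    dens = norm.pdf((new_theta[:, None] - theta[None, :]) / tau) / tau
    w = prior.pdf(new_theta) / (dens * w).sum(axis=1)  # importance weights
    w /= w.sum()
    theta = new_theta

print(f"weighted posterior mean ≈ {np.average(theta, weights=w):.2f}")
```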