802 results for interval-valued games
Abstract:
Standard methods for the estimation of the postmortem interval (PMI, time since death), based on the cooling of the corpse, are limited to about 48 h after death. As an alternative, noninvasive postmortem observation of alterations of brain metabolites by means of ¹H MRS has been suggested for an estimation of the PMI at room temperature, so far without including the effect of other ambient temperatures. In order to study the temperature effect, localized ¹H MRS was used to follow brain decomposition in a sheep brain model at four different temperatures between 4 and 26°C, with repeated measurements up to 2100 h postmortem. The simultaneous determination of 25 different biochemical compounds at each measurement allowed the time courses of concentration changes to be followed. A sudden and almost simultaneous change of the concentrations of seven compounds was observed after a time span that decreased exponentially from 700 h at 4°C to 30 h at 26°C ambient temperature. As this most probably represents the onset of highly variable bacterial decomposition, and thus defines the upper limit for a reliable PMI estimation, data were analyzed only up to this start of bacterial decomposition. Thirteen compounds showed unequivocal, reproducible concentration changes during this period, and eight of them showed a linear increase with a slope that was unambiguously related to ambient temperature. Therefore, a single analytical function with PMI and temperature as variables can describe the time courses of metabolite concentrations. Using the inverse of this function, metabolite concentrations determined from a single MR spectrum can be used, together with known ambient temperatures, to calculate the PMI of a corpse. It is concluded that the effect of ambient temperature can be reliably included in the PMI determination by ¹H MRS.
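The abstract does not reproduce the analytical function itself; as a rough illustration of the inversion step only, the sketch below assumes a hypothetical linear model c(t, T) = c0 + (a + b·T)·t, consistent with the reported linear increases whose slopes depend on ambient temperature, and solves it for the PMI t. The constants and the function name are invented for the example:

```python
# Hypothetical sketch of PMI inversion from one metabolite concentration.
# The paper's actual analytical function is not given in the abstract; here
# we assume a linear model c(t, T) = c0 + (a + b*T) * t, consistent with the
# reported linear increase whose slope depends on ambient temperature T.

def pmi_from_concentration(c, T, c0=1.0, a=0.002, b=0.0008):
    """Invert c = c0 + (a + b*T) * t for the postmortem interval t (hours).

    c        : measured metabolite concentration (arbitrary units)
    T        : ambient temperature in degrees Celsius
    c0, a, b : hypothetical calibration constants (not from the paper)
    """
    slope = a + b * T
    if slope <= 0:
        raise ValueError("temperature-dependent slope must be positive")
    return (c - c0) / slope

# Example: concentration 1.5 units at 10 degrees C ambient temperature
print(pmi_from_concentration(1.5, 10.0))  # -> 50.0 hours under these constants
```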
Abstract:
This article examines the role of domestic spaces and images in mid-nineteenth-century science writing for children. Analyses of John Mill’s The Fossil Spirit, A.L.O.E.’s Fairy Frisket, John Cargill Brough’s The Fairy Tales of Science, Annie Carey’s “Autobiography of a Lump of Coal,” and an assortment of boxed games reveal a variety of ways in which overwhelming scientific concepts are domesticated. Moreover, juvenile science literature contributes this appeasing domestication to the broader scientific discourse, consistently framing natural history in terms of human experience.
Abstract:
We carry out some computations of vector-valued Siegel modular forms of degree two, weight (k, 2) and level one, and highlight three experimental results: (1) we identify a rational eigenform in a three-dimensional space of cusp forms; (2) we observe that non-cuspidal eigenforms of level one are not always rational; (3) we verify a number of cases of conjectures about congruences between classical modular forms and Siegel modular forms. Our approach is based on Satoh's description of the module of vector-valued Siegel modular forms of weight (k, 2) and an explicit description of the Hecke action on Fourier expansions.
Abstract:
Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
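The paper's model is a spiking population following the stochastic reward gradient; a non-spiking stand-in for that idea is plain REINFORCE on Bernoulli policies in a two-player zero-sum game. The sketch below uses matching pennies (whose unique Nash equilibrium is the mixed 50/50 strategy) rather than the inspector game, and all payoffs and learning rates are assumptions:

```python
import numpy as np

# Minimal REINFORCE sketch for matching pennies, whose unique Nash equilibrium
# is mixed (both players randomize 50/50). This is a simplified stand-in for
# the paper's spiking population model: each player keeps one logit and
# follows the stochastic reward gradient.

rng = np.random.default_rng(0)
payoff = np.array([[1, -1], [-1, 1]])   # row player's payoff; zero-sum game

theta = np.zeros(2)                     # one Bernoulli logit per player
eta = 0.05                              # learning rate (assumed)
freq = np.zeros(2)                      # running count of action 1 per player
n_steps = 50_000

for _ in range(n_steps):
    p = 1.0 / (1.0 + np.exp(-theta))        # P(action = 1) for each player
    acts = (rng.random(2) < p).astype(int)
    r = payoff[acts[0], acts[1]]
    rewards = np.array([r, -r])
    theta += eta * rewards * (acts - p)     # REINFORCE: grad log pi = a - p
    freq += acts

# Instantaneous policies tend to cycle around the equilibrium; the
# time-averaged play approaches the 50/50 mixed Nash strategy.
print(freq / n_steps)
```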
Abstract:
Content Addressable Memory (CAM) is a special type of Complementary Metal-Oxide-Semiconductor (CMOS) storage element that allows for a parallel search operation on a memory stack, in addition to the read and write operations provided by a conventional SRAM storage array. In practice, it is often desirable to be able to store a “don’t care” state for faster search operation. However, commercially available CAM chips are forced to accomplish this functionality by including two binary memory storage elements per CAM cell, which is a waste of precious area and power resources. This research presents a novel CAM circuit that achieves the “don’t care” functionality with a single ternary memory storage element. Using the recent development of multiple-voltage-threshold (MVT) CMOS transistors, the functionality of the proposed circuit is validated and characteristics for performance, power consumption, noise immunity, and silicon area are presented. This work presents the following contributions to the field of CAM and ternary-valued logic:
• We present a novel Simple Ternary Inverter (STI) transistor geometry scheme for achieving ternary-valued functionality in existing SOI-CMOS 0.18 µm processes.
• We present a novel Ternary Content Addressable Memory based on Three-Valued Logic (3CAM) as a single-storage-element CAM cell with “don’t care” functionality.
• We explore the application of macro partitioning schemes to our proposed 3CAM array to observe the benefits and tradeoffs of architecture design in the context of power, delay, and area.
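In software terms, the “don’t care” semantics the circuit implements can be modeled directly: a stored ternary word matches a search key when every cell is either 'X' or equal to the corresponding key bit. The sketch below is a behavioral Python model of that search semantics, not the proposed 3CAM circuit; hardware compares all words in parallel, while this model scans them in a loop.

```python
# Behavioral sketch of a ternary CAM search (not the proposed 3CAM circuit):
# each stored cell is '0', '1', or 'X' (don't care). A search key matches a
# word if every non-X cell equals the corresponding key bit. In hardware all
# words are compared in parallel; this software model scans them in a loop.

def tcam_search(words, key):
    """Return the indices of all stored words matching the search key."""
    hits = []
    for i, word in enumerate(words):
        if all(cell in ('X', bit) for cell, bit in zip(word, key)):
            hits.append(i)
    return hits

table = ["10X1", "1XX0", "0101"]
print(tcam_search(table, "1001"))  # -> [0], since '10X1' matches '1001'
```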
Abstract:
We evaluated the association of the QT interval corrected for heart rate (QTc) and resting heart rate (rHR) with mortality (all causes, cardiovascular, cardiac, and ischaemic heart disease) in subjects with type 1 and type 2 diabetes.
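The abstract does not say which heart-rate correction defines QTc; Bazett's formula, QTc = QT / √RR, is the most common choice and is shown here purely as an illustration (the function name and example values are invented):

```python
import math

# The abstract does not state which heart-rate correction was used; Bazett's
# formula (QTc = QT / sqrt(RR)) is the most common choice and is shown here
# only as an illustration of QT correction.

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Heart-rate-corrected QT interval (ms) via Bazett's formula."""
    rr_seconds = 60.0 / heart_rate_bpm   # RR interval in seconds
    return qt_ms / math.sqrt(rr_seconds)

print(qtc_bazett(400.0, 75.0))  # e.g. QT = 400 ms at 75 bpm -> ~447 ms
```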
Abstract:
In biostatistical applications interest often focuses on the estimation of the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, then the data structure is called interval censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) by modeling parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by only modeling a univariate hazard of the pooled monitoring times, conditional on the follow-up time, T, and the covariate process, which can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times only depend on T through the observed covariate process. We introduce inverse probability of censoring weighted (IPCW) estimators of the distribution of T, and of smooth functionals thereof, which are guaranteed to be consistent and asymptotically normal if we have available correctly specified semiparametric models for the two hazards of the censoring process. Furthermore, given such correctly specified models for these hazards of the censoring process, we propose a one-step estimator, which will improve on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and which remains consistent and asymptotically normal if this latter working model is misspecified. It is shown that the one-step estimator is efficient if each subject is monitored at most once and the working model contains the truth. In general, it is shown that the one-step estimator optimally uses the surrogate information if the working model contains the truth. It is not optimal in using the interval information provided by the current status indicators at the monitoring times, but simulations in Peterson and van der Laan (1997) show that the efficiency loss is small.
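The paper's estimators target interval-censored data with covariate processes and modeled censoring hazards, which does not fit in a few lines. As a minimal illustration of the inverse-weighting idea only, the sketch below computes an IPCW estimate of F(t) = P(T ≤ t) under simple right censoring with a censoring survival function G assumed known; the simulation rates and function names are invented for the example.

```python
import numpy as np

# Minimal IPCW sketch for estimating F(t) = P(T <= t) under right censoring,
# assuming the censoring survival function G(t) = P(C > t) is known (here
# exponential censoring with a chosen rate). The paper's setting -- interval
# censoring with time-dependent covariates and modeled censoring hazards --
# is far richer; this only illustrates the inverse-weighting idea.

rng = np.random.default_rng(1)
n = 5000
T = rng.exponential(scale=2.0, size=n)    # event times (simulated)
C = rng.exponential(scale=3.0, size=n)    # censoring times (simulated)
X = np.minimum(T, C)                      # observed follow-up time
delta = (T <= C).astype(float)            # 1 if the event was observed

def G(t):
    return np.exp(-t / 3.0)               # true censoring survival P(C > t)

def F_ipcw(t):
    # Complete cases with T_i <= t, each weighted by 1 / G(T_i)
    return np.mean(delta * (X <= t) / G(X))

print(F_ipcw(2.0), 1 - np.exp(-2.0 / 2.0))  # IPCW estimate vs. true F(2)
```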
Abstract:
In this paper we propose methods for smooth hazard estimation of a time variable where that variable is interval censored. These methods allow one to model the transformed hazard in terms of either smooth (smoothing splines) or linear functions of time and other relevant time varying predictor variables. We illustrate the use of this method on a dataset of hemophiliacs where the outcome, time to seroconversion for HIV, is interval censored and left-truncated.
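The transformed-hazard approach can be sketched in its simplest "linear in time" form: take the log-hazard linear in t and maximize the interval-censored likelihood, in which each subject contributes P(L < T ≤ R) = S(L) − S(R). The smoothing-spline variant and the left-truncation adjustment used for the hemophiliac data are omitted; the model form, constants, and simulated data here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of parametric hazard fitting for interval-censored data,
# using the "linear in time" variant: log-hazard linear in t, i.e.
# lambda(t) = exp(b0 + b1*t). The paper's smoothing-spline version replaces
# the linear term with spline basis functions; that machinery is omitted.
# Each subject contributes P(L < T <= R) = S(L) - S(R) to the likelihood.

def cum_hazard(t, b0, b1):
    # Integral of exp(b0 + b1*u) du from 0 to t (b1 = 0 handled separately)
    if abs(b1) < 1e-8:
        return np.exp(b0) * t
    return np.exp(b0) * (np.exp(b1 * t) - 1.0) / b1

def neg_log_lik(params, L, R):
    b0, b1 = params
    S_L = np.exp(-np.array([cum_hazard(l, b0, b1) for l in L]))
    S_R = np.exp(-np.array([cum_hazard(r, b0, b1) for r in R]))
    return -np.sum(np.log(np.clip(S_L - S_R, 1e-12, None)))

# Simulated interval-censored data: true T ~ Exponential(1), each T only
# known to lie between consecutive integer "monitoring" times.
rng = np.random.default_rng(2)
T = rng.exponential(size=500)
L, R = np.floor(T), np.floor(T) + 1.0

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(L, R), method="Nelder-Mead")
print(fit.x)  # b0 and b1 near 0 recover the constant unit hazard
```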
Abstract:
In this paper, the NPMLE in the one-dimensional line segment problem is defined and studied, where line segments on the real line through two non-overlapping intervals are observed. The self-consistency equations for the NPMLE are defined and a quick algorithm for solving them is provided. Supnorm weak convergence to a Gaussian process and efficiency of the NPMLE is proved. The problem has a strong geological application in the study of the lifespan of species.
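The line-segment NPMLE and its self-consistency equations are specialized, but the generic self-consistency fixed point for interval-type observations (as in Turnbull's algorithm) conveys the idea: distribute each observation's mass over the support points its interval admits, in proportion to the current estimate, and iterate. The sketch below is that generic iteration, not the paper's line-segment algorithm; the support points and intervals are made up.

```python
import numpy as np

# Sketch of a self-consistency (EM-type) iteration of the kind used for
# NPMLEs with interval-type observations (Turnbull's algorithm). The paper's
# line-segment problem is more specialized; this shows the generic fixed
# point: spread each observation's mass over the candidate support points
# its interval allows, in proportion to the current estimate, re-normalize,
# and repeat until the estimate stops changing.

def self_consistency(intervals, support, n_iter=500):
    A = np.array([[l <= s <= r for s in support] for l, r in intervals],
                 dtype=float)                      # admissibility matrix
    p = np.full(len(support), 1.0 / len(support))  # initial mass function
    for _ in range(n_iter):
        denom = A @ p                              # prob. of each interval
        p = (A * p).T @ (1.0 / denom) / len(intervals)
    return p

intervals = [(0, 1), (0, 2), (1, 3), (2, 3)]
support = [0.5, 1.5, 2.5]
print(self_consistency(intervals, support))        # masses sum to 1
```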