897 results for Minkowski metric
Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics, such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error, according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
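The core idea lends itself to a short sketch (a toy illustration under our own naming, not the paper's implementation; the brute-force search over permutations is only viable for very short series):

```python
from itertools import permutations

def adjusted_error(forecast, actual, w=1):
    # Find the permutation of forecast values, displacing each value by at
    # most w time steps, that minimises the mean absolute error against the
    # observations. Brute force over all permutations: illustrative only.
    n = len(forecast)
    best = float("inf")
    for perm in permutations(range(n)):
        if any(abs(i - j) > w for i, j in enumerate(perm)):
            continue  # violates the displacement restriction
        err = sum(abs(forecast[j] - actual[i]) for i, j in enumerate(perm)) / n
        best = min(best, err)
    return best

# A demand spike forecast one half-hour late: point-wise MAE penalises it
# twice (a miss plus a false alarm); the adjusted error forgives the shift.
actual   = [0, 0, 5, 0, 0]
forecast = [0, 0, 0, 5, 0]
mae = sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)
print(mae)                               # 2.0
print(adjusted_error(forecast, actual))  # 0.0
```

With `w=0` only the identity permutation is allowed and the measure collapses back to the plain MAE, which is the sense in which the restriction interpolates between point-wise and displacement-tolerant scoring.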
Abstract:
The psychometric properties of scores from the Achievement Goal Questionnaire were examined in samples of Japanese (N = 326) and Canadian (N = 307) post-secondary students. Previous research found evidence of a four-factor structure of achievement goals in U.S. samples. Using confirmatory factor-analytic techniques, the authors found strong evidence for the four-factor structure of achievement goals in both the Canadian and Japanese populations. Subsequent multigroup structural equation modeling indicated the metric invariance of this four-factor structure across the two populations.
Abstract:
Consider the massless Dirac operator on a 3-torus equipped with Euclidean metric and standard spin structure. It is known that the eigenvalues can be calculated explicitly: the spectrum is symmetric about zero and zero itself is a double eigenvalue. The aim of the paper is to develop a perturbation theory for the eigenvalue with smallest modulus with respect to perturbations of the metric. Here the application of perturbation techniques is hindered by the fact that eigenvalues of the massless Dirac operator have even multiplicity, which is a consequence of this operator commuting with the antilinear operator of charge conjugation (a peculiar feature of dimension 3). We derive an asymptotic formula for the eigenvalue with smallest modulus for arbitrary perturbations of the metric and present two particular families of Riemannian metrics for which the eigenvalue with smallest modulus can be evaluated explicitly. We also establish a relation between our asymptotic formula and the eta invariant.
Abstract:
Over the last decade, due to the Gravity Recovery And Climate Experiment (GRACE) mission and, more recently, the Gravity and steady-state Ocean Circulation Explorer (GOCE) mission, our ability to measure the ocean’s mean dynamic topography (MDT) from space has improved dramatically. Here we use GOCE to measure surface current speeds in the North Atlantic and compare our results with a range of independent estimates that use drifter data to improve small scales. We find that, with filtering, GOCE can recover 70% of the Gulf Stream strength relative to the best drifter-based estimates. In the subpolar gyre the boundary currents obtained from GOCE are close to the drifter-based estimates. Crucial to this result is careful filtering, which is required to remove small-scale errors, or noise, in the computed surface. We show that our heuristic noise metric, used to determine the degree of filtering, compares well with the quadratic sum of mean sea surface and formal geoid errors obtained from the error variance–covariance matrix associated with the GOCE gravity model. At a resolution of 100 km the North Atlantic mean GOCE MDT error before filtering is 5 cm, with almost all of this coming from the GOCE gravity model.
Abstract:
We present here a straightforward method which can be used to obtain a quantitative indication of an individual research output for an academic. Different versions, selections and options are presented to enable a user to easily calculate values both for stand-alone papers and overall for the collection of outputs for a person. The procedure is particularly useful as a metric to give a quantitative indication of the research output of a person over a time window. Examples are included to show how the method works in practice and how it compares to alternative techniques.
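For a concrete point of reference, one familiar metric of this kind is Hirsch's h-index; the sketch below shows that baseline only, and the procedure the abstract itself proposes may well differ:

```python
def h_index(citations):
    # h-index: the largest h such that h of the author's papers have at
    # least h citations each. Shown as a familiar baseline for comparing
    # research-output metrics, not as the method the abstract proposes.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([0, 0]))            # 0
```

Restricting the input list to papers published within a given window gives the kind of time-windowed output indication the abstract describes.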
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, influencing ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. This modelling chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is efficiently calculated by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. 
Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
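The final step, using the climate cost functions to score candidate trajectories, can be caricatured as a lookup-and-sum (the names, the toy grid, and the dictionary interface are our illustration, not the EMAC/SAAM/AEM interfaces):

```python
def trajectory_climate_cost(waypoints, ccf):
    # Sum the climate cost function over a trajectory: ccf maps a
    # (lat, lon, alt, time) grid point to the global climate impact of a
    # unit emission released there, as produced by the modelling chain.
    return sum(ccf[wp] for wp in waypoints)

# Two hypothetical grid points over a toy cost field: the trajectory
# optimiser would trade a longer route for avoiding the climate-sensitive
# point with the higher cost.
ccf = {("50N", "30W", "FL350", "06:00"): 5.0,
       ("52N", "30W", "FL350", "06:00"): 1.0}
print(trajectory_climate_cost([("52N", "30W", "FL350", "06:00")], ccf))  # 1.0
```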
Abstract:
Persistent contrails are an important climate impact of aviation which could potentially be reduced by re-routing aircraft to avoid contrailing; however, this generally increases both the flight length and its corresponding CO2 emissions. Here, we provide a simple framework to assess the trade-off between the climate impact of CO2 emissions and contrails for a single flight, in terms of the absolute global warming potential and absolute global temperature potential metrics for time horizons of 20, 50 and 100 years. We use the framework to illustrate the maximum extra distance (with no altitude changes) that can be added to a flight and still reduce its overall climate impact. Small aircraft can fly up to four times further to avoid contrailing than large aircraft. The results have a strong dependence on the applied metric and time horizon. Applying a conservative estimate of the uncertainty in the contrail radiative forcing and climate efficacy leads to a factor of 20 difference in the maximum extra distance that could be flown to avoid a contrail. The impact of re-routing on other climatically important aviation emissions could also be considered in this framework.
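The break-even logic of the trade-off can be sketched in a few lines; every numeric value below is illustrative only and not taken from the paper:

```python
def max_extra_distance(contrail_km, agwp_contrail_per_km,
                       fuel_kg_per_km, ei_co2, agwp_co2_per_kg):
    # Re-routing is climate-neutral when the CO2 impact of the extra
    # distance equals the avoided contrail impact; solve for that distance.
    avoided = contrail_km * agwp_contrail_per_km
    added_per_km = fuel_kg_per_km * ei_co2 * agwp_co2_per_kg
    return avoided / added_per_km

# Same avoided contrail, but the small aircraft burns a quarter of the
# fuel per km of the large one, so it can detour four times further
# before the extra CO2 outweighs the avoided contrail.
small = max_extra_distance(100, 2e-13, 3.0, 3.16, 1e-15)
large = max_extra_distance(100, 2e-13, 12.0, 3.16, 1e-15)
print(round(small / large, 6))  # 4.0
```

The strong metric and time-horizon dependence reported in the abstract enters through the two AGWP (or AGTP) parameters, which change with the chosen horizon.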
Abstract:
Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared to previously published work studying the effect of a non-Gaussian prior. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The degree of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
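In the fully Gaussian scalar case with a unit observation operator, the three impact measures the abstract lists all have standard closed forms (textbook identities, shown here for orientation rather than taken from the paper):

```python
import math

def gaussian_obs_impact(prior_var, obs_var):
    # Scalar Gaussian data assimilation with unit observation operator.
    a = 1.0 / (1.0 / prior_var + 1.0 / obs_var)  # analysis error variance
    sensitivity = a / obs_var                    # = Kalman gain K
    mutual_info = 0.5 * math.log(prior_var / a)  # information gained
    return a, sensitivity, mutual_info

# Equally trusted prior and observation: the analysis splits the weight
# evenly, and the observation contributes half a nat of information.
a, s, mi = gaussian_obs_impact(prior_var=1.0, obs_var=1.0)
print(a, s)  # 0.5 0.5
```

The relative entropy additionally depends on the realised innovation, which is why it is omitted from this sketch; the point of the paper is precisely how these quantities deviate from such closed forms once either the prior or the likelihood is non-Gaussian.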
Abstract:
The DTZ metric indicates the minimaxed 'Depth To Zeroing of the ply-count' for decisive positions. Ronald de Man's DTZ50' metric is a variant of the DTZ metric as moderated by the FIDE 50-move draw-claim rule. DTZ50'-depths are given to '50-move-rule draws' as well as to unconditionally decisive positions. This note defines a two-dimensional taxonomy of positions implicitly defined by DTZ50'. 'Decisive' positions may have values of (wins/losses) v =1/-1 or v = 2/-2. A position's depth in the new DTZ50' metric may be greater than, equal to or less than its DTZ depth. The six parts of the taxonomy are examined in detail, and illustrated by some 40 positions and 16 lines. Positions, lines and the annotation of these lines are supplied in the ancillary data files.
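The six-part structure follows from crossing the two value magnitudes with the three possible depth comparisons; a minimal classifier over that 2 × 3 grid (our own sketch, not the note's tooling):

```python
def dtz_taxonomy(value, dtz_depth, dtz50_depth):
    # Classify a decisive position by |v| (1 or 2) and by whether its
    # DTZ50' depth is greater than, equal to or less than its DTZ depth:
    # 2 values x 3 comparisons = the six parts of the taxonomy.
    if abs(value) not in (1, 2):
        raise ValueError("decisive positions have v = +/-1 or +/-2")
    cmp = ">" if dtz50_depth > dtz_depth else \
          "=" if dtz50_depth == dtz_depth else "<"
    return (abs(value), cmp)

print(dtz_taxonomy(1, 10, 10))   # (1, '=')
print(dtz_taxonomy(-2, 12, 30))  # (2, '>')
```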
Abstract:
Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies, and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979-2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
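The comparison metric itself is the ordinary RMSE of each technique's field against the reanalysis reference; a generic sketch (not the study's code, and with made-up anomaly values):

```python
import math

def rmse(estimates, reference):
    # Root Mean Square Error of a technique's anomaly estimates against
    # the reanalysis reference field at matching grid points.
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(reference))

# Two hypothetical techniques scored against the same reference anomalies:
reference    = [0.2, 1.1, -0.4, 0.8]
interpolated = [0.3, 1.0, -0.5, 0.9]  # e.g. a kriging-style estimate
noninterp    = [0.0, 0.0,  0.0, 0.0]  # a non-interpolating fallback
print(rmse(interpolated, reference) < rmse(noninterp, reference))  # True
```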
Abstract:
Animal models are invaluable tools which allow us to investigate the microbiome-host dialogue. However, experimental design introduces biases in the data that we collect, potentially also leading to biased conclusions. With obesity at pandemic levels, animal models of this disease have been developed; we investigated the role of experimental design in one such rodent model. We used 454 pyrosequencing to profile the faecal bacteria of obese (n = 6) and lean (homozygous n = 6; heterozygous n = 6) Zucker rats over a 10 week period, maintained in mixed-genotype cages, to further understand the relationships between the composition of the intestinal bacteria and age, obesity progression, genetic background and cage environment. Phylogenetic and taxon-based univariate and multivariate analyses (non-metric multidimensional scaling, principal component analysis) showed that age was the most significant source of variation in the composition of the faecal microbiota. Second to this, cage environment was found to clearly impact the composition of the faecal microbiota, with samples from animals within the same cage showing high community structure concordance, but large differences seen between cages. Importantly, the genetically induced obese phenotype was not found to impact the faecal bacterial profiles. These findings demonstrate that age and the local cage environment were driving the composition of the faecal bacteria and were more deterministically important than the host genotype. These findings have major implications for understanding the significance of functional metagenomic data in experimental studies and beg the question: what is being measured in animal experiments in which different strains are housed separately, nature or nurture?
Abstract:
Satellite-based (e.g., Synthetic Aperture Radar [SAR]) water level observations (WLOs) of the floodplain can be sequentially assimilated into a hydrodynamic model to decrease forecast uncertainty. This has the potential to keep the forecast on track, so providing an Earth Observation (EO) based flood forecast system. However, the operational applicability of such a system for floods developed over river networks requires further testing. One of the promising techniques for assimilation in this field is the family of ensemble Kalman (EnKF) filters. These filters use a limited-size ensemble representation of the forecast error covariance matrix. This representation tends to develop spurious correlations as the forecast-assimilation cycle proceeds, which is a further complication for dealing with floods in either urban areas or river junctions in rural environments. Here we evaluate the assimilation of WLOs obtained from a sequence of real SAR overpasses (the X-band COSMO-Skymed constellation) in a case study. We show that a direct application of a global Ensemble Transform Kalman Filter (ETKF) suffers from filter divergence caused by spurious correlations. However, a spatially-based filter localization provides a substantial moderation in the development of the forecast error covariance matrix, directly improving the forecast and also making it possible to further benefit from a simultaneous online inflow error estimation and correction. Additionally, we propose and evaluate a novel along-network metric for filter localization, which is physically-meaningful for the flood over a network problem. Using this metric, we further evaluate the simultaneous estimation of channel friction and spatially-variable channel bathymetry, for which the filter seems able to converge simultaneously to sensible values. Results also indicate that friction is a second order effect in flood inundation models applied to gradually varied flow in large rivers. 
The study is not conclusive regarding whether in an operational situation the simultaneous estimation of friction and bathymetry helps the current forecast. Overall, the results indicate the feasibility of stand-alone EO-based operational flood forecasting.
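Localization of the kind described, with an along-network distance in place of the Euclidean one, can be sketched as a Schur-product taper on the ensemble covariance (the Gaussian taper and all names here are our stand-ins for e.g. a Gaspari-Cohn function):

```python
import math

def localized_covariance(cov, dist, loc_radius):
    # Schur (element-wise) localization: damp each covariance entry by a
    # function of the distance between its two locations, suppressing the
    # spurious long-range correlations a small ensemble produces.
    # dist[i][j] would hold the along-network (river-following) distance
    # between locations i and j rather than the straight-line distance.
    n = len(cov)
    return [[cov[i][j] * math.exp(-0.5 * (dist[i][j] / loc_radius) ** 2)
             for j in range(n)] for i in range(n)]

# Two gauges 50 km apart along the channel, localization radius 20 km:
# the diagonal survives intact, the cross-covariance is almost zeroed.
cov  = [[1.0, 0.8], [0.8, 1.0]]
dist = [[0.0, 50.0], [50.0, 0.0]]
loc  = localized_covariance(cov, dist, loc_radius=20.0)
print(loc[0][0])         # 1.0
print(loc[0][1] < 0.05)  # True
```

Using the along-network distance keeps two points on different branches of a junction "far apart" even when they are geographically close, which is the physical motivation given for the proposed metric.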
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigated this performance with increasing nonlinearity and using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close to linear dynamics, it has been shown that all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation window lengths, in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Of the hybrid methods, it is seen that under certain parameters, hybrid methods which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS, and the hybrid methods that do not use a climatological background error covariance matrix with QSVA, outperform all other methods due to the full flow dependency of the background error covariance matrix, which also allows for the most nonlinearity.
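A minimal sketch of the testbed model itself (the standard Lorenz 1963 equations with an RK4 integrator; the assimilation methods are not reproduced here):

```python
def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz (1963) model with standard parameters.
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt=0.01, f=lorenz63):
    # One fourth-order Runge-Kutta step.
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

# Trajectories stay on the bounded attractor even though nearby states
# diverge -- the nonlinearity that lengthening the assimilation window
# exposes in the experiments above.
state = (1.0, 1.0, 1.0)
for _ in range(2000):
    state = rk4_step(state)
print(all(abs(c) < 100 for c in state))  # True
```

Lengthening the assimilation window corresponds to integrating this model further between observations, which is how the experiments dial the nonlinearity up or down.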
Abstract:
Background Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and therefore, it is essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits. Methods Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant’s voice was recorded and face was photographed, and these were morphed respectively with voices and faces from unfamiliar identities. In the second session, participants performed a ‘self-identification’ task, classifying each morph as a ‘self’ voice (or face) or an ‘other’ voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. These two self-recognition metrics were tested for association with each other and with autistic traits. Results The fifty percent ‘self’ response was reached at a higher percentage of self in the auditory domain than in the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = −0.165, P = 0.204). A higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ_AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ_AQ = −0.020, P = 0.438).
Conclusions Our data shows that recognition bias for physical self-representation is not related across sensory modalities. Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. A narrow self-other overlap in the auditory domain seen in individuals with high autistic traits could arise due to enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
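The per-modality metric, the slope of the self-recognition curve, can be sketched as follows (a least-squares line over illustrative data; the study's actual curve fitting may differ, e.g. a logistic fit):

```python
def recognition_slope(morph_pct, p_self):
    # Slope of the psychometric curve relating the percentage of 'self'
    # in the morph to the proportion of 'self' responses, via an ordinary
    # least-squares line. A steeper slope = a sharper self/other boundary.
    n = len(morph_pct)
    mx = sum(morph_pct) / n
    my = sum(p_self) / n
    num = sum((x - mx) * (y - my) for x, y in zip(morph_pct, p_self))
    den = sum((x - mx) ** 2 for x in morph_pct)
    return num / den

# Illustrative response curves for two hypothetical participants:
steep   = recognition_slope([0, 25, 50, 75, 100], [0.0, 0.1, 0.5, 0.9, 1.0])
shallow = recognition_slope([0, 25, 50, 75, 100], [0.2, 0.35, 0.5, 0.65, 0.8])
print(steep > shallow)  # True: a sharper self/other boundary
```

Under this reading, the narrower self-other overlap reported for high-AQ individuals in the voice condition corresponds to a steeper slope for self-voice morphs.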
Abstract:
Background American mink forage on land and in water, with aquatic prey often constituting a large proportion of their diet. Their long, thin body shape and relatively poor insulation make them vulnerable to heat loss, particularly in water, yet some individuals dive over 100 times a day. At the level of individual dives, previous research found no difference in dive depth or duration, or the total number of dives per day between seasons, but mink did appear to make more dives per active hour in winter than in summer. There was also no difference in the depth or duration of individual dives between the sexes, but there was some evidence that females made more dives per day than males. However, because individual mink dives tend to be extremely short in duration, persistence (quantified as the number of consecutive dives performed) may be a more appropriate metric with which to compare diving behaviour under different scenarios. Results Mink performed up to 28 consecutive dives, and dived continually for up to 36 min. Periods of more loosely aggregated diving (termed ‘aquatic activity sessions’) comprised up to 80 dives, carried out over up to 162.8 min. Contrary to our predictions, persistence was inversely proportional to body weight, with small animals more persistent than large ones, and (for females, but not for males) increased with decreasing temperature. For both sexes, persistence was greater during the day than during the night. Conclusions The observed body weight effect may point to inter-sexual niche partitioning, since in mink the smallest animals are females and the largest are males. The results may equally point to individual specialisms, since persistence was also highly variable among individuals. Given the energetic costs involved, the extreme persistence of some animals observed in winter suggests that the costs of occasional prolonged activity in cold water are outweighed by the energetic gains.
Analysing dive persistence can provide information on an animal’s physical capabilities for performing multiple dives and may reveal how such behaviour is affected by different conditions. Further development of monitoring and biologging methodology to allow quantification of hunting success, and thus the rewards obtained under alternative scenarios, would be insightful.