984 results for Validated Computations


Relevance: 20.00%

Abstract:

Climate change during the last five decades has significantly impacted natural ecosystems, and the rate of current climate change is of great concern among conservation biologists. Species Distribution Models (SDMs) have been used widely to project changes in species’ bioclimatic envelopes under future climate scenarios. Here, we aimed to advance this technique by assessing future changes in the bioclimatic envelopes of an entire mammalian order, the Lagomorpha, using a novel framework for model validation based jointly on subjective expert evaluation and objective model evaluation statistics. SDMs were built using climatic, topographical and habitat variables for all 87 lagomorph species under past and current climate scenarios. Expert evaluation and Kappa values were used to validate past and current models, and only those deemed ‘modellable’ within our framework were projected under future climate scenarios (58 species). Phylogenetically-controlled regressions were used to test whether species traits correlated with predicted responses to climate change. Climate change is likely to impact more than two-thirds of lagomorph species, with leporids (rabbits, hares and jackrabbits) likely to undertake poleward shifts with little overall change in range extent, whilst pikas are likely to show extreme shifts to higher altitudes associated with marked range declines, including the likely extinction of Kozlov’s Pika (Ochotona koslowi). Smaller-bodied species were more likely to exhibit range contractions and elevational increases but showed little poleward movement, and fecund species were more likely to shift latitudinally and elevationally. Our results suggest that species traits may be important indicators of future climate change, and we believe multi-species approaches, as demonstrated here, are likely to lead to more effective mitigation measures and conservation management.
We strongly advocate studies minimising data gaps in our knowledge of the Order, specifically collecting more specimens for biodiversity archives and targeting data deficient geographic regions.

Relevance: 20.00%

Abstract:

As data analytics grow in importance, they are also quickly becoming one of the dominant application domains that require parallel processing. This paper investigates the applicability of OpenMP, the dominant shared-memory parallel programming model in high-performance computing, to the domain of data analytics. We contrast the performance and programmability of key data analytics benchmarks against Phoenix++, a state-of-the-art shared-memory map/reduce programming system. Our study shows that OpenMP outperforms the Phoenix++ system by a large margin for several benchmarks. In other cases, however, the programming model lacks support for this application domain.

Relevance: 20.00%

Abstract:

There is demand for an easily programmable, high-performance image processing platform based on FPGAs. In previous work, a novel high-performance processor, IPPro, was developed, and a Histogram of Oriented Gradients (HOG) algorithm study was undertaken on a Xilinx Zynq platform. Here, we identify and explore a number of mapping strategies to improve processing efficiency for soft-cores, together with a number of options for the creation of a division coprocessor. This is demonstrated for the revised high-definition HOG implementation on a Zynq platform, resulting in a performance of 328 fps, which represents a 146% speed improvement over the original realization and a tenfold reduction in energy.

Relevance: 20.00%

Abstract:

Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks, and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, as well as the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. Also, a comparison with loop perforation (a well-known compile-time approximation technique), shows that the proposed framework results in significantly higher quality for the same energy budget.

Relevance: 20.00%

Abstract:

This study was the first attempt to carry out a validation of a temperament test (TT) for shelter dogs that addressed the topics of inter- and intra-rater agreement, test-retest reliability, and validity. The TT consisted of 22 subtests. Each dog was approached and handled by an unfamiliar person and made to interact with a same- and an opposite-gender conspecific. Dogs were tested twice in the shelter and once in their new homes 4 months after adoption to evaluate consistency in behavioral assessment. Playfulness, trainability, problem-solving abilities, food possessiveness, and reactivity to sudden stimuli were also evaluated. Testers scored dogs' responses in terms of confidence, fearfulness, and aggressiveness. Results highlighted strengths and limits of this TT, which was devised to help shelter staff in matching dogs' personality and owners' expectations. Methodological constraints when working with sheltered dogs are unavoidable; however, the test proved to be overall feasible, reliable, and valid, although further studies are needed to address the critical issues that emerged. © 2011 Elsevier Inc.

Relevance: 20.00%

Abstract:

This paper investigates the environmental conditions inside a highly-glazed, cross-ventilated meeting room. A 3D computational fluid dynamics (CFD) model of the indoor environment is developed, supported by field measurements performed in a normally operating room. The work follows the steps of a formal calibration methodology for developing CFD models of naturally ventilated environments, and applies that methodology to predict environmental conditions within the highly-glazed, cross-ventilated room while occupied. The CFD model is verified and validated against the field measurements, and a parametric analysis determines the boundary conditions that most influence indoor air temperatures and air speeds.

Relevance: 20.00%

Abstract:

Over the past few decades, there has been an increased frequency and duration of cyanobacterial Harmful Algal Blooms (HABs) in freshwater systems globally. These can produce secondary metabolites called cyanotoxins, many of which are hepatotoxins, raising concerns about repeated exposure through ingestion of contaminated drinking water or food, or through recreational activities such as bathing/swimming. An ultra-performance liquid chromatography tandem mass spectrometry (UPLC–MS/MS) multi-toxin method has been developed and validated for the freshwater cyanotoxins microcystins-LR, -YR, -RR, -LA, -LY and -LF, nodularin, cylindrospermopsin and anatoxin-a, and the marine diatom toxin domoic acid. Separation was achieved in around 9 min, and dual SPE was incorporated, providing detection limits of between 0.3 and 5.6 ng/L of original sample. Intra- and inter-day precision analysis showed relative standard deviations (RSD) of 1.2–9.6% and 1.3–12.0% respectively. The method was applied to the analysis of aquatic samples (n = 206) from six European countries. The main class detected was the hepatotoxins: microcystin-YR (n = 22), cylindrospermopsin (n = 25), microcystin-RR (n = 17), microcystin-LR (n = 12), microcystin-LY (n = 1), microcystin-LF (n = 1) and nodularin (n = 5). For microcystins, the levels detected ranged from 0.001 to 1.51 µg/L, with two samples showing combined levels above the WHO guideline of 1 µg/L for microcystin-LR. Several samples presented with multiple toxins, indicating the potential for synergistic effects and possibly enhanced toxicity. This is the first published pan-European survey of freshwater bodies for multiple biotoxins, including two identified for the first time (cylindrospermopsin in Ireland and nodularin in Germany), presenting further incentives for improved monitoring and development of strategies to mitigate human exposure.

Relevance: 20.00%

Abstract:

In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.

Relevance: 20.00%

Abstract:

We focus on large-scale and dense deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function which approximates sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to be used for achieving highly scalable aggregate computations in wireless systems. But no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, Interpolation) using the new implementation of the dominance-based MAC protocol and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.

Relevance: 20.00%

Abstract:

The results of an investigation on the limits of the random errors contained in the basic data of Physical Oceanography and their propagation through the computational procedures are presented in this thesis. It also suggests a method which increases the reliability of the derived results. The thesis is presented in eight chapters, including the introductory chapter. Chapter 2 discusses the general theory of errors that is relevant in the context of the propagation of errors in Physical Oceanographic computations. The error components contained in the independent oceanographic variables, namely temperature, salinity and depth, are delineated and quantified in chapter 3. Chapter 4 discusses and derives the magnitude of errors in the computation of the dependent oceanographic variables, in situ density, σt, specific volume and specific volume anomaly, due to the propagation of errors contained in the independent oceanographic variables. The errors propagated into the computed values of the derived quantities, namely dynamic depth and relative currents, have been estimated and are presented in chapter 5. Chapter 6 reviews the existing methods for the identification of the level of no motion and suggests a method for the identification of a reliable zero reference level. Chapter 7 discusses the available methods for the extension of the zero reference level into shallow regions of the oceans and suggests a new method which is more reliable. A procedure of graphical smoothing of dynamic topographies between the error limits to provide more reliable results is also suggested in this chapter. Chapter 8 deals with the computation of the geostrophic current from these smoothed values of dynamic heights, with reference to the selected zero reference level. The summary and conclusion are also presented in this chapter.

Relevance: 20.00%

Abstract:

Let G be a finite group and K a number field or a p-adic field with ring of integers O_K. In the first part of the manuscript we present an algorithm that computes the relative algebraic K-group K_0(O_K[G], K) as an abstract abelian group. We solve the discrete logarithm problem, both in K_0(O_K[G], K) and in the locally free class group cl(O_K[G]). All algorithms have been implemented in MAGMA for the case K = ℚ. In the second part of the manuscript we prove formulae for the torsion subgroup of K_0(ℤ[G], ℚ) for large classes of dihedral and quaternion groups.

Relevance: 20.00%

Abstract:

Tuna species of the genus Thunnus, such as the bluefin tunas, are some of the most important and yet most endangered trade fish in the world. Identification of these species in traded forms, however, may be difficult depending on the presentation of the products, which may hamper conservation efforts on trade control. In this paper, we validated a genetic methodology that can fully distinguish between the eight Thunnus species from any kind of processed tissue. Methodology: After testing several genetic markers, a complete discrimination of the eight tuna species was achieved using Forensically Informative Nucleotide Sequencing based primarily on the sequence variability of the hypervariable genetic marker mitochondrial DNA control region (mtDNA CR), followed, in some specific cases, by a second validation with a nuclear marker, the rDNA first internal transcribed spacer (ITS1). This methodology was able to distinguish all tuna species, including those belonging to the subgenus Neothunnus that are very closely related and in consequence cannot be differentiated with other genetic markers of lower variability. This methodology also took into consideration the presence of introgression that has been reported in past studies between T. thynnus, T. orientalis and T. alalunga. Finally, we applied the methodology to cross-check the species identity of 26 processed tuna samples. Conclusions: Using the combination of two genetic markers, one mitochondrial and one nuclear, allows a full discrimination between all eight tuna species. Unexpectedly, the genetic marker traditionally used for DNA barcoding, cytochrome oxidase 1, could not differentiate all species; thus its use as a genetic marker for tuna species identification is questioned.