833 results for Multi-model inference
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
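The interpolation step described above can be sketched minimally. The benchmark sizes, timings, and function names below are illustrative placeholders, not values from the paper; the abstract's separation into a compute part and a halo-exchange part is kept:

```python
import numpy as np

# Hypothetical benchmark timings (seconds per iteration) for the compute
# kernel, measured at a few problem sizes under one deployment scenario.
bench_sizes = np.array([128, 256, 512, 1024])       # grid points per side
bench_times = np.array([0.004, 0.015, 0.061, 0.250])

def predict_kernel_time(n, sizes=bench_sizes, times=bench_times):
    """Predict per-iteration compute time for an n x n grid by linear
    interpolation between benchmarked problem sizes."""
    return float(np.interp(n, sizes, times))

def predict_total_time(n, iterations, halo_time_per_iter):
    """Total runtime = compute part + communication (halo-exchange) part,
    modelled separately as the abstract describes."""
    return iterations * (predict_kernel_time(n) + halo_time_per_iter)
```

In practice one such benchmark table would be needed per deployment scenario (decomposition, affinity, node population), with the model selecting and interpolating within the matching table.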
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palm of the hand to the object. In this chapter we begin by considering the different types of grasp, and then consider the physics of rigid objects that is needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
Abstract:
Break crops and multi-crop rotations are common in arable farm management, and the soil quality inherited from a previous crop is one of the parameters that determine the gross margin that is achieved with a given crop from a given parcel of land. In previous work we developed a dynamic economic model to calculate the potential yield and gross margin of a set of crops grown in a selection of typical rotation scenarios, and we reported use of the model to calculate coexistence costs for GM maize grown in a crop rotation. The model predicts economic effects of pest and weed pressures in monthly time steps. Validation of the model in respect of specific traits is proceeding as data from trials with novel crop varieties is published. Alongside this aspect of the validation process, we are able to incorporate data representing the economic impact of abiotic stresses on conventional crops, and then use the model to predict the cumulative gross margin achievable from a sequence of conventional crops grown at varying levels of abiotic stress. We report new progress with this aspect of model validation. In this paper, we report the further development of the model to take account of abiotic stress arising from drought, flood, heat or frost; such stresses being introduced in addition to variable pest and weed pressure. The main purpose is to assess the economic incentive for arable farmers to adopt novel crop varieties having multiple ‘stacked’ traits introduced by means of various biotechnological tools available to crop breeders.
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to differences in the ranking of importance of the model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) to a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such an MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
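One sensitivity definition that such a multi-method analysis might include is a first-order, variance-based index (the fraction of output variance explained by one factor alone). The toy model and binning estimator below are illustrative assumptions, not the ESTEL-2D setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # Toy stand-in for a model output; x1 dominates, x2 contributes weakly.
    return x1 + 0.5 * x2**2

n = 10_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = model(x1, x2)

def first_order_index(x, y, bins=20):
    """Crude first-order sensitivity index: variance of the conditional
    mean E[y | x] (estimated by quantile binning) over total variance."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.sum(counts * (cond_means - y.mean()) ** 2) / len(y)
    return var_cond / y.var()
```

A rank-based or screening method applied to the same samples may order the factors differently, which is exactly the motivation for comparing several definitions at once.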
Abstract:
Upscaling ecological information to larger scales in space, and downscaling remote sensing observations or model simulations to finer scales, remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from those created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
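A minimal sketch of the 2DTR idea, assuming a block-averaging observation operator and a discrete Laplacian as the smoothness penalty weighted by γ (both standard choices, not necessarily the paper's exact formulation): the fine-scale field x minimizes ||Ax − b||² + γ||Lx||², solved via the normal equations.

```python
import numpy as np

def laplacian_2d(n):
    """Discrete 2D Laplacian on an n x n grid (Kronecker construction)."""
    I = np.eye(n)
    D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.kron(I, D) + np.kron(D, I)

def downscale_tikhonov(coarse, factor, gamma):
    """Infer a fine-grid field whose block averages match `coarse`,
    with smoothness across space imposed by the multiplier gamma."""
    nc = coarse.shape[0]
    nf = nc * factor
    # A averages each (factor x factor) block of the fine grid.
    A = np.zeros((nc * nc, nf * nf))
    for i in range(nc):
        for j in range(nc):
            block = np.zeros((nf, nf))
            block[i*factor:(i+1)*factor, j*factor:(j+1)*factor] = 1.0 / factor**2
            A[i * nc + j] = block.ravel()
    L = laplacian_2d(nf)
    # Normal equations of min ||Ax - b||^2 + gamma ||Lx||^2.
    x = np.linalg.solve(A.T @ A + gamma * (L.T @ L), A.T @ coarse.ravel())
    return x.reshape(nf, nf)
```

Larger γ produces smoother fields that drift toward the coarse-scale mean; the abstract's finding relates the useful γ range to the semivariogram range of the observations.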
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussion of the need for information recalled through OLAP systems to maintain the contexts of the transactions in the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design depends critically on: the capture of facts with associated context; the encoding of facts with context into data with business rules; the storage and sourcing of data with business rules; the decoding of data with business rules back into facts with context; and the recall of facts with associated context.
The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed design model provides the opportunity to implement and use multi-purpose databases and business-rule stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data with the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
Abstract:
Researchers in the field of personalized recommendation currently give little consideration to differences in users' interests in resource attributes, although resource attributes are usually among the most important factors in determining user preferences. To address this problem, the paper builds an evaluation model of user interest based on resource multi-attributes, proposes a modified Pearson-Compatibility multi-attribute group decision-making algorithm, and introduces an algorithm to solve the recommendation problem for the k most similar neighbour users. Considering the characteristics of collaborative filtering recommendation, the paper addresses the issues of preference differences between similar users, incomplete values, and premature convergence of the algorithm, and thereby realizes multi-attribute collaborative filtering. Finally, the effectiveness of the algorithm is demonstrated by an experiment on collaborative recommendation among multiple users in a virtual environment. The experimental results show that the algorithm has high accuracy in predicting target users' attribute preferences and strong robustness to deviations and incomplete values.
Abstract:
The cloud plays a very important role in wireless sensor networks, crowd sensing, and IoT data collection and processing. However, current cloud solutions lack some features, and this hampers innovation in a number of new services. We propose a cloud solution that provides these missing features, such as multi-cloud support and device multi-tenancy, by relying on a quite different, fully distributed paradigm: the actor model.
Abstract:
We compare measurements of integrated water vapour (IWV) over a subarctic site (Kiruna, Northern Sweden) from five different sensors and retrieval methods: radiosondes, Global Positioning System (GPS), ground-based Fourier-transform infrared (FTIR) spectrometer, ground-based microwave radiometer, and satellite-based microwave radiometer (AMSU-B). Additionally, we compare against ERA-Interim model reanalysis data. GPS-based IWV data have the highest temporal coverage and resolution and are chosen as the reference data set. All data sets agree reasonably well, although the ground-based microwave instrument does so only if its data are cloud-filtered. We also address two issues that are general for such intercomparison studies: the impact of different lower altitude limits for the IWV integration, and the impact of representativeness error. We develop methods for correcting for the former and for estimating the random error contribution of the latter. A literature survey reveals that reported systematic differences between techniques are study-dependent and show no overall consistent pattern. Further improving the absolute accuracy of IWV measurements and providing climate-quality time series therefore remain challenging problems.
Abstract:
Introducing a parameterization of the interactions between wind-driven snow depth changes and melt pond evolution allows us to improve large-scale models. In this paper we have implemented an explicit melt pond scheme and, for the first time, a wind-dependent snow redistribution model and new snow thermophysics into a coupled ocean-sea ice model. The comparison of long-term mean statistics of melt pond fractions against observations demonstrates realistic melt pond cover on average over Arctic sea ice, but a clear underestimation of the pond coverage on the multi-year ice (MYI) of the western Arctic Ocean. The latter shortcoming originates from the concealing effect of persistent snow on forming ponds, impeding their growth. Analyzing a second simulation with intensified snow drift enables the identification of two distinct modes of sensitivity in the melt pond formation process. First, the larger proportion of wind-transported snow that is lost in leads directly curtails the late spring snow volume on sea ice and facilitates the early development of melt ponds on MYI. In contrast, a combination of higher air temperatures and thinner snow prior to the onset of melting sometimes makes the snow cover switch to a regime where it melts entirely and rapidly. In the latter situation, seemingly more frequent on first-year ice (FYI), a smaller snow volume relates directly to a reduced melt pond cover. Notwithstanding, changes in snow and water accumulation on seasonal sea ice are naturally limited, which lessens the impacts of wind-blown snow redistribution on FYI compared with those on MYI. At the basin scale, the overall increased melt pond cover results in decreased ice volume via the ice-albedo feedback in summer, which is experienced almost exclusively by MYI.
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method of determining past salinity uses seawater oxygen isotope (δ18Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ18Ow and salinity throughout time. Here we use the isotope-enabled fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and temporally independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ18Oc), which primarily reflect δ18Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ18Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ18Ow-salinity relationships. Spatial gradients are generally shallower than, but within ∼50% of, the actual simulated LH-to-LGM and LH-to-LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both spatial and actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are found to be changes in regional freshwater budgets, ocean circulation, and sea ice regimes. These can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian and tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ18Ow-salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
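The salinity-reconstruction step the abstract scrutinizes is, mechanically, a linear relationship between the seawater oxygen-isotope residual and salinity. A minimal sketch with invented, deliberately exactly-linear numbers shows the mechanics (a real reconstruction would fit regional data, and the abstract's point is precisely that this slope is not constant across space or time):

```python
import numpy as np

# Hypothetical paired surface-ocean values (per mil, psu); illustrative only.
d18Ow = np.array([0.2, 0.5, 0.9, 1.1, 1.4])
salinity = np.array([33.1, 33.7, 34.5, 34.9, 35.5])

# Ordinary least-squares fit of the local d18Ow-salinity relationship.
slope, intercept = np.polyfit(d18Ow, salinity, 1)

def reconstruct_salinity(d18Ow_value):
    """Map a d18Ow residual to salinity with the fitted linear relation."""
    return slope * d18Ow_value + intercept
```

Applying one such fitted slope to a different region or climate state is exactly the assumption the HadCM3 experiments test, and where they find errors exceeding 4 psu.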
Abstract:
Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences between the topologies recovered from coding and non-coding matrices. These conflicts correspond directly to relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest that incongruent signal has contributed to our inability to confidently resolve these problem areas. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
São Paulo is the most developed state in Brazil and retains few fragments of native ecosystems, generally surrounded by intensive agricultural land. Despite this, some areas still shelter large native animals. We aimed to understand how medium and large carnivores use a mosaic landscape of forest/savanna and agroecosystems, and how the species respond to different landscape parameters (percentage of land cover and edge density) from a multi-scale perspective. The response variables were species richness, carnivore frequency, and the frequencies of the three most recorded species (Puma concolor, Chrysocyon brachyurus and Leopardus pardalis). We compared 11 competing models using Akaike's information criterion (AIC) and assessed model support using AIC weights. The competing models were combinations of land-cover type (native vegetation, "cerrado" formations, "cerrado" and eucalypt plantation), landscape feature (percentage of land cover and edge density) and spatial scale. Here, spatial scale refers to the radius around a sampling point defining a circular landscape; the scales analyzed were 250 m (fine), 1,000 m (medium) and 2,000 m (coarse). The shape of the curves for the response variables (linear, exponential or power) was also assessed. Our results indicate that the species with high mobility, P. concolor and C. brachyurus, were best explained by the edge density of native vegetation at the coarse (2,000 m) scale, with frequency showing a negative power-shaped response to the explanatory variables. This general trend was also observed for species richness and carnivore frequency. Species richness and P. concolor frequency were also well explained by a second competing model: edge density of cerrado at the fine (250 m) scale. A different response was recorded for L. pardalis, whose frequency was best explained by the amount of cerrado at the fine (250 m) scale, with a linearly positive response curve. The contrasting results (P. concolor and C. brachyurus vs. L. pardalis) may be due to the much higher mobility of the first two species compared with the third; moreover, L. pardalis requires higher-quality habitat than the other two species. This study highlights the importance of considering multiple spatial scales when evaluating species responses to different habitats. An important new finding was the prevalence of edge density over habitat extent in explaining overall carnivore distribution, key information for the planning and management of protected areas.
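The model-support assessment described above rests on Akaike weights, which can be sketched directly (the standard Burnham-and-Anderson form; the AIC values below are invented):

```python
import math

def aic_weights(aics):
    """Akaike weights: relative support for each model in a candidate set,
    computed from differences to the minimum AIC."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]
```

For example, `aic_weights([100.0, 102.0, 110.0])` gives most of the weight to the first model, a little to the second, and almost none to the third, which is how support was compared across the 11 candidate landscape models.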
Abstract:
Background and Objectives: There are some indications that low-level laser therapy (LLLT) may delay the development of skeletal muscle fatigue during high-intensity exercise. There have also been claims that LED cluster probes may be effective for this application; however, there are differences between LED and laser sources, such as spot size, spectral width and power output. In this study we wanted to test whether light-emitting diode therapy (LEDT) can alter muscle performance, fatigue development and biochemical markers of skeletal muscle recovery in an experimental model of biceps humeri muscle contractions. Study Design/Materials and Methods: Ten male professional volleyball players (23.6 [SD +/- 5.6] years old) entered a randomized double-blinded placebo-controlled crossover trial. Active cluster LEDT (69 LEDs with wavelengths of 660/850 nm, 10/30 mW output, 30 seconds total irradiation time, 41.7 J of total energy irradiated) or an identical placebo LEDT was delivered under double-blinded conditions to the middle of the biceps humeri muscle immediately before exercise. All subjects performed voluntary biceps humeri contractions at a workload of 75% of their maximal voluntary contraction force (MVC) until exhaustion. Results: Active LEDT increased the number of biceps humeri contractions by 12.9% (38.60 [SD +/- 9.03] vs. 34.20 [SD +/- 8.68], P = 0.021) and extended the elapsed time to perform contractions by 11.6% (P = 0.036) versus placebo. In addition, post-exercise levels of biochemical markers decreased significantly with active LEDT: blood lactate (P = 0.042), creatine kinase (P = 0.035) and C-reactive protein (P = 0.030), when compared to placebo LEDT. Conclusion: We conclude that this particular procedure and dose of LEDT, applied immediately before exhaustive biceps humeri contractions, causes a slight delay in the development of skeletal muscle fatigue, decreases post-exercise blood lactate levels and inhibits the release of creatine kinase and C-reactive protein.
Lasers Surg. Med. 41:572-577, 2009. (C) 2009 Wiley-Liss, Inc.
Abstract:
In this paper, we present an algorithm for cluster analysis that integrates aspects of cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm with a special crossover operator, which uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without requiring expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with Multi-Objective Clustering with automatic K-determination (MOCK), the algorithm most closely related to ours. (C) 2009 Elsevier B.V. All rights reserved.
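The "concise set of partitions representing alternative trade-offs" is the Pareto front over the validation-measure objectives. A minimal sketch of the dominance test at its core (minimization of all objectives is assumed here; the objective vectors are invented stand-ins for validation scores of candidate partitions):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors, i.e. the trade-off
    set returned to the user instead of a single 'best' partition."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

In the genetic algorithm, this non-dominated filter drives selection each generation; the final front is what gets reported.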