923 results for Dynamic storage allocation (Computer science)
Abstract:
Nine chess programs competed in July 2015 in the ICGA's World Computer Chess Championship at the Computer Science department of Leiden University. This is the official report of the event.
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (biological, etc.) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters to solve the nonlinear data-assimilation problem in high-dimensional geophysical problems will be discussed. Several existing and new schemes will be presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward to solving the problem of nonlinear data assimilation in high-dimensional systems.
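To make the setting concrete, the following is a minimal sketch (in Python, not from the paper) of a bootstrap particle filter analysis step; the Equivalent-Weights scheme discussed in the abstract modifies the proposal and weighting steps, which are plain here. The propagate and likelihood functions are hypothetical stand-ins for a model and observation operator:

    import numpy as np

    def bootstrap_pf_step(particles, observation, propagate, likelihood, rng):
        # Forecast: advance every particle under the (possibly nonlinear) model.
        particles = propagate(particles)
        # Analysis: weight particles by how well they explain the observation.
        w = likelihood(observation, particles)
        w = w / w.sum()
        # Resample to fight weight degeneracy, the symptom of the curse of
        # dimensionality that equivalent-weights schemes are designed to avoid.
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    # Toy 1-D usage with hypothetical model and observation operators.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(500, 1))                       # 500 particles
    prop = lambda p: 0.9 * p + rng.normal(0.0, 0.3, p.shape)
    lik = lambda y, p: np.exp(-0.5 * ((y - p[:, 0]) / 0.5) ** 2)
    x = bootstrap_pf_step(x, 1.2, prop, lik, rng)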
Abstract:
Searching a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main-memory capacity and its falling cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf-node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
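The Onion-tree's exact partitioning is specific to the paper, but the triangle-inequality pruning that pivot-based MAMs rely on can be sketched briefly; this is a generic illustration, not the Onion-tree algorithm itself:

    import numpy as np

    def range_query(data, dist, pivot, pivot_dists, q, radius):
        # Triangle inequality: |d(q,p) - d(x,p)| <= d(q,x), so any x with
        # |d(q,pivot) - d(x,pivot)| > radius cannot be inside the query ball.
        dq = dist(q, pivot)
        hits = []
        for x, dx in zip(data, pivot_dists):
            if abs(dq - dx) > radius:
                continue                      # pruned without a distance call
            if dist(q, x) <= radius:
                hits.append(x)
        return hits

    # Illustrative usage on 2-D points under the Euclidean metric.
    pts = [np.array(p, float) for p in [(0, 0), (1, 1), (2, 2), (5, 5)]]
    dist = lambda a, b: float(np.linalg.norm(a - b))
    pivot, pd = pts[0], [dist(x, pts[0]) for x in pts]
    hits = range_query(pts, dist, pivot, pd, np.array([1.2, 1.2]), 0.5)   # -> [(1, 1)]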
Abstract:
This work proposes and discusses an approach for inducing Bayesian classifiers aimed at balancing the tradeoff between the precise probability estimates produced by time-consuming unrestricted Bayesian networks and the computational efficiency of Naive Bayes (NB) classifiers. The proposed approach is based on the fundamental principles of heuristic-search Bayesian network learning. The Markov Blanket concept, as well as a proposed "approximate Markov Blanket", are used to reduce the number of nodes that form the Bayesian network to be induced from data. Consequently, the usually high computational cost of heuristic-search learning algorithms can be lessened, while Bayesian network structures better than NB can be achieved. The resulting algorithms, called DMBC (Dynamic Markov Blanket Classifier) and A-DMBC (Approximate DMBC), are empirically assessed in twelve domains that illustrate scenarios of particular interest. The obtained results are compared with NB and Tree Augmented Network (TAN) classifiers, and confirm that both proposed algorithms can provide good classification accuracies and better probability estimates than NB and TAN, while being more computationally efficient than the widely used K2 algorithm.
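The DMBC and A-DMBC algorithms themselves are not reproduced here; as a reference point, this is a minimal sketch of the discrete Naive Bayes baseline with Laplace smoothing that the abstract compares against (integer-coded features assumed):

    import numpy as np

    def fit_naive_bayes(X, y, alpha=1.0):
        # Discrete Naive Bayes with Laplace smoothing; X holds integer-coded features.
        classes = np.unique(y)
        n_vals = [len(np.unique(X[:, j])) for j in range(X.shape[1])]
        model = {}
        for c in classes:
            Xc = X[y == c]
            tables = []
            for j, k in enumerate(n_vals):
                vals, cnt = np.unique(Xc[:, j], return_counts=True)
                denom = len(Xc) + alpha * k
                tbl = {v: (n + alpha) / denom for v, n in zip(vals, cnt)}
                tables.append((tbl, alpha / denom))   # default for unseen values
            model[c] = (len(Xc) / len(y), tables)
        return model

    def predict(model, x):
        def logpost(c):
            prior, tables = model[c]
            return np.log(prior) + sum(np.log(t.get(v, d)) for (t, d), v in zip(tables, x))
        return max(model, key=logpost)

    X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
    y = np.array([0, 0, 1, 1])
    label = predict(fit_naive_bayes(X, y), [0, 1])    # -> 0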
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments, and distributed-computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Process scheduling techniques consider the current load situation to allocate computing resources. Those techniques make approximations, such as averages of communication, processing, and memory access, to improve process scheduling, although processes may present different behaviors during their whole execution. They may start with high communication requirements and later become dominated by processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This has motivated this paper, which adopts chaos theory concepts and nonlinear prediction techniques in order to model and predict process behavior. Results confirm that the radial basis function technique presents good predictions at low processing demands, which is essential in a real distributed environment.
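A rough illustration of the prediction machinery the abstract refers to: a delay embedding of the observed series (the standard tool for chaotic behavior) followed by Gaussian radial basis function interpolation. The kernel width and the logistic-map stand-in for process behavior are assumptions for the example:

    import numpy as np

    def delay_embed(series, dim, tau=1):
        # Takens-style delay embedding: row i is [s(i), s(i+tau), ..., s(i+(dim-1)tau)].
        n = len(series) - (dim - 1) * tau
        return np.array([series[i:i + dim * tau:tau] for i in range(n)])

    def rbf_fit(X, y, gamma=1.0, ridge=1e-8):
        # Gaussian RBF interpolation: solve (K + ridge*I) w = y.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        return np.linalg.solve(K + ridge * np.eye(len(X)), y)

    def rbf_predict(X, w, x_new, gamma=1.0):
        return np.exp(-gamma * ((X - x_new) ** 2).sum(-1)) @ w

    # Logistic-map series as a hypothetical stand-in for measured process behavior.
    s = [0.4]
    for _ in range(300):
        s.append(3.9 * s[-1] * (1.0 - s[-1]))
    s = np.array(s)
    E = delay_embed(s, dim=3)
    X, y = E[:-1], s[3:]                 # each window predicts the next sample
    w = rbf_fit(X, y, gamma=5.0)
    next_val = rbf_predict(X, w, E[-1], gamma=5.0)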
Abstract:
This work presents a numerical method suitable for the study of the development of internal boundary layers (IBL) and their characteristics for flows over various types of coastal cliffs. The IBL is an important meteorological occurrence for flows with surface-roughness and topographical step changes. A two-dimensional flow program was used for this study. The governing equations were written using the vorticity-velocity formulation. The spatial derivatives were discretized by high-order compact finite-difference schemes. The time integration was performed with a low-storage fourth-order Runge-Kutta scheme. The coastal cliff (step) was specified through an immersed boundary method. The validation of the code was done by comparison of the results with experimental and observational data. The numerical simulations were carried out for different coastal cliff heights and inclinations. The results show that the predominant factors for the height of the IBL and its characteristics are the upstream velocity and the height and form (inclination) of the coastal cliff. Copyright (C) 2010 John Wiley & Sons, Ltd.
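The attraction of low-storage Runge-Kutta schemes is that a step of any stage count needs only the solution vector and one extra register. Below is a sketch of the Williamson 2N-storage loop for an autonomous system, shown with the classic third-order coefficients; the paper uses a fourth-order member of the same family, whose coefficients are not reproduced here:

    import numpy as np

    def low_storage_rk_step(f, u, dt, A, B):
        # Williamson 2N-storage loop for u' = f(u): besides u itself, only the
        # single register w is kept in memory, regardless of the stage count.
        w = np.zeros_like(u)
        for a, b in zip(A, B):
            w = a * w + dt * f(u)
            u = u + b * w
        return u

    # Williamson's classic third-order coefficients, as a runnable instance of
    # the family.
    A3 = [0.0, -5.0 / 9.0, -153.0 / 128.0]
    B3 = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

    # Decay test u' = -u: after 100 steps of dt = 0.01, u is close to exp(-1).
    u = np.array([1.0])
    for _ in range(100):
        u = low_storage_rk_step(lambda v: -v, u, 0.01, A3, B3)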
Abstract:
We have studied by numerical simulations the relaxation of the stochastic seven-state Potts model after a quench from a high temperature down to a temperature below the first-order transition. For quench temperatures just below the transition temperature, the phase ordering occurs by simple coarsening under the action of surface tension. For sufficiently low temperatures, however, the straightening of the interface between domains drives the system toward a metastable disordered state, identified as a glassy state. If the quench temperature is nonzero, the system escapes from this state by thermally activated dynamics that eventually drive it toward the equilibrium state. (C) 2009 Elsevier B.V. All rights reserved.
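For reference, a single-spin-flip Metropolis sweep for the q-state Potts model (Hamiltonian H = -sum over nearest-neighbor pairs of delta(s_i, s_j), periodic boundaries) looks as follows; the lattice size, temperature, and sweep count are illustrative, not the paper's settings:

    import numpy as np

    def metropolis_sweep(spins, q, T, rng):
        # One Metropolis sweep of the q-state Potts model on an L x L lattice
        # with periodic boundaries.
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            new = rng.integers(q)
            nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                    spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
            # Energy change: satisfied bonds lost minus satisfied bonds gained.
            dE = sum(int(n == spins[i, j]) for n in nbrs) - sum(int(n == new) for n in nbrs)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] = new
        return spins

    # Quench: a random (infinite-temperature) start relaxed at low temperature.
    rng = np.random.default_rng(1)
    spins = rng.integers(7, size=(32, 32))
    for _ in range(50):
        spins = metropolis_sweep(spins, q=7, T=0.5, rng=rng)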
Abstract:
Shape provides some of the most relevant information about an object, which makes it one of the most important visual attributes used to characterize objects. This paper introduces a novel approach for shape characterization, which combines modeling a shape as a complex network with the analysis of its complexity in a dynamic evolution context. Descriptors computed through this approach prove to be efficient in shape characterization, incorporating desirable properties such as scale and rotation invariance. Experiments using two different shape databases (an artificial shape database and a leaf shape database) are presented in order to evaluate the method, and its results are compared to traditional shape analysis methods found in the literature. (C) 2009 Published by Elsevier B.V.
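The core construction can be sketched compactly: boundary points become network nodes, normalized pairwise distances become edge weights, and the dynamic evolution is a sweep over distance thresholds, with degree statistics at each threshold forming the descriptor. This is a simplified reading of the approach, with illustrative threshold values:

    import numpy as np

    def shape_network_descriptors(points, thresholds):
        # Nodes are boundary points; edge weights are pairwise distances
        # normalized by the largest one, which buys scale invariance (distances
        # are rotation invariant by construction).
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        d = d / d.max()
        feats = []
        for t in thresholds:
            # Network 'evolution': keep only edges shorter than the threshold.
            deg = ((d <= t).sum(axis=1) - 1) / (len(points) - 1)   # drop self-loops
            feats += [deg.mean(), deg.max()]
        return np.array(feats)

    # Illustrative usage on a sampled circular contour.
    ang = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
    circle = np.c_[np.cos(ang), np.sin(ang)]
    feats = shape_network_descriptors(circle, thresholds=[0.1, 0.3, 0.5])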
Abstract:
This paper introduces a novel methodology for shape boundary characterization, where a shape is modeled as a small-world complex network. It uses degree and joint-degree measurements in a dynamically evolving network to compose a set of shape descriptors. The proposed shape characterization method is efficient, robust, noise tolerant, scale invariant, and rotation invariant. A plant leaf classification experiment is presented on three image databases in order to evaluate the method and compare it with other descriptors in the literature (Fourier descriptors, curvature, Zernike moments, and the multiscale fractal dimension). (C) 2008 Elsevier Ltd. All rights reserved.
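A possible form of the joint-degree measurements mentioned here, under the same thresholded-network construction sketched above (pairing each node's degree with the mean degree of its neighbors is an assumption for illustration):

    import numpy as np

    def joint_degree_features(points, thresholds):
        # Pairwise distances, normalized so the largest is 1 (scale invariance).
        d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
        d = d / d.max()
        feats = []
        for t in thresholds:
            adj = (d <= t) & ~np.eye(len(points), dtype=bool)
            deg = adj.sum(axis=1)
            # Joint degree here: mean degree of each node's neighbors.
            nbr_mean = np.where(deg > 0, (adj.astype(float) @ deg) / np.maximum(deg, 1), 0.0)
            feats += [deg.mean(), nbr_mean.mean()]
        return np.array(feats)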
Abstract:
The assessment of routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, like adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
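The central evolving-graph primitive behind least-cost routing is the foremost (earliest-arrival) journey. A minimal sketch, assuming each directed edge carries a sorted list of times at which it is up and that traversal takes one time unit:

    import heapq
    from bisect import bisect_left

    def earliest_arrival(schedule, src, dst, t0=0):
        # schedule: (u, v) -> sorted times at which the directed edge is up.
        best = {src: t0}
        pq = [(t0, src)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == dst:
                return t
            if t > best.get(u, float("inf")):
                continue                      # stale queue entry
            for (a, b), times in schedule.items():
                if a != u:
                    continue
                i = bisect_left(times, t)     # next moment the edge is up
                if i < len(times):
                    arrive = times[i] + 1     # wait for the edge, then cross
                    if arrive < best.get(b, float("inf")):
                        best[b] = arrive
                        heapq.heappush(pq, (arrive, b))
        return None                           # dst unreachable from src after t0

    sched = {("a", "b"): [0, 10], ("b", "c"): [3], ("a", "c"): [20]}
    assert earliest_arrival(sched, "a", "c") == 4   # via b: cross at t=3, arrive 4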
Abstract:
When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and can thus be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches, and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated. (C) 2011 Elsevier B.V. All rights reserved.
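A flat (unfactored) robust value iteration, the baseline the factored methods accelerate, can be sketched for the special case where the imprecise transitions are given as per-entry interval bounds; the worst-case distribution then has a closed-form greedy solution rather than requiring a nonlinear solver:

    import numpy as np

    def worst_case_dist(lo, hi, values):
        # Adversarial distribution within [lo, hi] that minimizes expected value:
        # start from the lower bounds, then pour the remaining mass into the
        # lowest-valued successors first (assumes lo.sum() <= 1 <= hi.sum()).
        p = lo.copy()
        slack = 1.0 - p.sum()
        for s in np.argsort(values):
            add = min(hi[s] - p[s], slack)
            p[s] += add
            slack -= add
        return p

    def robust_value_iteration(P_lo, P_hi, R, gamma=0.95, iters=200):
        # P_lo, P_hi: (A, S, S) interval bounds on P(s'|s,a); R: (A, S) rewards.
        A, S, _ = P_lo.shape
        V = np.zeros(S)
        for _ in range(iters):
            Q = np.empty((A, S))
            for a in range(A):
                for s in range(S):
                    p = worst_case_dist(P_lo[a, s], P_hi[a, s], V)
                    Q[a, s] = R[a, s] + gamma * (p @ V)
            V = Q.max(axis=0)
        return V

    # Tiny illustrative instance.
    P_lo = np.full((2, 3, 3), 0.1)
    P_hi = np.full((2, 3, 3), 0.8)
    R = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 0.0]])
    V = robust_value_iteration(P_lo, P_hi, R)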
Abstract:
Objective: To investigate whether advanced visualizations of spirography-based objective measures are useful in differentiating drug-related motor dysfunctions between Off and dyskinesia in Parkinson's disease (PD).

Background: During the course of a 3-year longitudinal clinical study, in total 65 patients (43 males and 22 females with a mean age of 65) with advanced PD and 10 healthy elderly (HE) subjects (5 males and 5 females with a mean age of 61) were assessed. Both patients and HE subjects performed repeated and time-stamped assessments of their objective health indicators using a test battery implemented on a telemetry touch-screen handheld computer, in their home environment settings. Among other tasks, the subjects were asked to trace a pre-drawn Archimedes spiral using the dominant hand and to repeat the test three times per test occasion.

Methods: A web-based framework was developed to enable a visual exploration of relevant spirography-based kinematic features by clinicians so they can in turn evaluate the motor states of the patients, i.e., Off and dyskinesia. The system uses different visualization techniques, such as time series plots, animation, and interaction, and organizes them into different views to aid clinicians in measuring spatial and time-dependent irregularities that could be associated with the motor states. Along with the animation view, the system displays two time series plots representing drawing speed (blue line) and displacement from the ideal trajectory (orange line). The views are coordinated and linked, i.e., user interactions in one of the views are reflected in the other views. For instance, when the user points at one of the pixels in the spiral view, the circle size of the underlying pixel increases and a vertical line appears in the time series views to depict the corresponding position. In addition, in order to enable clinicians to observe erratic movements more clearly and thus improve the detection of irregularities, the system displays a color map which conveys the time course of the spirography task. Figure 2 shows single randomly selected spirals drawn by: A) a patient who experienced dyskinesias, B) a HE subject, and C) a patient in the Off state.

Results: According to a domain expert (DN), the spirals drawn in the Off and dyskinesia motor states are characterized by different spatial and time features. For instance, the spiral shown in Fig. 2A was drawn by a patient who showed symptoms of dyskinesia; the drawing speed was relatively high (cf. the blue-colored time series plot and the short timestamp scale on the x axis) and the spatial displacement was high (cf. the orange-colored time series plot), associated with smooth deviations as a result of uncontrollable movements. The patient also exhibited a low amount of hesitation, which is reflected both in the animation of the spiral and in the time series plots. In contrast, the patient who was in the Off state exhibited different kinematic features, as shown in Fig. 2C. In the case of spirals drawn by a HE subject, there was great precision during the drawing process as well as unchanging levels of time-dependent features over the test trial, as seen in Fig. 2B.

Conclusions: Visualizing spirography-based objective measures enables identification of trends and patterns of drug-related motor dysfunctions at the patient's individual level.
Dynamic access to visualized motor tests may be useful during the evaluation of drug-related complications such as under- and over-medication, providing decision support to clinicians during the evaluation of treatment effects as well as improving the quality of life of patients and their caregivers. In the future, we plan to evaluate the proposed approach by assessing within- and between-clinician variability in ratings in order to determine its actual usefulness, and then to use these ratings as target outcomes in supervised machine learning, similarly to what was previously done in the study performed by Memedi et al. (2013).
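The two plotted series described above can be computed directly from time-stamped pen samples; a sketch, assuming coordinates are centered on the spiral's origin and that the ideal template is an Archimedes spiral r = a + b*theta fit by least squares:

    import numpy as np

    def spiral_features(x, y, t):
        # Per-sample drawing speed, and radial displacement from a best-fit
        # Archimedes spiral r = a + b*theta (least squares on unwrapped angles).
        x, y, t = map(np.asarray, (x, y, t))
        speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        theta = np.unwrap(np.arctan2(y, x))
        r = np.hypot(x, y)
        design = np.column_stack([np.ones_like(theta), theta])
        (a, b), *_ = np.linalg.lstsq(design, r, rcond=None)
        displacement = np.abs(r - (a + b * theta))
        return speed, displacement

    # Synthetic ideal trace: displacement from the fitted spiral is near zero.
    th = np.linspace(0.0, 6.0 * np.pi, 200)
    xs, ys = 0.5 * th * np.cos(th), 0.5 * th * np.sin(th)
    ts = np.linspace(0.0, 20.0, 200)
    speed, disp = spiral_features(xs, ys, ts)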
Abstract:
In e-Science experiments, it is vital to record the experimental process for later use, such as interpreting results, verifying that the correct process took place, or tracing where data came from. The process that led to some data is called the provenance of that data, and a provenance architecture is the software architecture for a system that provides the necessary functionality to record, store, and use process documentation. However, there has been little principled analysis of what is actually required of a provenance architecture, so it is impossible to determine the functionality such architectures would ideally support. In this paper, we present use cases for a provenance architecture from current experiments in biology, chemistry, physics, and computer science, and analyse the use cases to determine the technical requirements of a generic, technology- and application-independent architecture. We propose an architecture that meets these requirements and evaluate a preliminary implementation by attempting to realise two of the use cases.
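As a toy illustration of process documentation (not the architecture proposed in the paper), each experimental step can be recorded as an assertion linking outputs to inputs, from which the provenance of any datum is recovered by walking backwards; the names and fields here are hypothetical:

    import hashlib
    import json
    import time

    class ProvenanceStore:
        # Records are append-only assertions linking outputs to inputs
        # (assumes JSON-serializable fields and acyclic derivations).
        def __init__(self):
            self.records = {}

        def record(self, actor, activity, inputs, outputs, params=None):
            rec = {"actor": actor, "activity": activity, "inputs": inputs,
                   "outputs": outputs, "params": params or {}, "time": time.time()}
            rid = hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()[:12]
            self.records[rid] = rec
            return rid

        def trace(self, datum):
            # Walk backwards from a datum to every record that contributed to it.
            lineage = []
            for rec in self.records.values():
                if datum in rec["outputs"]:
                    lineage.append(rec)
                    for src in rec["inputs"]:
                        lineage.extend(self.trace(src))
            return lineage

    store = ProvenanceStore()
    store.record("alice", "sequence-alignment", ["reads.fastq"], ["aligned.bam"])
    store.record("alice", "variant-calling", ["aligned.bam"], ["variants.vcf"])
    lineage = store.trace("variants.vcf")     # reaches back to reads.fastq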