888 results for Grid computing environment
Abstract:
For the execution of scientific applications, different methods have been proposed to dynamically provide execution environments that hide the complexity of the underlying distributed and heterogeneous infrastructures. Recently, virtualization has emerged as a promising technology for providing such environments. Virtualization abstracts away the details of the physical hardware and offers virtualized resources to high-level scientific applications, providing a cost-effective and flexible way to use and manage computing resources. Such an abstraction is appealing in Grid and Cloud computing for better matching jobs (applications) to computational resources. This work applies the virtualization concept to the Condor dynamic resource management system, using the Condor Virtual Universe to harvest existing virtual computing resources to their maximum utility. It allows computing resources to be provisioned dynamically at run-time by users, based on application requirements, instead of statically at design-time, thereby laying the basis for efficient use of the available resources.
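The dynamic, requirement-driven provisioning described above boils down to matching job requirements against resource offers at run-time. The short Python sketch below illustrates that matchmaking idea only; it is not Condor's actual ClassAd mechanism, and the field names (cpus, memory_mb, vm_image) are hypothetical.

# Minimal sketch of requirement-based matchmaking between jobs and virtualized
# resources. Illustrative only: Condor's real mechanism is ClassAd matchmaking;
# the dictionaries and field names here are hypothetical.

def matches(job, resource):
    """Return True if the resource offer satisfies the job's requirements."""
    return (resource["cpus"] >= job["cpus"]
            and resource["memory_mb"] >= job["memory_mb"]
            and job["vm_image"] in resource["vm_images"])

def provision(jobs, resources):
    """Greedily assign each job to the first matching resource at run-time."""
    assignments = {}
    free = list(resources)
    for job in jobs:
        for res in free:
            if matches(job, res):
                assignments[job["name"]] = res["name"]
                free.remove(res)
                break
    return assignments

jobs = [{"name": "sim-1", "cpus": 2, "memory_mb": 2048, "vm_image": "debian"}]
resources = [{"name": "vm-node-7", "cpus": 4, "memory_mb": 8192,
              "vm_images": {"debian", "centos"}}]
print(provision(jobs, resources))  # {'sim-1': 'vm-node-7'}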
Abstract:
This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing and the Mobile Peer-to-peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovations, and the concept of the business model. The literature review builds up a propositional framework for estimating the amount of radical change in a company's business model using two mediating variables: the disruptiveness potential of a new technology and its strategic importance to the firm. The data were gathered in group discussion sessions in each company. The results of the case analyses were brought together to evaluate how firms interpret potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, firms did not see the new technologies as a threat in terms of potential competence disruption.
Abstract:
This manuscript presents, in tutorial form, the basic concepts and practical application of Principal Component Analysis (PCA) using the Matlab or Octave computing environment, aimed at beginners and undergraduate and graduate students. As a practical example, the exploratory analysis of edible vegetable oils by mid-infrared spectroscopy is shown.
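Since the tutorial's core operation is decomposing a data matrix into scores and loadings, here is a minimal PCA sketch. It uses Python/NumPy rather than the Matlab/Octave environment of the tutorial, and the data matrix is a random placeholder standing in for mid-infrared spectra.

# Minimal PCA sketch in Python/NumPy (the tutorial itself uses Matlab/Octave).
# X is assumed to be a (samples x variables) matrix, e.g. mid-infrared spectra.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))           # placeholder for a spectral data set

Xc = X - X.mean(axis=0)                  # mean-center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                           # sample coordinates (scores)
loadings = Vt.T                          # variable contributions (loadings)
explained = s**2 / np.sum(s**2)          # fraction of variance per component

print("variance explained by PC1-PC3:", explained[:3])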
Abstract:
The aim of this manuscript is to present, in tutorial form, the basic concepts and practical application of Partial Least Squares (PLS) using the Matlab computing environment, aimed at beginners and undergraduate and graduate students. As a practical example, the determination of the drug paracetamol in commercial tablets by Near-Infrared (NIR) spectroscopy and PLS regression is shown, an experiment that has been successfully carried out at the Chemical Institute of Campinas State University to introduce undergraduate chemistry students to the basic concepts of multivariate calibration in a practical way.
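As a rough companion to the calibration workflow described above, the sketch below builds a PLS model with cross-validation. It assumes Python and scikit-learn instead of the Matlab environment used in the tutorial, and the spectra and reference values are random placeholders, not the paracetamol data.

# Hedged sketch of a PLS calibration in Python with scikit-learn, standing in
# for the Matlab workflow of the tutorial. X would hold NIR spectra of tablets
# and y the paracetamol content; random data is used as a placeholder.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 150))                     # placeholder NIR spectra
y = rng.uniform(400, 600, size=40)                 # placeholder drug content (mg)

pls = PLSRegression(n_components=3)                # number of latent variables
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()  # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.1f} mg")

pls.fit(X, y)                                      # final calibration model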
Abstract:
The objective of this manuscript is to describe a practical experiment that can be employed to teach concepts related to design of experiments using the Matlab or Octave computing environment to beginners and undergraduate and graduate students. The classical experiment for the determination of Fe(II) using o-phenanthroline was selected because it is easy to understand and all the required materials are readily available in most analytical laboratories. The approach used in this tutorial is divided into two steps: first, the students are introduced to the concept of multivariate effects, how to calculate and interpret them, and the construction and evaluation of a linear model describing the experimental domain using a 2³ factorial design; second, the factorial design is extended by adding axial points, yielding a central composite design. The quadratic model is then introduced and used to build the response surface.
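The first step of the tutorial, estimating multivariate effects and a linear model from a 2³ factorial design, can be illustrated with a short sketch. It is written in Python/NumPy rather than Matlab/Octave, and the response values are invented placeholders, not the Fe(II) determinations.

# Sketch of computing main effects from a 2^3 factorial design. The design
# matrix uses coded levels (-1/+1); the responses below are placeholders.
import numpy as np
from itertools import product

design = np.array(list(product([-1, 1], repeat=3)), dtype=float)   # 8 runs
y = np.array([12.0, 15.1, 13.2, 16.8, 12.9, 15.7, 14.0, 17.5])     # placeholder responses

# Main effect of each factor: mean response at +1 minus mean response at -1
for i, name in enumerate(["factor A", "factor B", "factor C"]):
    effect = y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
    print(f"{name}: effect = {effect:.2f}")

# Coefficients of the linear model y = b0 + b1*x1 + b2*x2 + b3*x3 (least squares)
Xm = np.column_stack([np.ones(8), design])
b, *_ = np.linalg.lstsq(Xm, y, rcond=None)
print("model coefficients:", np.round(b, 2))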
Abstract:
Modern society is increasingly dependent on software applications. These run on processors, use memory and control functionality that is often taken for granted. Typically, applications adjust their functionality in response to a context that is provided by, or derived from, an informal environment of varying quality. To rigorously model the dependence of an application on a context, the details of the context are usually abstracted away and the environment is assumed stable and fixed. However, in a context-aware ubiquitous computing environment populated by autonomous agents, a context and its quality parameters may change at any time. This raises the need to derive the current context and its qualities at runtime. It also implies that a context is never certain and may be subjective, issues captured by the context's quality parameter of experience-based trustworthiness. Given this, the research question of this thesis is: in what logical topology and by what means may context provided by autonomous agents be derived and formally modelled to serve the context-awareness requirements of an application? This research question also stipulates that context derivation needs to incorporate the quality of the context. In this thesis, we focus on the quality-of-context parameter of trustworthiness, based on experiences that have a level of certainty and on referral experiences, which makes trustworthiness reputation-based. Hence, we seek a basis on which to reason about and analyse the inherently inaccurate context derived by the autonomous agents populating a ubiquitous computing environment, in order to formally model context-awareness. More specifically, the contribution of this thesis is threefold: (i) we propose a logical topology of context derivation and a method for calculating its trustworthiness, (ii) we provide a general model for storing experiences and (iii) we formalise the dependence between the logical topology of context derivation and its experience-based trustworthiness. These contributions enable the abstraction of a context and its quality parameters to a Boolean decision at runtime that can be formally reasoned with. We employ the Action Systems framework for this modelling. The thesis is a compendium of the author's scientific papers, which are republished in Part II. Part I introduces the field of research and provides the elements that bind the thesis into a coherent introduction addressing the research question. In Part I we also review a significant body of related literature in order to better illustrate our contributions to the research field.
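As a loose illustration of contribution (i), the sketch below combines direct and referral experiences, each with a certainty level, into a trust score that is then abstracted to a Boolean decision at runtime. The weighting scheme and threshold are hypothetical; the thesis itself formalises these ideas within the Action Systems framework.

# Illustrative sketch of reputation-based trustworthiness abstracted to a
# Boolean decision. The weighting scheme and threshold are hypothetical.

def trustworthiness(direct, referrals, referral_weight=0.5):
    """Combine direct and referral experiences, each a list of (outcome, certainty)
    pairs with values in [0, 1], into a single trust score in [0, 1]."""
    def weighted(experiences):
        total = sum(c for _, c in experiences)
        return sum(o * c for o, c in experiences) / total if total else 0.0
    return (weighted(direct) + referral_weight * weighted(referrals)) / (1 + referral_weight)

def context_accepted(direct, referrals, threshold=0.7):
    """Abstract the derived context's quality to a Boolean decision at runtime."""
    return trustworthiness(direct, referrals) >= threshold

direct = [(1.0, 0.9), (0.0, 0.4)]      # (outcome, certainty) of own experiences
referrals = [(1.0, 0.6)]               # experiences reported by other agents
print(context_accepted(direct, referrals))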
Abstract:
Parameter estimation remains a challenge in many important applications, and there is a need to develop methods that exploit the growing capabilities of modern computational systems. Owing to this, Evolutionary Algorithms of various kinds are becoming an especially promising field of research. The main aim of this thesis is to explore theoretical aspects of a specific class of Evolutionary Algorithms, the Differential Evolution (DE) method, and to implement the algorithm as code capable of solving a wide range of problems. Matlab, a numerical computing environment provided by MathWorks Inc., has been used for this purpose. Our implementation empirically demonstrates the benefits of stochastic optimizers over deterministic optimizers in the case of stochastic and chaotic problems. Furthermore, the advanced features of Differential Evolution are discussed and taken into account in the Matlab realization. Test "toy case" examples are presented to show the advantages and disadvantages introduced by extensions of the basic algorithm. Another aim of this thesis is to apply the DE approach to the parameter estimation problem of a system exhibiting chaotic behaviour, taking the well-known Lorenz system with a specific set of parameter values as an example. Finally, the DE approach to the estimation of chaotic dynamics is compared with the Ensemble Prediction and Parameter Estimation System (EPPES) approach, which was recently proposed as a possible solution for similar problems.
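For readers unfamiliar with the method, a minimal DE/rand/1/bin loop is sketched below in Python/NumPy (the thesis implements DE in Matlab). The Rosenbrock function stands in as a test objective, and the population size, F and CR values are common defaults rather than the thesis's settings.

# Minimal Differential Evolution (DE/rand/1/bin) sketch with typical defaults.
import numpy as np

def de(objective, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])        # binomial crossover
            f = objective(trial)
            if f <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f
    best = np.argmin(fit)
    return pop[best], fit[best]

rosenbrock = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
x_best, f_best = de(rosenbrock, bounds=[(-5, 5), (-5, 5)])
print(x_best, f_best)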
Abstract:
Cloud computing is a new paradigm of on-demand computing services that has grown spectacularly over the past ten years. Under the public deployment model, the provider of cloud services describes the service to be delivered, its price, and the penalties applicable when the specifications are violated in a document called the Service Level Agreement (SLA). By signing this contract, the client and the provider seal the guarantee of the quality of service to be received, which requires the provider to manage its resources efficiently in order to honour its commitments. Unfortunately, violations of the SLA specifications turn out to be common, generally because of uncertainty about the behaviour of the client, who may generate a variable number of requests since the resources appear unlimited. This behaviour can, first, have a direct impact on the availability of the service; second, repeated violations are likely to affect the provider's level of trust and its reputation for honouring its commitments. To address these problems, we propose a framework driven by a Bayesian network that, first, classifies providers in a directory according to their level of trust. This directory can be managed by a third party, and a client selects a provider from it before starting to negotiate the SLA. Second, we developed a probabilistic ontology based on a multi-entity Bayesian network that can take uncertainty into account and anticipate violations through inference. This ontology makes predictions in order to prevent violations, using historical data as a knowledge base. The results obtained show the effectiveness of the probabilistic ontology for predicting violations across the set of SLA parameters applied in a cloud environment.
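As a toy illustration of the kind of probabilistic reasoning involved, the sketch below applies Bayes' rule to a single hypothetical "client load leads to SLA violation" dependency. The structure and all probabilities are invented; the thesis builds a multi-entity Bayesian network and a probabilistic ontology trained on historical data.

# Toy sketch of Bayesian-style inference for SLA violation prediction.
# The structure (load -> violation) and all probabilities are invented.

p_high_load = 0.3                          # prior on heavy client request load
p_violation_given = {"high": 0.4,          # hypothetical CPT: P(violation | load)
                     "low": 0.05}

# Marginal probability of a violation (sum over the hidden load variable)
p_violation = (p_high_load * p_violation_given["high"]
               + (1 - p_high_load) * p_violation_given["low"])

# Posterior on high load given that a violation was observed (Bayes' rule)
p_high_given_violation = p_high_load * p_violation_given["high"] / p_violation

print(f"P(violation) = {p_violation:.3f}")
print(f"P(high load | violation) = {p_high_given_violation:.3f}")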
Abstract:
In today's complex computing environments, managing data has become a primary concern for all industries. Information security is the greatest challenge, and it has become essential to secure enterprise system resources such as databases and operating systems from attacks by unknown outsiders. Our approach plays a major role in detecting and managing vulnerabilities in complex computing systems. As a vulnerability scanner, it allows enterprises to assess two primary tiers through a single interface, providing a secure system that is also compatible with industry security compliance requirements. It gives an overall view of the vulnerabilities in the database by scanning them automatically with minimum overhead, together with a detailed view of the risks involved and their corresponding ratings. Based on these priorities, an appropriate mitigation process can be implemented to ensure a secure system. The results show that our approach can effectively optimize the time and cost involved compared to existing systems.
Abstract:
ETL (Extract, Transform, Load) tools allow data flows to be modelled, facilitating the automatic execution of repetitive processes. The exchange of information between two heterogeneous data models is a clear example of the kind of task that can be tackled with ETL software. The Kettle project is an ETL tool licensed under the LGPL (Library General Public License) that uses grid computing techniques (parallel and distributed execution) to process large amounts of data in a short time. Kettle combines powerful server-mode execution with an intuitive desktop tool for modelling processes and configuring execution parameters. GeoKettle is an extension of Kettle that adds the ability to handle data with a geographic component, although it is limited to vector data and to a few very specific spatial operations. The European Topic Centre on Land Use and Spatial Information (ETC-LUSI) is promoting a complementary project, called BeETLe, which aims to drastically extend the spatial analysis and transformation capabilities of GeoKettle. To this end, the Sextante project has been chosen, a spatial analysis library that includes more than two hundred raster and vector algorithms. The intention of the BeETLe project is to integrate the set of Sextante algorithms into GeoKettle so that they are available as GeoKettle transformations. The main features of the BeETLe tool include: automation of spatial analysis processes or repetitive transformations of spatial data, parallel and distributed execution (grid computing), the ability to process large amounts of data without memory limitations, and support for raster and vector data. Current Sextante users will find that BeETLe offers them a simple and intuitive way of working, adding to Sextante all the power that ETL tools provide for processing and transforming information in databases.
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them also involves complicated workflows implemented as shell scripts. A new grid middleware system that is well suited to climate modelling applications is presented in this paper. Grid Remote Execution (G-Rex) allows climate models to be deployed as Web services on remote computer systems and then launched and controlled as if they were running on the user's own computer. Output from the model is transferred back to the user while the run is in progress to prevent it from accumulating on the remote system and to allow the user to monitor the model. G-Rex has a REST architectural style, featuring a Java client program that can easily be incorporated into existing scientific workflow scripts. Some technical details of G-Rex are presented, with examples of its use by climate modellers.
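The launch-then-stream pattern described above can be sketched generically as follows. The endpoint paths and JSON fields are hypothetical and do not represent the actual G-Rex API (whose client is a Java program); Python's requests library is used purely to illustrate the idea.

# Generic sketch of launching a remote model run and transferring its output
# back while the run is in progress. All endpoint paths and JSON fields are
# hypothetical placeholders, not the real G-Rex interface.
import time
import requests

BASE = "http://example.org/grex"            # hypothetical service URL
job = requests.post(f"{BASE}/jobs", json={"model": "climate-run-01"}).json()
job_url = f"{BASE}/jobs/{job['id']}"        # hypothetical job resource

received = 0
with open("model_output.nc", "wb") as out:
    while True:
        status = requests.get(job_url).json()
        # Fetch only the output produced since the last poll (hypothetical
        # 'offset' parameter) so data never accumulates on the remote system.
        r = requests.get(f"{job_url}/output", params={"offset": received}, stream=True)
        for chunk in r.iter_content(chunk_size=8192):
            out.write(chunk)
            received += len(chunk)
        if status.get("state") == "finished":
            break
        time.sleep(30)                      # keep monitoring while the run progresses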
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, for which no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware, and is suitable for large-scale, multi-domain, heterogeneous environments such as computational Grids. Dynamic load-balancing policies based on global statistics are known to provide optimal load-balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages by adopting distributed job pools and a randomized polling technique. The framework has been successfully applied in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
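A toy simulation of the load-balancing idea, distributed job pools combined with randomized polling, is sketched below. The pool sizes and the rule of stealing half of a donor's jobs are illustrative choices, not the paper's actual policy.

# Toy simulation: each peer keeps a local job pool and, when idle, polls a
# randomly chosen peer for work. Pool sizes and the stealing rule are invented.
import random

random.seed(0)
sizes = [0, 12, 30, 5, 22, 8, 17, 3]        # pending jobs per peer (illustrative)
pools = {f"peer{i}": list(range(n)) for i, n in enumerate(sizes)}

def poll_random_peer(idle_peer):
    """Idle peer asks one randomly chosen peer for half of its pending jobs."""
    donor = random.choice([p for p in pools if p != idle_peer and pools[p]])
    half = len(pools[donor]) // 2
    stolen, pools[donor] = pools[donor][:half], pools[donor][half:]
    pools[idle_peer].extend(stolen)
    return donor, len(stolen)

for peer, jobs in pools.items():
    if not jobs:                            # peer has run out of local work
        donor, n = poll_random_peer(peer)
        print(f"{peer} stole {n} jobs from {donor}")

print({p: len(jobs) for p, jobs in pools.items()})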
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are nowadays widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for the parallel solution of the rendering equation. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to a desired truncation (systematic) error, obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach: at each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
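The stratified-sampling idea can be illustrated with a small Python sketch that estimates the test integral of cos(theta) over the hemisphere (with respect to solid angle), whose exact value is pi. The strata here are a regular grid in the (u1, u2) sampling domain, a simpler partition than the paper's equal sub-domains of orthogonal spherical triangles.

# Stratified Monte Carlo over the hemisphere for the test integrand cos(theta).
# The stratification is a regular grid in the sampling domain, used only to
# illustrate the variance-reduction idea; it is not the paper's triangulation.
import numpy as np

def stratified_hemisphere_estimate(n_strata=16, rng=np.random.default_rng(0)):
    # One uniform sample per stratum of the unit square
    i, j = np.meshgrid(np.arange(n_strata), np.arange(n_strata), indexing="ij")
    u1 = (i + rng.random((n_strata, n_strata))) / n_strata
    u2 = (j + rng.random((n_strata, n_strata))) / n_strata   # azimuth coordinate
    # Uniform directions on the hemisphere have cos(theta) uniform in [0, 1]
    # and pdf 1 / (2*pi) with respect to solid angle.
    cos_theta = u1
    integrand = cos_theta          # f(direction) = cos(theta); azimuth u2 unused here
    return 2 * np.pi * integrand.mean()

print("estimate:", stratified_hemisphere_estimate(), "exact:", np.pi)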
Abstract:
Collaborative mining of distributed data streams in a mobile computing environment is referred to as Pocket Data Mining (PDM). Hoeffding tree techniques have been validated experimentally and analytically for data stream classification. In this paper, we propose, develop and evaluate the adoption of distributed Hoeffding trees for classifying streaming data in PDM applications. We identify a realistic scenario in which different users equipped with smart mobile devices each run a local Hoeffding tree classifier on a subset of the attributes. Thus, we investigate the mining of vertically partitioned datasets with possible overlap of attributes, which is the more likely case. Our experimental results validate the efficiency of the proposed model, achieving promising accuracy for real deployment.
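The vertically partitioned setting can be sketched as follows: each simulated device trains a local Hoeffding tree on its own attribute subset and the devices vote on the class. The sketch assumes the Python river library and its small Phishing stream, and uses plain majority voting, which is not necessarily the paper's exact combination scheme.

# Sketch: per-device Hoeffding trees over overlapping attribute subsets with
# majority voting, evaluated prequentially (test-then-train). Assumes `river`.
from river import tree, datasets

stream = list(datasets.Phishing())
features = sorted(stream[0][0].keys())
# Three overlapping attribute subsets, one per simulated mobile device
subsets = [features[:4], features[2:6], features[4:]]
devices = [tree.HoeffdingTreeClassifier() for _ in subsets]

correct = 0
for x, y in stream:
    votes = []
    for model, subset in zip(devices, subsets):
        x_local = {k: x[k] for k in subset}       # device sees only its attributes
        pred = model.predict_one(x_local)
        if pred is not None:
            votes.append(pred)
        model.learn_one(x_local, y)               # then train on the new example
    if votes and max(set(votes), key=votes.count) == y:
        correct += 1

print(f"majority-vote accuracy: {correct / len(stream):.3f}")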