927 results for peer-to-peer (P2P) computing


Relevance:

100.00%

Publisher:

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations very difficult to carry out. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often not available in a suitable format, and time consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
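As an illustration of the kind of automation the abstract describes, the sketch below (not taken from the paper; the class and field names are hypothetical) collects a list of host descriptions and assembles them into a simple data-centre topology model of the sort a simulator could consume.

```java
import java.util.*;

// Hypothetical, minimal illustration of automated topology definition:
// host records collected from an inventory source are grouped into a
// data-centre model keyed by rack, ready to be fed to a simulator.
public class TopologyBuilder {

    // One physical host as it might be reported by an automated collector.
    record Host(String id, String rack, int cores, int ramGb) {}

    // Group collected hosts by rack to form a simple topology model.
    static Map<String, List<Host>> buildTopology(List<Host> collected) {
        Map<String, List<Host>> topology = new TreeMap<>();
        for (Host h : collected) {
            topology.computeIfAbsent(h.rack(), r -> new ArrayList<>()).add(h);
        }
        return topology;
    }

    public static void main(String[] args) {
        // In a real tool these records would come from monitoring or inventory
        // data rather than hard-coded values.
        List<Host> collected = List.of(
                new Host("host-01", "rack-A", 32, 256),
                new Host("host-02", "rack-A", 16, 128),
                new Host("host-03", "rack-B", 64, 512));
        buildTopology(collected).forEach((rack, hosts) ->
                System.out.println(rack + ": " + hosts.size() + " hosts"));
    }
}
```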

Relevance:

100.00%

Publisher:

Abstract:

The massive use of the internet and of the services it offers by end users drives the evolution of those services, motivating companies to invest in developing this type of solution. Requirements such as computing power, flexibility and scalability are increasingly inseparable from application development, which leads to the emergence of paradigms such as Cloud Computing. It is in this context that the present work arises. With the aim of studying the Cloud Computing paradigm, a study of this topic is carried out in which its concept and historical evolution are detailed and the different types of deployment it supports are compared. The study then examines the Azure platform, analysing its topology and architecture, detailing its components and the way in which it mitigates some of the problems mentioned. Building on this theoretical background, a practical prototype is developed on the platform, exploring some of the particularities of its topology and interacting with the main social networks. The study culminates in an analysis of the benefits and drawbacks of Azure and, through a survey of the company's needs, identifies the opportunities that using the platform may provide.

Relevance:

100.00%

Publisher:

Abstract:

The human ability to perceive depth is something of a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of common objects that we acquire through experience. Modelling the behaviour of the brain is still out of reach, which is why the large problem of 3D perception and, further, interpretation is split into a sequence of more tractable problems. A great deal of research in robot vision aims to obtain 3D information about the surrounding scene. Most of it models human stereopsis by using two cameras as if they were two eyes. This method, known as stereo vision, has been widely studied in the past, is being studied at present, and will surely attract much work in the future; it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in the two camera image planes. Before 3D information can be inferred, however, the mathematical models of both cameras have to be known. This step, known as camera calibration, is described in detail in the thesis. Perhaps the most important problem in stereo vision is determining the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult to solve and is currently investigated by many researchers. Epipolar geometry allows the correspondence problem to be reduced, and an approach based on it is described in the thesis. Nevertheless, it does not solve the problem completely, as many situations must still be handled, for example points without correspondence due to surface occlusion or simply due to projection outside the camera's field of view. The thesis focuses on structured light, one of the techniques most frequently used to reduce the problems associated with stereo vision. Structured light is based on the relationship between a projected light pattern and the image of that pattern captured by a sensor: the deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. The technique has been widely used in applications such as 3D object reconstruction, robot navigation and quality control. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matches, which forces the use of computationally demanding algorithms to search for the correct correspondences. In recent years another structured light technique has grown in importance: coding the light projected onto the scene so that it can be used to obtain a unique match. Each token of light is imaged by the camera, and its label must be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the framework of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly. Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of 3D measurement of static objects and of the more complicated measurement of moving objects; the technique can be used in both cases because the pattern is coded in a single projection shot, so it is applicable to several robot vision tasks. Our interest is focused on the mathematical study of the camera and pattern projector models, on how these models can be obtained by calibration, and on how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. The thesis starts, however, from the assumption that the corresponding points can be well segmented from the captured image. Computer vision is a huge field, and a great deal of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis begins at the next step, usually known as depth perception or 3D measurement.
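For readers unfamiliar with the geometry the abstract refers to, the standard two-view formulation (textbook material, not reproduced from the thesis) is as follows: a scene point projects into the two image planes through the camera matrices obtained by calibration, and the epipolar constraint restricts where its homologous point can lie.

```latex
% Standard two-view geometry (textbook formulation, not taken from the thesis).
% P and P' are the 3x4 camera matrices obtained by calibration; X is a scene
% point and x, x' are its projections, all in homogeneous coordinates.
\begin{align*}
  x \simeq P X, \qquad x' \simeq P' X
    &\qquad \text{(projection into each image plane)}\\
  x'^{\top} F x = 0
    &\qquad \text{(epipolar constraint: the match of $x$ lies on the line $Fx$)}
\end{align*}
% Given a correspondence (x, x'), the 3D point X is recovered by triangulation,
% e.g. by solving [x]_x P X = 0 and [x']_x P' X = 0 in a least-squares sense.
```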

Relevance:

100.00%

Publisher:

Abstract:

Maps of kriged soil properties for precision agriculture are often based on a variogram estimated from too few data because the costs of sampling and analysis are often prohibitive. If the variogram has been computed by the usual method of moments, it is likely to be unstable when there are fewer than 100 data. The scale of variation in soil properties should be investigated prior to sampling by computing a variogram from ancillary data, such as an aerial photograph of the bare soil. If the sampling interval suggested by this is large in relation to the size of the field there will be too few data to estimate a reliable variogram for kriging. Standardized variograms from aerial photographs can be used with standardized soil data that are sparse, provided the data are spatially structured and the nugget:sill ratio is similar to that of a reliable variogram of the property. The problem remains of how to set this ratio in the absence of an accurate variogram. Several methods of estimating the nugget:sill ratio for selected soil properties are proposed and evaluated. Standardized variograms with nugget:sill ratios set by these methods are more similar to those computed from intensive soil data than are variograms computed from sparse soil data. The results of cross-validation and mapping show that the standardized variograms provide more accurate estimates, and preserve the main patterns of variation better than those computed from sparse data.
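For reference, the "usual method of moments" estimator and the nugget:sill ratio mentioned in the abstract are standard geostatistical quantities; the notation below is generic rather than copied from the paper.

```latex
% Method-of-moments (Matheron) estimator of the semivariogram at lag h, where
% z(x_i) is the soil property at location x_i and m(h) is the number of pairs
% of observations separated by h:
\[
  \hat{\gamma}(\mathbf{h}) \;=\; \frac{1}{2\,m(\mathbf{h})}
      \sum_{i=1}^{m(\mathbf{h})}
      \bigl\{ z(\mathbf{x}_i) - z(\mathbf{x}_i + \mathbf{h}) \bigr\}^{2}
\]
% For a bounded variogram model with nugget variance c_0 and structured
% (partial sill) variance c, the nugget:sill ratio referred to above is
\[
  \text{nugget:sill ratio} \;=\; \frac{c_0}{c_0 + c}
\]
```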

Relevance:

100.00%

Publisher:

Abstract:

The Java language first came to public attention in 1995. Within a year it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including object orientation and platform independence, as well as built-in network support and threads, have encouraged this view. Today, Java is used in almost every type of computer-based system, ranging from sensor networks to high-performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems are then described: MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.
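The language features the abstract lists, built-in threads and networking, are part of the standard JDK; the fragment below is a generic illustration of those features rather than an excerpt from MPJ Express or Tycho.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Minimal illustration of Java's built-in support for concurrency and
// networking: a tiny echo server that handles each connection on a thread
// drawn from a pool. Plain JDK code, not MPJ Express or Tycho.
public class EchoServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();   // blocking accept
                pool.submit(() -> echo(client));   // handle each client concurrently
            }
        }
    }

    private static void echo(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line);                 // echo the message back
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```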

Relevance:

100.00%

Publisher:

Abstract:

The work reported in this paper proposes 'Intelligent Agents', a swarm-array computing approach that applies autonomic computing concepts to parallel computing systems in order to build reliable systems for space applications. Swarm-array computing is a novel, swarm-robotics-inspired computing approach regarded as a path towards autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is treated as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
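A rough sketch of carrier agents moving a task away from a core whose failure is predicted might look like the following. It is entirely illustrative: the class names, the failure-prediction rule and the migration policy are assumptions, not details taken from the paper.

```java
import java.util.*;

// Illustrative sketch only: a carrier agent holds a task and hops to a
// healthier core whenever the core it occupies is predicted to fail.
public class CarrierAgentDemo {

    static class Core {
        final String id;
        double failureProbability;   // assumed output of some failure predictor
        Core(String id, double p) { this.id = id; this.failureProbability = p; }
    }

    static class CarrierAgent {
        final String taskId;
        Core location;
        CarrierAgent(String taskId, Core start) { this.taskId = taskId; this.location = start; }

        // If the current core is predicted to fail, migrate to the core with
        // the lowest predicted failure probability (self-healing behaviour).
        void step(List<Core> cores, double threshold) {
            if (location.failureProbability > threshold) {
                Core target = cores.stream()
                        .min(Comparator.comparingDouble(c -> c.failureProbability))
                        .orElse(location);
                System.out.printf("Task %s migrates %s -> %s%n",
                        taskId, location.id, target.id);
                location = target;
            }
        }
    }

    public static void main(String[] args) {
        List<Core> cores = List.of(new Core("core-0", 0.9),
                                   new Core("core-1", 0.1),
                                   new Core("core-2", 0.3));
        CarrierAgent agent = new CarrierAgent("task-42", cores.get(0));
        agent.step(cores, 0.5);   // predicted failure triggers migration
    }
}
```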

Relevance:

100.00%

Publisher:

Abstract:

Space applications demand highly reliable systems; autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm-robotics approaches, leading to swarm-array computing, a novel technique for achieving self-managing distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and on swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.

Relevance:

100.00%

Publisher:

Abstract:

The work reported in this paper proposes Swarm-Array computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing. The approach aims to apply autonomic computing constructs to parallel computing systems and in effect achieve the self-ware objectives that describe self-managing systems. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Approaches that bind these constituents together are proposed. Space applications employing FPGAs are identified as a potential area for applying swarm-array computing to build reliable systems. The feasibility of a proposed approach is validated on the SeSAm multi-agent simulator, and landscapes are generated using the MATLAB toolkit.
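To make the four constituents concrete, the toy sketch below is one way the computing system, the task, the swarm and the landscape could be represented together. It is purely illustrative: none of these types come from the paper, and the "landscape" here is just a grid of core health values rather than a MATLAB-generated surface.

```java
import java.util.*;

// Toy representation of the four swarm-array computing constituents:
// the computing system (cores), the task, the swarm (agents carrying
// sub-tasks) and the landscape (a grid of core "health" values).
public class SwarmArrayModel {

    record Core(int x, int y) {}                // element of the computing system
    record Task(String name, int subTasks) {}   // the problem/task

    public static void main(String[] args) {
        int size = 4;
        double[][] landscape = new double[size][size];   // health values in [0,1]
        Random rng = new Random(7);
        for (double[] row : landscape) Arrays.setAll(row, i -> rng.nextDouble());

        Task task = new Task("attitude-control", 3);

        // The swarm: each sub-task is carried by an agent placed on the
        // healthiest unoccupied core of the landscape.
        List<Core> swarmPlacement = new ArrayList<>();
        Set<Core> used = new HashSet<>();
        for (int s = 0; s < task.subTasks(); s++) {
            Core best = null;
            for (int x = 0; x < size; x++)
                for (int y = 0; y < size; y++) {
                    Core c = new Core(x, y);
                    if (!used.contains(c) &&
                        (best == null || landscape[x][y] > landscape[best.x()][best.y()]))
                        best = c;
                }
            used.add(best);
            swarmPlacement.add(best);
        }
        System.out.println(task.name() + " placed on cores " + swarmPlacement);
    }
}
```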

Relevance:

100.00%

Publisher:

Abstract:

A Blueprint for Affective Computing: A sourcebook and manual is the very first attempt to ground affective computing within the disciplines of psychology, affective neuroscience, and philosophy. This book illustrates the contributions of each of these disciplines to the development of the ever-growing field of affective computing. In addition, it demonstrates practical examples of cross-fertilization between disciplines in order to highlight the need for integration of computer science, engineering and the affective sciences.

Relevance:

100.00%

Publisher:

Abstract:

The single factor limiting the harnessing of the enormous computing power of clusters for parallel computing is the lack of appropriate software. Present cluster operating systems are not built to support parallel computing: they do not provide services to manage parallelism. The cluster operating environments used to assist the execution of parallel applications do not support both the Message Passing (MP) and Distributed Shared Memory (DSM) paradigms; these paradigms are only offered as separate components implemented at the user level, as libraries and independent servers. Because of these poor operating systems, users must deal with the individual computers of a cluster rather than seeing the cluster as a single powerful computer; a Single System Image of the cluster is not offered to users. There is a need for an operating system for clusters. We claim and demonstrate that it is possible to develop a cluster operating system that is able to efficiently manage parallelism, support Message Passing and DSM, and offer the Single System Image. In order to substantiate this claim, the first version of a cluster operating system, called GENESIS, that manages parallelism and offers the Single System Image has been developed.
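The two paradigms the abstract contrasts can be illustrated with ordinary JDK code (a generic illustration of the programming models, not GENESIS code): message passing exchanges explicit messages between workers, while a shared-memory style has workers update one common data structure.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Generic illustration of the two programming paradigms a cluster OS would
// have to support (plain JDK code, not GENESIS): message passing via an
// explicit queue, and a shared-memory style via a shared counter.
public class ParadigmsDemo {
    public static void main(String[] args) throws Exception {
        // --- Message passing: a producer sends, a consumer receives ---
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        Thread producer = new Thread(() -> channel.add("result=42"));
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received: " + channel.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();

        // --- Shared-memory style: workers update one shared variable ---
        AtomicLong shared = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> shared.addAndGet(10));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("shared total: " + shared.get());
    }
}
```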

Relevance:

100.00%

Publisher:

Abstract:

Present operating systems are not built to support parallel computing: they do not provide services to manage parallelism, i.e., to globally manage parallel processes and computational resources. The cluster operating environments used to assist the execution of parallel applications do not support both programming paradigms, message passing (MP) and distributed shared memory (DSM); they are mainly offered as separate components implemented at the user level, as libraries and independent server processes. Because of these poor operating systems, users must deal with a cluster as a set of independent computers rather than seeing it as a single powerful computer; a single system image (SSI) of the cluster is not offered to users. There is a need for an operating system for clusters. We claim and demonstrate in this paper that it is possible to develop a cluster operating system that is able to efficiently manage parallelism; use cluster resources efficiently; support MP, in the form of standard MP and PVM, as well as DSM; offer SSI; and be easy to use. We show that to achieve these aims such an operating system should inherit many features of a distributed operating system and provide new services which address the needs of parallel processes, cluster resources, and application developers. In order to substantiate the claim, the first version of a cluster operating system managing parallelism and offering SSI, called GENESIS, has been developed.

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to introduce a wireless web-based ordering system, called iMenu, for the restaurant industry. Design/methodology/approach – Using wireless devices such as personal digital assistants and WebPads, the system realizes the paradigm of pervasive computing at the tableside. Detailed system requirements, design, implementation and evaluation of iMenu are presented.

Findings – The evaluation of iMenu shows that it increases the productivity of restaurant staff. It also has other desirable features such as integration, interoperation and scalability. Compared with the traditional restaurant ordering process, customers using the system receive faster and better service, restaurant staff cooperate more efficiently and make fewer mistakes, and business owners thus earn greater profits.

Originality/value – While many researchers have explored the use of wireless web-based information systems in different industries, this paper presents a system that employs a wireless, multi-tiered, web-based architecture to build pervasive computing systems. Instead of discussing theoretical issues in pervasive computing, we focus on the practical issues of developing a real system, such as the choice of web-based architecture, the design of input methods for small screens, and response time in wireless web-based systems.
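As a flavour of what one tier of such a multi-tiered, web-based ordering system might look like, here is a minimal HTTP endpoint written with the JDK's built-in server. The endpoint path, the plain-text payload and the response format are assumptions made purely for illustration; the paper's actual architecture and data model are not described in this abstract.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Illustrative single tier of a web-based ordering system: a handheld
// client could POST an order to /order and receive a confirmation.
// Hypothetical endpoint and response format, not taken from iMenu.
public class OrderServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/order", exchange -> {
            // Read the order submitted from the tableside device.
            String order = new String(exchange.getRequestBody().readAllBytes(),
                                      StandardCharsets.UTF_8);
            byte[] body = ("Order received: " + order).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);   // confirmation back to the client device
            }
        });
        server.start();            // kitchen/back-office tiers would sit behind this
    }
}
```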

Relevance:

100.00%

Publisher:

Abstract:

The thesis reviews the literature relating to girls and computing within a framework structured around three specific questions. First, are there differences between girls and boys in their participation in class computing activities and/or in non-class computing activities? Second, do these differences in participation in computing activities have broader implications which justify the growing concern about the under-representation of girls? Third, why are girls under-represented in these activities? Although the available literature is predominantly descriptive, the underlying implicit theoretical model is essentially a social learning model. Girls' differential participation is attributed to learned attitudes towards computing rather than to differences between girls and boys in general ability. These attitudes, which stress the masculine, mathematical and technological aspects of computing, are developed through modelling, direct experience, intrinsic and extrinsic reinforcement, and generalisation from pre-existing attitudes to related curriculum areas. In the literature it is implicitly assumed that these attitudes underlie girls' decisions to self-select out of computing activities. In this thesis, predictions from a social learning model are complemented by predictions derived from expectancy-value, cognitive dissonance and self-perception theories. These are tested in three separate studies. Study one provides data from a pretest-posttest study of 24 children in a year four class learning BASIC. It examines pre- and posttest differences between girls and boys in computing experience, knowledge and achievement, as well as the factors relating to computing achievement. Study two uses a pretest-posttest control group design to study gender differences in the impact of the introduction of Logo into years 1, 3, 5 and 7 in both coeducational and single-sex settings, using a sample of 222 children from three schools. Study three utilises a larger sample of 1176 students, drawn from three secondary schools and five primary schools, enabling an evaluation of gender differences in relation to a wide range of class computing experiences and in a broader range of school contexts. The overall results are consistent across the three studies, supporting the contention that social factors, rather than ability differences, influence girls' participation and achievement in computing. The more global theoretical framework, drawing on social learning, expectancy-value, cognitive dissonance and self-perception theories, provides a more adequate explanation of gender differences in participation than does any one of these models alone.