71 results for Mobile and ubiquitous computing
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historical patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
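As a rough illustration of the data-parallel approach this chapter surveys, the sketch below (not taken from the chapter; the dataset, item names and worker count are hypothetical) partitions a transaction dataset across worker processes, mines local item counts in each partition, and merges the partial results into globally frequent items.

```python
# Minimal sketch (assumed example): data-parallel pattern counting, where each
# worker mines local frequencies on its partition and the results are merged.
from collections import Counter
from multiprocessing import Pool

def count_items(partition):
    """Count item occurrences in one data partition (the 'local' mining step)."""
    counts = Counter()
    for transaction in partition:
        counts.update(set(transaction))
    return counts

def parallel_frequent_items(transactions, min_support, workers=4):
    """Split the data, mine the partitions in parallel, then merge the counts."""
    chunk = max(1, len(transactions) // workers)
    partitions = [transactions[i:i + chunk] for i in range(0, len(transactions), chunk)]
    with Pool(workers) as pool:
        partial_counts = pool.map(count_items, partitions)
    total = sum(partial_counts, Counter())
    return {item: c for item, c in total.items() if c >= min_support}

if __name__ == "__main__":
    data = [["milk", "bread"], ["bread", "butter"], ["milk", "bread", "butter"]]
    print(parallel_frequent_items(data, min_support=2, workers=2))
```

The same split-mine-merge structure carries over to cluster, Grid or Cloud settings, where the partitions live on different machines rather than in different processes.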
Abstract:
This paper introduces an architecture for real-time identification and modelling at a copper mine using new technologies such as M2M and cloud computing, with a server in the cloud and an Android client inside the mine. The proposed design enables pervasive mining: a system with wider coverage, higher communication efficiency, better fault tolerance, and anytime, anywhere availability. The solution was designed for a plant inside the mine that cannot tolerate interruption and for which in situ, real-time identification is an essential part of the system, allowing aspects such as instability to be controlled by adjusting the corresponding parameters without stopping the process.
Abstract:
The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.
Abstract:
The NERC UK SOLAS-funded Reactive Halogens in the Marine Boundary Layer (RHaMBLe) programme comprised three field experiments. This manuscript presents an overview of the measurements made within the two simultaneous remote experiments conducted in the tropical North Atlantic in May and June 2007. Measurements were made from two mobile and one ground-based platforms. The heavily instrumented cruise D319 on the RRS Discovery from Lisbon, Portugal to São Vicente, Cape Verde and back to Falmouth, UK was used to characterise the spatial distribution of boundary layer components likely to play a role in reactive halogen chemistry. Measurements onboard the ARSF Dornier aircraft were used to allow the observations to be interpreted in the context of their vertical distribution and to confirm the interpretation of atmospheric structure in the vicinity of the Cape Verde islands. Long-term ground-based measurements at the Cape Verde Atmospheric Observatory (CVAO) on São Vicente were supplemented by long-term measurements of reactive halogen species and characterisation of additional trace gas and aerosol species during the intensive experimental period. This paper presents a summary of the measurements made within the RHaMBLe remote experiments and discusses them in their meteorological and chemical context as determined from these three platforms and from additional meteorological analyses. Air always arrived at the CVAO from the North East with a range of air mass origins (European, Atlantic and North American continental). Trace gases were present at stable and fairly low concentrations with the exception of a slight increase in some anthropogenic components in air of North American origin, though NOx mixing ratios during this period remained below 20 pptv. Consistency with these air mass classifications is observed in the time series of soluble gas and aerosol composition measurements, with additional identification of periods of slightly elevated dust concentrations consistent with the trajectories passing over the African continent. The CVAO is shown to be broadly representative of the wider North Atlantic marine boundary layer; measurements of NO, O3 and black carbon from the ship are consistent with a clean Northern Hemisphere marine background. Aerosol composition measurements do not indicate elevated organic material associated with clean marine air. Closer to the African coast, black carbon and NO levels start to increase, indicating greater anthropogenic influence. Lower ozone in this region is possibly associated with the increased levels of measured halocarbons, associated with the nutrient rich waters of the Mauritanian upwelling. Bromide and chloride deficits in coarse mode aerosol at both the CVAO and on D319 and the continuous abundance of inorganic gaseous halogen species at CVAO indicate significant reactive cycling of halogens. Aircraft measurements of O3 and CO show that surface measurements are representative of the entire boundary layer in the vicinity both in diurnal variability and absolute levels. Above the inversion layer similar diurnal behaviour in O3 and CO is observed at lower mixing ratios in the air that had originated from south of Cape Verde, possibly from within the ITCZ. ECMWF calculations on two days indicate very different boundary layer depths and aircraft flights over the ship replicate this, giving confidence in the calculated boundary layer depth.
Abstract:
The state of river water deterioration in the Agueda hydrographic basin, mostly in the western part, partly reflects the high rate of housing and industrial development in this area in recent years. The streams have acted as a sink for organic and inorganic loads from several origins: domestic and industrial sewage and agricultural waste. The contents of the heavy metals Cr, Cd, Ni, Cu, Pb, and Zn were studied by sequential chemical extraction of the principal geochemical phases of streambed sediments, in the <63 μm fraction, in order to assess their potential availability to the environment, investigating the metal concentrations, assemblages, and trends. The granulometric and mineralogical characteristics of this sediment fraction were also studied. This study revealed clear pollution by Cr, Cd, Ni, Cu, Zn, and Pb, resulting from both natural and anthropogenic origins. The chemical transport of metals appears to occur essentially through the following geochemical phases, in decreasing order of significance: (exchangeable + carbonates) >> (organics) >> (Mn and Fe oxides and hydroxides). The (exchangeable + carbonate) phase plays an important part in the fixation of Cu, Ni, Zn, and Cd. The organic phase is important in the fixation of Cr, Pb, and also Cu and Ni. Analyzing the metal contents in the residual fraction, we conclude that Zn and Cd are the most mobile, and Cr and Pb are less mobile than Cu and Ni. The proximity of the pollutant sources and the timing of the influx of contaminated material control the distribution of the contaminant-related sediments locally and on the network scale.
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the dimension and characteristics of the networks adopted in the experiments, and the aim from which it originated. The purpose of this work was to build a term of comparison for validating deterministic and computationally inexpensive algorithms that can be used in practical engineering applications, such as multicast transmission in the Internet. On the other hand, the large dimensions of our sample networks require the adoption of a parallel implementation of the Steiner GA, which is able to deal with such large problem instances.
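To make the approach concrete, the sketch below shows one common way such a Steiner GA is encoded; it is an assumed, generic formulation rather than the paper's implementation, with illustrative operators, parameters and a worker-pool parallelisation of the fitness evaluation. Each chromosome is a bit vector selecting candidate Steiner nodes, and its fitness is the weight of a minimum spanning tree over the terminals plus the selected nodes.

```python
# Illustrative sketch only: a generic GA encoding for the Steiner Problem in
# Networks, with fitness evaluation distributed over a process pool.
import heapq
import random
from functools import partial
from multiprocessing import Pool

def mst_weight(nodes, edges):
    """Prim's MST over the subgraph induced by `nodes`; None if disconnected."""
    nodes = set(nodes)
    adj = {v: [] for v in nodes}
    for u, v, w in edges:
        if u in nodes and v in nodes:
            adj[u].append((w, v))
            adj[v].append((w, u))
    start = next(iter(nodes))
    visited, total, frontier = {start}, 0.0, list(adj[start])
    heapq.heapify(frontier)
    while frontier and len(visited) < len(nodes):
        w, v = heapq.heappop(frontier)
        if v not in visited:
            visited.add(v)
            total += w
            for e in adj[v]:
                heapq.heappush(frontier, e)
    return total if len(visited) == len(nodes) else None

def fitness(bits, candidates, terminals, edges):
    """Tree weight for the selected Steiner nodes; infinite if disconnected."""
    chosen = {n for n, b in zip(candidates, bits) if b}
    w = mst_weight(set(terminals) | chosen, edges)
    return w if w is not None else float("inf")

def steiner_ga(candidates, terminals, edges, pop_size=40, generations=50):
    population = [[random.randint(0, 1) for _ in candidates] for _ in range(pop_size)]
    score = partial(fitness, candidates=candidates, terminals=terminals, edges=edges)
    with Pool() as pool:
        for _ in range(generations):
            # Parallel fitness evaluation: the costly step for large networks.
            ranked = sorted(zip(pool.map(score, population), population))
            parents = [c for _, c in ranked[: pop_size // 2]]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a)) if len(a) > 1 else 0
                child = a[:cut] + b[cut:]
                child[random.randrange(len(child))] ^= 1  # point mutation
                children.append(child)
            population = parents + children
    best_w, best = min(zip(map(score, population), population))
    return best, best_w

if __name__ == "__main__":
    # Tiny hypothetical instance: nodes 0-3 are terminals, 4-5 are candidates.
    edges = [(0, 4, 1.0), (1, 4, 1.0), (2, 5, 1.0), (3, 5, 1.0), (4, 5, 1.0), (0, 1, 3.0)]
    print(steiner_ga(candidates=[4, 5], terminals=[0, 1, 2, 3], edges=edges))
```

A coarse-grained alternative is the island model, where each processor evolves its own subpopulation and periodically exchanges its best chromosomes; either way, the expensive per-chromosome evaluation is what the parallel implementation distributes.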
Abstract:
Frequent pattern discovery in structured data is receiving increasing attention in many application areas of science. However, the computational complexity and the large amount of data to be explored often make sequential algorithms unsuitable. In this context, high-performance distributed computing becomes a very interesting and promising approach. In this paper we present a parallel formulation of the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The application is characterized by a highly irregular tree-structured computation. No estimate is available for task workloads, which follow a power-law distribution over a wide range. The proposed approach allows dynamic resource aggregation and provides fault and latency tolerance. These features make the distributed application suitable for multi-domain heterogeneous environments, such as computational Grids. The distributed application has been evaluated on the well-known National Cancer Institute’s HIV-screening dataset.
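To make the load-balancing issue concrete, the sketch below (an illustration with synthetic task sizes, not the paper's framework) shows the kind of dynamic, pull-based scheduling that irregular, power-law-distributed workloads call for: workers fetch one task at a time from a shared queue instead of receiving a fixed static partition.

```python
# Illustration only: dynamic (pull-based) scheduling of irregular tasks, the
# strategy needed when per-task workloads follow a power law and no workload
# estimate is available up front. Task sizes here are synthetic.
import random
import time
from multiprocessing import Pool

def mine_subtree(task_size):
    """Stand-in for expanding one subtree of the pattern-search space."""
    time.sleep(task_size)          # simulate a highly variable amount of work
    return task_size

if __name__ == "__main__":
    random.seed(0)
    # Synthetic power-law-like workloads: most tasks tiny, a few much larger.
    tasks = [0.001 * (1.0 / (random.random() + 1e-3)) ** 1.5 for _ in range(200)]
    tasks = [min(t, 0.2) for t in tasks]   # cap so the demo finishes quickly
    with Pool(processes=4) as pool:
        # chunksize=1: each worker fetches a new task as soon as it is free,
        # which balances load despite the irregular task sizes.
        results = list(pool.imap_unordered(mine_subtree, tasks, chunksize=1))
    print(f"processed {len(results)} tasks, total simulated work {sum(results):.2f}s")
```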
Abstract:
Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. It also demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.
Abstract:
The design space of emerging heterogeneous multi-core architectures with reconfigurable elements makes it feasible to design mixed fine-grained and coarse-grained parallel architectures. This paper presents a hierarchical composite array design which extends the current design space of regular array design by combining a sequence of transformations. This technique is applied to derive a new design of a pipelined parallel regular array with different dataflows between the phases of computation.
Abstract:
Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft due to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems which are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieve autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, and landscapes representing the computing space and problem are generated using MATLAB.
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model. The illumination model used determines the kernel of the equation under consideration. Nowadays, Monte Carlo methods are widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to the stratified Monte Carlo integration method for solving the rendering equation in parallel. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to within a desired truncation error (systematic error), which fixes the number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we solve multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows an easy parallel realization and leads to an improvement in the convergence of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
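For reference, a standard form of the rendering equation and its Neumann-series expansion is given below; the notation is a common textbook convention, assumed here rather than taken from the paper, and shows what the truncated iterative Monte Carlo scheme estimates.

```latex
% Common textbook form (notation assumed, not taken from the paper).
\begin{align}
  L(x,\omega) &= L_e(x,\omega)
    + \int_{\Omega} k(x,\omega,\omega')\, L\big(h(x,\omega'),\omega'\big)\, d\omega'
    \;=\; L_e + \mathcal{K}L, \\
  L &= \sum_{i=0}^{\infty} \mathcal{K}^{i} L_e
    \;\approx\; \sum_{i=0}^{N} \mathcal{K}^{i} L_e ,
\end{align}
where $\Omega$ is the hemisphere of incoming directions, $k$ is the kernel fixed
by the illumination model, $h(x,\omega')$ is the point visible from $x$ in
direction $\omega'$, and $N$ is chosen so that the truncation (systematic)
error of the Neumann series stays below the desired tolerance.
```

Each term $\mathcal{K}^{i} L_e$ is a multi-dimensional integral over the hemisphere, which is where the uniform partitioning into orthogonal spherical triangles and the stratified sampling come in.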
Abstract:
This paper addresses the impact of imperfect synchronisation on D-STBC when combined with incremental relaying. To suppress this impact, a novel detection scheme is proposed which retains the two key features of the STBC principle: simplicity (i.e. linear computational complexity) and optimality (i.e. maximum likelihood). These two features make the new detector very suitable for low-power wireless networks (e.g. sensor networks).
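For context, under perfect synchronisation the classical (distributed) Alamouti STBC already admits symbol-wise maximum-likelihood detection with linear complexity via simple linear combining. The equations below show that standard baseline only (general textbook material, not the paper's proposed detector, which targets the imperfectly synchronised case).

```latex
% Standard Alamouti combining under perfect synchronisation (baseline only;
% the paper's detector addresses the imperfectly synchronised case).
\begin{align}
  r_1 &= h_1 s_1 + h_2 s_2 + n_1, &
  r_2 &= -h_1 s_2^{*} + h_2 s_1^{*} + n_2, \\
  \hat{s}_1 &= h_1^{*} r_1 + h_2 r_2^{*}
           = \big(|h_1|^2 + |h_2|^2\big) s_1 + \tilde{n}_1, &
  \hat{s}_2 &= h_2^{*} r_1 - h_1 r_2^{*}
           = \big(|h_1|^2 + |h_2|^2\big) s_2 + \tilde{n}_2,
\end{align}
so each symbol can be detected independently (maximum likelihood per symbol),
which is the linear computational complexity referred to above.
```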