119 results for parallel processing systems
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight compact head for use on a robot arm, and the further development of a robot head to look specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded, to include the control of robot heads on mobile platforms and control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
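The scale-up strategy described above can be illustrated with a minimal partition/count/merge sketch. This is an assumption-laden illustration, not the chapter's own code: the function names (`local_counts`, `parallel_item_counts`) are invented for this example, and a thread pool stands in for the process pool, cluster job or Grid/Cloud task set that a real deployment would use.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def local_counts(chunk):
    """Count item occurrences in one data partition (the 'map' step)."""
    c = Counter()
    for transaction in chunk:
        c.update(transaction)
    return c

def parallel_item_counts(transactions, n_workers=4):
    """Partition the dataset, count each partition concurrently, then merge
    the partial counts (the 'reduce' step). At scale the worker pool would
    be replaced by distributed processes on a Grid or Cloud."""
    chunks = [transactions[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        partials = list(ex.map(local_counts, chunks))
    total = Counter()
    for p in partials:
        total.update(p)
    return total
```

The point of the sketch is the communication pattern: only the compact partial counts, not the raw transactions, cross worker boundaries.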
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
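To make the communication structure concrete, the following is a minimal single-process sketch of data-parallel k-means, in which each data partition plays the role of one process. It is not the paper's algorithm: the dynamic-group protocol itself is not reproduced here, only the per-partition partial sums and the global merge step that such a protocol would make cheaper; all function names are illustrative.

```python
def assign(points, centroids):
    """Assign each point to its nearest centroid (purely local, no communication)."""
    labels = []
    for p in points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels

def local_stats(points, labels, k):
    """Per-partition partial sums and counts -- the only data that must be
    communicated between processes (partitions assumed non-empty)."""
    dim = len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, lab in zip(points, labels):
        counts[lab] += 1
        for j in range(dim):
            sums[lab][j] += p[j]
    return sums, counts

def parallel_kmeans(partitions, centroids, iters=10):
    """Merging the partial statistics below stands in for the collective
    communication step that a dynamic-group protocol would optimise."""
    k = len(centroids)
    for _ in range(iters):
        stats = [local_stats(part, assign(part, centroids), k) for part in partitions]
        for c in range(k):
            total = sum(s[1][c] for s in stats)
            if total:
                centroids[c] = tuple(
                    sum(s[0][c][j] for s in stats) / total
                    for j in range(len(centroids[c]))
                )
    return centroids
```

Each iteration exchanges only k centroid sums and counts per partition, independent of the number of points, which is why the merge step dominates scalability at extreme process counts.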
Abstract:
It is well known that atmospheric concentrations of carbon dioxide (CO2) (and other greenhouse gases) have increased markedly as a result of human activity since the industrial revolution. It is perhaps less appreciated that natural and managed soils are an important source and sink for atmospheric CO2 and that, primarily as a result of the activities of soil microorganisms, there is a soil-derived respiratory flux of CO2 to the atmosphere that overshadows by tenfold the annual CO2 flux from fossil fuel emissions. Therefore small changes in the soil carbon cycle could have large impacts on atmospheric CO2 concentrations. Here we discuss the role of soil microbes in the global carbon cycle and review the main methods that have been used to identify the microorganisms responsible for the processing of plant photosynthetic carbon inputs to soil. We discuss whether application of these techniques can provide the information required to underpin the management of agro-ecosystems for carbon sequestration and increased agricultural sustainability. We conclude that, although crucial in enabling the identification of plant-derived carbon-utilising microbes, current technologies lack the high-throughput ability to quantitatively apportion carbon use by phylogenetic groups and its use efficiency and destination within the microbial metabolome. It is this information that is required to inform rational manipulation of the plant–soil system to favour organisms or physiologies most important for promoting soil carbon storage in agricultural soil.
Abstract:
The time to process each of the W/B processing blocks of a median calculation method on a set of N W-bit integers is improved here by a factor of three compared to the literature. Parallelism uncovered in blocks containing B-bit slices is exploited by independent accumulative parallel counters, so that the median is calculated faster than by any previously known method, for any values of N and W. The improvements to the method are discussed in the context of calculating the median for a moving set of N integers, for which a pipelined architecture is developed. An extra benefit of smaller area for the architecture is also reported.
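The bit-slice idea can be illustrated in software with the classic radix (bit-serial) median selection that such hardware accelerates: the median is built one bit at a time from the most significant bit, and each round needs only population counts, which is what accumulative parallel counters compute. This is a hedged software analogue, not the paper's architecture; the function name and single-bit slicing (B = 1) are simplifications for illustration.

```python
def bitwise_median(values, width):
    """Upper median of unsigned `width`-bit integers, found MSB-first.
    Each round only needs a count of candidates with the current bit = 0."""
    rank = len(values) // 2          # 0-based rank of the upper median
    candidates = list(values)
    result = 0
    for bit in range(width - 1, -1, -1):
        zeros = [v for v in candidates if not (v >> bit) & 1]
        if rank < len(zeros):
            candidates = zeros       # the median has a 0 in this bit position
        else:
            rank -= len(zeros)       # discard all candidates with this bit = 0
            candidates = [v for v in candidates if (v >> bit) & 1]
            result |= 1 << bit       # the median has a 1 in this bit position
    return result
```

Because each of the W rounds depends only on counts, the counting work within a round can proceed in parallel, which is the property the hardware blocks exploit.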
Abstract:
This paper is an initial step towards developing a user-centric e-Government benchmarking model. To that end, public service delivery is discussed first, including the transition to online public service delivery and the need to provide public services through electronic media. Two major e-Government benchmarking methods are critically discussed, and the need to develop a standardized, user-centric benchmarking model is presented. To properly articulate user requirements in service provision, an organizational semiotics method is suggested.
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response, which is integratively presented as a multi-modal interface capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
This paper proposes a conceptual model of a context-aware group support system (GSS) to assist local council employees to perform collaborative tasks in conjunction with inter- and intra-organisational stakeholders. Most discussions about e-government focus on the use of ICT to improve the relationship between government and citizen, not on the relationship between government and employees. This paper seeks to expose the unique culture of UK local councils and to show how a GSS could support local government employer and employee needs.
Abstract:
The history of using vesicular systems for drug delivery to and through skin started nearly three decades ago with a study utilizing phospholipid liposomes to improve skin deposition and reduce systemic effects of triamcinolone acetonide. Subsequently, many researchers evaluated liposomes with respect to skin delivery, with the majority of them recording localized effects and relatively few studies showing transdermal delivery effects. Shortly after this, Transfersomes were developed with claims about their ability to deliver their payload into and through the skin with efficiencies similar to subcutaneous administration. Since these vesicles are ultradeformable, they were thought to penetrate intact skin deep enough to reach the systemic circulation. Their mechanisms of action remain controversial, with diverse processes being reported. Parallel to this development, other classes of vesicles were produced, with ethanol being included in the vesicles to provide flexibility (as in ethosomes), and vesicles were constructed from surfactants and cholesterol (as in niosomes). These ultradeformable vesicles showed variable efficiency in delivering low molecular weight and macromolecular drugs. This article will critically evaluate vesicular systems for dermal and transdermal delivery of drugs, considering both their efficacy and potential mechanisms of action.
Abstract:
This paper presents a parallel genetic algorithm (GA) for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the dimension and characteristics of the networks adopted in the experiments, and the aim from which it originated. The driving aim of this work was to build a comparison term for validating deterministic and computationally inexpensive algorithms that can be used in practical engineering applications, such as multicast transmission in the Internet. Moreover, the large dimensions of our sample networks require the adoption of a parallel implementation of the Steiner GA, which is able to deal with such large problem instances.
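A common GA encoding for the SPN, which the sketch below assumes, is a bit string selecting which optional (non-terminal) Steiner nodes to include, with fitness given by the minimum spanning tree weight of the induced subgraph. This is a minimal sequential illustration of that scheme, not the paper's parallel implementation: all names and parameters (`steiner_ga`, population size, mutation rate) are invented for the example, and a parallel version would evolve subpopulations on separate processors.

```python
import heapq
import random

def mst_weight(nodes, edges):
    """Prim's algorithm on the subgraph induced by `nodes`;
    returns None if that subgraph is disconnected."""
    nodes = set(nodes)
    adj = {u: [] for u in nodes}
    for u, v, w in edges:
        if u in nodes and v in nodes:
            adj[u].append((w, v))
            adj[v].append((w, u))
    start = next(iter(nodes))
    seen, total = {start}, 0
    frontier = list(adj[start])
    heapq.heapify(frontier)
    while frontier and len(seen) < len(nodes):
        w, v = heapq.heappop(frontier)
        if v in seen:
            continue
        seen.add(v)
        total += w
        for e in adj[v]:
            heapq.heappush(frontier, e)
    return total if len(seen) == len(nodes) else None

def steiner_ga(terminals, steiner_nodes, edges, pop=20, gens=40, seed=0):
    """Bit-string chromosomes select optional Steiner nodes; lower MST
    weight of the induced subgraph is better (disconnected = infeasible)."""
    rng = random.Random(seed)
    def fitness(mask):
        chosen = [s for s, bit in zip(steiner_nodes, mask) if bit]
        w = mst_weight(terminals + chosen, edges)
        return w if w is not None else float("inf")
    population = [[rng.randint(0, 1) for _ in steiner_nodes] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[:pop // 2]          # elitist truncation selection
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(len(a) + 1)        # one-point crossover
            child = a[:cut] + b[cut:]
            if child and rng.random() < 0.2:       # bit-flip mutation
                i = rng.randrange(len(child))
                child[i] ^= 1
            children.append(child)
        population = survivors + children
    best = min(population, key=fitness)
    return fitness(best), best
```

On a hub graph where three terminals connect to one Steiner node with weight-1 edges but to each other only with weight-2 edges, the GA should select the hub, cutting the tree cost from 4 to 3.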
Abstract:
Inferring population admixture from genetic data and quantifying it is a difficult but crucial task in evolutionary and conservation biology. Unfortunately, state-of-the-art probabilistic approaches are computationally demanding. Effectively exploiting the computational power of modern multiprocessor systems can thus have a positive impact on Monte Carlo-based simulation of admixture modeling. A novel parallel approach is briefly described and promising results on its message passing interface (MPI)-based C implementation are reported.
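The parallelisation pattern behind such approaches can be sketched with an embarrassingly simple example: give each worker an independently seeded sub-experiment and merge the partial results, the same split/reduce structure an MPI_Reduce performs in an MPI-based C implementation. This is only a structural analogue (estimating pi by dart-throwing), not the admixture model; the names `mc_chunk` and `parallel_pi` are invented for the illustration.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def mc_chunk(seed, n):
    """One worker's independently seeded Monte Carlo sub-experiment:
    hit count for the quarter-circle estimate of pi."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(n_samples, n_workers=4):
    """Split the sample budget over distinctly seeded workers and merge
    the partial hit counts (the reduce step)."""
    per = n_samples // n_workers
    with ThreadPoolExecutor(n_workers) as ex:
        hits = sum(ex.map(mc_chunk, range(n_workers), [per] * n_workers))
    return 4.0 * hits / (per * n_workers)
```

Distinct per-worker seeds matter: identical streams would make the merged estimate no better than a single chain, which is why parallel Monte Carlo codes take care over random-number generation.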
Abstract:
This paper presents a theoretical model of the torsional characteristics of parallel multi-part rope systems. In such systems, the ropes may cable, or wrap around each other, depending on the combination of applied torque, rope tension, length and spacing between the rope parts. Cabling constitutes a failure that might be retrievable but as such can seriously affect the performance of the rope system. The torsional characteristics of the system are very different before and after cabling, and theoretical models are given for both situations. Laboratory tests were performed on both two and four rope systems, with measurements being made of torque at rotations from 0 to 360 deg. Tests were run with different rope spacings, tensions and lengths and the results compared with predictions from the theoretical model. The conclusion from the test results was that the theoretical model predicts both the pre- and post-cabling torsional behaviour with an acceptable level of accuracy.
Abstract:
Covariation in the structural composition of the gut microbiome and the spectroscopically derived metabolic phenotype (metabotype) of a rodent model for obesity were investigated using a range of multivariate statistical tools. Urine and plasma samples from three strains of 10-week-old male Zucker rats (obese (fa/fa, n = 8), lean (fa/-, n = 8) and lean (-/-, n = 8)) were characterized via high-resolution H-1 NMR spectroscopy, and in parallel, the fecal microbial composition was investigated using fluorescence in situ hybridization (FISH) and denaturing gradient gel electrophoresis (DGGE) methods. All three Zucker strains had different relative abundances of the dominant members of their intestinal microbiota (FISH), with the novel observation of a Halomonas and a Sphingomonas species being present in the (fa/fa) obese strain on the basis of DGGE data. The two functionally and phenotypically normal Zucker strains (fa/- and -/-) were readily distinguished from the (fa/fa) obese rats on the basis of their metabotypes, with relatively lower urinary hippurate and creatinine, relatively higher levels of urinary isoleucine, leucine and acetate, and higher plasma LDL and VLDL levels typifying the (fa/fa) obese strain. Collectively, these data suggest a conditional host genetic involvement in selection of the microbial species in each host strain, and that both lean and obese animals could have specific metabolic phenotypes that are linked to their individual microbiomes.