876 results for Computing and software systems


Relevance: 100.00%

Abstract:

The work reported in this paper proposes a novel synergy between parallel computing and swarm robotics to offer a new computing paradigm, 'swarm-array computing', that can harness and apply autonomic computing to parallel computing systems. One of the three approaches proposed in swarm-array computing, based on landscapes of intelligent cores, in which the cores of a parallel computing system are abstracted as swarm agents, is investigated. In this approach a task is executed and transferred seamlessly between cores, thereby achieving the self-ware properties that characterize autonomic computing. FPGAs are considered as an experimental platform, taking into account their application in space robotics. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator.
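
To make the idea concrete, here is a minimal Python sketch, not the authors' SeSAm model, of cores abstracted as swarm agents that hand a task off to a healthy neighbour; the `CoreAgent` class and its fields are illustrative names of our own.

```python
# Illustrative sketch (not the paper's SeSAm model): cores as swarm
# agents that hand a task off to a healthy neighbour when they degrade.
class CoreAgent:
    def __init__(self, core_id):
        self.core_id = core_id
        self.healthy = True
        self.task = None

    def step(self, neighbours):
        # A core predicted to fail pushes its task to a healthy neighbour.
        if self.task is not None and not self.healthy:
            for n in neighbours:
                if n.healthy and n.task is None:
                    n.task, self.task = self.task, None
                    print(f"task moved: core {self.core_id} -> core {n.core_id}")
                    break

# A one-dimensional landscape of four cores; core 0 holds the task, then degrades.
cores = [CoreAgent(i) for i in range(4)]
cores[0].task = "task-A"
cores[0].healthy = False
for c in cores:
    c.step([n for n in cores if n is not c])
```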

Relevance: 100.00%

Abstract:

Can autonomic computing concepts be applied to traditional multi-core systems found in high-performance computing environments? In this paper, we propose a novel synergy between parallel computing and swarm robotics to offer a new computing paradigm, 'Swarm-Array Computing', that can harness and apply autonomic computing to parallel computing systems. One of the three approaches proposed in swarm-array computing, based on landscapes of intelligent cores, in which the cores of a parallel computing system are abstracted as swarm agents, is investigated. In this approach a task is executed and transferred seamlessly between cores, thereby achieving the self-ware properties that characterize autonomic computing. FPGAs are considered as an experimental platform, taking into account their application in space robotics. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator.

Relevance: 100.00%

Abstract:

Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics when the historical database considered contains data coming from projects of a heterogeneous nature. Segmenting the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that using clustering algorithms without previously considering the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study, using the ISBSG-8 database and the EM clustering algorithm, of the influence of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that considering these attributes significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
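
As an illustration of the segment-then-estimate idea, the following sketch clusters synthetic project data with a Gaussian-mixture EM algorithm and fits a local log-log effort model per cluster; the data and variables are invented and do not reflect the ISBSG-8 schema.

```python
# Hypothetical sketch of segment-then-estimate effort modelling: EM
# clustering (GaussianMixture) splits projects into segments, then each
# segment gets its own local size-effort regression.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
size = rng.uniform(10, 1000, 300)                      # e.g. function points
effort = 5 * size ** 0.9 * rng.lognormal(0, 0.3, 300)  # e.g. person-hours

X = np.column_stack([np.log(size), np.log(effort)])
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

for k in range(3):
    mask = labels == k
    model = LinearRegression().fit(np.log(size[mask]).reshape(-1, 1),
                                   np.log(effort[mask]))
    print(f"cluster {k}: log-effort = "
          f"{model.coef_[0]:.2f} * log-size + {model.intercept_:.2f}")
```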

Relevance: 100.00%

Abstract:

Synchronous collaborative systems allow geographically distributed participants to form a virtual work environment, enabling cooperation between peers and enriching human interaction. The technology facilitating this interaction has been studied for several years, and various solutions are available at present. In this paper, we discuss our experiences with one such widely adopted technology, namely the Access Grid. We describe our experiences with using this technology, identify key problem areas and propose our solution to tackle these issues appropriately. Moreover, we propose the integration of Access Grid with an Application Sharing tool developed by the authors. Our approach allows these integrated tools to utilise the enhanced features provided by our underlying dynamic transport layer.

Relevance: 100.00%

Abstract:

Purpose: The purpose of this paper is to address a classic problem, pattern formation, identified by researchers in the area of swarm robotic systems; it is also motivated by the need for mathematical foundations in swarm systems. Design/methodology/approach: The work is organised into inspirations, applications, definitions, challenges and classifications of pattern formation in swarm systems based on recent literature. Further, the work proposes a mathematical model for swarm pattern formation and transformation. Findings: A swarm pattern formation model based on mathematical foundations and macroscopic primitives is proposed. A formal definition of swarm pattern transformation and four special cases of transformation are introduced. Two general methods for transforming patterns are investigated and compared. The validity of the proposed models and the feasibility of the methods investigated are confirmed in the Traer Physics and Processing environment. Originality/value: This paper helps in understanding the limitations of existing research in pattern formation and the lack of mathematical foundations for swarm systems. The mathematical model and transformation methods introduce two key concepts, namely macroscopic primitives and a mathematical model. The exercise of implementing the proposed models on a physics simulator is novel.
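
A minimal sketch of one way a pattern transformation can work, under simple assumptions of our own (each agent is pre-assigned a target point in the goal pattern and moves a fixed step toward it each tick); this is an illustration, not the paper's Traer Physics implementation.

```python
# Illustrative pattern transformation: a line of agents becomes a circle.
import numpy as np

n = 12
line = np.column_stack([np.linspace(-1, 1, n), np.zeros(n)])   # start pattern
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])       # goal pattern

agents, step = line.copy(), 0.05
while np.max(np.linalg.norm(circle - agents, axis=1)) > 1e-3:
    d = circle - agents
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    # Move a fixed step toward the target, or snap to it when close enough.
    agents += np.where(dist > step, step * d / np.maximum(dist, 1e-12), d)

print("transformation complete; max error:",
      np.max(np.linalg.norm(circle - agents, axis=1)))
```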

Relevance: 100.00%

Abstract:

Near-isogenic lines (NILs) varying for reduced height (Rht) and photoperiod insensitivity (Ppd-D1) alleles in a cv. Mercia background (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht8c+Ppd-D1a, Rht-D1c, Rht12) were compared for interception of photosynthetically active radiation (PAR), radiation use efficiency (RUE), above-ground biomass (AGB), harvest index (HI), height, weed prevalence, lodging and grain yield at one field site, but within contrasting ('organic' vs 'conventional') rotational and agronomic contexts, in each of three years. In the final year, further NILs (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht-B1b+Rht-D1b, Rht-D1b+Rht-B1c) in Maris Huntsman and Maris Widgeon backgrounds were added, together with 64 lines of a doubled haploid (DH) population [Savannah (Rht-D1b) × Renesansa (Rht8c+Ppd-D1a)]. There were highly significant genotype × system interactions for grain yield, mostly because differences were greater in the conventional system than in the organic system. Quadratic fits of NIL grain yield against height were appropriate for both systems when all NILs and years were included. Extreme dwarfing was associated with reduced PAR, RUE, AGB and HI, and increased weed prevalence. Intermediate dwarfing was often associated with improved HI in the conventional system, but not in the organic system. Heights in excess of the optimum for yield were associated particularly with reduced HI and, in the conventional system, lodging. There was no statistical evidence that the optimum height for grain yield varied with system, although the fits peaked at 85 cm and 96 cm in the conventional and organic systems, respectively. Amongst the DH lines, the marker for Ppd-D1a was associated with earlier flowering and, in the conventional system only, with reduced PAR, AGB and grain yield. The marker for Rht-D1b was associated with reduced height and, again in the conventional system only, with increased HI and grain yield. The marker for Rht8c reduced height and, in the conventional system only, increased HI. When using the system × DH line means as observations, grain yield was associated with height and early vegetative growth in the organic system, but not in the conventional system. In the conventional system, PAR interception after anthesis correlated with yield. Savannah was the highest-yielding line in the conventional system, producing significantly more grain than several lines that outyielded it in the organic system.
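
For illustration of the curve-fitting step only, a quadratic of yield on height can be fitted and its vertex taken as the optimum height, in the spirit of the analysis above; the numbers below are invented, not the trial data.

```python
# Toy quadratic fit of grain yield on plant height; the optimum is the
# vertex of the fitted parabola. Values are made up for illustration.
import numpy as np

height = np.array([55, 70, 85, 100, 115, 130])       # cm
yield_t = np.array([5.1, 7.0, 7.9, 7.6, 6.8, 5.5])   # t/ha, invented

c2, c1, c0 = np.polyfit(height, yield_t, 2)
optimum = -c1 / (2 * c2)   # vertex of the parabola
print(f"fitted optimum height: {optimum:.0f} cm")
```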

Relevance: 100.00%

Abstract:

Recent research in multi-agent systems incorporates fault tolerance concepts. However, this research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
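
The parallel reduction mentioned at the end can be sketched in a few lines with mpi4py; this is a stand-in for the paper's MPI implementation, and the agent layer is not reproduced here.

```python
# Minimal mpi4py parallel reduction. Run with e.g.:
#   mpiexec -n 4 python reduce.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process contributes a partial value; MPI combines them at root 0.
partial = (rank + 1) ** 2
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of squares across ranks:", total)
```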

Relevance: 100.00%

Abstract:

Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. A user thus gets, in effect, the performance of a supercomputer by using the spare cycles of other workstations.
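
A minimal sketch of the idea, with pi estimation standing in for a particle-growth simulation and local worker processes standing in for cluster workstations; this is our own illustration, not the authors' code.

```python
# Embarrassingly parallel Monte Carlo: each worker draws samples
# independently and the partial counts are merged at the end.
import random
from multiprocessing import Pool

def count_hits(n_samples):
    random.seed()  # re-seed independently in each worker process
    return sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n_samples))

if __name__ == "__main__":
    n_workers, per_worker = 4, 250_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(count_hits, [per_worker] * n_workers))
    print("pi ~", 4 * hits / (n_workers * per_worker))
```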

Relevance: 100.00%

Abstract:

In most commercially available predictive control packages there is a separation between economic optimisation and predictive control, even though both algorithms may be part of the same software system. In this article, this method is compared with two alternative approaches in which the economic objectives are directly included in the predictive control algorithm. Simulations are carried out using the Tennessee Eastman process model.
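
A toy sketch of the second family of approaches, folding an economic term directly into the MPC cost; the scalar plant, horizon and input price below are invented, and this is not the Tennessee Eastman study itself.

```python
# Toy MPC cost with an embedded economic objective: a scalar plant
# x+ = 0.9x + 0.5u, a setpoint-tracking term, and a price on the input.
import numpy as np
from scipy.optimize import minimize

a, b, x0, setpoint, horizon = 0.9, 0.5, 0.0, 1.0, 10
price = 0.2  # economic cost per unit of input, invented

def cost(u):
    x, J = x0, 0.0
    for uk in u:
        x = a * x + b * uk
        J += (x - setpoint) ** 2 + price * abs(uk)  # control + economic terms
    return J

u_opt = minimize(cost, np.zeros(horizon), method="Nelder-Mead").x
print("first control move:", round(u_opt[0], 3))
```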

Relevance: 100.00%

Abstract:

For thousands of years, humans have inhabited locations that are highly vulnerable to the impacts of climate change, earthquakes, and floods. In order to investigate the extent to which Holocene environmental changes may have impacted on cultural evolution, we present new geologic, geomorphic, and chronologic data from the Qazvin Plain in northwest Iran that provide a backdrop of natural environmental changes for the simultaneous cultural dynamics observed on the Central Iranian Plateau. Well-resolved archaeological data from the neighbouring settlements of Zagheh (7170–6300 yr BP), Ghabristan (6215–4950 yr BP) and Sagzabad (4050–2350 yr BP) indicate that Holocene occupation of the Hajiarab alluvial fan was interrupted by a 900 year settlement hiatus. Multiproxy climate data from nearby lakes in northwest Iran suggest a transition from arid early-Holocene conditions to more humid middle-Holocene conditions from c. 7550 to 6750 yr BP, coinciding with the settlement of Zagheh, and a peak in aridity at c. 4550 yr BP during the settlement hiatus. Palaeoseismic investigations indicate that large active fault systems in close proximity to the tell sites incurred a series of large (Mw ~7.1) earthquakes with return periods of ~500–1000 years during human occupation of the tells. Mapping and optically stimulated luminescence (OSL) chronology of the alluvial sequences reveal changes in depositional style from coarse-grained unconfined sheet flow deposits to proximal channel flow and distally prograding alluvial deposits sometime after c. 8830 yr BP, possibly reflecting an increase in moisture following the early-Holocene arid phase. The coincidence of major climate changes, earthquake activity, and varying sedimentation styles with changing patterns of human occupation on the Hajiarab fan indicates links between environmental and anthropogenic systems. However, temporal coincidence does not necessitate a fundamental causative dependency.

Relevance: 100.00%

Abstract:

Reliable techniques for screening large numbers of plants for root traits are still being developed, but include aeroponic, hydroponic and agar plate systems. Coupled with digital cameras and image analysis software, these systems permit the rapid measurement of root numbers, length and diameter in moderate (typically <1000) numbers of plants. Usually such systems are employed with relatively small seedlings, and information is recorded in 2D. Recent developments in X-ray microtomography have facilitated 3D non-invasive measurement of small root systems grown in solid media, allowing angular distributions to be obtained in addition to numbers and length. However, because of the time taken to scan samples, only a small number can be screened (typically <10 per day, not including analysis time for the large spatial datasets generated) and, depending on sample size, limited resolution may mean that fine roots remain unresolved. Although agar plates allow differences between lines and genotypes to be discerned in young seedlings, the rank order may not be the same when the same materials are grown in solid media. For example, root length of dwarfing wheat (Triticum aestivum L.) lines grown on agar plates was increased by ~40% relative to wild-type and semi-dwarfing lines, but in a sandy loam soil under well-watered conditions it was decreased by 24–33%. Such differences in ranking suggest that significant soil environment × genotype interactions are occurring. Developments in instruments and software mean that a combination of high-throughput simple screens and more in-depth examination of root-soil interactions is becoming viable.

Relevance: 100.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
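
A minimal map-reduce-style sketch of the scaling idea: item frequencies are counted on data partitions in parallel and the partial counts merged; real deployments would use a Grid or Cloud framework rather than a local process pool, and the data below are invented.

```python
# Data-parallel frequency counting: partition the transactions, count
# items per partition in worker processes, then merge the Counters.
from collections import Counter
from multiprocessing import Pool

def count_items(partition):
    return Counter(item for transaction in partition for item in transaction)

if __name__ == "__main__":
    transactions = [["milk", "bread"], ["bread"], ["milk"], ["milk", "eggs"]] * 1000
    chunks = [transactions[i::4] for i in range(4)]   # 4 partitions
    with Pool(4) as pool:
        total = sum(pool.map(count_items, chunks), Counter())
    print(total.most_common(3))
```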

Relevance: 100.00%

Abstract:

Heating, ventilation, air conditioning and refrigeration (HVAC&R) systems account for more than 60% of the energy consumption of buildings in the UK. However, the effect of the variety of HVAC&R systems on building energy performance has not yet been taken into account within the existing building energy benchmarks. In addition, the existing building energy benchmarks are not able to assist decision-makers with HVAC&R system selection. This study attempts to overcome these two deficiencies through the performance characterisation of 36 HVAC&R systems based on the simultaneous dynamic simulation of a building and a variety of HVAC&R systems using TRNSYS software. To characterise the performance of HVAC&R systems, four criteria are considered: energy consumption, CO2 emissions, thermal comfort and indoor air quality. The results of the simulations show that all the studied systems are able to provide an acceptable level of indoor air quality and thermal comfort, while energy consumption and CO2 emissions vary. One of the significant outcomes of this study is that combined cooling, heating and power (CCHP) systems have the highest energy consumption but the lowest energy-related CO2 emissions among the studied HVAC&R systems.
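
Purely for illustration, a crude weighted scoring of candidate systems across the four criteria might look as follows; the figures, weights and system names are invented, not TRNSYS outputs, and a real comparison would normalise the units first.

```python
# Crude multi-criteria ranking of HVAC&R options; higher score is better.
systems = {
    # kWh/m2, kgCO2/m2, comfort score (0-1), air-quality score (0-1)
    "VAV + boiler": (120, 45, 0.90, 0.92),
    "CCHP":         (150, 30, 0.91, 0.93),
    "Heat pump":    (100, 40, 0.88, 0.90),
}
weights = (-0.3, -0.4, 0.15, 0.15)  # negative weight: lower is better

def score(metrics):
    return sum(w * m for w, m in zip(weights, metrics))

for name, m in sorted(systems.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:12s} score = {score(m):7.2f}")
```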

Relevance: 100.00%

Abstract:

Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model a capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances, by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying a capability. We show how the affordance modelling notation, AMN, can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
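
A rough Python encoding of the capability-as-tuple idea with the three comparison metrics; the class, field names and example values are our own reading of the abstract, not the paper's AMN notation.

```python
# Sketch: a capability as mechanisms plus an action path, compared on
# efficiency, effectiveness and quality. All values are invented.
from dataclasses import dataclass

@dataclass
class Capability:
    mechanisms: tuple      # resource affordance system mechanisms
    action_path: tuple     # ordered affordances forming the action path
    efficiency: float      # e.g. doses delivered per hour
    effectiveness: float   # fraction of attempts achieving the goal
    quality: float         # e.g. patient-comfort rating, 0-1

def compare(a, b):
    return {m: getattr(a, m) - getattr(b, m)
            for m in ("efficiency", "effectiveness", "quality")}

syringe = Capability(("needle", "plunger"), ("grip", "insert", "inject"), 30, 0.99, 0.6)
jet     = Capability(("nozzle", "gas"),     ("grip", "press", "inject"),  45, 0.97, 0.9)
print(compare(jet, syringe))  # positive values favour the needleless system
```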

Relevance: 100.00%

Abstract:

Mathematics in Defence 2011 abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to and greater-than. The advantages of using a transarithmetic in a computation, or a transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with on the order of one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
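
A toy total division in the spirit of transreal arithmetic, where dividing by zero returns positive infinity, negative infinity or nullity instead of raising an exception; this is our own illustration, not the authors' hardware design, and nullity is represented by a marker value because float NaN breaks equality.

```python
# Total division: defined for every pair of inputs, so no exceptions.
INF, NEG_INF, NULLITY = float("inf"), float("-inf"), "PHI"  # PHI marks nullity

def trans_div(a, b):
    if b != 0:
        return a / b
    if a > 0:
        return INF       # k/0 = +infinity for k > 0
    if a < 0:
        return NEG_INF   # k/0 = -infinity for k < 0
    return NULLITY       # 0/0 = nullity, the unordered transreal number

for a in (1.0, -2.0, 0.0):
    print(f"{a}/0 =", trans_div(a, 0.0))
```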