67 results for Computing and software systems
Abstract:
Purpose: The purpose of this paper is to address pattern formation, a classic problem identified by researchers in the area of swarm robotic systems; the work is also motivated by the need for mathematical foundations in swarm systems. Design/methodology/approach: The work is organised around the inspirations, applications, definitions, challenges and classifications of pattern formation in swarm systems, based on recent literature. Further, the work proposes a mathematical model for swarm pattern formation and transformation. Findings: A swarm pattern formation model based on mathematical foundations and macroscopic primitives is proposed. A formal definition of swarm pattern transformation and four special cases of transformation are introduced. Two general methods for transforming patterns are investigated and compared. The validity of the proposed models and the feasibility of the investigated methods are confirmed in the Traer Physics and Processing environment. Originality/value: This paper helps in understanding the limitations of existing research in pattern formation and the lack of mathematical foundations for swarm systems. The model and transformation methods introduce two key concepts, namely macroscopic primitives and a mathematical formulation of pattern transformation. The exercise of implementing the proposed models on a physics simulator is novel.
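A minimal sketch of one common pattern-transformation scheme, offered only as illustration (this is not the paper's macroscopic-primitive model, and all names and parameters below are invented): each agent is assigned a position in the target pattern and moves a fraction of the way towards it at every step.

```python
# Illustrative only: transform a line pattern into a circle pattern by moving
# each agent a fixed fraction of the way to its assigned target position.
import math

def line(n, spacing=2.0):
    """Source pattern: n agents on a horizontal line."""
    return [(i * spacing, 0.0) for i in range(n)]

def circle(n, r=10.0):
    """Target pattern: n positions evenly spaced on a circle of radius r."""
    return [(r * math.cos(2 * math.pi * i / n),
             r * math.sin(2 * math.pi * i / n)) for i in range(n)]

def step(agents, targets, gain=0.1):
    """Move every agent a fixed fraction of the way to its assigned target."""
    return [(x + gain * (tx - x), y + gain * (ty - y))
            for (x, y), (tx, ty) in zip(agents, targets)]

agents, targets = line(12), circle(12)
for _ in range(60):  # iterate until the line pattern approximates the circle
    agents = step(agents, targets)
print(agents[0])  # close to targets[0] = (10.0, 0.0)
```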
Abstract:
Near isogenic lines (NILs) varying for reduced height (Rht) and photoperiod insensitivity (Ppd-D1) alleles in a cv. Mercia background (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht8c+Ppd-D1a, Rht-D1c, Rht12) were compared for interception of photosynthetically active radiation (PAR), radiation use efficiency (RUE), above-ground biomass (AGB), harvest index (HI), height, weed prevalence, lodging and grain yield, at one field site but within contrasting ('organic' vs 'conventional') rotational and agronomic contexts, in each of three years. In the final year, further NILs (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht-B1b+Rht-D1b, Rht-D1b+Rht-B1c) in Maris Huntsman and Maris Widgeon backgrounds were added, together with 64 lines of a doubled haploid (DH) population [Savannah (Rht-D1b) × Renesansa (Rht8c+Ppd-D1a)]. There were highly significant genotype × system interactions for grain yield, mostly because differences were greater in the conventional system than in the organic system. Quadratic fits of NIL grain yield against height were appropriate for both systems when all NILs and years were included. Extreme dwarfing was associated with reduced PAR, RUE, AGB and HI, and increased weed prevalence. Intermediate dwarfing was often associated with improved HI in the conventional system, but not in the organic system. Heights in excess of the optimum for yield were associated particularly with reduced HI and, in the conventional system, lodging. There was no statistical evidence that optimum height for grain yield varied with system, although fits peaked at 85 cm and 96 cm in the conventional and organic systems, respectively. Amongst the DH lines, the marker for Ppd-D1a was associated with earlier flowering and, in the conventional system only, with reduced PAR, AGB and grain yield. The marker for Rht-D1b was associated with reduced height and, again in the conventional system only, with increased HI and grain yield. The marker for Rht8c was associated with reduced height and, in the conventional system only, with increased HI. When using the system × DH line means as observations, grain yield was associated with height and early vegetative growth in the organic system, but not in the conventional system. In the conventional system, PAR interception after anthesis correlated with yield. Savannah was the highest yielding line in the conventional system, producing significantly more grain than several lines that outyielded it in the organic system.
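For readers unfamiliar with how an optimum height is read off a quadratic fit, the standard calculation is sketched below; the coefficients a, b and c are whatever the regression estimates, and only the location of the peak is used.

```latex
% Quadratic fit of grain yield y on height h; with c < 0 the fit peaks where
% dy/dh = 0, giving the optimum height h* (about 85 cm conventional, 96 cm organic).
y(h) = a + b\,h + c\,h^{2}, \qquad
\frac{dy}{dh} = b + 2c\,h = 0 \;\Rightarrow\; h^{*} = -\frac{b}{2c}
```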
Abstract:
Recent research in multi-agent systems incorporates fault tolerance concepts. However, this research does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. In the approach considered, a task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to complete the task successfully. The agents hence contribute towards fault tolerance and towards building reliable systems. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
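A minimal parallel-reduction sketch in the spirit of the validation described; it assumes mpi4py (the Python MPI bindings) and a cluster launch via mpirun, and does not reproduce the paper's agent layer or core-failure prediction.

```python
# Minimal MPI parallel reduction (run with: mpirun -n 4 python reduce.py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

data = list(range(1000))          # the full task
chunk = data[rank::size]          # sub-task mapped to this process
local_sum = sum(chunk)            # local computation
total = comm.reduce(local_sum, op=MPI.SUM, root=0)  # combine across processors

if rank == 0:
    print("parallel reduction result:", total)  # 499500
```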
Abstract:
Clusters of computers can be used together to provide a powerful computing resource. Large Monte Carlo simulations, such as those used to model particle growth, are computationally intensive and take considerable time to execute on conventional workstations. By spreading the work of the simulation across a cluster of computers, the elapsed execution time can be greatly reduced. Thus a user apparently has the performance of a supercomputer by using the spare cycles of other workstations.
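A sketch of the decomposition idea, assuming independent trials: each worker runs its own share of the simulation (a toy estimate of pi stands in for particle growth) and the partial results are combined. On a real cluster the same split would run over message passing rather than local processes.

```python
# Split independent Monte Carlo trials across workers and merge the results.
import random
from multiprocessing import Pool

def run_trials(args):
    seed, n = args
    rng = random.Random(seed)  # independent random stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    workers, trials_each = 8, 250_000
    with Pool(workers) as pool:
        hits = pool.map(run_trials, [(s, trials_each) for s in range(workers)])
    print("pi estimate:", 4 * sum(hits) / (workers * trials_each))
```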
Abstract:
In most commercially available predictive control packages, there is a separation between economic optimisation and predictive control, although both algorithms may be part of the same software system. In this article, this approach is compared with two alternative approaches in which the economic objectives are directly included in the predictive control algorithm. Simulations are carried out using the Tennessee Eastman process model.
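A generic illustration of the two formulations being contrasted, in standard MPC notation (the symbols and weights here are illustrative, not the article's algorithms or its Tennessee Eastman configuration):

```latex
% (a) separated: an upper-layer economic optimiser supplies targets (y*, u*)
%     that a tracking MPC follows;
% (b) integrated: the economic cost enters the MPC objective directly.
\begin{align}
\text{(a) tracking:}\quad & \min_{u_0,\dots,u_{N-1}} \sum_{k=0}^{N-1}
  \|y_k - y^{*}\|_Q^{2} + \|\Delta u_k\|_R^{2} \\
\text{(b) economic:}\quad & \min_{u_0,\dots,u_{N-1}} \sum_{k=0}^{N-1}
  \ell_{\mathrm{econ}}(x_k, u_k)
\end{align}
```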
Abstract:
For thousands of years, humans have inhabited locations that are highly vulnerable to the impacts of climate change, earthquakes, and floods. In order to investigate the extent to which Holocene environmental changes may have impacted on cultural evolution, we present new geologic, geomorphic, and chronologic data from the Qazvin Plain in northwest Iran that provide a backdrop of natural environmental changes for the simultaneous cultural dynamics observed on the Central Iranian Plateau. Well-resolved archaeological data from the neighbouring settlements of Zagheh (7170–6300 yr BP), Ghabristan (6215–4950 yr BP) and Sagzabad (4050–2350 yr BP) indicate that Holocene occupation of the Hajiarab alluvial fan was interrupted by a 900 year settlement hiatus. Multiproxy climate data from nearby lakes in northwest Iran suggest a transition from arid early-Holocene conditions to more humid middle-Holocene conditions from c. 7550 to 6750 yr BP, coinciding with the settlement of Zagheh, and a peak in aridity at c. 4550 yr BP during the settlement hiatus. Palaeoseismic investigations indicate that large active fault systems in close proximity to the tell sites incurred a series of large (Mw ~7.1) earthquakes with return periods of ~500–1000 years during human occupation of the tells. Mapping and optically stimulated luminescence (OSL) chronology of the alluvial sequences reveal changes in depositional style from coarse-grained unconfined sheet-flow deposits to proximal channel flow and distally prograding alluvial deposits sometime after c. 8830 yr BP, possibly reflecting an increase in moisture following the early-Holocene arid phase. The coincidence of major climate changes, earthquake activity, and varying sedimentation styles with changing patterns of human occupation on the Hajiarab fan indicates links between environmental and anthropogenic systems. However, temporal coincidence does not necessitate a fundamental causative dependency.
Abstract:
Reliable techniques for screening large numbers of plants for root traits are still being developed, but include aeroponic, hydroponic and agar plate systems. Coupled with digital cameras and image analysis software, these systems permit the rapid measurement of root numbers, length and diameter in moderate (typically <1000) numbers of plants. Usually such systems are employed with relatively small seedlings, and information is recorded in 2D. Recent developments in X-ray microtomography have facilitated 3D non-invasive measurement of small root systems grown in solid media, allowing angular distributions to be obtained in addition to numbers and length. However, because of the time taken to scan samples, only a small number can be screened (typically <10 per day, not including analysis time for the large spatial datasets generated) and, depending on sample size, limited resolution may mean that fine roots remain unresolved. Although agar plates allow differences between lines and genotypes to be discerned in young seedlings, the rank order may not be the same when the same materials are grown in solid media. For example, root length of dwarfing wheat (Triticum aestivum L.) lines grown on agar plates was increased by ~40% relative to wild-type and semi-dwarfing lines, but in a sandy loam soil under well-watered conditions it was decreased by 24–33%. Such differences in ranking suggest that significant soil environment × genotype interactions are occurring. Developments in instruments and software mean that a combination of high-throughput simple screens and more in-depth examination of root-soil interactions is becoming viable.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
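A toy sketch of the partition-and-combine idea that underlies most parallel pattern-mining schemes: partitions are mined independently and the local results are merged. Simple frequency counting stands in for the mining step, and the data and names are invented.

```python
# Data-parallel pattern mining in miniature: mine each partition independently,
# then merge the partial results into a global view.
from collections import Counter
from multiprocessing import Pool

def mine_partition(partition):
    """'Mine' one partition: count item occurrences across its transactions."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

if __name__ == "__main__":
    data = [["a", "b"], ["b", "c"], ["a", "c"], ["a", "b", "c"]] * 1000
    partitions = [data[i::4] for i in range(4)]  # distribute across 4 workers
    with Pool(4) as pool:
        merged = sum(pool.map(mine_partition, partitions), Counter())
    print(merged.most_common(3))  # globally most frequent items
```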
Abstract:
Heating, ventilation, air conditioning and refrigeration (HVAC&R) systems account for more than 60% of the energy consumption of buildings in the UK. However, the effect of the variety of HVAC&R systems on building energy performance has not yet been taken into account within existing building energy benchmarks. In addition, the existing benchmarks are not able to assist decision-makers with HVAC&R system selection. This study attempts to overcome these two deficiencies through the performance characterisation of 36 HVAC&R systems, based on the simultaneous dynamic simulation of a building and a variety of HVAC&R systems using TRNSYS software. To characterise the performance of HVAC&R systems, four criteria are considered: energy consumption, CO2 emissions, thermal comfort and indoor air quality. The results of the simulations show that all the studied systems are able to provide an acceptable level of indoor air quality and thermal comfort. However, energy consumption and the amount of CO2 emissions vary. One of the significant outcomes of this study reveals that combined cooling, heating and power (CCHP) systems have the highest energy consumption together with the lowest energy-related CO2 emissions among the studied HVAC&R systems.
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances, by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation (AMN) can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needleless anaesthetic systems.
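A hypothetical data-structure reading of the capability tuple, with the three comparison metrics attached; the field names, values and dominance check below are illustrative inventions, not the paper's AMN notation.

```python
# Illustrative encoding of a capability as (mechanisms, action path, critical
# factors) plus efficiency/effectiveness/quality metrics for comparison.
from dataclasses import dataclass

@dataclass
class Capability:
    mechanisms: list        # resource affordance system mechanisms
    action_path: list       # affordance path: linked action affordances
    critical_factors: dict  # critical affordance factors the capability depends on
    efficiency: float
    effectiveness: float
    quality: float

def dominates(a: Capability, b: Capability) -> bool:
    """True if a is at least as good as b on all three comparison metrics."""
    return (a.efficiency >= b.efficiency
            and a.effectiveness >= b.effectiveness
            and a.quality >= b.quality)

syringe = Capability(["needle", "plunger"], ["insert", "inject"],
                     {"sterility": True}, 0.7, 0.95, 0.90)
needleless = Capability(["gas jet"], ["press", "inject"],
                        {"pressure": "high"}, 0.8, 0.93, 0.90)
print(dominates(needleless, syringe))  # False: lower on effectiveness
```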
Abstract:
Mathematics in Defence 2011 abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of order one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
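A small sketch of what exception-free (totalised) division looks like, in the spirit of transreal arithmetic: division never traps, signed infinities absorb k/0, and 0/0 yields nullity. Representing nullity as a sentinel and leaving its ordering undefined are simplifying assumptions here, not the authors' full definition.

```python
# Totalised division: defined for every pair of real inputs, no exceptions.
import math

NULLITY = "Phi"  # transreal nullity, a point off the real number line

def trans_div(a: float, b: float):
    """Division that never raises: k/0 gives a signed infinity, 0/0 gives Phi."""
    if b != 0:
        return a / b
    if a > 0:
        return math.inf
    if a < 0:
        return -math.inf
    return NULLITY  # 0/0 = Phi

print(trans_div(1, 0), trans_div(-1, 0), trans_div(0, 0))  # inf -inf Phi
```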
Abstract:
Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model's coefficients and the CV B-spline neural network's weights; it yields closed-form solutions for both, and the estimation process is guaranteed to converge quickly to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss-Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated by application to the equalisation of Hammerstein channels.
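For context, a sketch of the Cox-de Boor recursion on which De Boor-style B-spline evaluation builds; the degree, knot vector and recursive (rather than optimised iterative) form are illustrative choices, not the identification setup of the paper.

```python
# Cox-de Boor recursion for B-spline basis functions.
def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    den_l = knots[i + p] - knots[i]
    if den_l > 0:
        left = (u - knots[i]) / den_l * bspline_basis(i, p - 1, u, knots)
    den_r = knots[i + p + 1] - knots[i + 1]
    if den_r > 0:
        right = (knots[i + p + 1] - u) / den_r * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]  # clamped knot vector for degree 2
# the basis functions form a partition of unity inside the knot span:
print(sum(bspline_basis(i, 2, 1.5, knots) for i in range(len(knots) - 3)))  # ~1.0
```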
Abstract:
Smart healthcare is a complex domain for systems integration due to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions require multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles and procedure mismanagement. The purpose of this study is to analyse the data, semantic and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital as our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Record (EPR) and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
Rhythms are manifested ubiquitously in dynamical biological processes. These fundamental processes, which are necessary for the survival of living organisms, include metabolism, breathing, heartbeat and, above all, the circadian rhythm coupled to the diurnal cycle. Thus, in mathematical biology, biological processes are often represented as linear or nonlinear oscillators. In the framework of nonlinear and dissipative systems (i.e., those involving the flow of energy, substances, or sensory information), they generate stable internal oscillations as a response to environmental input and, in turn, utilise such output as a means of coupling with the environment.
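A standard textbook example of such a nonlinear dissipative oscillator, offered purely as illustration (the chapter itself may use different models), is the van der Pol equation, whose stable limit cycle provides exactly the kind of self-sustained internal oscillation described above:

```latex
% van der Pol oscillator: the nonlinear damping term -mu (1 - x^2) x' injects
% energy at small amplitude and dissipates it at large amplitude, driving
% trajectories onto a stable limit cycle.
\ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = 0, \qquad \mu > 0
```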
Abstract:
A universal systems design process is specified, tested in a case study and evaluated. It links English narratives to numbers using a categorical language framework, with mathematical mappings taking the place of conjunctions and numbers. The framework is a ring of English narrative words between 1 (option) and 360 (capital); beyond 360 the ring cycles again to 1. English narratives are shown to correspond to the field of fractional numbers. The process can enable the development, presentation and communication of complex narrative policy information among communities of any scale, on a software implementation known as the "ecoputer". The information is more accessible and comprehensive than that in conventional decision support because: (1) it is expressed in narrative language; and (2) the narratives are expressed as compounds of words within the framework. Hence option generation is made more effective than in conventional decision support processes, including Multiple Criteria Decision Analysis, Life Cycle Assessment and Cost-Benefit Analysis. The case study is of a participatory workshop on UK bioenergy project objectives and criteria, at which attributes were elicited in environmental, economic and social systems. From the attributes, the framework was used to derive consequences at a range of levels of precision; these are compared with the project objectives and criteria as set out in the Case for Support. The design process is to be supported by a social information manipulation, storage and retrieval system for numeric and verbal narratives attached to the "ecoputer". The "ecoputer" will have an integrated verbal and numeric operating system. Novel design source code language will assist the development of narrative policy. The utility of the program, including in the transition to sustainable development and in applications at both community micro-scale and policy macro-scale, is discussed from public, stakeholder, corporate, governmental and regulatory perspectives.