16 results for Processor cores
Abstract:
Processor virtualization for process migration in distributed parallel computing systems has formed a significant component of research on load balancing. In contrast, the potential of processor virtualization for fault tolerance has received minimal attention. The work reported in this paper extends the concept of processor virtualization towards 'intelligent cores' as a means of achieving fault tolerance in distributed parallel computing systems. Intelligent cores are an abstraction of the hardware processing cores, incorporating cognitive capabilities, on which parallel tasks can be executed and migrated. When a processing core executing a task is predicted to fail, the task is proactively transferred onto another core. A parallel reduction algorithm incorporating the concept of intelligent cores is implemented on a computer cluster using Adaptive MPI and Charm++. Preliminary results confirm the feasibility of the approach.
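The proactive fault-tolerance idea can be sketched as follows. This is only an illustrative model: the names (`Core`, `predict_failure`, `migrate`) and the health threshold are hypothetical, and the paper's actual implementation relies on Adaptive MPI and Charm++ object migration rather than this hand-rolled loop.

```python
# Sketch of proactive task migration between "intelligent cores".
# All names and the health metric are illustrative placeholders.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.tasks = []

def predict_failure(core, health_reading, threshold=0.5):
    # A cognitive core would monitor its own health indicators
    # (e.g. temperature, error counters) and predict imminent failure.
    return health_reading < threshold

def migrate(task, source, target):
    # Proactively move the task before the predicted failure occurs.
    source.tasks.remove(task)
    target.tasks.append(task)

cores = [Core(0), Core(1)]
cores[0].tasks.append("reduction_step")

# Core 0 reports a degraded health reading, so its task moves to core 1.
if predict_failure(cores[0], health_reading=0.2):
    migrate("reduction_step", cores[0], cores[1])
```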
Abstract:
Displacement studies on leaching of potassium (K+) were conducted under unsaturated steady-state flow conditions in nine undisturbed soil columns (15.5 cm in diameter and 25 cm long). Pulses of K+ applied to the columns were leached with distilled water or calcium chloride (CaCl2) at a rate of 18 mm h⁻¹. The movement of K+ in gypsum-treated soil leached with distilled water was at a rate similar to that of the untreated soil leached with 15 mM CaCl2. The Ca2+ concentrations in the leachates were about 15 mM, the expected value for the dissolution of gypsum. When applied K+ was displaced with distilled water, K+ was retained in the top 10-12.5 cm of soil. In the undisturbed soil cores there is a possibility of preferential flow and a lack of K+ sorption. The application of gypsum and CaCl2 in the reclamation of sodic soils would be expected to leach K+ from soils. It can also be concluded that the use of irrigation water sources with a high Ca2+ concentration can lead to leaching of K+ from soil. The average effluent concentration of K+ during the leaching period was 30.2 and 28.6 mg l⁻¹ for the gypsum- and CaCl2-treated soils, respectively. These concentrations are greater than the recommended guideline of the World Health Organisation (12 mg K+ l⁻¹).
Abstract:
Coral growth rate can be affected by environmental parameters such as seawater temperature, depth, and light intensity. The natural reef environment is also disturbed by human influences such as anthropogenic pollutants, which in Barbados are released close to the reefs. Here we describe a relatively new method of assessing the history of pollution and explain how these effects have influenced the coral communities off the west coast of Barbados. We evaluate the relative impact of both anthropogenic pollutants and natural stresses. Sclerochronology documents framework and skeletal growth rate and records pollution history (recorded as reduced growth) for a suite of sampled Montastraea annularis coral cores. X-radiography shows annual growth band patterns of the corals extending back over several decades and indicates significantly lower growth rates at polluted sites. Results using laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) on the whole sample (aragonite, organic matter, trapped particulate matter, etc.) have shown contrasting concentrations of the trace elements (Cu, Sn, Zn, and Pb) between corals at different locations and within a single coral. Deepwater corals 7 km apart record different levels of Pb and Sn, suggesting that a current transported the metal pollution in the water. In addition, the 1995 hurricanes are associated with anomalous values for Sn and Cu at most sites. These are believed to result from dispersion of nearshore polluted water. We compared the concentrations of trace elements in the coral growth of particular years to those in the relevant contemporaneous seawater. Mean values for the concentration factor in the coral, relative to the water, ranged from 10 for Cu and Ni to 2.4 and 0.7 for Cd and Zn, respectively. Although the uncertainties are large (60-80%), the coral record enabled us to demonstrate the possibility of calculating a history of seawater pollution for these elements from the 1940s to 1997.
Our values were much higher than those obtained from analysis of carefully cleaned coral aragonite; they demonstrate the incorporation of more contamination including that from particulate material as well as dissolved metals.
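The concentration factor reported above is simply the ratio of the trace-element concentration in the coral skeleton to that in the contemporaneous seawater. A minimal sketch, where the numeric values are illustrative only, chosen so the ratios reproduce the mean factors quoted in the abstract, not measured data:

```python
# Concentration factor = element concentration in coral /
# concentration in contemporaneous seawater.
# The values below are illustrative (arbitrary units), chosen only so
# the ratios match the quoted means (Cu, Ni ~10; Cd ~2.4; Zn ~0.7).

coral = {"Cu": 10.0, "Ni": 5.0, "Cd": 2.4, "Zn": 0.7}
seawater = {"Cu": 1.0, "Ni": 0.5, "Cd": 1.0, "Zn": 1.0}

concentration_factor = {el: coral[el] / seawater[el] for el in coral}
```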
Abstract:
Purpose - The purpose of this paper is to offer an exploratory case study comparing one Brazilian beef processor's relationships supplying two different distribution channels, an EU importer and an EU retail chain operating in Brazil. Design/methodology/approach - The paper begins with a short review of global value chains and the recent literature on trust. It gives the background to the Brazilian beef chain and presents data obtained through in-depth interviews, annual reports and direct observation with the Brazilian beef processor, the EU importer and the retailer. The interviews were conducted with individual firms, but the analysis places them in a chain context, identifying the links and relationships between the agents of the chains and aiming to describe each distribution channel. Findings - Executive chain governance exercised by the domestic retailer stimulates technical upgrading and the transfer of best practices to local suppliers. Consequently, this kind of relationship results in more trust within the global value chain. Practical implications - There are difficulties and challenges facing this Brazilian beef processor that are partly related to the need to comply with increasingly complex and demanding food safety and food quality standards. There is still a gap between practices adopted for the export market and practices adopted locally. The strategies of transnational retailers in offering differentiated beef should be taken into account. Originality/value - The research outlines an interdisciplinary framework able to explain chain relationships and the kind of trust that emerges in relationships between an EU importer/retailer and a developing-country supplier.
Abstract:
[Ru(2,2'-bipyridine)₂(Hdpa)](BF₄)₂·2H₂O (1), [Ru(1,10-phenanthroline)₂(Hdpa)](PF₆)₂·CH₂Cl₂ (2) and [Ru(4,4,4',4'-tetramethyl-2,2'-bisoxazoline)₂(Hdpa)](PF₆)₂ (3) are synthesized, where Hdpa is 2,2'-dipyridylamine. The X-ray crystal structures of 1 and 2 have been determined. Hdpa in 1 and 2 is found to bind the metal via its two pyridyl N ends. Comparing the NMR spectra in DMSO-d₆, it is concluded that 3 has a similar structure. The pKa values (for the dissociation of the NH proton in Hdpa) of free Hdpa and its complexes are determined in acetonitrile by exploiting molar conductance. These correlate linearly with the chemical shift of the NH proton in the respective entities. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Copper(I) complexes of 1:3 condensates of tris(2-aminoethyl)amine and p-X-benzaldehydes (X = H, Cl, NMe₂ and NO₂) of the type [Cu(ligand)]ClO₄ are synthesised. The X-ray crystal structures of the copper(I) complexes with X = H, Cl and NMe₂ are determined. In these complexes copper(I) is found to have a trigonal pyramidal N₄ coordination sphere, with the apical N forming a longer bond (2.191-2.202 Å) than the trigonal ones (2.003-2.026 Å). The Cu(II/I) potentials in these complexes span a range of 0.71-0.90 V vs SCE, increasing linearly with the resonance component of the Hammett sigma for the para substituent X. It is concluded that the trigonal pyramidal geometry is destabilising for copper(II).
Abstract:
Space applications are challenged by the reliability of parallel computing systems (FPGAs) employed in spacecraft due to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems that are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieving autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, with landscapes representing the computing space and problem generated using MATLAB.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of autonomic and parallel computing paradigms. Two approaches based on intelligent cores and intelligent agents are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.
Abstract:
The General Packet Radio Service (GPRS) was developed to allow packet data to be transported efficiently over an existing circuit-switched radio network. The main applications for GPRS are in transporting IP datagrams from the user's mobile Internet browser to and from the Internet, or in telemetry equipment. A simple Error Detection and Correction (EDC) scheme to improve the GPRS Block Error Rate (BLER) performance is presented, particularly for coding scheme 4 (CS-4), although gains are also seen in the other coding schemes. For every GPRS radio block that is corrected by the EDC scheme, the block does not need to be retransmitted, releasing bandwidth in the channel and improving throughput and the user's application data rate. As GPRS requires intensive processing in the baseband, a viable hardware solution for a GPRS BLER co-processor is discussed; it has been implemented in a Field Programmable Gate Array (FPGA) and is presented in this paper.
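The saving described above comes from correcting a block instead of retransmitting it. The abstract does not specify the EDC scheme, so the following is only a generic stand-in using a classic Hamming(7,4) single-error-correcting code to illustrate the principle; it is not the co-processor's algorithm.

```python
# Generic single-error-correcting block code (Hamming(7,4)), as a
# stand-in illustration: a corrected block need not be retransmitted,
# saving channel bandwidth. Not the paper's actual EDC scheme.

def hamming74_encode(d):
    # d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # Recompute parities; the syndrome is the 1-based error position.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the erroneous bit
    return c

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = codeword[:]
corrupted[2] ^= 1               # single bit error in the channel
assert hamming74_correct(corrupted) == codeword
```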
Abstract:
The authors describe a learning classifier system (LCS) which employs genetic algorithms (GA) for adaptive online diagnosis of power transmission network faults. The system monitors switchgear indications produced by a transmission network, reporting fault diagnoses on any patterns indicative of faulted components. The system evaluates the accuracy of diagnoses via a fault simulator developed by National Grid Co. and adapts to reflect the current network topology by use of genetic algorithms.
Abstract:
Background and Aims Forest trees directly contribute to carbon cycling in forest soils through the turnover of their fine roots. In this study we aimed to calculate root turnover rates of common European forest tree species and to compare them with the most frequently published values. Methods We compiled available European data and applied various turnover rate calculation methods to the resulting database. We used the Decision Matrix and the Maximum-Minimum formula as suggested in the literature. Results Mean turnover rates obtained by the combination of sequential coring and the Decision Matrix were 0.86 yr−1 for Fagus sylvatica and 0.88 yr−1 for Picea abies when maximum biomass data were used for the calculation, and 1.11 yr−1 for both species when mean biomass data were used. Using mean biomass rather than maximum resulted in about 30% higher values of root turnover. Using the Decision Matrix to calculate turnover rate doubled the rates when compared to the Maximum-Minimum formula. The Decision Matrix, however, makes use of more input information than the Maximum-Minimum formula. Conclusions We propose that calculations using the Decision Matrix with mean biomass give the most reliable estimates of root turnover rates in European forests and should preferentially be used in models and C reporting.
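The two calculation approaches compared above can be sketched as follows. The biomass values are invented, and the Decision Matrix here is a simplified stand-in: the full published method also weighs necromass and the statistical significance of biomass changes, which is why it uses more input information.

```python
# Root turnover rate (yr^-1) = annual fine-root production / biomass.
# Sequential-coring biomass time series (illustrative values, g m^-2):
biomass = [200.0, 260.0, 240.0, 310.0, 250.0]

# Maximum-Minimum formula: production is estimated as the single
# largest amplitude of the biomass time series.
production_maxmin = max(biomass) - min(biomass)

# Simplified Decision Matrix: production is the sum of all positive
# between-sampling increments (the full method also considers
# necromass and whether each change is statistically significant).
production_dm = sum(max(b2 - b1, 0.0) for b1, b2 in zip(biomass, biomass[1:]))

# Turnover using mean biomass as the denominator (using maximum
# biomass instead gives lower rates, as noted in the abstract).
mean_biomass = sum(biomass) / len(biomass)
turnover_maxmin = production_maxmin / mean_biomass
turnover_dm = production_dm / mean_biomass
```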
Abstract:
Mathematics in Defence 2011 Abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers encoded by floating-point bits is doubled when all of the Not-a-Number (NaN) states in IEEE 754 arithmetic are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of the order of one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
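The exception-free character of transreal arithmetic can be sketched with a totalised division. This is only an informal model using Python's float infinities and a distinct nullity object; it illustrates the idea that no operation raises an exception, not the bit-level IEEE 754 re-encoding the authors propose.

```python
# Informal sketch of totalised (exception-free) transreal division:
# 1/0 = +infinity, -1/0 = -infinity, 0/0 = nullity (Phi).
# Models the idea only, not the authors' hardware encoding.

NULLITY = object()  # Phi: the single non-finite, non-infinite transreal value

def trans_div(a, b):
    # Division is total: every pair of inputs yields a transreal result.
    if b != 0:
        return a / b
    if a > 0:
        return float("inf")
    if a < 0:
        return float("-inf")
    return NULLITY  # 0/0

assert trans_div(1.0, 0.0) == float("inf")
assert trans_div(-1.0, 0.0) == float("-inf")
assert trans_div(0.0, 0.0) is NULLITY
assert trans_div(6.0, 3.0) == 2.0
```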