996 results for parallel technique
Abstract:
Sorghum (Sorghum bicolor) was grown for 40 days in a rhizocylinder (a growth container which permitted access to rhizosphere and nonrhizosphere soil) in two soils of low P status. Soils were fertilized with different rates of ammonium and nitrate, supplemented with 40 mg phosphorus (P) kg(-1), and inoculated with either Glomus mosseae (Nicol. and Gerd.) or nonmycorrhizal root inoculum. N-serve (2 mg kg(-1)) was added to prevent nitrification. At harvest, soil from around the roots was collected at distances of 0-5, 5-10, and 10-20 mm from the root core, which was 35 mm in diameter. Sorghum plants, with and without mycorrhiza, grew larger with NH4+ than with NO3- application. After measuring soil pH, suspensions of the same sample were titrated against 0.01 M HCl or 0.01 M NaOH until soil pH reached the nonplanted pH level. The acid or base requirement for each sample was calculated as mmol H+ or OH- kg(-1) soil. The magnitude of liberated acid or base depended on the form and rate of nitrogen and on soil type. Whether the plant root was uninfected or infected with mycorrhiza, soil pH changes extended up to 5 mm from the root core surface. In both soils, ammonium as an N source resulted in lower soil pH than nitrate. Mycorrhizal (VAM) inoculation did not enhance this difference. In mycorrhizal inoculated soil, P depletion extended up to 20 mm from the root surface. In non-VAM inoculated soil, P depletion extended up to 10 mm from the root surface and remained unchanged at greater distances. In the mycorrhizal inoculated soils, the contribution of the 0-5 mm soil zone to P uptake was greater than that of the core soil, which reflects the hyphal contribution to P supply. Nitrogen (N) applications that caused acidification increased P uptake because of increased demand; there is no direct evidence that the increased uptake was due to acidity increasing the solubility of P, although this may have been a minor effect.
Abstract:
A range of archaeological samples have been examined using FT-IR spectroscopy. These include suspected coprolite samples from the Neolithic site of Catalhoyuk in Turkey, pottery samples from the Roman site of Silchester, UK, and the Bronze Age site of Gatas, Spain, and unidentified black residues on pottery sherds from the Roman sites of Springhead and Cambourne, UK. For coprolite samples the aim of FT-IR analysis is identification. Identification of coprolites in the field is based on their distinct orange colour; however, such visual identifications can often be misleading because of their similarity to deposits such as ochre and clay. For pottery the aim is to screen those samples that might contain high levels of organic residues which would be suitable for GC-MS analysis. The experiments have shown coprolites to have distinctive spectra, containing strong peaks from calcite, phosphate and quartz; the presence of phosphorus may be confirmed by SEM-EDX analysis. Pottery containing organic residues of plant and animal origin has also been shown to generally display strong phosphate peaks. FT-IR has distinguished between organic resin and non-organic compositions for the black residues, with differences also being seen between organic samples that have the same physical appearance. Further analysis by GC-MS has confirmed the identification of the coprolites through the presence of coprostanol and bile acids, and shows that the majority of organic pottery residues are either fatty acids or mono- or di-acylglycerols from foodstuffs, or triterpenoid resin compounds exposed to high temperatures. One suspected resin sample was shown to contain no organic residues, and it is seen that resin samples with similar physical appearances have different chemical compositions. FT-IR is proposed as a quick and cheap method of screening archaeological samples before subjecting them to the more expensive and time-consuming method of GC-MS. This will eliminate inorganic samples such as clays and ochre from GC-MS analysis, and will screen those samples which are most likely to have a high concentration of preserved organic residues. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, whereby no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware and is suitable for large-scale, multi-domain, heterogeneous environments, such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages and adopts distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
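The combination described above (local job pools plus randomized polling of peers when a pool runs dry) can be sketched in a few lines. The Python threads below are purely illustrative: the framework itself targets multi-domain Grids rather than a single process, and the toy expand task and crude termination heuristic are assumptions, not details from the paper.

```python
import random
import threading
import time
from collections import deque

# Hypothetical task: expanding one node of an irregular search tree.
# A node spawns a random number of children, so workload is unpredictable.
def expand(depth):
    if depth >= 6:
        return []
    return [depth + 1 for _ in range(random.randint(0, 4))]

class Worker(threading.Thread):
    def __init__(self, wid, workers):
        super().__init__()
        self.wid = wid
        self.workers = workers          # all peers, for randomized polling
        self.pool = deque()             # local job pool
        self.lock = threading.Lock()
        self.expanded = 0

    def steal(self):
        victim = random.choice(self.workers)   # poll a randomly chosen peer
        if victim is self:
            return None
        with victim.lock:
            if victim.pool:
                return victim.pool.popleft()   # take the oldest task (likely the largest subtree)
        return None

    def run(self):
        idle_polls = 0
        while idle_polls < 50:                 # crude termination heuristic
            with self.lock:
                task = self.pool.pop() if self.pool else None
            if task is None:
                task = self.steal()
            if task is None:
                idle_polls += 1
                time.sleep(0.001)
                continue
            idle_polls = 0
            children = expand(task)
            self.expanded += 1
            with self.lock:
                self.pool.extend(children)

workers = []
workers.extend(Worker(i, workers) for i in range(4))
workers[0].pool.append(0)                      # root of the search tree
for w in workers:
    w.start()
for w in workers:
    w.join()
print([w.expanded for w in workers])
```

The local pool is used LIFO while thefts take the oldest entry, a common way to keep stolen work coarse-grained without any global workload statistics.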
Abstract:
Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of its size and dimensionality, it is necessary to have efficient clustering methods. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient K-Means techniques in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load balancing issues.
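As a minimal illustration of the KD-Tree direction mentioned above, the sketch below stores the current centroids in a KD-Tree so that each iteration assigns points via tree queries instead of exhaustive distance computations. It is a plain single-node sketch using scipy, not the parallel formulation developed in this work; function names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def kdtree_kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise centroids by sampling k distinct data points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Nearest-centroid assignment via a KD-Tree built over the k centroids,
        # instead of computing all n*k distances explicitly.
        tree = cKDTree(centroids)
        _, labels = tree.query(points)
        # Recompute centres of mass; keep the old centroid if a cluster empties.
        new_centroids = centroids.copy()
        for j in range(k):
            members = points[labels == j]
            if len(members):
                new_centroids[j] = members.mean(axis=0)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Tiny usage example on synthetic 2-D data.
data = np.random.default_rng(1).normal(size=(1000, 2))
centres, labels = kdtree_kmeans(data, k=5)
print(centres.shape, labels.shape)
```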
Abstract:
Two ongoing projects at ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models will be discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with a dense and accurate floodplain topography together with vegetation heights for parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model’s finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model’s finite element mesh to reflect floodplain features such as hedges and trees having different frictional properties to their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data.

The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels. The higher-level processing includes a channel repair mechanism.
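As a rough illustration of the low-level step (multi-scale edge detection on a LiDAR surface), the following numpy/scipy sketch combines gradient-magnitude responses over several Gaussian smoothing scales. The scales, threshold and synthetic input are assumptions for illustration only; the actual channel-extraction system described above is considerably more elaborate.

```python
import numpy as np
from scipy import ndimage

def multiscale_edges(dem, scales=(1.0, 2.0, 4.0), threshold=0.5):
    """Combine gradient-magnitude edge responses over several smoothing scales.

    dem: 2-D array of LiDAR elevations (illustrative input).
    Returns a boolean edge map where any scale responds strongly.
    """
    edge_map = np.zeros(dem.shape, dtype=bool)
    for sigma in scales:
        smoothed = ndimage.gaussian_filter(dem, sigma)
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        magnitude = np.hypot(gx, gy)
        edge_map |= magnitude > threshold
    return edge_map

# Synthetic example: a shallow channel cut into a flat surface.
dem = np.zeros((100, 100))
dem[:, 45:55] -= 1.0
print(multiscale_edges(dem).sum())
```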
Abstract:
An eddy current testing system consists of a multi-sensor probe, a computer, a special expansion card, and software for data collection and analysis. The probe incorporates an excitation coil and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.
Abstract:
A simple and practical technique is described for assessing the risks, that is, the potential for error and consequent loss, in software system development, based on information acquired during a requirements engineering phase. The technique uses a goal-based requirements analysis as a framework to identify and rate a set of key issues in order to arrive at estimates of the feasibility and adequacy of the requirements. The technique is illustrated, and its application to a real systems development project is shown, demonstrating how problems in this project could have been identified earlier, thereby avoiding costly additional work and unhappy users.
Abstract:
One among the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been largely explored in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested. Two approaches are based on a static partitioning of the data set, and a third solution incorporates a dynamic load balancing policy.
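For the distributed-memory setting described above, the mpi4py sketch below shows the baseline communication pattern: each rank holds a static partition of the data, computes local per-cluster sums and counts, and the centroids are updated with an allreduce. The KD-Tree acceleration and the dynamic load-balancing policy from the abstract are deliberately omitted; sizes and constants are illustrative.

```python
# Run with e.g.: mpiexec -n 4 python parallel_kmeans.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

K, DIM, ITERS = 4, 2, 20
rng = np.random.default_rng(rank)
local_points = rng.normal(size=(2500, DIM))   # static partition held by this rank

# Rank 0 picks the initial centroids and broadcasts them to all ranks.
centroids = np.empty((K, DIM))
if rank == 0:
    centroids = local_points[:K].copy()
comm.Bcast(centroids, root=0)

for _ in range(ITERS):
    # Assign each local point to its nearest centroid.
    dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Local partial sums and counts per cluster.
    local_sums = np.zeros((K, DIM))
    local_counts = np.zeros(K)
    for j in range(K):
        members = local_points[labels == j]
        local_sums[j] = members.sum(axis=0)
        local_counts[j] = len(members)
    # Global reduction: every rank obtains the same new centroids.
    global_sums = np.zeros_like(local_sums)
    global_counts = np.zeros_like(local_counts)
    comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)
    nonempty = global_counts > 0
    centroids[nonempty] = global_sums[nonempty] / global_counts[nonempty, None]

if rank == 0:
    print(centroids)
```

With a static partition like this, ranks whose points fall in denser regions do more assignment work per iteration, which is exactly the load-imbalance problem the abstract addresses.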
Abstract:
This paper presents the results of the application of a parallel Genetic Algorithm (GA) to the design of a Fuzzy Proportional Integral (FPI) controller for active queue management on Internet routers. Active Queue Management (AQM) policies are router queue management policies that allow the detection of network congestion, the notification of such occurrences to the hosts on the network borders, and the adoption of a suitable control policy. Two different parallel implementations of the genetic algorithm are adopted to determine an optimal configuration of the FPI controller parameters. Finally, the results of several experiments carried out on a forty-node cluster of workstations are presented.
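A common way to parallelise such a GA is to farm out the costly fitness evaluations to a pool of workers. The sketch below does this with Python's multiprocessing; the bit-counting fitness function is a stand-in for the FPI/AQM controller evaluation, which is not reproduced here, and all parameters are illustrative.

```python
import random
from multiprocessing import Pool

GENES, POP, GENERATIONS = 16, 40, 30

def fitness(chromosome):
    # Stand-in objective: maximise the number of 1-bits.
    # In the paper this would be a costly simulation of the FPI controller.
    return sum(chromosome)

def evolve():
    rng = random.Random(0)
    population = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    with Pool() as pool:
        for _ in range(GENERATIONS):
            # Fitness evaluations run in parallel across worker processes.
            scores = pool.map(fitness, population)
            ranked = [c for _, c in sorted(zip(scores, population), reverse=True)]
            parents = ranked[:POP // 2]
            # Single-point crossover plus occasional bit-flip mutation.
            children = []
            while len(children) < POP - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, GENES)
                child = a[:cut] + b[cut:]
                if rng.random() < 0.1:
                    i = rng.randrange(GENES)
                    child[i] ^= 1
                children.append(child)
            population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```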
Abstract:
This paper presents a parallel genetic algorithm for the Steiner Problem in Networks (SPN). Several previous papers have proposed the adoption of GAs and other metaheuristics to solve the SPN, demonstrating the validity of their approaches. This work differs from them in two main respects: the size and characteristics of the networks adopted in the experiments, and the aim from which it originated. The main motivation was to build a term of comparison for validating deterministic and computationally inexpensive algorithms which can be used in practical engineering applications, such as multicast transmission in the Internet. At the same time, the large size of our sample networks requires the adoption of a parallel implementation of the Steiner GA, which is able to deal with such large problem instances.
Abstract:
A parallel hardware random number generator for use with a VLSI genetic algorithm processing device is proposed. The design uses a systolic array of mixed congruential random number generators. The generators are constantly reseeded with the outputs of the preceding generators to avoid significant biasing of the randomness of the array, which would result in longer times for the algorithm to converge to a solution.

1 Introduction

In recent years there has been a growing interest in developing hardware genetic algorithm devices [1, 2, 3]. A genetic algorithm (GA) is a stochastic search and optimization technique which attempts to capture the power of natural selection by evolving a population of candidate solutions by a process of selection and reproduction [4]. In keeping with the evolutionary analogy, the solutions are called chromosomes, with each chromosome containing a number of genes. Chromosomes are commonly simple binary strings, the bits being the genes.
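The reseeding arrangement can be mimicked in software: an array of mixed (linear) congruential generators in which, at every step, each cell takes the output of the preceding cell as its next seed. The constants and the ring-shaped reseeding below are illustrative assumptions; a real device would implement this as systolic hardware, not Python.

```python
# Mixed congruential generator: x' = (a*x + c) mod m, with c != 0.
A, C, M = 1664525, 1013904223, 2**32

def step_array(states):
    """Advance every generator once, then reseed each cell from its predecessor."""
    outputs = [(A * x + C) % M for x in states]
    # Ring reseeding: cell i takes the output of cell i-1, and cell 0 wraps
    # around to the last cell, so no single sequence dominates the array.
    reseeded = [outputs[-1]] + outputs[:-1]
    return outputs, reseeded

states = list(range(1, 9))            # 8-cell array with arbitrary initial seeds
for _ in range(5):
    outputs, states = step_array(states)
print(outputs)
```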
Abstract:
A parallel convolutional coder (104) comprising: a plurality of serial convolutional coders (108), each having a register with a plurality of memory cells and a plurality of serial coder outputs; input means (120) from which data can be transferred in parallel into the registers; and a parallel coder output (124) comprising a plurality of output memory cells, each of which is connected to one of the serial coder outputs so that data can be transferred in parallel from all of the serial coders to the parallel coder output.
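For context, a minimal software model of a single rate-1/2 serial convolutional coder (constraint length 3, generators 7 and 5 octal) and a bank of such coders encoding sub-blocks side by side is sketched below. The claim above concerns the hardware wiring of registers and outputs, which this Python model only loosely mirrors; the generator polynomials and block-splitting scheme are assumptions for illustration.

```python
# One serial convolutional coder: rate 1/2, constraint length 3,
# generator polynomials 0o7 (111) and 0o5 (101).
def serial_encode(bits, g1=0b111, g2=0b101):
    state = 0                                        # models the shift register
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity for generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity for generator 2
    return out

# A "parallel" bank: each serial coder encodes one sub-block loaded in parallel.
def parallel_encode(bits, n_coders=4):
    size = -(-len(bits) // n_coders)                 # ceiling division
    blocks = [bits[i:i + size] for i in range(0, len(bits), size)]
    return [serial_encode(block) for block in blocks]

print(parallel_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```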