686 results for Processors
Abstract:
Processing of highly perishable non-storable crops, such as tomato, is typically promoted for two reasons: as a way of absorbing excess supply, particularly during gluts that result from predominantly rainfed cultivation; and to enhance the value chain through a value-added process. For Ghana, improving domestic tomato processing would also reduce the country’s dependence on imported tomato paste and so conserve foreign exchange reserves, as well as provide employment and development opportunities in the country’s poor rural areas. Many reports simply repeat the mantra that processing offers a way of buying up the glut. Yet the reality is that the “tomato gluts,” an annual feature of the local press, occur only for a few weeks of the year, and are almost always a result of large volumes of rainfed local varieties unsuitable for processing entering the fresh market at the same time, not the improved varieties that could be used by the processors. For most of the year, the price of tomatoes suitable for processing is above the breakeven price for tomato processors, given the competition from imports. Improved varieties (such as Pectomech) that are suitable for processing are also preferred by consumers and achieve a premium price over the local varieties.
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
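The “pack four columns and compute them simultaneously” step corresponds to ordinary 4-wide SIMD. A minimal sketch of the idea, assuming SSE intrinsics and an invented per-level update (the layout names and the update rule are illustrative; the real radiation physics is far more involved):

```c
/* Hypothetical sketch (not the FAMOUS source): packing four air
 * columns into SSE vectors so one instruction updates all four.
 * NLEV and the per-level update are illustrative only. */
#include <xmmintrin.h>

#define NLEV 32               /* assumed number of vertical levels    */

typedef struct {
    __m128 temp[NLEV];        /* temperatures of 4 columns, per level */
    __m128 flux[NLEV];        /* radiative flux of 4 columns          */
} column_quad;                /* four columns computed in lock-step   */

/* One illustrative SIMD step: update all 4 columns' fluxes at once. */
static void radiate_quad(column_quad *q, float coeff)
{
    const __m128 c = _mm_set1_ps(coeff);
    for (int lev = 0; lev < NLEV; ++lev)
        q->flux[lev] = _mm_add_ps(q->flux[lev],
                                  _mm_mul_ps(c, q->temp[lev]));
}
```

In this organisation, each `column_quad` is the unit of work handed to the task queue, so thread-level parallelism (the thread pool) and data-level parallelism (the 4-wide vectors) compose naturally.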
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
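For context, the conventional parallel k-means formulation that the dynamic-group protocol is designed to avoid performs a global reduction in every iteration. A sketch of that baseline (dimensions and cluster count are illustrative constants):

```c
/* Baseline parallel k-means step with global communication; the
 * paper's dynamic-group protocol replaces this MPI_Allreduce with
 * communication inside dynamic process groups. D and K illustrative. */
#include <mpi.h>

#define D 2                                      /* dimensions */
#define K 4                                      /* clusters   */

void kmeans_step(const double (*pts)[D], int n_local, double (*cent)[D])
{
    double sum[K][D] = {{0}}, gsum[K][D];
    long   cnt[K] = {0}, gcnt[K];

    for (int i = 0; i < n_local; ++i) {          /* assign local points */
        int best = 0; double bd = 1e300;
        for (int k = 0; k < K; ++k) {
            double d = 0;
            for (int j = 0; j < D; ++j) {
                double t = pts[i][j] - cent[k][j];
                d += t * t;
            }
            if (d < bd) { bd = d; best = k; }
        }
        cnt[best]++;
        for (int j = 0; j < D; ++j) sum[best][j] += pts[i][j];
    }
    /* the global communication pattern the paper seeks to remove: */
    MPI_Allreduce(sum, gsum, K * D, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(cnt, gcnt, K, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    for (int k = 0; k < K; ++k)                  /* recompute centroids */
        if (gcnt[k] > 0)
            for (int j = 0; j < D; ++j) cent[k][j] = gsum[k][j] / gcnt[k];
}
```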
Abstract:
Rocket species have been shown to have very high concentrations of glucosinolates and flavonols, which have numerous positive health benefits with regular consumption. In this review we highlight how breeders and processors of rocket species can utilize genomic and phytochemical research to improve varieties and enhance the nutritive benefits to consumers. Plant breeders are increasingly looking to new technologies such as HPLC, UPLC, LC-MS and GC-MS to screen populations for their phytochemical content to inform plant selections. Here we collate the research that has been conducted to date in rocket, and summarise all glucosinolate and flavonol compounds identified in the species. We emphasize the importance of the broad screening of populations for phytochemicals and myrosinase degradation products, as well as unique traits that may be found in underutilized gene bank resources. We also stress that collaboration with industrial partners is becoming essential for long-term plant breeding goals through research.
Abstract:
Globalization, either directly or indirectly (e.g. through structural adjustment reforms), has called for profound changes in the previously existing institutional order. Some changes adversely impacted the production and market environment of many coffee producers in developing countries, resulting in more risky and less remunerative coffee transactions. This paper focuses on the customization of a tropical commodity, fair-trade coffee, as an approach to mitigating the effects of worsened market conditions for small-scale coffee producers in less developed countries. Fair-trade labeling is viewed as a form of “de-commodification” of coffee through product differentiation on ethical grounds. This is significant not only as a solution to the market failure caused by pervasive information asymmetries along the supply chain, but also as a means of revitalizing the agricultural-commodity-based trade of less developed countries (LDCs) that has been languishing under globalization. More specifically, fair-trade is an example of how the same strategy adopted by producers/processors in developed countries (i.e. the sequence product differentiation - institutional certification - advertisement) can be used by LDC producers to increase the reputation content of their outputs by transforming them from mere commodities into “decommodified” (i.e. customized and more reputed) goods. The resulting segmentation of the world coffee market makes it possible to meet the demand of consumers with a preference for this “(ethically) customized” coffee and to transfer a share of the accruing economic rents backward to the fair-trade coffee producers in LDCs. It should be stressed, however, that this outcome cannot be taken for granted, since investments are needed to promote the required institutional innovations. In Italy, fair-trade coffee (FTC) is a niche market with very few private brands selling this product. However, an increase in FTC market share could be a big commercial opportunity for farmers in LDCs and other economic agents involved along the international coffee chain. Hence, this research explores consumers’ knowledge of labels promoting quality products, coffee consumption habits, brand loyalty, willingness to pay, and market segmentation according to the heterogeneity of preferences for coffee products. The latter was assessed by developing a D-efficient design whose stimuli were refined during two focus groups.
Abstract:
The induction of classification rules that generalise to previously unseen examples is one of the most important data mining tasks in science as well as in commercial applications. In order to reduce the influence of noise in the data, ensemble learners are often applied. However, most ensemble learners are based on decision tree classifiers, which are affected by noise. The Random Prism classifier has recently been proposed as an alternative to the popular Random Forests classifier, which is based on decision trees. Random Prism is based on the Prism family of algorithms, which is more robust to noise. However, like most ensemble classification approaches, Random Prism does not scale well on large training data. This paper presents a thorough discussion of Random Prism and a recently proposed parallel version of it called Parallel Random Prism. Parallel Random Prism is based on the MapReduce programming paradigm. The paper provides, for the first time, a novel theoretical analysis of the proposed technique and an in-depth experimental study showing that Parallel Random Prism scales well with the number of training examples, the number of data features and the number of processors. The expressiveness of the decision rules that our technique produces makes it a natural choice for Big Data applications where informed decision making increases the user’s trust in the system.
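The bagging-style map/reduce decomposition described above can be sketched generically. In the sketch below the Prism base learner is replaced by a trivial stub so the code is self-contained; in Parallel Random Prism each member's training would run as a map task and the vote as the reduce step (the function names are hypothetical, not APIs from the paper):

```c
/* Schematic sketch of a bagging-style map/reduce ensemble; the stub
 * "learner" merely records the majority class of its bootstrap
 * sample, standing in for a Prism rule inducer. */
#include <stdlib.h>

typedef struct { int majority; } model;       /* stub for a Prism ruleset */

/* "Map": each ensemble member trains on its own bootstrap sample. */
static model train_member(const int *y, int n, int n_classes)
{
    model m = { 0 };
    int best = 0, *cnt = calloc(n_classes, sizeof *cnt);
    for (int i = 0; i < n; ++i)
        cnt[y[rand() % n]]++;                 /* sample with replacement */
    for (int c = 1; c < n_classes; ++c)
        if (cnt[c] > cnt[best]) best = c;
    m.majority = best;
    free(cnt);
    return m;
}

/* "Reduce": majority vote over the e trained members. */
static int ensemble_vote(const model *members, int e, int n_classes)
{
    int best = 0, *votes = calloc(n_classes, sizeof *votes);
    for (int i = 0; i < e; ++i) votes[members[i].majority]++;
    for (int c = 1; c < n_classes; ++c)
        if (votes[c] > votes[best]) best = c;
    free(votes);
    return best;
}
```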
Abstract:
We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each one of the subsequent maximal cliques this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
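As a point of reference for the generation step, a maximal path of a DAG is one that cannot be extended at either end, i.e. it runs from an in-degree-0 vertex to an out-degree-0 vertex. A minimal sequential sketch of the enumeration (the paper's contribution is performing it in O(log p) communication rounds on the BSP/CGM model; the adjacency matrix here is an invented example):

```c
/* Sequential reference: enumerate all maximal paths of a small DAG,
 * i.e. paths from a source (in-degree 0) to a sink (out-degree 0). */
#include <stdio.h>

#define N 5
static const int adj[N][N] = {        /* adj[u][v] = 1 iff edge u->v  */
    {0,1,1,0,0}, {0,0,0,1,0}, {0,0,0,1,1}, {0,0,0,0,1}, {0,0,0,0,0}
};

static void extend(int path[], int len)
{
    int u = path[len - 1], extended = 0;
    for (int v = 0; v < N; ++v)
        if (adj[u][v]) {
            extended = 1;
            path[len] = v;
            extend(path, len + 1);
        }
    if (!extended) {                  /* u is a sink: path is maximal */
        for (int i = 0; i < len; ++i) printf("%d ", path[i]);
        printf("\n");
    }
}

int main(void)
{
    int path[N];
    for (int s = 0; s < N; ++s) {     /* start from every source      */
        int indeg = 0;
        for (int u = 0; u < N; ++u) indeg += adj[u][s];
        if (indeg == 0) { path[0] = s; extend(path, 1); }
    }
    return 0;
}
```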
Abstract:
The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms, in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising, as the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits it provides in facilitating the use of grid computing. These benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.
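The wavefront idea behind all three applications: in a dynamic-programming matrix where cell (i,j) depends only on its upper, left and upper-left neighbours, all cells on one anti-diagonal are independent, so they can be computed in parallel with neighbour-only communication. A minimal sketch with an invented placeholder recurrence (not one of the three benchmark problems):

```c
/* Wavefront traversal of a dependency matrix: each anti-diagonal d
 * is a set of mutually independent cells, i.e. one parallel step. */
#define R 8
#define C 8

void wavefront(long m[R][C])
{
    for (int d = 0; d < R + C - 1; ++d) {       /* anti-diagonals      */
        for (int i = 0; i < R; ++i) {           /* cells of diagonal d */
            int j = d - i;
            if (j < 0 || j >= C) continue;
            long up   = i > 0 ? m[i - 1][j] : 0;
            long left = j > 0 ? m[i][j - 1] : 0;
            long diag = (i > 0 && j > 0) ? m[i - 1][j - 1] : 0;
            m[i][j] = up + left + diag + 1;     /* placeholder rule    */
        }
    }
}
```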
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the usage of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model the neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input and speedups of 9 for a network with 200k neurons and 20M neuronal connections, in a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
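A hedged sketch of the one-thread-per-neuron mapping described above: each CUDA thread advances its neuron's membrane potential by one explicit Euler step of the Hodgkin-Huxley current balance. The gating variables m, h, n are assumed to be updated elsewhere, the constants are the classic HH values, and `I_syn` stands in for input gathered by the CPU from neurons on other GPUs (all names here are illustrative, not the paper's code):

```cuda
#define GNA 120.0f  /* Na conductance, mS/cm^2 */
#define GK   36.0f  /* K conductance           */
#define GL    0.3f  /* leak conductance        */
#define ENA  50.0f  /* reversal potentials, mV */
#define EK  -77.0f
#define EL  -54.4f
#define CM    1.0f  /* membrane capacitance, uF/cm^2 */

__global__ void hh_step(float *V, const float *m, const float *h,
                        const float *n, const float *I_syn,
                        int n_neurons, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one neuron per thread */
    if (i >= n_neurons) return;

    float v  = V[i];
    float mi = m[i], hi = h[i], ni = n[i];
    float I_Na = GNA * mi * mi * mi * hi * (v - ENA);
    float I_K  = GK  * ni * ni * ni * ni * (v - EK);
    float I_L  = GL  * (v - EL);

    /* explicit Euler step of C_m dV/dt = I_syn - I_Na - I_K - I_L */
    V[i] = v + dt * (I_syn[i] - I_Na - I_K - I_L) / CM;
}
```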
Abstract:
The problem of scheduling a parallel program, represented by a weighted directed acyclic graph (DAG), onto a set of homogeneous processors so as to minimize the completion time of the program has been studied extensively as an academic optimization problem that arises when optimizing the execution time of parallel algorithms on parallel computers. In this paper, we propose an application of Ant Colony Optimization (ACO) to a multiprocessor scheduling problem (MPSP). In the MPSP, no preemption is allowed and each operation demands a setup time on the machines; the problem seeks a schedule that minimizes the total completion time. Since exact solution methods are infeasible for most problems of this kind, we rely on heuristics to find solutions. In this novel heuristic search approach to multiprocessor scheduling, based on the ACO algorithm, a collection of agents cooperates to explore the search space effectively. A computational experiment is conducted on a suite of benchmark applications. Comparing our algorithm's results with those of previous heuristic algorithms shows that the ACO algorithm exhibits competitive performance with a small error ratio.
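ACO variants differ in their details, but the canonical transition rule (which the paper may adapt) selects the next task j for ant k at node i from the feasible set with probability proportional to pheromone strength and heuristic desirability:

```latex
% Canonical ACO transition rule (the paper's exact variant may differ):
% tau is the pheromone trail, eta the heuristic desirability, and
% N_i^k the feasible choices of ant k at node i.
\[
  p_{ij}^{k} \;=\;
  \frac{\left[\tau_{ij}\right]^{\alpha}\left[\eta_{ij}\right]^{\beta}}
       {\sum_{l \in N_i^{k}} \left[\tau_{il}\right]^{\alpha}\left[\eta_{il}\right]^{\beta}},
  \qquad j \in N_i^{k},
\]
% with the usual evaporation-plus-deposit pheromone update:
\[
  \tau_{ij} \;\leftarrow\; (1-\rho)\,\tau_{ij} \;+\; \sum_{k}\Delta\tau_{ij}^{k}.
\]
```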
Abstract:
In order to achieve high performance, we need an efficient scheduling of a parallel program onto the processors of a multiprocessor system that minimizes the entire execution time. This multiprocessor scheduling problem can be stated as finding a schedule for a general task graph to be executed on a multiprocessor system so that the schedule length is minimized [10]; this scheduling problem is known to be NP-hard. In multiprocessor task scheduling, we have a number of CPUs on which a number of tasks are to be scheduled such that the program's execution time is minimized. According to [10], the task scheduling problem is a key factor for a parallel multiprocessor system to gain better performance. A task can be partitioned into a group of subtasks and represented as a DAG (Directed Acyclic Graph), so the problem can be stated as finding a schedule for a DAG to be executed on a parallel multiprocessor system so that the schedule length is minimized. This helps to reduce processing time and increase processor utilization. The aim of this thesis work is to check and compare the results obtained by the Bee Colony algorithm against the best known results already generated in the multiprocessor task scheduling domain.
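Whatever metaheuristic proposes the task-to-processor assignment, candidate schedules are scored by their schedule length (makespan). A minimal sketch of that evaluation, assuming tasks arrive in a precedence-respecting (topological) order and ignoring communication delays (generic illustration, not the thesis' evaluation code):

```c
/* Makespan of a candidate DAG schedule: each task waits for its
 * predecessors to finish and for its assigned processor to be free. */
#define NT 6                       /* tasks       */
#define NP 3                       /* processors  */

typedef struct { int pred[NT]; int n_pred; int proc; int cost; } task;

int makespan(const task t[NT])    /* t[] in topological order */
{
    int finish[NT]  = {0};         /* finish time of each task     */
    int free_at[NP] = {0};         /* when each processor is free  */

    for (int i = 0; i < NT; ++i) {
        int ready = free_at[t[i].proc];
        for (int p = 0; p < t[i].n_pred; ++p)
            if (finish[t[i].pred[p]] > ready)
                ready = finish[t[i].pred[p]];
        finish[i] = ready + t[i].cost;
        free_at[t[i].proc] = finish[i];
    }
    int ms = 0;                    /* schedule length = max finish  */
    for (int i = 0; i < NT; ++i) if (finish[i] > ms) ms = finish[i];
    return ms;
}
```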
Abstract:
Applying optimization algorithms to PDE models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires computationally expensive simulations, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is Parallel Stochastic Radial Basis Function (RBF), designed for the global optimization of computationally expensive functions with multiple local optima; it does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated on all the evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. Candidate evaluation points are selected according to their estimated function value and their distance from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so Parallel Stochastic RBF is a different algorithm from its serial ancestor. The algorithm was applied to two Groundwater Superfund Remediation sites: the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In both studies, the formulation adopted treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes per simulation of a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that with up to 24 parallel processors, the results of the Parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
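The selection criterion alluded to above is commonly written as a weighted score over candidate points, trading off the surrogate's prediction against distance to already-evaluated points; a generic form of this trade-off is sketched below (the concrete weights and normalizations used in Parallel Stochastic RBF may differ):

```latex
% Generic weighted-score criterion for picking the next evaluation
% point x from a candidate set; s_n is the current RBF surrogate and
% d(x) = min_i ||x - x_i|| is the distance to evaluated points.
\[
  W(x) \;=\; w\,\frac{s_n(x) - s_{\min}}{s_{\max} - s_{\min}}
        \;+\; (1-w)\,\frac{d_{\max} - d(x)}{d_{\max} - d_{\min}},
  \qquad 0 \le w \le 1.
\]
% The next point minimizes W(x): a low predicted value and a large
% distance from previously evaluated points are both rewarded.
```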
Abstract:
We outline possible actions to be adopted by the European Union to ensure a better share of total coffee revenues for producers in developing countries. Ultimately, this translates into producers receiving a fair price for the commodity they supply, i.e., a market price that results from fair market conditions in the whole coffee producing chain. We plead for the proposals to take place in the consuming countries, as market conditions on the consuming-countries side of the coffee producing chain are not fair; market failures and ingenious distortions are responsible for the enormous asymmetry of gains between the two sides. The first of three proposals for consumer-government-supported actions is to help create domestic trading companies for achieving higher export volumes. These trading companies would be associated with roasters that, depending on the final product envisaged, could perform the roasting in the country and export the roasted – and sometimes ground – coffee, breaking the increasing importers-exporters verticalisation. Another measure would be the systematic provision of basic intelligence on the consuming markets. Statistics of the quantities sold according to mode of consumption, by broad “categories of coffee” and point of sale, could be produced for each country. They should be matched to the exports/imports data and complemented by (aggregate) country statistics on the roasting sector. This would greatly help producing countries design their own market and production strategies. Finally, a fund backed by a common EU tax on roasted coffee, created within the single-market tax harmonisation programme, is suggested. This European Coffee Fund would have two main projects. Together with the ICO, it would launch an advertising campaign on coffee in general, aimed at counterbalancing the increasing “brandification” of coffee. Basic information on the characteristics of the plant and the drink would be conveyed, and the effort could be extended to the future Eastern European members of the Union, as a further assurance that EU processors would not have overly privileged access to these new markets. A quality label for every coffee sold in the Union could complement this initiative, helping to create a level playing field for products from outside the EU. A second project would consist of a careful diversification effort, to take place in selected producing countries.
Abstract:
The New Institutional Economics approach is used in the study of Brazilian agro-industrial systems, but there are still few studies of the relationships between rural producers and processing industries. The objective of this study is to assess the alignment between transaction characteristics and the governance modes adopted in the sale of natural rubber from rural producers to primary processing industries. The results are confronted with the predictions of Transaction Cost Economics, which foresees the adoption of the most efficient governance mode, contingent on the transaction environment. This research uses a pioneering approach for the sector. The first part consists of a qualitative exploration of the transactions between rural producers and processing plants and a description of the elements that make up the agro-industrial system of natural rubber in the State of São Paulo. The results of this stage are used to refine the hypotheses of the quantitative stage, which relate the adopted governance modes to investments in specific assets (such as the planting of rubber trees) and to the frequency, uncertainty, and technical assistance associated with the transaction. Despite limitations due to the small sample size, the results suggest that the quantity of rubber traded positively influences the adoption of more coordinated governance modes and that the recurrence of transactions is probably associated with relational contracts. The endogeneity hypothesis is rejected.
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
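The “p slower processors beat one fast processor” principle rests on the standard CMOS dynamic-power relation; a hedged sketch of that argument (the paper's exact power model and the achievable voltage scaling may differ):

```latex
% Dynamic power of a CMOS processor scales with capacitance C,
% supply voltage V, and clock frequency f:
\[
  P_{\mathrm{dyn}} \;\propto\; C\,V^{2}\,f .
\]
% If the supply voltage can be lowered roughly in proportion to the
% clock frequency, running p processors at frequency f/p for the same
% total execution time costs about
\[
  p \cdot C \left(\frac{V}{p}\right)^{2} \frac{f}{p}
  \;=\; \frac{C\,V^{2} f}{p^{2}} ,
\]
% i.e. up to a p^2 reduction in dynamic power relative to a single
% processor running at frequency f.
```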