867 results for Emergent and Distributed Systems (IJPEDS)
Abstract:
Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation, plays a critical role in ensuring the creation, management, preservation and use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems, as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record or reformulated per a user’s request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are being created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of requirements generated by InterPARES 1. Because of the need to communicate the work of InterPARES in a meaningful way across not only other disciplines, but also different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about and developments in recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as “entities” by InterPARES 2, but not to be confused with the term “entities” as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities that are carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems that were conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or “potential records.” To be determined to be records, the entities had to meet the criteria outlined by archival theory – they had to have a fixed documentary format and stable content. It was not sufficient that they be considered to be or treated as records by the creator. “Potential records” is a new construct that indicates that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group, therefore, addresses the metadata needs for all three categories of entities. Finally, since “metadata” as a term is used today so ubiquitously and in so many different ways by different communities that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. It also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.
Abstract:
Population growth, as well as nutrient mining, has contributed to low agricultural productivity in Sub-Saharan Africa (SSA). A plethora of technologies to boost agricultural production have been developed, but the dissemination of these agricultural innovations and subsequent uptake by smallholder farmers has remained a challenge. Scientists and philanthropists have adopted the Integrated Soil Fertility Management (ISFM) paradigm as a means to promote sustainable intensification of African farming systems. This comparative study aimed: 1) To assess the efficacy of Agricultural Knowledge and Innovation Systems (AKIS) in East (Kenya) and West (Ghana) Africa in the communication and dissemination of ISFM (Study I); 2) To investigate how specifically soil quality, and more broadly socio-economic status and institutional factors, influence farmer adoption of ISFM (Study II); and 3) To assess the effect of ISFM on maize yield and total household income of smallholder farmers (Study III). To address these aims, a mixed methodology approach was employed for study I. AKIS actors were subjected to social network analysis methods and in-depth interviews. Structured questionnaires were administered to 285 farming households in Tamale and 300 households in Kakamega selected using a stratified random sampling approach. There was a positive relationship between complete ISFM awareness among farmers and weak knowledge ties to both formal and informal actors at both research locations. The Kakamega AKIS revealed a relationship between complete ISFM awareness among farmers and strong knowledge ties to formal actors, implying that further integration of formal actors with farmers’ local knowledge is crucial for agricultural development. The structured questionnaire was also utilized to address study II. Soil samples (0-20 cm depth) were drawn from 322 (Tamale, Ghana) and 459 (Kakamega, Kenya) maize plots and analysed non-destructively for various soil fertility indicators. Ordinal regression modeling was applied to assess the cumulative adoption of ISFM. According to model estimates, soil carbon seemed to preclude farmers from intensifying input use in Tamale, whereas in Kakamega it spurred complete adoption. This varied response by farmers to soil quality conditions is multifaceted. From the Tamale perspective, it is consistent with farmers’ tendency to judiciously allocate scarce resources. Viewed from the Kakamega perspective, it points to a need for farmers here to intensify agricultural production in order to foster food security. In Kakamega, farmers with more acidic soils were more likely to adopt ISFM. Other household and farm-level factors necessary for ISFM adoption included off-farm income, livestock ownership, farmer associations, and market inter-linkages. Finally, in study III a counterfactual model was used to calculate the difference in outcomes (yield and household income) attributable to the treatment (ISFM adoption) in order to estimate its causal effects. Adoption of ISFM contributed to a yield increase of 16% in both Tamale and Kakamega. The innovation affected total household income only in Tamale, where ISFM adopters had an income gain of 20%. This may be attributable to the different policy contexts under which the two sets of farmers operate.
The main recommendations underscored the need to: (1) improve the functioning of AKIS, (2) enhance farmer access to hybrid maize seed and credit, and (3) conduct additional multi-locational studies, as farmers operate under varying contexts.
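As a rough illustration of the ordinal modelling described for Study II, the following sketch fits a cumulative (proportional-odds) logit model of ISFM adoption intensity on a few household and soil covariates. The data are synthetic and the variable names are assumptions; statsmodels' OrderedModel merely stands in for whatever software the thesis actually used.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic, illustrative data only: each row is a household; the outcome is
# the number of ISFM components adopted (0 = none .. 3 = complete adoption).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "soil_carbon": rng.normal(1.5, 0.4, n),      # assumed % soil organic carbon
    "soil_ph": rng.normal(5.8, 0.5, n),
    "off_farm_income": rng.integers(0, 2, n),    # 1 = household has off-farm income
    "livestock": rng.integers(0, 2, n),
})
latent = (0.8 * df.soil_carbon - 0.5 * df.soil_ph
          + 0.6 * df.off_farm_income + 0.4 * df.livestock
          + rng.logistic(0, 1, n))
df["adoption"] = pd.Categorical(pd.cut(latent, bins=4, labels=False), ordered=True)

# Cumulative (ordered) logit of adoption intensity on household/soil covariates.
model = OrderedModel(df["adoption"],
                     df[["soil_carbon", "soil_ph", "off_farm_income", "livestock"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```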
Abstract:
Yacon, Smallanthus sonchifolius, an Andean species, is a rich source of dietetic oligofructans with low glucose content, proteins and phenolic compounds. These constituents have shown efficacy in the prevention of diet-related chronic diseases, including gastrointestinal disorders and diabetes [1,2]. Yacon is part of a research program at the National Center for Natural Products Research (NCNPR) and University of Mississippi Field Station to develop new alternative root crops for Mississippi while attempting to improve the diet of low-income families. Yacon can be easily propagated by cuttings. Virus and nematode infections have been reported on plants propagated by cuttings in Brazil, a country that has adopted Yacon as a specialty crop [3]. We have developed two culture systems, autotrophic and heterotrophic, to produce healthy plants. Herein we describe the presence of endophytic bacteria in micropropagated Yacon. In auxin-free media, new roots were induced. Over a 15-day period, the average root induction per explant was 5.45 to 8.75 under autotrophic and heterotrophic cultures, respectively. Root length varied between 3 and 60 mm. The presence of root hairs and lateral roots was noticed only in autotrophic conditions. These beneficial bacteria were identified and chemically characterized. Acknowledgement: This research work was partially supported by the USDA/ARS Cooperative Research Agreement No. 58-6408-2-009. References: [1] Terada S. et al. (2006) Yakugaku Zasshi 126(8): 665-669. [2] Valentová K., Ulrichová J. (2003) Biomedical Papers 147: 119-130. [3] Mogor C. et al. (2003) Acta Horticulturae 597: 311-313.
Abstract:
Mesh topologies are important for large-scale peer-to-peer systems that use low-power transceivers. The Quality of Service (QoS) in such systems is known to decrease as the scale increases. We present a scalable approach for dissemination that exploits all the shortest paths between a pair of nodes and improves the QoS. Despite the presence of multiple shortest paths in a system, we show that these paths cannot be exploited by spreading the messages over the paths in a simple round-robin manner; nodes along one of these paths will always handle more messages than the nodes along the other paths. We characterize the set of shortest paths between a pair of nodes in regular mesh topologies and derive rules, using this characterization, to effectively spread the messages over all the available paths. These rules ensure that all the nodes that are at the same distance from the source handle roughly the same number of messages. We model the multihop propagation in the mesh topology as a multistage queuing network and present simulation results from a variety of scenarios, including link failures and propagation irregularities, to reflect real-world characteristics. Our method achieves improved QoS in all these scenarios.
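To make the load-spreading idea concrete, here is a small sketch (not the authors' derived rules) for a regular 2D mesh: at each hop the next node is chosen with probability proportional to the number of remaining shortest paths through it, so every shortest path is equally likely and equidistant nodes see roughly equal traffic, unlike naive round-robin over a fixed path list.

```python
import random
from math import comb

def num_paths(dx: int, dy: int) -> int:
    """Number of shortest (monotone) paths across a dx-by-dy grid offset."""
    return comb(dx + dy, dx)

def forward_hop(cur, dst):
    """Pick the next hop so that every shortest path is equally likely.

    cur and dst are (x, y) mesh coordinates; movement is restricted to the
    directions that reduce the remaining distance (a regular mesh, no
    failures). Weighting by the number of paths through each candidate
    spreads traffic over all shortest paths instead of loading a single one.
    """
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    choices, weights = [], []
    if dx != 0:
        step = 1 if dx > 0 else -1
        choices.append((x + step, y))
        weights.append(num_paths(abs(dx) - 1, abs(dy)))
    if dy != 0:
        step = 1 if dy > 0 else -1
        choices.append((x, y + step))
        weights.append(num_paths(abs(dx), abs(dy) - 1))
    return random.choices(choices, weights=weights, k=1)[0]

# Example: route one message from (0, 0) to (3, 2).
node, path = (0, 0), [(0, 0)]
while node != (3, 2):
    node = forward_hop(node, (3, 2))
    path.append(node)
print(path)
```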
Abstract:
Computing has recently reached an inflection point with the introduction of multicore processors. On-chip thread-level parallelism is doubling approximately every other year. Concurrency lends itself naturally to allowing a program to trade performance for power savings by regulating the number of active cores; however, in several domains, users are unwilling to sacrifice performance to save power. We present a prediction model for identifying energy-efficient operating points of concurrency in well-tuned multithreaded scientific applications and a runtime system that uses live program analysis to optimize applications dynamically. We describe a dynamic phase-aware performance prediction model that combines multivariate regression techniques with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. Using our model, we develop a prediction-driven phase-aware runtime optimization scheme that throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each program phase. The use of prediction reduces the overhead of searching the optimization space while achieving near-optimal performance and power savings. A thorough evaluation of our approach shows a reduction in power consumption of 10.8 percent, simultaneous with an improvement in performance of 17.9 percent, resulting in energy savings of 26.7 percent.
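The sketch below illustrates the general shape of such a model under stated assumptions: a multivariate least-squares fit over hardware-counter samples predicts speedup per thread count, and the runtime stops increasing concurrency at the knee of the predicted curve. The counter choices, toy numbers and the 1.15 threshold are illustrative, not the paper's actual model.

```python
import numpy as np

# Toy training data: rows are phase samples at a known thread count.
# Features (illustrative, not the paper's actual counters):
#   [threads, L2_misses_per_kinstr, stall_cycle_ratio]
X = np.array([
    [1, 12.0, 0.30],
    [2, 13.5, 0.33],
    [4, 15.0, 0.38],
    [8, 21.0, 0.52],
    [16, 30.0, 0.70],
], dtype=float)
speedup = np.array([1.0, 1.9, 3.4, 5.1, 5.4])  # measured speedup per sample

# Multivariate linear regression (least squares) with an intercept term.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, speedup, rcond=None)

def predict_speedup(threads, l2_miss, stalls):
    return np.array([threads, l2_miss, stalls, 1.0]) @ coef

# Pick the operating point at the knee: stop adding threads once the
# marginal predicted speedup per doubling falls below a threshold.
best, prev = 1, predict_speedup(1, 12.0, 0.30)
for t, miss, stall in [(2, 13.5, 0.33), (4, 15.0, 0.38), (8, 21.0, 0.52), (16, 30.0, 0.70)]:
    cur = predict_speedup(t, miss, stall)
    if cur / prev < 1.15:   # diminishing returns: knee reached
        break
    best, prev = t, cur
print("chosen concurrency:", best)
```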
Abstract:
Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
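A schematic of the joint DCT/DVFS decision the abstract describes, with stand-in prediction functions (assumptions, not the paper's statistical models): enumerate (thread count, frequency) configurations, filter by an allowed performance loss, and keep the configuration with the lowest predicted energy.

```python
from itertools import product

THREADS = [4, 8, 16]          # DCT settings (illustrative)
FREQS_GHZ = [1.2, 1.8, 2.4]   # DVFS settings (illustrative)

def predicted_time(threads, freq, baseline=100.0):
    """Stand-in for a time model (assumption, not the paper's model)."""
    parallel_fraction = 0.9
    serial = baseline * (1 - parallel_fraction)
    return (serial + baseline * parallel_fraction / threads) * (2.4 / freq)

def predicted_power(threads, freq):
    """Stand-in power model: roughly linear in threads, superlinear in frequency."""
    return 20.0 + 6.0 * threads * (freq / 2.4) ** 2

def best_config(max_slowdown=1.05):
    t_ref = min(predicted_time(t, f) for t, f in product(THREADS, FREQS_GHZ))
    best, best_energy = None, float("inf")
    for t, f in product(THREADS, FREQS_GHZ):
        time = predicted_time(t, f)
        if time > max_slowdown * t_ref:      # respect the performance bound
            continue
        energy = time * predicted_power(t, f)
        if energy < best_energy:
            best, best_energy = (t, f), energy
    return best, best_energy

print(best_config())
```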
Abstract:
Peak power consumption is the first order design constraint of data centers. Though peak power consumption is rarely, if ever, observed, the entire data center facility must prepare for it, leading to inefficient usage of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility far below its theoretical peak value. Many approaches have been proposed to achieve that, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, which can lead to unnecessarily poor performance if an inappropriate scheme is used. This paper fills this gap by comparing for the first time five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond what has been previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism, in terms of energy efficiency, overhead, and predictable behavior. We show how these mechanisms can be combined in order to implement an optimal power capping mechanism which reduces the slowdown compared to the most widely used mechanism by up to 88%. Our results provide interesting insights regarding the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
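As one concrete member of the mechanism family being compared, a minimal feedback-style capping loop is sketched below: it steps the frequency down when measured node power exceeds the cap and back up when there is headroom. The sensor and actuator functions are placeholders (for example RAPL or cpufreq on Linux), not any specific mechanism evaluated in the paper.

```python
import time

P_STATES_GHZ = [2.4, 2.0, 1.6, 1.2]   # available frequencies, fastest first
POWER_CAP_W = 180.0

def read_node_power_watts():
    """Placeholder for a real power sensor (e.g., RAPL or a BMC reading)."""
    raise NotImplementedError

def set_frequency_ghz(freq):
    """Placeholder for the actuator (e.g., writing cpufreq or a vendor API)."""
    raise NotImplementedError

def capping_loop(interval_s=1.0, margin_w=5.0):
    level = 0
    set_frequency_ghz(P_STATES_GHZ[level])
    while True:
        power = read_node_power_watts()
        if power > POWER_CAP_W and level < len(P_STATES_GHZ) - 1:
            level += 1            # over the cap: step down one P-state
        elif power < POWER_CAP_W - margin_w and level > 0:
            level -= 1            # comfortable headroom: step back up
        set_frequency_ghz(P_STATES_GHZ[level])
        time.sleep(interval_s)
```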
Abstract:
This article takes stock of the TRENDS project (Training Educators through Networks and Distributed Systems), implemented in Greece, Spain, France, Italy, Portugal and the United Kingdom. The review presents the project's objectives and its development, including the training model, technological aspects and the organization of the project. The project is also placed in the Spanish context, and the article concludes by underlining how useful this experience is for designing future, wider-reaching telematic initiatives that benefit adult education in both face-to-face and distance modes.
Abstract:
Metaheuristic techniques are known to solve optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions that are close to the optimum, without guaranteeing that the global optimum is found. Motivated by the difficulties in solving these problems, this work proposes the development of parallel hybrid methods using reinforcement learning, the GRASP metaheuristic and Genetic Algorithms. With these techniques, we aim to contribute to improved efficiency in obtaining good solutions. Rather than using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in a cooperative and competitive approach with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations developed in this study showed satisfactory results under both strategies, that is, cooperation and competition between the algorithms and cooperation and competition between groups. In some instances the global optimum was found, while in others these implementations came close to it. An analysis of the performance of the proposed approach was carried out, showing good results on the stated requirements and demonstrating the efficiency and speedup (gain in speed with parallel processing) of the implementations.
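A compressed sketch of the cooperation scheme described above (not the actual parallel implementation): a GRASP worker and a small genetic-algorithm worker improve solutions to a toy 0/1 knapsack instance and cooperate through a shared elite pool. A plain greedy-randomized constructor stands in here for the Q-learning seeding of initial solutions.

```python
import random

random.seed(1)
N = 40
VALUES = [random.randint(5, 30) for _ in range(N)]
WEIGHTS = [random.randint(1, 10) for _ in range(N)]
CAPACITY = sum(WEIGHTS) // 3

def fitness(sol):
    """Toy 0/1 knapsack objective: total value, heavily penalized over capacity."""
    value = sum(v for v, b in zip(VALUES, sol) if b)
    weight = sum(w for w, b in zip(WEIGHTS, sol) if b)
    return value - 10 * max(0, weight - CAPACITY)

def construct(alpha=0.3):
    """Greedy-randomized constructor (stand-in for the Q-learning seeder)."""
    order = sorted(range(N), key=lambda i: VALUES[i] / WEIGHTS[i], reverse=True)
    sol, weight = [0] * N, 0
    for i in order:
        if weight + WEIGHTS[i] <= CAPACITY and random.random() > alpha:
            sol[i], weight = 1, weight + WEIGHTS[i]
    return sol

def local_search(sol):
    """GRASP improvement phase: single bit-flip hill climbing."""
    best, best_fit = sol[:], fitness(sol)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            cand = best[:]
            cand[i] ^= 1
            if fitness(cand) > best_fit:
                best, best_fit, improved = cand, fitness(cand), True
    return best

def crossover(a, b):
    point = random.randrange(1, N)
    return a[:point] + b[point:]

def mutate(sol, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in sol]

# Cooperation: a GRASP worker and a GA worker share one elite solution pool.
elite = [local_search(construct()) for _ in range(6)]
for _ in range(50):
    elite.append(local_search(construct()))                   # GRASP contribution
    p1, p2 = random.sample(elite, 2)                           # GA contribution
    elite.append(local_search(mutate(crossover(p1, p2))))
    elite = sorted(elite, key=fitness, reverse=True)[:6]       # keep shared elites

print("best fitness found:", fitness(elite[0]))
```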
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that best improves the simulated system. A thorough description of iSPD is given, detailing its scheduler manager. Some comparisons between iSPD and Simgrid simulations, including runs of the simulated environment in a real cluster, are also presented. © 2012 IEEE.
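The plug-in scheduling idea can be pictured with the generic sketch below; this is only an illustration in Python, not iSPD's actual scheduler interface: a policy object chooses a machine for each task, and the simulator depends only on that abstract interface, so users can swap policies.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Machine:
    name: str
    power: float               # relative processing capacity
    queued_load: float = 0.0   # work already assigned during the simulation

@dataclass
class Task:
    tid: int
    size: float                # computational demand of the task

class SchedulingPolicy(ABC):
    """Illustrative plug-in point only; not iSPD's real scheduler interface."""
    @abstractmethod
    def select(self, task: Task, machines: List[Machine]) -> Machine: ...

class LeastLoadedPolicy(SchedulingPolicy):
    def select(self, task, machines):
        # Send the task where its estimated completion time is smallest.
        return min(machines, key=lambda m: (m.queued_load + task.size) / m.power)

def simulate(tasks: List[Task], machines: List[Machine], policy: SchedulingPolicy):
    """Dispatch every task through the user-supplied policy and report load."""
    for task in tasks:
        target = policy.select(task, machines)
        target.queued_load += task.size
    return {m.name: m.queued_load / m.power for m in machines}

machines = [Machine("node-a", power=2.0), Machine("node-b", power=1.0)]
tasks = [Task(i, size=10.0) for i in range(6)]
print(simulate(tasks, machines, LeastLoadedPolicy()))
```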
Abstract:
Graduate Program in Electrical Engineering - FEIS