930 results for Memory Management (Computer science)
Abstract:
Coset enumeration is one of the most important procedures for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared memory processors. The shared memory architecture is particularly interesting because such parallel computation is both faster and cheaper. The lower cost arises when the program requires large amounts of memory: additional CPUs allow us to reduce the time for which the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version for which only 40% of the program was parallelized, and we describe how this was modified to achieve 90% parallelization while using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
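The reported figures line up with Amdahl's law. As a quick sanity check (a minimal sketch of ours, not code from the paper; the function name and rounding are illustrative), the predicted speedup for a 90% parallel fraction on 15 slaves can be compared with the observed 158 s to 29 s reduction:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Predicted overall speedup when only parallel_fraction of the
    work is spread across the given number of workers (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# Figures taken from the abstract: 90% parallelized, 15 slave processors.
predicted = amdahl_speedup(0.90, 15)   # 1 / (0.10 + 0.90/15) = 6.25
observed = 158.0 / 29.0                # ~5.45
print(f"predicted {predicted:.2f}x, observed {observed:.2f}x")
```

The gap between the predicted 6.25x and the observed ~5.45x is plausibly the master-slave coordination overhead that the in-depth performance analysis examines.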
Abstract:
In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
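As a rough illustration of the replicated-data approach described above (every process keeps a full copy of the data and work is partitioned by rank), here is a minimal mpi4py sketch; the toy pairwise-energy loop and all names are our own assumptions, not taken from any of the VPP codes:

```python
from mpi4py import MPI  # assumes an MPI installation with mpi4py available

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Replicated data: every process holds the full coordinate list.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
pairs = [(i, j) for i in range(len(coords)) for j in range(i + 1, len(coords))]

# Each process computes only its strided share of the pair interactions.
local = 0.0
for k in range(rank, len(pairs), size):
    i, j = pairs[k]
    dx = coords[i][0] - coords[j][0]
    dy = coords[i][1] - coords[j][1]
    local += 1.0 / (dx * dx + dy * dy) ** 0.5  # toy 1/r "energy" term

# One global reduction combines the partial sums on every process.
total = comm.allreduce(local, op=MPI.SUM)
if rank == 0:
    print(f"total pair energy: {total:.4f}")
```

The appeal on a large-memory machine like the VPP is exactly what the abstract notes: no data decomposition is needed, at the price of every processor holding the whole data set.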
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipeline implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound on the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. In this paper, the popular genetic algorithm approach is adopted, since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
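The abstract names the technique but not its details; below is a minimal sketch, under our own assumptions (the prediction-error fitness, the pole-location stability test, and all parameter values are illustrative, not the authors' formulation), of how a genetic algorithm can search for stable recursive-filter coefficients:

```python
import random
import numpy as np

def is_stable(a):
    """A recursive filter 1/(1 + a1*z^-1 + ... + ak*z^-k) is stable
    iff all roots of the denominator lie strictly inside the unit circle."""
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.all(np.abs(roots) < 1.0)

def fitness(a, signal):
    """Illustrative objective: one-step prediction error of an AR model,
    with an infinite penalty for unstable coefficient vectors."""
    if not is_stable(a):
        return float("inf")
    k = len(a)
    err = 0.0
    for t in range(k, len(signal)):
        pred = -sum(a[i] * signal[t - 1 - i] for i in range(k))
        err += (signal[t] - pred) ** 2
    return err

def ga_search(signal, order, pop=40, gens=100, sigma=0.1):
    """Plain GA: truncation selection, blend crossover, Gaussian mutation.
    Returns the best stable coefficient vector found."""
    population = [np.random.uniform(-0.5, 0.5, order) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, signal))
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            p, q = random.sample(parents, 2)
            w = random.random()
            children.append(w * p + (1 - w) * q + np.random.normal(0, sigma, order))
        population = parents + children
    return min(population, key=lambda a: fitness(a, signal))

# Example: recover a stable order-2 predictor of a toy AR(2) signal.
rng = np.random.default_rng(0)
x = [0.0, 0.0]
for _ in range(200):
    x.append(1.4 * x[-1] - 0.45 * x[-2] + rng.normal(0, 0.1))
best = ga_search(x, order=2)
print("coefficients:", best, "stable:", is_stable(best))
```

To target the minimum order, one would run such a search for increasing k and stop at the first order whose best prediction error meets the design tolerance.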
Abstract:
Environmental processes have been modelled for decades. However, the need for integrated assessment and modelling (IAM) has grown as the extent and severity of environmental problems in the 21st Century worsen. The scale of IAM is not restricted to the global level as in climate change models, but includes local and regional models of environmental problems. This paper discusses various definitions of IAM and identifies five different types of integration that are needed for the effective solution of environmental problems. The future is then depicted in the form of two brief scenarios: one optimistic and one pessimistic. The current state of IAM is then briefly reviewed. The issues of complexity and validation in IAM are recognised as more complex than in traditional disciplinary approaches. Communication is identified as a central issue, both internally among team members and externally with decision-makers, stakeholders and other scientists. Finally, it is concluded that the process of integrated assessment and modelling is as important as the product for any particular project. By learning to work together and recognise the contribution of all team members and participants, it is believed that we will have a strong scientific and social basis to address the environmental problems of the 21st Century. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
While multimedia data, image data in particular, is an integral part of most websites and web documents, our quest for information so far is still restricted to text-based search. To explore the World Wide Web more effectively, especially its rich repository of truly multimedia information, we face a number of challenging problems. Firstly, we face the ambiguous and highly subjective nature of defining image semantics and similarity. Secondly, multimedia data can come from highly diversified sources, as a result of automatic image capturing and generation processes. Finally, multimedia information exists in decentralised sources over the Web, making it difficult to use conventional content-based image retrieval (CBIR) techniques for effective and efficient search. In this special issue, we present a collection of five papers on visual and multimedia information management and retrieval topics, addressing some aspects of these challenges. These papers have been selected from the conference proceedings (Kluwer Academic Publishers, ISBN 1-4020-7060-8) of the Sixth IFIP 2.6 Working Conference on Visual Database Systems (VDB6), held in Brisbane, Australia, on 29–31 May 2002.
Abstract:
Spatial data has now been used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved due to performance issues related to the large sizes and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing such that the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that the server-side processing cost and network traffic can be reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with such complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
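One simple way to realize this server-side selection is to precompute each geometry at a few simplification tolerances and return the coarsest level that is still visually lossless for the client's display scale. The sketch below is our own illustration, not the paper's structures; the class name, tolerance values and the one-pixel rule are assumptions, and it uses Shapely's standard simplify operation:

```python
from shapely.geometry import LineString  # assumes the shapely package

class MultiResolutionStore:
    """Keeps one simplified copy of a geometry per resolution level."""

    # Coarser levels use larger simplification tolerances (map units).
    TOLERANCES = [0.0, 0.01, 0.1, 1.0]  # level 0 = full resolution

    def __init__(self, geometry):
        self.levels = [
            geometry.simplify(tol, preserve_topology=True)
            for tol in self.TOLERANCES
        ]

    def query(self, map_units_per_pixel):
        """Return the coarsest level whose tolerance stays below one
        pixel, so the simplification is invisible to the client."""
        level = 0
        for i, tol in enumerate(self.TOLERANCES):
            if tol <= map_units_per_pixel:
                level = i
        return self.levels[level]

road = LineString([(0, 0), (0.004, 0.03), (0.05, 0.04), (1, 1), (2, 1.9)])
store = MultiResolutionStore(road)
coarse = store.query(map_units_per_pixel=0.5)   # small-scale overview map
print(len(coarse.coords), "vertices sent instead of", len(road.coords))
```

The coarse levels carry fewer vertices, which is where the reduction in server-side processing and network traffic comes from.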
Abstract:
Purpose - Using Brandenburger and Nalebuff's 1995 co-opetition model as a reference, the purpose of this paper is to develop a tool that, based on the tenets of classical game theory, would enable scholars and managers to identify which games may be played in response to the different conflict of interest situations faced by companies in their business environments. Design/methodology/approach - The literature on game theory and business strategy is reviewed and a conceptual model, the strategic games matrix (SGM), is developed. Two novel games are described and modeled. Findings - The co-opetition model is not sufficient to realistically represent most of the conflict of interest situations faced by companies. This paper seeks to address that problem through development of the SGM, which expands upon Brandenburger and Nalebuff's model by providing a broader perspective, through incorporation of an additional dimension (the power ratio between players) and three novel game types (rival, individualistic, and associative). Practical implications - This proposed model, based on the concepts of game theory, should be used to train decision- and policy-makers to better understand, interpret and formulate conflict management strategies. Originality/value - A practical and original tool for using game models in conflict of interest situations is generated. Basic classical games, such as Nash, Stackelberg, Pareto, and Minimax, are mapped on the SGM to suggest the situations in which they could be useful. Two innovative games are described to fit four different types of conflict situations that so far have had no corresponding game in the literature. A test application of the SGM to a classic Intel Corporation strategic management case, in the complex personal computer industry, shows that the proposed method is able to describe, interpret, analyze, and prescribe optimal competitive and/or cooperative strategies for each conflict of interest situation.
Abstract:
Purpose - The purpose of this paper is to provide a framework for radio frequency identification (RFID) technology adoption considering company size and five dimensions of analysis: RFID applications, expected benefits, business drivers or motivations, barriers and inhibitors, and organizational factors. Design/methodology/approach - A framework for RFID adoption, derived from the literature and practical experience on the subject, is developed. This framework provides a conceptual basis for analyzing a survey conducted with 114 companies in Brazil. Findings - Many companies have been developing RFID initiatives in order to identify potential applications and map the benefits associated with their implementation. The survey highlights the importance of business drivers in the RFID implementation stage, and shows that companies implement RFID focusing on a few specific applications. However, there is a weak association between expected benefits and business challenges and the current level of RFID technology adoption in Brazil. Research limitations/implications - The paper is not exhaustive, since RFID adoption in Brazil was at an early stage at the time of the survey. Originality/value - The main contribution of the paper is that it yields a framework for analyzing RFID technology adoption. The authors use this framework to analyze RFID adoption in Brazil, and it proved useful for identifying key issues for technology adoption. The paper is useful to any researchers or practitioners who are focused on technology adoption, in particular RFID technology.
Abstract:
Purpose - The purpose of this paper is to verify whether Brazilian companies are adopting environmental requirements in the supplier selection process. Further, this paper intends to analyze whether there is a relation between the level of environmental management maturity and the inclusion of environmental criteria in companies' selection of suppliers. Design/methodology/approach - The paper reviews the mainstream literature on environmental management, traditional criteria in the supplier selection process, and the incorporation of environmental requirements in this context. The empirical study's strategy is based on five Brazilian case studies with industrial companies. Face-to-face interviews and informal conversations were held, clarifications were obtained by e-mail from representatives of the purchasing, environmental management, logistics and other areas, and observation and the collection of company documents were also employed. Findings - Based on the cases, it is concluded that companies still use traditional criteria to select suppliers, such as quality and cost, and do not adopt environmental requirements in the supplier selection process in a uniform manner. The evidence found shows that the level of environmental management maturity influences the depth with which companies adopt environmental criteria when selecting suppliers. Thus, a company with more advanced environmental management adopts more formal procedures for selecting environmentally appropriate suppliers than others. Originality/value - This is the first known study to verify whether Brazilian companies are adopting environmental requirements in the supplier selection process.
Abstract:
Purpose - The purpose of this research is to shed light on the main barriers faced by Mozambican micro and small enterprises (MSEs) and their implications for the support policies available to these enterprises. Design/methodology/approach - A literature review was conducted on the barriers faced by MSEs and on the policies and governmental instruments of assistance available to them. Then, a two-step study was carried out. The first phase consisted of collecting data from 21 MSEs in Mozambique, mainly by means of interviews in which the main barriers faced by the interviewees were identified. This led to the second phase: interviewing governmental and support entities in order to learn what they had done to minimize the barriers identified by the entrepreneurs. Findings - The results show that financial and competitive barriers are the main barriers faced by the analyzed MSEs. These barriers vary according to the enterprises' field of activity. Originality/value - This study enriches the state of the art on smaller enterprises in Africa and will especially help to fill the lack of academic research about Mozambique.
Abstract:
We study the existence of mild solutions for a class of impulsive neutral functional differential equations defined on the whole real axis. Some concrete applications to ordinary and partial neutral differential equations with impulses are considered. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Sprague Dawley rats were submitted to bilateral ventral hippocampus lesions 7 days after birth. This corresponds to Lipska and Weinberger's procedure for modeling schizophrenia. The aim of the present work was to test the learning capacity of such rats with an associative Pavlovian and an instrumental learning paradigm, both methods using a reward outcome (food, sucrose or polycose). The associative paradigm also comprised a second learning test with reversed learning contingencies. The instrumental conditioning comprised an extinction test under outcome devaluation conditions. Neonatally lesioned rats, once adults (over 60 days of age), showed a conditioning deficit in the associative paradigm but not in the instrumental one. Lesioned rats remained able to adapt as readily as controls to the reversed learning contingency and were as sensitive as controls to the devaluation of the outcome. These observations indicate that active access to a reward (instrumental learning) could have compensated for the deficit observed under the "passive" stimulus-reward associative learning condition. This feature is compared to the memory management impairments observed in clinical patients. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
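The two-step construction can be made concrete with a small sketch in the style of the CHM graph-based family, using r = 2 (the abstract's r-graphs generalize this); the hash seeding, the constant c, and all names are our illustrative choices, not the paper's code:

```python
import random

def build_ordered_mphf(keys, c=2.5, max_tries=100):
    """Returns h with h(keys[i]) == i (order preserving, minimal, perfect).
    Step 1 (probabilistic): draw hash functions until the graph whose edges
    are (f1(w), f2(w)) is usable. Step 2 (deterministic): walk each
    component assigning g so every edge sums to its key's index."""
    n = len(keys)
    m = int(c * n) + 1  # vertex count; c > 2 makes acyclicity likely
    for _ in range(max_tries):
        s1, s2 = random.random(), random.random()
        # Illustrative, process-local hashing; real implementations use
        # independent universal hash functions.
        f1 = lambda w, s=s1: hash((s, w)) % m
        f2 = lambda w, s=s2: hash((s, w)) % m
        adj = [[] for _ in range(m)]  # vertex -> [(neighbor, key index)]
        for i, w in enumerate(keys):
            u, v = f1(w), f2(w)
            if u == v:                # self-loop: no assignment exists, retry
                break
            adj[u].append((v, i))
            adj[v].append((u, i))
        else:
            g = [None] * m
            ok = True
            for start in range(m):
                if g[start] is not None or not adj[start]:
                    continue
                g[start] = 0          # root of a new component
                stack, seen = [start], set()
                while stack and ok:
                    u = stack.pop()
                    for v, i in adj[u]:
                        if i in seen:
                            continue
                        seen.add(i)
                        if g[v] is None:
                            g[v] = (i - g[u]) % n  # force edge i to sum to i
                            stack.append(v)
                        elif (g[u] + g[v]) % n != i:
                            ok = False             # conflicting cycle: retry
                            break
                if not ok:
                    break
            if ok:
                return lambda w: (g[f1(w)] + g[f2(w)]) % n
    raise RuntimeError("no suitable graph found; try a larger c")

h = build_ordered_mphf(["ant", "bee", "cat", "dog"])
print([h(w) for w in ["ant", "bee", "cat", "dog"]])  # [0, 1, 2, 3]
```

Each key keys[i] hashes to its own index i, which is what makes the function order preserving; for c above roughly 2, a random 2-graph is acyclic with constant probability per trial, so the expected construction time stays linear, matching the abstract's claims.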
Abstract:
In order to separate the effects of experience from other characteristics of word frequency (e.g., orthographic distinctiveness), computer science and psychology students rated their experience with computer science technical items and nontechnical items from a wide range of word frequencies prior to being tested for recognition memory of the rated items. For nontechnical items, there was a curvilinear relationship between recognition accuracy and word frequency for both groups of students. The usual superiority of low-frequency words was demonstrated and high-frequency words were recognized least well. For technical items, a similar curvilinear relationship was evident for the psychology students, but for the computer science students, recognition accuracy was inversely related to word frequency. The ratings data showed that subjective experience rather than background word frequency was the better predictor of recognition accuracy.