880 results for Value of complex use
Abstract:
Purpose - There are many library automation packages available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPAC of these library automation packages provides advanced features for searching and retrieving bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper makes an effort to enable full-text search features in the widely used open-source library automation package Koha, by integrating it with two open-source digital library software packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently. Design/methodology/approach - The implementation makes use of the Search/Retrieve via URL (SRU) feature available in Koha, GSDL and FGSS. The full-text documents are indexed both in Koha and in the digital library software (GSDL or FGSS). Findings - Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and passing an SRU request to GSDL or FGSS from Koha. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL, FGSS). Originality/value - This is the first implementation enabling the full-text search feature in library automation software by integrating it with digital library software.
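For illustration, an SRU searchRetrieve request of the kind Koha can pass to an external full-text index is just a parameterised URL. The sketch below builds such a request and reads the hit count from the response; the endpoint address and the CQL index name are assumptions made for the example, not the configuration described in the paper.

```python
# Minimal SRU searchRetrieve sketch (illustrative only; requires a running
# SRU server such as a GSDL or FGSS endpoint).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRU_ENDPOINT = "http://localhost:8080/sru"  # hypothetical full-text SRU endpoint

def sru_search(term, max_records=10):
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": f'cql.anywhere = "{term}"',  # CQL full-text query
        "maximumRecords": str(max_records),
        "recordSchema": "dc",                 # ask for Dublin Core records
    }
    url = SRU_ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    ns = {"srw": "http://www.loc.gov/zing/srw/"}
    return url, tree.findtext(".//srw:numberOfRecords", namespaces=ns)

if __name__ == "__main__":
    url, hits = sru_search("digital libraries")
    print("Request:", url)
    print("Matching records:", hits)
```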
Abstract:
Scientific research revolves around the production, analysis, storage, management, and re-use of data. Data sharing offers important benefits for scientific progress and advancement of knowledge. However, several limitations and barriers to the general adoption of data sharing are still in place. Probably the most important challenge is that data sharing is not yet very common among scholars and is not yet seen as a regular activity among scientists, although important efforts are being invested in promoting data sharing. In addition, there is a relatively low commitment of scholars to cite data. The most important problems and challenges regarding data metrics are closely tied to the more general problems related to data sharing. The development of data metrics is dependent on the growth of data sharing practices; after all, it is nothing more than the registration of researchers’ behaviour. At the same time, the availability of proper metrics can help researchers to make their data work more visible. This may subsequently act as an incentive for more data sharing, and in this way a virtuous circle may be set in motion. This report seeks to further explore the possibilities of metrics for datasets (i.e. the creation of reliable data metrics) and an effective reward system that aligns the interests of the main stakeholders involved in the process. The report reviews the current literature on data sharing and data metrics. It presents interviews with the main stakeholders on data sharing and data metrics. It also analyses the existing repositories and tools in the field of data sharing that have special relevance for the promotion and development of data metrics. On the basis of these three pillars, the report presents a number of solutions and necessary developments, as well as a set of recommendations regarding data metrics. The most important recommendations include the general adoption of data sharing and data publication among scholars; the development of a reward system for scientists that includes data metrics; reducing the costs of data publication; reducing existing negative cultural perceptions of researchers regarding data publication; developing standards for preservation, publication, identification and citation of datasets; more coordination of data repository initiatives; and further development of interoperability protocols across different actors.
Abstract:
Current technology valuation literature predominantly focuses on explaining the merits and implications of specific tools, but little research is available that takes a contextual process perspective. The aim of this paper is to further develop an integrative process framework that supports the structuring of the valuation process and the more systematic choice of valuation techniques for new technologies. The paper starts by reviewing key concepts and issues that surround the assessment of technology investments and the evidence of what companies use. Many factors need to be brought into the appraisal process, reflecting technological and market conditions. While there is usually a desire to reduce the assessment to a financial value, it is also widely appreciated that there is long-term strategic value in securing a technological lead, which is difficult, or even inappropriate, to assess in purely financial terms. The multiple factors involved in the evaluation activity are identified with respect to the changing nature of the appraisal process as the technology matures and the implications for associated tools. The result of the literature review is a process framework that provides a conceptual basis for integrating valuation techniques. This framework is then populated with the results of industrial case studies on technology valuation to allow conclusions on its applicability to be drawn. © 2011 IEEE.
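As a point of reference for the purely financial end of the appraisal spectrum discussed above, the standard discounted-cash-flow criterion (a textbook formula, not one proposed by the paper) values a technology investment as

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^{t}},
\]

where \(CF_t\) is the net cash flow attributed to the technology in year \(t\) and \(r\) is the discount rate. The paper's point is precisely that such a single figure struggles to capture the long-term strategic value of securing a technological lead.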
Abstract:
The electricity sectors of many developing countries underwent substantial reforms during the 1980s and 1990s, driven by global agendas of privatization and liberalization. However, rural electrification offered little by way of market incentives for profit-seeking private companies and was often neglected. As a consequence, delivery models for rural electrification need to change. This paper will review the experiences of various rural electrification delivery models that have been established in developing countries, including concessionary models, dealership approaches and the strengthening of small and medium-sized energy businesses. It will use examples from the USA, Bangladesh and Nepal, together with a detailed case study of a Nepali rural electric cooperative, to explore the role that local cooperatives can play in extending electricity access. It is shown that although there is no magic bullet solution to deliver rural electrification, if offered appropriate financial and institutional support, socially orientated cooperative businesses can be a willing, efficient and effective means of extending and managing rural electricity services. It is expected that this paper will be of particular value to policy-makers, donors, project planners and implementers currently working in the field of rural electrification. © 2010 Elsevier Ltd.
Abstract:
Computations are made for chevron and coflowing jet nozzles. The latter has a bypass ratio of 6:1. Also, unlike the chevron nozzle, the core flow is heated, making the inlet conditions reminiscent of those for a real engine. A large-eddy resolving approach is used with meshes of circa 12 × 10⁶ cells. Because the codes being used tend toward being dissipative, the subgrid-scale model is abandoned, giving what can be termed numerical large-eddy simulation. To overcome near-wall modeling problems, a hybrid numerical large-eddy simulation/Reynolds-averaged Navier-Stokes method is used: for y⁺ ≤ 60 a Reynolds-averaged Navier-Stokes model is applied. Blending between the two regions makes use of the differential Hamilton-Jacobi equation, an extension of the eikonal equation. For both nozzles, results show encouraging agreement with measurements of other workers. The eikonal equation is also used for ray tracing to explore the effect of the mean flow on acoustic ray trajectories, thus yielding a coherent solution strategy. © 2011 by Cambridge University.
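For reference, the eikonal equation mentioned above has the standard geometric-acoustics form (quoted from textbook usage, not from the paper itself): the arrival time \(\tau(\mathbf{x})\) of a wavefront in a medium with sound speed \(c\) satisfies

\[
|\nabla \tau| = \frac{1}{c},
\]

and when the waves are convected by a mean flow \(\mathbf{u}\), the corresponding Hamilton-Jacobi (dispersion-relation) form for the phase \(\psi\) of a wave of frequency \(\omega\) is

\[
\left(\omega - \mathbf{u}\cdot\nabla\psi\right)^{2} = c^{2}\,|\nabla\psi|^{2},
\]

which is the basis for tracing acoustic rays through a computed mean flow. The differential Hamilton-Jacobi equation used in the paper to blend the near-wall RANS and NLES regions is a further extension of this family of equations.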
Abstract:
Systems design involves the determination of interdependent variables. Thus the precedence ordering for the tasks of determining these variables involves circuits. Circuits require planning decisions about how to iterate and where to use estimates. Conventional planning techniques, such as critical path, do not deal with these problems. Techniques are shown in this paper which acknowledge these circuits in the design of systems. These techniques can be used to develop an effective engineering plan, showing where estimates are to be used, how design iterations and reviews are handled, and how information flows during the design work.
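A small sketch (illustrative only, not taken from the paper) of how such circuits can be located automatically: tasks are ordered by precedence with Kahn's topological sort, and any task that cannot be scheduled lies on, or is blocked by, a circuit and therefore needs an estimate or an explicit iteration plan.

```python
# Identify tasks caught in precedence circuits using Kahn's topological sort.
from collections import defaultdict, deque

def find_circuit_tasks(precedence):
    """precedence: dict mapping each task to the set of tasks it depends on."""
    dependents = defaultdict(set)
    indegree = {t: len(deps) for t, deps in precedence.items()}
    for task, deps in precedence.items():
        for d in deps:
            dependents[d].add(task)
            indegree.setdefault(d, 0)
    ready = deque(t for t, deg in indegree.items() if deg == 0)
    ordered = []
    while ready:
        t = ready.popleft()
        ordered.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    # Anything never scheduled is on a circuit or blocked behind one.
    return [t for t in indegree if t not in ordered]

# Hypothetical plan: A and B each need the other's result, so they form a circuit;
# C depends on A and is blocked behind it, D is independent.
tasks = {"A": {"B"}, "B": {"A"}, "C": {"A"}, "D": set()}
print(find_circuit_tasks(tasks))  # ['A', 'B', 'C']
```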
Abstract:
Spatial relations, reflecting the complex association between geographical phenomena and environments, are very important in the solution of geographical issues. Different spatial relations can be expressed by indicators which are useful for the analysis of geographical issues. Urbanization, an important geographical issue, is considered in this paper. The spatial relationship indicators concerning urbanization are expressed with a decision table. Thereafter, the spatial relationship indicator rules are extracted based on the application of rough set theory. The extraction process of spatial relationship indicator rules is illustrated with data from the urban and rural areas of Shenzhen and Hong Kong, located in the Pearl River Delta. Land use vector data of 1995 and 2000 are used. The extracted spatial relationship indicator rules of 1995 are used to identify the urban and rural areas in Zhongshan, Zhuhai and Macao. The identification accuracy is approximately 96.3%. Similar procedures are used to extract the spatial relationship indicator rules of 2000 for the urban and rural areas in Zhongshan, Zhuhai and Macao. An identification accuracy of about 83.6% is obtained.
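The rough-set step can be illustrated with a toy decision table (the values below are invented; they are not the Shenzhen/Hong Kong land-use data): the lower approximation of a decision class contains exactly the objects that the condition attributes classify with certainty, and these are the objects from which certain indicator rules are extracted.

```python
# Toy rough-set lower approximation for a spatial-indicator decision table.
from collections import defaultdict

# condition attributes (spatial relationship indicators) -> decision class
table = [
    ({"road_density": "high", "patch_size": "small"}, "urban"),
    ({"road_density": "high", "patch_size": "small"}, "urban"),
    ({"road_density": "low",  "patch_size": "large"}, "rural"),
    ({"road_density": "high", "patch_size": "large"}, "urban"),
    ({"road_density": "high", "patch_size": "large"}, "rural"),  # inconsistent with previous row
]

def lower_approximation(table, decision):
    classes = defaultdict(list)  # indiscernibility classes over the condition attributes
    for i, (cond, _) in enumerate(table):
        classes[tuple(sorted(cond.items()))].append(i)
    lower = set()
    for members in classes.values():
        if all(table[i][1] == decision for i in members):
            lower.update(members)  # the whole class lies inside the decision concept
    return lower

print(lower_approximation(table, "urban"))  # {0, 1}: only these yield certain "urban" rules
```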
Abstract:
Orthogonal descriptors are a viable method for variable selection, but this method depends strongly on the orthogonalisation ordering of the descriptors. In this paper, we compared the different methods used to order the descriptors and found that better results could be achieved with backward elimination ordering. We predicted the R_f values of phenol and aniline derivatives by this method and compared it with classical algorithms such as forward selection, backward elimination, and the stepwise procedure. Some interesting hints were obtained.
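A minimal sketch of the orthogonalisation step, assuming the usual sequential (Gram-Schmidt-style) procedure in which each descriptor is replaced by the residual of its regression on the descriptors orthogonalised before it; this dependence on the sequence is exactly why the ordering supplied by forward selection, backward elimination or a stepwise procedure matters.

```python
# Sequential orthogonalisation of a descriptor matrix (illustrative sketch).
import numpy as np

def orthogonalise(X, order):
    """X: (n_molecules, n_descriptors) array; order: sequence of column indices."""
    Q = np.empty_like(X, dtype=float)
    for k, j in enumerate(order):
        v = X[:, j].astype(float)
        for i in range(k):                   # strip components along earlier descriptors
            q = Q[:, i]
            v = v - (v @ q) / (q @ q) * q
        Q[:, k] = v
    return Q

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))                 # toy descriptor matrix
Q = orthogonalise(X, order=[2, 0, 3, 1])     # ordering chosen, e.g., by backward elimination
print(np.round(Q.T @ Q, 6))                  # off-diagonal entries are ~0: columns orthogonal
```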
Abstract:
The use of interlaminar fracture tests to measure the delamination resistance of unidirectional composite laminates is now widespread. However, the frequent occurrence of fiber bridging and multiple cracking during the tests leads to artificially high values of delamination resistance, which do not represent the behavior of the laminates. Initiation fracture from the crack starter, on the other hand, does not involve bridging and should be more representative of the delamination resistance of the composite laminates. Since there is some uncertainty in the literature about how to determine the initiation value of delamination resistance in mode I tests, a power law of the form G_IC = A·Δα^b (where G_IC is the mode I interlaminar fracture toughness and Δα is the delamination growth) is presented in this paper to determine the initiation value of the mode I interlaminar fracture toughness. It is found that the initiation value of the mode I interlaminar fracture toughness, G_IC(ini), can be defined as the G_IC value at which 1 mm of delamination from the crack starter has occurred. Examples of initiation values determined by this method are given for both carbon fiber reinforced thermoplastic and thermosetting polymers.
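A direct consequence of the quoted power law (reading it as stated, with the delamination growth Δα expressed in millimetres) is that the initiation toughness coincides with the fitted coefficient A:

\[
G_{IC} = A\,\Delta\alpha^{\,b}
\quad\Rightarrow\quad
G_{IC}^{\mathrm{ini}} = G_{IC}\big|_{\Delta\alpha = 1\ \mathrm{mm}} = A\cdot 1^{\,b} = A .
\]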
Abstract:
Creativity is often defined as developing something novel or new that fits its context and has value. To achieve this, the creative process itself has gained increasing attention as organizational leaders seek competitive advantages through developing new products, services, processes, or business models. In this paper, we explore the notion that the creative process includes a series of “filters”, or ways to process information, as a critical component. We use the metaphor of coffee making and filters because many of our examples come from Vietnam, which is one of the world’s top coffee exporters and which has created a coffee culture rivaling that of many other countries. We begin with a brief review of the creative process and its connection to information processing, propose a tentative framework for integrating the two ideas, and provide examples of how it might work. We close with implications for further practical and theoretical directions for this idea.
Abstract:
OBJECTIVE: The diagnosis of Alzheimer's disease (AD) remains difficult. Lack of diagnostic certainty or possible distress related to a positive result from diagnostic testing could limit the application of new testing technologies. The objective of this paper is to quantify respondents' preferences for obtaining AD diagnostic tests and to estimate the perceived value of AD test information. METHODS: Discrete-choice experiment and contingent-valuation questions were administered to respondents in Germany and the United Kingdom. Choice data were analyzed using a random-parameters logit model. A probit model characterized respondents who were not willing to take a test. RESULTS: Most respondents indicated a positive value for AD diagnostic test information. Respondents who indicated an interest in testing preferred brain imaging without the use of radioactive markers. German respondents had relatively lower money-equivalent values for test features compared with respondents in the United Kingdom. CONCLUSIONS: Respondents preferred less invasive diagnostic procedures and tests with higher accuracy and expressed a willingness to pay up to €700 to receive a less invasive test with the highest accuracy.
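For reference, the random-parameters (mixed) logit referred to above has the standard textbook form (shown here generically, not as this study's exact specification): the probability that respondent n chooses diagnostic-test alternative i from choice set J is

\[
P_{ni} = \int \frac{\exp(\beta^{\top} x_{ni})}{\sum_{j \in J} \exp(\beta^{\top} x_{nj})}\, f(\beta \mid \theta)\, d\beta ,
\]

where \(x_{ni}\) collects the test attributes (for example invasiveness, accuracy and cost), \(\beta\) are individual-specific preference coefficients, and \(f(\beta \mid \theta)\) is their assumed population distribution.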
Abstract:
Economic analysis treats technology as exogenously given, even though it is endogenously determined. This paper examines that conceptual conflict and outlines an alternative conceptual framework, which uses a 'General Vertical Division of Labour' into conceptual and executive parts to facilitate a coherent political-economic explanation of technological change. The paper suggests that we may acquire, rather than impose, an understanding of technological change. It also suggests that we may re-define and reassess the efficiency of technological change through the values inculcated into it.
Abstract:
SoC systems are now being increasingly constructed using a hierarchy of subsystems or silicon Intellectual Property (IP) cores. The key challenge is to use these cores in a highly efficient manner, which can be difficult as the internal core structure may not be known. A design methodology based on synthesizing hierarchical circuit descriptions is presented. The paper employs the MARS synthesis scheduling algorithm within the existing IRIS synthesis flow and details how it can be enhanced to allow design exploration of IP cores. It is shown that, by accessing parameterised expressions for the datapath latencies in the cores, highly efficient FPGA solutions can be achieved. Hardware sharing at both the hierarchical and flattened levels is explored for a normalized lattice filter, and results are presented.
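The following toy sketch is invented for illustration (it does not use the MARS or IRIS tools) but shows the kind of design exploration the abstract describes: if an IP core exposes parameterised expressions for its datapath latency and resource cost, a wrapper can sweep the parameters and pick the cheapest configuration that still meets a latency budget, without ever inspecting the core's internal structure.

```python
# Hypothetical parameterised-core exploration: expressions and numbers are invented.

def latency(order, folding):           # assumed closed-form latency expression (cycles)
    return order * folding + 4

def multipliers(order, folding):       # assumed resource expression: ceil(2*order/folding)
    return -(-2 * order // folding)

def explore(order, latency_budget):
    feasible = [
        (multipliers(order, f), f)
        for f in range(1, order + 1)
        if latency(order, f) <= latency_budget
    ]
    return min(feasible) if feasible else None

cost, fold = explore(order=8, latency_budget=40)
print(f"folding factor {fold}: {cost} multipliers")   # folding factor 4: 4 multipliers
```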