15 results for experts
at Indian Institute of Science - Bangalore - India
Abstract:
PIP: A Delphi study was conducted to identify or envision health scenarios in India by the year 2000. Questionnaires consisting of 48 questions on 5 areas (diagnosis and therapy; family planning; pharmaceuticals and drugs; biochemical and biomedical research; health services) were mailed to 250 experts in India; 36 responded. Results were compiled and mailed back to the respondents for changes and comments, and 17 people responded. The results of the Delphi study show that policy decisions with respect to compulsory family planning, as well as health education at the secondary school level, will precede further breakthroughs in birth control technology. Non-operative reversible sterilization procedures, immunological birth control, Ayurvedic medicines for contraception and abortion, and selection of a baby's sex are all considered possible by 2000 or thereafter. Complete eradication of infectious diseases, malnutrition and associated diseases is considered unlikely before 2000, as are major advances in biomedical research. Changes in health services (e.g., significant increases in hospital beds and doctors, cheap bulk drugs), particularly in rural areas, are imminent, leading to an increase in life expectancy to 70 years. Genetic engineering may provide significant breakthroughs in the prevention of malignancies and cardiac disorders. The Indian Delphi study is patterned after a similar Delphi study conducted in the U.S. by Smith, Kline and French (SKF) Laboratories in 1968; the SKF study predicted several basic-research breakthroughs that have since been realized.
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. The technique becomes all the more attractive when we consider progressive transmission in a teleradiology system, where the transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of bit-planes of wavelet coefficients. The proposed scheme exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels, together with the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, while the wavelet transform allows the restoration strategy to adapt to the directional features of edges. The proposed approach shows promising results, in terms of error reduction, when compared with the unrestored case. It can also adapt to situations where the noise level in the image varies, and to the changing requirements of medical experts. The approach has implications for the restoration of medical images in teleradiology systems, and the scheme is computationally efficient.
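As a rough illustration of the kind of scheme described (wavelet-domain denoising governed by a single strength parameter), the following Python sketch uses PyWavelets with soft thresholding; the paper's bit-plane restoration and per-sub-band edge analysis are not reproduced, and the function name and parameter values are illustrative assumptions.

```python
# Hedged sketch: wavelet-domain image denoising with one strength parameter.
# Uses PyWavelets (pip install pywavelets); soft thresholding stands in for
# the paper's bit-plane restoration scheme.
import numpy as np
import pywt

def denoise(image, strength=1.0, wavelet="db4", levels=3):
    """Denoise a 2-D array; `strength` trades detail for smoothing."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Estimate noise from the finest diagonal sub-band (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = strength * sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]]  # keep the approximation sub-band untouched
    for detail_level in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, threshold, mode="soft")
                              for d in detail_level))
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(256, 256)  # placeholder for a noisy medical image
clean = denoise(noisy, strength=0.8)
```

Raising `strength` increases smoothing; lowering it preserves more (possibly expert-relevant) detail, mirroring the single-parameter trade-off described above.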
Abstract:
Despite significant growth in food production over the past half-century, one of the most important challenges facing society today is how to feed an expected population of some nine billion by the middle of the twenty-first century. To meet the expected demand for food without significant increases in prices, it has been estimated that we need to produce 70-100 per cent more food, in light of the growing impacts of climate change, concerns over energy security, regional dietary shifts and the Millennium Development Goal of halving world poverty and hunger by 2015. The goal for the agricultural sector is no longer simply to maximize productivity, but to optimize across a far more complex landscape of production, rural development, environmental, social justice and food consumption outcomes. However, significant challenges remain in developing national and international policies that support the wide emergence of more sustainable forms of land use and efficient agricultural production. The lack of information flow between scientists, practitioners and policy makers is known to exacerbate these difficulties, despite increased emphasis upon evidence-based policy. In this paper, we seek to improve dialogue and understanding between agricultural research and policy by identifying the 100 most important questions for global agriculture. These have been compiled using a horizon-scanning approach with leading experts and representatives of major agricultural organizations worldwide. The aim is to use sound scientific evidence to inform decision making and guide policy makers in the future direction of agricultural research priorities and policy support. If addressed, we anticipate that these questions will have a significant impact on agricultural practices worldwide, while improving the synergy between agricultural policy, practice and research. This research forms part of the UK Government's Foresight Global Food and Farming Futures project.
Abstract:
Indian logic has a long history. It roughly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, and in particular in Artificial Intelligence, have drawn researchers in these areas to the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition: eliciting knowledge from humans who are experts in a branch of learning (such as medicine or law) and transferring it to a computing system. The second important issue is the validation of the system's knowledge base, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts he is currently using and attempting to develop.
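As a purely illustrative aside on the "knowledge base plus inference engine" architecture mentioned above, here is a minimal forward-chaining sketch in Python; the rules are hypothetical stand-ins for expert knowledge elicited from a domain such as medicine or law.

```python
# Minimal forward-chaining inference engine: a knowledge base of
# if-then rules plus a loop that fires rules until no new facts appear.
rules = [  # hypothetical rules, purely for illustration
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_physician"),
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire the rule when all premises hold and it adds a new fact.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "rash"}, rules))
```

Checking that no two rules derive contradictory facts from the same premises is one simple form of the consistency validation the abstract refers to.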
Abstract:
Payment systems all over the world have grown into a complicated web of solutions. This is even more challenging in the case of mobile-based payment systems, which are many and consist of different technologies providing different services. The diffusion of these various technologies in a market is uncertain. Diffusion theorists, for example Rogers and Davis, suggest how innovations come to be accepted in markets. In the case of electronic payment systems, the tale of Mondex vs Octopus throws interesting light on diffusion. Our paper attempts to understand the success potential of various mobile payment technologies. We illustrate what we describe as technology breadth in mobile payment systems using data from payment systems all over the world (n=62). Our data show an unexpected superiority of SMS technology over other technologies such as NFC and WAP. We also used a Delphi-based survey (n=5) with experts to address the possibility that SMS will gain superiority in market diffusion. The economic conditions of a country, particularly a developing country, the services availed of, and the characteristics of the users (for example, the number of unbanked users in heavily populated countries) may put SMS at the forefront. This may be especially true for micropayments made using the mobile phone.
Abstract:
Inspired by the demonstration that tool-use variants among wild chimpanzees and orangutans qualify as traditions (or cultures), we developed a formal model to predict the incidence of these acquired specializations among wild primates and to examine the evolution of their underlying abilities. We assumed that the acquisition of the skill by an individual in a social unit is crucially controlled by three main factors, namely the probability of innovation, the probability of socially biased learning, and the prevailing social conditions (sociability, or the number of potential experts in close proximity). The model reconfirms the restriction of customary tool use in wild primates to the most intelligent radiation, the great apes; the greater incidence of tool use in more sociable populations of orangutans and chimpanzees; and tendencies toward tool manufacture among the most sociable monkeys. However, it also indicates that gregarious sociability is far more likely to maintain invented skills in a population than a solitary life, in which the mother is the only accessible expert. We therefore used the model to explore the evolution of the three key parameters. The most likely evolutionary scenario is one where, because complex skills contribute to fitness, sociability and/or the capacity for socially biased learning increase, whereas innovative abilities (i.e., intelligence) follow indirectly. We suggest that the evolution of high intelligence will often be a byproduct of selection on abilities for socially biased learning that are needed to acquire important skills, and hence that high intelligence should be most common in sociable rather than solitary organisms. Evidence for increased sociability during hominin evolution is consistent with this new hypothesis.
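A toy simulation of the three-factor model can make the sociability effect concrete. The sketch below assumes a naive individual acquires the skill either by innovation (probability p_innovate per step) or by socially biased learning from accessible experts; all parameter values are invented for illustration and are not taken from the paper.

```python
# Toy simulation of the three-factor model: innovation probability,
# socially biased learning probability, and sociability (experts nearby).
# All numbers are illustrative.
import random

def simulate(group_size=20, p_innovate=0.001, p_social=0.05,
             neighbors=4, steps=2000, seed=1):
    random.seed(seed)
    skilled = [False] * group_size
    for _ in range(steps):
        experts = sum(skilled)
        for i in range(group_size):
            if skilled[i]:
                continue
            # Chance of independent invention of the skill.
            if random.random() < p_innovate:
                skilled[i] = True
                continue
            # Socially biased learning scales with accessible experts.
            accessible = min(neighbors, experts)
            if random.random() < 1 - (1 - p_social) ** accessible:
                skilled[i] = True
    return sum(skilled) / group_size

# Sociable group (4 accessible experts) vs. solitary (mother only).
print(simulate(neighbors=4), simulate(neighbors=1))
```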
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been shown to correlate strongly with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE-based implementation.
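A minimal sketch of metric-driven rate control follows, assuming a toy rate model and two stand-in quality metrics; neither TMN8 nor the NTIA-VQM model is reproduced here, and all names are illustrative.

```python
# Hedged sketch of metric-driven rate control: pick the per-frame quantizer
# that maximizes a pluggable quality metric subject to a bit budget.
# The rate model and metrics are stand-ins, not TMN8 or NTIA-VQM.

def bits_for_qp(frame_complexity, qp):
    # Toy rate model: bits fall roughly inversely with quantizer step.
    return frame_complexity / qp

def choose_qp(frame_complexity, bit_budget, metric, qp_range=range(1, 32)):
    # Assumes the budget is achievable for at least one quantizer.
    feasible = [qp for qp in qp_range
                if bits_for_qp(frame_complexity, qp) <= bit_budget]
    # Among quantizers meeting the budget, maximize the quality metric.
    return max(feasible, key=lambda qp: metric(frame_complexity, qp))

mse_like = lambda c, qp: -qp * qp      # PSNR-style proxy
perceptual = lambda c, qp: -qp ** 1.5  # hypothetical perceptual-style proxy
print(choose_qp(1000.0, 80.0, mse_like), choose_qp(1000.0, 80.0, perceptual))
```

The point of the structure is that the quality metric is pluggable: swapping an MSE-style proxy for a perceptual one changes which quantizer the same budget selects.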
Abstract:
Urbanisation has attracted interest from a wide cross-section of society, including experts, amateurs, and novices. The multidisciplinary scope of the subject draws interest from ecologists, urban planners and civil engineers, sociologists, administrators and policy makers, students and, finally, the common man. With development and infrastructure initiatives concentrated mostly around urban centres, urbanisation and sprawl will have impacts on the environment and natural resources. The wisdom lies in how effectively we plan urban growth without hampering the environment, excessively harnessing natural resources or disturbing the natural set-up. Research on these questions helps urban residents and policy makers make informed decisions and take action to restore these resources before they are lost. Ultimately, the power to balance urban ecosystems rests with regional awareness, policies, administration practices, management issues and operational problems. This publication on urban systems is aimed at helping scientists, policy makers, engineers, urban planners and, ultimately, the common man visualise how towns and cities grow over a period of time, based on investigations in the regions around highways and cities. Two important highways in Karnataka, South India, viz., the Bangalore - Mysore highway and the Mangalore - Udupi highway, and the Tiruchirapalli - Tanjavore - Kumbakonam triangular road network in Tamil Nadu, South India, were considered in this investigation. Geographic Information System and Remote Sensing data were used to analyse the pattern of urbanisation. This was coupled with spatial and temporal data from Survey of India toposheets (for 1972), satellite imagery procured from the National Remote Sensing Agency (NRSA) (LANDSAT TM for 1987 and IRS LISS III for 1999), demographic details from the Census of India (1971, 1981, 1991 and 2001) and village maps from the Directorate of Survey Settlements and Land Records, Government of Karnataka. Together these enabled quantification of the increase in built-up area over nearly three decades. With the intent of identifying potential sprawl zones, the growth could then be modelled and projected for future decades. The study also quantified several metrics useful in the study of urban sprawl.
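A minimal sketch of the built-up area quantification step, assuming classified land-cover rasters (one per epoch) in which a single class code marks built-up pixels; the arrays, class code and pixel size are illustrative, with real inputs coming from classified LANDSAT/IRS imagery.

```python
# Hedged sketch: quantify built-up area growth from classified land-cover
# rasters, one per epoch.
import numpy as np

BUILT_UP = 1          # hypothetical class label for built-up pixels
PIXEL_AREA_HA = 0.09  # e.g. a 30 m x 30 m Landsat pixel = 0.09 ha

def built_up_area(classified, pixel_area=PIXEL_AREA_HA):
    """Total built-up area (hectares) in one classified raster."""
    return np.count_nonzero(classified == BUILT_UP) * pixel_area

# Random placeholders stand in for classified imagery of three epochs.
rng = np.random.default_rng(0)
epochs = {1972: rng.integers(0, 4, (500, 500)),
          1987: rng.integers(0, 4, (500, 500)),
          1999: rng.integers(0, 4, (500, 500))}
for year, raster in sorted(epochs.items()):
    print(year, f"{built_up_area(raster):.1f} ha")
```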
Abstract:
This work analyses the influence of several design methods on the degree of creativity of the design outcome. A design experiment has been carried out in which the participants were divided into four teams of three members, and each team was asked to work applying different design methods. The selected methods were Brainstorming, Functional Analysis, and the SCAMPER method. The 'degree of creativity' of each design outcome is assessed by means of a questionnaire offered to a number of experts and by means of three different metrics: the metric of Moss, the metric of Sarkar and Chakrabarti, and the evaluation of innovative potential. The three metrics share the property of measuring creativity as a combination of the degree of novelty and the degree of usefulness. The results show that Brainstorming provides more creative outcomes than when no method is applied, while this is not proved for SCAMPER and Functional Analysis.
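Since all three metrics combine novelty and usefulness, a hedged sketch of one such combined score may help fix ideas; the geometric-mean combination and the 0-10 rating scales are assumptions, not the cited metrics.

```python
# Hedged sketch: creativity scored as a combination of novelty and
# usefulness, the property shared by the three metrics cited above.

def creativity(novelty, usefulness):
    """Combine novelty and usefulness (each rated 0-10) into one score."""
    # Geometric mean: an outcome must score on BOTH dimensions to rank well.
    return (novelty * usefulness) ** 0.5

# Hypothetical expert ratings for two design outcomes.
print(creativity(novelty=8, usefulness=3))  # novel but not very useful: ~4.9
print(creativity(novelty=6, usefulness=6))  # balanced outcome: 6.0
```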
Abstract:
Assembly is an important part of the product development process. To avoid potential issues during assembly in specialized domains such as aircraft assembly, expert knowledge to predict such issues is helpful, and knowledge-based systems can act as virtual experts to provide assistance. Knowledge acquisition for such systems, however, is a challenge, and this paper describes one part of ongoing research to acquire knowledge through a dialog between an expert and a knowledge acquisition system. In particular, this paper discusses the use of a situation model for assemblies to present experts with a virtual assembly and help them locate the specific context of the knowledge they provide to the system.
Abstract:
In this article, we aim at reducing the error rate of the online Tamil symbol recognition system by employing multiple experts to reevaluate certain decisions of the primary support vector machine classifier. Motivated by the relatively high percentage of occurrence of base consonants in the script, a reevaluation technique has been proposed to correct any ambiguities arising in the base consonants. Secondly, a dynamic time-warping method is proposed to automatically extract the discriminative regions for each set of confused characters. Class-specific features derived from these regions aid in reducing the degree of confusion. Thirdly, statistics of specific features are proposed for resolving any confusions in vowel modifiers. The reevaluation approaches are tested on two databases (a) the isolated Tamil symbols in the IWFHR test set, and (b) the symbols segmented from a set of 10,000 Tamil words. The recognition rate of the isolated test symbols of the IWFHR database improves by 1.9 %. For the word database, the incorporation of the reevaluation step improves the symbol recognition rate by 3.5 % (from 88.4 to 91.9 %). This, in turn, boosts the word recognition rate by 11.9 % (from 65.0 to 76.9 %). The reduction in the word error rate has been achieved using a generic approach, without the incorporation of language models.
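Dynamic time warping itself is standard, so a compact reference implementation may be useful; the toy inputs below stand in for online stroke traces, and the paper's discriminative-region extraction built on top of the alignment is not reproduced.

```python
# Hedged sketch of dynamic time warping (DTW), the alignment technique the
# paper uses to locate discriminative regions between confused characters.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy 1-D traces standing in for online pen-stroke features.
stroke_a = np.array([0.0, 0.2, 0.9, 1.0, 0.4])
stroke_b = np.array([0.0, 0.1, 0.3, 1.0, 0.9, 0.3])
print(dtw_distance(stroke_a, stroke_b))
```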
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains requires holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application independent and dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefit both data mining researchers and practitioners.
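A hedged sketch of two such measures, a scale-independent error (here MAPE) and a volatility measure (here the standard deviation of the error series); the paper's actual suite may define these differently, and the energy figures are hypothetical.

```python
# Hedged sketch of two kinds of measures described above: one scale-independent
# error and one spread-of-errors (volatility) measure.
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error: independent of the data's scale."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean(np.abs((observed - predicted) / observed)) * 100

def volatility(observed, predicted):
    """Spread of the residuals: lower means more consistent predictions."""
    return np.std(np.asarray(observed) - np.asarray(predicted))

kwh_observed = [10.2, 11.5, 9.8, 12.0]   # hypothetical daily energy use
kwh_predicted = [10.0, 12.0, 9.5, 11.0]
print(mape(kwh_observed, kwh_predicted),
      volatility(kwh_observed, kwh_predicted))
```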
Abstract:
Bioenergy deployment offers significant potential for climate change mitigation, but also carries considerable risks. In this review, we bring together perspectives of various communities involved in the research and regulation of bioenergy deployment in the context of climate change mitigation: land-use and energy experts, land-use and integrated assessment modelers, human geographers, ecosystem researchers, climate scientists and two different strands of life-cycle assessment experts. We summarize technological options, outline the state-of-the-art knowledge on various climate effects, provide an update on estimates of technical resource potential and comprehensively identify sustainability effects. Cellulosic feedstocks, increased end-use efficiency, improved land carbon-stock management and residue use, and, when fully developed, BECCS appear as the most promising options, depending on development costs, implementation, learning and risk management. Combined heat and power, efficient biomass cookstoves and small-scale power generation for rural areas can help to promote energy access and sustainable development, along with reduced emissions. We estimate the sustainable technical potential as up to 100 EJ: high agreement; 100-300 EJ: medium agreement; above 300 EJ: low agreement. Stabilization scenarios indicate that bioenergy may supply from 10 to 245 EJ/yr to global primary energy supply by 2050. Models indicate that, if technological and governance preconditions are met, large-scale deployment (>200 EJ), together with BECCS, could help to keep global warming below 2 degrees Celsius above preindustrial levels; but such high deployment of land-intensive bioenergy feedstocks could also lead to detrimental climate effects and negatively impact ecosystems, biodiversity and livelihoods. The integration of bioenergy systems into agriculture and forest landscapes can improve land and water use efficiency and help address concerns about environmental impacts. We conclude that the high variability in pathways, uncertainties in technological development and ambiguity in political decisions render forecasts on deployment levels and climate effects very difficult. However, uncertainty about projections should not preclude pursuing beneficial bioenergy options.
Abstract:
The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: Global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema, and instead to develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model, the user only sees and offers feedback on a few "top-k" results: This result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance, by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback is on a given result. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
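A minimal sketch of the selection idea, assuming each candidate answer carries a predicted relevance and an uncertainty estimate; the additive weighting is an illustrative assumption, not the paper's method.

```python
# Hedged sketch: rank candidate answers by a blend of predicted relevance
# and expected informativeness of feedback (here, model uncertainty).

def select_top(results, k=3, explore_weight=0.5):
    """results: list of (answer, relevance, uncertainty) triples."""
    def utility(item):
        _, relevance, uncertainty = item
        # High relevance serves the user now; high uncertainty means
        # feedback on this answer teaches the system the most.
        return relevance + explore_weight * uncertainty
    return sorted(results, key=utility, reverse=True)[:k]

# Hypothetical candidate joins with (relevance, uncertainty) scores.
candidates = [("join A-B", 0.9, 0.05), ("join A-C", 0.6, 0.40),
              ("join B-C", 0.7, 0.10), ("join A-D", 0.5, 0.45)]
print(select_top(candidates, k=2))
```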
Abstract:
The broader goal of the research described here is to automatically acquire diagnostic knowledge from documents in the domain of manual and mechanical assembly of aircraft structures. These documents are treated as a discourse used by experts to communicate with others; it therefore becomes possible to use discourse analysis to enable machine understanding of the text. The research challenge addressed in this paper is to identify documents, or sections of documents, that are potential sources of knowledge. In a subsequent step, domain knowledge will be extracted from these segments. The segmentation task requires partitioning the document into relevant segments and understanding the context of each segment. In discourse analysis, the division of a discourse into segments is achieved through certain indicative clauses, called cue phrases, that signal changes in the discourse context. In formal documents, however, such language may not be used. Hence the use of a domain-specific ontology and an assembly process model is proposed to segregate chunks of the text based on a local context. Elements of the ontology/model and their related terms serve as indicators of the current context of a segment and of changes in context between segments. Local contexts are aggregated over increasingly larger segments to identify whether the document (or portions of it) pertains to the topic of interest, namely assembly. Knowledge acquired through such processes enables acquisition and reuse of knowledge during any part of the lifecycle of a product.
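A hedged sketch of ontology-guided segment scoring, assuming a flat set of domain terms as a stand-in for the ontology and process model; the term list and threshold are illustrative.

```python
# Hedged sketch: score text segments by density of domain-ontology terms to
# decide whether a segment pertains to the topic of interest (assembly).

ASSEMBLY_TERMS = {"rivet", "fastener", "jig", "shim", "drill", "sealant",
                  "fuselage", "torque", "alignment"}  # illustrative terms

def segment_score(segment, terms=ASSEMBLY_TERMS):
    """Fraction of a segment's words that are domain terms."""
    words = segment.lower().split()
    return sum(w.strip(".,;") in terms for w in words) / max(len(words), 1)

def relevant_segments(segments, threshold=0.05):
    return [s for s in segments if segment_score(s) >= threshold]

doc = ["Install the rivet and apply sealant along the fuselage joint.",
       "The quarterly budget review is scheduled for Monday."]
print(relevant_segments(doc))  # only the assembly-related segment survives
```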