846 results for Open Information Extraction
Abstract:
Open access is a new model for the publishing of scientific journals, enabled by the Internet, in which the published articles are freely available for anyone to read. During the 1990s, hundreds of individual open access journals were founded by groups of academics, supported by grants and unpaid voluntary work. During the last five years, other types of open access journals, funded by author charges, have started to emerge, and established publishers have also started to experiment with different variations of open access. This article reports on the experiences of one open access journal (the Electronic Journal of Information Technology in Construction, ITcon) over its ten-year history. In addition to a straightforward account of the lessons learned, the journal is benchmarked against a number of competitors in the same research area, and its development is put into the larger perspective of changes in scholarly publishing. The main findings are: a journal publishing around 20-30 articles per year, equivalent to a typical quarterly journal, can sustainably be produced using an open-source-like production model. The journal outperforms its competitors in some respects, such as speed of publication, availability of the results and balanced global distribution of authorship, and is on a par with them in most other respects. The key statistics for ITcon are: acceptance rate 55%; average speed of publication 6-7 months; 801 subscribers to email alerts; an average of 21 downloads by human readers per paper per month.
Abstract:
We propose two texture-based approaches, one involving Gabor filters and the other employing log-polar wavelets, for separating text from non-text elements in a document image. Both proposed algorithms compute local energy at information-rich points, which are marked by the Harris corner detector. The advantage of this approach is that local energy is calculated only at selected points rather than throughout the image, saving considerable computation time. The algorithms have been tested on a large set of scanned text pages, and the results are better than those of existing algorithms. Of the two proposed schemes, the Gabor-filter-based scheme marginally outperforms the wavelet-based scheme.
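A minimal Python/OpenCV sketch of the corner-guided local-energy idea described above, assuming a scanned page image; the Harris threshold, the four-orientation Gabor bank and the final text/non-text rule are illustrative placeholders, not the authors' settings.

```python
import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical scan

# 1. Mark information-rich points with the Harris corner detector.
harris = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)
pts = np.argwhere(harris > 0.01 * harris.max())        # (row, col) candidates

# 2. Small Gabor bank over four orientations (illustrative parameters).
kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                              lambd=10.0, gamma=0.5, psi=0)
           for t in np.arange(0, np.pi, np.pi / 4)]

# 3. Local energy at the selected points only. For brevity the filters are
#    applied to the whole image here; the paper's computational saving comes
#    from evaluating the responses just at the corner points.
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]
energy = sum(np.abs(r)[pts[:, 0], pts[:, 1]] for r in responses)

# 4. Placeholder decision rule: high oriented energy suggests text.
is_text = energy > np.median(energy)
```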
Abstract:
Copyright questions concerning the online use of scholarly works in libraries and universities have recently caused considerable difficulty. Information networks and the digital environment form a special context for the application of copyright law, and coming to grips with it requires closer knowledge of data transfer, databases and, more generally, the technical operations of information networks. Because the technical solutions applied differ from one context to another, this article examines, at a general level, the copyright and contract-law questions between users and rights holders that arise from the use of works in information networks. The aim is to bring out the copyright-relevant issues that should be taken into account in agreements between original authors, publishers and online publishers (for example, a library or a university) when online publications are archived or transmitted and when links are used. The questions are examined particularly from the publisher's point of view. The article also includes an empirical study of publishers' permission practices. The study examines how often, between 2000 and 2003, publishers granted permission to publish a dissertation article as part of a dissertation on the open, non-commercial web server of Helsinki University of Technology (Teknillinen korkeakoulu). Because links often play a significant role in online publishing while their copyright status remains unclear, the latter part of the article looks into the copyright status of links.
Abstract:
Extraction of formant information from linear-prediction phase spectra is proposed. It is shown that the derivative of the phase spectrum gives reliable formant information. Since the phase spectra of several resonators in cascade are additive, the resonance peaks are additive in the derivative of the phase spectrum, unlike in the magnitude spectrum, and hence the problem of identifying merged peaks is easily solved by this method. Application of the method is illustrated through examples of linear-prediction spectra obtained for simulated models and for actual speech segments.
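As a hedged sketch of the idea, the group delay (the negative phase derivative) of the all-pole LP filter can be computed directly with SciPy; librosa.lpc, the frame choice and the peak-picking settings here are conveniences, not taken from the paper, and the input file is hypothetical.

```python
import numpy as np
import librosa
from scipy.signal import group_delay, find_peaks

y, sr = librosa.load("vowel.wav", sr=8000)        # hypothetical voiced segment
frame = y[:240] * np.hamming(240)                 # one 30 ms analysis frame

a = librosa.lpc(frame, order=10)                  # LP inverse filter A(z)
w, gd = group_delay(([1.0], a))                   # phase derivative of 1/A(z)

# Resonances are additive in the group delay, so formants that merge in the
# magnitude spectrum still show up as separate peaks here.
peaks, _ = find_peaks(gd)
formants_hz = w[peaks] * sr / (2 * np.pi)
print(formants_hz)
```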
Abstract:
In voiced speech analysis, epochal information is useful for accurate estimation of pitch periods and the frequency response of the vocal tract system. Ideally, the linear prediction (LP) residual should give impulses at epochs. However, there are often ambiguities in the direct use of the LP residual, since samples of either polarity occur around epochs. Further, since the digital inverse filter does not exactly compensate for the phase response of the vocal tract system, there is uncertainty in the estimated epoch position. In this paper we present an interpretation of the LP residual by considering the effect of the following factors: 1) the shape of the glottal pulses, 2) inaccurate estimation of formants and bandwidths, 3) the phase angles of formants at the instants of excitation, and 4) zeros in the vocal tract system. A method for the unambiguous identification of epochs from the LP residual is then presented. The accuracy of the method is tested by comparing its results with the epochs obtained from the estimated glottal pulse shapes for several vowel segments. The method is used to identify the closed-glottis interval for the estimation of the true frequency response of the vocal tract system.
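A minimal sketch of the inverse-filtering step that yields the LP residual, assuming a short voiced recording; the Hilbert-envelope peak picking below is an illustrative stand-in for the paper's unambiguous epoch-identification method, not the method itself.

```python
import numpy as np
import librosa
from scipy.signal import lfilter, hilbert, find_peaks

y, sr = librosa.load("vowel.wav", sr=8000)   # hypothetical voiced segment
a = librosa.lpc(y, order=10)                 # LP inverse filter A(z)
residual = lfilter(a, [1.0], y)              # ideally impulse-like at epochs

# Samples of either polarity occur around epochs, so peak-pick an envelope
# of the residual rather than the raw signal (illustrative smoothing choice).
envelope = np.abs(hilbert(residual))
epochs, _ = find_peaks(envelope, distance=int(0.004 * sr))  # >= 4 ms apart
```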
Abstract:
Purpose - Many library automation packages are available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPACs of these library automation packages provide advanced search and retrieval of bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper makes an effort to enable full-text search in the widely used open-source library automation package Koha, by integrating it independently with two open-source digital library software packages: Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS). Design/methodology/approach - The implementation makes use of the Search and Retrieval by URL (SRU) feature available in Koha, GSDL and FGSS. Findings - Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and passing an SRU request from Koha to GSDL or FGSS. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL or FGSS). Originality/value - This is the first implementation enabling the full-text search feature in a library automation package by integrating it with digital library software.
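A hedged sketch of the kind of SRU searchRetrieve request that Koha could pass to GSDL or FGSS; the endpoint URL is hypothetical, while the operation, version, query and maximumRecords parameters are the standard SRU ones.

```python
import requests
import xml.etree.ElementTree as ET

SRU_ENDPOINT = "http://dl.example.org/sru"   # hypothetical GSDL/FGSS endpoint

resp = requests.get(SRU_ENDPOINT, params={
    "operation": "searchRetrieve",
    "version": "1.1",
    "query": "digital libraries",            # CQL query against the full text
    "maximumRecords": "10",
}, timeout=30)

# Read the hit count from the standard SRU response envelope.
root = ET.fromstring(resp.content)
ns = {"sru": "http://www.loc.gov/zing/srw/"}
print(root.findtext("sru:numberOfRecords", namespaces=ns), "full-text matches")
```

Because both sides speak SRU, Koha needs no knowledge of how GSDL or FGSS index the documents: it only forwards the query and renders the returned records.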
Abstract:
Mature, fully structured open-source spatial and temporal analysis technology looks set to carry the future of natural resource planning, especially in developing nations. The technology has gained enormous momentum because of its technical strength, affordability and ability to draw expertise from all sections of society. Sustainable development of a region depends on integrated planning approaches to decision making, which require timely and accurate spatial data. With increased developmental programmes, the need for appropriate decision support systems has grown, in order to analyse and visualise decisions involving the spatial and temporal aspects of natural resources. In this regard, a Geographic Information System (GIS), together with remote sensing data, supports applications that involve spatial and temporal analysis of digital thematic maps and remotely sensed images. Open-source GIS enables wide-scale applications involving decisions at various hierarchical levels (for example, from village panchayat to planning commission) on economic viability and social acceptance, apart from technical feasibility. GRASS (Geographic Resources Analysis Support System, http://wgbis.ces.iisc.ernet.in/grass) is an open-source GIS that runs on the Linux platform (freeware), but most of its functionality is driven through command-line arguments, necessitating a user-friendly and cost-effective graphical user interface (GUI). With these aspects in mind, the Geographic Resources Decision Support System (GRDSS) has been developed, with functionality for raster, topological vector, image processing, statistical analysis, geographical analysis, graphics production, etc. It operates through a GUI developed in Tcl/Tk (Tool Command Language / Toolkit) under Linux, as well as through a shell in X-Windows. GRDSS includes options such as import/export of different data formats, display, digital image processing, map editing, raster analysis, vector analysis, point analysis and spatial query, which are required for regional planning tasks such as watershed analysis and landscape analysis. It is customised to the Indian context, with an option to extract individual bands from IRS (Indian Remote Sensing) satellite data, which comes in BIL (Band Interleaved by Lines) format. The integration of PostgreSQL (a freeware) into GRDSS provides an efficient database management system.
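GRDSS itself is a Tcl/Tk GUI, but the GRASS modules it wraps can equally be driven from the GRASS Python scripting API, as in this minimal sketch; the map names and the IRS band file are hypothetical, and the script is assumed to run inside a GRASS session.

```python
import grass.script as gs

# Import a single band extracted from an IRS BIL scene (hypothetical file).
gs.run_command("r.in.gdal", input="irs_band3.tif", output="band3")

# One raster-analysis step of the kind GRDSS exposes through its menus.
gs.run_command("r.slope.aspect", elevation="dem", slope="slope_map")

# r.univar prints summary statistics for the result.
print(gs.read_command("r.univar", map="slope_map"))
```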
Abstract:
It is being realized that the traditional closed-door, market-driven approach to drug discovery may not be the best-suited model for diseases of the developing world such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for these patients, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained as it is in collaboration and in sharing resources under confidentiality, hampers the opportunity to bring in expertise from diverse fields and hinders the possibility of lowering the cost of drug discovery. The Open Source Drug Discovery project, initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials non-exclusively, with multiple companies participating and majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what Linux and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery.
Abstract:
A torque control scheme based on a direct torque control (DTC) algorithm using a 12-sided polygonal voltage space vector is proposed for variable-speed control of an open-end induction motor drive. The conventional DTC scheme uses the stator flux vector for sector identification and then selects the switching vector to control stator flux and torque. The proposed DTC scheme, however, selects switching vectors based on the sector of the estimated fundamental stator voltage vector and its position relative to the stator flux vector. The fundamental stator voltage estimation is based on the steady-state model of the induction motor, and the synchronous frequency of operation is derived from the computed stator flux using a low-pass filter technique. The proposed scheme uses the exact positions of the fundamental stator voltage vector and the stator flux vector to select the optimal switching vector for fast torque control with small variation of the stator flux within the hysteresis band. It provides full-load torque control with fast transient response down to very low speeds of operation, with reduced switching-frequency variation. Extensive experimental results are presented showing fast torque control over speeds from zero to rated.
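An illustrative Python sketch of just the sector logic implied above: placing the estimated fundamental stator voltage vector in one of twelve 30° sectors and forming torque/flux error signs. The comparator is a memoryless stand-in for the real hysteresis blocks, and the switching table is left as a placeholder since it is drive-specific.

```python
import numpy as np

def sector_12(v_alpha: float, v_beta: float) -> int:
    """Sector index (0..11) of a vector in the alpha-beta plane, 30 deg each."""
    angle = np.arctan2(v_beta, v_alpha) % (2 * np.pi)
    return int(angle // (np.pi / 6))

def comparator(error: float, band: float) -> int:
    """Memoryless stand-in for a hysteresis block: +1 raise, -1 lower, 0 hold."""
    return 1 if error > band else (-1 if error < -band else 0)

# Hypothetical per-cycle quantities from the flux/torque estimator.
sector = sector_12(v_alpha=120.0, v_beta=70.0)
d_torque = comparator(error=0.8, band=0.5)
d_flux = comparator(error=-0.02, band=0.05)

# A real 12-sided-polygon DTC drive would look up the inverter switching
# state from (sector, d_torque, d_flux); that table is omitted here.
print(sector, d_torque, d_flux)
```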
Abstract:
Many research institutions and universities across the world facilitate open access (OA) to their intellectual output through their own OA institutional repositories (IRs) or through centralized subject-based repositories. The Registry of Open Access Repositories (ROAR) lists more than 2850 such repositories across the world. Awareness of the benefits of OA to scholarly literature and of OA publishing is picking up in India, too. As per the ROAR statistics, to date there are more than 90 OA repositories in the country. India is doing particularly well in publishing open-access journals (OAJs). As per the Directory of Open Access Journals (DOAJ), to date India, with 390 OAJs, is ranked 5th in the world in the number of OAJs published. Much of the research done in India is reported in journals published from India. These journals have limited readership, and many of them are not indexed by Web of Science, Scopus or other leading international abstracting and indexing databases. Consequently, research done in the country remains hidden not only from fellow countrymen but also from the international community. This situation can easily be overcome if all researchers facilitate OA to their publications. One of the easiest ways to facilitate OA to scientific literature is through institutional repositories. If every research institution and university in India were to set up an OA IR and ensure that copies of the final accepted versions of all research publications are uploaded to it, research done in India would gain far better visibility. Federating the metadata from all the distributed, interoperable OA repositories in the country would serve as a window onto research done across the country, and this can easily be achieved by setting up harvesting software such as the PKP Harvester. In this paper, we share our experience in setting up a prototype metadata harvesting service, using the PKP harvesting software, for the OAI-compliant repositories in India.
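A minimal sketch of the OAI-PMH request that a harvester such as the PKP Harvester issues against each registered repository; the base URL is hypothetical, while the ListRecords verb and the oai_dc metadata prefix are standard OAI-PMH.

```python
import requests
import xml.etree.ElementTree as ET

BASE_URL = "http://repository.example.ac.in/oai"   # hypothetical Indian IR

resp = requests.get(BASE_URL, params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",                    # unqualified Dublin Core
}, timeout=60)

root = ET.fromstring(resp.content)
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
for record in root.iterfind(".//oai:record", ns):
    print(record.findtext(".//dc:title", namespaces=ns))

# A production harvester would also follow resumptionToken elements to page
# through large repositories and re-harvest incrementally by datestamp.
```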
Abstract:
Analysis of high-resolution satellite images has been an important research topic in urban analysis, and automatic road network extraction is one of its important tasks. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from the original image is difficult and computationally expensive because of the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. 1 m resolution IKONOS data have been used for the experiment.
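A hedged OpenCV sketch of the preprocessing and Mean Shift stages only; the kernel size and spatial/range bandwidths are illustrative, cv2.pyrMeanShiftFiltering stands in for the paper's Mean Shift segmentation, and the Level Set variant and quality measures are not reproduced.

```python
import cv2

img = cv2.imread("ikonos_tile.png")        # hypothetical 1 m IKONOS tile

# Median filtering removes small nonlinear noise segments while keeping
# the long thin structures that make up the road network.
denoised = cv2.medianBlur(img, 5)

# Mean Shift clustering in joint spatial/colour space; roads emerge as
# elongated homogeneous regions.
segmented = cv2.pyrMeanShiftFiltering(denoised, sp=15, sr=25)

# Crude binary mask as a placeholder for the final extracted road layer.
gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("roads_mask.png", mask)
```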
Abstract:
We analyze the spectral zero-crossing rate (SZCR) properties of transient signals and show that the SZCR contains accurate localization information about the transient. For a train of pulses containing transient events, the SZCR computed on a sliding-window basis is useful in locating the impulse positions accurately. We present the properties of the SZCR on standard stylized signal models and then show how it may be used to estimate epochs in speech signals. We also present comparisons with some state-of-the-art techniques based on the group-delay function. Experiments on real speech show that the proposed SZCR technique is better than other group-delay-based epoch detectors. In the presence of noise, a comparison with the zero-frequency filtering (ZFF) technique and the Dynamic Programming Projected Phase-Slope Algorithm (DYPSA) showed that the SZCR technique performs better than DYPSA but not as well as ZFF. For highpass-filtered speech, where ZFF performance suffers drastically, the identification rates of the SZCR are better than those of DYPSA.
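One plausible reading of the sliding-window SZCR, sketched in NumPy: count the sign changes of the real part of the short-time spectrum along the frequency axis. The window length, hop and the use of the real part are assumptions here, not the paper's exact definition, and the input file is hypothetical.

```python
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=8000)   # hypothetical utterance
win, hop = 160, 20                            # 20 ms window, 2.5 ms hop

szcr = []
for start in range(0, len(y) - win, hop):
    frame = y[start:start + win] * np.hamming(win)
    spec = np.fft.rfft(frame)
    signs = np.sign(spec.real)
    szcr.append(np.count_nonzero(np.diff(signs)))  # crossings along frequency

# Epoch candidates would be read off extrema of this contour.
szcr = np.asarray(szcr, dtype=float)
```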
Abstract:
Identifying symmetry in scalar fields is a recent area of research in the scientific visualization and computer graphics communities. Symmetry detection techniques based on abstract representations of the scalar field use only limited geometric information in their analysis; hence, they may not be suited to applications that study the geometric properties of regions in the domain. On the other hand, methods that accumulate local evidence of symmetry through a voting procedure have been used successfully to detect geometric symmetry in shapes. We extend such a technique to scalar fields and use it to detect geometrically symmetric regions in synthetic as well as real-world datasets. Identifying symmetry in the scalar field can significantly improve visualization and interactive exploration of the data. We demonstrate different applications of the symmetry detection method to scientific visualization: query-based exploration of scalar fields, linked selection in symmetric regions for interactive visualization, and classification of geometrically symmetric regions with an application to anomaly detection.
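A simplified transplant of shape-style symmetry voting to a 2D scalar field, as a hedged sketch: pairs of grid points with near-equal scalar values vote for their perpendicular bisector in a Hough-style (theta, rho) accumulator, and peaks suggest candidate mirror lines. The sample count, value tolerance and bin resolutions are illustrative, and this is far cruder than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
field = np.exp(-8 * ((np.abs(X) - 0.5) ** 2 + Y ** 2))  # two mirror-image blobs

h, w = field.shape
diag = float(np.hypot(h, w))
n_theta, n_rho = 180, 200
acc = np.zeros((n_theta, n_rho))                 # Hough-style accumulator

for _ in range(20000):
    p = rng.integers(0, [h, w])                  # random grid point (row, col)
    q = rng.integers(0, [h, w])
    if np.array_equal(p, q):
        continue
    if abs(field[tuple(p)] - field[tuple(q)]) > 0.01:
        continue                                 # only near-equal values vote
    d = (q - p).astype(float)
    d /= np.linalg.norm(d)
    m = (p + q) / 2.0
    theta = np.arctan2(d[0], d[1]) % np.pi       # orientation of bisector normal
    rho = float(d @ m)                           # signed offset of the bisector
    ti = min(int(theta / np.pi * n_theta), n_theta - 1)
    ri = min(int((rho + diag) / (2 * diag) * n_rho), n_rho - 1)
    acc[ti, ri] += 1

ti, ri = np.unravel_index(acc.argmax(), acc.shape)
print("strongest mirror-line vote: theta bin", ti, ", rho bin", ri)
```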
Abstract:
Floods are among the most detrimental hydro-meteorological threats to mankind, which calls for efficient flood assessment models. In this paper, we propose remote-sensing-based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise, so they are processed in two stages: speckle-removal filtering, followed by image segmentation for flood mapping. The speckle noise is reduced using Lee, Frost and Gamma MAP filters, and a performance comparison of these filters is presented. From the results obtained, we find the Gamma MAP filter to be the most reliable. The Gamma-MAP-filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient-ascent method in which segmentation uses both spectral and spatial information. The Kosi river flood is considered as the test case. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is the more effective for flood mapping.
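A minimal NumPy/SciPy sketch of a classic speckle-removal stage using local statistics (a Lee-style filter); the window size and the global noise-variance estimate are illustrative choices, the input file is hypothetical, and the GLCM and Mean Shift segmentation stages are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
    """Lee filter: blend each pixel toward the local mean by local SNR."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    noise_var = var.mean()              # crude global speckle estimate
    weight = var / (var + noise_var)    # ~0 in flat areas, ~1 on edges
    return mean + weight * (img - mean)

sar = np.load("kosi_sar.npy").astype(float)   # hypothetical SAR tile
despeckled = lee_filter(sar)
```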