898 results for information content
Abstract:
SPOT simulation imagery was acquired for a test site in the Forest of Dean in Gloucestershire, U.K. These data were qualitatively and quantitatively evaluated for their potential application in forest resource mapping and management. A variety of techniques are described for enhancing the image with the aim of providing species-level discrimination within the forest. Visual interpretation of the imagery was more successful than automated classification. The heterogeneity within the forest classes, and in particular between the forest and urban classes, resulted in poor discrimination using traditional 'per-pixel' automated methods of classification. Different means of assessing classification accuracy are proposed. Two techniques for measuring textural variation were investigated in an attempt to improve classification accuracy. The first of these, a sequential segmentation method, was found to be beneficial. The second, a parallel segmentation method, resulted in little improvement, though this may be related to a combination of the image resolution and the size of the texture extraction area. The effect on classification accuracy of combining the SPOT simulation imagery with other data types was investigated. A grid cell encoding technique was selected as most appropriate for storing digitised topographic (elevation, slope) and ground truth data. Topographic data were shown to improve species-level classification, though with sixteen classes overall accuracies were consistently below 50%. Neither sub-division into age groups nor the incorporation of principal components and a band ratio significantly improved classification accuracy. It is concluded that SPOT imagery will not permit species-level classification within forested areas as diverse as the Forest of Dean. The imagery will be most useful as part of a multi-stage sampling scheme. The use of texture analysis is highly recommended for extracting the maximum information content from the data. Incorporation of the imagery into a GIS will both aid discrimination and provide a useful management tool.
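The abstract mentions assessing classification accuracy but does not give details. Below is a minimal sketch, not taken from the study, of one standard way to summarise accuracy from a confusion matrix using overall accuracy and Cohen's kappa; the class counts are hypothetical.

```python
import numpy as np

def accuracy_measures(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed (overall) accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1 - pe)

# hypothetical 3-class example (e.g. two conifer species and an urban class)
cm = [[40, 8, 2],
      [10, 35, 5],
      [4, 6, 30]]
overall, kappa = accuracy_measures(cm)
print(f"overall accuracy = {overall:.2f}, kappa = {kappa:.2f}")
```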
Abstract:
We address the question of how to communicate among distributed processes values such as real numbers, continuous functions and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with a certain accuracy, focused on a certain locality, as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
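As a loose illustration of the query-answer idea (not the paper's formal protocol or its prototype), the sketch below models a process that computes an approximation of sqrt(2): the receiving process sends a query for a given accuracy and gets back a rational interval of that width together with a simple effort indicator; all names and the bisection scheme are invented for the example.

```python
from fractions import Fraction

def sqrt2_process(query_bits):
    """Answer a query for 2**-query_bits accuracy with a rational interval
    enclosing sqrt(2), plus an effort indicator (bisection steps used)."""
    lo, hi = Fraction(1), Fraction(2)
    steps = 0
    while hi - lo > Fraction(1, 2 ** query_bits):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
        steps += 1
    return (lo, hi), steps

# the receiving process issues increasingly demanding queries
for bits in (4, 8, 16):
    (lo, hi), effort = sqrt2_process(bits)
    print(bits, float(lo), float(hi), "effort:", effort)
```

The width of the returned interval plays the role of the answer's information content, and the step count plays the role of the effort indicator attached to the query.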
Abstract:
Divisia money is a monetary aggregate that gives each component asset an assigned weight. We use an evolutionary neural network to calculate new Divisia weights for each component, utilising the Bank of England monetary data for the U.K. We propose a new monetary aggregate using our newly derived weights to carry out quantitative inflation prediction. The results show that this new monetary aggregate has better inflation forecasting performance than the traditionally constructed Bank of England Divisia money. This result is important for monetary policymakers, as improved construction of monetary aggregates will yield tighter relationships between key macroeconomic variables and, ultimately, greater macroeconomic control. Research is ongoing to establish the extent of the increased information content and parameter stability of this new monetary aggregate.
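For context, the growth rate of a conventional Divisia (Tornqvist-Theil) aggregate is a share-weighted sum of component growth rates. The sketch below computes it for hypothetical components; the paper's contribution is to replace the standard expenditure shares with weights learned by an evolutionary neural network, which is only hinted at here. The numbers are illustrative, not Bank of England data.

```python
import numpy as np

def divisia_growth(m_prev, m_curr, w_prev, w_curr):
    """Tornqvist-Theil Divisia growth: weighted sum of component log-growth
    rates, using the average of last period's and this period's weights
    (weights are shares summing to one)."""
    avg_w = 0.5 * (np.asarray(w_prev) + np.asarray(w_curr))
    log_growth = np.log(np.asarray(m_curr)) - np.log(np.asarray(m_prev))
    return float(avg_w @ log_growth)

# hypothetical components: notes & coin, sight deposits, time deposits
m_prev = [100.0, 400.0, 250.0]
m_curr = [102.0, 410.0, 252.0]
w_prev = [0.20, 0.55, 0.25]   # standard expenditure shares, or
w_curr = [0.19, 0.57, 0.24]   # weights produced by the evolutionary network
print(f"Divisia growth: {divisia_growth(m_prev, m_curr, w_prev, w_curr):.4%}")
```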
Abstract:
The expansion of the Internet has made searching a crucial task. Internet users, however, must make a considerable effort to formulate a search query that returns the required results. Many methods have been devised to assist in this task by helping users modify their query to obtain better results. In this paper we propose an interactive method for query expansion. It is based on the observation that documents often contain terms with high information content, which can summarise their subject matter. We present experimental results demonstrating that our approach significantly shortens the time required to accomplish a given task by performing web searches.
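The abstract does not give the scoring details. As a rough, hedged sketch of the general idea, the code below ranks candidate expansion terms from retrieved documents by a simple tf-idf-style information-content weight and leaves the final choice to the user; the function name, the toy documents and the document-frequency table are all invented.

```python
import math
from collections import Counter

def expansion_candidates(result_docs, corpus_doc_freq, n_docs, top_k=5):
    """Rank candidate expansion terms from retrieved documents by a simple
    information-content weight (tf * idf); the user picks which to add."""
    tf = Counter(t for doc in result_docs for t in doc.lower().split())
    scores = {
        t: f * math.log(n_docs / (1 + corpus_doc_freq.get(t, 0)))
        for t, f in tf.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

docs = ["jaguar speed in the rainforest", "jaguar habitat and prey"]
df = {"jaguar": 12, "the": 9000, "in": 9500, "rainforest": 40,
      "habitat": 80, "prey": 60, "speed": 300, "and": 9800}
print(expansion_candidates(docs, df, n_docs=10000))
```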
Abstract:
This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implied volatility (CIV) measures are explored. For all pair-wise comparisons, it is found that a CIV measure closely related to the model-free implied volatility nearly always delivers the most accurate forecasts for the majority of the firms. This finding remains consistent for different forecast horizons, volatility definitions, loss functions and forecast evaluation settings.
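For reference, the standard textbook forms of the model-free implied variance and its corridor version (the study's exact discretisation and corridor barriers may differ) write both as integrals of option prices over strike, with the corridor measure truncating the integration range to barriers B1 < B2. Under zero interest rates, with C(T,K) the price of a call with maturity T and strike K and S0 the current underlying price:

```latex
% model-free implied variance (Britten-Jones & Neuberger style, zero rates)
\sigma^2_{\mathrm{MF}}(0,T) \;=\; \frac{2}{T}\int_{0}^{\infty}
  \frac{C(T,K) - \max(S_0 - K,\, 0)}{K^{2}}\,\mathrm{d}K ,
\qquad
% corridor implied variance: same integrand, truncated to [B_1, B_2]
\sigma^2_{\mathrm{CIV}}(0,T) \;=\; \frac{2}{T}\int_{B_1}^{B_2}
  \frac{C(T,K) - \max(S_0 - K,\, 0)}{K^{2}}\,\mathrm{d}K .
```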
Abstract:
The paper describes the information and software content of a computer knowledge bank on medical diagnostics. The classes of its users and the tasks they can solve are described. The information content of the bank comprises three ontologies: an ontology of observations in the field of medical diagnostics, an ontology of the knowledge base (diseases) in medical diagnostics and an ontology of case records. It also contains three classes of information resources, corresponding to these ontologies, for every division of medicine: observation bases, knowledge bases, and databases (with data about patients). The software content consists of editors for the different kinds of information (ontologies, observation bases, knowledge bases and databases) and a program that performs medical diagnostics.
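As a purely illustrative sketch (not the bank's actual schema), the three ontologies could be mirrored by three record types plus a trivial matching routine; every name, field and threshold below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:           # mirrors the ontology of observations
    name: str                # e.g. "body temperature"
    value: float
    unit: str

@dataclass
class Disease:               # mirrors the ontology of the knowledge base
    name: str
    typical_findings: dict   # observation name -> (low, high) expected range

@dataclass
class CaseRecord:            # mirrors the ontology of case records
    patient_id: str
    observations: list = field(default_factory=list)

def matches(case: CaseRecord, disease: Disease) -> bool:
    """Crude check: every recorded observation that the disease describes
    falls inside the expected range."""
    for obs in case.observations:
        if obs.name in disease.typical_findings:
            low, high = disease.typical_findings[obs.name]
            if not (low <= obs.value <= high):
                return False
    return True

flu = Disease("influenza", {"body temperature": (38.0, 41.0)})
case = CaseRecord("p-001", [Observation("body temperature", 39.2, "degC")])
print(matches(case, flu))   # True
```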
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively.
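As a small reference sketch (not the paper's evaluation code), the snippet below computes MSE and PSNR between a reference and a denoised image, and includes the Anscombe transform, a common way of making Poisson noise approximately Gaussian so that AWGN-oriented filters can be applied; the test arrays are synthetic stand-ins.

```python
import numpy as np

def mse_psnr(reference, denoised, max_val=255.0):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    reference = np.asarray(reference, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    mse = float(np.mean((reference - denoised) ** 2))
    psnr = 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr

def anscombe(x):
    """Variance-stabilising transform: Poisson noise -> roughly unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 120.0)
noisy = rng.poisson(clean).astype(float)   # Poisson quantum noise on a flat image
print(mse_psnr(clean, noisy))
print(anscombe(noisy).std())               # close to 1 after stabilisation
```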
Abstract:
The current research activities of the Institute of Mathematics and Informatics at the Bulgarian Academy of Sciences (IMI-BAS) include the study and application of knowledge-based methods for the creation, integration and development of multimedia digital libraries with applications in cultural heritage. This report presents IMI-BAS's developments in digital library management systems and portals, i.e. the Bulgarian Iconographical Digital Library, the Bulgarian Folklore Digital Library and the Bulgarian Folklore Artery, etc., developed during several national and international projects: "Digital Libraries with Multimedia Content and its Application in Bulgarian Cultural Heritage" (contract 8/21.07.2005 between IMI-BAS and the State Agency for Information Technologies and Communications); FP6/IST/P-027451 project LOGOS "Knowledge-on-Demand for Ubiquitous Learning", EU FP6, IST, Priority 2.4.13 "Strengthening the Integration of the ICT research effort in an Enlarged Europe"; NSF project D-002-189 SINUS "Semantic Technologies for Web Services and Technology Enhanced Learning"; and NSF project IO-03-03/2006 "Development of Digital Libraries and Information Portal with Virtual Exposition 'Bulgarian Folklore Heritage'". The presented prototypes aim to provide flexible and effective access to the multimedia presentation of cultural heritage artefacts and collections, maintaining different forms and formats of the digitised information content and rich functionality for interaction. The developments are the result of long-standing interests and work in technological developments in information systems, knowledge processing and content management systems. The current research activities aim at creating innovative solutions for assembling multimedia digital libraries for collaborative use in a specific cultural heritage context, maintaining their semantic interoperability and creating new services for dynamic aggregation of their resources, access improvement, personalisation, intelligent curation of content, and content protection. The investigations are directed towards the development of distributed tools for aggregating heterogeneous content and ensuring semantic compatibility with the European digital library EUROPEANA, thus providing possibilities for pan-European access to rich digitised collections of Bulgarian cultural heritage.
Abstract:
This paper examines investors' reactions to dividend reductions or omissions conditional on past earnings and dividend patterns for a sample of eighty-two U.S. firms that incurred an annual loss. We document that the market reaction for firms with long patterns of past earnings and dividend payouts is significantly more negative than for firms with less-established past earnings and dividend records. Our results can be explained by the following line of reasoning. First, consistent with DeAngelo, DeAngelo, and Skinner (1992), a loss following a long stream of earnings and dividend payments represents an unreliable indicator of future earnings. Thus, established firms have lower loss reliability than less-established firms. Second, because current earnings and dividend policy are substitute means of forecasting future earnings, lower loss reliability increases the information content of dividend reductions. Therefore, given the presence of a loss, the longer the stream of prior earnings and dividend payments, (1) the lower the loss reliability and (2) the more reliably dividend cuts are perceived as an indication that earnings difficulties will persist in the future.
Abstract:
This review discusses the connection between quantitative changes of environmental factors and oribatid communities. An overview of the available studies clearly shows how various characteristics of oribatid communities are modified by changes in moisture, temperature, heavy metal concentration, organic matter content and level of disturbance. The most important question concerning the application of oribatids as indicators is to clarify what kind of information content natural oribatid coenological patterns possess from the perspective of bioindication. Most of the variables listed above can be directly measured, since rapid methods are available to quantify soil parameters. The responses of oribatids are worth studying in a more complex approach. Even now we have extensive knowledge of how communities change in response to modifications of different factors. These pieces of information necessitate the elaboration of methods that make oribatid communities suitable for prognosticating to what extent a given site can be considered near-natural or degraded, based on the oribatid composition of a single sample taken from that area. Answering this question requires extensive and coordinated work.
Abstract:
The main inputs to the hippocampus arise from the entorhinal cortex (EC) and form a loop involving the dentate gyrus, CA3 and CA1 hippocampal subfields and then back to EC. Since the discovery in the 1950s that the hippocampus is involved in memory formation, this region and its circuitry have been extensively studied. Beyond memory, the hippocampus has also been found to play an important role in spatial navigation. In rats and mice, place cells show a close relation between firing rate and the animal's position in a restricted area of the environment, the so-called place field. The firing of place cells peaks at the center of the place field and decreases when the animal moves away from it, suggesting the existence of a rate code for space. Nevertheless, many have described the emergence of hippocampal network oscillations of multiple frequencies depending on behavioral state, which are believed to be important for temporal coding. In particular, theta oscillations (5-12 Hz) exhibit a spatio-temporal relation with place cells known as phase precession, in which place cells consistently change the theta phase of spiking as the animal traverses the place field. Moreover, current theories state that CA1, the main output stream of the hippocampus, integrates inputs from EC and CA3 through network oscillations of different frequencies, namely high gamma (60-100 Hz; HG) and low gamma (30-50 Hz; LG), respectively, which tend to be nested in different phases of the theta cycle. In the present dissertation we use a freely available online dataset to perform extensive computational analyses aimed at reproducing classical and recent results about the activity of place cells in the hippocampus of freely moving rats. In particular, we revisit the debate of whether phase precession is due to changes in firing frequency or space alone, and conclude that the phenomenon cannot be explained by either factor independently but by their joint influence. We also perform novel analyses investigating further characteristics of place cells in relation to network oscillations. We show that the strength of theta modulation of spikes only marginally affects the spatial information content of place cells, while the mean spiking theta phase has no influence on spatial information. Further analyses reveal that place cells are also modulated by theta when they fire outside the place field. Moreover, we find that the firing of place cells within the theta cycle is modulated by HG and LG amplitude in both CA1 and EC, matching cross-frequency coupling results found at the local field potential level. Additionally, the phase-amplitude coupling in CA1 associated with spikes inside the place field is characterized by amplitude modulation in the 40-80 Hz range. We conclude that place cell firing is embedded in large network states reflected in local field potential oscillations and suggest that their activity might be seen as a dynamic state rather than a fixed property of the cell.
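The spatial information content mentioned above is commonly quantified per spike from the occupancy distribution and the firing-rate map (a Skaggs-style measure). Below is a minimal sketch, not taken from the dissertation's code, with a synthetic occupancy map and a Gaussian place field as placeholders.

```python
import numpy as np

def spatial_information(occupancy, rate_map):
    """Spatial information in bits per spike:
    sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean),
    where p_i is the occupancy probability of spatial bin i and r_i its
    mean firing rate; r_mean is the occupancy-weighted mean rate."""
    p = np.asarray(occupancy, dtype=float)
    p = p / p.sum()
    r = np.asarray(rate_map, dtype=float)
    r_mean = float(np.sum(p * r))
    mask = r > 0
    ratio = r[mask] / r_mean
    return float(np.sum(p[mask] * ratio * np.log2(ratio)))

occ = np.ones(20)                                               # uniform occupancy, 20 bins
rate = 8.0 * np.exp(-0.5 * ((np.arange(20) - 10) / 2.0) ** 2)   # Hz, Gaussian place field
print(spatial_information(occ, rate))                           # > 0 bits/spike for a tuned cell
```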
Abstract:
Various physical systems have dynamics that can be modeled by percolation processes. Percolation is used to study issues ranging from fluid diffusion through disordered media to the fragmentation of a computer network caused by hacker attacks. A common feature of all of these systems is the presence of two non-coexistent regimes associated with certain properties of the system. For example, a disordered medium may or may not allow the flow of the fluid, depending on its porosity. The change from one regime to another characterizes the percolation phase transition. The standard way of analyzing this transition uses the order parameter, a variable related to some characteristic of the system that exhibits zero value in one of the regimes and a nonzero value in the other. The proposal introduced in this thesis is that this phase transition can be investigated without the explicit use of the order parameter, but rather through the Shannon entropy. This entropy is a measure of the degree of uncertainty in the information content of a probability distribution. The proposal is evaluated in the context of cluster formation in random graphs, and we apply the method to both classical (Erdős-Rényi) and explosive percolation. It is based on the computation of the entropy contained in the cluster size probability distribution, and the results show that the transition critical point relates to the derivatives of the entropy. Furthermore, the difference between the smooth and abrupt character of the classical and explosive percolation transitions, respectively, is reinforced by the observation that the entropy has a maximum at the classical transition critical point, whereas no such correspondence occurs for explosive percolation.
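A minimal sketch of the kind of computation described (not the thesis code, and the exact definition of the cluster-size distribution used there may differ): build an Erdős-Rényi graph with networkx, take the distribution of cluster sizes, and compute its Shannon entropy while sweeping the mean degree across the classical critical point c = 1.

```python
import numpy as np
import networkx as nx

def cluster_entropy(n, p, seed=0):
    """Shannon entropy of the cluster-size distribution of an Erdos-Renyi
    graph G(n, p): H = -sum_s P(s) ln P(s), where P(s) is the fraction of
    clusters having size s."""
    g = nx.gnp_random_graph(n, p, seed=seed)
    sizes = [len(c) for c in nx.connected_components(g)]
    _, counts = np.unique(sizes, return_counts=True)
    probs = counts / counts.sum()
    return float(-np.sum(probs * np.log(probs)))

# sweep the mean degree c = p * n across the classical critical point c = 1
n = 2000
for c in (0.5, 0.8, 1.0, 1.2, 1.5):
    print(c, round(cluster_entropy(n, c / n), 3))
```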
Abstract:
Requirements for space-based monitoring of permafrost features had already been defined within the IGOS Cryosphere Theme Report at the start of the IPY in 2007 (IGOS, 2007). The WMO Polar Space Task Group (PSTG, http://www.wmo.int/pages/prog/sat/pstg_en.php) identified the need to review the requirements for permafrost monitoring and to update these requirements in 2013. Relevant surveys with a focus on satellite data are already available from the ESA DUE Permafrost user requirements survey (2009), the United States National Research Council (2014) and the ESA-CliC-IPA-GTN-P workshop in February 2014. These reports have been reviewed, specific needs discussed within the community, and a white paper submitted to the WMO PSTG. Acquisition requirements for monitoring, especially of terrain changes (incl. rock glaciers and coastal erosion) and lakes (extent, ice properties etc.), with respect to current satellite missions have been specified. About 50 locations ('cold spots') where in situ permafrost monitoring (Arctic and Antarctic) has been taking place for many years or where field stations are currently established have been identified. These sites have been proposed to the WMO Polar Space Task Group as focus areas for future monitoring by high resolution satellite data. The specifications of these sites, including metadata on site instrumentation, have been published as a supplement to the white paper (Bartsch et al. 2014, doi:10.1594/PANGAEA.847003). The representativeness of the 'cold spots' around the Arctic has then been assessed based on a landscape units product developed as part of the FP7 project PAGE21. The ESA DUE Permafrost service has been utilized to produce a pan-arctic database (25 km, 2000-2014) comprising Mean Annual Surface Temperature, Annual and Summer Amplitude of Surface Temperature, and Mean Summer (July-August) Surface Temperature. Surface status (frozen/unfrozen) related products have also been derived from the ESA DUE Permafrost service. These include the length of the unfrozen period, the first unfrozen day and the first frozen day. In addition, SAR (ENVISAT ASAR GM) statistics as well as topographic parameters have been considered. The circumpolar datasets have been assessed for redundancy in their information content, and 12 distinct units could be derived. The landscape units reveal similarities between the North Slope of Alaska and the region from the Yamal Peninsula to the Yenisei estuary. Northern Canada is characterized by the same landscape units as western Siberia. North-eastern Canada shows similarities to the Laptev coast region. This paper presents the results of this assessment, formulates recommendations for extensions of the in situ monitoring networks, and categorizes the sites by satellite data requirements (specifically Sentinels) with respect to landscape type and related processes.
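The abstract does not state how the 12 landscape units were derived. Purely as an illustration of how gridded variables like those above can be grouped into landscape units, the sketch below clusters synthetic 25 km grid-cell features with k-means (scikit-learn); every array, variable list and parameter here is a stand-in, not the PAGE21 product.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical grid cells x variables (mean annual surface temperature,
# summer temperature, temperature amplitude, unfrozen-period length,
# backscatter statistic, elevation) -- random stand-in data
rng = np.random.default_rng(42)
features = rng.normal(size=(5000, 6))

# standardise so no single variable dominates, then group cells into units
X = StandardScaler().fit_transform(features)
units = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(units))   # number of grid cells assigned to each unit
```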
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electrophysiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERPs) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared with other commercial assistive communication devices, which limits their adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
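As a simplified evidence-accumulation sketch of P300 target character estimation with a language-model prior (not the dissertation's algorithm), the code below updates character probabilities after each stimulus flash: the classifier score for a flash is treated as evidence for the flashed group and against the rest, starting from a prior that could come from a language model. The character set, flash groups, scores and prior are all hypothetical.

```python
import numpy as np

def update_posterior(prior, flash_groups, scores, chars):
    """Accumulate classifier evidence over flashes and renormalise to a
    posterior over candidate characters."""
    log_post = np.log(prior)
    for group, score in zip(flash_groups, scores):
        in_group = np.isin(chars, list(group))
        # larger score -> stronger evidence that the target was in this group
        log_post += np.where(in_group, score, -score)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

chars = np.array(list("ABCDEF"))
prior = np.array([0.10, 0.05, 0.30, 0.25, 0.15, 0.15])   # e.g. from a language model
flash_groups = ["ACE", "BDF", "ABC", "DEF"]               # hypothetical flash pattern
scores = np.array([1.2, -0.8, 0.9, -0.5])                 # hypothetical classifier outputs
post = update_posterior(prior, flash_groups, scores, chars)
print(chars[np.argmax(post)], post.round(3))
```

Dynamic data collection of the kind described above would correspond to stopping the flash sequence once the maximum posterior probability exceeds a confidence threshold instead of using a fixed number of repetitions.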