994 results for correlated information


Relevance: 30.00%

Publisher:

Abstract:

In this paper, we propose a sparse multi-carrier index keying (MCIK) method for orthogonal frequency division multiplexing (OFDM) systems, which uses the indices of sparse sub-carriers to transmit data and improves the performance of signal detection over highly correlated sub-carriers. Although a receiver can exploit a power gain with precoding in OFDM, signal detection is usually highly sensitive because orthogonality is not retained in highly dispersive environments. To overcome this, we develop the trade-off between the sparsity of MCIK, correlation, and performance, analyzing the average probability of the error propagation caused by incorrect index detection over highly correlated sub-carriers. In asymptotic cases, we show how the sparsity of MCIK should be designed so that it outperforms the classical OFDM system. Based on this feature, sparse MCIK-based OFDM is a better choice for low detection errors over highly correlated sub-carriers.

Abstract:

The natural stimuli projected on to our retinas provide us with rich visual information. This information varies along "low-level" properties such as luminance, contrast, and spatial frequencies. While part of this information reaches our awareness, another part is processed in the brain without our being conscious of it. Which properties of the information influence brain activity and behaviour consciously versus non-consciously, however, remains poorly understood. This question was examined in the last two articles of the present thesis, exploiting the psychophysical techniques developed in the first two articles. The first article presents the SHINE (spectrum, histogram, and intensity normalization and equalization) toolbox, developed to allow the control of low-level image properties in MATLAB. The second article describes and validates the so-called spatial frequency bubbles technique, which was used throughout the studies of this thesis to reveal the spatial frequencies used in various face perception tasks. This technique offers the advantages of high resolution in the spatial frequency domain as well as low experimental bias. The third and fourth articles address the processing of spatial frequencies as a function of awareness. In the first case, the frequency bubbles method was used with masked repetition priming to identify the spatial frequencies correlated with observers' behavioural responses during the perception of the gender of faces presented consciously versus non-consciously. The results show that the same spatial frequencies significantly influence response times in both awareness conditions, but in opposite directions.
In the last article, the frequency bubbles method was combined with intracranial recordings and Continuous Flash Suppression (Tsuchiya & Koch, 2005) to map the spatial frequencies that modulate the activation of specific brain structures (the insula and the amygdala) during the conscious versus non-conscious perception of emotional facial expressions. In both regions, the results show that non-conscious perception occurs faster and relies more on low spatial frequencies than conscious perception. The contribution of this thesis is therefore twofold. On the one hand, methodological contributions to visual perception research are made through the introduction of the SHINE toolbox and the frequency bubbles technique. On the other hand, insights into the "correlates of consciousness" are provided using two different approaches.

Abstract:

Data assimilation provides techniques for combining observations and prior model forecasts to create initial conditions for numerical weather prediction (NWP). The relative weighting assigned to each observation in the analysis is determined by its associated error. Remote sensing data usually has correlated errors, but the correlations are typically ignored in NWP. Here, we describe three approaches to the treatment of observation error correlations. For an idealized data set, the information content under each simplified assumption is compared with that under correct correlation specification. Treating the errors as uncorrelated results in a significant loss of information. However, retention of an approximated correlation gives clear benefits.
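The information-content comparison can be sketched, under simplifying assumptions that are ours rather than the paper's (Gaussian statistics, an identity observation operator H, an identity prior covariance B, and a first-order Markov model for the correlated observation errors), as the Shannon entropy reduction S = ½ ln(det B / det A):

```python
import numpy as np

def info_content(b: np.ndarray, r: np.ndarray) -> float:
    """Shannon information content (in nats) of observations with error
    covariance r, for a Gaussian prior covariance b and H = I:
    S = 0.5 * ln(det(B) / det(A)), with A = (B^-1 + R^-1)^-1."""
    a = np.linalg.inv(np.linalg.inv(b) + np.linalg.inv(r))
    _, logdet_b = np.linalg.slogdet(b)
    _, logdet_a = np.linalg.slogdet(a)
    return 0.5 * (logdet_b - logdet_a)

n = 5
idx = np.arange(n)
b = np.eye(n)
r_true = 0.5 * 0.8 ** np.abs(idx[:, None] - idx[None, :])  # correlated errors
r_diag = 0.5 * np.eye(n)                                   # correlations ignored

print(f"true R:     {info_content(b, r_true):.3f} nats")
print(f"diagonal R: {info_content(b, r_diag):.3f} nats")
```

The gap between the two numbers quantifies how much the information-content estimate shifts when error correlations are dropped.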

Abstract:

In survival analysis frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted and the deviance information criterion is used to compare models. As an illustration of the approach a bivariate data set of corneal graft survival times is analysed.
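A minimal sketch of the deviance information criterion (DIC) used for model comparison, applied here to a toy exponential survival likelihood rather than the paper's inverse Gaussian frailty model; the posterior draws below are stand-ins, not the output of an actual MCMC run:

```python
import math
import random

def dic(deviance_samples, deviance_at_mean):
    """Deviance information criterion: DIC = D_bar + pD, pD = D_bar - D(theta_bar)."""
    d_bar = sum(deviance_samples) / len(deviance_samples)
    p_d = d_bar - deviance_at_mean          # effective number of parameters
    return d_bar + p_d

# Toy example: exponential survival times, posterior samples of the rate.
random.seed(0)
times = [1.2, 0.7, 2.5, 0.3, 1.9]

def deviance(rate: float) -> float:
    loglik = sum(math.log(rate) - rate * t for t in times)
    return -2.0 * loglik

post = [random.gauss(0.8, 0.05) for _ in range(200)]   # stand-in posterior draws
d_samples = [deviance(r) for r in post]
theta_bar = sum(post) / len(post)
print(round(dic(d_samples, deviance(theta_bar)), 2))
```

Lower DIC favours a model; the pD term penalises model complexity, which is what makes the criterion usable for comparing frailty specifications.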

Abstract:

Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
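Two of the approximations named above can be sketched as follows, assuming a first-order Markov form (rho to the power |i−j|) for the true correlation matrix; the truncation rule in `eigen_truncate` (averaging the trailing eigenvalues, which preserves the trace) is one common choice, not necessarily the article's:

```python
import numpy as np

def markov_correlation(n: int, rho: float) -> np.ndarray:
    """First-order Markov correlation matrix: C[i, j] = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def eigen_truncate(c: np.ndarray, k: int) -> np.ndarray:
    """Keep the k leading eigenpairs; replace the trailing eigenvalues with
    their mean so the approximation stays positive definite."""
    vals, vecs = np.linalg.eigh(c)
    vals, vecs = vals[::-1], vecs[:, ::-1]     # descending variance order
    vals = vals.copy()
    vals[k:] = vals[k:].mean()
    return (vecs * vals) @ vecs.T

n = 6
true_c = markov_correlation(n, 0.7)
diag_c = np.eye(n)                              # "uncorrelated" assumption
approx_c = eigen_truncate(true_c, 2)

for name, c in [("diagonal", diag_c), ("eigen-truncated", approx_c)]:
    err = np.linalg.norm(c - true_c) / np.linalg.norm(true_c)
    print(f"{name}: relative error {err:.3f}")
```

Even this crude two-eigenpair approximation is far closer to the true matrix than the diagonal assumption, which is the qualitative point of the article.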

Abstract:

This paper identifies several factors that affect a potential user's willingness to use recycled water for agricultural irrigation. The study is based on the results of a survey carried out among farmers on the island of Crete, Greece. It was found that higher levels of income and education are positively correlated with a respondent's willingness to use recycled water. Income and education are also positively correlated with a potential user's sensitivity to information on the advantages of using non-conventional water resources. Overall, extra information on the advantages of recycled water has a statistically significant impact on reported degrees of willingness to use recycled water.

Abstract:

Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes that combine both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms a system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we will call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
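A minimal sketch of the PCA decomposition described above, run on a synthetic cube; the reshaping convention (spatial pixels as rows, wavelength channels as columns) is an assumption of ours:

```python
import numpy as np

def pca_tomography(cube: np.ndarray):
    """Split a data cube (ny, nx, n_lambda) into eigenvectors ('eigenspectra')
    and tomograms (projections of each spatial pixel on to those eigenvectors)."""
    ny, nx, nl = cube.shape
    x = cube.reshape(ny * nx, nl).astype(float)
    x = x - x.mean(axis=0)                     # centre each wavelength channel
    cov = x.T @ x / (x.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]             # decreasing variance
    vals, vecs = vals[order], vecs[:, order]
    tomograms = (x @ vecs).reshape(ny, nx, nl)
    return vals, vecs, tomograms

rng = np.random.default_rng(1)
cube = rng.normal(size=(4, 5, 6))              # toy cube, not real data
vals, vecs, toms = pca_tomography(cube)
# The eigenvectors are mutually orthogonal, as the text notes.
print(np.allclose(vecs.T @ vecs, np.eye(6)))
```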

Abstract:

Use of geographical information systems (GIS) in inland fisheries has hitherto been essentially restricted to site evaluation for aquaculture development and assessment of limnological changes in time and space in individual water bodies. The present GIS study was conducted on the land-use pattern of the catchments of nine reservoirs in Sri Lanka, for which detailed fishery data, viz. yield, fishing intensity and landing size of major constituent species, together with selected limnological data such as conductivity and chlorophyll-a, were available. Potential statistical relationships (linear, curvilinear, exponential and second-order polynomial) of fish yield (FY, in kg ha⁻¹ yr⁻¹) to different land-use patterns, such as forest cover (FC, in km²) and shrub-land (SL, in km²), either singly or in combination, and/or the ratio of each land type to reservoir area (RA, in km²) and reservoir capacity (RC, in km³), were explored. Highly significant relationships were evident between FY and the ratios of SL and/or FC+SL to RA and/or RC. Similarly, the ratios of the above land-use types to RA and RC were significantly related to limnological features of the reservoirs. The relationships of FY to the parameters obtained in this study were much stronger than the previously reported relationships of FY to the limnological and biological parameters used for yield prediction in tropical and temperate lacustrine waters.

Abstract:

This letter addresses the issue of joint space-time trellis decoding and channel estimation in time-varying fading channels that are spatially and temporally correlated. A recursive space-time receiver which incorporates per-survivor processing (PSP) and Kalman filtering into the Viterbi algorithm is proposed. This approach generalizes existing work to the correlated fading channel case. The channel time-evolution is modeled by a multichannel autoregressive process, and a bank of Kalman filters is used to track the channel variations. Computer simulation results show that a performance close to the maximum likelihood receiver with perfect channel state information (CSI) can be obtained. The effects of the spatial correlation on the performance of a receiver that assumes independent fading channels are examined.
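A stripped-down version of the channel-tracking component can be sketched as a scalar Kalman filter on a first-order autoregressive fading tap, with the survivor's known BPSK symbol acting as the observation matrix; the AR order, parameter values and BPSK assumption are ours for illustration, whereas the letter itself uses a multichannel AR model inside per-survivor processing:

```python
import numpy as np

rng = np.random.default_rng(7)

# AR(1) model of a time-varying fading tap: h[k] = a*h[k-1] + w[k]
a, q, r = 0.98, 1 - 0.98 ** 2, 0.1        # q chosen to keep unit channel power
n = 300
h = np.zeros(n)
for k in range(1, n):
    h[k] = a * h[k - 1] + rng.normal(scale=np.sqrt(q))

s = rng.choice([-1.0, 1.0], size=n)        # known (per-survivor) BPSK symbols
y = h * s + rng.normal(scale=np.sqrt(r), size=n)

# Scalar Kalman filter: the survivor's symbol plays the observation matrix.
h_hat, p = 0.0, 1.0
est = np.zeros(n)
for k in range(n):
    h_pred, p_pred = a * h_hat, a * a * p + q          # predict
    gain = p_pred * s[k] / (s[k] * p_pred * s[k] + r)  # Kalman gain
    h_hat = h_pred + gain * (y[k] - s[k] * h_pred)     # update
    p = (1 - gain * s[k]) * p_pred
    est[k] = h_hat

mse = np.mean((est[50:] - h[50:]) ** 2)
print(f"steady-state tracking MSE: {mse:.4f}")
```

In the letter, a bank of such filters (one per survivor path) supplies channel estimates to the Viterbi metric computation.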

Abstract:

Background: Constraint-based modeling of reconstructed genome-scale metabolic networks has been successfully applied to several microorganisms. In constraint-based modeling, network-based pathways such as extreme pathways and elementary flux modes are defined in order to characterize all allowable phenotypes. However, as the scale of a metabolic network grows, the number of extreme pathways and elementary flux modes increases exponentially. Uniform random sampling alleviates this problem to some extent, allowing the contents of the available phenotype space to be studied. After uniform random sampling, correlated reaction sets can be identified from the dependencies between reactions derived from the sampled phenotypes. In this paper, we study the relationship between extreme pathways and correlated reaction sets.

Results: Correlated reaction sets are identified for the E. coli core, red blood cell and Saccharomyces cerevisiae metabolic networks, respectively. All extreme pathways are enumerated for the former two metabolic networks. For the Saccharomyces cerevisiae metabolic network, because of its large scale, we obtain a set of extreme pathways by sampling the whole extreme pathway space. In most cases, an extreme pathway covers a correlated reaction set in an 'all or none' manner: either all reactions in a correlated reaction set are used by a given extreme pathway, or none are. In rare cases, a correlated reaction set may instead be fully covered by a combination of a few extreme pathways with related function, which may bring redundancy and flexibility that improve the survivability of a cell. In short, extreme pathways show a strongly complementary relationship in their usage of reactions within the same correlated reaction set.

Conclusion: Both extreme pathways and correlated reaction sets are derived from the topology information of metabolic networks. The strong relationship between correlated reaction sets and extreme pathways suggests a possible mechanism: as a controllable unit, an extreme pathway is regulated by its corresponding correlated reaction sets, and a correlated reaction set is further regulated by the organism's regulatory network.
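The identification of correlated reaction sets from sampled fluxes can be sketched as follows; the toy 5-reaction "network" and the perfect-correlation threshold are our own illustration, not one of the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy flux samples for 5 reactions: R0/R1 form a linear pathway (perfectly
# coupled fluxes), R3/R4 likewise (with a stoichiometric factor of 2),
# while R2 varies independently.
n_samples = 500
v0 = rng.uniform(0, 10, n_samples)
v3 = rng.uniform(0, 10, n_samples)
fluxes = np.column_stack([v0, v0, rng.uniform(0, 10, n_samples), v3, 2 * v3])

corr = np.corrcoef(fluxes.T)

def correlated_sets(corr: np.ndarray, tol: float = 1e-9):
    """Greedily group reactions whose sampled fluxes are perfectly correlated."""
    n = corr.shape[0]
    unassigned, sets = set(range(n)), []
    while unassigned:
        i = min(unassigned)
        group = {j for j in unassigned if abs(corr[i, j]) >= 1 - tol}
        sets.append(sorted(group))
        unassigned -= group
    return sets

print(correlated_sets(corr))   # -> [[0, 1], [2], [3, 4]]
```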

Abstract:

This research identifies how the IT function can create agility in existing information systems. Agility is the capability to quickly sense and respond to environmental perturbations. We contrasted perspectives on agility from a widely used industry framework with those of the IS research literature. Beer's Viable System Model (VSM) was a useful meta-level theory to house the agility elements from IS research, and it introduced cybernetic principles to identify the processes required of the IT function. Indeed, our surveys of 70 organizations confirmed that the applied theory correlates better with reported agility than does existing industry best practice.

The research conducted two quantitative surveys to test the applied theory. The first survey mailed a Likert-type questionnaire to the clients of an Australian IT consultancy. The second survey invited international members of professional interest groups to complete a web-based questionnaire. The responses were analyzed using partial-least-squares modeling. The analysis found a positive correlation between the maturity of the IT function processes prescribed by the VSM and the likelihood of agility in existing information systems. We claim our findings generalize to other large organizations in OECD member countries.

The research offers an agility-capability model of the IT function to explain and predict agility in existing information systems. A further contribution is to improve industry ‘best practice’ frameworks by prescribing processes of the IT function to develop in maturity.

Abstract:

This paper presents a novel traffic classification scheme to improve classification performance when few training data are available. In the proposed scheme, traffic flows are described using discretized statistical features, and flow correlation information is modeled by a bag-of-flow (BoF). We solve BoF-based traffic classification in a classifier-combination framework and theoretically analyze the performance benefit. Furthermore, a new BoF-based traffic classification method is proposed to aggregate the naive Bayes (NB) predictions of the correlated flows. We also present an analysis of the prediction-error sensitivity of the aggregation strategies. Finally, a large number of experiments are carried out on two large-scale real-world traffic datasets to evaluate the proposed scheme. The experimental results show that the proposed scheme achieves much better classification performance than existing state-of-the-art traffic classification methods.
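One simple aggregation strategy, combining per-flow naive Bayes posteriors for a bag by summing log-probabilities, can be sketched as below; the paper analyzes several aggregation strategies, and this is only one plausible instance, with made-up posterior values:

```python
import math

def aggregate_bag(flow_probs):
    """Combine per-flow NB posteriors for one bag-of-flow by summing
    log-probabilities (treating flows as conditionally independent given the
    class), then renormalising with the log-sum-exp trick."""
    n_classes = len(flow_probs[0])
    logs = [sum(math.log(p[c]) for p in flow_probs) for c in range(n_classes)]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    z = sum(w)
    return [x / z for x in w]

# Three correlated flows from the same application: each flow alone is
# ambiguous, but the bag-level decision is confident.
bag = [[0.6, 0.4], [0.55, 0.45], [0.7, 0.3]]
post = aggregate_bag(bag)
print([round(p, 3) for p in post])
```

The sharpening effect seen here is why modeling flow correlation as a bag helps when each individual flow's evidence is weak.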

Abstract:

This paper deals with blind separation of spatially correlated signals mixed by an instantaneous system. Taking advantage of the fact that the source signals are accessible in some man-made systems, such as wireless communication systems, we preprocess the source signals in the transmitters with a set of properly designed first-order precoders and then transmit the coded signals. At the receiving side, information about the precoders is utilized to perform signal separation. Compared with existing precoder-based methods, the new method employs only the simplest first-order precoders, which reduces the delay in data transmission and is easier to implement in practical applications.

Abstract:

With the arrival of the big data era, Internet traffic is growing exponentially. A wide variety of applications arise on the Internet, and traffic classification is introduced to help people manage the massive applications on the Internet for security monitoring and quality-of-service purposes. A large number of Machine Learning (ML) algorithms have been introduced to deal with traffic classification. A significant challenge to classification performance comes from the imbalanced distribution of data in traffic classification systems. In this paper, we propose an Optimised Distance-based Nearest Neighbor (ODNN) classifier, which has the capability of improving the classification performance on imbalanced traffic data. We analyze the proposed ODNN approach and its performance benefit from both theoretical and empirical perspectives. A large number of experiments were carried out on a real-world traffic dataset. The results show that the performance on the "small classes" can be improved significantly, even with only a small amount of training data, while the performance on the "large classes" remains stable.
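The abstract does not give ODNN's exact formulation, but the general idea of distance-based weighting in nearest-neighbour voting, which lets a nearby "small class" sample outvote a more numerous but distant majority, can be sketched as:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Distance-weighted k-NN: each neighbour votes with weight 1/(d + eps),
    softening the bias of plain majority voting against small classes."""
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

# Imbalanced toy set: the "small" class has one sample close to the query.
train = [((0.0, 0.0), "large"), ((0.2, 0.1), "large"),
         ((0.3, 0.0), "large"), ((1.0, 1.0), "small")]
print(weighted_knn(train, (0.95, 0.95), k=3))   # -> small
```

With unweighted majority voting over the same three neighbours, the "large" class would win 2-to-1; distance weighting reverses the decision, which is the behaviour the paper targets for minority classes.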