876 results for COMPUTER SCIENCE, THEORY


Relevance:

90.00%

Publisher:

Abstract:

Multiplication and comultiplication of beliefs represent a generalisation of multiplication and comultiplication of probabilities as well as of binary logic AND and OR. Our approach follows that of subjective logic, where belief functions are expressed as opinions that are interpreted as being equivalent to beta probability distributions. We compare different types of opinion product and coproduct, and show that they represent very good approximations of the analytical product and coproduct of beta probability distributions. We also define division and codivision of opinions, and compare our framework with other logic frameworks for combining uncertain propositions. (C) 2004 Elsevier Inc. All rights reserved.
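
As a concrete illustration of the opinion/Beta correspondence this abstract relies on, the sketch below maps a binomial opinion (belief, disbelief, uncertainty, base rate) to Beta parameters and back, assuming the standard non-informative prior weight W = 2; the function names and the final moment check are illustrative, not the paper's exact constructions.

```python
# Minimal sketch of the binomial-opinion <-> Beta(alpha, beta) correspondence
# used in subjective logic, assuming a non-informative prior weight W = 2.
# Function names are illustrative; they are not taken from the paper.

W = 2.0  # non-informative prior weight

def opinion_to_beta(b, d, u, a):
    """Map an opinion (belief, disbelief, uncertainty, base rate) to Beta parameters."""
    assert abs(b + d + u - 1.0) < 1e-9 and u > 0
    r = W * b / u          # pseudo-count of positive evidence
    s = W * d / u          # pseudo-count of negative evidence
    return r + a * W, s + (1.0 - a) * W    # (alpha, beta)

def projected_probability(b, d, u, a):
    """Probability expectation of an opinion (equals the mean of its Beta distribution)."""
    return b + a * u

if __name__ == "__main__":
    x = (0.6, 0.2, 0.2, 0.5)
    y = (0.3, 0.4, 0.3, 0.5)
    print("Beta(x):", opinion_to_beta(*x))
    # For independent propositions, the product opinion's projected probability
    # should approximate the product of the individual expectations:
    print("p(x) * p(y) =", projected_probability(*x) * projected_probability(*y))
```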

Relevance:

90.00%

Publisher:

Abstract:

Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement of complex data models is difficult because there has been no way to define constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints, and these constraints lead to a set of design guidelines that preserve them under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes little with them. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints that must be implemented, and therefore refined, by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.
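
The paper's categorical machinery is not reproduced in the abstract; as a toy illustration only, the sketch below checks one kind of wide-scale constraint, a commuting square over finite sets, which is the sort of condition a refinement would be required to preserve. All objects and maps are hypothetical.

```python
# Toy sketch: checking a commutative-square constraint over finite sets.
# A wide-scale constraint of the kind category theory lets us state is
# "these two composite maps agree"; a refinement must keep this true.
# All objects and arrows below are hypothetical examples.

orders    = ["o1", "o2"]
customers = {"o1": "alice", "o2": "bob"}      # Order -> Customer
regions_c = {"alice": "EU", "bob": "US"}      # Customer -> Region
sites     = {"o1": "web", "o2": "store"}      # Order -> SalesChannel
regions_s = {"web": "EU", "store": "US"}      # SalesChannel -> Region

def commutes(domain, f, g, h, k):
    """Does g after f equal k after h on every element of the domain?"""
    return all(g[f[x]] == k[h[x]] for x in domain)

print(commutes(orders, customers, regions_c, sites, regions_s))  # True here
```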

Relevance:

90.00%

Publisher:

Abstract:

Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
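
A minimal sketch of the two graph operations named above, using networkx on a hypothetical component graph: nodes not on any path from the external input to the protected output are discarded, and a minimum node cut identifies the smallest set of components warranting in-depth evaluation. The device graph is invented for illustration.

```python
# Sketch of the two graph-theoretic steps described above, on a hypothetical
# directed component graph (edges = possible information flow).
import networkx as nx

G = nx.DiGraph([
    ("input", "crypto"), ("crypto", "radio"), ("input", "filter"),
    ("filter", "crypto"), ("display", "radio"),   # "display" only feeds the radio
    ("radio", "antenna"),
])
src, dst = "input", "antenna"

# 1. Keep only components lying on some end-to-end path from src to dst.
on_path = {src, dst} | (nx.descendants(G, src) & nx.ancestors(G, dst))
H = G.subgraph(on_path)
print("components to evaluate:", sorted(on_path))

# 2. Smallest set of components whose removal separates src from dst:
#    these are the ones that need in-depth evaluation.
print("minimal cutset:", nx.minimum_node_cut(H, src, dst))
```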

Relevance:

90.00%

Publisher:

Abstract:

This correspondence considers block detection for blind wireless digital transmission. At high signal-to-noise ratio (SNR), block detection errors are primarily due to the received sequence having multiple possible decoded sequences with the same likelihood. We derive analytic expressions, written in terms of a Dedekind zeta function, for the probability of detection ambiguity in the zero-noise case with large constellations. Expressions are also provided for finite constellations; these can be evaluated efficiently, independently of the block length. Simulations demonstrate that the analytically derived error floors exist at high SNR.
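
The zeta-function expressions themselves are not given in the abstract; as a hedged illustration of the error-floor mechanism only, the Monte Carlo sketch below estimates the zero-noise ambiguity probability under an assumed simple model (a single unknown complex gain and square-QAM symbols), where a block is ambiguous if some scaling other than a whole-constellation symmetry maps it onto another valid block. This is not the paper's model or derivation.

```python
# Monte Carlo sketch of zero-noise detection ambiguity under an assumed simple
# model: y = h * s, one unknown complex gain h, i.i.d. symbols from square QAM.
# A block is "ambiguous" if some scaling other than a whole-constellation
# symmetry maps it onto another valid block.
import itertools, random

def qam(M=16):
    k = int(M ** 0.5)
    return [complex(2*i - k + 1, 2*q - k + 1) for i in range(k) for q in range(k)]

def is_ambiguous(block, const):
    key = lambda z: (round(z.real, 6), round(z.imag, 6))
    S = {key(p) for p in const}
    in_S = lambda z: key(z) in S
    # Scalings that map the whole constellation onto itself (inherent ambiguity).
    symmetries = {key(a / b) for a, b in itertools.product(const, repeat=2)
                  if all(in_S((a / b) * p) for p in const)}
    for target in const:               # a scaling must send block[0] to some symbol
        c = target / block[0]
        if key(c) not in symmetries and all(in_S(c * s) for s in block):
            return True
    return False

const, n, trials = qam(16), 8, 2000
hits = sum(is_ambiguous([random.choice(const) for _ in range(n)], const)
           for _ in range(trials))
print("estimated ambiguity probability:", hits / trials)
```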

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and the centre of a local partition in the individual space is used as the indexing key, where similarity values from different features are distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the centre of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on indexing keys. To address the curse of dimensionality in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on local partition centres. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly outperforms multiple indices on those features, that our LBS method outperforms sequential scan in high-dimensional spaces by an order of magnitude, and that Certainty Factor and Dempster-Shafer theory perform best at combining the multiple similarities from the corresponding features.
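
A simplified sketch of the two ideas, with hypothetical names, scales and data: each image's indexing key is its similarity to the nearest local partition centre, scaled differently per feature space so keys do not collide, and an LBS-style bit signature is compared with a fast common-bit count. This illustrates the mechanism, not the paper's implementation.

```python
# Simplified sketch of an MSI-style key and an LBS-style bit signature.
# Names, scales and data are hypothetical illustrations of the mechanism.
import numpy as np

def msi_key(vec, centers, scale):
    """Indexing key = scale * similarity (here 1/(1+distance)) to the nearest centre."""
    d = np.linalg.norm(centers - vec, axis=1)
    return scale * (1.0 / (1.0 + d.min()))        # one scalar key per feature space

def lbs_signature(vec, center):
    """One bit per dimension: is this coordinate above the partition centre's?"""
    bits = (vec > center).astype(np.uint8)
    return np.packbits(bits)

def common_bits(sig_a, sig_b):
    """Count of agreeing bits -- only bit operations are needed."""
    return int(np.unpackbits(~(sig_a ^ sig_b)).sum())

rng = np.random.default_rng(0)
centers_visual = rng.random((4, 32))              # local partition centres
query, other = rng.random(32), rng.random(32)

print("MSI key (visual, scale=10):", msi_key(query, centers_visual, scale=10.0))
print("common bits with another image:",
      common_bits(lbs_signature(query, centers_visual[0]),
                  lbs_signature(other, centers_visual[0])))
```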

Relevance:

90.00%

Publisher:

Abstract:

The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
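
For reference, the classical Slepian-Wolf result that this abstract contrasts with is the standard rate region for separately encoding two correlated sources X and Y (a textbook statement, not a formula from the paper):

```latex
% Classical Slepian-Wolf rate region for distributed compression of (X, Y):
\begin{align*}
  R_X &\ge H(X \mid Y), \\
  R_Y &\ge H(Y \mid X), \\
  R_X + R_Y &\ge H(X, Y).
\end{align*}
% The rate sum can thus be as small as the joint entropy H(X, Y); the quantum
% lower bound discussed above shows this is not always attainable for
% irreducible sources of product states.
```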

Relevance:

90.00%

Publisher:

Abstract:

High-level language program compilation strategies can be proven correct by modelling the process as a series of refinement steps from source code to a machine-level description. We show how this can be done for programs containing recursively-defined procedures in the well-established predicate transformer semantics for refinement. To do so the formalism is extended with an abstraction of the way stack frames are created at run time for procedure parameters and variables.
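
For context, the refinement relation assumed by the predicate-transformer setting mentioned above is the standard weakest-precondition ordering (textbook material, not a formula taken from the paper):

```latex
% Standard refinement ordering in weakest-precondition semantics:
% S is refined by T iff T establishes every postcondition that S does.
S \sqsubseteq T \;\iff\; \forall Q.\ \big(\mathrm{wp}(S, Q) \Rightarrow \mathrm{wp}(T, Q)\big)

% Sequential composition, the rule under which successive compilation steps compose:
\mathrm{wp}(S ; T,\, Q) \;=\; \mathrm{wp}\big(S,\ \mathrm{wp}(T, Q)\big)
```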

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a scientific and technical description of the modelling framework and the main results of modelling the long-term average sediment delivery at hillslope to medium catchment scales over the entire Murray Darling Basin (MDB). A theoretical development that relates long-term averaged sediment delivery to the statistics of rainfall and catchment parameters is presented. The derived flood frequency approach was adapted to investigate the problem of regionalization of the sediment delivery ratio (SDR) across the Basin. The SDR, a measure of catchment response to the upland erosion rate, was modelled by two lumped linear stores arranged in series: hillslope transport to the nearest streams, and flow routing in the channel network. The theory shows that the ratio of catchment sediment residence time (SRT) to average effective rainfall duration is the most important control on the sediment delivery processes. In this study, catchment SRTs were estimated as the travel time for overland flow multiplied by an enlargement factor that is a function of particle size. Rainfall intensity and effective duration statistics were regionalized using long-term measurements from 195 pluviograph sites within and around the Basin. Finally, the model was implemented across the MDB using spatially distributed soil, vegetation, topographic and land use properties in a Geographic Information System (GIS) environment. The results predict strong variations in SDR, from close to 0 in floodplains to 70% in the eastern uplands of the Basin. (c) 2005 Elsevier Ltd. All rights reserved.
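
The paper's derived expressions are not reproduced in the abstract; the sketch below only illustrates the stated structure (two linear stores in series, with the SRT obtained from overland-flow travel time and a particle-size enlargement factor), using an assumed attenuation of the form 1/(1 + SRT/duration) per store. The functional form and all numbers are assumptions for illustration, not the paper's result.

```python
# Illustrative sketch only: SDR from two linear stores in series, controlled by
# the ratio of sediment residence time (SRT) to effective rainfall duration.
# The 1/(1 + SRT/duration) attenuation per store is an assumed form, not the
# paper's derived expression; all parameter values are hypothetical.

def sediment_residence_time(overland_travel_time_h, enlargement_factor):
    """SRT = overland-flow travel time scaled by a particle-size enlargement factor."""
    return overland_travel_time_h * enlargement_factor

def sdr(srt_hillslope_h, srt_channel_h, rain_duration_h):
    """Delivery ratio through the hillslope store then the channel store (in series)."""
    att = lambda srt: 1.0 / (1.0 + srt / rain_duration_h)
    return att(srt_hillslope_h) * att(srt_channel_h)

srt_h = sediment_residence_time(overland_travel_time_h=0.5, enlargement_factor=20.0)
print("illustrative SDR =", round(sdr(srt_h, srt_channel_h=3.0, rain_duration_h=2.0), 3))
```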

Relevance:

90.00%

Publisher:

Abstract:

The main objective of this study was to analyse the relationship between Transformational Leadership, Knowledge Conversion and Organizational Effectiveness. Well-established concepts on these themes were taken as theoretical premises, together with recent research carried out in other countries and organizational contexts. On this basis, the potential for studying a model relating these three concepts was identified. It is assumed that organizations that seek Competitive Advantage and adopt the Knowledge-Based View can differentiate themselves from their competitors. In this context, knowledge gains greater prominence and plays a leading role in these organizations. Creating knowledge through their employees thus becomes one of the challenges for these organizations, as it suggests improvement of their Economic, Social, Systemic and Political indicators, which is what defines Organizational Effectiveness. The modes of knowledge conversion in organizations are therefore relevant, since knowledge is created and converted through the interaction of the existing knowledge of their employees. This knowledge conversion, or SECI model, has four modes: Socialization, Externalization, Combination and Internalization. From this perspective, leadership in organizations emerges as an element capable of influencing employees, lending greater dynamism to the SECI model of knowledge conversion. Transformational Leadership is then identified as having characteristics that can influence employees, and it is understood that the relationship between Transformational Leadership and Knowledge Conversion may positively influence the indicators of Organizational Effectiveness. This research therefore sought to analyse a model exploring the relationship between Transformational Leadership, Knowledge Conversion (SECI) and Organizational Effectiveness. The research was quantitative, with data collected through a survey, yielding a total of 230 valid respondents from different organizations. The data collection instrument consisted of statements related to the relationship model under study, totalling 44 items. The respondent profile was concentrated between 30 and 39 years of age, with a predominance of private organizations and of IT/Telecom, Teaching and Human Resources departments, respectively. Data were analysed using Exploratory Factor Analysis and Structural Equation Modelling via Partial Least Squares Path Modeling (PLS-PM). As a result of the analysis, the hypotheses were confirmed, leading to the conclusion that Transformational Leadership positively influences the modes of Knowledge Conversion, and that Knowledge Conversion positively influences Organizational Effectiveness. It was also concluded that respondents' perceptions of the research model did not differ between those who hold leadership positions and those who do not.

Relevance:

90.00%

Publisher:

Abstract:

We propose a method to determine the critical noise level for decoding Gallager type low density parity check error correcting codes. The method is based on the magnetization enumerator (M), rather than on the weight enumerator (W) presented recently in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
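
For context, the weight enumerator referred to above is conventionally summarised by its (annealed) exponent; the definition below is the standard one from the coding literature and is not necessarily the paper's notation:

```latex
% Annealed weight-enumerator exponent of a code ensemble of block length N,
% where \langle \mathcal{N}(w) \rangle is the expected number of codewords
% of weight w = \omega N:
\mathcal{W}(\omega) \;=\; \lim_{N \to \infty} \frac{1}{N}\,
  \log \big\langle \mathcal{N}(\omega N) \big\rangle
```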

Relevance:

90.00%

Publisher:

Abstract:

We present the prototype tool CADS* for the computer-aided development of an important class of self-* systems, namely systems whose components can be modelled as Markov chains. Given a Markov chain representation of the IT components to be included into a self-* system, CADS* automates or aids (a) the development of the artifacts necessary to build the self-* system; and (b) their integration into a fully-operational self-* solution. This is achieved through a combination of formal software development techniques including model transformation, model-driven code generation and dynamic software reconfiguration.
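
As background for the kind of input CADS* consumes, the sketch below builds a toy discrete-time Markov chain for a hypothetical IT component and computes its stationary distribution with numpy; it illustrates the modelling input only and is not part of the tool.

```python
# Toy illustration of a component modelled as a discrete-time Markov chain
# (states: OK, DEGRADED, FAILED) and its stationary distribution.
# The transition probabilities are hypothetical; CADS* itself is not shown.
import numpy as np

P = np.array([
    [0.95, 0.04, 0.01],   # OK       -> OK, DEGRADED, FAILED
    [0.60, 0.30, 0.10],   # DEGRADED -> ...
    [0.80, 0.00, 0.20],   # FAILED   -> ... (repair back to OK)
])

# The stationary distribution pi solves pi P = pi with sum(pi) = 1:
# take the left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print(dict(zip(["OK", "DEGRADED", "FAILED"], np.round(pi, 4))))
```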

Relevance:

90.00%

Publisher:

Abstract:

Collaborative working with the aid of computers is increasing rapidly due to the widespread use of computer networks, the geographic mobility of people, and small, powerful personal computers. For the past ten years, research has been conducted into this use of computing technology from a wide variety of perspectives and for a wide range of uses. This thesis adds to that previous work by examining the area of collaborative writing amongst groups of people. The research brings together a number of disciplines: sociology, for examining group dynamics; psychology, for understanding individual writing and learning processes; and computer science, for database, networking, and programming theory. The project initially looks at groups and how they form, communicate, and work together, before progressing to writing and the cognitive processes it entails for both composition and retrieval. The thesis then details a set of issues which need to be addressed in a collaborative writing system. These issues are addressed by developing a model for collaborative writing that details an iterative process of co-ordination, writing and annotation, consolidation, and negotiation, based on a structured but extensible document model. Implementation issues for a collaborative application are then described, along with various methods of overcoming them. Finally, the design and implementation of a collaborative writing system, named Collaborwriter, is described in detail, concluding with some preliminary results from initial user trials and testing.
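
As a toy illustration of the kind of structured, extensible, annotatable document model described above (all class and field names are hypothetical, not Collaborwriter's actual design):

```python
# Toy sketch of a structured, annotatable document model; hypothetical names,
# not Collaborwriter's design.  Comments map the fields to the model's phases.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    author: str
    text: str

@dataclass
class Section:
    title: str
    body: str = ""
    locked_by: Optional[str] = None          # co-ordination: one writer at a time
    annotations: List[Annotation] = field(default_factory=list)
    subsections: List["Section"] = field(default_factory=list)

doc = Section("Report", subsections=[Section("Intro"), Section("Method")])
doc.subsections[0].locked_by = "alice"                         # co-ordination
doc.subsections[0].body = "Draft text..."                      # writing
doc.subsections[0].annotations.append(Annotation("bob", "Needs a citation"))  # annotation
```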

Relevance:

90.00%

Publisher:

Abstract:

This paper summarizes the scientific work presented at the 32nd European Conference on Information Retrieval. It demonstrates that information retrieval (IR) as a research area continues to thrive, with progress being made in three complementary sub-fields: IR theory and formal methods, together with indexing and query representation issues; Web IR as a primary application area; and research into evaluation methods and metrics. It is the combination of these areas that gives IR its solid scientific foundations. The paper also illustrates that significant progress has been made in other areas of IR. The keynote speakers addressed three such fields: social search engines using personalization and recommendation technologies, the renewed interest in applying natural language processing to IR, and multimedia IR as another fast-growing area.