932 results for Information dispersal algorithm


Relevance: 20.00%

Abstract:

Conference: 16th International Symposium on Wireless Personal Multimedia Communications (WPMC), June 24-27, 2013

Relevance: 20.00%

Abstract:

Liver steatosis is mainly a textural abnormality of the hepatic parenchyma due to fat accumulation in the hepatic vesicles. Today, assessment is performed subjectively by visual inspection. Here, a classifier based on features extracted from ultrasound (US) images is described for the automatic diagnosis of this pathology. The proposed algorithm estimates the original ultrasound radio-frequency (RF) envelope signal, from which the noiseless anatomic information and the textural information encoded in the speckle noise are extracted. The features characterizing the textural information are the coefficients of the first-order autoregressive model that describes the speckle field. A binary Bayesian classifier was implemented and the Bayes factor was calculated. The classification revealed an overall accuracy of 100%. The Bayes factor can be helpful in the graphical display of the quantitative results for diagnosis purposes.
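
The pipeline described above, first-order autoregressive texture coefficients plus a binary Bayes factor, can be sketched roughly as follows. This is an illustrative simplification (global mean removal, horizontal/vertical lag-1 coefficients, isotropic Gaussian class-conditional densities), not the paper's exact estimator:

```python
import numpy as np

def ar1_coefficients(field):
    """Least-squares AR(1) coefficients (horizontal and vertical lags)
    of a 2-D speckle field, used here as texture features."""
    x = field - field.mean()
    def ar_coef(a, b):
        # regression of each pixel on its lag-1 neighbour
        return float(np.sum(a * b) / np.sum(b * b))
    horiz = ar_coef(x[:, 1:], x[:, :-1])
    vert = ar_coef(x[1:, :], x[:-1, :])
    return np.array([horiz, vert])

def bayes_factor(features, mean0, mean1, var):
    """Bayes factor for class 1 (steatosis) vs class 0 (normal),
    assuming isotropic Gaussian class-conditional densities with
    common variance `var` (an assumed, simplified model)."""
    d0 = np.sum((features - mean0) ** 2)
    d1 = np.sum((features - mean1) ** 2)
    return float(np.exp((d0 - d1) / (2.0 * var)))
```

A Bayes factor above 1 favours the steatosis class; plotting it per image gives the kind of graphical diagnostic display the abstract mentions.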

Relevance: 20.00%

Abstract:

Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 up to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is used to generate visualization maps, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps; the generated charts are therefore a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
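
The two measures compared above can be illustrated in a few lines. Below is the standard maximum-likelihood Gutenberg-Richter b-value estimate (Aki's estimator) and a simple histogram-based mutual information between two samples; both are generic sketches, not the paper's exact regional procedures:

```python
import numpy as np

def gutenberg_richter_b(magnitudes, m_min):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator)
    from magnitudes at or above the completeness magnitude m_min."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

def mutual_information(x, y, bins=10):
    """Mutual information (in nats) between two samples, estimated
    from a 2-D histogram; a crude plug-in estimator."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of x
    py = p.sum(axis=0, keepdims=True)   # marginal of y
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

With either measure computed per region, a pairwise similarity matrix can then be fed to a clustering algorithm to produce the visualization maps the abstract describes.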

Relevance: 20.00%

Abstract:

Research on the problem of feature selection for clustering continues to develop. This is a challenging task, mainly due to the absence of class labels to guide the search for relevant features. Categorical feature selection for clustering has rarely been addressed in the literature, with most proposed approaches focusing on numerical data. In this work, we propose an approach to simultaneously cluster categorical data and select a subset of relevant features. Our approach is based on a modification of a finite mixture model (of multinomial distributions), in which a set of latent variables indicates the relevance of each feature. To estimate the model parameters, we implement a variant of the expectation-maximization algorithm that simultaneously selects the subset of relevant features, using a minimum message length criterion. The proposed approach compares favourably with two baseline methods: a filter based on an entropy measure and a wrapper based on mutual information. The results obtained on synthetic data illustrate the ability of the proposed expectation-maximization method to recover the ground truth. An application to real data from official statistics shows its usefulness.
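
The entropy-based filter used as a baseline above can be sketched as follows. This is a minimal illustration under our own assumptions (rank features by Shannon entropy; near-constant features score low), not the paper's implementation:

```python
import numpy as np
from collections import Counter

def entropy(column):
    """Shannon entropy (nats) of a categorical feature."""
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def rank_features_by_entropy(data):
    """Rank categorical features by entropy, highest first:
    near-constant features (entropy close to 0) carry little
    information for clustering and end up last."""
    scores = [entropy(col) for col in zip(*data)]
    return sorted(range(len(scores)), key=lambda j: -scores[j])
```

A wrapper baseline, by contrast, would re-run the clustering for each candidate feature subset and score the subsets, here with mutual information against the induced partition, which is far more expensive than this one-pass filter.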

Relevance: 20.00%

Abstract:

Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are performed simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion to select the number of clusters (Wallace and Boulton, 1968). For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on integrating model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to recover the true number of clusters while outperforming BIC and ICL in execution speed, which is especially relevant when dealing with large data sets.
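
The ingredient of the Figueiredo and Jain (2002) approach that makes simultaneous selection possible is its M-step weight update, which discounts each component's responsibility mass by half its parameter count, so under-supported components are driven to zero weight and annihilated. A minimal sketch of that single update (not the full EM algorithm):

```python
import numpy as np

def fj_weight_update(resp_sums, n_params_per_component):
    """Figueiredo-Jain style M-step update of mixture weights.
    resp_sums[m] is the summed responsibility of component m over all
    data points; components whose mass falls below half their parameter
    count get weight exactly 0 and are removed from the mixture."""
    adj = np.maximum(resp_sums - n_params_per_component / 2.0, 0.0)
    total = adj.sum()
    if total == 0:
        raise ValueError("all components annihilated")
    return adj / total
```

Starting EM with many components and letting this update prune them yields the number of clusters as a by-product of a single run, instead of fitting one model per candidate number and comparing BIC/ICL scores afterwards.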

Relevance: 20.00%

Abstract:

Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. To estimate the number of clusters, one often resorts to information criteria such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data: an MML criterion selects the number of clusters, and a variant of the EM algorithm estimates the model parameters, seamlessly integrating model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach over the criteria referred to above is its speed of execution, which is especially relevant when dealing with large data sets.

Relevance: 20.00%

Abstract:

The discussion and analysis of the diverse outreach activities in this article provide guidance and suggestions for academic librarians interested in outreach and community engagement of any scale and nature. Cases are drawn from a wide spectrum and are particularly strong in the setting of large academic libraries, special collections and programming for multicultural populations.

The aim of this study is to present the results of research carried out on the needs, demand and consumption of European Union information by users of European Documentation Centres (EDCs). A quantitative methodology was chosen, based on a questionnaire with 24 items. This questionnaire was distributed within the EDC of Salamanca, Spain, and the EDC of Porto, Portugal, during specific time intervals between 2010 and 2011. We examined the level of EU information that EDC users possess, and identified the factors that facilitate or hinder access to EU information, the topics most in demand, and the types of documents consulted. We also analysed how consumers of European information use databases and how they behave during consultation. Although the sample was small and therefore of limited statistical significance, it is a faithful reflection of the scarcity of visits made to EDCs. This study can be of use to managers of EDCs, providing them with better knowledge of the information needs and demands of their users; ultimately, this should lead to improvements in the services offered. The study lies within a frame of research scarcely addressed in the specialized scholarly literature: European Union information.

Relevance: 20.00%

Abstract:

This paper suggests that the thought of the North American critical theorist James W. Carey provides a relevant perspective on communication and technology. Having as background American social pragmatism and progressive thinkers of the beginning of the 20th century (such as Dewey, Mead, Cooley, and Park), Carey built a perspective that brought together the political economy of Harold A. Innis and the social criticism of David Riesman and Charles W. Mills, and incorporated Marxist topics such as commodification and sociocultural domination. The main goal of this paper is to explore the connection established by Carey between modern technological communication and what he called the "transmissive model", a model which not only reduces the symbolic process of communication to instrumentalization and to information delivery, but also converges politically with capitalism as well as with power, control and expansionist goals. Conceiving communication as a process that creates symbolic and cultural systems, in which and through which social life takes place, Carey gives equal emphasis to the incorporation processes of communication. If symbolic forms and culture are ways of conditioning action, they are also influenced by technological and economic materializations of symbolic systems, and by other conditioning structures. In Carey's view, communication is never a disembodied force; rather, it is a set of practices in which conceptions, techniques and social relations co-exist. These practices configure reality or, alternatively, can refute, transform and celebrate it. Exhibiting a sensitivity favourable to the historical understanding of communication, media and information technologies, one of the issues Carey explored most was the history of the telegraph as a harbinger of the Internet, of its problems and contradictions.
For Carey, the Internet was the contemporary heir of the communications revolution triggered by the prototype of transmission technologies, namely the telegraph in the 19th century. In the telegraph Carey saw the prototype of many subsequent commercial empires based on science and technology, a pioneer model for complex business management, an example of conflict of interest over the control of patents, an inducer of changes both in language and in structures of knowledge, and a promoter of a futurist and utopian view of information technologies. After a brief approach to Carey's communication theory, this paper focuses on his seminal essay "Technology and Ideology: The Case of the Telegraph", bearing in mind the prospect of the communication revolution introduced by the Internet. We maintain that this essay has seminal relevance for critically studying the information society. Our reading of it highlights the reach, as well as the problems, of an approach which conceives the innovation of the telegraph as a metaphor for all innovations, announcing the modern stage of history and determining to this day the major lines of development in modern communication systems.

Relevance: 20.00%

Abstract:

Currently, job-shop production systems are very common in SMEs (small and medium-sized enterprises). These companies work to order, producing a wide variety of models in small quantities. Delivery deadlines are a highly important factor, since customers demand a quality product delivered on time. This work aims to create a production scheduling tool for the sewing section, using real company data, for a job-shop layout with multi-operation machines (multi-purpose-machines job shop). The main conclusions and prospects for future work are gathered at the end. The results obtained show that the developed algorithm, based on the Giffler & Thompson algorithm, computes the scheduling/balancing of the sewing section quickly and with great precision. With the tool created, the company optimizes the scheduling of the sewing section and provides important information to production management, enabling improved company planning.
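
The Giffler & Thompson construction underlying such a tool can be sketched as follows, for the classic job shop (each operation needs one specific machine) with shortest-processing-time tie-breaking. The multi-purpose-machine extension used in the company's tool is not covered by this sketch:

```python
def giffler_thompson(jobs):
    """Giffler & Thompson active-schedule construction. `jobs` is a
    list of operation sequences, each operation a (machine, duration)
    pair. Conflicts are resolved by shortest processing time.
    Returns {(job, op_index): start_time} and the makespan."""
    n = len(jobs)
    next_op = [0] * n      # index of next unscheduled op per job
    job_ready = [0] * n    # earliest start time per job
    mach_ready = {}        # earliest start time per machine
    starts = {}
    remaining = sum(len(j) for j in jobs)
    while remaining:
        # earliest completion time of every schedulable operation
        cands = []
        for j in range(n):
            if next_op[j] < len(jobs[j]):
                m, d = jobs[j][next_op[j]]
                s = max(job_ready[j], mach_ready.get(m, 0))
                cands.append((s + d, s, j, m, d))
        c_star, _, _, m_star, _ = min(cands)
        # conflict set: ops on m_star that could start before c_star
        conflict = [c for c in cands if c[3] == m_star and c[1] < c_star]
        # tie-break within the conflict set: shortest processing time
        _, s, j, m, d = min(conflict, key=lambda c: c[4])
        starts[(j, next_op[j])] = s
        job_ready[j] = s + d
        mach_ready[m] = s + d
        next_op[j] += 1
        remaining -= 1
    return starts, max(job_ready)
```

Different tie-breaking rules (earliest due date, most work remaining, ...) plug into the same skeleton, which is what makes the algorithm a practical base for a shop-floor scheduling tool.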

Relevance: 20.00%

Abstract:

Critical infrastructures have become more vulnerable to attacks from adversaries as SCADA systems are connected to the Internet. The open standards for SCADA communications make it very easy for attackers to gain in-depth knowledge about the working and operations of SCADA networks. A number of Internet SCADA security issues have been raised that compromise the authenticity, confidentiality, integrity and non-repudiation of information transferred between SCADA components. This paper presents an integration of the Cross Crypto Scheme Cipher to secure communications for SCADA components. The proposed scheme integrates the best features of both symmetric and asymmetric encryption techniques. It also utilizes the MD5 hashing algorithm to ensure the integrity of the information being transmitted.
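
The encrypt-then-verify flow with an MD5 integrity tag can be sketched as follows. This is a toy illustration, not the paper's scheme: a XOR keystream stands in for the real symmetric cipher, the asymmetric half of the hybrid (encrypting the symmetric key with the receiver's public key) is omitted, and MD5 is kept only because the paper uses it; it is considered broken for security purposes today:

```python
import hashlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (repeating-key XOR), a placeholder for a
    real cipher such as AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def protect(message: bytes, sym_key: bytes):
    """Encrypt the message and compute an MD5 digest of the plaintext,
    mirroring the paper's use of MD5 for integrity checking."""
    digest = hashlib.md5(message).hexdigest()
    return xor_stream(message, sym_key), digest

def verify(ciphertext: bytes, digest: str, sym_key: bytes) -> bytes:
    """Decrypt and check integrity; raises on a tampered message."""
    plain = xor_stream(ciphertext, sym_key)
    if hashlib.md5(plain).hexdigest() != digest:
        raise ValueError("integrity check failed")
    return plain
```

In the hybrid ("cross crypto") arrangement the abstract describes, only `sym_key` would travel under asymmetric encryption, while bulk SCADA traffic uses the fast symmetric path.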

Relevance: 20.00%

Abstract:

Dissertation submitted to obtain the Master's degree in Informatics and Computer Engineering

Relevance: 20.00%

Abstract:

Aim of the study: to compare the performance of the Pencil Beam Convolution (PBC) and the Analytical Anisotropic Algorithm (AAA) in 3D conformal radiotherapy treatment planning for breast tumours.

Relevance: 20.00%

Abstract:

To avoid additional hardware deployment, indoor localization systems have to be designed so that they rely only on existing infrastructure. Besides processing measurements between nodes, the localization procedure can incorporate all available environment information. To enhance the performance of Wi-Fi based localization systems, the innovative solution presented in this paper also considers negative information. An indoor tracking method inspired by Kalman filtering is also proposed.
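
A minimal Kalman-style smoother for a sequence of Wi-Fi position fixes might look like the sketch below (1-D, static motion model, hand-picked noise variances `q` and `r`); the paper's tracking method, including its use of negative information, is richer than this:

```python
def kalman_track(measurements, q=0.5, r=4.0):
    """Minimal 1-D Kalman filter smoothing noisy position estimates
    (e.g., Wi-Fi fingerprinting fixes). q is the process variance,
    r the measurement variance; returns the filtered positions."""
    x, p = measurements[0], r          # initialize from the first fix
    out = [x]
    for z in measurements[1:]:
        p = p + q                      # predict (static motion model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct with the new fix
        p = (1 - k) * p
        out.append(x)
    return out
```

Negative information, e.g. an access point that should be audible at a hypothesized position but is not observed, would enter such a filter as an extra measurement update that penalizes inconsistent position hypotheses.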

Relevance: 20.00%

Abstract:

This paper presents some results of a survey on access to European information among a group of 234 users of 55 European Documentation Centres (EDCs) from 21 European Union (EU) Member States. The findings of a questionnaire administered to 88 EDC managers from 26 EU Member States are also analysed. Two points of view are compared regarding the reasons for accessing European information, the aspects valued during that access, and the use of European databases.