986 results for "message processing"
Abstract:
This paper describes JERIM-320, a new 320-bit hash function for ensuring message integrity, and details a comparison with popular hash functions of similar design. JERIM-320 and FORK-256 operate on four parallel lines of message processing, while RIPEMD-320 operates on two parallel lines. Popular hash functions such as MD5 and SHA-1 use serial successive iteration to build their compression functions and are hence less secure. The parallel branches help JERIM-320 achieve a higher level of security through multiple iterations and processing of the message blocks. The focus of this work is to demonstrate the ability of JERIM-320 to ensure the integrity of messages to a higher degree, to suit fast-growing Internet applications.
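The multi-line structure described above can be sketched in a few lines of code. This is a toy illustration of a four-parallel-line compression function, not the real JERIM-320 round functions: the constants, mixing steps, and function names are invented solely to show the data flow of several independent lines whose outputs are cross-combined.

```python
import struct

def line_mix(state, block_words, constant):
    """One 'line' of processing: iterate over the message words, mixing
    each into a running 32-bit state with XOR, addition, and rotation."""
    s = state
    for w in block_words:
        s = ((s ^ w) + constant) & 0xFFFFFFFF
        s = ((s << 7) | (s >> 25)) & 0xFFFFFFFF  # 32-bit rotate left
    return s

def parallel_compress(chaining, block):
    """Compress a 64-byte block into a 4-word chaining value by running
    four independent lines in parallel and combining their outputs."""
    words = struct.unpack('<16I', block)
    constants = (0x5A827999, 0x6ED9EBA1, 0x8F1BBCDC, 0xA953FD4E)
    outputs = [line_mix(c, words, k) for c, k in zip(chaining, constants)]
    # Cross-combine the lines so every output word depends on more
    # than one line as well as on the previous chaining value.
    return tuple((outputs[i] ^ outputs[(i + 1) % 4] ^ chaining[i]) & 0xFFFFFFFF
                 for i in range(4))
```

A serial design like MD5 would instead thread one state through all rounds; the point of the parallel layout is that an attacker must control all four lines at once.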
Abstract:
The shift from host-centric to information-centric networking (ICN) promises seamless communication in mobile networks. However, most existing works either consider well-connected networks with high node density or introduce modifications to ICN message processing for delay-tolerant networking (DTN). In this work, we present agent-based content retrieval, which provides information-centric DTN support as an application module without modifications to ICN message processing. This enables flexible interoperability in changing environments. If no content source can be found via wireless multi-hop routing, requesters may exploit the mobility of neighbor nodes (called agents) by delegating content retrieval to them. Agents that receive a delegation and move closer to content sources can retrieve data and return it to requesters. We show that agent-based content retrieval may be even more efficient in scenarios where multi-hop communication is possible. Furthermore, we show that broadcast communication may not necessarily be the best option, since dynamic unicast requests have little overhead and can better exploit short contact times between nodes (no broadcast delays are required for duplicate suppression).
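The delegation idea can be sketched as a minimal simulation. All names and message fields below are hypothetical, invented for illustration; they do not correspond to the paper's actual protocol or any ICN implementation.

```python
class Node:
    """A toy network node with a local content store and a list of
    retrieval delegations it has accepted from requesters."""

    def __init__(self, name):
        self.name = name
        self.store = {}          # content name -> data held locally
        self.delegations = []    # (content name, requester) pairs

    def request(self, content_name, neighbors):
        """Try normal retrieval from current neighbors first; if no
        neighbor can serve the content, delegate retrieval to one of
        them, which will carry the request while moving (DTN-style)."""
        for n in neighbors:
            data = n.store.get(content_name)
            if data is not None:
                return ('direct', data)
        agent = neighbors[0]
        agent.delegations.append((content_name, self))
        return ('delegated', agent.name)

    def meet_source(self, source):
        """Called when the agent's mobility brings it near a content
        source: fetch delegated content on behalf of the requester."""
        for content_name, requester in self.delegations:
            if content_name in source.store:
                requester.store[content_name] = source.store[content_name]
```

The key property shown here is that the requester's code path is unchanged whether the answer comes back directly or via an agent; the delegation lives entirely in the application layer.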
Abstract:
This work is an exploratory study of the processing of entertainment messages. Its objective was to propose and test a message-processing model dedicated to the understanding of digital games. To accomplish this, an extensive survey was carried out of techniques for observing users interacting with software and media, in order to learn the strengths and limitations of each technique and of its approach to the problem. A survey of message-processing models in traditional and new media was also conducted. On this basis, it was possible to propose a new model for analyzing the processing of entertainment messages. Once the theoretical model had been created, it was necessary to test whether the elements proposed as participants in this process were correct and whether they could adequately capture the similarities and differences in the interaction between players and the different media. For this reason, a data-collection instrument was built and validated with digital game designers, since these professionals know the process of creating a game, its elements, and its objectives. A first test was then conducted with digital game players of various ages on personal computers and interactive digital TV, in order to verify how the elements of the model related to one another. The following test collected data from digital game players on mobile phones, aiming to capture how an experience is formed through the processing of the entertainment message in a medium with numerous limitations, screen and key size among them.
As a result, statistical tests showed that games played on media such as personal computers appeal more through their aesthetic aspects, whereas the appreciation of a game on mobile phones depends much more on its ability to sustain interaction than a game played on a PC does. It is concluded that the processing of entertainment messages depends on the ability of their creators to understand the limits of each medium and to use the elements that make up a game's environment appropriately, so as to lead to its appreciation. (AU)
Abstract:
While a variety of crisis types loom as real risks for organizations and communities, and the media landscape continues to evolve, research is needed to help explain and predict how people respond to various kinds of crisis and disaster information. For example, despite the rising prevalence of digital and mobile media centered on still and moving visuals, and stark increases in Americans' use of visual-based platforms for seeking and sharing disaster information, relatively little is known about how the presence or absence of disaster visuals online might prompt or deter resilience-related feelings, thoughts, and/or behaviors. Yet, with such insights, governmental and other organizational entities, as well as communities themselves, may best help individuals and communities prepare for, cope with, and recover from adverse events. Thus, this work uses the theoretical lens of the social-mediated crisis communication model (SMCC) coupled with the limited capacity model of motivated mediated message processing (LC4MP) to explore effects of disaster information source and visuals on viewers' resilience-related responses to an extreme flooding scenario. Results from two experiments are reported. First, a preliminary 2 (disaster information source: organization/US National Weather Service vs. news media/USA Today) x 2 (disaster visuals: no visual podcast vs. moving visual video) factorial between-subjects online experiment with a convenience sample of university students probes effects of crisis source and visuals on a variety of cognitive, affective, and behavioral outcomes. A second between-subjects online experiment manipulating still and moving visual pace in online videos (no visual vs. still, slow-pace visual vs. still, medium-pace visual vs. still, fast-pace visual vs. moving, slow-pace visual vs. moving, medium-pace visual vs. moving, fast-pace visual) with a convenience sample recruited from Amazon's Mechanical Turk (MTurk) similarly probes a variety of potentially resilience-related cognitive, affective, and behavioral outcomes. The role of biological sex as a quasi-experimental variable is also investigated in both studies. Various implications for community resilience and recommendations for risk and disaster communicators are explored. Implications for theory building and future research are also examined. Resulting modifications of the SMCC model (i.e., removing "message strategy" and adding the new category of "message content elements" under organizational considerations) are proposed.
Abstract:
This paper investigates the demodulation of differentially phase modulated signals (DPMS) using optimal hidden Markov model (HMM) filters. The optimal HMM filter presented in the paper has computational order N³ per time instant, where N is the number of message symbols. Previously, optimal HMM filters have had computational order N⁴ per time instant. Also, suboptimal HMM filters of computational order N² per time instant have been proposed. The approach presented in this paper uses two coupled HMM filters and exploits knowledge of ...
Abstract:
Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. 
Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.
Abstract:
In this paper, we develop a low-complexity message passing algorithm for joint support and signal recovery of approximately sparse signals. The problem of recovering strictly sparse signals from noisy measurements can be viewed as one of recovering approximately sparse signals from noiseless measurements, making the approach applicable to strictly sparse signal recovery from noisy measurements. The support recovery embedded in the approach makes it suitable for recovering signals that share the same sparsity profile, as in the multiple measurement vector (MMV) problem. Simulation results show that the proposed algorithm, termed the JSSR-MP (joint support and signal recovery via message passing) algorithm, achieves performance comparable to that of the sparse Bayesian learning (M-SBL) algorithm in the literature, at an order of magnitude lower complexity than M-SBL.
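The JSSR-MP algorithm itself is not reproduced here. As a minimal, generic illustration of the underlying recovery problem — estimating a sparse x from measurements y = Ax + n — the sketch below uses plain iterative soft-thresholding (ISTA), a standard baseline, deliberately not the paper's message passing method; all parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, iters=2000):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by proximal gradient
    descent; the L1 penalty drives most coordinates of x to zero."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the quadratic term, then shrink toward zero.
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

The nonzero pattern of the returned vector plays the role of the "support" that JSSR-MP recovers jointly with the signal values; message passing methods aim at the same estimate with fewer and cheaper operations.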
Abstract:
In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits the channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the channel Gram matrix become increasingly weak compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes Jx, where J is the normalized Gram matrix of the channel and x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of J. We also propose a simple estimation scheme which directly obtains an estimate of J (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection, because the MPD algorithm needs no matrix inversion. It also achieves significantly better performance than MMSE and other message passing detection algorithms that use an MMSE estimate of the channel. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
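The channel hardening effect the receiver relies on is easy to verify numerically. This small sketch (not the CHEMP receiver itself; the helper name is invented) shows that for an i.i.d. Gaussian channel H, the normalized Gram matrix J = HᵀH / N approaches the identity as the number of antennas N grows, so its off-diagonal terms become weak relative to the diagonal.

```python
import numpy as np

def hardening_ratio(n_antennas, n_users, rng):
    """Average off-diagonal magnitude divided by average diagonal
    magnitude of the normalized Gram matrix of a random channel."""
    H = rng.standard_normal((n_antennas, n_users))
    J = H.T @ H / n_antennas                    # normalized Gram matrix
    diag = np.abs(np.diag(J)).mean()            # ~1 for i.i.d. Gaussian H
    off = np.abs(J - np.diag(np.diag(J))).sum() / (n_users * (n_users - 1))
    return off / diag
```

As the ratio shrinks, treating the off-diagonal interference terms as Gaussian noise (as the MPD algorithm does) becomes increasingly accurate, which is why the approach targets large antenna arrays.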
Abstract:
The aim of the present paper is to provide insight into the issue of idiom comprehension in patients who are in the process of recovery from the syndrome of aphasia. Research in figurative language comprehension has seen robust development in recent decades. However, it has not been until quite recently that psycholinguists began to delve into metaphorical language comprehension in brain-damaged populations. It was observed that even though the ability to produce and understand language is recovered in the majority of patients with head trauma, the impairment of some aspects of comprehension may be protracted. The understanding of idioms, metaphors, similes, and proverbs, due to their specific, non-literal character, has been evidenced to pose a serious problem for aphasic patients, as they fail to decipher the figurative meaning of the utterance and, instead, tend to process the message literally (Papagno et al. 2004). In the present study, three patients who suffered from aphasic disorder were tested for comprehension of idioms by means of two multiple-choice tasks. The obtained results corroborated the hypothesis that patients who are in the process of recovery from aphasia encounter various pitfalls in the comprehension of idiomatic language. Predominantly, they exhibit an inclination to choose the erroneous, literal paraphrases of the presented idioms over their correct, idiomatic counterparts. The present paper aims at accounting for the reasons underlying this tendency.
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
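The differential-encoding idea surveyed above can be sketched with the standard library's sequence matcher: instead of sending a full SOAP message, the sender transmits only the edit operations relative to a template that both sides already share. The opcode format and function names here are ad hoc, for illustration only, and the example envelope is invented.

```python
import difflib

def encode_diff(template, message):
    """Return the operations needed to rebuild `message` from `template`:
    'copy' ops reference template spans, 'insert' ops carry new text."""
    sm = difflib.SequenceMatcher(a=template, b=message)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == 'equal':
            ops.append(('copy', i1, i2))            # reuse template bytes
        else:
            ops.append(('insert', message[j1:j2]))  # send only new bytes
    return ops

def decode_diff(template, ops):
    """Rebuild the full message from the shared template and the ops."""
    parts = []
    for op in ops:
        if op[0] == 'copy':
            parts.append(template[op[1]:op[2]])
        else:
            parts.append(op[1])
    return ''.join(parts)
```

Because most of a SOAP envelope is boilerplate shared across messages, the 'insert' payloads are typically a small fraction of the full message size, which is exactly the overhead saving the surveyed techniques target.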
Abstract:
The world's population is constantly growing, and thus the concept of smart and cognitive cities is becoming more important. Developed countries are aware of and working towards needed changes in city management. However, emerging countries require the optimization of their own city management. This chapter illustrates, based on a use case, how a city in an emerging country can quickly progress using the concept of smart and cognitive cities. Nairobi, the capital of Kenya, is chosen for the test case. More than half of the population of Nairobi lives in slums with poor sanitation, and many slum inhabitants often share a single toilet, so the proper functioning and reliable maintenance of toilets are crucial. For this purpose, an approach for processing text messages based on cognitive computing (using soft computing methods) is introduced. Slum inhabitants can inform the responsible center via text messages in cases when toilets are not functioning properly. Through cognitive computer systems, the responsible center can fix the problem in a quick and efficient way by sending repair workers to the area. Focusing on the slum of Kibera, an easy-to-handle approach for slum inhabitants is presented, which can make the city more efficient, sustainable and resilient (i.e., cognitive).
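A minimal sketch of how such incoming text messages might be routed is shown below. The categories and keywords are invented for illustration and are not the chapter's actual system; a real cognitive system would use fuzzier soft-computing methods than exact keyword matching.

```python
# Hypothetical category -> keyword sets for triaging sanitation reports.
CATEGORIES = {
    'blockage': {'blocked', 'clogged', 'overflow'},
    'damage':   {'broken', 'cracked', 'door'},
    'supply':   {'water', 'empty', 'soap'},
}

def route_report(sms_text):
    """Return (category, score) for the category whose keywords best
    overlap the message words, or ('unknown', 0) if nothing matches."""
    words = set(sms_text.lower().split())
    best, best_score = 'unknown', 0
    for category, keywords in CATEGORIES.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = category, score
    return best, best_score
```

The score could then drive dispatch priority, so that reports matching several keywords (e.g. a blockage with overflow) are handled first.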
Abstract:
Communication between the 5′ and 3′ ends is a common feature of several aspects of eukaryotic mRNA metabolism. In the nucleus, the pre-mRNA 5′ end is bound by the nuclear cap binding complex (CBC). This RNA–protein complex plays an active role in both splicing and RNA export. We provide evidence for participation of CBC in the processing of the 3′ end of the message. Depletion of CBC from HeLa cell nuclear extract strongly reduced the endonucleolytic cleavage step of the cleavage and polyadenylation process. Cleavage was restored by addition of recombinant CBC. CBC depletion was found to reduce the stability of poly(A) site cleavage complexes formed in nuclear extract. We also provide evidence that the communication between the 5′ and 3′ ends of the pre-mRNA during processing is mediated by the physical association of the CBC/cap complex with 3′ processing factors bound at the poly(A) site. These observations, along with previous data on the function of CBC in splicing, illustrate the key role played by CBC in pre-mRNA recognition and processing. These data provide further support for the hypothesis that pre-mRNAs and mRNAs may exist and be functional in the form of "closed loops," due to interactions between factors bound at their 5′ and 3′ ends.