Abstract:
In Mobile Ad hoc NETworks (MANETs), where cooperative behaviour is mandatory, there is a high probability that some nodes become overloaded with packet-forwarding operations in order to support neighbour data exchange. This altruistic behaviour leads to an unbalanced load in the network in terms of traffic and energy consumption. In such scenarios, mobile nodes can benefit from an energy-efficient, traffic-aware routing protocol that better suits the limited battery capacity and throughput of the network. This PhD work focuses on proposing energy-efficient and load-balanced routing protocols for ad hoc networks. Whereas most existing routing protocols consider only the path-length metric when choosing the best route between a source and a destination node, in our proposed mechanism nodes find several routes for each source and destination pair and select the best route according to energy and traffic parameters, effectively extending the lifespan of the network. Our results show that by applying this novel mechanism, current flat ad hoc routing protocols can achieve higher energy efficiency and load balancing. Also, given the broadcast nature of wireless channels in ad hoc networks, another technique, Network Coding (NC), looks promising for energy efficiency. NC can reduce the number of transmissions and re-transmissions and increase the data transfer rate, which translates directly into energy efficiency. However, because NC requires foreign nodes to access packets for coding and forwarding, it needs mitigation techniques against unauthorized access and packet corruption. Therefore, we proposed mechanisms for handling these security attacks, in particular by serially concatenating codes to support reliability in ad hoc networks.
As a solution to this problem, we explored a new security framework that adds a degree of protection against eavesdropping attackers based on concatenated encoding, so that malicious intermediate nodes find it computationally intractable to decode the packets in transit. We also adopted another scheme that uses a Luby Transform (LT) code as a pre-code for NC. Although primarily designed for security applications, this code enables the sink nodes to recover corrupted packets even in the presence of byzantine attacks.
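The LT pre-coding idea above can be illustrated with a minimal sketch: each encoded symbol is the XOR of a random subset of source packets, and a sink recovers the sources by iteratively peeling symbols with a single unknown neighbour. This is a toy illustration with a uniform degree distribution, not the robust soliton distribution of real LT codes, and not the thesis implementation.

```python
import random

def lt_encode(source, num_symbols, rng):
    """Each encoded symbol is the XOR of a random subset of source packets.
    A uniform degree distribution is used here for simplicity."""
    k = len(source)
    symbols = []
    for _ in range(num_symbols):
        degree = rng.randint(1, k)
        neighbors = frozenset(rng.sample(range(k), degree))
        value = 0
        for i in neighbors:
            value ^= source[i]
        symbols.append((neighbors, value))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly resolve symbols with one unknown neighbor."""
    recovered = {}
    changed = True
    while changed and len(recovered) < k:
        changed = False
        for neighbors, value in symbols:
            unknown = [i for i in neighbors if i not in recovered]
            if len(unknown) == 1:
                for i in neighbors:
                    if i in recovered:
                        value ^= recovered[i]
                recovered[unknown[0]] = value
                changed = True
    return [recovered.get(i) for i in range(k)]

# Demo: keep requesting encoded symbols until the peeling decoder succeeds,
# mimicking the rateless ("fountain") property of LT codes.
rng = random.Random(7)
packets = [rng.randrange(256) for _ in range(8)]
received = []
decoded = [None]
while None in decoded:
    received += lt_encode(packets, 8, rng)
    decoded = lt_decode(received, 8)
```

Because the code is rateless, the sink simply collects symbols until decoding completes, which is what makes it attractive for recovering corrupted packets.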
Abstract:
Candida albicans is the major fungal pathogen in humans, causing diseases ranging from mild skin infections to severe systemic infections in immunocompromised individuals. The pathogenic nature of this organism is mostly due to its capacity to proliferate in numerous body sites and its ability to adapt to drastic changes in the environment. Candida albicans exhibits a unique translational system, decoding the leucine CUG codon ambiguously as leucine (3% of codons) and serine (97%) using a hybrid serine tRNA (tRNACAGSer). This tRNACAGSer is aminoacylated by two aminoacyl-tRNA synthetases (aaRSs): leucyl-tRNA synthetase (LeuRS) and seryl-tRNA synthetase (SerRS). Previous studies showed that exposure of C. albicans to macrophages, oxidative and pH stress, and antifungals increases Leu misincorporation levels from 3% to 15%, suggesting that C. albicans can regulate mistranslation levels in response to host defenses, antifungals and environmental stresses. Therefore, the hypothesis tested in this work is that Leu and Ser misincorporation at CUG codons depends on competition between LeuRS and SerRS for the tRNACAGSer. To test this hypothesis, levels of SerRS and LeuRS were indirectly quantified under different physiological conditions, using a fluorescent reporter system that measures the activity of the respective promoters. Results suggest that an increase in Leu misincorporation at CUG codons is associated with an increase in LeuRS expression, while SerRS levels are maintained. In the second part of the work, the objective was to identify putative regulators of SerRS and LeuRS expression. To accomplish this goal, C. albicans strains from a transcription-factor knock-out collection were transformed with the fluorescent reporter system and the expression of both aaRSs was quantified.
Alterations in the LeuRS/SerRS expression of mutant strains compared to the wild-type strain allowed the identification of five transcription factors as possible regulators of LeuRS and SerRS expression: ASH1, HAP2, HAP3, RTG3 and STB5. Globally, this work provides a first step towards elucidating the molecular mechanism of mistranslation regulation in C. albicans.
Digital Debris of Internet Art: An Allegorical and Entropic Resistance to the Epistemology of Search
Abstract:
This Ph.D. thesis proposes a speculative lens for reading Internet Art via the concept of digital debris. To do so, the research explores the idea of digital debris in Internet Art from 1993 to 2011 through a series of nine case studies. Here, digital debris are understood as words typed into search engines that then disappear; bits of obsolete code lingering on the Internet; abandoned websites; broken links; or pieces of ephemeral information circulating on the Internet, all used as material by practitioners. In this context, the thesis asks: what are digital debris? The thesis argues that the digital debris of Internet Art represent an allegorical and entropic resistance to what art historian David Joselit calls the Epistemology of Search. The ambition of the research is to develop a language in between the agency of the artist and the autonomy of the algorithm, as a way of introducing Internet Art to a pluridisciplinary audience, hence the presence of the comparative studies unfolding throughout the thesis between Internet Art and pioneers in the recycling of waste in art, the use of instructions as a medium, and the programming of poetry. While many anthropological and ethnographic studies are concerned with the material object of the computer as debris once it becomes obsolete, very few studies have analysed waste as discarded data. The research shifts the focus from an industrial production of digital debris (such as pieces of hardware) to obsolete pieces of information in art practice. The research demonstrates that illustrations of such considerations can be found, for instance, in Cory Arcangel’s work Data Diaries (2001), where QuickTime files are stolen, disassembled, and then re-used in new displays. The thesis also looks at Jodi’s approach in Jodi.org (1993) and Asdfg (1998), where websites and hyperlinks are detourned, deconstructed, and presented in abstract collages that reveal the architecture of the Internet.
The research starts in a typological manner and classifies the pieces of Internet Art according to the structure at play in the work. Indeed, if some online works dealing with discarded documents offer a self-contained and closed system, others nurture the idea of openness and unpredictability. The thesis foregrounds the ideas generated through the artworks and interprets how the latter are visually constructed and displayed. Not only does the research question the status of digital debris once they are incorporated into art practice, but it also examines the method by which they are retrieved, manipulated and displayed, to submit that the digital debris of Internet Art are the result of both semantic and automated processes, rendering them both an object of discourse and a technical reality. Finally, in order to frame the serendipity and process-based nature of digital debris, the Ph.D. concludes that digital debris are entropic: in other words, they are items of language-to-be, paradoxically locked in a constant state of realisation.
Abstract:
Contemporary studies of spatial and social cognition frequently use human figures as stimuli. The interpretation of such studies may be complicated by spatial compatibility effects that emerge when researchers employ spatial responses, and participants spontaneously code spatial relationships about an observed body. Yet, the nature of these spatial codes – whether they are location- or object-based, and coded from the perspective of the observer or the figure – has not been determined. Here, we investigated this issue by exploring spatial compatibility effects arising for objects held by a visually presented whole-bodied schematic human figure. In three experiments, participants responded to the colour of the object held in the figure’s left or right hand, using left or right key presses. Left-right compatibility effects were found relative to the participant’s egocentric perspective, rather than the figure’s. These effects occurred even when the figure was rotated by 90 degrees to the left or to the right, and the coloured objects were aligned with the participant’s midline. These findings are consistent with spontaneous spatial coding from the participant’s perspective and relative to the normal upright orientation of the body. This evidence for object-based spatial coding implies that the domain general cognitive mechanisms that result in spatial compatibility effects may contribute to certain spatial perspective-taking and social cognition phenomena.
Abstract:
An experimental study is presented that assesses the influence of redundancy and neutrality on the performance of a (1+1)-ES evolution strategy, modeled using Markov chains and applied to NK fitness landscapes. The study uses two families of redundant binary representations: a non-neutral family based on linear transformations, which allows the phenotypic neighborhoods to be designed in a simple and effective way, and a neutral family based on the mathematical formulation of error-control codes. The results indicate whether redundancy or neutrality more strongly affects the behavior of the algorithm.
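The setup described above can be sketched minimally: a (1+1)-ES evolving a redundant genotype that is decoded to a phenotype before evaluation on an NK landscape. The majority-vote redundant mapping below is a hypothetical stand-in for the representations studied in the abstract, not the actual linear-transformation or error-control-code families.

```python
import itertools
import random

def nk_fitness(phenotype, contributions, neighbors):
    """NK landscape fitness: average of N per-locus contributions, each
    depending on the locus value and its K epistatic neighbors."""
    n = len(phenotype)
    return sum(
        contributions[i][tuple(phenotype[j] for j in (i,) + neighbors[i])]
        for i in range(n)
    ) / n

def decode(genotype, n):
    """Toy redundant genotype-phenotype map: each phenotype bit is the
    majority vote of a block of genotype bits (many genotypes map to
    one phenotype)."""
    block = len(genotype) // n
    return tuple(
        int(sum(genotype[i * block:(i + 1) * block]) * 2 > block)
        for i in range(n)
    )

def one_plus_one_es(n, k, genome_len, steps, rng):
    # Random epistasis structure and per-locus contribution tables.
    neighbors = [
        tuple(rng.sample([j for j in range(n) if j != i], k)) for i in range(n)
    ]
    contributions = [
        {key: rng.random() for key in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]
    parent = [rng.randint(0, 1) for _ in range(genome_len)]
    best = nk_fitness(decode(parent, n), contributions, neighbors)
    for _ in range(steps):
        # (1+1)-ES: flip each bit with probability 1/len; keep child if not worse.
        child = [bit ^ (rng.random() < 1.0 / genome_len) for bit in parent]
        fit = nk_fitness(decode(child, n), contributions, neighbors)
        if fit >= best:
            parent, best = child, fit
    return best

rng = random.Random(3)
final_fitness = one_plus_one_es(n=8, k=2, genome_len=24, steps=200, rng=rng)
```

Accepting equally fit children lets the search drift across neutral genotypes, which is exactly the effect whose usefulness the study measures.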
Abstract:
Estrogen actions are mainly mediated by specific nuclear estrogen receptors (ERs), for which different genes and a diversity of transcript variants have been identified, mainly in mammals. In this study, we investigated the presence of ER splice variants in the teleost fish gilthead sea bream (Sparus auratus), by comparison with the genomic organization of the related species Takifugu rubripes. Two exon2-deleted ERα transcript variants were isolated from liver cDNA of estradiol-treated fish. The ΔE2 variant lacks ERα exon 2, generating a premature termination codon and a putative C-terminal truncated receptor, while the ΔE2,3* variant contains an in-frame deletion of exon 2 and part of exon 3 and codes for a putative ERα protein variant lacking most of the DNA-binding domain. Both variants were expressed at very low levels in several female and male sea bream tissues, and their expression was highly inducible in liver by estradiol-17β treatment with a strong positive correlation with the typical wild-type (wt) ERα response in this tissue. These findings identify novel estrogen responsive splice variants of fish ERα, and provide the basis for future studies to investigate possible modulation of wt-ER actions by splice variants.
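The way a deletion whose length is not a multiple of three exposes a premature termination codon, as in the ΔE2 variant, can be shown with a toy translation sketch. The exon sequences and minimal codon table below are hypothetical illustrations, not the actual S. auratus ERα exons.

```python
# Minimal codon table covering only this toy example.
CODONS = {"ATG": "M", "GCC": "A", "GAA": "E", "GAT": "D",
          "GAC": "D", "CAT": "H", "TAA": "*", "TGA": "*", "TAG": "*"}

def translate(mrna):
    """Translate codon by codon until the first stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# Toy exons: exon 2 is 5 nt long, so deleting it shifts the reading
# frame of everything downstream by 5 mod 3 = 2 positions.
exon1, exon2, exon3 = "ATGGCC", "GAAGA", "TGACCATTAA"

wild_type = translate(exon1 + exon2 + exon3)  # full-length product
delta_e2 = translate(exon1 + exon3)           # exon 2 removed: early stop
```

In the wild-type frame the same nucleotides are split across codons and read through normally, while in the exon-2-deleted transcript a stop codon appears almost immediately, truncating the putative protein.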
Abstract:
In addition to phonological deficits, difficulties at the level of the visual recognition system (i.e., the mechanisms that could affect the induction of orthographic representations or the connection of visual to lexical codes) constitute potential sources of the poor reading and visual naming that characterize dyslexia.
Abstract:
Doctoral thesis, Physics, Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Thesis (Ph.D.)--University of Washington, 2013
Abstract:
Turbo codes experience a significant decoding delay because of the iterative nature of the decoding algorithms, the high number of metric computations, and the complexity added by the (de)interleaver. The extrinsic information is exchanged sequentially between two Soft-Input Soft-Output (SISO) decoders. Instead of this sequential process, a received frame can be divided into smaller windows to be processed in parallel. In this paper, a novel parallel processing methodology is proposed based on previous parallel decoding techniques. A novel Contention-Free (CF) interleaver is proposed as part of the decoding architecture, which allows extrinsic Log-Likelihood Ratios (LLRs) to be used immediately as a priori LLRs to start the second half of the iterative turbo decoding. The simulation case studies performed in this paper show that our parallel decoding method can provide an 80% time saving compared to standard decoding and a 30% time saving compared to previous parallel decoding methods, at the expense of a 0.3 dB Bit Error Rate (BER) performance degradation.
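The windowed parallelization can be sketched structurally as follows. The per-window function here is a placeholder that merely scales the channel LLRs; a real decoder would run a SISO pass (e.g. BCJR) over each window. This is a sketch of the frame-partitioning structure only, not the decoder proposed in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def split_windows(frame, num_windows):
    """Divide a received frame of LLRs into near-equal windows so that
    several SISO passes can run concurrently."""
    size = -(-len(frame) // num_windows)  # ceiling division
    return [frame[i:i + size] for i in range(0, len(frame), size)]

def siso_window(window):
    """Placeholder for one windowed SISO pass: a real implementation
    would run forward/backward recursions over this window."""
    return [0.5 * llr for llr in window]

def parallel_half_iteration(frame, num_windows):
    """One half-iteration: process all windows in parallel, then
    reassemble the extrinsic LLRs in frame order."""
    windows = split_windows(frame, num_windows)
    with ThreadPoolExecutor(max_workers=num_windows) as pool:
        results = list(pool.map(siso_window, windows))
    return [llr for window in results for llr in window]
```

Because `Executor.map` preserves input order, the reassembled extrinsic LLRs line up with the original frame, which is what lets them feed the next half-iteration directly.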
Abstract:
This research examines media integration in China, choosing two Chinese newspaper groups as cases for comparative study. The study analyses the convergence strategies of these Chinese groups by reference to a role model of convergence developed from a literature review of media-convergence cases in the UK, in particular the Guardian (GNM), Telegraph Media Group (TMG), the Daily Mail and the Times. The UK cases serve to establish the characteristics, causes and consequences of different forms of convergence and to formulate a model of convergence. The model specifies the levels of newsroom convergence and the sub-units of analysis used to collect empirical data from Chinese news organisations and to compare their strategies, practices and results with the UK experience. The literature review shows that there is a need for more comparative studies of media-convergence strategy in general, and particularly in relation to Chinese media; the study therefore addresses a gap in the understanding of media convergence in China. Its contributions are threefold. Firstly, it develops a new and comprehensive model of media convergence and a detailed understanding of why media companies pursue differing strategies in managing convergence across a wide range of units of analysis. Secondly, it compares the multimedia strategies of media groups under radically different political systems; since there is no standard research method or systematic theoretical framework for the study of newsroom convergence, the study develops an integrated perspective, triangulating textual analysis, field observation and interviews to explain systematically what the newsroom structure was like in the past, how copy flow changed, and why. Finally, this case study of media groups can provide an industrial model or framework for other media groups.
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the Master's degree in Audiovisual and Multimedia.
Abstract:
Introduction: Healthcare improvements have allowed prevention but have also increased life expectancy, resulting in more people being at risk. Our aim was to analyse the separate effects of age, period and cohort on incidence rates by sex in Portugal, 2000–2008. Methods: From the National Hospital Discharge Register, we selected admissions (aged ≥49 years) with hip fractures (ICD9-CM, codes 820.x) caused by low/moderate trauma (falls from standing height or less), readmissions and bone cancer cases. We calculated person-years at risk using population data from Statistics Portugal. To identify period and cohort effects for all ages, we used an age–period–cohort model (1-year intervals) followed by generalised additive models with a negative binomial distribution of the observed incidence rates of hip fractures. Results: There were 77,083 hospital admissions (77.4 % women). Incidence rates increased exponentially with age for both sexes (age effect). Incidence rates fell after 2004 for women and were random for men (period effect). There was a general cohort effect similar in both sexes; risk of hip fracture altered from an increasing trend for those born before 1930 to a decreasing trend following that year. Risk alterations (not statistically significant) coincident with major political and economic change in the history of Portugal were observed around birth cohorts 1920 (stable–increasing), 1940 (decreasing–increasing) and 1950 (increasing–decreasing only among women). Conclusions: Hip fracture risk was higher for those born during major economically/politically unstable periods. Although bone quality reflects lifetime exposure, conditions at birth may determine future risk for hip fractures.
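The core incidence-rate computation behind the analysis above, cases divided by person-years at risk, can be illustrated with a minimal sketch. The age bands and counts below are toy figures for illustration, not the Portuguese registry data.

```python
def incidence_rates(admissions, person_years, per=10_000):
    """Age-specific incidence rate: cases / person-years at risk,
    scaled to a standard denominator (here, per 10,000 person-years)."""
    return {age: per * admissions[age] / person_years[age]
            for age in admissions}

# Hypothetical hip-fracture admissions and person-years by age band.
admissions = {"50-59": 120, "60-69": 480, "70-79": 1900}
person_years = {"50-59": 600_000, "60-69": 480_000, "70-79": 380_000}

rates = incidence_rates(admissions, person_years)
```

With these toy numbers the rates rise steeply across age bands (2, 10, 50 per 10,000 person-years), mirroring the exponential age effect reported in the abstract; the age-period-cohort model then decomposes such rates into separate age, period and cohort components.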
Abstract:
Considering Alan Turing’s challenge in «Computing Machinery and Intelligence» (1950) – can machines play the «imitation game»? – it is proposed that the requirements of the Turing test are already implicitly being used to check the credibility of virtual characters and avatars. Like characters, avatars aim to visually express emotions (the exterior signs of the existence of feeling), and their creators have to resort to emotion codes. The traditional arts have contributed profusely to this field and, together with the science of anatomy, shaped the grounds for the current Facial Action Coding System (FACS) and its databases. However, FACS researchers have to improve their «instruction tables» so that machines can, in the near future, be programmed to recognize human expressions (face and body) and classify them adequately. For the moment, reproductions have to resort to copying real-life expressions, and the present smile of avatars comes from mirroring their human users.
Abstract:
This work aimed to contribute to drug discovery and development (DDD) for tauopathies, while expanding our knowledge of this group of neurodegenerative disorders, which includes Alzheimer’s disease (AD). Using yeast, a recognized model for neurodegeneration studies, useful models were produced for the study of tau interaction with beta-amyloid (Aβ), both AD hallmark proteins. The characterization of these models suggests that these proteins co-localize and that Aβ1-42, which is toxic to yeast, is involved in tau40 phosphorylation (Ser396/404) via the GSK-3β yeast orthologue, whereas tau seems to facilitate Aβ1-42 oligomerization. The mapping of tau’s interactome in yeast, achieved with a tau toxicity enhancer screen using the yeast deletion collection, provided a novel framework, composed of 31 genes, to identify new mechanisms associated with tau pathology, as well as new drug targets or biomarkers. This genomic screen also allowed the selection of the yeast strain mir1Δ-tau40 for the development of a new GPSD2TM drug discovery screening system. A library of 138 unique marine bacterial extracts, obtained from the Mid-Atlantic Ridge hydrothermal vents, was screened with mir1Δ-tau40. Three extracts were identified as suppressors of tau toxicity and constitute good starting points for DDD programs. The mir1Δ strain was sensitive to tau toxicity, linking tau pathology with mitochondrial function. SLC25A3, the human homologue of MIR1, codes for the mitochondrial phosphate carrier protein (PiC). Resorting to RNA interference (RNAi), SLC25A3 expression was silenced in human neuroglioma cells, as a first step towards the engineering of a neural model for replicating the results obtained in yeast. This model is essential to understand the mechanisms of tau toxicity at the mitochondrial level and to validate PiC as a relevant drug target.
The set of DDD tools here presented will foster the development of innovative and efficacious therapies, urgently needed to cope with tau-related disorders of high human and social-economic impact.