958 results for Decoding principle
Abstract:
With the rapid development of Internet technologies, video and audio processing have become centrally important, driven by the constant demand for high-quality media content. As network environments and hardware improve, this demand grows ever more pressing: people expect high-quality video and audio, including streamed media resources. FFmpeg is an open-source suite of audio/video decoding programs, and many commercial players use it as their playback core. This paper presents the design of a simple, easy-to-use video player based on FFmpeg. The first part covers the basic theory and background of video playback, including concepts such as data formats, streaming media, and video encoding and decoding. In short, the player rests on a video decoding pipeline: receive video packets from the network, parse and de-encapsulate the transport protocols, de-encapsulate the container data to obtain encoded bitstreams, and decode those bitstreams into pixel data that the graphics card can display directly. Encoding and decoding may lose data to varying degrees, which is called lossy compression, but this usually does not noticeably degrade the user experience. The second part explains the FFmpeg decoding process, one of the key points of the paper. In this project FFmpeg performs the main decoding task: by calling the principal functions and structures of the FFmpeg class libraries, packaged video formats are converted into pixel data, which SDL then displays. The third part covers the SDL display flow; it likewise invokes the key display functions of the SDL class libraries, although SDL supports not only display but many other tasks common in games. With these pieces in place, a standalone video player is completed, equipped with all the essential functions of a player. The fourth part builds a simple user interface for the player with MFC, making it usable by a general audience. Finally, in view of the boom in the mobile Internet, where people can hardly put down their phones, the paper briefly introduces how to port the video player to Android, one of the most widely used mobile platforms.
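As a sketch of the demux/decode pipeline described above, assuming the third-party PyAV bindings (`av`) to the FFmpeg libraries; the paper's player calls the FFmpeg C libraries directly, so this is only a compact illustration of the same stages, with a placeholder input file:

```python
import av  # PyAV: Python bindings to the FFmpeg libraries (pip install av)

def decode_to_frames(url):
    """De-encapsulate a container (file or network stream) and decode to pixels."""
    container = av.open(url)                 # protocol + container de-encapsulation
    stream = container.streams.video[0]      # pick the first video stream
    for packet in container.demux(stream):   # compressed packets (e.g. H.264)
        for frame in packet.decode():        # codec step: packets -> raw frames
            # RGB pixel array that a display layer (e.g. SDL) could blit directly
            yield frame.to_ndarray(format="rgb24")

for i, pixels in enumerate(decode_to_frames("input.mp4")):  # placeholder file
    print(i, pixels.shape)   # e.g. (height, width, 3)
    if i >= 2:
        break
```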
Abstract:
The Herglotz problem is a generalization of the fundamental problem of the calculus of variations. In this paper, we consider a class of non-differentiable functions, where the dynamics is described by a scale derivative. Necessary conditions are derived to determine the optimal solution of the problem. Related problems are also considered, such as transversality conditions, the multi-dimensional case, higher-order derivatives, and the case of several independent variables.
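For orientation, the classical Herglotz problem reads as follows (standard differentiable formulation; per the abstract, the paper replaces the ordinary derivative with a scale derivative):

```latex
% Herglotz's generalized variational problem: find x(.) minimizing z(b),
% where z is defined by a differential equation rather than an integral.
\begin{equation*}
  \dot z(t) = L\bigl(t, x(t), \dot x(t), z(t)\bigr), \qquad
  z(a) = z_a , \qquad z(b) \longrightarrow \min .
\end{equation*}
% When L does not depend on z, integration gives
% z(b) = z_a + \int_a^b L(t, x(t), \dot x(t))\, dt,
% recovering the fundamental problem of the calculus of variations.
```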
Abstract:
The second generation of large-scale interferometric gravitational wave (GW) detectors will be limited by quantum noise over a wide frequency range in their detection band. Further sensitivity improvements for future upgrades or new detectors beyond the second generation motivate the development of measurement schemes to mitigate the impact of quantum noise in these instruments. Two strands of development are being pursued to reach this goal, focusing both on modifications of the well-established Michelson detector configuration and on the development of different detector topologies. In this paper, we present the design of the world's first Sagnac speed meter (SSM) interferometer, which is currently being constructed at the University of Glasgow. With this proof-of-principle experiment we aim to demonstrate the theoretically predicted lower quantum noise of a Sagnac interferometer compared to an equivalent Michelson interferometer, in order to qualify the SSM for further research towards implementation in a future-generation large-scale GW detector, such as the planned Einstein Telescope observatory.
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, namely Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the aim of the thesis is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into three strands: the investigation of the principle itself, the investigation of its application to articulatory effort, and the investigation of its application to phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapters One and Two), the second part deals with the application of least effort to articulatory effort (Chapters Three and Four), whereas the third part discusses the principle of least effort in phonological processes (Chapters Five and Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews the various names which denote least effort, explains the origins of the principle and reviews the literature devoted to it in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle with evidence from the linguistic field (low-level phonetic processes and language universals) and from non-linguistic ones (physics, biology, psychology and cognitive sciences), arguing that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis' title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis' theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature as well as its various understandings in chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The aim of the thesis is to investigate the principle of least effort on the basis of external evidence; therefore Chapters Four and Six provide evidence in the form of three experiments and text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of experiments. It describes the experiments in terms of their predictions and methodology.
In particular, it discusses the adopted measure of effort, established by means of the effort parameters, as well as their status. The statistical methods of the experiments are also clarified. The chapter reports the results of the experiments, presents them graphically, and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers' preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes; thus Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky's (1997) contribution to Natural Phonology with respect to processes, in terms of their typology and the incorporation of articulatory gestures into the concept of a process. Chapter Six investigates phonological processes. In particular, it identifies the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and typologies. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed, enumerates the proposals made in discussing those concepts, and presents some questions for future research which have emerged in the course of the investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
Common computational principles underlie the processing of various visual features in the cortex. They are thought to create similar patterns of contextual modulation in behavioral studies for different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities in the two domains, while different feature population characteristics explained well the differences in modulation on both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across cortical areas.
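A minimal sketch of the decoding stage named above (vector-average readout of a circular feature population); the von Mises-style tuning curve and population size are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

# Population of units whose preferred directions tile the circle.
prefs = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def responses(stimulus_dir, kappa=2.0):
    """Illustrative circular (von Mises-like) tuning curve."""
    return np.exp(kappa * (np.cos(stimulus_dir - prefs) - 1.0))

def vector_average_decode(rates):
    """Standard vector average: sum unit vectors weighted by firing rates."""
    z = np.sum(rates * np.exp(1j * prefs))
    return np.angle(z) % (2.0 * np.pi)

true_dir = np.deg2rad(135.0)
print(np.rad2deg(vector_average_decode(responses(true_dir))))  # ~135.0
```

For orientation, which is periodic over 180° rather than 360°, the same readout applies after doubling the angles.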
Abstract:
Simplifying the Einstein field equation by assuming the cosmological principle yields a set of differential equations which governs the dynamics of the universe as described in the cosmological standard model. The cosmological principle assumes that space appears the same everywhere and in every direction; moreover, the principle has earned its position as a fundamental assumption in cosmology by being compatible with the observations of the 20th century. It was not until the current century that observations on cosmological scales showed significant deviations from isotropy and homogeneity, implying a violation of the principle. Among these observations are the inconsistency between local and non-local Hubble parameter evaluations, the baryon acoustic features of the Lyman-α forest, and the anomalies of the cosmic microwave background radiation. As a consequence, cosmological models beyond the cosmological principle have been studied extensively; after all, the principle is a hypothesis and as such should be tested as frequently as any other assumption in physics. In this thesis, the effects of inhomogeneity and anisotropy, arising as a consequence of discarding the cosmological principle, are investigated. The geometry and matter content of the universe become more cumbersome, and the resulting effects on the Einstein field equation are introduced. The cosmological standard model and its issues, both fundamental and observational, are presented. Particular attention is given to the local Hubble parameter, supernova explosions, baryon acoustic oscillations, and cosmic microwave background observations, as well as the cosmological constant problems. Explored and proposed resolutions that emerge from violating the cosmological principle are reviewed. The thesis concludes with a summary and outlook of the included research papers.
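Concretely, the differential equations referred to are the Friedmann equations, obtained by inserting the homogeneous and isotropic FLRW metric into the Einstein field equation (standard form, with $c = 1$):

```latex
% Friedmann equations for the scale factor a(t), with energy density \rho,
% pressure p, spatial curvature k and cosmological constant \Lambda:
\begin{align*}
  H^2 \equiv \left(\frac{\dot a}{a}\right)^{2}
    &= \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3}, \\
  \frac{\ddot a}{a}
    &= -\frac{4\pi G}{3}\,\left(\rho + 3p\right) + \frac{\Lambda}{3}.
\end{align*}
```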
Abstract:
Whilst the principle of proportionality indisputably plays a crucial role in the protection of fundamental rights, it is still unclear to what extent it applies to other fields in international law. The paper therefore explores the role it plays in selected fields of public international law, beyond human rights. The examination begins in the classical domain of reprisals and in maritime boundary delimitation and continues to analyse the role played in the law of multilateral trade regulation of the World Trade Organization and in bilateral investment protection. In an attempt to explain differences in recourse to proportionality in the various fields, we develop in our conclusions a distinction between horizontal and vertical constellations of legal protection.
Abstract:
The integration of new immigrants poses a challenge, particularly in sub-state nations. Indeed, citizens living in such contexts are more inclined to perceive immigrants as potential political and cultural threats. However, different ethnic and religious minority groups do not all represent the same degree of threat. This study seeks to determine whether French-speaking Quebecers perceive various ethnic and religious minority groups differently, and whether they hold more negative attitudes towards these groups than other Canadians do. To the extent that such negative attitudes exist, the study seeks to understand whether they are based primarily on racial prejudice or on cultural concerns. Drawing on national and provincial data, the results show that French-speaking Quebecers are more negative towards religious minorities than other Canadians, but not towards racial minorities, and that these negative attitudes are founded primarily on concerns about secularism and cultural security. The antipathy towards certain minorities observed within Quebec's francophone majority therefore appears to be directed at specific groups, and to rest on principles that are more cultural than racial in nature.
Abstract:
The first chapter presents the maximum principle for elliptic operators. The Laplace operator is considered first, followed by second-order elliptic operators, for which the Hopf maximum principle is also proved. The second chapter addresses the maximum principle for parabolic operators and uses it to prove the uniqueness of solutions to boundary value problems.
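For reference, a standard statement of the weak maximum principle for the Laplace operator, the first of the results treated:

```latex
% Weak maximum principle: a subharmonic function on a bounded domain
% attains its maximum on the boundary.
\begin{equation*}
  \Delta u \ge 0 \ \text{in a bounded domain } \Omega \subset \mathbb{R}^n,
  \quad u \in C^{2}(\Omega) \cap C(\overline{\Omega})
  \;\Longrightarrow\;
  \max_{\overline{\Omega}} u = \max_{\partial \Omega} u .
\end{equation*}
```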
Abstract:
Machine learning is widely adopted to decode multivariate neural time series, including electroencephalographic (EEG) and single-cell recordings. Recent solutions based on deep learning (DL) outperform traditional decoders by automatically extracting relevant discriminative features from raw or minimally pre-processed signals. Convolutional Neural Networks (CNNs) have been successfully applied to EEG and are the most common DL-based EEG decoders in the state of the art (SOA). However, current research is affected by some limitations. SOA CNNs for EEG decoding usually exploit deep and heavy structures, with the risk of overfitting small datasets, and their architectures are often defined empirically. Furthermore, CNNs are mainly validated by designing within-subject decoders. Crucially, the automatically learned features remain largely unexplored; conversely, interpreting these features may be of great value for using decoders also as analysis tools, highlighting neural signatures underlying the different decoded brain or behavioral states in a data-driven way. Lastly, SOA DL-based algorithms used to decode single-cell recordings rely on networks that are more complex, slower to train, and less interpretable than CNNs, and the use of CNNs with these signals has not been investigated. This PhD research addresses the previous limitations, with reference to P300 and motor decoding from EEG, and motor decoding from single-neuron activity. The CNNs were designed to be light, compact, and interpretable. Moreover, multiple training strategies were adopted, including transfer learning, which could reduce training times and promote the application of CNNs in practice. Furthermore, CNN-based EEG analyses were proposed to study neural features in the spatial, temporal, and frequency domains, and proved to highlight and enhance relevant neural features related to P300 and motor states better than canonical EEG analyses. Remarkably, these analyses could be used, in perspective, to design novel EEG biomarkers for neurological or neurodevelopmental disorders. Lastly, CNNs were developed to decode single-neuron activity, providing a better compromise between performance and model complexity.
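As a rough illustration of what a light, compact CNN for EEG decoding can look like, a PyTorch sketch follows; the layer sizes, kernel widths, channel counts, and class count are illustrative assumptions, not the architectures developed in the thesis:

```python
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    """Illustrative compact CNN for (batch, 1, channels, time) EEG windows.

    Temporal convolution -> spatial (across-electrode) convolution -> pooling
    -> linear classifier: a common lightweight design, not the thesis's model.
    """
    def __init__(self, n_channels=32, n_samples=256, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 33), padding=(0, 16), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.25),
        )
        self.classifier = nn.Linear(16 * (n_samples // 8), n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = CompactEEGNet()
logits = model(torch.randn(4, 1, 32, 256))  # 4 mock EEG trials
print(logits.shape)                         # torch.Size([4, 2])
```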
Abstract:
In the brain, mutations in the SLC25A12 gene encoding AGC1 cause an ultra-rare genetic disease, reported as a developmental and epileptic encephalopathy associated with global cerebral hypomyelination. Symptoms of the disease include diffuse hypomyelination, arrested psychomotor development, severe hypotonia, and seizures, and are common to other neurological and developmental disorders. Among the biological components believed to be most affected by AGC1 deficiency are oligodendrocytes, the glial cells responsible for myelination. Recent studies (Poeta et al., 2022) have also shown how altered levels of transcription factors and epigenetic modifications greatly affect proliferation and differentiation in oligodendrocyte precursor cells (OPCs). In this study we explore the transcriptomic landscape of Agc1 deficiency in two different model systems: OPCs silenced for Agc1 and iPSCs from human patients differentiated into neural progenitors. The analyses range from differential expression analysis to alternative splicing and master regulator analysis. ATAC-seq results on OPCs were integrated with the RNA-seq results to assess the activity of a transcription factor based on the accessibility of its putative targets, which makes it possible to use the RNA-seq data to infer its role as either activator or repressor. All the findings for this model were also integrated with early data from the iPSC RNA-seq results, looking for commonalities between the two model systems; among these we find a downregulation of genes encoding SREBP, a transcription factor regulating fatty acid biosynthesis, a key process for myelination, which could explain the hypomyelinated state of patients. We also find that in both systems cells tend to form more neurites, likely losing their ability to differentiate, given their progenitor state. Finally, we report several alterations in the chromatin state of cells lacking Agc1, which supports the hypothesis that AGC1 deficiency is not restricted to metabolic alterations in the cell but involves a profound shift in its regulatory state.
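A toy sketch of the kind of RNA-seq/ATAC-seq integration described above, on synthetic data; the correlation-sign heuristic and all arrays are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_targets = 12, 50
tf_expression = rng.normal(size=n_samples)            # RNA-seq: TF expression
target_accessibility = (tf_expression[:, None]        # ATAC-seq: putative targets
                        + 0.5 * rng.normal(size=(n_samples, n_targets)))

# If TF expression tracks the mean accessibility of its putative targets
# across samples, read the TF as an activator; anti-correlation as a repressor.
mean_access = target_accessibility.mean(axis=1)
r = np.corrcoef(tf_expression, mean_access)[0, 1]
print("activator" if r > 0 else "repressor", round(float(r), 2))
```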
Abstract:
In this thesis, the viability of Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is presented. This method derives, directly from data, computationally efficient reduced-order models (ROMs) which can replace high-fidelity physics-based models that are too onerous or unavailable. Optimizations and extensions to the standard implementation of the methodology are proposed, investigating diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations as well as to time series of observations of real systems. The resulting ROMs are tested on two tasks: (i) reducing the storage requirements of high-fidelity simulations or observations; (ii) interpolating and extrapolating missing data. The capabilities of DMD can also be exploited to alleviate the cost of onerous studies that require many simulations, such as uncertainty quantification analysis, especially when dealing with complex high-dimensional systems. In this context, a novel approach to address parameter variability when modeling systems with a space- and time-variant response is proposed. Specifically, DMD is merged with another model-reduction technique, namely Polynomial Chaos Expansion, for uncertainty quantification purposes. Useful guidelines for DMD deployment result from the study, together with a demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
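For reference, a minimal sketch of standard DMD from snapshot data (the textbook SVD-based procedure, not the thesis's optimized or extended implementations; the traveling-wave snapshots are a toy assumption):

```python
import numpy as np

def dmd(X, r):
    """Standard DMD of a snapshot matrix X (states x times), truncated to rank r.

    Returns eigenvalues and exact DMD modes of the best-fit linear operator A
    with X[:, 1:] ~= A @ X[:, :-1].
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r, :].conj().T   # rank-r truncation
    Atilde = U.conj().T @ X2 @ V / s                # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = (X2 @ V / s) @ W                        # exact DMD modes
    return eigvals, modes

# Toy case: a traveling wave, exactly captured by two DMD modes.
x = np.linspace(0, 2 * np.pi, 128)
t = np.linspace(0, 4 * np.pi, 100)
X = np.sin(x[:, None] - t[None, :])
eigvals, modes = dmd(X, r=2)
print(np.abs(eigvals))  # ~1.0: neutrally stable oscillatory dynamics
```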