229 results for Cache
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to show how it collects the mobile user’s context information, registers service providers, derives the mobile user’s current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices: context acquisition is centralized at the server to ensure the reusability of context information across mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better solution overall. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes the user context profiles into consideration. By feeding back the dynamics of the system, every prior user selection is saved for further analysis so that it can help improve the results of subsequent searches. Building on these server-side developments, several solutions are then provided at the client side. A software-based proxy component is set up for data collection. This research endorses the view that the client-side proxy should contain the context reasoning component, and the implementation of such a component lends credence to this view in that the context-aware applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resource usage (bandwidth, CPU cycles, power). Java and MySQL are used to implement the proposed architecture and to test scenarios derived from users’ daily activities. To meet the practical demands of a testing environment without incurring the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a realistic way. The integration of the Yahoo search engine into the context-aware architecture demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience across a broad range of potential applications.
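The abstract does not detail the cache scheme itself; as a rough illustration of what a client-side context cache can look like, the following Java sketch (the class names ContextCache and ContextCacheDemo are hypothetical, not from the dissertation) bounds the cache with a least-recently-used eviction policy so that only the most recently used context entries stay on the device.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a client-side context cache (an assumption for illustration,
// not the dissertation's actual scheme): a bounded LRU map keeps recently used
// context entries on the device so repeated lookups avoid round trips to the
// context server, saving bandwidth, CPU cycles and power.
final class ContextCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    ContextCache(int maxEntries) {
        super(16, 0.75f, true);      // access-order = true gives LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;  // evict the least recently used entry when full
    }
}

// Usage: cache derived context profiles keyed by, e.g., a location identifier.
class ContextCacheDemo {
    public static void main(String[] args) {
        ContextCache<String, String> cache = new ContextCache<>(2);
        cache.put("loc:office", "profile-work");
        cache.put("loc:home", "profile-leisure");
        cache.get("loc:office");                // touch: office becomes most recent
        cache.put("loc:gym", "profile-sport");  // evicts loc:home (least recent)
        System.out.println(cache.keySet());     // prints [loc:office, loc:gym]
    }
}
```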
Abstract:
The aim of this work is to evaluate the SEE sensitivity of a multi-core processor that implements ECC and parity in its cache memories. Two different application scenarios are studied. The first configures the multi-core in Asymmetric Multi-Processing mode running a memory-bound application, whereas the second uses Symmetric Multi-Processing mode running a CPU-bound application. The experiments were validated through radiation ground testing performed with 14 MeV neutrons on the Freescale P2041 multi-core manufactured in 45 nm SOI technology. A deep analysis of the errors observed in the cache memories was carried out in order to reveal vulnerabilities in the cache protection mechanisms. Critical zones such as tag addresses were affected during the experiments. In addition, the results show that the sensitivity strongly depends on the application and on the multi-processing mode used.
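As a generic illustration of the parity protection mentioned above (a sketch of the usual even-parity scheme, not a model of the P2041's actual cache logic; the ParityCheck class is hypothetical), the following Java snippet shows how a stored parity bit detects a single-bit upset on read, while a double-bit flip in the same word would escape detection, which is why data arrays typically rely on ECC rather than plain parity.

```java
// Even-parity check for a cache word: the stored parity bit makes the total
// number of set bits even, so any single-bit upset is detected on read.
final class ParityCheck {
    /** Parity bit to store alongside the word: 1 if the word has an odd number of set bits. */
    static int parityBit(long word) {
        return Long.bitCount(word) & 1;
    }

    /** True if the word read back is consistent with the stored parity bit. */
    static boolean readIsConsistent(long word, int storedParity) {
        return parityBit(word) == storedParity;
    }

    public static void main(String[] args) {
        long word = 0xDEADBEEFL;
        int p = parityBit(word);
        long upset = word ^ (1L << 17);                 // simulate a single-event upset
        System.out.println(readIsConsistent(word, p));  // true
        System.out.println(readIsConsistent(upset, p)); // false: upset detected
    }
}
```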
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The study of the Upper Jurassic-Lower Cretaceous deposits (Higueruelas, Villar del Arzobispo and Aldea de Cortés Formations) of the South Iberian Basin (NW Valencia, Spain) reveals new stratigraphic and sedimentological data, which have significant implications for the stratigraphic framework, depositional environments and age of these units. The Higueruelas Fm was deposited in a mid-inner carbonate platform where oncolitic bars migrated by the action of storms and where oncoid production progressively decreased towards the uppermost part of the unit. The overlying Villar del Arzobispo Fm has traditionally been interpreted as an inner platform-lagoon evolving into a tidal flat. Here it is interpreted as an inner carbonate platform affected by storms, where oolitic shoals protected a lagoon, which had siliciclastic inputs from the continent. The Aldea de Cortés Fm has previously been interpreted as a lagoon surrounded by tidal flats and fluvial-deltaic plains. Here it is reinterpreted as a coastal wetland where siliciclastic muddy deposits interacted with shallow fresh to marine water bodies, aeolian dunes and continental siliciclastic inputs. The contact between the Higueruelas and Villar del Arzobispo Fms, classically defined as gradual, is also interpreted here as rapid. More importantly, the contact between the Villar del Arzobispo and Aldea de Cortés Fms, previously considered unconformable, is here interpreted as gradual. The presence of Alveosepta in the Villar del Arzobispo Fm suggests that at least part of this unit is Kimmeridgian, unlike the previously assigned Late Tithonian-Middle Berriasian age. Consequently, the underlying Higueruelas Fm, previously considered Tithonian, should not be younger than Kimmeridgian. Accordingly, sedimentation of the Aldea de Cortés Fm, previously considered Valanginian-Hauterivian, probably started during the Tithonian, and it may be considered part of the regressive trend of the Late Jurassic-Early Cretaceous cycle. This is consistent with the dinosaur faunas, typically Jurassic, described in the Villar del Arzobispo and Aldea de Cortés Fms.
Abstract:
Nowadays, American television series are an unavoidable part of popular culture, to the point that several audiovisual translations coexist within the French-speaking world. Besides the dubbing that allows them to be broadcast on television, they may be subtitled up to three times, namely, in chronological order: by fans on the Internet; in Quebec, for DVD sales in North America; and in France, for DVD sales in Europe. Yet although these three sets of subtitles answer to the same linguistic constraints (those of the French language) and technical constraints (broadcast on the small screen), they differ in their treatment of the original dialogue. We first establish the practices at work among professionals and amateurs. Then, the analysis of the translations, together with a comparable corpus of French and Quebec television series, makes it possible to establish the linguistic norms (notably with regard to language variety) and cultural norms applied by the various translators and, secondarily, to define what lies behind the label "Canadian French". This thesis belongs to the field of descriptive and sociological studies. We describe the professional reality of audiovisual translators and the influence that fansubbers exert not only on professional practice, but also on new methods for training the next generation of translators. Moreover, by studying several translations of the same work, we show that the varieties of French cannot, on their own, justify the multiplication of subtitling offerings, given the low rate of purely linguistic differences.
Abstract:
Facebook plays a decisive role in the meteoric rise in popularity of social networking sites on the Internet. The entire operation of the platform rests on connecting to other members' accounts. Facebook's popularity owes a great deal to the public and reciprocal sharing of its users' personal information, a type of information that was until now reserved for people belonging to a restricted circle of close friends and intimates. However, Facebook users' contact lists are not made up solely of close friends, but rather of a whole set of interpersonal relationships comprising acquaintances, colleagues, former classmates and so on. Facebook members therefore often disclose their intimate lives to an audience that can be described as public because of its size and diverse origins. Quantitative research on the subject is plentiful, yet the point of view of actual users of digital social networks remains little known. This exploratory study targets the visible expressions of intimacy on digital social networks such as Facebook, and focuses in particular on the emergence of a new form of intimacy exclusive to these networks. To this end, 15 individual interviews were conducted with a population of heavy Facebook users: female university students aged 18 to 24. Drawing on the notions of public space, private space, intimacy, extimacy and visibility, this thesis explores users' representations of the invasion of the territory of intimacy by social networks. It explains the emergence of a possible new form of intimacy, engendered by digital social networks, through the shifting of the boundaries between private and public spaces. According to the results presented in this thesis, Facebook users employ a number of strategies to protect themselves from the negative effects of making their private lives public, while disclosing enough information to maintain their relationships with their friends. Their private lives are therefore public, but only to their own networks. Intimacy is readily displayed on Facebook, to degrees determined by the community of users, whereas it remains hidden in everyday life.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system’s distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access-latency overhead and potential bandwidth saturation on the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the HotSpot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies the locality richness that exists naturally in connected objects comprising a root object and its reachable set, called ‘rooted sub-graphs’. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite. In addition, the evaluation involves the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. The results for the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average performance improvement of 15%. Furthermore, this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing HotSpot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy uses outdated assumptions and generates a constant thread count, and the HotSpot JVM still uses it in the production version. This research shows that the optimal number of garbage collection threads is application-specific and that configuring it yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average 21% improvement in garbage collection performance for the DaCapo benchmarks.
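A minimal Java sketch of the node-local stealing idea described above (illustrative only, not the HotSpot implementation; NumaAwareTracer, childrenOf and markIfUnmarked are hypothetical stubs): each tracer thread drains its own mark stack first and, when idle, steals from sibling threads pinned to the same NUMA node before considering off-node work.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Each GC thread traces the rooted sub-graphs behind its local roots and,
// when idle, steals work from siblings on the same NUMA node to keep
// memory accesses local.
final class NumaAwareTracer {
    private final int numaNode;                                 // node this tracer is pinned to
    private final Deque<Object> workList = new ArrayDeque<>();  // local mark stack

    NumaAwareTracer(int numaNode) { this.numaNode = numaNode; }

    int node() { return numaNode; }

    synchronized void push(Object ref) { workList.addLast(ref); }

    synchronized Object popLocal() { return workList.pollLast(); }

    // Victims hand out work from the opposite end to reduce contention.
    synchronized Object stealFrom() { return workList.pollFirst(); }

    /** Trace until the local work list and all same-node victims are empty. */
    void trace(List<NumaAwareTracer> allTracers) {
        Object ref;
        while ((ref = nextWork(allTracers)) != null) {
            for (Object child : childrenOf(ref)) {   // hypothetical helper
                if (markIfUnmarked(child)) {         // hypothetical helper
                    push(child);
                }
            }
        }
    }

    private Object nextWork(List<NumaAwareTracer> allTracers) {
        Object ref = popLocal();
        if (ref != null) return ref;
        // Prefer siblings on the same NUMA node to keep accesses local.
        for (NumaAwareTracer victim : allTracers) {
            if (victim != this && victim.node() == numaNode) {
                ref = victim.stealFrom();
                if (ref != null) return ref;
            }
        }
        return null;                                 // off-node stealing omitted in this sketch
    }

    private static Iterable<Object> childrenOf(Object ref) { return List.of(); }
    private static boolean markIfUnmarked(Object ref) { return false; }
}
```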
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, in which models replicate the behaviour of the actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuit were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open-source files could be obtained. The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that goes beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
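To give a concrete flavour of word-level parallel gate evaluation, the general technique that SIMD logic simulators build on (a hedged sketch, not ZSIM's actual lock-free data structure; the BitParallelNetlist class is hypothetical), the following Java snippet packs 64 test vectors into a single long so that one bitwise operation evaluates a gate for all 64 vectors at once.

```java
// Word-level parallel gate evaluation: 64 independent test vectors share one
// long per signal, so each bitwise operation simulates a gate 64 times over.
final class BitParallelNetlist {
    enum Op { AND, OR, XOR, NOT }

    // Flat gate arrays: gate i reads signals in1[i] / in2[i] and writes out[i].
    private final Op[] ops;
    private final int[] in1, in2, out;

    BitParallelNetlist(Op[] ops, int[] in1, int[] in2, int[] out) {
        this.ops = ops; this.in1 = in1; this.in2 = in2; this.out = out;
    }

    /** Evaluate every gate once, in topological order, over 64 packed vectors. */
    void evaluate(long[] signals) {
        for (int i = 0; i < ops.length; i++) {
            long a = signals[in1[i]];
            long b = signals[in2[i]];
            switch (ops[i]) {
                case AND -> signals[out[i]] = a & b;
                case OR  -> signals[out[i]] = a | b;
                case XOR -> signals[out[i]] = a ^ b;
                case NOT -> signals[out[i]] = ~a;   // second input ignored
            }
        }
    }

    public static void main(String[] args) {
        // Tiny example: signal 2 = signal 0 AND signal 1, for 64 vectors at once.
        BitParallelNetlist net = new BitParallelNetlist(
                new Op[]{Op.AND}, new int[]{0}, new int[]{1}, new int[]{2});
        long[] signals = {0b1100L, 0b1010L, 0L};
        net.evaluate(signals);
        System.out.println(Long.toBinaryString(signals[2])); // prints 1000
    }
}
```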
Abstract:
Current industry proposals for Hardware Transactional Memory (HTM) focus on best-effort solutions (BE-HTM) in which hardware limits are imposed on transactions. These designs may show significant performance degradation due to high-contention scenarios and to various hardware and operating-system limitations that abort transactions, e.g. cache overflows, hardware and software exceptions, etc. To deal with these events and to ensure forward progress, BE-HTM systems usually provide a software fallback path that executes a lock-based version of the code. In this paper, we propose a hardware implementation of an irrevocability mechanism as an alternative to the software fallback path, to gain insight into the hardware improvements that could enhance the execution of such a fallback. Our mechanism anticipates the abort that causes transaction serialization and stalls the other transactions in the system so that the loss of transactional work is minimized. In addition, we evaluate the main software fallback-path approaches and propose the use of ticket locks, which hold precise information on the number of transactions waiting to enter the fallback; the separation of transactional and fallback execution can thus be achieved in a precise manner. The evaluation is carried out using the Simics/GEMS simulator and the complete range of STAMP transactional suite benchmarks. For a number of benchmarks we obtain significant performance benefits: roughly double the speedup of the software fallback path and a 50% reduction in aborts.
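The ticket-lock idea is simple to illustrate: the gap between tickets issued and the ticket currently being served is exactly the number of waiters, which is the precise information the fallback-path separation relies on. The following Java sketch (a software illustration with a hypothetical TicketLock class, not the paper's Simics/GEMS hardware model) shows the mechanism.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Ticket lock: threads take a ticket and spin until their number is served;
// issued - served gives the exact number of threads holding or awaiting the lock.
final class TicketLock {
    private final AtomicInteger nextTicket = new AtomicInteger(0); // tickets issued
    private final AtomicInteger nowServing = new AtomicInteger(0); // ticket being served

    void lock() {
        int myTicket = nextTicket.getAndIncrement();
        while (nowServing.get() != myTicket) {
            Thread.onSpinWait(); // busy-wait until it is our turn
        }
    }

    void unlock() {
        nowServing.incrementAndGet();
    }

    /** Number of threads currently holding or waiting for the lock. */
    int queueLength() {
        return nextTicket.get() - nowServing.get();
    }
}
```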
Abstract:
This research-creation thesis is accompanied by the short film «Tala». To view it online: vimeo.com/ondemand/talapierphilippechevigny
Abstract:
Background: The most frequent viral diseases that can cause abortion in sheep are Bluetongue, Border disease, Cache Valley fever and Schmallenberg virus. The diagnosis of abortion, namely virus-induced abortion, represents a challenge to field clinicians, since the clinical signs presented by the dam are discrete, non-specific and variable (Agerhom et al., 2015). On the other hand, while some foetuses reveal characteristic and visible malformations, others do not reveal any lesions. In view of this, a definitive diagnosis requires an appropriate case history, as well as sending fresh samples, namely abortion material, foetus, placenta and umbilical cord, to a specialty laboratory. Objectives: The authors propose a method for recording all mandatory data, in order to assist the laboratory diagnosis of viral diseases, including the most frequent congenital malformations reported in sheep abortions. Methods: Abortion samples of suspected viral origin were collected and all data were recorded in worktables optimized for this purpose. Results: The authors document, with macroscopic images, lesions and malformations in abortions, emphasizing the frequency and the importance of documenting each case, and propose practical and effective worktables to assist fieldwork. Conclusions: Field clinicians' awareness of the importance of early detection of viral diseases causing abortion outbreaks encourages proper data collection for each abortion case, contributing to a precise diagnosis and to subsequent consistent epidemiological studies, which may help reduce economic losses.