979 results for Modèles cache
Abstract:
Completed under a joint supervision (cotutelle) agreement with Aix Marseille Université.
Abstract:
This research project was carried out in collaboration with FPInnovations. Part of the work on the Chilean harvesting problem was performed at the Instituto Sistemas Complejos de Ingeniería (ISCI) in Santiago, Chile.
Abstract:
Any drug administered orally must be absorbed without being metabolized by the intestine and the liver in order to reach the systemic circulation. Despite its major impact on the first-pass effect of many drugs, intestinal metabolism is often neglected compared with hepatic metabolism. The objective of this master's work is therefore to use, characterize and develop different in vitro and in vivo tools to better understand and predict the impact of intestinal metabolism on the first-pass effect of drugs relative to hepatic metabolism. To this end, various substrates of drug-metabolizing enzymes were incubated in intestinal and hepatic microsomes, and differences in metabolic rate and in the metabolites produced were demonstrated. To better understand the impact of these differences in vivo, mechanistic studies in cannulated animals treated with enzyme inhibitors were conducted with the substrate metoprolol. These studies demonstrated the impact of intestinal metabolism on the first pass of metoprolol. They also revealed the effect on gastric emptying of 1-aminobenzotriazole, a cytochrome P450 inhibitor, thereby preventing misuse of this tool in the future. This master's work improved knowledge of the different in vitro and in vivo tools for studying intestinal metabolism while providing a better understanding of the respective roles of the intestine and the liver in the first-pass effect.
Abstract:
Data caching is an important technique for improving data availability and access latency in mobile computing environments, which are characterized by narrow-bandwidth wireless links and frequent disconnections. The cache replacement policy plays a vital role in the performance of a cached mobile environment, since the amount of data a client cache can hold is small. In this paper we review some of the well-known cache replacement policies proposed for mobile data caches, classify them by the criteria they use to evict documents, and compare them. In addition, we suggest some alternative cache replacement techniques.
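As a point of reference for the surveyed policies, here is a minimal sketch of the classic LRU eviction rule, the usual baseline in this literature; the capacity handling is illustrative and not taken from any of the reviewed papers.

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU cache: evicts the least recently used document when full."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()  # key -> document, oldest first

        def get(self, key):
            if key not in self.items:
                return None  # cache miss
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]

        def put(self, key, document):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = document
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict least recently used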
Abstract:
Cooperative caching in mobile ad hoc networks aims at improving the efficiency of information access by reducing access latency and bandwidth usage. The cache replacement policy plays a vital role in the performance of a cache in a mobile node, since the node has limited memory. In this paper we propose a new key-based cache replacement policy, called E-LRU, for cooperative caching in ad hoc networks. The proposed scheme considers the time interval between recent references, item size and consistency as the key factors for replacement. A simulation study shows that the proposed replacement policy can significantly improve cache performance in terms of cache hit ratio and query delay.
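The abstract does not give E-LRU's exact scoring rule, so the following sketch only illustrates the stated idea: rank cached items by the interval between their two most recent references, evicting stale copies first and breaking ties toward larger items. The record fields and the combination order are assumptions.

    import time

    def evict_candidate(cache):
        """Pick a victim in the spirit of E-LRU: prefer inconsistent (stale)
        items, then items with long inter-reference intervals, breaking ties
        toward larger items. cache: dict key -> record with fields
        'last_two_refs' (timestamps), 'size' (bytes), 'consistent' (bool).
        The scoring below is illustrative only."""
        now = time.time()
        def score(record):
            t = record['last_two_refs']
            # interval between the two most recent references; fall back to age
            interval = (t[-1] - t[-2]) if len(t) >= 2 else (now - t[-1])
            return (record['consistent'],   # stale (False) sorts first
                    -interval,              # longest interval first
                    -record['size'])        # largest item first
        return min(cache, key=lambda k: score(cache[k]))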
Abstract:
Cooperative caching is used in mobile ad hoc networks to reduce the latency perceived by mobile clients when retrieving data and to reduce the traffic load in the network. Caching also increases data availability in the face of server disconnections. Implementing a cooperative caching technique essentially involves four major design considerations: (i) cache placement and resolution, which decide where to place and how to locate the cached data; (ii) cache admission control, which decides which data to cache; (iii) cache replacement, which makes the replacement decision when the cache is full; and (iv) consistency maintenance, i.e. keeping the data in the server and the cache consistent. In this paper we propose an effective cache resolution technique that reduces the number of messages flooded into the network to find the requested data. The experimental results are promising on the metrics studied.
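Read as an architecture, the four design considerations map onto four cooperating components; the skeleton below is a hypothetical decomposition for orientation, not the paper's implementation.

    class CooperativeCache:
        """Hypothetical skeleton of the four design considerations."""

        def resolve(self, key):
            """(i) Placement/resolution: locate a cached copy of `key`,
            ideally without flooding the whole network."""
            raise NotImplementedError

        def admit(self, key, data):
            """(ii) Admission control: decide whether `data` is worth caching."""
            raise NotImplementedError

        def evict(self):
            """(iii) Replacement: choose a victim when the cache is full."""
            raise NotImplementedError

        def refresh(self, key):
            """(iv) Consistency: keep the cached copy in sync with the server."""
            raise NotImplementedError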
Abstract:
In this paper we investigate the problem of cache resolution in a mobile peer-to-peer ad hoc network. In our vision, cache resolution should satisfy two requirements: (i) it should incur low message overhead, and (ii) the information should be retrieved with minimum delay. We show that these goals can be achieved by splitting the one-hop neighbours into two sets based on the transmission range. The proposed approach reduces the number of messages flooded into the network to find the requested data; the scheme is fully distributed and comes at very low cost in terms of cache overhead. The experimental results are promising on the metrics studied.
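A minimal sketch of the neighbour-splitting idea, assuming each node knows its one-hop neighbours' distances: neighbours inside a chosen inner radius form a near set that is queried first, and the remaining neighbours are queried only on a miss. The radius, data structures and query order are illustrative assumptions.

    def split_neighbours(neighbours, inner_range):
        """Split one-hop neighbours into two sets by distance.
        neighbours: dict node_id -> distance (same units as inner_range)."""
        near = {n for n, d in neighbours.items() if d <= inner_range}
        far = set(neighbours) - near
        return near, far

    def resolve(key, neighbours, inner_range, query):
        """Query the near set first; only on a miss fall back to the far set,
        so most lookups avoid flooding the full neighbourhood.
        query(node, key) is a callback returning True on a cache hit."""
        near, far = split_neighbours(neighbours, inner_range)
        for group in (near, far):
            hits = [n for n in group if query(n, key)]
            if hits:
                return hits[0]  # any holder of the data
        return None  # fall back to the data source/server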
Abstract:
Data caching is an attractive solution for reducing bandwidth demands and network latency in mobile ad hoc networks. Deploying caches in mobile nodes can reduce the overall traffic considerably, and cache hits eliminate the need to contact the data source frequently, avoiding additional network overhead. In this paper we propose a data discovery and cache management policy for cooperative caching that reduces power usage, caching overhead and delay by reducing the number of control messages flooded into the network. A cache discovery process based on the position coordinates of neighboring nodes is developed for this purpose. The simulation results are promising on the metrics studied.
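A sketch of position-based discovery, assuming nodes periodically exchange their coordinates and a summary of the keys they cache: a request is sent to the closest neighbour believed to hold the data instead of being flooded. The field names and plain Euclidean distance are illustrative assumptions.

    import math

    def nearest_holder(my_pos, neighbours, key):
        """Pick the closest neighbour believed to cache `key`.
        neighbours: dict node_id -> {'pos': (x, y), 'keys': set_of_cached_keys}.
        Returns a node_id or None; Euclidean distance is used for illustration."""
        holders = {n: info for n, info in neighbours.items() if key in info['keys']}
        if not holders:
            return None  # no known cached copy; fall back to the data source
        return min(holders, key=lambda n: math.dist(my_pos, holders[n]['pos']))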
Abstract:
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, it is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance. If separate controllers for processing protocol requests are included, the protocol scales to 32k-processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be satisfiable locally, and at most 35% of the global references may reach the top level of the hierarchy.
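To make the "height reached in the hierarchy" metric concrete, here is a toy model, not PHD itself: directories form a complete tree over the processors, and a request climbs from the requester until it reaches the lowest directory whose subtree contains a copy. The tree layout and numbering are assumptions for illustration.

    def height_reached(requester, holders, fanout, levels):
        """Toy model of a hierarchical directory lookup in a complete
        `fanout`-ary tree with `levels` levels of directories. Processors
        are leaves numbered 0..fanout**levels - 1. The request climbs until
        some holder falls inside the current subtree; returns the height
        reached (0 = satisfied locally at the requester)."""
        for h in range(levels + 1):
            subtree = requester // (fanout ** h)  # ancestor index at height h
            if any(p // (fanout ** h) == subtree for p in holders):
                return h
        return levels  # no copy anywhere: request reaches the top

    # Example: binary tree, 5 levels; a copy at the sibling leaf is found
    # one level up the hierarchy.
    assert height_reached(6, {7}, fanout=2, levels=5) == 1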
Abstract:
Caches are known to consume up to half of all system power in embedded processors. Co-optimizing the performance and power of the cache subsystem is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. In this project, we propose an analytical cache model that succinctly captures the miss performance of an application over the entire cache parameter space. Unlike exhaustive trace-driven simulation, our model requires that the program be simulated only once, so that a few key characteristics can be obtained. Using these application-dependent characteristics, the model can span the entire cache parameter space, consisting of cache size, associativity and cache block size. Our unified model caters for direct-mapped, set-associative and fully associative instruction, data and unified caches. Validation against full trace-driven simulations shows that the model has a high degree of fidelity. Finally, we show how the model can be coupled with a cache power model so that one can very quickly identify Pareto-optimal performance-power design points for rapid design space exploration.
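The abstract does not give the model's equations, so the sketch below only shows how such a model would be used for design space exploration: placeholder miss-rate and power estimators are evaluated once per point of the (size, associativity, block size) space, and the Pareto-optimal points are kept. Both estimator functions are stand-ins, not the paper's model.

    from itertools import product

    def pareto(points):
        """points: list of (misses, power, config). Keep non-dominated points:
        no other point is <= on both objectives and < on at least one."""
        keep = []
        for m, p, c in points:
            if not any(m2 <= m and p2 <= p and (m2 < m or p2 < p)
                       for m2, p2, _ in points):
                keep.append((m, p, c))
        return keep

    # Hypothetical placeholders for the analytical miss model and power model.
    def miss_rate(size, assoc, block):   # stand-in for the paper's model
        return 1.0 / (size * assoc) + block / 4096.0

    def power(size, assoc, block):       # stand-in for a cache power model
        return size * assoc * 0.001 + block * 0.0005

    space = product([8, 16, 32], [1, 2, 4], [16, 32, 64])  # KiB, ways, bytes
    points = [(miss_rate(s, a, b), power(s, a, b), (s, a, b))
              for s, a, b in space]
    for m, p, cfg in pareto(points):
        print(f"size={cfg[0]}KiB ways={cfg[1]} block={cfg[2]}B "
              f"miss={m:.4f} power={p:.3f}")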
Abstract:
gvSIG Mini is an open-source, end-user mobile client for Spatial Data Infrastructures (SDIs), licensed under the GNU GPL and designed for Java and Android mobile phones. It allows viewing and browsing of tiled digital cartography from OGC web services such as WMS(-C) and from services such as OpenStreetMap (OSM), Yahoo Maps and Bing Maps, with tile caching to minimize bandwidth usage. gvSIG Mini can access geospatial services such as NameFinder, for searching points of interest, and YOURS (Yet Another OpenStreetMap Routing Service), for route calculation and client-side rendering of vector information. gvSIG Mini also provides GPS positioning. The Android version of gvSIG Mini has some additional features, such as Android location support and use of the accelerometer for re-centering. This version also uses services such as weather forecasting and TweetMe, which lets a location be shared through the popular social service Twitter. gvSIG Mini can be downloaded and used freely, making it a platform for developing new solutions and applications in the field of Location Based Services (LBS). gvSIG Mini was developed by Prodevelop, S.L. It is not an official gvSIG project, but it joins the family through the catalogue of unofficial gvSIG extensions. Phone Cache is an extension running on gvSIG 1.1.2 that generates a cache so that gvSIG Mini for Java can be used in disconnected mode.
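To illustrate the tile caching the abstract credits with minimizing bandwidth, here is a minimal sketch assuming the standard XYZ tile scheme used by OSM-style services; the cache layout and URL template are illustrative, not gvSIG Mini's actual code.

    import math, os, urllib.request

    def tile_xy(lat, lon, zoom):
        """Standard OSM/XYZ tile indices for a WGS84 coordinate."""
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return x, y

    def get_tile(lat, lon, zoom, cache_dir="tile_cache",
                 url="https://tile.openstreetmap.org/{z}/{x}/{y}.png"):
        """Return tile bytes, downloading only on a cache miss."""
        x, y = tile_xy(lat, lon, zoom)
        path = os.path.join(cache_dir, str(zoom), str(x), f"{y}.png")
        if os.path.exists(path):                   # cache hit: no network use
            with open(path, "rb") as f:
                return f.read()
        os.makedirs(os.path.dirname(path), exist_ok=True)
        data = urllib.request.urlopen(url.format(z=zoom, x=x, y=y)).read()
        with open(path, "wb") as f:                # populate the cache
            f.write(data)
        return data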