633 results for Tcp


Relevance:

10.00%

Publisher:

Abstract:

Today, the development of domain-specific communication applications is both time-consuming and error-prone, because the low-level communication services provided by existing systems and networks are primitive and often heterogeneous. Multimedia communication applications are typically built on top of low-level network abstractions such as TCP/UDP sockets and the SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol) APIs. The User-centric Communication Middleware (UCM) is proposed to encapsulate the networking complexity and heterogeneity of basic multimedia and multi-party communication for upper-layer communication applications. UCM provides a unified, user-centric communication service to diverse communication applications, ranging from simple phone calls and video conferencing to specialized applications such as disaster management and telemedicine, which eases the development of domain-specific communication applications. The UCM abstraction and API are proposed to achieve these goals. The dissertation also integrates formal methods into the UCM development process. A formal model of UCM is created using the SAM methodology; several design errors were found during model creation, because the formal method forces a precise description of UCM. Using the SAM tool, the formal UCM model is translated into a Promela model. Selected system properties are defined as temporal logic formulas, manually translated into Promela, individually integrated with the Promela model of UCM, and verified using the SPIN tool. This formal analysis helps verify system properties (for example, of the multiparty multimedia protocol) and uncover system bugs.
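By way of contrast, the kind of low-level socket plumbing that a middleware like UCM is meant to hide can be sketched in Python; this is only a minimal illustration of the baseline, since the abstract does not show UCM's own API:

```python
import socket

def send_message(host: str, port: int, payload: bytes) -> bytes:
    """Minimal raw TCP exchange: addresses, timeouts and partial reads are
    all left to the caller, which is exactly the transport-level detail
    that a communication middleware is designed to encapsulate."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
        return sock.recv(4096)  # no framing: one read may not be one message
```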

Relevance:

10.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata of real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
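The abstract does not spell out the mechanics of time dilation; the underlying idea, sketched here in Python with a hypothetical dilation factor, is to slow a host's virtual clock so that a simulator slower than real time still appears "live" to the real hosts synchronized against it:

```python
import time

class DilatedClock:
    """Virtual clock running `tdf` times slower than wall-clock time.
    With tdf = 10, ten real seconds appear as one virtual second, so a
    simulator that executes 10x slower than real time can still keep
    pace with the real hosts that use this clock."""

    def __init__(self, tdf: float):
        self.tdf = tdf
        self._start = time.monotonic()

    def now(self) -> float:
        # Virtual elapsed time = real elapsed time / dilation factor.
        return (time.monotonic() - self._start) / self.tdf
```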

Relevance:

10.00%

Publisher:

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size, and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. With network simulation, we can make a distinction between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require significant accuracy. The background traffic has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and therefore can drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three respects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing its two unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which is derived from analytical models of TCP congestion control behavior. This model outperforms other existing traffic models in that it can correctly capture overall TCP behavior and achieve a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on how to generate traffic on a single link, which cannot be extended easily to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
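The abstract does not reproduce RTCP's equations; the classic Mathis et al. steady-state throughput formula is one well-known example of the kind of analytical TCP model that rate-based approaches build on, sketched here in Python:

```python
from math import sqrt

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Steady-state TCP throughput in bytes/s per the Mathis et al. model:
    rate ~= (MSS / RTT) * sqrt(3 / (2p)); valid for small loss rates p > 0."""
    return (mss_bytes / rtt_s) * sqrt(3.0 / (2.0 * loss_rate))

# Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 179 KB/s.
print(f"{mathis_throughput(1460, 0.1, 0.01):.0f} bytes/s")
```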

Relevance:

10.00%

Publisher:

Abstract:

Supervisory Control & Data Acquisition (SCADA) systems are used by many industries because of their ability to manage sensors and control external hardware. The problem with commercially available systems is that they are restricted to a local network of users running proprietary software. There was no Internet development guide giving remote users outside that network control of, and access to, SCADA data and external hardware through simple user interfaces. To solve this problem, a server/client paradigm was implemented to make SCADAs available via the Internet. Two methods were applied and studied: polling of a text file, as a low-end technology solution, and a Transmission Control Protocol/Internet Protocol (TCP/IP) socket connection. Users were allowed to log in to a website and remotely control a network of pumps and valves interfaced to a SCADA, enabling them to sample the water quality of different reservoir wells. The results were based on the real-time performance, stability, and ease of use of the remote interface and its programming, and indicated that the most feasible server design is the TCP/IP socket connection. For the user interface, Java applets and ActiveX controls provide the same real-time access.
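As an illustration of the TCP/IP server approach the thesis found most feasible, here is a minimal line-oriented command server in Python; the command format and port are hypothetical, not taken from the thesis:

```python
import socketserver

class ScadaHandler(socketserver.StreamRequestHandler):
    """Accepts one text command per line (e.g. 'OPEN VALVE3') from a
    remote client and returns an acknowledgement. A real deployment
    would relay the command to the SCADA's hardware interface."""

    def handle(self):
        for line in self.rfile:
            command = line.decode().strip()
            # ... forward `command` to the SCADA here ...
            self.wfile.write(f"ACK {command}\n".encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5050), ScadaHandler) as server:
        server.serve_forever()
```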

Relevance:

10.00%

Publisher:

Abstract:

The ability of a previously PCB-enriched microbial culture from Venice Lagoon marine sediments to dechlorinate pentachlorophenol (PCP) and 2,3,5-trichlorophenol (2,3,5-TCP) was confirmed under anaerobic conditions in microcosms consisting of site water and sediment. Dechlorination activity against the Aroclor 1254 PCB mixture was also confirmed as a control. Pentachlorophenol was degraded to 2,4,6-TCP (75.92±0.85 mol%), 3,5-DCP (6.40±0.75 mol%), and phenol (15.40±0.87 mol%). From the distribution of the different dechlorination products accumulated in the PCP-spiked cultures over time, two dechlorination pathways for PCP were proposed: (i) PCP to 2,3,4,6-TeCP, then to 2,4,6-TCP, through the removal of both meta double-flanked chlorine substituents (main pathway); (ii) alternatively, PCP to 2,3,5,6-TeCP, 2,3,5-TCP, 3,5-DCP, then phenol, through the removal of the para double-flanked chlorine, followed by ortho single-flanked chlorines, and finally meta unflanked chlorines (minor pathway). Removal of meta double-flanked chlorines is thus preferred over all other substituents. 2,3,5-TCP, which completely lacks double-flanked chlorines, was degraded to 3,5-DCP through removal of the ortho single-flanked chlorine, with a 99.6% reduction in the initial concentration of 2,3,5-TCP by week 14. 16S rRNA PCR-DGGE using Chloroflexi-specific primers revealed different roles for the two microorganisms VLD-1 and VLD-2 (previously identified as dechlorinators in the Aroclor 1254 PCB-enriched community) in the dehalogenation of chlorophenols. VLD-1 was observed in both PCP- and TCP-dechlorinating communities, whereas VLD-2 was observed only in TCP-dechlorinating communities. This indicates that VLD-1 and VLD-2 may both dechlorinate ortho single-flanked chlorines, but only VLD-1 is able to remove double-flanked meta or para chlorines.

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution for deploying a Platform as a Service (PaaS) over a set of multi-site data centers, applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to obtain evaluations with different types of TCP sample connections, in order to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
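The paper's measurement tooling is not named in the abstract; a crude, iperf-like throughput probe over a single sample TCP connection could be sketched as follows (the sink address and transfer size are assumptions):

```python
import socket
import time

def measure_throughput(host: str, port: int, total_bytes: int = 10_000_000) -> float:
    """Push `total_bytes` of dummy data over one TCP connection to a
    discard-style sink at (host, port) and return the rate in Mbit/s."""
    chunk = b"x" * 65536
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(chunk)
            sent += len(chunk)
    elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e6
```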

Relevance:

10.00%

Publisher:

Abstract:

This thesis work is part of an academic project aimed at building a system capable of processing images, acquired from a sensor, on an FPGA. Each write of a new frame into RAM generates an interrupt. The goal of the thesis is to create a client/server system that transfers the stream of frames from the ZedBoard to a PC and displays it on screen. The project running on the ZedBoard comes in two versions: the first without an operating system (standalone), and a second implemented on Linux. The project running on the PC is compatible with both Linux and Windows. Image display is implemented using the OpenCV library.
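As a sketch of what the PC-side client might look like in Python with OpenCV (the thesis does not specify its wire format, so the length-prefixed JPEG framing below is a hypothetical choice):

```python
import socket
import struct

import cv2
import numpy as np

def show_stream(host: str, port: int) -> None:
    """Receive length-prefixed JPEG frames over TCP and display them
    with OpenCV; press ESC to quit."""
    with socket.create_connection((host, port)) as sock:
        while True:
            header = sock.recv(4, socket.MSG_WAITALL)
            if len(header) < 4:
                break
            (size,) = struct.unpack("!I", header)  # 4-byte frame length
            data = sock.recv(size, socket.MSG_WAITALL)
            frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow("ZedBoard stream", frame)
            if cv2.waitKey(1) == 27:
                break
    cv2.destroyAllWindows()
```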

Relevance:

10.00%

Publisher:

Abstract:

This work consists of the implementation of a user interface for assigning missions to, and remotely monitoring, an autonomous agricultural rover. While computer science provides the means of its implementation, the interface finds its application in the fields of automation and precision agriculture. The user can drive the rover in the open field and delegate specific missions to it, while receiving continuous feedback on its operation. The software application communicates bidirectionally with the controlled vehicle and is designed to exploit several communication channels (serial radio links, UDP packets, TCP sockets). Writing the code was followed by a series of communication tests with the vehicle performed indoors, and finally by several complete tests performed outdoors with the rover in motion.
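Of the channels listed, the UDP one is the simplest to sketch in Python; the telemetry format below ('lat,lon,heading' per datagram) and the port are purely illustrative, not taken from the thesis:

```python
import socket

def listen_telemetry(port: int = 9000) -> None:
    """Print a continuous feedback line for each telemetry datagram
    received from the rover (hypothetical 'lat,lon,heading' payload)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        while True:
            data, addr = sock.recvfrom(1024)
            lat, lon, heading = data.decode().split(",")
            print(f"rover @ {addr[0]}: lat={lat} lon={lon} heading={heading}")
```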

Relevance:

10.00%

Publisher:

Abstract:

"Challenged networks" are networks in which long delays, frequent partitioning and disruptions, and high error and loss rates prevent the use of the classic Internet communication protocols, in particular TCP/IP. Delay-/Disruption-Tolerant Networking (DTN) is a solution for transferring data across such networks. The DTN architecture introduces, above the transport layer, a "bundle layer" that carries messages, or bundles, following a store-and-forward approach: each DTN node stores a bundle persistently until an opportunity arises to forward it to the next node towards the destination. The protocol used in the bundle layer is the Bundle Protocol, which has three main implementations: DTN2, the reference implementation; ION, developed by NASA-JPL and oriented mainly towards space communications; and IBR-DTN, targeted above all at embedded devices. Each of them offers APIs for writing applications that can send and receive bundles. DTNperf is a tool designed for performance evaluation in DTN environments. Its most recent iteration, DTNperf_3, is compatible with both DTN2 and ION within the same version of the program, thanks to the introduction of an "Abstraction Layer" that provides a single interface for interacting with the different Bundle Protocol implementations and that only internally invokes the APIs specific to the active implementation. The goal of this thesis is to extend the Abstraction Layer so that it also supports IBR-DTN, allowing DTNperf_3 to be used interchangeably on DTN2, ION, and IBR-DTN. The work is split into three phases: in the first, we explore IBR-DTN and its APIs; in the second, we carry out the actual extension of the Abstraction Layer; in the third, we verify the operation of DTNperf after the changes, both in an IBR-DTN-only environment and in a hybrid one.
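DTNperf_3's actual Abstraction Layer is written against the C APIs of the implementations; purely as an illustration of the dispatch pattern it embodies (one interface, one backend per Bundle Protocol implementation), with hypothetical method names:

```python
from abc import ABC, abstractmethod

class BundleProtocolAPI(ABC):
    """Single interface the tool codes against; each backend wraps the
    native API of one Bundle Protocol implementation."""

    @abstractmethod
    def register(self, endpoint_id: str) -> None: ...

    @abstractmethod
    def send_bundle(self, destination: str, payload: bytes) -> None: ...

    @abstractmethod
    def receive_bundle(self, timeout_s: float) -> bytes: ...

class IBRDTNBackend(BundleProtocolAPI):
    """Extending the layer means supplying one more backend like this,
    which would internally invoke IBR-DTN's client API."""

    def register(self, endpoint_id: str) -> None:
        raise NotImplementedError("would call IBR-DTN's registration API")

    def send_bundle(self, destination: str, payload: bytes) -> None:
        raise NotImplementedError("would call IBR-DTN's send API")

    def receive_bundle(self, timeout_s: float) -> bytes:
        raise NotImplementedError("would call IBR-DTN's receive API")
```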

Relevance:

10.00%

Publisher:

Abstract:

Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ t_e/n, and by k(1 - p)/(n - t_e) when p > t_e/n, where t_e is the erasure control capability of the code. It also shows that the lower bound on the residual loss rate of such a network is (np - t_e)/(n - t_e) for t_e/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1 - p, can be achieved, and the residual loss rate is lower bounded by (p + r - 1)/r for 1 - r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Comparisons between the two erasure control schemes exhibit their respective advantages and disadvantages in delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study demonstrates how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
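Restated directly from the bounds above, as a small Python helper:

```python
def throughput_cap(n: int, k: int, t_e: int, p: float) -> float:
    """Throughput cap of an (n, k) forward erasure control code whose
    erasure control capability is t_e, at packet loss rate p."""
    if p <= t_e / n:
        return k / n          # cap is the code rate r = k/n
    return k * (1 - p) / (n - t_e)

def residual_loss_bound(n: int, t_e: int, p: float) -> float:
    """Lower bound on the residual loss rate, for t_e/n < p <= 1."""
    return (n * p - t_e) / (n - t_e)
```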

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to determine haematological and blood biochemical values in baby alpacas with enteric disorders. A total of 30 blood and serum samples were collected from one-month-old alpacas with diarrhoea, along with 5 blood samples from clinically healthy baby alpacas (controls). The animals came from communities in the central Andes of Peru. Haematocrit, haemoglobin concentration, red blood cell count, and white blood cell count were determined and did not differ significantly between control animals and animals with diarrhoea. However, the blood biochemical parameters total protein, albumin, and calcium decreased significantly (p<0.05). We conclude that these results could be considered factors in the mortality of baby alpacas from infectious diarrhoea.

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Type 2 diabetes is a progressive, debilitating, and often fatal disease that affects more and more people worldwide. Non-insulin antidiabetic treatment (TADNI), notably oral antidiabetic treatment (TADO), is the treatment most frequently used in adults with this disease. However, many of these patients do not take their TADO as prescribed, raising the problem of sub-optimal adherence. This has harmful consequences for patients as well as for the society in which they live. It is therefore relevant to identify possible solutions to this problem. Objectives: Three research objectives were studied: 1) to explore the capacity of the theory of planned behaviour (TCP) to predict future adherence to TADNI in adults with type 2 diabetes; 2) to evaluate the overall effectiveness of interventions aimed at improving adherence to TADO in adults with type 2 diabetes, and to study the influence of behaviour change techniques on this overall effectiveness; and 3) to evaluate the overall effectiveness of motivational interviewing on medication adherence in adults with chronic disease, and to study the influence of the characteristics of this intervention on its overall effectiveness. Methods: For objective 1, a web survey was conducted, followed by an assessment of adherence to TADNI over a 30-day period, among adults with type 2 diabetes who were members of Diabète Québec. The survey consisted of a self-administered questionnaire covering the TCP variables (intention, perceived behavioural control, and attitude) as well as other so-called "external" variables. The information needed to compute adherence came from the participants' pharmacy records, transmitted via the ReMed platform. Multivariate linear regression was used to estimate the association between intention and future adherence to TADNI, as well as the interaction between past adherence and intention. For objectives 2 and 3, two systematic reviews and meta-analyses were carried out and reported according to the PRISMA guidelines. A random-effects model was used in each review to estimate the overall effectiveness of the interventions (Hedges' g) and its 95% confidence interval (95% CI). We also quantified heterogeneity between studies (Higgins' I²) and performed subgroup and sensitivity analyses. Results: Objective 1: There was a statistically significant interaction between past adherence and intention (p-value = 0.03). Intention was not statistically associated with future adherence to TADNI, but its effect was stronger among those who were non-adherent before the web survey than among those who were adherent. Intention, in turn, was mainly predicted by perceived behavioural control, both among past adherers [β = 0.90, 95% CI = (0.80; 1.00)] and among past non-adherers [β = 0.76, 95% CI = (0.56; 0.97)]. Objective 2: The overall effectiveness of interventions on adherence to TADO was 0.21 [95% CI = (-0.05; 0.47); I² = 82%]. The overall effectiveness of interventions in which the providers helped patients and/or clinicians to be proactive in managing adverse effects was 0.64 [95% CI = (0.31; 0.96); I² = 56%].
Objective 3: The overall effectiveness of interventions (based on motivational interviewing) on medication adherence was 0.12 [95% CI = (0.05; 0.20); I² = 1%]. Interventions based solely on motivational interviewing [β = 0.18, 95% CI = (0.00; 0.36)] and those in which the providers were coached [β = 0.47, 95% CI = (0.03; 0.90)] were the most effective. Interventions delivered face-to-face were also more effective than those delivered by telephone [β = 0.27, 95% CI = (0.04; 0.50)]. Conclusion: There is a gap between intention and future adherence to TADNI, which is partially explained by the level of past adherence. However, there was not enough statistical power to demonstrate a statistically significant association between intention and future adherence among past non-adherers. On the other hand, several solutions to the problem of sub-optimal adherence to TADO were identified. Indeed, helping patients and/or clinicians to be proactive in the management of adverse effects contributes effectively to improving adherence to TADO in adults with type 2 diabetes. Likewise, interventions based on motivational interviewing effectively improve medication adherence in adults with chronic disease. Motivational interviewing could therefore be used as a clinical tool to support patients in the self-management of their TADO.
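For reference (the abstract itself does not spell it out), the standard small-sample-corrected definition of Hedges' g for two groups is:

```latex
g = \left(1 - \frac{3}{4\,df - 1}\right)\frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad df = n_1 + n_2 - 2
```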

Relevance:

10.00%

Publisher:

Abstract:

Because of its vast extent, the Canadian North presents several logistical challenges for the profitable exploitation of its mineral resources. Remote predictive mapping ("TéléCartographie Prédictive", TCP) aims to facilitate the localization of ore deposits by producing maps of geological potential. Elevation data are needed to generate these maps, but those currently available north of the 60th parallel are not optimal, mainly because they are derived from contour lines with variable spacing and elevations rounded to the metre. At the same time, knowing the vertical accuracy of elevation data is essential in order to use them appropriately, taking the constraints tied to that accuracy into account. The project presented here addresses these two problems, with the aim of improving the quality of elevation data and helping to refine the predictive mapping carried out by TCP in the Canadian North, for a study area located in the Northwest Territories. The first objective was to produce control points allowing a precise evaluation of the vertical accuracy of elevation data. The second objective was to produce an improved elevation model for the study area. The thesis first presents a filtering method for the Global Land Surface Altimetry Data (GLA14) of the ICESat (Ice, Cloud and land Elevation Satellite) mission. The filtering is based on a series of indicators computed from information available in the GLA14 data and from terrain conditions; these indicators make it possible to eliminate potentially contaminated elevation points. Points are thus filtered according to the quality of the computed attitude, signal saturation, equipment noise, atmospheric conditions, slope, and the number of echoes. The document then describes a method for producing improved Digital Surface Models (DSMs) by stereo-radargrammetry (SRG) with Radarsat-2 (RS-2). The first part of the adopted methodology consists in the stereo-restitution of DSMs from pairs of RS-2 images, without control points. The accuracy of the preliminary DSMs produced in this way is computed against the control points obtained from the filtered GLA14 data, and analysed as a function of the combinations of incidence angles used for the stereo-restitution. Selections of preliminary DSMs are then assembled to produce five DSMs, each covering the entire study area, and these are analysed to identify the optimal selection for the area of interest. The indicators selected for the filtering method were validated as effective and complementary, with the exception of the indicator based on the signal-to-noise ratio, which was redundant with the gain-based indicator; otherwise, each indicator filtered points exclusively. The filtering method reduced the root-mean-square error on elevation by 19% when compared with the Canadian Digital Elevation Data (DNEC). Despite a 69% rejection rate after filtering, the initial density of the GLA14 data preserved a homogeneous spatial distribution. Among the 136 preliminary DSMs analysed, no combination of incidence angles of the acquired RS-2 images could be identified as ideal for SRG, owing to the large variability of the vertical accuracies.
However, the analysis indicated that the images should ideally be acquired at temperatures below 0°C, to minimize radiometric disparities between scenes. The results also confirmed that slope is the main factor influencing the accuracy of DSMs produced by SRG. The best vertical accuracy, 4 m, was achieved by assembling configurations with the same viewing direction. Opposite-direction configurations, however, besides yielding an accuracy of the same order (5 m), reduced the number of images used by 30% relative to the number initially acquired. Consequently, using images with opposite viewing directions could make SRG projects more efficient by shortening the acquisition period. The elevation data produced could in turn help improve TCP results, increase the performance of the Canadian mining industry and, ultimately, improve the quality of life of the citizens of Northern Canada.
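A sketch of the indicator-based filtering described above, in Python; every field name and threshold here is illustrative only, not taken from the thesis or from the GLA14 record layout:

```python
def keep_gla14_point(rec: dict) -> bool:
    """Keep a GLA14 elevation point only if none of the quality
    indicators suggests contamination (illustrative thresholds)."""
    return (
        rec["attitude_quality_ok"]          # quality of the computed attitude
        and not rec["signal_saturated"]     # signal saturation
        and rec["gain"] <= 100              # equipment noise, via receiver gain
        and rec["cloud_flag"] == 0          # atmospheric conditions
        and rec["slope_deg"] <= 10.0        # terrain slope
        and rec["num_echoes"] == 1          # number of echoes
    )
```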