945 results for multi-objective optimisation


Relevance:

30.00%

Publisher:

Abstract:

Due to their low cost and easy deployment, multi-hop wireless networks have become a very attractive communication paradigm. However, the IEEE 802.11 medium access control (MAC) protocol widely used in wireless LANs was not designed for multi-hop operation. Although it can support some kinds of ad hoc network architecture, it does not function efficiently in wireless networks with multi-hop connectivity. Our research therefore focuses on medium access control in multi-hop wireless networks. The objective is to design practical MAC-layer protocols for such networks: in particular, to prolong network lifetime without degrading performance for small battery-powered devices, and to improve system throughput over poor-quality channels. In this dissertation, we design two MAC protocols. The first aims to minimize energy consumption without disrupting communication, providing energy efficiency, latency guarantees, adaptability and scalability in one type of multi-hop wireless network, the wireless sensor network. Methodologically, inspired by phase-transition phenomena in distributed networks, we define a wake-up probability maintained by each node. Using this probability, we can control the number of active wireless links within a local area; more specifically, we can adaptively adjust the wake-up probability based on local network conditions to reduce energy consumption without increasing transmission latency. The second is a cooperative MAC-layer protocol for multi-hop wireless networks that leverages multi-rate capability through cooperative transmission among neighboring nodes. Moreover, for bidirectional traffic, network throughput can be further increased by using network coding. It is a helpful complement to current rate-adaptive MAC protocols when the direct link suffers poor channel conditions. Finally, we give an analytical model to analyze the impact of cooperative nodes on system throughput.
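
As an illustration only, the following sketch shows how a node might adapt such a wake-up probability to its local neighbourhood; the update rule, target neighbour count and step size are hypothetical and are not the protocol defined in the dissertation.

```python
import random

# Hypothetical illustration of a node adapting its wake-up probability to
# local conditions: keep roughly `target_awake` awake neighbours per cycle.
# The update rule and constants are assumptions, not the dissertation's protocol.
def update_wakeup_probability(p, awake_neighbours, target_awake=3,
                              step=0.05, p_min=0.01, p_max=1.0):
    """Nudge the wake-up probability towards the desired local density."""
    if awake_neighbours < target_awake:      # too few awake nodes: latency risk
        p = min(p_max, p + step)
    elif awake_neighbours > target_awake:    # too many awake nodes: wasted energy
        p = max(p_min, p - step)
    return p

def node_awake(p):
    """Decide whether this node wakes up in the current duty cycle."""
    return random.random() < p

# Example: simulate one node over a few cycles with a fluctuating neighbourhood.
p = 0.5
for cycle in range(5):
    observed = random.randint(0, 6)          # awake neighbours overheard this cycle
    p = update_wakeup_probability(p, observed)
    print(f"cycle {cycle}: awake={node_awake(p)}, p={p:.2f}")
```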

Relevance:

30.00%

Publisher:

Abstract:

In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although many conceptual frameworks existed for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of the design. Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended with the notions of dynamic structure, agent communication and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets, and a comprehensive methodology to guide the construction and analysis of formal MAS models.
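
For illustration, the sketch below implements the token-game semantics of a plain place/transition net, a simplification of the PrT nets used in the dissertation; the example net and all names are hypothetical.

```python
from collections import Counter

# Minimal token-game sketch for a plain place/transition net. This simplifies
# the predicate transition (PrT) nets of the dissertation; the example net
# (an agent handling a request) is hypothetical.
class Net:
    def __init__(self, transitions, marking):
        # transitions: name -> (input place multiset, output place multiset)
        self.transitions = transitions
        self.marking = Counter(marking)

    def enabled(self, t):
        pre, _ = self.transitions[t]
        return all(self.marking[p] >= n for p, n in pre.items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        pre, post = self.transitions[t]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] += n

# Example: an agent moves from 'idle' to 'busy' when a 'request' token arrives.
net = Net(
    transitions={"handle": ({"idle": 1, "request": 1}, {"busy": 1})},
    marking={"idle": 1, "request": 1},
)
net.fire("handle")
print(dict(net.marking))   # {'idle': 0, 'request': 0, 'busy': 1}
```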

Relevance:

30.00%

Publisher:

Abstract:

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular remains unacceptably high. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for the wind-resistant design of low-rise buildings and for wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, full-scale at the Wall of Wind (WoW) and small-scale at a Boundary Layer Wind Tunnel (BLWT), combined with Computational Fluid Dynamics (CFD), was adopted. This provided a new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the roof deck, to attics, to room partitions. The effects of sudden breaching, existing dominant openings in the building envelope, and compartmentalization of the building interior on the IP were systematically investigated. The results indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction was necessary for low-wind-speed testing facilities; for example, a building without volume correction responded four times faster and exhibited 30–40% lower mean and peak IP; (ii) for existing openings, vent openings distributed uniformly along the roof alleviated the IP, whereas one-sided openings aggravated it; (iii) larger dominant openings produced higher IP on the building envelope, and an off-center wall opening produced 30–40% higher IP than a centrally located opening; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. The part of the study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 than for A_inlet/A_outlet < 1, which reduced the mixing of air inside the building and hence the ventilation effectiveness; and (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale and CFD-based IP responses. Comparisons with ASCE 7-10 consistently showed that the code underestimates peak positive and suction IP.

Relevance:

30.00%

Publisher:

Abstract:

In 2001, a weather and climate monitoring network was established along the temperature and aridity gradient between the sub-humid Moroccan High Atlas Mountains and the former end lake of the Middle Drâa in a pre-Saharan environment. The highest Automated Weather Station (AWS) was installed just below the M'Goun summit at 3850 m; the lowest station, Lac Iriki, was at 450 m. This network of 13 AWS was funded and maintained by the German IMPETUS project (BMBF Grant 01LW06001A, North Rhine-Westphalia Grant 313-21200200), and from 2011 five stations were further maintained by the German DFG Fennec project (FI 786/3-1), so that some stations of the network provided data for almost 12 years, from 2001 to 2012. Standard meteorological variables such as temperature, humidity and wind were measured at a height of 2 m above ground. Other meteorological variables comprise precipitation, station pressure, solar irradiance, soil temperature at different depths and, for the high-mountain stations, snow water equivalent. The stations produced 5-minute precipitation summaries, 10- or 15-minute data records, and daily summaries of all other variables. This network is a unique resource of multi-year weather data in the remote semi-arid to arid mountain region of the Saharan flank of the Atlas Mountains. The network is described in Schulz et al. (2010), and its continuation until 2012 is briefly discussed in Redl et al. (2015, doi:10.1175/MWR-D-15-0223.1) and Redl et al. (2016, doi:10.1002/2015JD024443).

Relevance:

30.00%

Publisher:

Abstract:

Water-alternating-gas (WAG) injection is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer's availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on case studies with different numbers of controlling variables, sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, composition of the injected hydrocarbon gas (miscible/immiscible WAG) and the total WAG period. In progressive experiments, the number of decision variables is increased, raising the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is the incremental recovery factor (IRF) within a fixed total WAG simulation time, optimised using the same algorithms. The results from the two optimisation techniques are analysed, and their performance, convergence speed and the quality of the optimal solutions found in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables, and the first use of PSO to optimise a WAG project at the field scale is also illustrated. Compared to the reference cases, the best overall values of the objective functions found by GA and PSO were 13.8% and 14.2% higher, respectively, when NPV was optimised over all the above variables, and 14.2% and 16.2% higher, respectively, when IRF was optimised.
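
For illustration, a minimal particle swarm optimisation loop is sketched below; the objective is a toy stand-in for the reservoir simulator and NPV calculation, and all parameter values are assumptions rather than those used in the study.

```python
import numpy as np

# Minimal PSO sketch. The decision vector would stand for normalised WAG
# controls (rates, cycle times, etc.); the objective is a hypothetical smooth
# test function, not the Norne reservoir simulator / NPV evaluation.
rng = np.random.default_rng(0)

def objective(x):
    # Toy function to be maximised, peaked at x = 0.5 in every dimension.
    return -np.sum((x - 0.5) ** 2)

def pso(dim=6, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)          # respect bound constraints
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, objective(gbest)

best_x, best_val = pso()
print(best_x.round(3), round(float(best_val), 6))
```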

Relevance:

30.00%

Publisher:

Abstract:

We show experimentally a 57 nm gain bandwidth for an ultra-long Raman fiber laser based amplification technique using only a single pump wavelength. The enhanced gain bandwidth and gain flatness are investigated for single- and multi-cavity designs. ©2010 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Many problems in transportation, telecommunications and logistics can be modelled as network design problems. The classical problem consists of routing a flow (data, people, products, etc.) over a network subject to a number of constraints so as to satisfy demand while minimizing cost. In this thesis, we study the single-commodity, fixed-charge, capacitated network design problem, which we transform into an equivalent multicommodity problem in order to improve the lower bound obtained from the continuous relaxation of the model. The method we present for solving this problem is an exact branch-and-price-and-cut method with a stopping condition, in which we combine column generation, cut generation and branch-and-bound, which are among the most widely used techniques in integer linear programming. We test our method on two groups of instances of different sizes (large and very large) and compare it with the results given by CPLEX, one of the best solvers for mathematical optimisation problems, as well as with a branch-and-cut method. Our method turns out to be promising and can give good results, in particular on very large instances.
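
For illustration only, the sketch below shows the branch-and-bound search scheme on a toy 0-1 knapsack with a fractional bound; it is not the branch-and-price-and-cut method for network design developed in the thesis.

```python
# Toy branch-and-bound: 0-1 knapsack with an optimistic fractional bound.
# Only the bounding/branching scheme is illustrated; the instance is made up.
def fractional_bound(values, weights, capacity, fixed_value, start):
    """Greedily fill remaining capacity, allowing fractional items."""
    bound, cap = fixed_value, capacity
    for v, w in sorted(zip(values[start:], weights[start:]),
                       key=lambda vw: vw[0] / vw[1], reverse=True):
        if w <= cap:
            bound, cap = bound + v, cap - w
        else:
            return bound + v * cap / w
    return bound

def branch_and_bound(values, weights, capacity):
    best = 0

    def recurse(i, value, cap):
        nonlocal best
        if i == len(values):
            best = max(best, value)
            return
        if fractional_bound(values, weights, cap, value, i) <= best:
            return                                   # prune: cannot beat incumbent
        if weights[i] <= cap:                        # branch: take item i
            recurse(i + 1, value + values[i], cap - weights[i])
        recurse(i + 1, value, cap)                   # branch: skip item i

    recurse(0, 0, capacity)
    return best

# Optimal value is 21 (items with values 13 and 8).
print(branch_and_bound(values=[10, 13, 7, 8], weights=[5, 6, 3, 4], capacity=10))
```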

Relevance:

30.00%

Publisher:

Abstract:

Person re-identification involves recognizing a person across non-overlapping camera views with different pose, illumination and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person's appearance as a low-dimensional feature vector that is invariant to the common appearance variations encountered in the re-identification problem. Specifically, a Siamese network architecture is used to train a feature extraction network using pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification and identification and to recognise attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
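
As a rough illustration, the sketch below combines verification (contrastive), identification and attribute terms into one multi-task loss; the loss weights, margin and stand-in network outputs are hypothetical and do not reproduce the paper's trained network.

```python
import numpy as np

# Toy multi-task objective: verification + identification + attribute terms.
# Feature extractor outputs, loss weights and margin are placeholders.
def contrastive_loss(f1, f2, same_person, margin=1.0):
    """Verification term on a pair of embeddings (Siamese-style)."""
    d = np.linalg.norm(f1 - f2)
    return d ** 2 if same_person else max(0.0, margin - d) ** 2

def cross_entropy(logits, label):
    """Identification (or per-attribute) term as softmax cross-entropy."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multi_task_loss(f1, f2, same, id_logits, id_label,
                    attr_logits, attr_labels, w_ver=1.0, w_id=1.0, w_attr=0.5):
    attr_term = sum(cross_entropy(l, y) for l, y in zip(attr_logits, attr_labels))
    return (w_ver * contrastive_loss(f1, f2, same)
            + w_id * cross_entropy(id_logits, id_label)
            + w_attr * attr_term)

# Example with random stand-ins for network outputs.
rng = np.random.default_rng(0)
loss = multi_task_loss(rng.normal(size=128), rng.normal(size=128), same=True,
                       id_logits=rng.normal(size=100), id_label=3,
                       attr_logits=[rng.normal(size=2) for _ in range(5)],
                       attr_labels=[1, 0, 1, 1, 0])
print(round(float(loss), 3))
```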

Relevance:

30.00%

Publisher:

Abstract:

This research project falls within the field of scintillation dosimetry in radiotherapy, more specifically in high-dose-rate (HDR) brachytherapy. In this type of treatment the dose is delivered locally, which implies steep dose gradients around the source. The goal of this work is to obtain a detector that measures the dose at two distinct points and is optimised for dose measurement in HDR brachytherapy. To this end, the project is divided into two studies: the spectral characterisation of the two-point detector and the characterisation of the photodetector system that produces the dose measurement. First, the optical chain of a two-point scintillation detector is characterised with a spectrometer in order to determine the optimal scintillating components. This study makes it possible to build several detectors from the chosen components and then test them with the multi-point photodetector system. The photodetector system is also characterised in order to assess the sensitivity limits for the previously selected two-point detector. The final objective is to measure the dose rate with precision and accuracy at the two measurement points of the multi-point detector during an HDR brachytherapy treatment.

Relevance:

30.00%

Publisher:

Abstract:

Single-photon avalanche diodes (SPADs) are of interest for applications requiring the detection of single photons with high timing resolution, such as high-energy physics and medical imaging. In fact, SPAD arrays, often called silicon photomultipliers (SiPMs), are gradually replacing photomultiplier tubes (PMTs) and avalanche photodiodes (APDs). Moreover, there is a trend towards implementing SPAD arrays in CMOS technology in order to obtain smart pixels optimised for timing resolution. Fabricating SPADs in a commercial CMOS technology brings several advantages over optoelectronic processes, such as low cost, production capacity, integration of electronics and system miniaturisation. However, the main drawback of CMOS is the lack of design flexibility in the SPAD architecture, caused by the fixed and standardised fabrication steps of CMOS technology. Another disadvantage of CMOS SPAD arrays is the loss of photosensitive area due to the presence of CMOS circuits. This document presents the design, characterisation and optimisation of SPADs fabricated in a commercial CMOS technology (Teledyne DALSA 0.8 µm HV CMOS - TDSI CMOSP8G). Custom process modifications were introduced in collaboration with the CMOS company to optimise the SPADs while maintaining CMOS compatibility. The SPAD arrays produced are intended for 3D integration with low-cost CMOS electronics (TDSI) or with advanced submicron CMOS electronics, thus producing a digital 3D SiPM. This innovative 3D SiPM aims to replace PMTs, APDs and commercial SiPMs in applications requiring high timing resolution. The main objective of the research group is to develop a 3D SiPM with a timing resolution of 10 ps for use in high-energy physics and medical imaging. These applications require reliable processes with certified production capacity, which justifies producing the 3D SiPM with commercial CMOS technologies. This thesis studies the design, characterisation and optimisation of SPADs fabricated in the TDSI-CMOSP8G technology.

Relevance:

30.00%

Publisher:

Abstract:

The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has created a need to develop IP-capable data link layer solutions for machine-to-machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. Existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the use of the tunslip protocol as the basis for the system's data link layer development. In addition to the functionality of tunslip, this thesis discusses the additional network, routing, compression, security and collision-avoidance changes required to the radio platform in order for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU optimisation daemon adjusts the maximum MTU size of the tunslip interface according to a link quality assessment calculated from the radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface is tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios show that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
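
As an illustration of the daemon's role, the sketch below maps a link-quality estimate derived from radio diagnostics to an interface MTU; the quality formula, weights and MTU bounds are assumptions, not the values of the thesis design.

```python
# Hypothetical dynamic MTU optimisation step: derive a 0..1 link quality score
# from radio diagnostics and map it to an interface MTU. All weights,
# thresholds and bounds are illustrative assumptions.
def link_quality(frame_success_rate, retransmissions, rssi_dbm):
    """Combine diagnostic inputs into a 0..1 quality score (toy weighting)."""
    rssi_score = min(max((rssi_dbm + 110) / 50.0, 0.0), 1.0)   # -110..-60 dBm
    retry_penalty = 1.0 / (1.0 + retransmissions)
    return 0.5 * frame_success_rate + 0.3 * rssi_score + 0.2 * retry_penalty

def choose_mtu(quality, mtu_min=128, mtu_max=1500):
    """Larger frames on good links, smaller frames when errors are likely."""
    return int(mtu_min + quality * (mtu_max - mtu_min))

# Example daemon iteration with made-up diagnostic readings.
q = link_quality(frame_success_rate=0.97, retransmissions=1, rssi_dbm=-82)
print(f"quality={q:.2f}, mtu={choose_mtu(q)}")
```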

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Previous studies have listed renal medullary carcinoma (RMC) as the seventh nephropathy in sickle cell disease (SCD). Clinical experience has contradicted this claim, and this study aimed to refute or support this assumption. Objective: To estimate the prevalence of RMC and describe other renal complications in SCD. Materials and methods: Data were collated by 14 physicians (haematologists and urologists) in 11 tertiary institutions across the country from patients' case notes and hospital SCD registers. Results: Of the 3,596 registered sickle cell patients, 2 (0.056%) had been diagnosed with RMC over a ten-year period, giving an estimated prevalence rate of 5.6 per 100,000. The most common renal complication reported by the attending physicians was chronic kidney disease (CKD). The frequency of routine renal screening for SCD patients varied widely between centres; most screening was done at diagnosis, annually or bi-annually. Conclusion: The ten-year prevalence of RMC in Nigerian SCD patients was determined to be 5.6 per 100,000 (estimated incidence of 0.56). RMC is not more common in SCD patients and therefore cannot be regarded as a "seventh sickle nephropathy". Most of the managing physicians reported that the commonest nephropathy observed in their SCD patients was chronic kidney disease.

Relevance:

30.00%

Publisher:

Abstract:

Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for recognition, generating high-quality, category-independent object proposals as the input to a recognition system has drawn attention recently. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance, which are used as inputs for many computer vision tasks. We also present an efficient dictionary-based model for the image classification task and extend this work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale, greedy-based object proposal generation approach is presented. Based on the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify representative and diverse exemplar clusters within each scale. Object proposals are obtained by selecting a subset from the multi-scale segment pool via maximizing a submodular objective function, which consists of a weighted coverage term, a single-scale diversity term and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and drawn from multiple layers of the hierarchical image segmentation. Experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals in simultaneous segmentation and detection and outperform the state of the art. To classify the object in the image, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation of images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited for classification tasks even with a simple linear multi-class classifier.
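
For illustration, the sketch below shows greedy selection for a monotone submodular objective over a segment pool, using plain weighted coverage as a toy stand-in for the paper's full coverage, diversity and multi-scale reward terms; the example segments and weights are hypothetical.

```python
# Generic greedy selection sketch for maximising a monotone submodular set
# function under a cardinality budget, as used for picking object proposals
# from a segment pool. The objective here is plain weighted coverage only.
def coverage_gain(covered, candidate_cover, weights):
    """Marginal gain of adding a candidate: newly covered weighted elements."""
    return sum(weights[e] for e in candidate_cover if e not in covered)

def greedy_select(segments, weights, budget):
    """segments: dict name -> set of covered elements (e.g. pixels/regions)."""
    selected, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for name, cover in segments.items():
            if name in selected:
                continue
            gain = coverage_gain(covered, cover, weights)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:          # no remaining candidate adds anything
            break
        selected.append(best)
        covered |= segments[best]
    return selected

# Toy example: three overlapping segments over weighted elements a..e.
segments = {"s1": {"a", "b"}, "s2": {"b", "c", "d"}, "s3": {"d", "e"}}
weights = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 1.0, "e": 0.5}
print(greedy_select(segments, weights, budget=2))   # ['s2', 's1']
```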

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation in Economics and Management of Science, Technology and Innovation.