977 results for Minimum Channel Problem
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice because of their competitive performance on complex search spaces. EAs are well known for their ability to handle nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that take a greedier and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a mutation operation that adds the difference between randomly selected elements of the population to another element. Owing to its simple implementation, minimal mathematical processing, and good optimization capability, DE has attracted considerable attention. This paper proposes a new approach to solving electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences gives better, or at least similar, results compared with the standard DE algorithm and other evolutionary algorithms in the literature.
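As an illustration (a minimal sketch, not the paper's implementation), the classical DE/rand/1 mutation and a logistic-map chaotic sequence that could stand in for uniform random draws look like this; all names and parameter values here are illustrative:

```python
import random

def logistic_map(x):
    # Logistic map in its fully chaotic regime (r = 4); iterating it
    # produces the kind of chaotic sequence used to drive DE parameters.
    return 4.0 * x * (1.0 - x)

def de_rand_1_mutant(population, i, F=0.5):
    # Classical DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    # with r1, r2, r3 distinct indices all different from the target i.
    idxs = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = random.sample(idxs, 3)
    x1, x2, x3 = population[r1], population[r2], population[r3]
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]
```

In a chaotic-DE variant, the scale factor F (or the index selection) would be derived from iterates of `logistic_map` rather than from a standard pseudorandom generator; the exact coupling is specific to the paper.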
Abstract:
Planck-scale physics may influence the evolution of cosmological fluctuations in the early stages of cosmological evolution. Because of the quasi-exponential redshifting that occurs during an inflationary period, the physical wavelengths of comoving scales corresponding to the present large-scale structure of the Universe were smaller than the Planck length in the early stages of inflation. This trans-Planckian effect was previously studied using toy models. The Horava-Lifshitz (HL) theory offers an opportunity to study the problem in a candidate UV-complete theory of gravity. In this paper we study the evolution of cosmological perturbations according to HL gravity, assuming that matter gives rise to an inflationary background. As is usual in inflationary cosmology, we assume that the fluctuations originate in their minimum-energy state. In the trans-Planckian region the fluctuations obey a nonlinear dispersion relation of Corley-Jacobson type. In the "healthy extension" of HL gravity there is an extra degree of freedom which plays an important role in the UV region but decouples in the IR, and which influences the cosmological perturbations. We find that, in spite of these important changes compared to the usual description, the overall scale invariance of the power spectrum of cosmological perturbations is recovered. However, we obtain oscillations in the spectrum as a function of wave number, with a relative amplitude of order unity and an effective frequency that scales nonlinearly with wave number. For the usual inflationary parameters, the frequency of the oscillations is so large as to render the effect difficult to observe.
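For orientation, a Corley-Jacobson-type dispersion relation takes the generic form found in the trans-Planckian literature (the coefficients below are illustrative, not this paper's specific choice):

```latex
\omega^2(k) = k^2 \left[ 1 + b_m \left( \frac{k}{M} \right)^{2m} \right],
```

where $M$ is the UV scale at which the modification sets in. In HL gravity the anisotropic scaling with dynamical critical exponent $z = 3$ corresponds to $m = 2$, so $\omega^2 \sim k^6 / M^4$ deep in the UV, while the standard relativistic relation $\omega = k$ is recovered for $k \ll M$.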
Abstract:
Graphene's excellent properties make it a promising candidate for building future nanoelectronic devices. Nevertheless, the absence of an energy gap is an open problem for transistor applications. In this thesis, graphene nanoribbons and pattern-hydrogenated graphene, two alternatives for inducing an energy gap in graphene, are investigated by means of numerical simulations. A tight-binding non-equilibrium Green's function (NEGF) code is developed for the simulation of graphene-nanoribbon FETs (GNR-FETs). To speed up the simulations, a non-parabolic effective-mass model and a mode-space tight-binding method are developed. The code is used for simulation studies of both conventional and tunneling FETs. The simulations show the great potential of conventional narrow GNR-FETs, but at the same time highlight leakage problems in the off-state due to various tunneling mechanisms. The leakage problems become more severe as the width of the devices is made larger, and thus the band gap smaller, resulting in a poor on/off current ratio. The tunneling-FET architecture can partially solve these problems thanks to its improved subthreshold slope; however, it is also shown that edge roughness, unless well controlled, can have a detrimental effect on the off-state performance. In the second part of the thesis, pattern-hydrogenated graphene is simulated by means of a tight-binding model. A realistic model for patterned hydrogenation, including disorder, is developed. The model is validated by direct comparison of the momentum-energy-resolved density of states with experimental angle-resolved photoemission spectroscopy. The scaling of the energy gap and the localization length with the parameters defining the pattern geometry is also presented. The results suggest that a substantial transport gap is attainable with experimentally achievable hydrogen concentrations.
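The gapless spectrum that motivates this work can be seen from the standard nearest-neighbour tight-binding model of bulk graphene (a textbook sketch, not the thesis' NEGF code; the hopping energy and unit lattice spacing are illustrative):

```python
import cmath
import math

def graphene_energy(kx, ky, t=2.7, a=1.0):
    # Nearest-neighbour tight-binding dispersion of bulk graphene:
    # E(k) = t * |f(k)|, with f(k) = 1 + exp(i k.a1) + exp(i k.a2)
    # for lattice vectors a1 = a(3/2, +sqrt(3)/2), a2 = a(3/2, -sqrt(3)/2)
    # (a = carbon-carbon distance, t = hopping energy in eV).
    f = (1
         + cmath.exp(1j * (kx * 1.5 * a + ky * math.sqrt(3) / 2 * a))
         + cmath.exp(1j * (kx * 1.5 * a - ky * math.sqrt(3) / 2 * a)))
    return t * abs(f)

# The valence and conduction bands touch (zero gap) at the Dirac point K:
K = (2 * math.pi / 3, 2 * math.pi / (3 * math.sqrt(3)))
```

Evaluating `graphene_energy(*K)` gives zero, i.e. no gap; confining the sheet into a nanoribbon or patterning it with hydrogen, as studied in the thesis, is what opens one.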
Abstract:
This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and therefore require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of that option to at most H in any N consecutive variants, and seeks a sequence that leads to no, or a minimum number of, sequencing-rule violations. In this work, the suitability of CS for workload-oriented sequencing is analyzed: its solution quality is compared experimentally with that of the related mixed-model sequencing problem. A new sequencing-rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed and their efficiency is demonstrated in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
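For illustration, one simple way to count violations of an H:N rule is a sliding window (this toy scores one violation per offending window; actual CS formulations differ in how they score excess occurrences):

```python
def rule_violations(sequence, has_option, H, N):
    """Count windows of N consecutive variants in which an option
    occurs more than H times (one violation per such window).

    `sequence` is a list of variants; `has_option(v)` tells whether
    variant v carries the option governed by the H:N rule."""
    violations = 0
    for start in range(len(sequence) - N + 1):
        window = sequence[start:start + N]
        if sum(1 for v in window if has_option(v)) > H:
            violations += 1
    return violations
```

For example, with a 1:2 rule on option "A", the sequence A B A A B violates the rule exactly once (the window A A).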
Abstract:
When designing metaheuristic optimization methods, there is a trade-off between range of application and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used; these problem-specific adaptations increase search performance. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem, and the bounded-diameter minimum spanning tree problem. Several relevant tree properties that should be examined when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems. Experimental studies analyze the performance of the developed approaches against the current state of the art.
Abstract:
Quantum channel identification, a standard problem in quantum metrology, is the task of estimating parameter(s) of a quantum channel. We investigate dissonance (quantum discord in the absence of entanglement) as an aid to quantum channel identification and find evidence for dissonance as a resource for quantum information processing. We consider the specific case of dissonant Bell-diagonal probes of the qubit depolarizing channel, using quantum Fisher information as a measure of the statistical information extracted by the probe. In this setting, dissonant quantum probes yield more statistical information about the depolarizing probability than do corresponding probes without dissonance, and greater dissonance yields greater information. This effect operates consistently only when we control for the classical correlation between the probe and its ancilla and for the joint and marginal purities of the ancilla and probe.
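For context, the standard textbook forms of the two objects named here (definitions from the general literature, not this paper's notation or conventions) are

```latex
\Lambda_p(\rho) = (1 - p)\,\rho + p\,\frac{I}{2},
\qquad
\rho_{AB} = \frac{1}{4}\Big( I \otimes I + \sum_{i=1}^{3} c_i\, \sigma_i \otimes \sigma_i \Big),
```

where $\Lambda_p$ is a qubit depolarizing channel with depolarizing probability $p$ (the parameter to be estimated), $\rho_{AB}$ is a Bell-diagonal probe-ancilla state, $\sigma_i$ are the Pauli matrices, and the correlation coefficients $c_i$ determine the entanglement, discord, and hence dissonance of the probe.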
Abstract:
The proliferation of multimedia content and the demand for new audio and video services have fostered the development of a new era based on multimedia information, which has driven the evolution of Wireless Multimedia Sensor Networks (WMSNs) and Flying Ad-Hoc Networks (FANETs). Live multimedia services require real-time video transmission with a low frame-loss rate, tolerable end-to-end delay, and tolerable jitter in order to support video dissemination with Quality of Experience (QoE) support. Hence, a key principle of a QoE-aware approach is to protect high-priority frames, delivering them with minimal packet loss and minimal network overhead. Moreover, multimedia content must be transmitted from a given source to the destination via intermediate nodes with high reliability in large-scale scenarios. The routing service must cope with dynamic topologies caused by node failure or mobility, as well as with wireless channel changes, in order to continue operating during multimedia transmission. Finally, understanding user satisfaction when watching a video sequence is becoming a key requirement for delivering multimedia content with QoE support; with this goal in mind, solutions for multimedia transmission must take the video characteristics into account to improve the quality of delivered video. The main research contributions of this thesis are driven by the research question of how to provide multimedia distribution with high energy efficiency, reliability, robustness, scalability, and QoE support over wireless ad hoc networks. The thesis addresses several problem domains, with contributions at different layers of the communication stack. At the application layer, we introduce a QoE-aware packet-redundancy mechanism that reduces the impact of the unreliable and lossy nature of the wireless environment on the dissemination of live multimedia content. At the network layer, we introduce two routing protocols: a Multi-hop, multi-path hierarchical routing protocol for Efficient VIdeo transmission in static WMSN scenarios (MEVI), and a cross-layer link-quality- and geography-aware beaconless opportunistic routing protocol for multimedia FANET scenarios (XLinGO). Both protocols enable multimedia dissemination with energy efficiency, reliability, and QoE support, achieved by combining multiple cross-layer metrics in the routing decision to establish reliable routes.
Abstract:
Charcoal particles in pollen slides are often abundant, so analysts face the problem of setting the minimum counting sum as small as possible in order to save time. We analysed the reliability of charcoal-concentration estimates based on different counting sums, using simulated low- to high-count samples. Bootstrap simulations indicate that the variability of inferred charcoal concentrations increases progressively with decreasing sums. Below 200 items (i.e., the sum of charcoal particles and exotic marker grains), reconstructed fire incidence is either too high or too low. Statistical comparisons show that the means of the bootstrap simulations stabilize after 200 counts. Moreover, a count of 200-300 items is sufficient to produce a charcoal-concentration estimate with less than ±5% error compared with high-count samples of 1000 items, for charcoal/marker-grain ratios of 0.1-0.91. If, however, this ratio is extremely high or low (> 0.91 or < 0.1) and such samples are frequent, we suggest reducing or adding marker grains before processing new samples.
Abstract:
Background: Zooplankton play an important role in our oceans, in biogeochemical cycling and in providing a food source for commercially important fish larvae. However, difficulties in correctly identifying zooplankton hinder our understanding of their roles in marine ecosystem functioning and can prevent detection of long-term changes in their community structure. The advent of massively parallel next-generation sequencing technology allows DNA sequence data to be recovered directly from whole-community samples. Here we assess the ability of such sequencing to quantify the richness and diversity of a mixed zooplankton assemblage from a productive monitoring site in the Western English Channel. Methodology/Principal Findings: Replicate plankton WP2 net hauls (200 µm) were taken at the Western Channel Observatory long-term monitoring station L4 in September 2010 and January 2011. These samples were analysed by microscopy and by metagenetic analysis of the 18S nuclear small-subunit ribosomal RNA gene using the 454 pyrosequencing platform. Following quality control, a total of 419,042 sequences were obtained across all samples. The sequences clustered into 205 operational taxonomic units (OTUs) using a 97% similarity cut-off. Allocation of taxonomy by comparison with the National Center for Biotechnology Information database identified 138 OTUs to species level, 11 to genus level, and 1 to order level; <2.5% of sequences were classified as unknown. By comparison, a skilled microscopic analyst was able to routinely enumerate only 75 taxonomic groups. Conclusions: The percentage of OTUs assigned to major eukaryotic taxonomic groups broadly agrees between the metagenetic and morphological analyses, and both are dominated by Copepoda. However, the metagenetic analysis reveals a previously hidden taxonomic richness, especially for Copepoda and for meroplankton such as Bivalvia, Gastropoda, and Polychaeta. It also reveals rare species and parasites. We conclude that next-generation sequencing of 18S amplicons is a powerful tool for estimating the diversity and species richness of zooplankton communities.
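The 97% similarity clustering step can be illustrated with a toy greedy centroid clusterer (real pipelines use alignment-based identity and dedicated tools; the position-wise identity below is a deliberate simplification):

```python
def percent_identity(a, b):
    # Position-wise identity of two sequences (no alignment performed);
    # a stand-in for the alignment-based identity used by real tools.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def greedy_otu_cluster(seqs, threshold=0.97):
    """Greedy centroid clustering: each sequence joins the first
    existing centroid it matches at >= threshold identity,
    otherwise it founds a new OTU."""
    centroids, clusters = [], []
    for s in seqs:
        for i, c in enumerate(centroids):
            if percent_identity(s, c) >= threshold:
                clusters[i].append(s)
                break
        else:
            centroids.append(s)
            clusters.append([s])
    return clusters
```

With a 97% cut-off, two 100-base sequences differing at three positions fall into the same OTU, while a dissimilar sequence founds a new one.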