962 results for Constrained Minimization
Abstract:
BACKGROUND: Chromatin containing the histone variant CENP-A (CEN chromatin) exists as an essential domain at every centromere and heritably marks the location of kinetochore assembly. The size of the CEN chromatin domain on alpha satellite DNA in humans has been shown to vary according to underlying array size. However, the average amount of CENP-A reported at human centromeres is largely consistent, implying the genomic extent of CENP-A chromatin domains more likely reflects variations in the number of CENP-A subdomains and/or the density of CENP-A nucleosomes within individual subdomains. Defining the organizational and spatial properties of CEN chromatin would provide insight into centromere inheritance via CENP-A loading in G1 and the dynamics of its distribution between mother and daughter strands during replication. RESULTS: Using a multi-color protein strategy to detect distinct pools of CENP-A over several cell cycles, we show that nascent CENP-A is equally distributed to sister centromeres. CENP-A distribution is independent of previous or subsequent cell cycles in that centromeres showing disproportionately distributed CENP-A in one cycle can equally divide CENP-A nucleosomes in the next cycle. Furthermore, we show using extended chromatin fibers that maintenance of the CENP-A chromatin domain is achieved by a cycle-specific oscillating pattern of new CENP-A nucleosomes next to existing CENP-A nucleosomes over multiple cell cycles. Finally, we demonstrate that the size of the CENP-A domain does not change throughout the cell cycle and is spatially fixed to a similar location within a given alpha satellite DNA array. CONCLUSIONS: We demonstrate that most human chromosomes share similar patterns of CENP-A loading and distribution and that centromere inheritance is achieved through specific placement of new CENP-A near existing CENP-A as assembly occurs each cell cycle. The loading pattern fixes the location and size of the CENP-A domain on individual chromosomes. These results suggest that spatial and temporal dynamics of CENP-A are important for maintaining centromere identity and genome stability.
Abstract:
European continental shelf seas have experienced intense warming over the past 30 years [1]. In the North Sea, fish have been comprehensively monitored throughout this period, and the resulting data provide a unique record of changes in distribution and abundance in response to climate change [2,3]. We use these data to demonstrate the remarkable power of generalized additive models (GAMs), trained on data from earlier in the time series, to reliably predict trends in distribution and abundance in later years. Then, challenging process-based models that predict substantial and ongoing poleward shifts of cold-water species [4,5], we find that GAMs coupled with climate projections predict that future distributions of demersal (bottom-dwelling) fish species over the next 50 years will be strongly constrained by the availability of habitat of suitable depth. This will lead to pronounced changes in community structure, species interactions and commercial fisheries, unless individual acclimation or population-level evolutionary adaptations enable fish to tolerate warmer conditions or move to previously uninhabitable locations.
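As a rough illustration of the modelling step this abstract describes (not the authors' actual pipeline), the sketch below fits a GAM to early survey years and predicts abundance in later, held-out years. The pyGAM library, the synthetic data, and the predictors (temperature, depth) are assumptions made for illustration only.

```python
# Minimal sketch: fit a GAM on early survey years, predict later years.
# Assumes pyGAM is installed; data and column meanings are illustrative only.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)

# Synthetic stand-in for survey data: temperature (deg C), depth (m), year.
n = 500
temp = rng.uniform(6, 14, n)
depth = rng.uniform(20, 200, n)
year = rng.integers(1980, 2010, n)
abundance = 5 + 0.8 * temp - 0.02 * depth + rng.normal(0, 1, n)

train = year < 2000          # "trained on data earlier in the time series"
test = ~train                # "...predict trends in later years"

X = np.column_stack([temp, depth])
gam = LinearGAM(s(0) + s(1)).fit(X[train], abundance[train])

pred = gam.predict(X[test])
print("mean predicted abundance (held-out years):", round(float(pred.mean()), 2))
```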
Abstract:
Demand response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementation of DR algorithms using a centralized agent can be problematic for scalability reasons, and there are issues related to the privacy of data and robustness to communication failures. It is therefore desirable to implement DR with a scalable decentralized algorithm. In this paper, a hierarchical DR scheme is proposed for peak minimization based on Dantzig-Wolfe decomposition (DWD). In addition, a time-weighted maximization option is included in the cost function, which improves the quality of service for devices seeking to receive their desired energy sooner rather than later. This paper also demonstrates how the DWD algorithm can be implemented more efficiently through the calculation of upper and lower cost bounds after each DWD iteration.
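For orientation, the sketch below states the kind of centralized peak-minimization LP that a Dantzig-Wolfe scheme would decompose into one subproblem per load; it is not the hierarchical DWD algorithm of the paper, and the device data are illustrative assumptions.

```python
# Minimal sketch of the underlying peak-minimization LP that a Dantzig-Wolfe
# scheme would decompose (one subproblem per controllable load). Centralized
# form only; the device data below are illustrative, not from the paper.
import numpy as np
from scipy.optimize import linprog

T, N = 24, 3                              # time slots, controllable loads
energy_req = np.array([10.0, 6.0, 8.0])   # energy each load must receive (kWh)
p_max = np.array([2.0, 1.5, 2.0])         # per-slot power limit per load (kW)

# Decision vector: x_{i,t} (index i*T + t) followed by the peak variable P.
n_x = N * T
c = np.zeros(n_x + 1)
c[-1] = 1.0                               # minimize the peak P

# Inequalities: sum_i x_{i,t} - P <= 0 for every slot t.
A_ub = np.zeros((T, n_x + 1))
for t in range(T):
    A_ub[t, t:n_x:T] = 1.0                # picks x_{i,t} for every load i
    A_ub[t, -1] = -1.0
b_ub = np.zeros(T)

# Equalities: each load receives exactly its required energy.
A_eq = np.zeros((N, n_x + 1))
for i in range(N):
    A_eq[i, i * T:(i + 1) * T] = 1.0
b_eq = energy_req

bounds = [(0, p_max[i]) for i in range(N) for _ in range(T)] + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("minimized peak demand (kW):", round(res.fun, 3))
```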
Abstract:
Large-scale multiple-input multiple-output (MIMO) communication systems can bring substantial improvements in spectral efficiency and/or energy efficiency, owing to their excess degrees of freedom and huge array gain. However, large-scale MIMO is expected to be deployed with lower-cost radio frequency (RF) components, which are particularly prone to hardware impairments. Unfortunately, compensation schemes cannot remove the impact of hardware impairments completely, so a certain amount of residual impairment always remains. In this paper, we investigate the impact of residual transmit RF impairments (RTRI) on the spectral and energy efficiency of training-based point-to-point large-scale MIMO systems, and seek the optimal training length and number of antennas that maximize the energy efficiency. We derive deterministic equivalents of the signal-to-noise-and-interference ratio (SINR) with zero-forcing (ZF) receivers, as well as of the corresponding spectral and energy efficiency, which are shown to be accurate even for small numbers of antennas. Through an iterative sequential optimization, we find that the optimal training length of systems with RTRI can be smaller than that of systems with ideal hardware in the moderate SNR regime, and larger in the high SNR regime. Moreover, we observe that RTRI can significantly decrease the optimal number of transmit and receive antennas.
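The sketch below mimics the kind of search described here: a grid search over training length and antenna count that maximizes a simple energy-efficiency proxy under a residual-impairment level. The SE and power expressions are simplified textbook-style placeholders (assumptions), not the deterministic equivalents derived in the paper.

```python
# Illustrative sketch: grid search for the training length and antenna count
# that maximize energy efficiency under a residual-impairment level kappa.
# All model expressions below are simplified placeholders, not the paper's.
import numpy as np

T_c = 200                 # coherence block length (symbols)
snr = 10.0                # nominal SNR (linear)
kappa = 0.1               # residual transmit RF impairment level (EVM-like)
P_fix, P_ant = 1.0, 0.5   # fixed and per-antenna power terms (W, assumed)

best = (0.0, None, None)
for M in range(1, 33):                      # antennas per side
    for tau in range(M, T_c):               # training length >= M
        est = tau * snr / (1.0 + tau * snr)          # crude estimation quality
        sinr = est * snr / (1.0 + kappa**2 * snr)    # impairment-limited SINR
        se = M * (1.0 - tau / T_c) * np.log2(1.0 + sinr)   # bit/s/Hz
        ee = se / (P_fix + M * P_ant)                # bit/s/Hz per watt
        if ee > best[0]:
            best = (ee, M, tau)

print("best EE = %.3f bit/s/Hz/W at M = %d antennas, tau = %d pilots" % best)
```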
Abstract:
Otto-von-Guericke-Universität Magdeburg, Faculty of Mathematics, cumulative habilitation thesis, 2016
Abstract:
A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions. These distributions were used to select an alternative. With the goal of maximizing the probability of correct selection we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple, but common, allocation procedure that distributed the sample budget equally across the alternatives and attributes. We found the allocation procedures that were developed based on the inclusion of decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information. Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that the decision modeling and the experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
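A minimal sketch of the uncertainty-propagation idea described above: attribute estimates are treated as probability distributions, pushed through an additive multiple attribute value model by Monte Carlo, and used to estimate how often each alternative would be selected. The weights and sample statistics are illustrative assumptions, not data from the study.

```python
# Minimal sketch of propagating attribute-value uncertainty through an
# additive multiple attribute value model via Monte Carlo. The weights and
# sample statistics below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

weights = np.array([0.5, 0.3, 0.2])          # additive MAV weights (sum to 1)
# Sample means and standard errors per alternative (rows) and attribute
# (columns), e.g. obtained from a limited experimental budget.
means = np.array([[0.70, 0.60, 0.80],
                  [0.75, 0.55, 0.65],
                  [0.68, 0.72, 0.70]])
std_err = np.array([[0.05, 0.08, 0.04],
                    [0.06, 0.07, 0.05],
                    [0.05, 0.06, 0.06]])

draws = 20000
# Draw attribute values, combine with the additive model, pick the apparent best.
samples = rng.normal(means, std_err, size=(draws, *means.shape))
values = samples @ weights                    # decision value per alternative
winners = values.argmax(axis=1)

for a in range(means.shape[0]):
    print(f"alternative {a}: P(selected) = {np.mean(winners == a):.3f}")
```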
Abstract:
Policy and decision makers dealing with environmental conservation and land use planning often need to identify potential sites that can help minimize the sediment flow reaching riverbeds. This is the case for reforestation initiatives, which can have sediment flow minimization among their objectives. This paper proposes an Integer Programming (IP) formulation and a heuristic solution method for selecting a predefined number of locations to be reforested so as to minimize the sediment load at a given outlet in a watershed. Although the core structure of both methods can be applied to different sorts of flow, the formulations are targeted at the minimization of sediment delivery. The proposed approaches use a Single Flow Direction (SFD) raster map covering the watershed to construct a tree structure in which the outlet cell corresponds to the root node. The results obtained with both approaches are in agreement with expert assessments of erosion levels, slopes and distances to the riverbeds, which supports the conclusion that this approach is suitable for minimizing sediment flow. Since the results obtained with the IP formulation are the same as those obtained with the heuristic approach, an optimality proof is included in the present work. Given that the heuristic requires much less computation time, it is the more suitable solution method for large problem instances.
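A minimal sketch of the tree construction described above, assuming a D8-style single-flow-direction encoding: each cell points to its downstream neighbour, the outlet becomes the root, and per-cell loads can be accumulated down the flow paths. The grid, directions, and unit loads are illustrative, not the paper's data or its heuristic.

```python
# Minimal sketch: turn a Single Flow Direction (SFD) raster into a tree rooted
# at the outlet cell and accumulate an (illustrative) per-cell sediment load
# down the flow paths. The direction encoding and values are assumptions.

# Flow direction per cell as (drow, dcol) offsets; None marks the outlet.
flow_dir = {
    (0, 0): (1, 1), (0, 1): (1, 0), (0, 2): (1, 0),
    (1, 0): (1, 1), (1, 1): (1, 1), (1, 2): (1, 0),
    (2, 0): (0, 1), (2, 1): (0, 1), (2, 2): None,    # (2, 2) is the outlet
}
sediment = {cell: 1.0 for cell in flow_dir}          # toy per-cell load

# Build the tree: children[parent] lists the cells draining into it.
children = {cell: [] for cell in flow_dir}
root = None
for cell, d in flow_dir.items():
    if d is None:
        root = cell                                  # outlet = root of the tree
    else:
        parent = (cell[0] + d[0], cell[1] + d[1])
        children[parent].append(cell)

def accumulated_load(cell):
    """Sediment reaching `cell`: its own load plus that of all upstream cells."""
    return sediment[cell] + sum(accumulated_load(c) for c in children[cell])

print("sediment load at the outlet:", accumulated_load(root))
```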
Abstract:
Efficient hill climbers have recently been proposed for single- and multi-objective pseudo-Boolean optimization problems. For $k$-bounded pseudo-Boolean functions where each variable appears in at most a constant number of subfunctions, it has been theoretically proven that the neighborhood of a solution can be explored in constant time. These hill climbers, combined with a high-level exploration strategy, have been shown to improve on state-of-the-art methods in experimental studies and open the door to so-called Gray Box Optimization, where part, but not all, of the details of the objective functions are used to better explore the search space. One important limitation of all the previous proposals is that they can only be applied to unconstrained pseudo-Boolean optimization problems. In this work, we address the constrained case for multi-objective $k$-bounded pseudo-Boolean optimization problems. We find that adding constraints to the pseudo-Boolean problem adds a linear computational cost to the hill climber.
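The sketch below illustrates the locality that such hill climbers exploit for a $k$-bounded function: when a bit is flipped, only the subfunctions containing that bit, and the flip gains of variables sharing those subfunctions, need to be re-evaluated. It is a simplified, single-objective illustration, not the constant-time score-vector machinery or the constrained multi-objective climber of the paper; the random subfunctions are assumptions.

```python
# Minimal sketch of a hill climber for a k-bounded pseudo-Boolean function:
# after flipping a bit, only the subfunctions containing that bit (and the
# flip gains of their variables) are re-evaluated. Illustrative data only.
import random

random.seed(0)
n, k, m = 30, 3, 60                       # variables, arity, subfunctions

# Each subfunction: (tuple of k variable indices, lookup table over 2^k inputs)
subfns = []
for _ in range(m):
    vs = tuple(random.sample(range(n), k))
    table = {tuple((i >> j) & 1 for j in range(k)): random.random()
             for i in range(2 ** k)}
    subfns.append((vs, table))

touches = {v: [j for j, (vs, _) in enumerate(subfns) if v in vs] for v in range(n)}

def sub_val(j, x):
    vs, table = subfns[j]
    return table[tuple(x[v] for v in vs)]

def flip_gain(v, x):
    """Change in f when bit v is flipped; only subfunctions touching v matter."""
    x[v] ^= 1
    after = sum(sub_val(j, x) for j in touches[v])
    x[v] ^= 1
    before = sum(sub_val(j, x) for j in touches[v])
    return after - before

x = [random.randint(0, 1) for _ in range(n)]
gain = {v: flip_gain(v, x) for v in range(n)}
while max(gain.values()) > 1e-12:
    v_best = max(gain, key=gain.get)
    x[v_best] ^= 1                         # take the improving move
    # Only variables sharing a subfunction with v_best need their gain updated.
    affected = {v for j in touches[v_best] for v in subfns[j][0]}
    for v in affected | {v_best}:
        gain[v] = flip_gain(v, x)

print("local optimum value:", round(sum(sub_val(j, x) for j in range(m)), 4))
```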
Abstract:
We propose weakly-constrained stream and block codes with tunable pattern-dependent statistics and demonstrate that the block code capacity at large block sizes is close to the prediction obtained from a simple Markov model published earlier. We demonstrate the feasibility of the code by presenting original encoding and decoding algorithms with a complexity log-linear in the block size and with modest table memory requirements. We also show that when such codes are used for mitigation of patterning effects in optical fibre communications, a gain of about 0.5 dB is possible under realistic conditions, at the expense of a small redundancy (≈10%).
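As a stand-in for the Markov-model capacity prediction mentioned above, the sketch below uses a simple hard constraint (no two consecutive ones): the asymptotic rate is log2 of the largest eigenvalue of the constraint graph's transfer matrix, and finite-block rates approach it as the block size grows. The particular constraint is an assumption for illustration, not the paper's weakly-constrained code.

```python
# Illustrative capacity estimate for a constrained code from its Markov
# (transfer-matrix) description, using the "no two consecutive ones"
# constraint as a stand-in for the paper's weakly-constrained code.
import numpy as np

# Transfer matrix: states = last emitted bit.
A = np.array([[1, 1],     # after a 0 we may emit 0 or 1
              [1, 0]])    # after a 1 we may only emit 0
capacity = np.log2(max(np.linalg.eigvals(A).real))
print(f"asymptotic capacity: {capacity:.4f} bits/symbol")

# Finite block sizes: count admissible words of length n and compare rates.
for n in (8, 16, 32, 64):
    count = np.linalg.matrix_power(A, n - 1).sum()   # admissible length-n words
    print(f"n={n:3d}: block-code rate = {np.log2(count) / n:.4f} bits/symbol")
```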
Abstract:
Minimization of undesirable temperature gradients in all dimensions of a planar solid oxide fuel cell (SOFC) is central to the thermal management and commercialization of this electrochemical reactor. This article explores the operating variables that effectively influence the temperature gradient in a multilayer SOFC stack and presents a trade-off optimization. Three promising approaches are numerically tested via a model-based sensitivity analysis. The numerically efficient thermo-chemical model previously developed by the authors for cell-scale investigations (Tang et al. Chem. Eng. J. 2016, 290, 252-262) is integrated and extended in this work to allow further thermal studies at commercial scales. Initially, the most common approach to minimizing the stack's thermal inhomogeneity, i.e., the use of excess air, is critically assessed. Subsequently, the adjustment of inlet gas temperatures is introduced as a complementary methodology to reduce the efficiency loss caused by the use of excess air. As another practical approach, regulation of the oxygen fraction in the cathode coolant stream is examined from both technical and economic viewpoints. Finally, a multiobjective optimization calculation is conducted to find an operating condition in which the stack's efficiency is maximized and its temperature gradient is minimized.
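The trade-off calculation can be pictured with a weighted-sum sweep like the one below, which trades a toy efficiency surrogate against a toy temperature-gradient surrogate as functions of the excess-air ratio. Both surrogates, the weighting, and the bounds are assumptions standing in for the thermo-chemical stack model, not results from the paper.

```python
# Conceptual sketch of the efficiency vs. temperature-gradient trade-off via a
# weighted-sum sweep. The surrogate functions of excess-air ratio below are
# toy assumptions standing in for the thermo-chemical stack model.
import numpy as np
from scipy.optimize import minimize_scalar

def efficiency(air_ratio):
    # Toy surrogate: efficiency decays as more excess air must be blown.
    return 0.55 - 0.02 * (air_ratio - 2.0)

def temp_gradient(air_ratio):
    # Toy surrogate: more excess air flattens the temperature profile (K/cm).
    return 12.0 / air_ratio

for w in np.linspace(0.1, 0.9, 5):          # weight on efficiency
    obj = lambda a: -(w * efficiency(a)) + (1 - w) * 0.01 * temp_gradient(a)
    res = minimize_scalar(obj, bounds=(2.0, 10.0), method="bounded")
    a = res.x
    print(f"w={w:.1f}: air ratio={a:.2f}, eff={efficiency(a):.3f}, "
          f"dT={temp_gradient(a):.2f} K/cm")
```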
Abstract:
Power flow calculations are one of the most important tools for power system planning and operation. The need to account for uncertainties when performing power flow studies led, among other methods, to the development of the fuzzy power flow (FPF). This kind of model is especially interesting when information is scarce, which is a common situation in liberalized power systems (where generation and commercialization of electricity are market activities). In this framework, the symmetric/constrained fuzzy power flow (SFPF/CFPF) was proposed in order to avoid some of the problems of the original FPF model. The SFPF/CFPF models are suitable for quantifying the adequacy of a transmission network to satisfy "reasonable demands for the transmission of electricity" as defined, for instance, in the European Directive 2009/72/EC. In this work we illustrate how the SFPF/CFPF may be used to evaluate the impact on the adequacy of a transmission system of specific investments in new network elements.
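A simplified stand-in for the fuzzy power flow idea, assuming a DC network model: bus injections are given as intervals (one alpha-cut of a fuzzy number) and, because branch flows are linear in the injections, their possible ranges follow directly from the flow sensitivities. The 3-bus data are illustrative assumptions, not the SFPF/CFPF formulation.

```python
# Simplified stand-in for fuzzy power flow: DC power flow with interval bus
# injections (one alpha-cut), giving ranges of branch flows via sensitivities.
# The 3-bus network data are illustrative assumptions.
import numpy as np

lines = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]   # (from, to, reactance p.u.)
n, slack = 3, 0

# Build the DC susceptance matrix and reduce it by removing the slack bus.
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b
keep = [k for k in range(n) if k != slack]
B_inv = np.linalg.inv(B[np.ix_(keep, keep)])

def line_sens(i, j, x):
    """Sensitivity of the flow on line (i, j) to injections at non-slack buses."""
    theta = np.zeros(n)
    sens = np.zeros(len(keep))
    for col, _ in enumerate(keep):
        theta[keep] = B_inv[:, col]          # angles for 1 p.u. injected at keep[col]
        sens[col] = (theta[i] - theta[j]) / x
    return sens

# Interval injections at buses 1 and 2 (p.u.), e.g. uncertain generation/load.
p_lo = np.array([0.2, -0.8])
p_hi = np.array([0.6, -0.4])

for i, j, x in lines:
    s = line_sens(i, j, x)
    f_min = np.sum(np.where(s > 0, s * p_lo, s * p_hi))
    f_max = np.sum(np.where(s > 0, s * p_hi, s * p_lo))
    print(f"line {i}-{j}: flow in [{f_min:+.3f}, {f_max:+.3f}] p.u.")
```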