28 results for Branch and bound algorithms
Abstract:
This paper describes two algorithms for adaptive power and bit allocation in a multiple input multiple output multiple-carrier code division multiple access (MIMO MC-CDMA) system. The first is the greedy algorithm, which has already been presented in the literature. The other, proposed by the authors, is based on the Lagrange multiplier method. The performances of the two algorithms are compared via Monte Carlo simulations. At the present stage, the simulations are restricted to a single-user MIMO MC-CDMA system, which is equivalent to a MIMO OFDM system. It is assumed that the system operates in a frequency-selective fading environment. The transmitter has partial knowledge of the channel, whose properties are measured at the receiver. The use of the two algorithms results in similar system performance. The advantage of the Lagrange algorithm is that it is much faster than the greedy algorithm. ©2005 IEEE
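Although the abstract does not reproduce the paper's formulations, the greedy allocation it refers to is typically an incremental bit-loading procedure: one bit at a time is given to the subchannel that needs the least additional power to carry it. Below is a minimal sketch under that assumption; the QAM-style incremental-power model, gap constant, and example subchannel gains are illustrative and do not come from the paper.

```python
# Hedged sketch of greedy bit loading over parallel subchannels.
# Power model, gap constant and channel gains are assumptions for illustration.
import numpy as np

def greedy_bit_loading(channel_gains, total_bits, gap=1.0, noise_power=1.0):
    """Assign `total_bits` one bit at a time to the subchannel whose next bit
    costs the least additional transmit power (QAM-style power model)."""
    n = len(channel_gains)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)
    for _ in range(total_bits):
        # Power to carry b bits on subchannel i: gap * noise * (2**b - 1) / |h_i|^2
        next_power = gap * noise_power * (2.0 ** (bits + 1) - 1) / np.abs(channel_gains) ** 2
        incremental = next_power - power
        i = int(np.argmin(incremental))   # cheapest next bit
        bits[i] += 1
        power[i] = next_power[i]
    return bits, power

if __name__ == "__main__":
    h = np.array([1.2, 0.8, 0.3, 1.5])   # hypothetical subchannel gains
    b, p = greedy_bit_loading(h, total_bits=12)
    print(b, p.sum())
```

A Lagrange-multiplier (water-filling style) alternative instead solves for a single water level across all subchannels at once, which is consistent with the abstract's observation that it is much faster than adding bits one at a time.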
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
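For orientation, a minimal sketch of one implicit generalized-alpha step for a linear system M a + C v + K d = f(t) is given below, using the usual Chung-Hulbert parameterization by the spectral radius rho_inf. This is only an illustration of the integrator family discussed above; the paper's explicit/implicit compatibility construction is not reproduced, and the parameter formulas here are the commonly quoted ones, stated as an assumption.

```python
# Hedged sketch of one implicit generalized-alpha step for linear dynamics.
# Chung-Hulbert style parameterization assumed; not the paper's coupled scheme.
import numpy as np

def generalized_alpha_step(M, C, K, f_mid, d, v, a, dt, rho_inf=0.8):
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    gamma = 0.5 - am + af
    beta = 0.25 * (1.0 - am + af) ** 2

    # Newmark predictors (terms that do not involve the unknown acceleration)
    d_pred = d + dt * v + dt ** 2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a

    # Equation of motion evaluated at the generalized mid-point t_{n+1-alpha}
    lhs = (1.0 - am) * M + (1.0 - af) * (gamma * dt * C + beta * dt ** 2 * K)
    rhs = (f_mid
           - am * (M @ a)
           - (1.0 - af) * (C @ v_pred + K @ d_pred)
           - af * (C @ v + K @ d))
    a_new = np.linalg.solve(lhs, rhs)

    d_new = d_pred + beta * dt ** 2 * a_new
    v_new = v_pred + gamma * dt * a_new
    return d_new, v_new, a_new
```

Here rho_inf in [0, 1] controls the high-frequency dissipation, with rho_inf = 1 giving no numerical dissipation.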
Abstract:
Despite the strong influence of plant architecture on crop yield, most crop models either ignore it or deal with it in a very rudimentary way. This paper demonstrates the feasibility of linking a model that simulates the morphogenesis and resultant architecture of individual cotton plants with a crop model that simulates the effects of environmental factors on critical physiological processes and resulting yield in cotton. First, the varietal parameters of the models were made concordant. Then routines were developed to allocate the flower buds produced each day by the crop model amongst the potential positions generated by the architectural model. This allocation is done according to a set of heuristic rules. The final weight of individual bolls and the shedding of buds and fruit caused by water, N, and C stresses are processed in a similar manner. Observations of the positions of harvestable fruits, both within and between plants, made under a variety of agronomic conditions that had resulted in a broad range of plant architectures, were compared to those predicted by the model with the same environmental inputs. As illustrated by comparisons of plant maps, the linked models performed reasonably well, though the performance of the fruiting-point allocation and shedding algorithms could probably be improved by further analysis of the spatial relationships of retained fruit. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Elevated ocean temperatures can cause coral bleaching, the loss of colour from reef-building corals because of a breakdown of the symbiosis with the dinoflagellate Symbiodinium. Recent studies have warned that global climate change could increase the frequency of coral bleaching and threaten the long-term viability of coral reefs. These assertions are based on projecting the coarse output from atmosphere-ocean general circulation models (GCMs) to the local conditions around representative coral reefs. Here, we conduct the first comprehensive global assessment of coral bleaching under climate change by adapting the NOAA Coral Reef Watch bleaching prediction method to the output of a low- and high-climate sensitivity GCM. First, we develop and test algorithms for predicting mass coral bleaching with GCM-resolution sea surface temperatures for thousands of coral reefs, using a global coral reef map and 1985-2002 bleaching prediction data. We then use the algorithms to determine the frequency of coral bleaching and required thermal adaptation by corals and their endosymbionts under two different emissions scenarios. The results indicate that bleaching could become an annual or biannual event for the vast majority of the world's coral reefs in the next 30-50 years without an increase in thermal tolerance of 0.2-1.0 degrees C per decade. The geographic variability in required thermal adaptation found in each model and emissions scenario suggests that coral reefs in some regions, like Micronesia and western Polynesia, may be particularly vulnerable to climate change. Advances in modelling and monitoring will refine the forecast for individual reefs, but this assessment concludes that the global prognosis is unlikely to change without an accelerated effort to stabilize atmospheric greenhouse gas concentrations.
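The NOAA Coral Reef Watch approach referenced above is a thermal-stress accumulation method; a minimal degree-heating-week style sketch is given below. The 1 degC HotSpot cutoff, 12-week window, and 4 degC-week bleaching threshold are commonly cited values and are assumptions here, not the exact settings of the paper's adapted algorithm.

```python
# Hedged sketch of a degree-heating-week (DHW) style bleaching predictor.
# Thresholds and window length are commonly quoted values, assumed here.
import numpy as np

def degree_heating_weeks(weekly_sst, mmm, window=12):
    """Accumulate HotSpots (SST exceedance of at least 1 degC above the maximum
    monthly mean, MMM) over a rolling window, in degC-weeks."""
    sst = np.asarray(weekly_sst, dtype=float)
    hotspot = np.clip(sst - mmm, 0.0, None)
    hotspot[hotspot < 1.0] = 0.0            # only count exceedances of >= 1 degC
    dhw = np.array([hotspot[max(0, i - window + 1): i + 1].sum()
                    for i in range(len(hotspot))])
    return dhw

if __name__ == "__main__":
    sst = 28.0 + np.concatenate([np.zeros(6), np.linspace(0.5, 2.5, 10), np.zeros(6)])
    dhw = degree_heating_weeks(sst, mmm=28.5)
    print("bleaching predicted in weeks:", np.where(dhw >= 4.0)[0])
```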
Abstract:
Deep-frying, which consists of immersing a wet material in a large volume of hot oil, is a process easily adapted to drying rather than cooking materials. A suitable material for drying is sewage sludge, which may be dried using recycled cooking oils (RCO) as the frying oil. One advantage is that this prepares both materials for convenient disposal by incineration. This study examines fry-drying of municipal sewage sludge using recycled cooking oil. The transport processes occurring during fry-drying were monitored through sample weight, temperature, and image analysis. Because the samples are thicker and wetter than common fried foods, high residual moisture is observed in the sludge when the boiling front has reached the geometric center of the sample, suggesting that the operation is heat-transfer controlled only during the first half of the process, followed by the addition of other mechanisms that allow complete drying of the sample. A series of mechanisms comprising four stages (i.e., initial heating accompanied by the onset of surface boiling, film vapor regime, transitional nucleate boiling, and bound water removal) is proposed. In order to study the effect of the operating conditions on the fry-drying kinetics, different oil temperatures (from 120 to 180 degrees C), sample diameters (D = 15 to 25 mm), and initial moisture contents (4.8 and 5.6 kg water per kg of total dry solids) were investigated.
Abstract:
This paper presents new laboratory data on the generation of long waves by the shoaling and breaking of transient-focused short-wave groups. Direct offshore radiation of long waves from the breakpoint is shown experimentally for the first time. High spatial resolution enables identification of the relationship between the spatial gradients of the short-wave envelope and the long-wave surface. This relationship is consistent with radiation stress theory even well inside the surf zone and appears as a result of the strong nonlinear forcing associated with the transient group. In shallow water, the change in depth across the group leads to asymmetry in the forcing, which generates significant dynamic setup in front of the group during shoaling. Strong amplification of the incident dynamic setup occurs after short-wave breaking. The data show the radiation of a transient long wave dominated by a pulse of positive elevation, preceded and followed by weaker trailing waves with negative elevation. The instantaneous cross-shore structure of the long wave shows the mechanics of the reflection process and the formation of a transient node in the inner surf zone. The wave run-up and relative amplitude of the radiated and incident long waves suggest significant modification of the incident bound wave in the inner surf zone and the dominance of long waves generated by the breaking process. It is proposed that these conditions occur when the primary short waves and bound wave are not shallow water waves at the breakpoint. A simple criterion is given to determine these conditions, which generally occur for the important case of storm waves.
Abstract:
In empirical studies of Evolutionary Algorithms, it is usually desirable to evaluate and compare algorithms using as many different parameter settings and test problems as possible, in order to have a clear and detailed picture of their performance. Unfortunately, the total number of experiments required may be very large, which often makes such research work computationally prohibitive. In this paper, the application of a statistical method called racing is proposed as a general-purpose tool to reduce the computational requirements of large-scale experimental studies in evolutionary algorithms. Experimental results are presented that show that racing typically requires only a small fraction of the cost of an exhaustive experimental study.
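As a rough illustration of the racing idea, the sketch below evaluates candidate configurations instance by instance and discards those that are statistically unlikely to be best, using a Hoeffding-style confidence radius. The bound, the assumed score range, and the toy evaluate() function are illustrative assumptions, not the specific statistical test used in the paper.

```python
# Hedged sketch of a racing loop: prune candidates as evidence accumulates.
# Hoeffding-style bound and toy evaluator are assumptions, not the paper's test.
import math
import random

def hoeffding_radius(n, score_range, delta=0.05):
    return score_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def race(candidates, evaluate, n_instances, score_range=1.0, delta=0.05):
    """Minimize mean score; drop candidates whose lower confidence bound lies
    above the best candidate's upper confidence bound."""
    alive = {c: [] for c in candidates}
    for instance in range(n_instances):
        for c in alive:
            alive[c].append(evaluate(c, instance))
        n = instance + 1
        radius = hoeffding_radius(n, score_range, delta)
        means = {c: sum(s) / n for c, s in alive.items()}
        best_upper = min(means.values()) + radius
        alive = {c: s for c, s in alive.items() if means[c] - radius <= best_upper}
        if len(alive) == 1:
            break
    return alive

if __name__ == "__main__":
    random.seed(0)
    def evaluate(cfg, instance):                  # toy stand-in for one EA run
        return random.random() * 0.5 + cfg * 0.1  # candidate 0 is best on average
    print(sorted(race(range(5), evaluate, n_instances=200).keys()))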
Abstract:
Market-based transmission expansion planning gives investors information on where investment is most cost-efficient and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate the system's overall expected economical losses during system operation in a competitive market. It reflects both the investors' and the planner's points of view and further improves on the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottlenecks and the amount of loss arising from these weak points. Sequentially, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors in deciding where to commit their capital. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict current system bottlenecks under future operational conditions and how EEL can be used as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
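Although the paper's EEL and NCP definitions are not given in the abstract, the Monte Carlo ingredient can be illustrated with a toy sketch: sample random system states, price the resulting load curtailment, and average. Everything below (the two-bus system, outage rate, curtailment cost) is an assumption for illustration only.

```python
# Hedged Monte Carlo sketch of an expected-economical-loss style index.
# Toy two-bus system; constants are assumptions, not the paper's EEL/NCP.
import random

def sample_state(line_capacity=100.0, outage_rate=0.05):
    available = random.random() > outage_rate          # line in service?
    demand = random.gauss(90.0, 15.0)                  # MW at the load bus
    return (line_capacity if available else 0.0), max(demand, 0.0)

def economical_loss(capacity, demand, curtailment_cost=1000.0):
    curtailed = max(demand - capacity, 0.0)            # MW that cannot be served
    return curtailed * curtailment_cost                # cost for this sampled hour

def estimate_eel(n_samples=100_000):
    return sum(economical_loss(*sample_state()) for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    random.seed(1)
    print(f"estimated EEL: {estimate_eel():.1f} $/h")
```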
Abstract:
In order to use the finite element method for solving fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins effectively and efficiently, we have presented, in this paper, new concepts and numerical algorithms to deal with the fundamental issues associated with the fluid-rock interaction problems. These fundamental issues are often overlooked by some purely numerical modelers. (1) Since the fluid-rock interaction problem involves heterogeneous chemical reactions between reactive aqueous chemical species in the pore-fluid and solid minerals in the rock masses, it is necessary to develop the new concept of the generalized concentration of a solid mineral, so that two types of reactive mass transport equations, namely, the conventional mass transport equation for the aqueous chemical species in the pore-fluid and the degenerated mass transport equation for the solid minerals in the rock mass, can be solved simultaneously in computation. (2) Since the reaction area between the pore-fluid and mineral surfaces is basically a function of the generalized concentration of the solid mineral, there is a definite need to appropriately consider the dependence of the dissolution rate of a dissolving mineral on its generalized concentration in the numerical analysis. (3) Considering the direct consequence of the porosity evolution with time in the transient analysis of fluid-rock interaction problems, we have proposed the term splitting algorithm and the concept of the equivalent source/sink terms in mass transport equations so that the problem of variable mesh Peclet number and Courant number has been successfully converted into the problem of constant mesh Peclet and Courant numbers. The numerical results from an application example have demonstrated the usefulness of the proposed concepts and the robustness of the proposed numerical algorithms in dealing with fluid-rock interaction problems in pore-fluid saturated hydrothermal/sedimentary basins. (C) 2001 Elsevier Science B.V. All rights reserved.
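For readers unfamiliar with the two equation types mentioned above, a schematic contrast between the conventional and the degenerated mass transport equations is given below. This is a generic reactive-transport form with assumed symbols, not the paper's exact formulation.

```latex
% Schematic forms only: phi = porosity, u = Darcy velocity, D_i = dispersion/
% diffusion tensor, C_i = aqueous concentration, C_s = generalized solid-mineral
% concentration, R_i and R_s = reaction terms. Generic assumptions, not the
% paper's exact equations.
\begin{align}
\frac{\partial(\phi C_i)}{\partial t}
  + \nabla\!\cdot\!\left(\mathbf{u}\,C_i\right)
  - \nabla\!\cdot\!\left(\phi D_i \nabla C_i\right) &= R_i
  && \text{(aqueous species: advection--dispersion--reaction)}\\
\frac{\partial C_s}{\partial t} &= R_s
  && \text{(solid mineral: degenerated transport, no advection or dispersion)}
\end{align}
```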
Abstract:
Minimal perfect hash functions are used for memory efficient storage and fast retrieval of items from static sets. We present an infinite family of efficient and practical algorithms for generating order preserving minimal perfect hash functions. We show that almost all members of the family construct space and time optimal order preserving minimal perfect hash functions, and we identify the one with minimum constants. Members of the family generate a hash function in two steps. First a special kind of function into an r-graph is computed probabilistically. Then this function is refined deterministically to a minimal perfect hash function. We give strong theoretical evidence that the first step uses linear random time. The second step runs in linear deterministic time. The family not only has theoretical importance, but also offers the fastest known method for generating perfect hash functions.
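The two-step construction described above can be illustrated for the r = 2 case in the style of graph-based order-preserving schemes: keys are mapped probabilistically to edges of a sparse random graph, and if the graph is acyclic the vertex labels are assigned deterministically so that each key hashes to its original index. The hash mixing, vertex-to-key ratio, and retry limit below are assumptions, not the paper's optimal constants.

```python
# Hedged sketch of an order-preserving minimal perfect hash for the r = 2 case:
# probabilistic graph construction, then deterministic labeling of an acyclic graph.
import random

def build_ophf(keys, ratio=2.1, max_tries=100):
    m = len(keys)
    n = int(ratio * m) + 1                      # vertex count keeps the graph sparse
    for _ in range(max_tries):
        seed1, seed2 = random.random(), random.random()
        h1 = lambda k: hash((seed1, k)) % n
        h2 = lambda k: hash((seed2, k)) % n
        edges = [(h1(k), h2(k)) for k in keys]
        if any(u == v for u, v in edges):
            continue                            # self-loop: retry with new functions
        adj = {v: [] for v in range(n)}
        for i, (u, v) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))
        g, visited, ok = [0] * n, [False] * n, True
        for start in range(n):                  # label each component by traversal
            if visited[start]:
                continue
            stack, seen_edges = [start], set()
            visited[start] = True
            while stack and ok:
                u = stack.pop()
                for v, i in adj[u]:
                    if i in seen_edges:
                        continue
                    seen_edges.add(i)
                    if visited[v]:
                        ok = False              # cycle: graph not acyclic, retry
                        break
                    g[v] = (i - g[u]) % m       # force (g[u] + g[v]) % m == i
                    visited[v] = True
                    stack.append(v)
            if not ok:
                break
        if ok:
            return lambda k: (g[h1(k)] + g[h2(k)]) % m
    raise RuntimeError("failed to find an acyclic graph")

if __name__ == "__main__":
    keys = ["apple", "banana", "cherry", "date", "elderberry"]
    h = build_ophf(keys)
    print([h(k) for k in keys])   # expected: [0, 1, 2, 3, 4]
```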
Abstract:
Information on decomposition of harvest residues may assist in the maintenance of soil fertility in second rotation (2R) hoop pine plantations (Araucaria cunninghamii Aiton ex A. Cunn.) of subtropical Australia. The experiment was undertaken to determine the dynamics of residue decomposition and the fate of residue-derived N. We used N-15-labeled hoop pine foliage, branch, and stem material in microplots over a 30-mo period following harvesting. We examined the decomposition of each component both singly and combined, and used C-13 cross-polarization and magic-angle spinning nuclear magnetic resonance (C-13 CPMAS NMR) to chart C transformations in decomposing foliage. Residue-derived N-15 was immobilized in the 0- to 5-cm soil layer, with approximately 40% N-15 recovery in the soil from the combined residues by the end of the 30-mo period. Total recovery of N-15 in residues and soil varied between 60 and 80% for the combined-residue microplots, with 20 to 40% of the residue N-15 apparently lost. When residues were combined within microplots, the rate of foliage decomposition decreased by 30%, while the rates of branch and stem decomposition increased by 50 and 40%, respectively, compared with the rates for these components when decomposed separately. Residue decomposition studies should include a combined-residue treatment. Based on C-13 CPMAS NMR spectra for decomposing foliage, we obtained good correlations for methoxyl C, aryl C, carbohydrate C and phenolic C with residue mass, N-15 enrichment, and total N. The ratio of carbohydrate C to methoxyl C may be useful as an indicator of harvest residue decomposition in hoop pine plantations.
Abstract:
Obstructive sleep apnea (OSA) is a highly prevalent disease in which the upper airways collapse during sleep, leading to serious consequences. The gold standard of diagnosis, polysomnography (PSG), requires a full-night hospital stay connected to more than ten measurement channels that require physical contact with sensors. PSG is inconvenient, expensive and unsuited for community screening. Snoring is the earliest symptom of OSA, but its potential in clinical diagnosis is not yet fully recognized. Diagnostic systems that intend to use snore-related sounds (SRS) face the difficult problem of how to define a snore. In this paper, we present a working definition of a snore, and propose algorithms to segment SRS into classes of pure breathing, silence and voiced/unvoiced snores. We propose a novel feature termed the 'intra-snore-pitch-jump' (ISPJ) to diagnose OSA. Working on clinical data, we show that ISPJ delivers OSA detection sensitivities of 86-100% while holding specificity at 50-80%. These numbers indicate that snore sounds and the ISPJ have the potential to be good candidates for a take-home device for OSA screening. Snore sounds have the significant advantage that they can be conveniently acquired with low-cost non-contact equipment. The segmentation results presented in this paper were derived using data from eight patients as the training set and another eight patients as the testing set. ISPJ-based OSA detection results were derived using training data from 16 subjects and testing data from 29 subjects.
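The abstract does not define the ISPJ feature precisely; as a hedged illustration of the kind of computation it implies, the sketch below estimates a frame-wise autocorrelation pitch track within a snore episode and reports the largest frame-to-frame pitch jump. The frame sizes, pitch range, voicing check, and jump measure are all assumptions, not the paper's definition.

```python
# Hedged sketch of an intra-snore pitch-jump style feature: autocorrelation pitch
# per frame, then the largest frame-to-frame pitch ratio. All settings assumed.
import numpy as np

def frame_pitch(frame, fs, fmin=30.0, fmax=300.0):
    """Autocorrelation pitch estimate (Hz) for one frame, or None if unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    if hi >= len(ac) or ac[0] <= 0:
        return None
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag if ac[lag] > 0.3 * ac[0] else None   # crude voicing check

def intra_snore_pitch_jump(snore, fs, frame_ms=40, hop_ms=20):
    frame_len, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    pitches = [frame_pitch(snore[i:i + frame_len], fs)
               for i in range(0, len(snore) - frame_len, hop)]
    voiced = [p for p in pitches if p is not None]
    if len(voiced) < 2:
        return 0.0
    ratios = [max(a, b) / min(a, b) for a, b in zip(voiced, voiced[1:])]
    return max(ratios)                                    # largest within-snore jump

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 0.6, 1 / fs)
    snore = np.sin(2 * np.pi * np.where(t < 0.3, 80, 160) * t)  # toy jump 80 -> 160 Hz
    print(f"ISPJ-like jump ratio: {intra_snore_pitch_jump(snore, fs):.2f}")
```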