882 results for Branch and Bound algorithms


Relevance:

100.00%

Publisher:

Abstract:

The paper deals with the determination of an optimal schedule for the so-called mixed shop problem when the makespan has to be minimized. In such a problem, some jobs have fixed machine orders (as in the job-shop), while the operations of the other jobs may be processed in arbitrary order (as in the open-shop). We prove binary NP-hardness of the preemptive problem with three machines and three jobs (two jobs have fixed machine orders and one may have an arbitrary machine order). We answer all other remaining open questions on the complexity status of mixed-shop problems with the makespan criterion by presenting different polynomial and pseudopolynomial algorithms.
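A hypothetical illustration (not taken from the paper): the makespan of a candidate dispatch order, the quantity an exact scheduling algorithm evaluates at each node of its search, can be computed by simulating earliest-start times per job and per machine. Job and operation data here are invented.

```python
def makespan(sequence):
    """sequence: list of (job, machine, duration) in dispatch order.
    Each operation starts as soon as both its job and its machine are free."""
    job_ready = {}    # earliest time each job can start its next operation
    mach_ready = {}   # earliest time each machine is free
    end = 0
    for job, mach, dur in sequence:
        start = max(job_ready.get(job, 0), mach_ready.get(mach, 0))
        finish = start + dur
        job_ready[job] = finish
        mach_ready[mach] = finish
        end = max(end, finish)
    return end

# Two jobs with fixed machine orders (job-shop style) and one free job
# (open-shop style) whose position the solver has chosen.
sched = [("J1", "M1", 3), ("J2", "M2", 2), ("J3", "M3", 4),
         ("J1", "M2", 1), ("J2", "M1", 2)]
print(makespan(sched))  # → 5
```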

Relevance:

100.00%

Publisher:

Abstract:

We present high-resolution spectroscopic observations of 21 B-type stars, selected from the Edinburgh-Cape Blue Object Survey. Model atmosphere analyses confirm that 14 of these stars are young, main-sequence B-type objects with Population I chemical compositions. The remaining seven are found to be evolved objects, including subdwarfs, horizontal-branch and post-AGB objects. A kinematical analysis shows that all 14 young main-sequence stars could have formed in the disc and subsequently been ejected into the halo. These results are combined with the analysis of a previous subsample of stars taken from the Survey. Of the complete sample, 31 have been found to be young main-sequence objects, and formation in the disc with subsequent ejection into the halo is again found to be a plausible scenario.

Relevance:

100.00%

Publisher:

Abstract:

We present model atmosphere analyses of high-resolution Keck and VLT optical spectra for three evolved stars in globular clusters, viz. ZNG-1 in M 10, ZNG-1 in M 15 and ZNG-1 in NGC 6712. The derived atmospheric parameters and chemical compositions confirm the programme stars to be in the post-Asymptotic Giant Branch (post-AGB) evolutionary phase. Differential abundance analyses reveal CNO abundance patterns in M 10 ZNG-1, and possibly M 15 ZNG-1, which suggest that both objects may have evolved off the AGB before the third dredge-up occurred. The abundance pattern of these stars is similar to the third class of optically bright post-AGB objects discussed by van Winckel (1997). Furthermore, M 10 ZNG-1 exhibits a large C underabundance (with Delta[C/O] similar to -1.6 dex), typical of other hot post-AGB objects. Differential Delta[alpha/Fe] abundance ratios in both M 10 ZNG-1 and NGC 6712 ZNG-1 are found to be approximately 0.0 dex, with the Fe abundance of the former being in disagreement with the cluster metallicity of M 10. Given that the Fe absorption features in both M 10 ZNG-1 and NGC 6712 ZNG-1 are well observed and reliably modelled, we believe these differential Fe abundance estimates to be secure. However, our Fe abundance is difficult to explain in terms of previous evolutionary processes that occur on both the Horizontal Branch and the AGB.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: We appraised 23 biomarkers previously associated with urothelial cancer in a case-control study. Our aim was to determine whether single biomarkers and/or multivariate algorithms significantly improved on the predictive power of an algorithm based on demographics for prediction of urothelial cancer in patients presenting with hematuria. METHODS: Twenty-two biomarkers in urine and carcinoembryonic antigen (CEA) in serum were evaluated using enzyme-linked immunosorbent assays (ELISAs) and biochip array technology in 2 patient cohorts: 80 patients with urothelial cancer, and 77 controls with confounding pathologies. We used Forward Wald binary logistic regression analyses to create algorithms based on demographic variables, designated prior predicted probability (PPP), and multivariate algorithms, which included PPP as a single variable. Areas under the curve (AUC) were determined after receiver operating characteristic (ROC) analysis for single biomarkers and algorithms. RESULTS: After univariate analysis, 9 biomarkers were differentially expressed (t test; P
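The AUC obtained from ROC analysis equals the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with invented biomarker scores (not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(random positive scores above random negative),
    counting ties as 1/2 (Mann-Whitney U divided by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

cases = [0.9, 0.8, 0.7, 0.4]     # hypothetical biomarker scores, cancer cases
controls = [0.6, 0.3, 0.2, 0.1]  # hypothetical scores, hematuria controls
print(roc_auc(cases, controls))  # → 0.9375
```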

Relevance:

100.00%

Publisher:

Abstract:

An electron microscopical examination has been made of the fine structure and disposition of pancreatic polypeptide immunoreactive cells associated with the egg-forming apparatus in Diclidophora merlangi. The cell bodies are positioned in the parenchyma surrounding the ootype and taper to axon-like processes that extend to the ootype wall. The terminal regions of these processes branch and anastomose and, in places, the swollen endings or varicosities form synaptic appositions with the muscle fibres in the ootype wall. The cells are characterized by an extensive GER-Golgi system that is involved in the assembly and packaging of dense-cored vesicles. The vesicles accumulate in the axons and terminal varicosities, and their contents were found to be immunoreactive with antisera raised to the C-terminal hexapeptide amide of pancreatic polypeptide. It is concluded that the cells are neurosecretory in appearance and that, functionally, their secretions may serve to regulate ootype motility and thereby help co-ordinate egg production in the worm.

Relevance:

100.00%

Publisher:

Abstract:

An ab initio approach has been applied to study multiphoton detachment rates for the negative hydrogen ion in the lowest nonvanishing order of perturbation theory. The approach is based on the use of B splines allowing an accurate treatment of the electronic repulsion. Total detachment rates have been determined for two- to six-photon processes as well as partial rates for detachment into the different final symmetries. It is shown that B-spline expansions can yield accurate continuum and bound-state wave functions in a very simple manner. The calculated total rates for two- and three-photon detachment are in good agreement with other perturbative calculations. For more than three-photon detachment little information has been available before now. While the total cross sections show little structure, a fair amount of structure is predicted in the partial cross sections. In the two-photon process, it is shown that the detached electrons mainly have s character. For four- and six-photon processes, the contribution from the d channel is the most important. For three- and five-photon processes p electrons dominate the electron emission spectrum. Detachment rates for s and p electrons show minima as a function of photon energy. © 1994 The American Physical Society.
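The B-spline expansions referred to above obey the Cox-de Boor recursion. A minimal sketch (illustrative clamped knot vector, not the paper's basis) verifying the partition-of-unity property that makes such expansions convenient for bound and continuum wave functions:

```python
def bspline_basis(i, k, t, knots):
    """Evaluate the i-th B-spline basis function of order k (degree k-1)
    at t via the Cox-de Boor recursion; repeated knots are guarded."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = [0, 0, 0, 1, 2, 3, 3, 3]  # clamped (repeated end) knot vector
# The five order-3 (quadratic) basis functions sum to 1 inside the span.
total = sum(bspline_basis(i, 3, 1.5, knots) for i in range(5))
print(round(total, 10))  # → 1.0
```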

Relevance:

100.00%

Publisher:

Abstract:

The incubation of the model pollutant [U-14C]-4-fluorobiphenyl (4FBP) in soil, in the presence and absence of biphenyl (a co-substrate), was carried out in order to study the qualitative disposition and fate of the compound using 14C-HPLC and 19F NMR spectroscopy. Components accounted for using the radiolabel were volatilization, CO2 evolution, organic solvent extractable material and bound residue. Quantitative analysis of these data gave a complete mass balance. After sample preparation, 14C-HPLC was used to establish the number of 4FBP-related components present in the organic solvent extract. 19F NMR was also used to quantify the organic extracts and to identify the components of the extract. Both approaches showed that the solvent extractable fractions comprised only parent compound, with no metabolites present. As the 14C radiolabel was found to be incorporated into the soil organic matter (SOM), this indicates that metabolites were being generated but were highly transitory, as incorporation into the SOM was rapid. The effect of including the co-substrate biphenyl was to increase the overall rate of degradation of 4FBP in soil. The kinetics of disappearance of the parent compound from the soil were investigated using the data obtained from both techniques. This is the first report describing the degradation of a fluorinated biphenyl in soil.
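The abstract does not state which rate law was fitted; as an illustration only, a first-order disappearance constant can be estimated from (synthetic, invented) concentration-time data by a log-linear least-squares fit:

```python
import math

def first_order_rate(times, concentrations):
    """Least-squares slope of ln(C) versus t for C(t) = C0 * exp(-k t);
    returns the rate constant k."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# Synthetic decay series generated with k = 0.2 per day
days = [0, 5, 10, 20]
conc = [100.0, 36.79, 13.53, 1.832]   # percent of applied 14C remaining
k = first_order_rate(days, conc)
print(round(k, 3))  # → 0.2
```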

Relevance:

100.00%

Publisher:

Abstract:

An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower-responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach applying different hydrograph separation techniques, supplemented by lumped hydrological modelling, for calculating the Baseflow Index (BFI), with a view to developing an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km2). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 in the BFI predicted for one catchment when the less reliable fixed interval, sliding interval and local minima turning point methods are included. This variation is reduced to 0.167 with these methods omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. It is observed that while the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, when applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology. These two separation methods, in conjunction with the NAM model, were selected to form an integrated approach to assessing BFI in catchments.
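The Eckhardt algorithm mentioned above is a two-parameter recursive digital filter. A minimal sketch with synthetic flow data; the filter parameter and BFImax values here are illustrative defaults, not the study's calibrated values:

```python
def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.8):
    """Eckhardt two-parameter recursive digital filter:
    b_t = ((1-BFImax)*alpha*b_{t-1} + (1-alpha)*BFImax*q_t) / (1 - alpha*BFImax),
    constrained so that baseflow never exceeds total flow."""
    b = [q[0] * bfi_max]              # initialise baseflow at the first step
    for qt in q[1:]:
        bt = (((1 - bfi_max) * alpha * b[-1] + (1 - alpha) * bfi_max * qt)
              / (1 - alpha * bfi_max))
        b.append(min(bt, qt))         # baseflow cannot exceed streamflow
    return b

flow = [5, 4, 20, 35, 18, 9, 6, 5, 4, 4]   # synthetic hydrograph (m3/s)
base = eckhardt_baseflow(flow)
bfi = sum(base) / sum(flow)                # Baseflow Index
print(round(bfi, 3))
```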

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study which took place in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was presence of a typical glaucomatous optic disk appearance on stereoscopic examination, which was judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive value, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for area under the ROC curve ranged from 82.5%-93.9%. The largest area (93.9%) under the ROC curve was obtained with the FDT criteria, defining abnormality as presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P < .0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The C-20 FDT and G1-TOP tests take approximately one quarter and one half, respectively, of the time taken by the 24-2 HSF. © 2002 by Elsevier Science Inc. All rights reserved.
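Sensitivity and specificity of a screening test follow directly from the confusion counts against the gold standard. A minimal sketch with invented labels (not the study's data):

```python
def sens_spec(test_positive, disease):
    """test_positive, disease: parallel lists of booleans, one entry per eye.
    Returns (sensitivity, specificity)."""
    tp = sum(t and d for t, d in zip(test_positive, disease))
    fn = sum((not t) and d for t, d in zip(test_positive, disease))
    tn = sum((not t) and (not d) for t, d in zip(test_positive, disease))
    fp = sum(t and (not d) for t, d in zip(test_positive, disease))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcomes for five eyes
screen = [True, True, False, True, False]   # test flagged abnormal
glauc = [True, True, True, False, False]    # gold-standard diagnosis
sens, spec = sens_spec(screen, glauc)
print(round(sens, 2), round(spec, 2))  # → 0.67 0.5
```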

Relevance:

100.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
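This is not the authors' algorithm, but a naive synchronous simulation of max-rank flooding (each node repeatedly forwards the highest rank it has seen) illustrates the intuition behind the Ω(m) bound: every edge ends up carrying messages. Graph and ranks are invented:

```python
def flood_max(adj, ranks, rounds=None):
    """Synchronous flooding of the maximum rank over an undirected graph.
    adj: node -> list of neighbours. Returns (best-known rank per node,
    total messages sent)."""
    best = dict(ranks)
    messages = 0
    for _ in range(rounds or len(adj)):     # n rounds always suffice
        new = {}
        for v, nbrs in adj.items():
            messages += len(nbrs)           # v sends its best to each neighbour
            new[v] = max([best[v]] + [best[u] for u in nbrs])
        best = new
    return best, messages

# 4-cycle: n = 4 nodes, m = 4 edges (each edge appears twice below)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
ranks = {0: 17, 1: 42, 2: 8, 3: 23}         # ranks drawn at random by the nodes
best, messages = flood_max(adj, ranks)
print(set(best.values()), messages)  # → {42} 32
```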

Relevance:

100.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.

Relevance:

100.00%

Publisher:

Abstract:

Background: Late-onset Alzheimer's disease (AD) is heritable with 20 genes showing genome-wide association in the International Genomics of Alzheimer's Project (IGAP). To identify the biology underlying the disease, we extended these genetic data in a pathway analysis.

Methods: The ALIGATOR and GSEA algorithms were used in the IGAP data to identify associated functional pathways and correlated gene expression networks in human brain.

Results: ALIGATOR identified an excess of curated biological pathways showing enrichment of association. Enriched areas of biology included the immune response (P = 3.27 × 10^-12 after multiple testing correction for pathways), regulation of endocytosis (P = 1.31 × 10^-11), cholesterol transport (P = 2.96 × 10^-9), and proteasome-ubiquitin activity (P = 1.34 × 10^-6). Correlated gene expression analysis identified four significant network modules, all related to the immune response (corrected P = .002-.05).

Conclusions: The immune response, regulation of endocytosis, cholesterol transport, and protein ubiquitination represent prime targets for AD therapeutics. (C) 2015 Published by Elsevier Inc. on behalf of The Alzheimer's Association.
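Pathway-enrichment P values of the kind reported above are commonly one-sided hypergeometric tests on the overlap between an associated gene list and a pathway. A minimal sketch with invented counts (not IGAP's data):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(X >= k) when drawing n genes from a genome of N genes,
    of which K belong to the pathway: one-sided hypergeometric test."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hypothetical: 20,000 genes, a 150-gene immune pathway, 300 associated
# genes, 12 of which fall in the pathway (expected overlap is only 2.25).
p = hypergeom_enrichment_p(20000, 150, 300, 12)
print(p < 0.05)  # → True
```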

Relevance:

100.00%

Publisher:

Abstract:

In recent years, a wide variety of centralised and decentralised algorithms have been proposed for residential charging of electric vehicles (EVs). In this paper, we present a mathematical framework which casts the EV charging scenarios addressed by these algorithms as optimisation problems having either temporal or instantaneous optimisation objectives with respect to the different actors in the power system. Using this framework and a realistic distribution network simulation testbed, we provide a comparative evaluation of a range of different residential EV charging strategies, highlighting in each case positive and negative characteristics.
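As one illustrative charging strategy of the kind such frameworks compare (not an algorithm from the paper), a greedy valley-filling scheduler allocates EV charging energy to the lowest-load hours first, flattening the aggregate demand profile. All numbers are invented:

```python
def valley_fill(base_load, total_energy, power_cap):
    """Greedy valley-filling: place charging energy into the slots with the
    lowest resulting load, subject to a per-slot charging power cap."""
    charge = [0.0] * len(base_load)
    remaining = float(total_energy)
    eps = 1.0                            # allocate in 1 kWh increments
    while remaining > 0:
        # slots that still have headroom under the cap
        candidates = [j for j in range(len(base_load))
                      if charge[j] + eps <= power_cap]
        # fill the slot whose total load (base + charging) is lowest
        j = min(candidates, key=lambda j: base_load[j] + charge[j])
        charge[j] += eps
        remaining -= eps
    return charge

night = [6, 5, 3, 2, 2, 4]               # household base load per hour (kW)
ev = valley_fill(night, total_energy=6, power_cap=4)
print(ev)  # → [0.0, 0.0, 2.0, 2.0, 2.0, 0.0]
```

The resulting schedule fills the overnight valley without raising the evening peak, the instantaneous objective that several of the compared decentralised algorithms target.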