862 results for Low Autocorrelation Binary Sequence Problem
Abstract:
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce the vibratory loads at the rotor hub, which are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces, constructed using the central composite design from the theory of design of experiments, adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis and optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
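As an illustration of the response-surface step, the sketch below fits a second-order polynomial surrogate to samples of an expensive objective by least squares. This is a minimal Python sketch: the two design variables, the sample plan (random rather than a central composite design) and the stand-in objective are hypothetical placeholders, not the paper's aeroelastic model.

```python
import numpy as np
from itertools import combinations

def quadratic_features(X):
    """Build [1, x_i, x_i^2, x_i*x_j] columns for a second-order response surface."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] ** 2 for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(d), 2)]
    return np.column_stack(cols)

# Hypothetical stand-in for one run of the expensive aeroelastic analysis.
def objective(x):
    return 1.0 + (x[0] - 0.3) ** 2 + 0.5 * (x[1] + 0.2) ** 2 + 0.1 * x[0] * x[1]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(25, 2))        # sample designs near the baseline
y = np.array([objective(x) for x in X])     # evaluate the true model per sample

coef, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

x_test = np.array([[0.1, -0.1]])
print("surrogate:", quadratic_features(x_test) @ coef, "true:", objective(x_test[0]))
```

In practice the random samples would be replaced by the points of a central composite design around the baseline blade, after which the optimizer queries only the cheap surrogate.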
Abstract:
A systematic approach is developed for the scaling analysis of the momentum, heat and species conservation equations pertaining to the solidification of a binary mixture. The problem formulation and the description of the boundary conditions are kept fairly general, so that a large class of problems can be addressed. Analysis of the momentum equations, coupled with phase-change considerations, leads to the establishment of an advection velocity scale. Analysis of the energy equation leads to an estimate of the solid layer thickness. Different regimes corresponding to different dominant modes of transport are identified simultaneously. A comparative study involving several cases of possible thermal boundary conditions is also performed. Finally, a scaling analysis of the species conservation equation is carried out, revealing the effect of a non-equilibrium solidification model on solute segregation and species distribution. It is shown that non-equilibrium effects result in enhanced macrosegregation compared with an equilibrium model. To assess the scaling analysis, the predictions are validated against corresponding computational results.
Abstract:
In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n² log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths that arise in the laboratory. We give a robust O(n⁴) worst-case algorithm that can provably tolerate an absolute error of O(Δ/n) (where Δ is the minimum inter-site distance) while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm in Chen et al.
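A toy Python illustration of why end-labeling removes the mirror-image ambiguity of ordinary partial digest (the molecule and measurements are hypothetical, and the paper's actual algorithm additionally exploits the unlabeled fragments and tolerates length errors): if every measured fragment contains the labeled left end, each length is directly a site coordinate, so sorting recovers the map uniquely.

```python
def sites_from_labeled_fragments(labeled_lengths):
    """Toy reconstruction: when each fragment is anchored at the labeled
    left end, a measured length *is* a site coordinate, so sorting the
    distinct lengths recovers the restriction map uniquely."""
    return sorted(set(labeled_lengths))

# Hypothetical molecule with sites at 3, 7, 12 and total length 20;
# duplicates model fragments observed more than once.
measured = [7, 3, 20, 12, 7]
print(sites_from_labeled_fragments(measured))  # [3, 7, 12, 20]
```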
Abstract:
We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied by Abraham et al. (SIAM J. Comput. 37(4):1030-1045, 2007). If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity: the unpopularity factor, denoted by u(M), and the unpopularity margin, denoted by g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if the instance graph G does not admit a popular matching, then u(M) ≥ 2 for all matchings M in G. Here we show that a matching M that achieves u(M) = 2 can be computed in O(m√n) time (where m is the number of edges in G and n is the number of nodes) provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs H = H_2, H_3, ..., H_k such that if H_k admits a matching that matches all people, then we can compute in O(km√n) time a matching M such that u(M) ≤ k − 1 and g(M) ≤ n(1 − 2/k). Simulation results suggest that our algorithm finds a matching with low unpopularity in random instances.
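The popularity criterion itself is straightforward to state in code. The following minimal sketch (hypothetical data structures, not from the paper) counts, for two matchings, how many people strictly prefer one to the other; M' defeats M when its count is strictly larger, and M is popular when no matching defeats it.

```python
def votes(prefs, A, B):
    """Number of people who strictly prefer their job under matching A to
    their job under matching B. prefs[p]: p's ranked job list (best first);
    A.get(p): p's job under A, or None if unmatched (worse than any job)."""
    def rank(p, job):
        return prefs[p].index(job) if job in prefs[p] else len(prefs[p])
    return sum(1 for p in prefs if rank(p, A.get(p)) < rank(p, B.get(p)))

prefs = {"ann": ["tex", "doc"], "bob": ["tex"]}
M1 = {"ann": "tex"}                     # bob left unmatched
M2 = {"ann": "doc", "bob": "tex"}
print(votes(prefs, M2, M1), votes(prefs, M1, M2))  # 1 1 -> neither defeats the other
```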
Abstract:
We determine the optimal allocation of power between the analog and digital sections of an RF receiver while meeting the BER constraint. Unlike conventional RF receiver designs, we treat the SNR at the output of the analog front end (SNR_AD) as a design parameter rather than a specification to arrive at this optimal allocation. We first determine the relationship of SNR_AD to the resolution and operating frequency of the digital section. We then use power models for the analog and digital sections to solve the power minimization problem. As an example, we consider an 802.15.4-compliant low-IF receiver operating at 2.4 GHz in 0.13 μm technology with a 1.2 V power supply. We find that the overall receiver power is minimized by having the analog front end provide an SNR of 1.3 dB and the ADC and digital section operate at 1-bit resolution with an 18 MHz sampling frequency, achieving a power dissipation of 7 mW.
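The underlying trade-off can be summarized as a one-line optimization; the symbols below are generic placeholders rather than the paper's exact power models:

```latex
\min_{\mathrm{SNR}_{AD},\,B,\,f_s}\; P_A(\mathrm{SNR}_{AD}) + P_D(B, f_s)
\quad\text{subject to}\quad
\mathrm{BER}(\mathrm{SNR}_{AD}, B, f_s) \le \mathrm{BER}_{\mathrm{target}},
```

where P_A grows with the SNR the analog front end must deliver, and P_D grows with the resolution B and sampling frequency f_s of the ADC and digital section; relaxing SNR_AD shifts burden from the first term to the second.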
Abstract:
A low-correlation interleaved QAM sequence family is presented here. In a CDMA setting, these sequences have the ability to transport a large amount of data as well as enable variable-rate signaling on the reverse link. The new interleaved selected family INQ has period N and a normalized maximum correlation parameter θ̄_max bounded above by approximately a√N, where a ranges from 1.17 in the 16-QAM case to 1.99 for large M²-QAM, where M = 2^m, m ≥ 2. Each user is enabled to transfer m + 1 bits of data per period of the spreading sequence. These constructions have the lowest known value of maximum correlation of any sequence family with the same alphabet.
Abstract:
Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk³) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^{5/2} k min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc³) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem. This algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem called the minimum T-cut problem. Given T ⊆ V of even cardinality, we present an Õ(m + nk³) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
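The tree T is convenient to query. The Python sketch below (the tree representation is hypothetical, and constructing T is of course the hard part the paper solves) answers a pairwise query exactly as described: a same-class pair has connectivity above k; otherwise the answer is the lightest edge weight on the tree path.

```python
import math
from collections import deque

def edge_connectivity_query(tree, node_of, s, t, k):
    """tree: adjacency dict {class: [(neighbor_class, weight), ...]} over the
    partition classes; node_of[v]: the class containing vertex v. Returns the
    edge connectivity of (s, t) if it is at most k, else reports '> k'."""
    a, b = node_of[s], node_of[t]
    if a == b:
        return f"> {k}"                  # same class: connectivity exceeds k
    best = {a: math.inf}                 # min edge weight on the path from a
    q = deque([a])
    while q:                             # BFS over the tree
        u = q.popleft()
        for v, w in tree[u]:
            if v not in best:
                best[v] = min(best[u], w)
                q.append(v)
    return best[b]

# Toy partition tree: classes A-B-C on a path, edge weights 2 and 3.
tree = {"A": [("B", 2)], "B": [("A", 2), ("C", 3)], "C": [("B", 3)]}
node_of = {"s": "A", "t": "C", "u": "C"}
print(edge_connectivity_query(tree, node_of, "s", "t", k=4))  # 2
print(edge_connectivity_query(tree, node_of, "t", "u", k=4))  # > 4
```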
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the length of the jobs that are already scheduled, i.e. p-s-d setup times. The learning effect reduces the actual processing time of a job because the workers are involved in doing the same job or activity repeatedly; hence, the processing time of a job depends on its position in the sequence. We consider the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, s_psd/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). There are two parameters, a and b, denoting the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, s_psd/TADC problem for a given value of a is carried out in this study. In addition, a computational algorithm is developed to obtain the number of optimal sequences and the range of b in which each of the sequences is optimal, for a given value of a. We derive two bounds, b* for the normalising constant b and a* for the learning index a. We also show that, when a < a* or b > b*, the optimal sequence is obtained by arranging the longest job in the first position and the rest of the jobs in shortest processing time order.
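To make the setting concrete, the Python sketch below evaluates TADC for the rule stated above. The processing-time and setup model is an assumption in the spirit of Kuo and Yang (2007): the job in position r takes p_r·r^a time units and is preceded by a setup of b times the actual processing time already accumulated; the normalisation details may differ from the paper's.

```python
def tadc(seq, a, b):
    """Total absolute difference in completion times (TADC) under a
    position-based learning effect and p-s-d setups. Assumed model:
    job in position r takes p_r * r**a (learning index a <= 0), preceded
    by a setup of b * (actual processing time already accumulated)."""
    completion, processed, comps = 0.0, 0.0, []
    for r, p in enumerate(seq, start=1):
        actual = p * r ** a
        completion += b * processed + actual
        processed += actual
        comps.append(completion)
    return sum(abs(x - y) for i, x in enumerate(comps) for y in comps[i + 1:])

# Rule from the abstract: longest job first, the rest in SPT order.
jobs = sorted([5.0, 2.0, 3.0, 1.0], reverse=True)
seq = [jobs[0]] + sorted(jobs[1:])
print(seq, tadc(seq, a=-0.3, b=0.1))
```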
Abstract:
Large instruction windows and issue queues are key to exploiting greater instruction-level parallelism in out-of-order superscalar processors. However, the cycle time and energy consumption of conventional large monolithic issue queues are high. Previous efforts to reduce cycle time segment the issue queue and pipeline wakeup; unfortunately, this results in significant IPC loss. Other proposals, which address energy efficiency by avoiding only the unnecessary tag comparisons, do not reduce broadcasts, and these schemes also increase the issue latency. To address both these issues comprehensively, we propose the Scalable Low-power Issue Queue (SLIQ). SLIQ augments a pipelined issue queue with direct indexing to mitigate the problem of delayed wakeups while reducing the cycle time. The SLIQ design also naturally leads to significant energy savings by reducing both the number of tag broadcasts and the comparisons required. A 2-segment SLIQ incurs an average IPC loss of 0.2% over the entire SPEC CPU2000 suite, while achieving a 25.2% reduction in issue latency when compared to a monolithic 128-entry issue queue for an 8-wide superscalar processor. An 8-segment SLIQ improves scalability by reducing the issue latency by 38.3% while incurring an IPC loss of only 2.3%. Further, the 8-segment SLIQ significantly reduces the energy consumption and the energy-delay product by 48.3% and 67.4%, respectively, on average.
Abstract:
A 30-d course of oral administration of a semipurified extract of the root of Withania somnifera, consisting predominantly of withanolides and withanosides, reversed behavioral deficits, plaque pathology, and the accumulation of β-amyloid peptides (Aβ) and oligomers in the brains of middle-aged and old APP/PS1 Alzheimer's disease transgenic mice. It was similarly effective in reversing behavioral deficits and plaque load in APPSwInd mice (line J20). The temporal sequence involved an increase in plasma Aβ and a decrease in brain Aβ monomer after 7 d, indicating increased transport of Aβ from the brain to the periphery. Enhanced expression of low-density lipoprotein receptor-related protein (LRP) in brain microvessels and of the Aβ-degrading protease neprilysin (NEP) occurred 14-21 d after a substantial decrease in brain Aβ levels. However, significant increases in liver LRP and NEP occurred much earlier, at 7 d, and were accompanied by a rise in plasma sLRP, a peripheral sink for brain Aβ. In WT mice, the extract induced liver, but not brain, LRP and NEP and decreased plasma and brain Aβ, indicating that an increase in liver LRP and sLRP occurring independently of Aβ concentration could result in clearance of Aβ. Selective down-regulation of liver LRP, but not NEP, abrogated the therapeutic effects of the extract. The remarkable therapeutic effect of W. somnifera, mediated through up-regulation of liver LRP, indicates that targeting the periphery offers a unique mechanism for Aβ clearance and reverses the behavioral deficits and pathology seen in Alzheimer's disease models.
Abstract:
Background: The development of sensitive sequence-search procedures for the detection of distant relationships between proteins at the superfamily/fold level is still a big challenge. The intermediate sequence search approach is the most frequently employed way of identifying remote homologues effectively. In this study, serine proteases of the prolyl oligopeptidase, rhomboid and subtilisin protein families were examined, using plant serine proteases as queries from two genomes (A. thaliana and O. sativa) and 13 other families of unrelated folds, to identify the distant homologues that could not be obtained using PSI-BLAST. Methodology/Principal Findings: We propose to start with multiple queries of classical serine protease members to identify remote homologues in families, using a rigorous approach like Cascade PSI-BLAST. We found that classical sequence-based approaches, like PSI-BLAST, showed very low sequence coverage in identifying plant serine proteases. The algorithm was applied to an enriched sequence database of homologous domains, and we obtained an overall average coverage of 88% at the family level and 77% at the superfamily or fold level, along with a specificity of ~100% and a Matthews correlation coefficient of 0.91. A similar approach was also implemented on 13 other protein families representing every structural class in the SCOP database. Further investigation with statistical tests, like jackknifing, helped us to better understand the influence of neighbouring protein families. Conclusions/Significance: Our study suggests that the employment of multiple queries of a family for Cascade PSI-BLAST searches is useful for predicting distant relationships effectively, even at the superfamily level. We propose a generalized strategy to cover all the distant members of a particular family using multiple query sequences. Our findings reveal that the prior selection of sequences as queries and the presence of neighbouring families can be important for covering the search space effectively in minimal computational time. This study also provides an understanding of the 'bridging' role of related families.
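Procedurally, the cascade idea is simple: run PSI-BLAST from each starting query, then promote every newly found homologue to a query for the next generation. A minimal Python sketch follows, assuming the standard NCBI BLAST+ command-line tools (psiblast and blastdbcmd); the database name, file layout and error handling are left schematic.

```python
import csv
import subprocess

def psiblast_hits(query_fasta, db, evalue=1e-3, iters=3):
    """Run NCBI BLAST+ psiblast; -outfmt 6 is tab-separated with the subject
    id in the second column. Assumes psiblast is on PATH."""
    out = subprocess.run(
        ["psiblast", "-query", query_fasta, "-db", db,
         "-num_iterations", str(iters), "-evalue", str(evalue),
         "-outfmt", "6"],
        capture_output=True, text=True, check=True).stdout
    return {row[1] for row in csv.reader(out.splitlines(), delimiter="\t") if row}

def fetch_fasta(hit_id, db):
    """Pull a hit's sequence out of the BLAST database with blastdbcmd."""
    path = f"{hit_id}.fasta"
    subprocess.run(["blastdbcmd", "-db", db, "-entry", hit_id, "-out", path],
                   check=True)
    return path

def cascade(seed_fastas, db, generations=2):
    """Cascade PSI-BLAST: every homologue found in one generation is
    promoted to a query for the next."""
    found, queries = set(), list(seed_fastas)
    for _ in range(generations):
        new = set()
        for q in queries:
            new |= psiblast_hits(q, db) - found
        found |= new
        queries = [fetch_fasta(h, db) for h in new]
    return found
```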
Abstract:
We propose a novel numerical method based on a generalized eigenvalue decomposition for solving the diffusion equation governing the correlation diffusion of photons in turbid media. Medical imaging modalities such as diffuse correlation tomography and ultrasound-modulated optical tomography have as their forward model the (elliptic) diffusion equation parameterized by a time variable. Hitherto, for the computation of the correlation function, the diffusion equation has been solved repeatedly over the time parameter. We show that the use of a certain time-independent generalized eigenfunction basis results in the decoupling of the spatial and time dependence of the correlation function, thus allowing greater computational efficiency in arriving at the forward solution. Besides presenting the mathematical analysis of the generalized eigenvalue problem on the basis of spectral theory, we present numerical results that compare the proposed numerical method with the standard technique for solving the diffusion equation.
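The decoupling can be shown in a few lines. Assume the discretized forward problem has the form (A + τB)φ = s, with A and B symmetric, τ the time parameter and s the source (a generic stand-in for the paper's discretization, not its actual operators). One generalized eigendecomposition A v = λ B v, with B-orthonormal eigenvectors, then serves every value of τ:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 50
# Stand-in SPD matrices for the tau-independent part (A) and the
# tau-coefficient part (B) of a discretized diffusion operator.
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)
s = rng.standard_normal(n)

# One-time cost: generalized eigenpairs A v = lam B v, with V.T @ B @ V = I.
lam, V = eigh(A, B)
proj = V.T @ s                       # components of the source in the eigenbasis

for tau in (0.0, 0.5, 2.0):
    phi = V @ (proj / (lam + tau))   # solves (A + tau*B) phi = s
    err = np.linalg.norm((A + tau * B) @ phi - s)
    print(f"tau={tau}: residual {err:.2e}")
```

Each new τ then costs only a diagonal scaling and a matrix-vector product instead of a fresh linear solve.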
Abstract:
Important diffusion parameters, such as the parabolic growth constant, the integrated diffusivity, the ratio of the intrinsic diffusivities of the species Ni and Sn, the Kirkendall marker velocity and the activation energy for the diffusion kinetics of the binary Ni3Sn4 phase, have been investigated with the help of the incremental diffusion couple technique (Sn/Ni0.57Sn0.43) in the temperature range 150-200 °C. The low activation energy extracted from the Arrhenius plot indicates a grain-boundary-controlled diffusion process. The species Sn is three times faster than Ni at 200 °C. Further, the activation energy of the Sn tracer diffusivity is greater than that of Ni.
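For reference, the activation energy in such studies is extracted from the standard Arrhenius relation (textbook form, not specific to this paper):

```latex
k(T) = k_0 \exp\!\left(-\frac{Q}{RT}\right)
\quad\Longrightarrow\quad
\ln k = \ln k_0 - \frac{Q}{R}\,\frac{1}{T},
```

so plotting ln k against 1/T over the 150-200 °C window gives a line of slope −Q/R, where k is the rate quantity of interest (e.g. the parabolic growth constant), Q the activation energy, R the gas constant and T the absolute temperature; an unusually low Q points to a short-circuit path such as grain boundaries.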
Abstract:
Purpose - In the present work, a numerical method based on the well-established enthalpy technique is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection. The paper aims to discuss these issues. Design/methodology/approach - The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, a simulation of the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is performed. Findings - Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on the dendrite growth rate and the solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed in the case of binary alloys, which can be attributed to the enhanced heat transfer due to convection as well as to the lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of the undercooling on the dendrite tip velocity is investigated. Originality/value - As the present enthalpy-based microscopic solidification model with melt convection is based on a framework similar to the popularly used enthalpy models at the macroscopic scale, it lays the foundation for developing effective multiscale solidification models.
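The defining step of any enthalpy method is recovering the phase state from the enthalpy field instead of tracking the interface explicitly. A minimal sketch of that closure for a single cell follows (a textbook pure-substance simplification, not the paper's volume-averaged binary alloy formulation):

```python
def liquid_fraction(H, H_solidus, latent_heat):
    """Enthalpy-method closure: the liquid fraction follows from the cell
    enthalpy H. Below H_solidus the cell is solid; across the latent-heat
    interval it melts linearly; above it the cell is fully liquid."""
    f = (H - H_solidus) / latent_heat
    return min(1.0, max(0.0, f))

# Cells below, inside and above the melting interval.
for H in (80.0, 125.0, 200.0):
    print(H, liquid_fraction(H, H_solidus=100.0, latent_heat=50.0))
# -> 0.0, 0.5, 1.0
```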
Abstract:
The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty, since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading-order contribution. In the present paper, we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses as input the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for ππ elastic scattering. The formalism also includes experimental measurements on the modulus in the region 0.65-0.70 GeV, taken from the most recent e⁺e⁻ → π⁺π⁻ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR, and KLOE, we make the predictions a_μ^{ππ,LO}[2m_π, 0.30 GeV] = (0.553 ± 0.004) × 10⁻¹⁰ and a_μ^{ππ,LO}[0.30 GeV, 0.63 GeV] = (133.083 ± 0.837) × 10⁻¹⁰. These are consistent with the other recent determinations and have slightly smaller errors.
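For orientation, the quantity being quoted is, in the standard dispersive representation used throughout the g-2 literature (the kernel normalization here follows the usual convention and may differ from this paper's by absorbed factors), a weighted integral of the modulus squared of the form factor between the squares of the quoted energies:

```latex
a_\mu^{\pi\pi,\mathrm{LO}}
= \frac{\alpha^{2}}{3\pi^{2}} \int \frac{ds}{s}\, K(s)\,
  \frac{\beta_\pi^{3}(s)}{4}\, \bigl|F_\pi(s)\bigr|^{2},
\qquad \beta_\pi(s) = \sqrt{1 - 4m_\pi^{2}/s},
```

where K(s) is the known QED kernel; this makes explicit why constraining |F_π(s)| at low s through analyticity and unitarity tightens the low-energy piece of a_μ^{ππ,LO}.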