84 results for Large-Scale Optimization
Abstract:
Magnetic resonance imaging (MRI) magnets have very stringent constraints on the homogeneity of the static magnetic field that they generate over desired imaging regions. The magnet system should also generate very little stray field external to its structure, so that ease of siting and safety are assured. This work concentrates on deriving means of rapidly computing the effect of 'cold' and 'warm' ferromagnetic material in or around the superconducting magnet system, so as to facilitate the automated design of hybrid-material MR magnets. A complete scheme for the direct calculation of the spherical harmonics of the magnetic field generated by a circular ring of ferromagnetic material is derived under the conditions of arbitrary external magnetizing fields. The magnetic field produced by the superconducting coils in the system is computed using previously developed methods. The final, hybrid algorithm is fast enough for use in large-scale optimization methods. The resultant fields from a practical example of a 4 T clinical MRI magnet containing both superconducting coils and magnetic material are presented.
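The ring-magnetization calculation itself is not reproduced here, but the way a zonal harmonic expansion is evaluated over an imaging volume can be sketched. Below is a minimal Python illustration, assuming axial symmetry and entirely hypothetical coefficient values A_n; given such coefficients, the axial field Bz(r, theta) = sum_n A_n r^n P_n(cos theta) and its peak-to-peak inhomogeneity over a diameter-spherical volume (DSV) follow directly.

```python
# Minimal sketch (not the paper's ring-magnetization algorithm): given zonal
# spherical-harmonic coefficients A_n of an axisymmetric field, evaluate
#   Bz(r, theta) = sum_n A_n * r**n * P_n(cos theta)
# over an imaging sphere and report the peak-to-peak inhomogeneity in ppm.
# The coefficient values and DSV radius below are hypothetical placeholders.
import numpy as np
from numpy.polynomial.legendre import legval

A = [4.0, 0.0, 3e-7, 0.0, -8e-9]   # A_0..A_4 in T/m^n (made-up values, A_0 ~ 4 T magnet)
r_dsv = 0.25                        # imaging-sphere radius in metres (assumed)

r = np.linspace(0.0, r_dsv, 51)
theta = np.linspace(0.0, np.pi, 181)
R, TH = np.meshgrid(r, theta, indexing="ij")

Bz = np.zeros_like(R)
for n, a_n in enumerate(A):
    coeffs = [0.0] * n + [a_n]                 # isolate the n-th Legendre term
    Bz += R**n * legval(np.cos(TH), coeffs)    # a_n * r^n * P_n(cos theta)

ppm = (Bz.max() - Bz.min()) / A[0] * 1e6
print(f"peak-to-peak field inhomogeneity over the DSV: {ppm:.1f} ppm")
```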
Abstract:
Power systems are large-scale nonlinear systems with high complexity. Various optimization techniques and expert systems have been used in power system planning. However, there are always factors that cannot be quantified, modeled, or even expressed by expert systems. Moreover, such planning problems are often large-scale optimization problems. Although computational algorithms capable of handling high-dimensional problems can be used, the computational costs are still very high. To address these problems, this paper investigates the efficiency and effectiveness of combining mathematical algorithms with human intelligence. It has been found that humans can join the decision-making process through cognitive feedback. Based on cognitive feedback and the genetic algorithm, a new algorithm called the cognitive genetic algorithm is presented. This algorithm can clarify and extract human cognition. As an important application of the cognitive genetic algorithm, a practical decision method for power distribution system planning is proposed. With this decision method, optimal results that satisfy human expertise can be obtained while the limitations of human experts are minimized.
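The abstract does not detail how the cognitive feedback enters the genetic algorithm, so the following Python sketch only illustrates the general pattern: a standard GA whose fitness blends a numeric planning cost with an expert rating supplied through a callback. The cost function, the rating heuristic and the weight w are placeholders, not the paper's formulation.

```python
# Sketch of a GA that blends a numeric cost with an expert rating supplied
# through a callback ("cognitive feedback"). The planning cost, the rating
# heuristic and the weight w are hypothetical placeholders.
import random

def planning_cost(plan):
    # stand-in for a distribution-planning objective (e.g. line + loss cost)
    return sum((x - 0.5) ** 2 for x in plan)

def expert_rating(plan):
    # placeholder for interactive human feedback; here a fixed heuristic
    return abs(plan[0] - plan[-1])            # e.g. penalise unbalanced plans

def fitness(plan, w=0.3):
    return planning_cost(plan) + w * expert_rating(plan)

def cognitive_ga(n_genes=8, pop_size=40, gens=100, pm=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)            # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_genes):                      # mutation
                if random.random() < pm:
                    child[i] = random.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = cognitive_ga()
print("best plan:", [round(x, 2) for x in best], "fitness:", round(fitness(best), 4))
```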
Abstract:
Coal-fired power generation will continue to provide energy to the world for the foreseeable future. However, this energy use is a significant contributor to increased atmospheric CO2 concentration and, hence, global warming. Capture and disposal of CO2 has received increased R&D attention in the last decade as the technology promises to be the most cost-effective route to large-scale reductions in CO2 emissions. This paper addresses CO2 transport via pipeline from capture site to disposal site, in terms of system optimization, energy efficiency and overall economics. Technically, CO2 can be transported through pipelines as a gas, a supercritical fluid or a subcooled liquid. Operationally, most CO2 pipelines used for enhanced oil recovery transport CO2 as a supercritical fluid. In this paper, supercritical fluid and subcooled liquid transport are examined and compared, including their impacts on energy efficiency and cost. Using a commercially available process simulator, ASPEN PLUS 10.1, the results show that subcooled liquid transport maximizes the energy efficiency and minimizes the cost of CO2 transport over long distances under both isothermal and adiabatic conditions. Pipeline transport of subcooled liquid CO2 is ideally suited to areas of cold climate or to buried and insulated pipelines. In very warm climates, periodic refrigeration to cool the CO2 below its critical point of 31.1 degrees C may prove economical. Simulations have been used to determine the maximum safe pipeline distances to subsequent booster stations as a function of inlet pressure, environmental temperature and ground-level heat flux conditions.
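For orientation, a back-of-envelope estimate of booster-station spacing can be made with the Darcy-Weisbach equation and the Swamee-Jain friction factor; this is not the paper's ASPEN PLUS model, and all fluid properties and design values below are illustrative assumptions.

```python
# Back-of-envelope sketch (not the paper's ASPEN PLUS model): isothermal
# pressure drop of dense-phase CO2 in a pipeline via Darcy-Weisbach with the
# Swamee-Jain friction factor, and the distance to the next booster station.
# Every property and design value below is an illustrative assumption.
import math

rho   = 900.0      # kg/m3, subcooled liquid CO2 density (assumed)
mu    = 9e-5       # Pa.s, dynamic viscosity (assumed)
D     = 0.40       # m, pipe inner diameter (assumed)
eps   = 4.5e-5     # m, pipe roughness (assumed)
mdot  = 200.0      # kg/s, mass flow (assumed)
p_in  = 15.0e6     # Pa, inlet pressure (assumed)
p_min = 8.5e6      # Pa, minimum allowed pressure before boosting (assumed)

v  = mdot / (rho * math.pi * D**2 / 4)                           # mean velocity
Re = rho * v * D / mu                                            # Reynolds number
f  = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2    # Swamee-Jain
dp_per_m = f * rho * v**2 / (2 * D)                              # Darcy-Weisbach, Pa/m

L_max = (p_in - p_min) / dp_per_m                                # spacing before re-pressurising
print(f"v = {v:.2f} m/s, Re = {Re:.2e}, f = {f:.4f}")
print(f"pressure gradient = {dp_per_m / 1000:.2f} kPa/m, booster spacing ~ {L_max / 1000:.0f} km")
```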
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (meta data); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the meta data, which tag the data blocks, can be propagated and applied consistently. This is possible at the disk level, in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
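As a generic illustration of the block-plus-metadata idea (not the SDSC implementation), the sketch below partitions a volume into blocks, tags each block with simple metadata, and lets a downstream stage use those tags to skip blocks; the block size, test volume and isovalue are arbitrary choices.

```python
# Generic sketch of block-wise processing with propagated metadata (not the
# SDSC "Floor-Plan" implementation): each block carries tags (origin, value
# range) that a downstream stage consults to skip work. Block size, volume
# and isovalue are arbitrary illustrative choices.
import numpy as np
from dataclasses import dataclass

@dataclass
class Block:
    data: np.ndarray
    origin: tuple      # corner index of this block in the full volume
    vmin: float        # metadata tag: minimum value in the block
    vmax: float        # metadata tag: maximum value in the block

def partition(volume, n=32):
    """Split a 3-D array into blocks of side n, each tagged with metadata."""
    blocks = []
    for i in range(0, volume.shape[0], n):
        for j in range(0, volume.shape[1], n):
            for k in range(0, volume.shape[2], n):
                d = volume[i:i + n, j:j + n, k:k + n]
                blocks.append(Block(d, (i, j, k), float(d.min()), float(d.max())))
    return blocks

def isosurface_stage(blocks, iso=0.8):
    """Downstream stage: metadata lets blocks that cannot contain the isovalue be skipped."""
    kept = [b for b in blocks if b.vmin <= iso <= b.vmax]
    print(f"processing {len(kept)} of {len(blocks)} blocks")
    return kept

# synthetic 128^3 test volume: a smooth radial field, so only central blocks matter
x = np.linspace(-1.0, 1.0, 128)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
volume = np.exp(-4.0 * (X**2 + Y**2 + Z**2)).astype(np.float32)
isosurface_stage(partition(volume))
```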
Abstract:
The research literature on metaheuristics and evolutionary computation has proposed a large number of algorithms for the solution of challenging real-world optimization problems. It is often not possible to study the performance of these algorithms theoretically unless significant assumptions are made about either the algorithm itself or the problems to which it is applied, or both. As a consequence, metaheuristics are typically evaluated empirically using a set of test problems. Unfortunately, relatively little attention has been given to the development of methodologies and tools for the large-scale empirical evaluation and/or comparison of metaheuristics. In this paper, we propose a landscape (test-problem) generator that can be used to generate optimization problem instances for continuous, bound-constrained optimization problems. The landscape generator is parameterized by a small number of parameters, and the values of these parameters have a direct and intuitive interpretation in terms of the geometric features of the landscapes that they produce. An experimental space is defined over algorithms and problems, via a tuple of parameters for any specified algorithm and problem class (here determined by the landscape generator). An experiment is then clearly specified as a point in this space, in a way that is analogous to other areas of experimental algorithmics, and more generally to experimental design. Experimental results are presented, demonstrating the use of the landscape generator. In particular, we analyze some simple, continuous estimation-of-distribution algorithms, and gain new insights into their behavior using the landscape generator.
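The paper's generator is not reproduced here, but a common pattern for such parameterised landscapes is a random mixture of Gaussian basins inside a bounding box; the Python sketch below, with assumed parameter ranges, shows how a small set of intuitive controls (number of peaks, basin widths) defines a continuous, bound-constrained test function.

```python
# Minimal sketch of a parameterised test-landscape generator for continuous,
# bound-constrained minimisation: a random mixture of Gaussian "basins" inside
# [lo, hi]^dim. This is an illustration of the general idea, not the paper's
# generator; the parameter ranges are assumptions.
import numpy as np

def make_landscape(dim=2, n_peaks=10, width=(0.05, 0.3), lo=0.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centres = rng.uniform(lo, hi, size=(n_peaks, dim))   # basin locations
    widths  = rng.uniform(*width, size=n_peaks)          # basin widths (geometry control)
    depths  = rng.uniform(0.2, 1.0, size=n_peaks)        # basin depths

    def f(x):
        x = np.atleast_2d(x)                                       # (m, dim)
        d2 = ((x[:, None, :] - centres[None]) ** 2).sum(-1)        # (m, n_peaks)
        basins = depths * np.exp(-d2 / (2.0 * widths ** 2))
        return 1.0 - basins.max(axis=1)                            # deepest basin wins; minimise
    return f

f = make_landscape(dim=2, n_peaks=25, seed=42)
xs = np.random.default_rng(1).uniform(0.0, 1.0, size=(5, 2))
print(np.round(f(xs), 3))             # evaluate a few candidate points
```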
Abstract:
Recent large-scale analyses of mainly full-length cDNA libraries generated from a variety of mouse tissues indicated that almost half of all representative cloned sequences did not contain an apparent protein-coding sequence, and were putatively derived from non-protein-coding RNA (ncRNA) genes. However, many of these clones were singletons and the majority were unspliced, raising the possibility that they may be derived from genomic DNA or unprocessed pre-mRNA contamination during library construction, or alternatively represent nonspecific transcriptional noise. Here we show, using reverse transcriptase-dependent PCR, microarray, and Northern blot analyses, that many of these clones were derived from genuine transcripts of unknown function whose expression appears to be regulated. The ncRNA transcripts have larger exons and fewer introns than protein-coding transcripts. Analysis of the genomic landscape around these sequences indicates that some cDNA clones were produced not from terminal poly(A) tracts but from internal priming sites within longer transcripts, only a minority of which are encompassed by known genes. A significant proportion of these transcripts exhibit tissue-specific expression patterns, as well as dynamic changes in their expression in macrophages following lipopolysaccharide stimulation. Taken together, the data provide strong support for the conclusion that ncRNAs are an important, regulated component of the mammalian transcriptome.
Abstract:
As process management projects have increased in size due to globalised and company-wide initiatives, a corresponding growth in the size of process modeling projects can be observed. Despite advances in languages, tools and methodologies, several aspects of these projects have been largely ignored by the academic community. This paper makes a first contribution to a potential research agenda in this field by defining the characteristics of large-scale process modeling projects and proposing a framework of related issues. These issues are derived from a semi-structured interview and six focus groups conducted in Australia, Germany and the USA with enterprise and modeling software vendors and customers. The focus groups confirm the existence of unresolved problems in business process modeling projects. The outcomes provide a research agenda which directs researchers into further studies in global process management, process model decomposition and the overall governance of process modeling projects. It is expected that this research agenda will provide guidance to researchers and practitioners by focusing on areas of high theoretical and practical relevance.
Abstract:
The subject of management is renowned for its addiction to fads and fashions, and project management is no exception. The issue of interest for this paper is the establishment of the 'College of Complex Project Managers' and its 'competency standard for complex project managers.' Both have generated significant interest in the project management community, and like any other human endeavour they should be subject to critical evaluation. The results of this evaluation show significant flaws in the definition of 'complex' used in this case, in the process by which the College and its standard have emerged, and in the content of the standard. However, there is a significant case for a portfolio of research that extends the existing bodies of knowledge into large-scale complicated (or major) projects, owned by the relevant practitioner communities rather than focused on one organization. Research questions are proposed that would commence this stream of activity towards an intelligent synthesis of what is required to manage in both complicated and truly complex environments.
Abstract:
In natural estuaries, contaminant transport is driven by turbulent momentum mixing. Scalar dispersion can rarely be predicted accurately because of a lack of fundamental understanding of the turbulence structure in estuaries. Herein, detailed turbulence field measurements were conducted at high frequency, continuously for up to 50 hours per investigation, in a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was deemed the most appropriate measurement technique for such small estuarine systems with shallow water depths (less than 0.5 m at low tide), and a thorough post-processing technique was applied. The estuarine flow is an inherently fluctuating process. The bulk flow parameters fluctuated with periods comparable to tidal cycles and other large-scale processes, but the turbulence properties depended upon the instantaneous local flow properties. They were little affected by the flow history, although their structure and temporal variability were influenced by a variety of mechanisms. This resulted in behaviour which deviated from that of an equilibrium turbulent boundary layer induced by velocity shear alone. A striking feature of the data sets is the large fluctuation of all turbulence characteristics during the tidal cycle. This feature has rarely been documented, but an important difference between the data sets used in this study and earlier reported measurements is that the present data were collected continuously at high frequency over relatively long periods. The findings shed new light on the fluctuating nature of momentum exchange coefficients and integral time and length scales; these turbulent properties should not be assumed constant.
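As a simplified illustration of this kind of post-processing (not the study's actual procedure, which includes despiking and other steps), the sketch below separates tidal-scale variation from turbulent fluctuations with a moving average and computes a turbulence intensity and a Reynolds stress from a synthetic velocity record; the sampling rate and window length are assumptions.

```python
# Simplified illustration (not the study's post-processing): separate the
# tidal-scale variation from turbulent fluctuations with a moving average,
# then compute a turbulence intensity and a Reynolds stress. The synthetic
# record, 25 Hz sampling rate and 10-minute window are assumptions.
import numpy as np

fs = 25.0                                    # Hz, typical ADV sampling rate (assumed)
t = np.arange(0.0, 1800.0, 1.0 / fs)         # 30 minutes of synthetic data
rng = np.random.default_rng(0)
u = 0.3 * np.sin(2 * np.pi * t / 3600.0) + 0.05 * rng.standard_normal(t.size)
w = 0.02 * rng.standard_normal(t.size)       # vertical velocity, fluctuations only

def moving_mean(x, window_s=600.0):
    n = int(window_s * fs)
    return np.convolve(x, np.ones(n) / n, mode="same")

u_bar, w_bar = moving_mean(u), moving_mean(w)
u_p, w_p = u - u_bar, w - w_bar              # turbulent fluctuations u', w'

Tu = u_p.std() / np.abs(u_bar).mean()        # streamwise turbulence intensity
tau = -(u_p * w_p).mean()                    # kinematic Reynolds stress -u'w'
print(f"turbulence intensity ~ {Tu:.2f}, -u'w' ~ {tau:.2e} m^2/s^2")
```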
Abstract:
A hydraulic jump is characterized by strong energy dissipation and mixing, large-scale turbulence, air entrainment, waves and spray. Despite recent pertinent studies, the interaction between air bubble diffusion and momentum transfer is not completely understood. The objective of this paper is to present experimental results from new measurements performed in a rectangular horizontal flume with partially developed inflow conditions. The vertical distributions of void fraction and air bubble count rate were recorded for inflow Froude numbers Fr1 in the range 5.2 to 14.3. A rapid detrainment process was observed near the jump toe, whereas the structure of the air diffusion layer was clearly observed over longer distances. These new data were compared with previous data, generally collected at lower Froude numbers. The comparison demonstrated that, at a fixed distance from the jump toe, the maximum void fraction Cmax increases with increasing Fr1. The vertical locations of the maximum void fraction and bubble count rate were consistent with previous studies. Finally, an empirical correlation between the upper boundary of the air diffusion layer and the distance from the impingement point was provided.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has never been applied to eigenanalysis for power system small-signal stability. This paper analyzes differences between the BR and QR algorithms, with performance comparisons in terms of CPU time based on stopping criteria and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time at a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is a more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
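The BR algorithm itself is not available in standard numerical libraries, so the sketch below only illustrates the surrounding workflow: reducing a hypothetical, randomly generated state matrix to upper Hessenberg form and inspecting the eigenvalues, here obtained from LAPACK's QR-based solver, for small-signal stability and damping.

```python
# Sketch of the surrounding workflow only (the BR algorithm is not in standard
# libraries): reduce an assumed state matrix to upper Hessenberg form, compute
# eigenvalues with LAPACK's QR-based solver, and check small-signal stability
# and damping. The random 200x200 matrix is a stand-in for a linearised
# power-system model, not real system data.
import numpy as np
from scipy.linalg import hessenberg, eigvals

n = 200
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)   # hypothetical state matrix

H = hessenberg(A)          # similarity transformation: H has the same spectrum as A
lam = eigvals(H)           # QR iteration on the Hessenberg form

sigma, omega = lam.real, lam.imag
zeta = -sigma / np.sqrt(sigma**2 + omega**2)       # damping ratio of each mode
print(f"eigenvalues with positive real part: {np.sum(sigma > 0)}")
print(f"least-damped mode: zeta = {zeta.min():.3f}")
```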
Abstract:
In high-velocity free-surface flows, air is continuously trapped and released through the free surface. Such high-velocity, highly aerated flows cannot be studied numerically because of the large number of relevant equations and parameters. Herein, advanced signal processing of traditional single- and dual-tip conductivity probes provides new information on the air-water turbulent time and length scales. The technique is applied to turbulent open channel flows in a large-size facility. The auto- and cross-correlation analyses yield a characterisation of the large eddies advecting the bubbles. The transverse integral turbulent length and time scales are related to the step height: Lxy/h ~ 0.02 to 0.2, and T·√(g/h) ~ 0.004 to 0.04. The results are independent of the Reynolds number. The present findings emphasise that turbulent dissipation by large-scale vortices is a significant process in the intermediate zone between the spray and bubbly flow regions (0.3 < C < 0.7). Self-similar relationships were observed systematically at both macroscopic and microscopic levels. The results are significant because they provide a picture general enough to characterise the air-water flow field in prototype spillways.
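As an illustration of the correlation analysis (on synthetic data, not the study's probe signals), the sketch below estimates an integral turbulent time scale by integrating the normalised auto-correlation of a signal up to its first zero crossing; the sampling rate and the built-in correlation time are assumptions.

```python
# Illustration of the correlation analysis on synthetic data (not the study's
# probe signals): the integral turbulent time scale is estimated by integrating
# the normalised auto-correlation up to its first zero crossing. The 20 kHz
# sampling rate and the built-in 5 ms correlation time are assumptions.
import numpy as np

fs = 20_000.0                         # Hz, assumed probe scan rate
n = int(10.0 * fs)                    # 10 s record
rng = np.random.default_rng(3)

# synthetic correlated signal: white noise through a first-order filter
alpha = np.exp(-1.0 / (0.005 * fs))   # ~5 ms correlation time built in
x = np.zeros(n)
noise = rng.standard_normal(n)
for i in range(1, n):
    x[i] = alpha * x[i - 1] + noise[i]

xp = x - x.mean()
max_lag = int(0.05 * fs)              # evaluate lags up to 50 ms
acf = np.array([np.dot(xp[: n - k], xp[k:]) for k in range(max_lag)])
acf /= acf[0]                         # normalise so R(0) = 1

zero = np.flatnonzero(acf <= 0.0)
cut = zero[0] if zero.size else max_lag
T_int = acf[:cut].sum() / fs          # integral time scale (rectangle rule)
print(f"integral time scale ~ {T_int * 1e3:.1f} ms")
```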
Abstract:
Except for a few large scale projects, language planners have tended to talk and argue among themselves rather than to see language policy development as an inherently political process. A comparison with a social policy example, taken from the United States, suggests that it is important to understand the problem and to develop solutions in the context of the political process, as this is where decisions will ultimately be made.