925 results for Two-Dimensional Search Problem
Abstract:
In this paper, we study modeling, numerical methods, and simulation, with comparison to experimental data, for the particle-fluid two-phase flow problem in a solid-liquid mixed medium. The physical situation considered is a pulsed liquid fluidized bed. The mathematical model assumes one-dimensional flow, incompressibility of both the particle and fluid phases, equal particle diameters, and negligible wall friction on both phases. The model consists of a set of coupled differential equations describing the conservation of mass and momentum in each phase, with coupling and interaction between the two phases. We demonstrate conditions under which the system is mathematically well posed or ill posed. We consider the general model with additional physical viscosities and/or additional virtual mass forces, both of which stabilize the system. Two numerical methods, one first-order accurate and the other fifth-order accurate, are used to solve the models. A change-of-variable technique effectively handles the changing domain and boundary conditions. The numerical methods are demonstrated to be stable and convergent through careful numerical experiments. Simulation results for a realistic pulsed liquid fluidized bed are provided and compared with experimental data. (C) 2004 Elsevier Ltd. All rights reserved.
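The abstract does not reproduce the governing equations. As a rough illustrative sketch only (the notation, the drag closure, and the exact form of the viscous and virtual-mass terms in the paper may differ), a one-dimensional incompressible two-fluid model of the kind described typically takes the form

```latex
% Illustrative 1-D two-fluid balance laws; the closures are assumptions, not the paper's.
\begin{align*}
  &\partial_t \alpha_k + \partial_x(\alpha_k u_k) = 0,
    && k \in \{p, f\}, \quad \alpha_p + \alpha_f = 1,\\
  &\rho_k \big[\partial_t(\alpha_k u_k) + \partial_x(\alpha_k u_k^2)\big]
    = -\alpha_k \partial_x p + \alpha_k \rho_k g + F_k,
    && F_p = -F_f = \beta\,(u_f - u_p),
\end{align*}
```

where the subscripts p and f denote the particle and fluid phases, α_k the volume fractions, u_k the velocities, p the shared pressure, g gravity and β an interphase drag coefficient. The stabilizing additions mentioned in the abstract would enter as viscous terms of the form ∂_x(μ_k ∂_x u_k) and as a virtual-mass force proportional to the relative acceleration of the phases.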
Abstract:
In the first part, I perform Hartree-Fock calculations to show that quantum dots (i.e., two-dimensional systems of up to twenty interacting electrons in an external parabolic potential) undergo a gradual transition to a spin-polarized Wigner crystal with increasing magnetic field strength. The phase diagram and ground state energies have been determined. I tried to improve the ground state of the Wigner crystal by introducing a Jastrow ansatz for the wave function and performing a variational Monte Carlo calculation. The existence of so-called magic numbers was also investigated. Finally, I also calculated the heat capacity associated with the rotational degree of freedom of deformed many-body states and suggest an experimental method to detect Wigner crystals.
The second part of the thesis investigates infinite nuclear matter on a cubic lattice. The exact thermal formalism describes nucleons with a Hamiltonian that accommodates on-site and next-neighbor parts of the central, spin-exchange and isospin-exchange interaction. Using auxiliary field Monte Carlo methods, I show that the energy and basic saturation properties of nuclear matter can be reproduced. A first-order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as compressibility, heat capacity, entropy and grand potential. The structure of the clusters is investigated with the help of two-body correlations. I compare the symmetry energy and first sound velocities with the literature and find reasonable agreement. I also calculate the energy of pure neutron matter and search for a similar phase transition, but the survey is restricted by the infamous Monte Carlo sign problem. Also, a regularization scheme to extract potential parameters from scattering lengths and effective ranges is investigated.
Abstract:
A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.
The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain a design that has the highest overall evaluation measure - an optimization problem.
Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the power needed to explore high-dimensional search spaces and seek out these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
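As a minimal sketch of how the "soft" preference functions described above can be combined into the overall evaluation measure that a genetic algorithm would maximize (the linear preference shape, the multiplicative combination rule, and all names and numbers below are illustrative assumptions, not taken from the thesis):

```python
# Illustrative only: a linear "soft" preference and a product combination rule.
def preference(value, best, worst):
    """Degree of satisfaction in [0, 1]: 1 at `best`, 0 at `worst`."""
    t = (value - worst) / (best - worst)
    return max(0.0, min(1.0, t))

def overall_evaluation(performance, criteria):
    """Combine individual preferences into one measure (product rule assumed;
    a minimum or weighted mean would be equally plausible)."""
    score = 1.0
    for name, (best, worst) in criteria.items():
        score *= preference(performance[name], best, worst)
    return score

# Toy usage: two criteria for a hypothetical truss design.
criteria = {"cost": (1.0e4, 5.0e4), "max_deflection": (0.005, 0.05)}
performance = {"cost": 2.2e4, "max_deflection": 0.012}
fitness = overall_evaluation(performance, criteria)  # value a GA would maximize
print(round(fitness, 3))
```

The returned score would then serve as the fitness that hGA or vGA optimizes over the design parameters.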
The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on various portions of the domain boundary. The theoretical basis of the methods for Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.
Abstract:
The design of wind turbine blades is a true multi-objective engineering task. The aerodynamic effectiveness of the turbine needs to be balanced against the system loads introduced by the rotor. Moreover, the problem does not depend on a single geometric property but, among other parameters, on the combination of aerofoil family and various blade functions. The aim of this paper is therefore to present a tool which can help designers gain a deeper insight into the complexity of the design space and find a blade design that is likely to have a low cost of energy. For this research we use the Computational Blade Optimisation and Load Deflation Tool (CoBOLDT) to investigate the three extreme-point designs obtained from a multi-objective optimisation of turbine thrust, annual energy production and mass for a horizontal-axis wind turbine blade. The optimisation algorithm is based on Multi-Objective Tabu Search, which constitutes the core of CoBOLDT. The methodology is capable of parametrising the spanning aerofoils with two-dimensional Free Form Deformation and the blade functions with two tangentially connected cubic splines. After geometry generation we use a panel code to create aerofoil polars and a stationary Blade Element Momentum code to evaluate turbine performance. Finally, the obtained loads are fed into a structural layout module to estimate the mass and stiffness of the current blade by means of a fully stressed design. For the presented test case we chose post-optimisation analysis with parallel coordinates to reveal geometrical features of the extreme-point designs and to select a compromise design from the Pareto set. The research revealed that a blade with a feasible laminate layout can be obtained that increases energy capture and lowers steady-state system loads. The reduced aerofoil camber and an increased lift-to-drag (L/D) ratio were identified as the main drivers. This statement could not previously be made with other tools available to the research community. © 2013 Elsevier Ltd.
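As a hedged illustration of the post-optimisation bookkeeping implied above, the sketch below filters candidate designs to a Pareto (non-dominated) set over the three objectives; the dominance test and the toy numbers are assumptions for illustration, not CoBOLDT's implementation:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one. Objectives: (thrust, -AEP, mass), all minimised
    (annual energy production is negated so that smaller is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of a list of objective tuples."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Hypothetical candidates as (thrust [kN], -AEP [GWh], mass [t]) tuples.
candidates = [(650.0, -7.1, 18.0), (600.0, -6.8, 17.5),
              (640.0, -7.0, 19.2), (660.0, -6.9, 19.5)]
print(pareto_front(candidates))   # the last candidate is dominated and dropped
```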
Abstract:
We report a series of psychophysical experiments that explore different aspects of the problem of object representation and recognition in human vision. Contrary to the paradigmatic view which holds that the representations are three-dimensional and object-centered, the results consistently support the notion of view-specific representations that include at most partial depth information. In simulated experiments that involved the same stimuli shown to the human subjects, computational models built around two-dimensional multiple-view representations replicated our main psychophysical results, including patterns of generalization errors and the time course of perceptual learning.
Abstract:
Plakhov, A. Y.; Torres, D. (2005). 'Newton's aerodynamic problem in media of chaotically moving particles', Sbornik: Mathematics, 196(6), pp. 885-933.
Abstract:
Neoplastic tissue is typically highly vascularized, contains abnormal concentrations of extracellular proteins (e.g. collagen, proteoglycans) and has a high interstitial fluid pressure compared to most normal tissues. These changes result in an overall stiffening typical of most solid tumors. Elasticity Imaging (EI) is a technique which uses imaging systems to measure relative tissue deformation and thus noninvasively infer its mechanical stiffness. Stiffness is recovered from measured deformation by using an appropriate mathematical model and solving an inverse problem. The integration of EI with existing imaging modalities can improve their diagnostic and research capabilities. The aim of this work is to develop and evaluate techniques to image and quantify the mechanical properties of soft tissues in three dimensions (3D). To that end, this thesis presents and validates a method by which three-dimensional ultrasound images can be used to image and quantify the shear modulus distribution of tissue-mimicking phantoms. This work is presented to motivate and justify the use of this elasticity imaging technique in a clinical breast cancer screening study. The imaging methodologies discussed are intended to improve the specificity of mammography practices in general. During the development of these techniques, several issues concerning the accuracy and uniqueness of the result were elucidated. Two new algorithms for 3D EI are designed and characterized in this thesis. The first provides three-dimensional motion estimates from ultrasound images of the deforming material. Its novel features include finite element interpolation of the displacement field, inclusion of prior information and the ability to enforce physical constraints. The roles of regularization, mesh resolution and an incompressibility constraint in the accuracy of the measured deformation are quantified. The estimated signal-to-noise ratios of the measured displacement fields are approximately 1800, 21 and 41 for the axial, lateral and elevational components, respectively. The second algorithm recovers the shear elastic modulus distribution of the deforming material by efficiently solving the three-dimensional inverse problem as an optimization problem. This method utilizes finite element interpolations, the adjoint method to evaluate the gradient and a quasi-Newton BFGS method for optimization. Its novel features include the use of the adjoint method and TVD regularization with piece-wise constant interpolation. A source of non-uniqueness in this inverse problem is identified theoretically, demonstrated computationally, explained physically and overcome practically. Both algorithms were tested on ultrasound data of independently characterized tissue-mimicking phantoms. The recovered elastic modulus was in all cases within 35% of the reference elastic contrast. Finally, the preliminary application of these techniques to tomosynthesis images showed the feasibility of imaging an elastic inclusion.
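As a toy analogue of the modulus-recovery step (a one-dimensional spring chain solved with an off-the-shelf quasi-Newton routine, standing in for the thesis's three-dimensional finite-element formulation with adjoint gradients and TVD regularization; everything below is illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D analogue of elasticity imaging: springs in series under a known
# load; recover spring stiffnesses from "measured" node displacements.
F = 1.0                                   # applied force
k_true = np.array([1.0, 1.0, 3.0, 3.0])   # stiff "inclusion" in springs 3-4

def forward(k):
    """Displacement of node i: u_i = F * sum_{j<=i} 1/k_j."""
    return F * np.cumsum(1.0 / k)

u_meas = forward(k_true)                  # synthetic, noise-free data

def misfit(log_k):
    k = np.exp(log_k)                     # log-parameters keep stiffness positive
    r = forward(k) - u_meas
    return 0.5 * np.dot(r, r)

res = minimize(misfit, x0=np.zeros(4), method="L-BFGS-B")
print(np.exp(res.x).round(3))             # expected to approach [1. 1. 3. 3.]
```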
Abstract:
A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points, also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
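A minimal simulation sketch of the one-dimensional ring setting described above (the clockwise-successor assignment follows the Chord convention; the parameters are arbitrary, and this is an illustration rather than the authors' code):

```python
import random
from bisect import bisect_left
from math import log

def max_load_two_choices(n, d=2, seed=0):
    """Hash n servers and n items to a unit ring; each item probes d random
    points, maps each probe to its clockwise nearest server, and joins the
    least-loaded of those servers. Returns the maximum final load."""
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n))
    load = [0] * n
    for _ in range(n):
        probes = [rng.random() for _ in range(d)]
        owners = [bisect_left(servers, p) % n for p in probes]  # wrap at the end
        best = min(owners, key=lambda i: load[i])
        load[best] += 1
    return max(load)

n = 100_000
print("max load:", max_load_two_choices(n),
      " log log n / log 2 ~", round(log(log(n)) / log(2), 2))
```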
Abstract:
One- and two-dimensional cellular automata which are known to be fault-tolerant are very complex. On the other hand, only very simple cellular automata have actually been proven to lack fault-tolerance, i.e., to be mixing. The latter either have large noise probability ε or belong to the small family of two-state nearest-neighbor monotonic rules which includes local majority voting. For a certain simple automaton L called the soldiers rule, this problem has intrigued researchers for the last two decades, since L is clearly more robust than local voting: in the absence of noise, L eliminates any finite island of perturbation from an initial configuration of all 0's or all 1's. The same holds for K, a 4-state monotonic variant of L called two-line voting. We will prove that the probabilistic cellular automata K_ε and L_ε asymptotically lose all information about their initial state when subject to small, strongly biased noise. The mixing property trivially implies that the systems are ergodic. The finite-time information-retaining quality of a mixing system can be represented by its relaxation time Relax(⋅), which measures the time before the onset of significant information loss. This is known to grow as (1/ε)^c for noisy local voting. The impressive error-correction ability of L has prompted some researchers to conjecture that Relax(L_ε) = 2^(c/ε). We prove the tight bound 2^(c_1 log^2(1/ε)) < Relax(L_ε) < 2^(c_2 log^2(1/ε)) for a biased error model; the same holds for K_ε. Moreover, the lower bound is independent of the bias assumption. The strong bias assumption makes it possible to apply sparsity/renormalization techniques, the main tools of our investigation, used earlier in the opposite context of proving fault-tolerance.
Abstract:
One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that comes with the 3D scalar data; they therefore do not faithfully present the data and risk misleading users' interpretations, conclusions or even decisions. This thesis therefore focuses on the study of uncertainty visualization for 3D scalar data: we seek to create better uncertainty visualization techniques and to establish the advantages and disadvantages of state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search for maximum/minimum scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials that appear in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested by following a unified framework consisting of three main steps. The first step is uncertainty data modeling, which clearly defines and generates the types of uncertainty associated with given 3D scalar data. The second step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first step into two-dimensional (2D) images for insight, interpretation or communication. The third step is evaluation, which transforms the 2D images generated in the second step into quantitative scores according to specific user tasks and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
Abstract:
This paper investigates the problem of seepage under the floor of hydraulic structures, taking into account the component of flow that seeps through the surrounding banks of the canal. A computer program, utilizing a finite-element method and capable of handling three-dimensional (3D) saturated-unsaturated flow problems, was used. Different ratios of canal width to differential head applied on the structure were studied. The results produced by the two-dimensional (2D) analysis were observed to deviate largely from those obtained from the 3D analysis of the same problem, despite the fact that the porous medium was isotropic and homogeneous. For example, the exit gradient obtained from the 3D analysis was as high as 2.5 times its value from the 2D analysis, and the uplift force acting upwards on the structure increased by about 46% compared with its value from the 2D solution. When the canal width/differential head ratio was 10 or higher, the 3D results were comparable to the 2D results. It is recommended to construct a core of low-permeability soil in the banks of the canal to reduce the seepage losses, uplift force, and exit gradient.
Abstract:
In recent decades, chirality has become essential in the design, discovery, development and commercialization of new drugs. The importance of chirality for drug efficacy and safety has been recognized globally, both by the pharmaceutical industry and by regulatory agencies worldwide. In order to produce safe medicines efficiently and meet the industry's demand for enantiomerically pure compounds, the search for new asymmetric synthesis methods, as well as the strategic development of methods already available, has been one of the main objects of study of several research groups in both academia and the pharmaceutical industry. The first chapter of this dissertation introduces some of the fundamental concepts associated with the synthesis of chiral molecules and describes some of the strategies that can be used in their synthesis. It also presents a brief literature review of the research group's previous work and of the natural occurrence, biological activity and methods of synthesis and transformation of (E,E)-cinnamylideneacetophenone-type compounds. The second chapter focuses on the enantioselective Michael addition of several nucleophiles to (E,E)-cinnamylideneacetophenone derivatives. First, the synthesis of (E,E)-cinnamylideneacetophenone derivatives through the aldol condensation of appropriately substituted acetophenones and cinnamaldehydes is described. These derivatives are then used as substrates in the enantioselective Michael addition of three different nucleophiles: nitromethane, malononitrile and methyl 2-[(diphenylmethylene)amino]acetate. Different organocatalysts are used in these reactions to induce enantioselectivity in the Michael adducts, which are intended for the synthesis of compounds of potential therapeutic interest. A new methodology for the synthesis of Δ1-pyrrolines is also described, via a one-pot, iron-mediated reduction/cyclization/dehydration procedure in the presence of acetic acid of (R,E)-1,5-diaryl-3-(nitromethyl)pent-4-en-1-ones, in good yields and with excellent enantiomeric excesses. The third chapter focuses on establishing new routes for the synthesis and transformation of cyclohexane derivatives. After a brief literature review, three distinct enantioselective methodologies are described, the first of which involves the use of organocatalysts and phase-transfer catalysts derived from cinchona alkaloids. The cyclohexane derivatives were obtained from the reaction between (E,E)-cinnamylideneacetophenones and malononitrile in good yields but with low enantioselectivities regardless of the catalyst used. To circumvent this problem, and since the formation of the cyclohexane derivative initially involves the in situ formation of the Michael adduct, the second and third synthetic methodologies use the enantiomerically pure Michael adducts prepared in the second chapter. Thus, the hydroquinine-organocatalyzed reaction of (S,E)-2-(1,5-diaryl-1-oxopent-4-en-3-yl)malononitrile with the (E,E)-cinnamylideneacetophenone derivatives afforded the desired compounds with excellent enantiomeric excesses.
The use of a phase-transfer catalyst was not as efficient in terms of the enantioselectivities obtained in the reaction between the (R,E)-1,5-diaryl-3-(nitromethyl)pent-4-en-1-ones and the (E,E)-cinnamylideneacetophenone derivatives, even though the products were obtained in good yields. The preparation of these derivatives also led to the conception of a new methodology for the synthesis of γ-aminobutyric acid (GABA) analogues, owing to the presence of a nitro group in the gamma position relative to a carboxylic group. However, although several methodologies were tested, it was not possible to obtain the desired compounds. The fourth chapter presents a brief literature review of the natural occurrence, biological activity and methods of synthesis of dihydro- and tetrahydropyridine derivatives, as well as a theoretical framework for the pericyclic reactions used in the synthesis of the target compounds. First, the preparation of substituted N-sulfonylazatrienes through the direct condensation of (E,E)-cinnamylideneacetophenone derivatives and sulfonamides is described. These compounds are then used in the synthesis of 1,2-dihydropyridine derivatives via a 6π-aza-electrocyclization, following two distinct methodologies: the use of chiral organocatalysts and the use of bisoxazoline metal complexes. In the synthesis of the tetrahydropyridines, the N-sulfonylazatrienes are used as dienes and ethoxyethene as the dienophile in an inverse hetero-Diels-Alder reaction, again using the bisoxazoline metal complexes as catalysts. All the new compounds synthesized were structurally characterized by nuclear magnetic resonance (NMR) spectroscopy, including 1H and 13C spectra and two-dimensional homonuclear and heteronuclear correlation and nuclear Overhauser effect (NOESY) studies. Whenever possible, mass spectra (MS) and elemental analyses or high-resolution mass spectra (HRMS) were also obtained for all the new compounds synthesized.
Abstract:
This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of René Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleyan theories of depth perception, respectively. For Descartes and Berkeley the debate can be put in the following way: How is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed with a projective geometry. The crucial difference is not one of dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that lead Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition in order to show that problems arising within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.
Abstract:
One of the most important problems in the theory of cellular automata (CA) is determining the proportion of cells in a specific state after a given number of time iterations. We approach this problem using patterns in preimage sets, that is, the sets of blocks which iterate to the desired output. This allows us to construct a response curve: the proportion of cells in state 1 after n iterations as a function of the initial proportion. We derive response curve formulae for many two-dimensional deterministic CA rules with the L-neighbourhood. For all remaining rules, we find experimental response curves. We also use preimage sets to classify surjective rules. In the last part of the thesis, we consider a special class of one-dimensional probabilistic CA rules. We find response surface formulae for these rules and experimental response surfaces for all remaining rules.
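As a hedged illustration of how an experimental response curve can be measured (the "L-neighbourhood" guessed below, a cell together with its north and east neighbours, and the majority rule are placeholders chosen for illustration and need not match the rules studied in the thesis):

```python
import numpy as np

def step(grid, rule):
    """One synchronous update on a periodic 2-D lattice; the local rule sees
    the cell, its north neighbour and its east neighbour (an assumed
    L-shaped neighbourhood)."""
    north = np.roll(grid, 1, axis=0)
    east = np.roll(grid, -1, axis=1)
    return rule(grid, north, east)

def majority(c, n, e):
    return ((c + n + e) >= 2).astype(np.uint8)

def response_curve(rule, n_iter=10, size=256, samples=20, seed=0):
    """Density of 1s after n_iter steps versus the initial density p,
    averaged over random initial configurations."""
    rng = np.random.default_rng(seed)
    ps = np.linspace(0.0, 1.0, 21)
    curve = []
    for p in ps:
        densities = []
        for _ in range(samples):
            g = (rng.random((size, size)) < p).astype(np.uint8)
            for _ in range(n_iter):
                g = step(g, rule)
            densities.append(g.mean())
        curve.append(float(np.mean(densities)))
    return ps, curve

for p0, pn in zip(*response_curve(majority)):
    print(f"p0={p0:.2f}  p10={pn:.3f}")
```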