936 results for Two-Dimensional Search Problem


Relevance: 100.00%

Abstract:

We perform a three-dimensional study of steady-state viscous fingers that develop in linear channels. By means of a three-dimensional lattice-Boltzmann scheme that mimics the full macroscopic equations of motion for the fluid momentum and order parameter, we study the effect of the thickness of the channel in two cases. First, for total displacement of the fluids in the channel thickness direction, we find that the steady-state finger is effectively two-dimensional and that previous two-dimensional results can be recovered by taking into account the effect of a curved meniscus across the channel thickness as a contribution to surface stresses. Second, when a thin film develops in the channel thickness direction, the finger narrows with increasing channel aspect ratio, in agreement with experimental results. The effect of the thin film renders the problem three-dimensional, and the results deviate from the two-dimensional prediction.
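
For readers who want a feel for the numerical method, the sketch below shows the core update of a lattice-Boltzmann solver. It is a minimal single-component D2Q9 BGK scheme with periodic boundaries, not the binary-fluid free-energy scheme used in the study; the lattice size, relaxation time `tau` and initial state are illustrative assumptions.

```python
# Minimal sketch of a single-relaxation-time (BGK) D2Q9 lattice-Boltzmann
# step.  Generic single-component fluid, periodic boundaries; NOT the
# binary-fluid free-energy scheme of the abstract.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny, tau = 64, 32, 0.8                    # illustrative assumptions
f = np.ones((9, nx, ny)) * w[:, None, None]  # uniform rest state

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann expansion on the D2Q9 lattice."""
    feq = np.empty((9, *rho.shape))
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i, 0]*ux + c[i, 1]*uy
        feq[i] = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return feq

def step(f):
    # streaming: shift each population along its lattice velocity
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f

for _ in range(100):
    f = step(f)
```

A binary-fluid scheme of the kind used in the study evolves a second distribution for the order parameter alongside `f`, with a collision operator derived from a free-energy functional.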

Relevance: 100.00%

Abstract:

We study the families of periodic orbits of the spatial isosceles 3-body problem (for small enough values of the mass lying on the symmetry axis) coming via the analytic continuation method from periodic orbits of the circular Sitnikov problem. Using the first integral of the angular momentum, we reduce the dimension of the phase space of the problem by two units. Since periodic orbits of the reduced isosceles problem generate invariant two-dimensional tori of the nonreduced problem, the analytic continuation of periodic orbits of the (reduced) circular Sitnikov problem at this level becomes the continuation of invariant two-dimensional tori from the circular Sitnikov problem to the nonreduced isosceles problem, each torus filled with periodic or quasi-periodic orbits. These tori are not KAM tori but merely isotropic ones, since we are dealing with a three-degrees-of-freedom system. The continuation of periodic orbits is done in two different ways: the first goes directly from the reduced circular Sitnikov problem to the reduced isosceles problem; the second uses two steps, first continuing the periodic orbits from the reduced circular Sitnikov problem to the reduced elliptic Sitnikov problem, and then continuing those periodic orbits of the reduced elliptic Sitnikov problem to the reduced isosceles problem. The continuation in one or two steps produces different results. This work is purely analytic and uses the variational equations in order to apply Poincaré's continuation method.
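
As a concrete illustration of where the continuation starts, the sketch below computes a symmetric periodic orbit of the circular Sitnikov problem by shooting on the initial amplitude. Units with G = 1, total primary mass 1 and primary orbit radius 1/2 are assumed, and the target period is an arbitrary illustrative choice; the paper's analytic continuation replaces this numerical root-finding with the variational equations.

```python
# Sketch: a symmetric periodic orbit of the circular Sitnikov problem found
# by shooting on the initial amplitude z0 of the massless body on the axis.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

R = 0.5  # radius of the circular primary orbits (illustrative units)

def rhs(t, y):
    z, v = y
    return [v, -z / (z * z + R * R) ** 1.5]

def turning(t, y):          # event: velocity crosses zero from below
    return y[1]
turning.terminal = True
turning.direction = 1       # first such crossing is the lower turning point

def period(z0):
    """Start at the upper turning point (z0, 0); by symmetry the first
    upward zero of the velocity occurs at half the period."""
    sol = solve_ivp(rhs, [0.0, 200.0], [z0, 0.0], events=turning,
                    rtol=1e-10, atol=1e-12)
    return 2.0 * sol.t_events[0][0]

T_target = 4.0  # illustrative; must exceed the small-oscillation period 2*pi/sqrt(8)
z0 = brentq(lambda z: period(z) - T_target, 0.05, 2.0)
print(f"amplitude z0 = {z0:.6f}, period = {period(z0):.6f}")
```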

Relevance: 100.00%

Abstract:

This empirical study investigates the effects of long-term, embedded, structured and supported instruction on the development of Information Problem Solving (IPS) skills in Secondary Education. Forty secondary students in the 7th and 8th grades (13–15 years old) took part in the 2-year IPS instruction study: twenty received the IPS instruction, and the remaining twenty formed the control group. All the students were pre- and post-tested in their regular classrooms, and their IPS process and performance were logged by means of screen-capture software to ensure ecological validity. The IPS constituent skills, the web-search sub-skills and the answers given by each participant were analyzed. The main findings suggest that the experimental students showed a more expert pattern than the control students in the constituent skill 'defining the problem' and in the following two web-search sub-skills: 'search terms' typed into a search engine, and 'selected results' from a search engine results page (SERP). In addition, task-performance scores were statistically significantly higher for the experimental students than for the control group. The paper contributes to the discussion of how scaffolds can be designed and embedded in instructional programs to guarantee the development and efficiency of students' IPS skills, helping them use online information more effectively and participate fully in the global knowledge society.

Relevance: 100.00%

Abstract:

Beckwith-Wiedemann syndrome is a genetic syndrome characterized by macroglossia, omphalocele, fetal gigantism and neonatal hypoglycemia. The authors report a case of Beckwith-Wiedemann syndrome diagnosed in a 32-year-old primigravida in whom two-dimensional ultrasonography revealed the presence of an abdominal wall cyst, macroglossia and polycystic kidneys. Three-dimensional ultrasonography in rendering mode was of great value in confirming the two-dimensional ultrasonography findings.

Relevance: 100.00%

Abstract:

Purpose: To evaluate the precision of two- and three-dimensional ultrasonography in determining the vertebral lesion level (the first open vertebra) in patients with spina bifida. Methods: This was a prospective longitudinal study comprising fetuses with open spina bifida who were treated in the fetal medicine division of the department of obstetrics of the Hospital das Clínicas of the Universidade de São Paulo between 2004 and 2013. The vertebral lesion level was established using both two- and three-dimensional ultrasonography in 50 fetuses (two examiners for each method). The lesion level in the neonatal period was established by radiological assessment of the spine. All pregnancies were followed in our hospital prenatally, and delivery was scheduled to allow immediate postnatal surgical correction. Results: Two-dimensional sonography estimated the exact spina bifida level in 53% of the cases. The estimate error was within one vertebra in 80% of the cases, within two vertebrae in 89%, and within three vertebrae in 100%, showing good interobserver agreement. Three-dimensional ultrasonography estimated the exact lesion level in 50% of the cases. The estimate error was within one vertebra in 82% of the cases, within two vertebrae in 90%, and within three vertebrae in 100%, also showing good interobserver agreement. Whenever an estimate error was observed, both two- and three-dimensional ultrasonography tended to underestimate the true lesion level (in 55.3% and 62% of the cases, respectively). Conclusions: No relevant difference in diagnostic performance was observed between two- and three-dimensional ultrasonography. Three-dimensional ultrasonography showed no additional benefit in diagnosing the lesion level in fetuses with spina bifida. Errors in both methods tended to underestimate the lesion level.

Relevance: 100.00%

Abstract:

This thesis explores the debate and issues regarding the status of visual inferences in the optical writings of René Descartes, George Berkeley and James J. Gibson. It gathers arguments from across their works and synthesizes an account of visual depth perception that accurately reflects the larger, metaphysical implications of their philosophical theories. Chapters 1 and 2 address the Cartesian and Berkeleian theories of depth perception, respectively. For Descartes and Berkeley the debate can be put in the following way: How is it possible that we experience objects as appearing outside of us, at various distances, if objects appear inside of us, in the representations of the individual's mind? Thus, the Descartes-Berkeley component of the debate takes place exclusively within a representationalist setting. Representational theories of depth perception are rooted in the scientific discovery that objects project a merely two-dimensional patchwork of forms on the retina. I call this the "flat image" problem. This poses the problem of depth in terms of a difference between two- and three-dimensional orders (i.e., a gap to be bridged by one inferential procedure or another). Chapter 3 addresses Gibson's ecological response to the debate. Gibson argues that the perceiver cannot be flattened out into a passive, two-dimensional sensory surface. Perception is possible precisely because the body and the environment already have depth. Accordingly, the problem cannot be reduced to a gap between two- and three-dimensional givens, a gap crossed with a projective geometry. The crucial difference is not one of a dimensional degree. Chapter 3 explores this theme and attempts to excavate the empirical and philosophical suppositions that led Descartes and Berkeley to their respective theories of indirect perception. Gibson argues that the notion of visual inference, which is necessary to substantiate representational theories of indirect perception, is highly problematic. To elucidate this point, the thesis steps into the representationalist tradition in order to show that the problems arising within it demand a turn toward Gibson's information-based doctrine of ecological specificity (which is to say, the theory of direct perception). Chapter 3 concludes with a careful examination of Gibsonian affordances as the sole objects of direct perceptual experience. The final section provides an account of affordances that locates the moving, perceiving body at the heart of the experience of depth; an experience which emerges in the dynamical structures that cross the body and the world.

Relevance: 100.00%

Abstract:

One of the most important problems in the theory of cellular automata (CA) is determining the proportion of cells in a specific state after a given number of time iterations. We approach this problem using patterns in preimage sets - that is, the sets of blocks which iterate to the desired output. This allows us to construct a response curve - the proportion of cells in state 1 after n iterations as a function of the initial proportion. We derive response curve formulae for many two-dimensional deterministic CA rules with L-neighbourhood. For all remaining rules, we find experimental response curves. We also use preimage sets to classify surjective rules. In the last part of the thesis, we consider a special class of one-dimensional probabilistic CA rules. We find response surface formulae for some of these rules and experimental response surfaces for all remaining rules.
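
A response curve of the kind described here can also be estimated directly by Monte Carlo sampling. The sketch below does this for a simple two-dimensional rule; the majority rule on the von Neumann neighbourhood is an illustrative stand-in for the L-neighbourhood rules treated in the thesis, and the grid size and iteration count are arbitrary.

```python
# Sketch: empirical response curve for a two-dimensional CA -- the density
# of 1-cells after n iterations as a function of the initial density.
import numpy as np

def step(grid):
    """One synchronous update: a cell takes the majority state of its
    von Neumann neighbourhood (itself + 4 neighbours), periodic BCs."""
    s = (grid
         + np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
         + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return (s >= 3).astype(np.uint8)

def response_curve(n_iter=10, size=256, densities=np.linspace(0, 1, 21)):
    out = []
    rng = np.random.default_rng(0)
    for p in densities:
        grid = (rng.random((size, size)) < p).astype(np.uint8)
        for _ in range(n_iter):
            grid = step(grid)
        out.append(grid.mean())   # proportion of cells in state 1
    return densities, np.array(out)

ps, fs = response_curve()
for p, fval in zip(ps, fs):
    print(f"p0 = {p:.2f} -> p10 = {fval:.3f}")
```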

Relevance: 100.00%

Abstract:

Proteomics is a subject of interest since the study of protein structure and function is essential to understanding how a given organism works. This project falls within the category of structural studies; more precisely, it concerns the use of the primary amino acid sequence to identify a protein. Protein determination begins with the extraction of a protein mixture from a tissue or biological fluid, which may contain more than 1000 different proteins. Analytical techniques such as two-dimensional polyacrylamide gel electrophoresis (2D-SDS-PAGE), which separate this mixture according to the isoelectric point and molar mass of the proteins, are then used to isolate the proteins and allow their identification, typically by liquid chromatography and mass spectrometry (MS). This project draws on that process and proposes that the fractionation of the protein extract by 2D-SDS-PAGE be replaced or supported by multiple parallel fractionations by quasi-multidimensional capillary electrophoresis (CE). The fractions obtained, containing a single protein or a protein mixture less complex than the starting extract, could then be subjected to protein identification by peptide mapping and protein mapping using analytical separation techniques and MS. To obtain the peptide map of a sample, the purified proteins must undergo enzymatic or chemical proteolysis, and the peptide fragments resulting from this digestion must be separated. The peptide maps thus generated can then be compared with control samples, or the exact masses of the enzymatic peptides can be submitted to search engines such as MASCOT™, which identify the proteins by querying genomic databases. The advantages of CE over 2D-SDS-PAGE are its high separation efficiency, fast analysis and ease of automation. One of the challenges to overcome is the small mass of protein available after CE analyses, due in part to protein adsorption onto the capillary wall, but mostly to the small sample volume in CE. To increase this volume, a 75 µm capillary was used. In addition, the volume of the collected fraction was decreased from 1000 to 100 µL and fractions were accumulated 10 times; that is, each fraction contained the products of 10 separations. On the other hand, protein adsorption leads to variation in the peak area and migration time of a given protein, which affects the reproducibility of the separation, a very important aspect since 10 cumulative separations are needed to collect fractions. Many approaches exist to reduce this problem (e.g. extreme background-electrolyte pH, dynamic or permanent capillary coatings, etc.), but in this thesis the coating studies focused on N,N-didodecyl-N,N-dimethylammonium bromide (DDAB), a surfactant that forms a semi-permanent coating on the capillary wall. The bulk of the thesis was aimed at obtaining a reproducible separation of a standard protein mixture prepared in the laboratory (containing bovine serum albumin, carbonic anhydrase, α-lactalbumin and β-lactoglobulin) by CE with the DDAB coating.
The coating studies showed that the coating had to be regenerated between each injection of the protein mixture under the conditions studied: the collection of 5 fractions of 6 min each over a 30 min separation, following the DDAB regeneration procedure, all repeated 10 times. However, CE-UV and HPLC-MS analyses of the collected fractions did not show the expected proteins, which appeared to be below the detection limit. Moreover, MS analysis showed that DDAB accumulates in the collected fractions owing to its desorption from the capillary wall. To confirm that the efforts to collect a sufficient mass of protein were adequate, CE with laser-induced fluorescence detection (CE-LIF) was used to separate and collect fluorescein isothiocyanate (FITC)-labelled albumin without the DDAB coating. These analyses showed that the FITC-albumin was indeed present in the collected fraction. Peptide mapping was then carried out successfully, using the enzyme chymotrypsin for digestion and CE-LIF to obtain the peptide map.
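
To make the peptide-mapping step concrete, the sketch below performs an in-silico chymotryptic digestion and computes the peptide masses that would be submitted to a search engine such as MASCOT™. The cleavage rule (after F, W or Y, but not before P) and the approximate monoisotopic residue masses are standard values; the test sequence is purely illustrative.

```python
# Sketch: in-silico chymotryptic digestion and peptide-mass calculation of
# the kind used to build a peptide map for database searching.
RESIDUE_MASS = {  # approximate monoisotopic residue masses, Da
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.01056  # mass of H2O added on hydrolysis

def chymotryptic_peptides(seq):
    """Cut after F, W or Y unless the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'FWY' and not (i + 1 < len(seq) and seq[i + 1] == 'P'):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def peptide_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

for pep in chymotryptic_peptides("MKWVTFISLLFLFSSAYS"):  # illustrative sequence
    print(f"{pep:12s} {peptide_mass(pep):10.4f}")
```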

Relevance: 100.00%

Abstract:

A scale-invariant moving finite element method is proposed for the adaptive solution of nonlinear partial differential equations. The mesh movement is based on a finite element discretisation of a scale-invariant conservation principle incorporating a monitor function, while the time discretisation of the resulting system of ordinary differential equations is carried out using a scale-invariant time-stepping which yields uniform local accuracy in time. The accuracy and reliability of the algorithm are successfully tested against exact self-similar solutions, where available, and otherwise against a state-of-the-art h-refinement scheme for solutions of a two-dimensional porous medium equation problem with a moving boundary. The monitor functions used are the dependent variable and a monitor related to the surface area of the solution manifold.
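
The monitor-function idea can be illustrated independently of the paper's scheme: in one dimension, mesh nodes are placed so that the integral of the monitor is the same on every cell. The sketch below performs this generic equidistribution step with an arclength-type monitor; it is not the scale-invariant conservation principle of the paper, which moves the mesh continuously in time.

```python
# Sketch of the monitor-function idea behind moving meshes: relocate the
# nodes of a 1-D mesh so the integral of a monitor M(u) is equal on every
# cell (equidistribution).  The arclength monitor is an illustrative choice.
import numpy as np

def equidistribute(x, u):
    """Return a new mesh with equal monitor 'mass' on each cell."""
    dudx = np.gradient(u, x)
    monitor = np.sqrt(1.0 + dudx**2)                 # arclength monitor M(u)
    cell = 0.5 * (monitor[:-1] + monitor[1:]) * np.diff(x)
    cum = np.concatenate(([0.0], np.cumsum(cell)))   # cumulative monitor mass
    targets = np.linspace(0.0, cum[-1], len(x))      # equal-mass levels
    return np.interp(targets, cum, x)

# usage: nodes cluster where a steep front makes the monitor large
x = np.linspace(-1.0, 1.0, 41)
u = np.tanh(20 * x)                                  # steep interior layer
x_new = equidistribute(x, u)
print(np.diff(x_new).min(), np.diff(x_new).max())    # small cells at the front
```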

Relevance: 100.00%

Abstract:

The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case, it is capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS as well as characterising its time complexity. However, the main emphasis of the work is on the resource allocation aspect of Stochastic Diffusion Search operation. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest urn model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. This model is further generalised in order to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. Also, an approximate solution in the case of two alternative solutions is proposed and compared with the predictions of the extended Ehrenfest urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS which would strike a different balance between these two modes of search-space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, ‘context-free’ and ‘context-sensitive’ SDS, and their properties were analysed with respect to resource allocation. It appeared that they share some of the desired features of their predecessor but also possess some properties not present in the classic SDS. The theory developed in the thesis is illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain enabling careful control of search conditions.
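
The standard SDS referred to above is compact enough to state in full. The sketch below runs it on the best-fit string search used for illustration in the thesis; the agent count, iteration budget and strings are illustrative assumptions.

```python
# Sketch of standard Stochastic Diffusion Search on a best-fit string search.
import random

search_space = "xxxhelxoxxxxhellxxxxhelloxxxx"
model = "hello"
N_AGENTS, N_ITERS = 50, 30
n_starts = len(search_space) - len(model) + 1
positions = [random.randrange(n_starts) for _ in range(N_AGENTS)]
active = [False] * N_AGENTS

for _ in range(N_ITERS):
    # Test phase: each agent checks one randomly chosen component of its
    # hypothesis against the model.
    for i, pos in enumerate(positions):
        j = random.randrange(len(model))
        active[i] = (search_space[pos + j] == model[j])
    # Diffusion phase: an inactive agent polls a random agent; if that agent
    # is active, the hypothesis is copied, otherwise a fresh one is drawn.
    for i in range(N_AGENTS):
        if not active[i]:
            k = random.randrange(N_AGENTS)
            positions[i] = positions[k] if active[k] else random.randrange(n_starts)

best = max(set(positions), key=positions.count)
print("largest cluster at index", best, "->", search_space[best:best + len(model)])
```

Roughly speaking, the ‘context-free’ and ‘context-sensitive’ variants studied in the thesis modify only the diffusion phase, allowing active agents to abandon or re-sample hypotheses, which shifts the balance toward exploration.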

Relevance: 100.00%

Abstract:

We consider the problem of determining the pressure and velocity fields for a weakly compressible fluid flowing in a three-dimensional layer, composed of an inhomogeneous, anisotropic porous medium, with vertical side walls and variable upper and lower boundaries, in the presence of vertical wells injecting and/or extracting fluid. Numerical solution of this three-dimensional evolution problem may be expensive, particularly when the depth scale of the layer $h$ is small compared to the horizontal length scale $l$, a situation which occurs frequently in the application to oil and gas reservoir recovery and which leads to significant stiffness in the numerical problem. Under the assumption that $\epsilon\propto h/l\ll 1$, we show that, to leading order in $\epsilon$, the pressure field varies only in the horizontal directions away from the wells (the outer region). We construct asymptotic expansions in $\epsilon$ in both the inner (near the wells) and outer regions and use the asymptotic matching principle to derive expressions for all significant process quantities. The only computations required are for the solution of non-stiff, linear, elliptic, two-dimensional boundary-value and eigenvalue problems. This approach, via the method of matched asymptotic expansions, takes advantage of the small aspect ratio $\epsilon$ of the layer at precisely the stage where full numerical computations become stiff, and it also reveals the detailed structure of the dynamics of the flow, both in the neighbourhood of wells and away from them.
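
Schematically, the structure revealed by the matched expansions is the familiar thin-layer one (a generic sketch under the stated assumption $\epsilon \ll 1$, not the paper's precise result):

```latex
\[
  p = p_0(x,y,t) + \epsilon^2\, p_1(x,y,z,t) + O(\epsilon^4),
\]
so that away from the wells the leading-order pressure is independent of
depth and satisfies a two-dimensional problem with depth-integrated
coefficients, of the generic form
\[
  \nabla_H \cdot \bigl( \bar{K}(x,y)\, \nabla_H\, p_0 \bigr)
  = \text{(compressibility and well terms)},
  \qquad
  \bar{K}(x,y) = \int_{z_b(x,y)}^{z_t(x,y)} K \,\mathrm{d}z ,
\]
with the well singularities resolved in the inner region and communicated
to the outer problem through matching.
```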

Relevance: 100.00%

Abstract:

Wave-activity conservation laws are key to understanding wave propagation in inhomogeneous environments. Their most general formulation follows from the Hamiltonian structure of geophysical fluid dynamics. For large-scale atmospheric dynamics, the Eliassen–Palm wave activity is a well-known example and is central to theoretical analysis. On the mesoscale, while such conservation laws have been worked out in two dimensions, their application to a horizontally homogeneous background flow in three dimensions fails because of a degeneracy created by the absence of a background potential vorticity gradient. Earlier three-dimensional results based on linear WKB theory considered only Doppler-shifted gravity waves, not waves in a stratified shear flow. Consideration of a background flow depending only on altitude is motivated by the parameterization of subgrid scales in climate models, where there is an imposed separation of horizontal length and time scales but vertical coupling within each column. Here we show how this degeneracy can be overcome and wave-activity conservation laws derived for three-dimensional disturbances to a horizontally homogeneous background flow. Explicit expressions for pseudoenergy and pseudomomentum in the anelastic and Boussinesq models are derived, and it is shown how the previously derived relations for the two-dimensional problem can be treated as a limiting case of the three-dimensional problem. The results also generalize earlier three-dimensional results in that there is no slowly varying WKB-type requirement on the background flow, and the results are extendable to finite amplitude. The relationship $A_E = cA_P$ between pseudoenergy $A_E$ and pseudomomentum $A_P$, where $c$ is the horizontal phase speed in the direction of symmetry associated with $A_P$, has important applications to gravity-wave parameterization and provides a generalized statement of the first Eliassen–Palm theorem.
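
A quick heuristic for where a relation of this form comes from (a standard wave-action argument, not the paper's finite-amplitude derivation):

```latex
% For a slowly varying wavetrain with action density $\mathcal{A}$,
% frequency $\omega$ and wavenumber $k$ in the direction of symmetry,
\[
  A_E = \omega\,\mathcal{A}, \qquad A_P = k\,\mathcal{A}
  \quad\Longrightarrow\quad
  \frac{A_E}{A_P} = \frac{\omega}{k} = c ,
\]
% which recovers $A_E = cA_P$ in the WKB limit; the paper's result extends
% this relation beyond slowly varying wavetrains.
```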

Relevance: 100.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis, in meteorology means the process of interpolating observed meteorological quantities from unevenly distributed locations to a network of regularly spaced grid points. Because numerical weather prediction models must solve the governing finite-difference equations on such a grid lattice, objective analysis is a three-dimensional (or, more often, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with its separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years: weather services have based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified for the conventional observations as well: we have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. With a 3-hour step in the analysis-forecasting cycle instead of the usual 12 hours, we could treat all observations as synoptic without difficulty: no observation would be more than 90 minutes off time, and even during strongly transient motion observations would fall within a horizontal mesh of 500 km × 500 km.
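
As a concrete example of objective analysis in the classical sense described above, the sketch below implements a single Cressman-type successive-correction pass, spreading observation-minus-background increments onto a regular grid; the influence radius, domain and data are illustrative assumptions.

```python
# Sketch: one Cressman-type successive-correction pass that spreads
# observation-minus-background increments ("innovations") onto a grid.
import numpy as np

def cressman_pass(grid_xy, obs_xy, innovations, background, R=500.0):
    """Correct the background at each grid point using nearby observations."""
    analysis = background.copy()
    for g, (px, py) in enumerate(grid_xy):
        d2 = (obs_xy[:, 0] - px) ** 2 + (obs_xy[:, 1] - py) ** 2
        near = d2 < R ** 2
        if near.any():
            w = (R ** 2 - d2[near]) / (R ** 2 + d2[near])  # Cressman weights
            analysis[g] += np.sum(w * innovations[near]) / np.sum(w)
        # grid points with no observation within R keep the background value
    return analysis

# usage: a 1000 km x 1000 km domain, zero background, three observations
gx, gy = np.meshgrid(np.linspace(0, 1000, 11), np.linspace(0, 1000, 11))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
obs_xy = np.array([[200.0, 300.0], [600.0, 700.0], [850.0, 150.0]])
innovations = np.array([1.5, -0.7, 0.9])  # obs minus background at obs sites
background = np.zeros(len(grid_xy))
analysis = cressman_pass(grid_xy, obs_xy, innovations, background)
print(analysis.reshape(11, 11).round(2))
```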

Relevance: 100.00%

Abstract:

We study the linear and nonlinear stability of stationary solutions of the forced two-dimensional Navier-Stokes equations on the domain $[0,2\pi]\times[0,2\pi/\alpha]$, where $\alpha\in(0,1]$, with doubly periodic boundary conditions. For the linear problem we employ the classical energy-enstrophy argument to derive some fundamental properties of unstable eigenmodes. From this it is shown that forces of pure $x_2$-modes having wavelengths greater than $2\pi$ do not give rise to linear instability of the corresponding primary stationary solutions. For the nonlinear problem, we prove the equivalence of nonlinear stability with respect to the energy and enstrophy norms. This equivalence is then applied to derive optimal conditions for nonlinear stability, including both the high- and low-Reynolds-number limits.
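
For orientation, the standard setting assumed here is sketched below (generic definitions, not the paper's statement of results):

```latex
% Vorticity form of the forced 2-D Navier-Stokes equations and the two
% norms whose stability notions are shown equivalent.
\[
  \partial_t \omega + \mathbf{u}\cdot\nabla\omega = \nu\,\Delta\omega + f,
  \qquad \omega = \partial_{x_1} u_2 - \partial_{x_2} u_1,
\]
\[
  E(\mathbf{u}') = \tfrac12 \int |\mathbf{u}'|^2 \,dx
  \quad\text{(energy)},
  \qquad
  Z(\mathbf{u}') = \tfrac12 \int |\omega'|^2 \,dx
  \quad\text{(enstrophy)},
\]
% where primes denote the perturbation about the stationary solution.
```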

Relevance: 100.00%

Abstract:

The purpose of this paper is to investigate several analytical methods of solving the first-passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal-action path. The new theory predicts both scaling regimes correctly but fails to get the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
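
The direct simulations mentioned above are straightforward to sketch. Below is a minimal Brownian-dynamics estimate of the mean FP time for a one-dimensional tethered Rouse chain with unit spring constant, friction and temperature; all parameters are illustrative, and the FFS method used in the paper is far more efficient for large $z$.

```python
# Sketch: direct Brownian-dynamics estimate of the mean first-passage time
# for the free end of a tethered Rouse chain to reach a distance z from the
# origin.  1-D chain, bead 0 tethered; zeta = k = k_B T = 1 (illustrative).
import numpy as np

def first_passage_time(n_beads=8, z=3.0, dt=1e-3, rng=None, t_max=1e6):
    rng = rng or np.random.default_rng()
    x = np.zeros(n_beads + 1)            # bead 0 tethered at the origin
    noise_amp = np.sqrt(2.0 * dt)        # overdamped Euler-Maruyama noise
    t = 0.0
    while t < t_max:
        # harmonic spring forces (free boundary at the last bead)
        f = np.zeros_like(x)
        f[1:] += x[:-1] - x[1:]          # pull from the left neighbour
        f[:-1] += x[1:] - x[:-1]         # pull from the right neighbour
        x[1:] += f[1:] * dt + noise_amp * rng.standard_normal(n_beads)
        t += dt
        if abs(x[-1]) >= z:
            return t
    return np.nan                        # did not reach z within t_max

samples = [first_passage_time(rng=np.random.default_rng(s)) for s in range(20)]
print("mean FP time ~", np.nanmean(samples))
```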