926 results for "no conforming mesh"
Abstract:
An efficient algorithm is presented for the solution of the steady Euler equations of gas dynamics. The scheme is based on the approximate solution of linearised Riemann problems and, in more than one dimension, incorporates operator splitting. The scheme is applied to a standard test problem of flow down a channel containing a circular arc bump, for three different mesh sizes.
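To illustrate the operator-splitting idea mentioned above, the minimal sketch below advances a 2D linear advection equation by solving a sequence of 1D problems with first-order upwind fluxes. This is a generic illustration, not the authors' Euler scheme; the grid, velocities and time step are arbitrary choices.

```python
import numpy as np

def upwind_sweep(u, a, dx, dt, axis):
    """One first-order upwind step for du/dt + a du/dx = 0 along `axis`."""
    if a >= 0:
        flux_diff = u - np.roll(u, 1, axis=axis)   # backward difference
    else:
        flux_diff = np.roll(u, -1, axis=axis) - u  # forward difference
    return u - a * dt / dx * flux_diff

def split_step(u, ax, ay, dx, dy, dt):
    """Advance the 2D advection equation one step via operator splitting:
    solve the x-problem, then the y-problem (Godunov splitting)."""
    u = upwind_sweep(u, ax, dx, dt, axis=0)
    u = upwind_sweep(u, ay, dy, dt, axis=1)
    return u

# Usage: advect a Gaussian bump on a periodic 64x64 grid.
n, dx, dt = 64, 1.0 / 64, 0.005
x = np.linspace(0, 1, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-100 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))
for _ in range(100):
    u = split_step(u, ax=1.0, ay=0.5, dx=dx, dy=dx, dt=dt)
```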
Abstract:
We present a finite difference scheme, with the TVD (total variation diminishing) property, for scalar conservation laws. The scheme applies to non-uniform meshes, allowing for variable mesh spacing, and requires no upstream weighting. When applied to systems of conservation laws, no scalar decomposition is required, nor any artificial tuning parameters, which leads to an efficient, robust algorithm.
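A minimal sketch of a TVD-type update on a non-uniform mesh is given below, using a MUSCL reconstruction with a minmod limiter for linear advection. This is a generic illustration under assumed periodic boundaries, not the authors' scheme; the mesh-spacing profile and CFL factor are arbitrary choices.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller slope, or zero at extrema (TVD)."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, h, a, dt):
    """One step of a MUSCL-type scheme for u_t + a u_x = 0 (a > 0) on a
    non-uniform mesh of cell widths h, with periodic boundaries."""
    # One-sided divided differences use the distance between cell centres.
    dx_minus = 0.5 * (h + np.roll(h, 1))          # x_i - x_{i-1}
    dx_plus = 0.5 * (np.roll(h, -1) + h)          # x_{i+1} - x_i
    slope = minmod((u - np.roll(u, 1)) / dx_minus,
                   (np.roll(u, -1) - u) / dx_plus)
    u_face = u + 0.5 * h * slope                  # value at right cell face
    flux = a * u_face                             # upwind flux for a > 0
    return u - dt / h * (flux - np.roll(flux, 1))

# Usage: advect a square pulse across a mesh with smoothly varying spacing.
n = 200
h = 0.005 * (1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, n)))
x = np.cumsum(h) - 0.5 * h
u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)
dt = 0.4 * h.min()                                # CFL with a = 1
for _ in range(200):
    u = tvd_step(u, h, a=1.0, dt=dt)
```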
Abstract:
An algorithm based on flux difference splitting is presented for the solution of two-dimensional, open channel flows. A transformation maps a non-rectangular physical domain into a rectangular one. The governing equations are then the shallow water equations, including slope and friction terms, in a generalized coordinate system, so a regular mesh on a rectangular computational domain can be employed. The resulting scheme has good jump-capturing properties and the advantage of using boundary/body-fitted meshes. The scheme is applied to a problem of flow in a river whose geometry induces a region of supercritical flow.
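As a sketch of the kind of mapping described, the code below shears a channel bounded by two curved banks onto a rectangle, so a regular computational mesh follows the banks in physical space. The bank shapes and the algebraic (sheared) form of the map are assumptions for illustration; the metric terms that would appear in the transformed shallow water equations are omitted.

```python
import numpy as np

def to_computational(x, y, y_bottom, y_top):
    """Map a point in a non-rectangular channel (bounded below by y_bottom(x)
    and above by y_top(x)) to the rectangular domain (xi, eta) in [0, L] x [0, 1]."""
    eta = (y - y_bottom(x)) / (y_top(x) - y_bottom(x))
    return x, eta

def to_physical(xi, eta, y_bottom, y_top):
    """Inverse map: place a regular computational mesh back in the channel."""
    y = y_bottom(xi) + eta * (y_top(xi) - y_bottom(xi))
    return xi, y

# Usage: build a boundary-fitted mesh for a channel whose banks meander.
y_b = lambda x: 0.1 * np.sin(2 * np.pi * x / 10.0)          # bottom bank
y_t = lambda x: 1.0 + 0.2 * np.cos(2 * np.pi * x / 10.0)    # top bank
xi, eta = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 1, 21))
X, Y = to_physical(xi, eta, y_b, y_t)   # mesh lines follow the banks
```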
Abstract:
A solution has been found to the long-standing problem of experimental modelling of the interfacial instability in aluminium reduction cells. The idea is to replace the electrolyte overlaying the molten aluminium with a mesh of thin rods supplying current directly into the liquid metal layer. This eliminates electrolysis altogether, and with it all the associated problems, such as high temperature, chemical aggressiveness of the media, products of electrolysis, the necessity for electrolyte renewal, and high power demands. The result is a room-temperature, versatile laboratory model which simulates the Sele-type, rolling-pad interfacial instability. Our new, safe laboratory model enables detailed experimental investigations to test the existing theoretical models for the first time.
Abstract:
This paper introduces PSOPT, an open source optimal control solver written in C++. PSOPT uses pseudospectral and local discretizations, sparse nonlinear programming, and automatic differentiation, and incorporates automatic scaling and mesh refinement facilities. The software is able to solve complex optimal control problems including multiple phases, delayed differential equations, nonlinear path constraints, interior point constraints, integral constraints, and free initial and/or final times. The software does not require any non-free platform to run, not even the operating system, as it is able to run under Linux. Additionally, the software generates plots as well as LaTeX code so that its results can easily be included in publications. An illustrative example is provided.
Abstract:
We consider scattering of a time harmonic incident plane wave by a convex polygon with piecewise constant impedance boundary conditions. Standard finite or boundary element methods require the number of degrees of freedom to grow at least linearly with respect to the frequency of the incident wave in order to maintain accuracy. Extending earlier work by Chandler-Wilde and Langdon for the sound-soft problem, we propose a novel Galerkin boundary element method, with the approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. Theoretical analysis and numerical results suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency of the incident wave.
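The graded mesh is the concrete ingredient here: elements shrink algebraically towards each corner. A minimal sketch is below; the grading exponent q is an assumed illustrative value, as the analysis dictates the actual choice.

```python
import numpy as np

def graded_mesh(n, L, q):
    """n+1 points on [0, L], algebraically graded toward 0:
    x_j = L * (j/n)**q.  q = 1 gives a uniform mesh; larger q clusters
    elements near the corner at x = 0."""
    j = np.arange(n + 1)
    return L * (j / n) ** q

# Usage: a mesh on a side of length 2, graded toward both corners,
# built from two mirrored graded half-meshes.
half = graded_mesh(16, 1.0, q=3)
mesh = np.concatenate([half, 2.0 - half[-2::-1]])
```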
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound-soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes: a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
Abstract:
Commissioned by Frieze Art for the Frieze Sculpture Park. The project presents the image of a sculpture as a sculpture, installed in the form of a large-scale digital print on vinyl stretched over a 14 x 28 ft (4.2 x 8.4 m) stretcher supported by a scaffolding structure. The image itself depicts a futuristic public sculpture, an ‘impossible’ artwork, referencing Ballard’s descriptions in his book ‘Vermilion Sands’. The work also draws upon examples of rococo ornamentation and the compositional conventions of ‘images of sculpture’ (in art magazines, catalogues, publicity photos), including examples sited in Regent’s Park in previous years. Technical details: the image is printed on vinyl, stretched over a 14 x 28 ft (4.2 x 8.4 m) wooden stretcher and fixed to a deep buttressed scaffold 8 m long by 6.23 m deep, with IBC water tanks on the back edge as kentledge (4 x 1 tonne IBC water containers, one per bay). The structure is constructed from clean silver Layher system scaffold and wrapped in a dense black mesh netting.
Abstract:
The transport and deposition of charged inhaled aerosols in a double planar bifurcation, representing generations three to five of the human respiratory system, has been studied under a light-activity breathing condition. Both steady and oscillatory laminar inhalation airflows are considered. Particle trajectories are calculated in a Lagrangian reference frame, in which the particle motion is governed by the fluid force exerted by the airflow, the gravitational force, and electrostatic forces (both space- and image-charge forces). The particle-mesh method is selected to calculate the space-charge force. This numerical study investigates the deposition efficiency in the three-dimensional model for various particle sizes, charge values, and inlet particle distributions. Numerical results indicate that particles carrying an adequate level of charge can improve deposition efficiency in the airway model.
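As a rough sketch of the Lagrangian force balance described, the code below advances a single particle under Stokes drag toward the local air velocity, gravity, and an electric force. The flow field, field strength, relaxation time and the forward-Euler integrator are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def step_particle(x, v, u_fluid, tau, q_over_m, E, g, dt):
    """Advance one charged particle by forward Euler.  The acceleration is
    Stokes drag toward the local air velocity, gravity, and the Coulomb
    force (space- plus image-charge fields lumped into E)."""
    a = (u_fluid(x) - v) / tau + g + q_over_m * E(x)
    return x + dt * v, v + dt * a

# Usage: one particle carried by a plug flow in a downward-pointing field.
u_fluid = lambda x: np.array([0.1, 0.0, 0.0])     # plug flow in x, m/s
E = lambda x: np.array([0.0, 0.0, -1.0e3])        # V/m, illustrative
g = np.array([0.0, 0.0, -9.81])
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = step_particle(x, v, u_fluid, tau=1e-4, q_over_m=1e-3, E=E, g=g, dt=1e-5)
```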
Abstract:
A new numerical model of inhaled charged aerosols has been developed, based on a modified Weibel model. Both the velocity profiles (slug and parabolic flows) and the particle distributions (uniform and parabolic distributions) have been considered. Inhaled particles are modeled as a dilute dispersed phase flow in which the particle motion is controlled by the fluid force and the external forces acting on the particles. This numerical study extends previous numerical studies by considering both space- and image-charge forces. Because of the complex computation of the interacting forces due to the space-charge effect, the particle-mesh (PM) method is selected to calculate these forces. In the PM technique, the charges of all particles are assigned to the space-charge field mesh to compute the charge density. Poisson's equation for the electrostatic potential is then solved, and the electrostatic force acting on each individual particle is interpolated. It is assumed that humidity has no effect on the charged particles. The results show that several other significant factors also affect the deposition, such as the volume of the particle cloud, the velocity profile and the particle distribution. This study allows a better understanding of the electrostatic mechanisms of aerosol transport and deposition in human airways.
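The three PM steps described (assign charges to the mesh, solve Poisson's equation, interpolate the force back) can be sketched in one dimension as below. The periodic FFT-based Poisson solve and all parameters are simplifying assumptions for illustration; the paper works in a three-dimensional airway geometry.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def particle_mesh_force(pos, q, n, box):
    """1D periodic particle-mesh (PM) sketch of the space-charge force:
    (1) assign particle charges to a mesh (cloud-in-cell) to get the
    charge density, (2) solve Poisson's equation for the potential with
    an FFT, (3) interpolate the field back to each particle."""
    dx = box / n
    idx = np.floor(pos / dx).astype(int) % n
    w = pos / dx - np.floor(pos / dx)             # cloud-in-cell weights

    # (1) charge density on the mesh
    rho = np.zeros(n)
    np.add.at(rho, idx, q * (1 - w) / dx)
    np.add.at(rho, (idx + 1) % n, q * w / dx)

    # (2) solve phi'' = -rho/eps0 in Fourier space
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / (EPS0 * k[1:] ** 2)
    E_grid = np.real(np.fft.ifft(-1j * k * phi_k))  # E = -dphi/dx

    # (3) interpolate the field back to the particles, same weights
    E_par = (1 - w) * E_grid[idx] + w * E_grid[(idx + 1) % n]
    return q * E_par                                # force on each particle

# Usage: mutual repulsion within a small cloud of identically charged particles.
pos = np.random.default_rng(0).uniform(0.4, 0.6, size=500)
f = particle_mesh_force(pos, q=1e-15, n=128, box=1.0)
```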
Abstract:
We report the single-crystal X-ray structure for the complex of the bisacridine bis-(9-aminooctyl(2-(dimethylaminoethyl)acridine-4-carboxamide)) with the oligonucleotide d(CGTACG)2 to a resolution of 2.4 Å. Solution studies with closed circular DNA show this compound to be a bisintercalating threading agent, but so far we have no crystallographic or NMR structural data conforming to the model of contiguous intercalation within the same duplex. Here, with the hexameric duplex d(CGTACG), the DNA is observed to undergo a terminal cytosine base exchange to yield an unusual guanine quadruplex intercalation site through which the bisacridine threads its octamethylene linker to fuse two DNA duplexes. The 4-carboxamide side-chains form anchoring hydrogen-bonding interactions with guanine O6 atoms on each side of the quadruplex. This higher-order DNA structure provides insight into an unexpected property of bisintercalating threading agents, and suggests the idea of targeting such compounds specifically at four-way DNA junctions.
Abstract:
Tropical cyclones (TCs) are normally not studied at the individual level with Global Climate Models (GCMs), because the coarse grid spacing is often deemed insufficient for a realistic representation of the basic underlying processes. GCMs are indeed routinely deployed at low resolution in order to enable sufficiently long integrations, which means that only large-scale TC proxies are diagnosed. A new class of GCMs is emerging, however, which is capable of simulating TC-type vortices while retaining a horizontal resolution similar to that of operational NWP GCMs; their integration on the latest supercomputers enables the completion of long-term integrations. The UK-Japan Climate Collaboration (UJCC) and the UK-HiGEM projects have developed climate GCMs which can be run routinely for decades (with a grid spacing of 60 km) or centuries (with a grid spacing of 90 km); when coupled to the ocean GCM, a mesh of 1/3 degree provides eddy-permitting resolution. The 90 km resolution model has been developed entirely by the UK-HiGEM consortium (together with its 1/3 degree ocean component); the 60 km atmospheric GCM has been developed by UJCC, in collaboration with the Met Office Hadley Centre.
Abstract:
In basic network transactions, a datagram travelling from source to destination is routed through numerous routers and paths, depending on which paths are free and uncongested; this can make the transmission route long, incurring greater delay, jitter and congestion, and reducing throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter, which arises from queuing delays that depend on the applied loading conditions. Delay and jitter accumulate with the number of nodes along a transmission route, and together with dropped packets they add further complexity to multimedia traffic, because there is no guarantee that each traffic stream will be delivered according to its own jitter constraints; there is therefore a need to analyze the effects of jitter. IP routers provide a single path for the transmission of all packets. Multi-Protocol Label Switching (MPLS), on the other hand, allows the separation of packet forwarding and routing characteristics, enabling packets to use appropriate routes and allowing the behavior of transmission paths to be optimized and controlled, thus correcting some of the shortfalls associated with IP routing. MPLS has therefore been utilized in the analysis of effective transmission through the various networks. This paper analyzes the effect of delay, congestion, interference, jitter and packet loss on the transmission of signals from source to destination. In particular, the impact of link failures and repair paths in various physical topologies, namely bus, star, mesh and hybrid topologies, is analyzed under standard network conditions.
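The abstract does not specify how jitter is estimated; one common running estimate is the RFC 3550 interarrival jitter, sketched below purely for illustration.

```python
def interarrival_jitter(send_times, recv_times):
    """Running interarrival-jitter estimate in the style of RFC 3550:
    J += (|D| - J) / 16, where D is the change in one-way transit time
    between consecutive packets.  Clock units are arbitrary but must
    match between the two timestamp lists."""
    j = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0          # exponential smoothing, gain 1/16
        prev_transit = transit
    return j

# Usage: packets sent every 20 ms, received with variable queuing delay.
send = [0.020 * i for i in range(6)]
recv = [s + d for s, d in zip(send, [0.005, 0.009, 0.006, 0.012, 0.007, 0.008])]
print(interarrival_jitter(send, recv))
```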
Abstract:
We evaluate a number of real estate sentiment indices to ascertain the current and forward-looking information content that may be useful for forecasting demand- and supply-side activity. Our focus lies on sector-specific surveys targeting players on the supply side of both residential and non-residential real estate markets. Analyzing the dynamic relationships within a Vector Auto-Regression (VAR) framework, we test the efficacy of these indices by comparing them with other coincident indicators in predicting real estate returns. Overall, our analysis suggests that sentiment indicators convey important information which should be embedded in the modeling exercise to predict real estate market returns. Generally, sentiment indices show better information content than broad economic indicators. The goodness of fit of our models is higher for the residential market than for the non-residential real estate sector. The impulse responses, in general, conform to our theoretical expectations. Variance decompositions and out-of-sample predictions generally show the desired contributions and reasonable improvements, respectively, thus upholding our hypothesis. Quite remarkably, and consistent with theory, the predictability swings as we look through different phases of the cycle. This perhaps suggests that, for example during recessions, market players’ expectations may be a more accurate predictor of future performance, conceivably indicating a ‘negative’ information processing bias and thus conforming to the precautionary motive of consumer behaviour.
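As a sketch of the VAR workflow described (fit, impulse responses, variance decompositions, out-of-sample prediction), the snippet below uses statsmodels on synthetic stand-in series; the variables, data and lag settings are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-ins: real estate returns and a sentiment index,
# where sentiment leads returns by one period.
rng = np.random.default_rng(0)
n = 120
sentiment = rng.standard_normal(n).cumsum() * 0.1
returns = 0.3 * np.roll(sentiment, 1) + rng.standard_normal(n) * 0.5
df = pd.DataFrame({"returns": returns[1:], "sentiment": sentiment[1:]})

res = VAR(df).fit(maxlags=4, ic="aic")      # lag order chosen by AIC
irf = res.irf(8)                            # impulse responses, 8 periods
fevd = res.fevd(8)                          # variance decompositions
forecast = res.forecast(df.values[-res.k_ar:], steps=4)  # out-of-sample
```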
Abstract:
Adaptive methods which “equidistribute” a given positive weight function are now used fairly widely for selecting discrete meshes. The disadvantage of such schemes is that the resulting mesh may not be smoothly varying. In this paper a technique is developed for equidistributing a function subject to constraints on the ratios of adjacent steps in the mesh. Given a weight function $f \geq 0$ on an interval $[a,b]$ and constants $c$ and $K$, the method produces a mesh with points $x_0 = a$, $x_{j+1} = x_j + h_j$ for $j = 0, 1, \ldots, n-1$, and $x_n = b$, such that
\[ \int_{x_j}^{x_{j+1}} f \leq c \quad\text{and}\quad \frac{1}{K} \leq \frac{h_{j+1}}{h_j} \leq K \quad \text{for } j = 0, 1, \ldots, n-1. \]
A theoretical analysis of the procedure is presented, and numerical algorithms for implementing the method are given. Examples show that the procedure is effective in practice. Other types of constraints on equidistributing meshes are also discussed. The principal application of the procedure is to the solution of boundary value problems, where the weight function is generally some error indicator, and accuracy and convergence properties may depend on the smoothness of the mesh. Other practical applications include the regrading of statistical data.
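A naive sketch of the forward, greedy part of such a construction is below: each step is made as large as possible subject to the integral bound and the growth cap $h_{j+1} \leq K h_j$. Enforcing the lower ratio bound $h_{j+1} \geq h_j / K$ as well is the substance of the paper's algorithm and is not attempted here; the weight function and constants are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def constrained_equidistribute(f, a, b, c, K, h0):
    """Grow a mesh from a, making each step as large as possible subject to
    (i) the integral of f over the step being at most c, and
    (ii) the growth bound h_new <= K * h_prev.
    The lower ratio bound h_new >= h_prev / K is NOT enforced in this
    sketch; handling it requires the paper's full algorithm."""
    x, h_prev = [a], h0
    while x[-1] < b:
        lo, hi = 0.0, min(K * h_prev, b - x[-1])
        for _ in range(50):                     # bisect for the largest step
            mid = 0.5 * (lo + hi)
            if quad(f, x[-1], x[-1] + mid)[0] <= c:
                lo = mid
            else:
                hi = mid
        h_prev = max(lo, 1e-12)
        x.append(min(x[-1] + h_prev, b))
    return np.array(x)

# Usage: a weight that peaks sharply at x = 0.5 forces local refinement.
f = lambda x: 1.0 + 100.0 * np.exp(-200.0 * (x - 0.5) ** 2)
mesh = constrained_equidistribute(f, 0.0, 1.0, c=0.3, K=1.5, h0=0.01)
ratios = np.diff(mesh)[1:] / np.diff(mesh)[:-1]   # inspect step-ratio behaviour
```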