129 results for Efficient Solution
in University of Queensland eSpace - Australia
Abstract:
A piecewise-uniform fitted mesh method turns out to be sufficient for the solution of a surprisingly wide variety of singularly perturbed problems involving steep gradients. The technique is applied to a model of adsorption in bidisperse solids, for which two fitted-mesh techniques, a fitted-mesh finite difference method (FMFDM) and a fitted-mesh collocation method (FMCM), are presented. A combination (FMCMD) of FMCM and the DASSL integration package is found to be most effective in solving the problems. Numerical solutions (FMFDM and FMCMD) were found to match the analytical solution when the adsorption isotherm is linear, even under conditions involving steep gradients for which global collocation fails. In particular, FMCMD is highly efficient for macropore diffusion control or micropore diffusion control. These techniques are simple, and there is no limit on the range of the parameters. The techniques can be applied to a variety of adsorption and desorption problems in bidisperse solids with non-linear isotherms and arbitrary particle geometry.
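The core idea of a piecewise-uniform fitted mesh can be sketched in a few lines. The following is a minimal illustration (not the paper's code), assuming a single boundary layer of width O(eps) at x = 1; the function name `shishkin_mesh` and the parameter `sigma` are illustrative choices:

```python
import math

def shishkin_mesh(n, eps, sigma=2.0):
    """Piecewise-uniform fitted mesh on [0, 1] for a problem with a
    boundary layer of width O(eps) at x = 1.  Half the mesh points are
    condensed into the layer region [1 - tau, 1]."""
    assert n % 2 == 0, "n must be even"
    # transition point: capped at the midpoint for unperturbed cases
    tau = min(0.5, sigma * eps * math.log(n))
    # coarse uniform piece on [0, 1 - tau], fine uniform piece on [1 - tau, 1]
    coarse = [i * (1.0 - tau) / (n // 2) for i in range(n // 2)]
    fine = [1.0 - tau + i * tau / (n // 2) for i in range(n // 2 + 1)]
    return coarse + fine
```

A standard finite difference or collocation scheme applied on such a mesh resolves the steep layer without needing a fine mesh everywhere, which is what makes the fitted-mesh approach cheap.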
Abstract:
Some efficient solution techniques for solving models of noncatalytic gas-solid and fluid-solid reactions are presented. These models include those with non-constant diffusivities, for which the formulation reduces to that of a convection-diffusion problem. A singular perturbation problem results for such models in the presence of a large Thiele modulus, for which classical numerical methods can present difficulties. For the convection-diffusion-like case, the time-dependent partial differential equations are transformed by a semi-discrete Petrov-Galerkin finite element method into a system of ordinary differential equations of the initial-value type that can be readily solved. In the presence of a constant diffusivity, in slab geometry the convection-like terms are absent, and a combination of a fitted mesh finite difference method with a predictor-corrector method is used to solve the problem. Both methods are found to converge, and general reaction rate forms can be treated. These methods are simple and highly efficient for arbitrary particle geometry and parameters, including a large Thiele modulus. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure and has two novel features: 1) it partitions the whole file into slices such that only a small number of slices are accessed and checked during k Nearest Neighbor (kNN) search, and 2) it handles insertions of new vectors efficiently, such that the average distance between the new vectors and the approximations near their positions is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their corresponding centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta. By adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies using real video data sets show that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieve much more efficient performance.
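The slice-ranking step of OVA-LOW can be sketched abstractly. The toy code below (assumed structure, not the paper's implementation; `build_slices` orders vectors by a single coordinate purely for illustration) shows the key idea: rank slices by center-to-query distance and scan only the `delta` nearest slices:

```python
import heapq
import math

def euclid(a, b):
    """Plain Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_slices(vectors, num_slices):
    """Toy stand-in for OVA-Slices: order vectors by their first
    coordinate and cut the ordering into equal-size slices, keeping
    each slice's center for ranking at query time."""
    ordered = sorted(vectors, key=lambda v: v[0])
    size = max(1, len(ordered) // num_slices)
    slices = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    centers = [tuple(sum(c) / len(s) for c in zip(*s)) for s in slices]
    return slices, centers

def approx_knn(query, slices, centers, k, delta):
    """Visit only the delta slices whose centers are nearest to the
    query, then return the k nearest vectors among those slices."""
    ranked = sorted(range(len(slices)),
                    key=lambda i: euclid(centers[i], query))
    candidates = [v for i in ranked[:delta] for v in slices[i]]
    return heapq.nsmallest(k, candidates, key=lambda v: euclid(v, query))
```

Increasing `delta` scans more slices, trading query cost for result quality, which mirrors the trade-off described in the abstract.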
Abstract:
Objectives: To review the results of the first 403 women treated at the Abnormal Smear and Colposcopy Unit, with special reference to the utility, efficacy, acceptability and economy of in-office treatment of cervical lesions by large loop or Fischer cone excision. Design: Retrospective chart review of consecutive patients treated following referral with an abnormal smear or abnormal cervical morphology, between 1 September 1996 and 1 August 2001. Setting: Inner city private practice. Sample: A total of 403 consecutive General Practitioner referred women. Methods: Details of referral smear result, colposcopically directed biopsy result, subsequent treatment type and histological result (including assessability), number of specimens submitted, complications and follow-up assessment were extracted at chart review. Costs of public hospital inpatient and outpatient care, supplied by the Casemix and Clinical Benchmarking Service, Mater Misericordiae Public Hospitals (with permission to publish), were compared with Medicare rebates. Main outcome measures: A total of 187 women were treated by large loop excision of the transformation zone, and 216 by Fischer cone excision. A total of 395 women were treated as outpatients under local anaesthetic, while eight were treated under general anaesthesia as inpatients. There was poor correlation between referring smear, biopsy and subsequent treatment results. Eight patients had abnormal cytology at follow-up, of whom two have been retreated. Three patients had primary or secondary bleeding requiring treatment and two developed cervical stenosis. Outpatient private practice treatment of women with abnormal smears allows significant savings to the public purse over public or private hospital care. Conclusions: Outpatient treatment of women with abnormal smears, using the Fischer cone technique, is safe, well accepted, effective and the most cost-efficient solution to this public health problem.
Abstract:
A k-NN query finds the k nearest neighbors of a given point from a point database. When it is sufficient to measure object distance using the Euclidean distance, the key to efficient k-NN query processing is to fetch and check the distances of a minimum number of points from the database. For many applications, such as vehicle movement along road networks or rover and animal movement along terrain surfaces, the distance is only meaningful when it is measured along a valid movement path. For this type of k-NN query, the focus of efficient query processing is to minimize the cost of computing distances using the environment data (such as the road network data and the terrain data), which can be several orders of magnitude larger than the point data. Efficient processing of k-NN queries based on the Euclidean distance or the road network distance has been investigated extensively in the past. In this paper, we investigate the problem of surface k-NN query processing, where the distance is calculated from the shortest path along a terrain surface. This problem is very challenging, as the terrain data can be very large and the computational cost of finding shortest paths is very high. We propose an efficient solution based on multiresolution terrain models. Our approach eliminates the need for the costly process of computing exact shortest paths by ranking objects using estimated lower and upper bounds of distance derived from multiresolution terrain models.
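The pruning logic behind such bound-based ranking is simple to state. The sketch below (hypothetical names, assuming per-object lower/upper distance bounds are already available, e.g. from coarse and fine terrain models) retains exactly those objects that could still be among the k nearest:

```python
def knn_candidates(objects, bounds, k):
    """objects: object ids; bounds: id -> (lower, upper) estimated
    surface-distance bounds.  Returns the candidate set guaranteed to
    contain the true k nearest objects: anything whose lower bound
    exceeds the k-th smallest upper bound can never be in the answer,
    so its exact (expensive) shortest path need not be computed."""
    uppers = sorted(bounds[o][1] for o in objects)
    kth_upper = uppers[k - 1]   # k-th smallest upper bound
    return [o for o in objects if bounds[o][0] <= kth_upper]
```

Only the surviving candidates require exact shortest-path computation on the full-resolution terrain, which is where the savings come from.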
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This minimizes the number of elements to which the inverse mapping is applied. In this regard, the present algorithm is very effective and efficient. (2) Analytical solutions for the local coordinates of any point in a four-node quadrilateral element, which are derived in a rigorous mathematical manner in the context of this paper, make it possible to carry out the inverse mapping process very effectively and efficiently. (3) The use of consistent interpolation enables the interpolated solution to be compatible with the original solution and therefore guarantees an interpolated solution of extremely high accuracy. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated through a challenging problem. The related results from the test problem have demonstrated the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
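The inverse mapping in point (2) can be illustrated numerically. The paper derives a closed-form solution; the sketch below instead uses a standard Newton iteration on the bilinear shape functions (a common alternative, shown only to make the inverse-mapping step concrete; all names are illustrative):

```python
def inverse_map(xy, nodes, tol=1e-12, max_iter=50):
    """Given a physical point xy and the four corner nodes of a bilinear
    quadrilateral (listed counter-clockwise), find local coordinates
    (xi, eta) in [-1, 1]^2 that the bilinear map sends to xy."""
    corners = ((-1, -1), (1, -1), (1, 1), (-1, 1))
    x, y = xy
    xi = eta = 0.0                       # start at the element center
    for _ in range(max_iter):
        # bilinear shape functions and their local derivatives
        N = [0.25 * (1 + s * xi) * (1 + t * eta) for s, t in corners]
        dN_dxi = [0.25 * s * (1 + t * eta) for s, t in corners]
        dN_deta = [0.25 * (1 + s * xi) * t for s, t in corners]
        # residual: mapped point minus target point
        fx = sum(n * p[0] for n, p in zip(N, nodes)) - x
        fy = sum(n * p[1] for n, p in zip(N, nodes)) - y
        # 2x2 Jacobian of the map
        j11 = sum(d * p[0] for d, p in zip(dN_dxi, nodes))
        j12 = sum(d * p[0] for d, p in zip(dN_deta, nodes))
        j21 = sum(d * p[1] for d, p in zip(dN_dxi, nodes))
        j22 = sum(d * p[1] for d, p in zip(dN_deta, nodes))
        det = j11 * j22 - j12 * j21
        # Newton step: solve J * [dxi, deta] = -[fx, fy]
        dxi = (-fx * j22 + fy * j12) / det
        deta = (-fy * j11 + fx * j21) / det
        xi, eta = xi + dxi, eta + deta
        if abs(dxi) + abs(deta) < tol:
            break
    return xi, eta
```

A point lies inside the element exactly when both local coordinates fall in [-1, 1], which is the test the point-searching strategy needs.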
Abstract:
Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, and this underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
Abstract:
Relatively few cyclic peptides have reached the pharmaceutical marketplace during the past decade, most produced through fermentation rather than made synthetically. Generally, this class of compounds is synthesized for research purposes on milligram scales by solid-phase methods, but if the potential of macrocyclic peptidomimetics is to be realized, low-cost larger scale solution-phase syntheses need to be devised and optimized to provide sufficient quantities for preclinical, clinical, and commercial uses. Here, we describe a cheap, medium-scale, solution-phase synthesis of the first reported highly potent, selective, and orally active antagonist of the human C5a receptor. This compound, Ac-Phe[Orn-Pro-D-Cha-Trp-Arg], known as 3D53, is a macrocyclic peptidomimetic of the human plasma protein C5a and displays excellent anti-inflammatory activity in numerous animal models of human disease. In a convergent approach, two tripeptide fragments Ac-Phe-Orn(Boc)-Pro-OH and H-D-Cha-Trp(For)-Arg-OEt were first prepared by high-yielding solution-phase couplings using a mixed anhydride method before coupling them to give a linear hexapeptide which, after deprotection, was obtained in 38% overall yield from the commercially available amino acids. Cyclization in solution using BOP reagent gave the antagonist in 33% yield (13% overall) after HPLC purification. Significant features of the synthesis were that the Arg side chain was left unprotected throughout, the component Boc-D-Cha-OH was obtained very efficiently via hydrogenation of D-Phe with PtO2 in TFA/water, the tripeptides were coupled at the Pro-Cha junction to minimize racemization via the oxazolone pathway, and the entire synthesis was carried out without purification of any intermediates. The target cyclic product was purified (>97%) by reversed-phase HPLC.
This convergent synthesis with minimal use of protecting groups allowed batches of 50-100 g to be prepared efficiently in high yield using standard laboratory equipment. This type of procedure should be useful for making even larger quantities of this and other macrocyclic peptidomimetic drugs.
Abstract:
Lateral-distortional buckling may occur in I-section beams with slender webs and stocky flanges. A computationally efficient method is presented in this paper to study this phenomenon. Previous studies on distortional buckling have relied on 3rd- and 5th-order polynomials to model the displacements. The present study provides an alternative way, using Fourier series, to model the behaviour. Beams of different cross-sectional dimensions, load cases and restraint conditions are examined and compared. The accuracy and versatility of the method are verified by calibrating against the results of other published studies. The present method is believed to be a simple and efficient way of determining the buckling load and mode shapes of I-section beams that are susceptible to lateral-distortional buckling modes.
Abstract:
In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently; some may not even specify all the features. In this paper, we propose our solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features into two components: The first component is a 2D vector that reflects a distance range ( minimum and maximum values) of the f features with respect to a reference point ( the center of the space) in a metric space and the second component is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: The first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing. We employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features and partial distances to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and retrieval methods using single and multiple VA-files.
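The first filtering level can be sketched abstractly. The toy code below (hypothetical names, simplified to the distance-range component only; the bit-signature level and the B+-tree indexing are omitted) shows how a (min, max) per-feature distance summary prunes points before any exact distance is computed:

```python
def summarize(point, center):
    """First-level summary: the (min, max) range of per-feature
    distances from a point to a reference point (the space's center)."""
    dists = [abs(p - c) for p, c in zip(point, center)]
    return min(dists), max(dists)

def range_filter(points, center, query, slack):
    """Keep only points whose distance range overlaps the query's range
    after widening by `slack`; survivors would go on to the finer
    bit-signature level and finally to exact distance checks."""
    qmin, qmax = summarize(query, center)
    survivors = []
    for p in points:
        pmin, pmax = summarize(p, center)
        # two intervals [pmin, pmax] and [qmin, qmax] must come within
        # `slack` of overlapping for p to remain a candidate
        if pmin <= qmax + slack and qmin <= pmax + slack:
            survivors.append(p)
    return survivors
```

Because the summary is only two numbers per point, this level is cheap to scan and cheap to index, which is the motivation for making it the first filter.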
Abstract:
Retrieving large amounts of information over wide area networks, including the Internet, is problematic due to issues arising from latency of response, lack of direct memory access to data serving resources, and fault tolerance. This paper describes a design pattern for solving the issues of handling results from queries that return large amounts of data. Typically these queries would be made by a client process across a wide area network (or Internet), with one or more middle-tiers, to a relational database residing on a remote server. The solution involves implementing a combination of data retrieval strategies, including the use of iterators for traversing data sets and providing an appropriate level of abstraction to the client, double-buffering of data subsets, multi-threaded data retrieval, and query slicing. This design has recently been implemented and incorporated into the framework of a commercial software product developed at Oracle Corporation.
Abstract:
A theoretical analysis is presented to investigate fully developed (both thermally and hydrodynamically) forced convection in a duct of rectangular cross-section filled with a hyper-porous medium. The Darcy-Brinkman model for flow through porous media was adopted in the present analysis. A Fourier series type solution is applied to obtain the exact velocity and temperature distribution within the duct. The case of uniform heat flux on the walls, i.e. the H boundary condition in the terminology of Kays and Crawford [1], is treated. Values of the Nusselt number and the friction factor as a function of the aspect ratio, the Darcy number, and the viscosity ratio are reported.
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Abstract:
The A_{n-1}^{(1)} trigonometric vertex model with generic non-diagonal boundaries is studied. The double-row transfer matrix of the model is diagonalized by the algebraic Bethe ansatz method in terms of the intertwiner and the corresponding face-vertex relation. The eigenvalues and the corresponding Bethe ansatz equations are obtained.
Abstract:
Superconducting pairing of electrons in nanoscale metallic particles with discrete energy levels and a fixed number of electrons is described by the reduced Bardeen, Cooper, and Schrieffer model Hamiltonian. We show that this model is integrable by the algebraic Bethe ansatz. The eigenstates, spectrum, conserved operators, integrals of motion, and norms of wave functions are obtained. Furthermore, the quantum inverse problem is solved, meaning that form factors and correlation functions can be explicitly evaluated. Closed form expressions are given for the form factors and correlation functions that describe superconducting pairing.