889 results for Computational effort


Relevance:

60.00%

Publisher:

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated in one domain prove effectively reusable in a different one.
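A minimal sketch of the nearest-centroid cross-domain scheme described above, assuming TF-IDF-style document vectors and cosine similarity (both plausible but unstated choices); the loop mirrors the idea of iteratively adapting category profiles from the known domain to the unknown one:

```python
import numpy as np

def nearest_centroid_transfer(X_src, y_src, X_tgt, n_iter=10):
    """Nearest-centroid cross-domain classification (sketch).

    Category profiles (centroids) are built from the labeled source
    domain, then iteratively adapted to the unlabeled target domain by
    re-estimating them from the target documents they attract.
    """
    labels = np.unique(y_src)
    # Initial category profiles from the known (source) domain.
    centroids = np.array([X_src[y_src == c].mean(axis=0) for c in labels])
    for _ in range(n_iter):
        # Cosine similarity between target documents and current profiles.
        sim = X_tgt @ centroids.T
        sim /= (np.linalg.norm(X_tgt, axis=1, keepdims=True)
                * np.linalg.norm(centroids, axis=1) + 1e-12)
        assign = sim.argmax(axis=1)
        # Adapt profiles to the unknown (target) domain.
        centroids = np.array([X_tgt[assign == k].mean(axis=0)
                              if np.any(assign == k) else centroids[k]
                              for k in range(len(labels))])
    return labels[assign]
```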

Relevance:

60.00%

Publisher:

Abstract:

This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, aiming at a performance improvement over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort, so an efficient modeling of both components is essential for the performance of a method. The novel method treats the solvent in a coarse-grained fashion and replaces the explicit-ion description with a dynamic mean-field treatment. We thus combine particle- and field-based descriptions in a hybrid method that effectively solves the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties; here we also include semidilute solutions, which are computationally demanding. Second, we study electro-osmotic flow over superhydrophobic surfaces, where we perform a detailed comparison with theoretical predictions.
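The abstract states that the method effectively solves the electrokinetic equations; as a rough, much-simplified illustration of that equation family, here is a 1D explicit finite-difference update of Nernst-Planck ion transport (the geometry, diffusion constant D, and mobility mu are all hypothetical, not taken from the thesis):

```python
import numpy as np

# Minimal 1D Nernst-Planck step: dc/dt = d/dx (D dc/dx + mu * c * dphi/dx).
# All parameters are illustrative, not taken from the thesis.
def nernst_planck_step(c, phi, dx, dt, D=1.0, mu=1.0):
    dcdx = np.gradient(c, dx)
    dphidx = np.gradient(phi, dx)
    flux = -D * dcdx - mu * c * dphidx      # diffusive + electro-migration flux
    return c - dt * np.gradient(flux, dx)   # explicit Euler update

x = np.linspace(0.0, 1.0, 101)
c = np.exp(-100 * (x - 0.5) ** 2)           # initial ion concentration peak
phi = -x                                    # constant applied field
for _ in range(100):
    c = nernst_planck_step(c, phi, dx=x[1] - x[0], dt=1e-5)
```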

Relevance:

60.00%

Publisher:

Abstract:

This thesis addresses the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational effort, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that the Hamiltonian and density matrices are sparse due to localization. This reduces the computational effort so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also treated. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for finding roots of functions. It is proved that the order of convergence is always at least quadratic, and that adaptive adjustment of a parameter q leads to better results in all cases.
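For context, a minimal NumPy sketch of the classical Newton-Schulz iteration for the matrix inverse, the best-known member of the family of (inverse) p-th-root iterations that the thesis generalizes (the initial-guess scaling is one standard choice, not taken from the thesis):

```python
import numpy as np

def newton_schulz_inverse(A, n_iter=30):
    """Classical Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k).

    Converges quadratically to A^{-1} provided ||I - A X_0|| < 1;
    X_0 = A^T / (||A||_1 ||A||_inf) is a standard safe initial guess.
    """
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(n_iter):
        X = X @ (2 * np.eye(n) - A @ X)
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
print(np.allclose(newton_schulz_inverse(A) @ A, np.eye(2)))  # True
```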

Relevance:

60.00%

Publisher:

Abstract:

A Reynolds-stress turbulence model has been successfully incorporated into the KIVA code, a computational fluid dynamics hydrocode for three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version offers only eddy-viscosity turbulence models. Validation of the Reynolds-stress turbulence model is accomplished by conducting pipe-flow and channel-flow simulations and comparing the computed results with experimental and direct numerical simulation data. Flows in engines of various geometries and operating conditions are calculated using the model, both to study the complex flow fields and to confirm the model's validity. Results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model proves to be an appropriate choice for engine simulations, offering consistency and robustness while requiring relatively low computational effort.
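For reference, a standard textbook form of the modeled Reynolds-stress transport equations that closures of this class solve, one for each stress component (this generic form is an assumption; the abstract does not state the exact variant implemented in KIVA):

```latex
\frac{\partial \overline{u_i' u_j'}}{\partial t}
  + \bar{u}_k \frac{\partial \overline{u_i' u_j'}}{\partial x_k}
  = P_{ij} + \phi_{ij} - \varepsilon_{ij} + D_{ij},
\qquad
P_{ij} = -\overline{u_i' u_k'}\,\frac{\partial \bar{u}_j}{\partial x_k}
         -\overline{u_j' u_k'}\,\frac{\partial \bar{u}_i}{\partial x_k}
```

Here the production term P_ij is exact, while the pressure-strain redistribution phi_ij, the dissipation tensor epsilon_ij, and the diffusion term D_ij must be modeled; resolving the individual stresses is what lets such models capture the swirl and recirculation features that eddy-viscosity models tend to miss.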

Relevance:

60.00%

Publisher:

Abstract:

Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. It is generally accepted that hydrologic similarity results from similar physiographic characteristics, and thus these characteristics can be used to delineate regions and classify ungauged sites. However, as currently practiced, the delineation is highly subjective and dependent on the similarity measures and classification techniques employed. A standardized procedure for the delineation of hydrologically homogeneous regions is presented herein. Key aspects are a new statistical metric to identify physically discordant sites and the identification of an appropriate set of physically based measures of extreme hydrological similarity. A combination of multivariate statistical techniques applied to multiple flood statistics and basin characteristics for gauging stations in the Southeastern U.S. revealed that basin slope, elevation, and soil drainage largely determine the extreme hydrological behavior of a watershed. Use of these characteristics as similarity measures in the standardized approach for region delineation yields regions that are more homogeneous and more efficient for quantile estimation at ungauged sites than those delineated using the alternative physically based procedures typically employed in practice. The proposed methods and key physical characteristics are also shown to be efficient for region delineation and quantile development in other areas composed of watersheds with statistically different physical composition. In addition, the use of aggregated values of key watershed characteristics was found to be sufficient for the regionalization of flood data; the added time and computational effort required to derive spatially distributed watershed variables does not increase the accuracy of quantile estimators for ungauged sites. This dissertation also presents a methodology by which flood quantile estimates in Haiti can be derived using relationships developed for data-rich regions of the U.S. As currently practiced, regional flood frequency techniques can only be applied within the predefined area used for model development. However, results presented herein demonstrate that the regional flood distribution can successfully be extrapolated to areas of similar physical composition located beyond the extent of that used for model development, provided that differences in precipitation are accounted for and the site in question can be appropriately classified within a delineated region.
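The abstract does not define its new discordancy metric; for orientation, here is a sketch of the classical Hosking-Wallis-style discordancy measure that such metrics typically extend, applied to vectors of site characteristics (the data below are hypothetical):

```python
import numpy as np

def discordancy(U):
    """Hosking-Wallis-style discordancy D_i for site vectors U (n_sites x k).

    D_i = (n/3) (u_i - u_bar)^T S^{-1} (u_i - u_bar),
    with S the sum-of-squares and cross-products matrix of the
    centered site vectors; a large D_i flags a discordant site.
    """
    n = U.shape[0]
    d = U - U.mean(axis=0)
    S = d.T @ d
    return np.array([n / 3.0 * v @ np.linalg.solve(S, v) for v in d])

# Hypothetical basin characteristics: slope, elevation (m), drainage index
U = np.array([[0.02, 150.0, 0.4], [0.03, 300.0, 0.5],
              [0.05, 120.0, 0.7], [0.30, 900.0, 0.9]])
print(discordancy(U))   # large D_i flags a site as discordant
```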

Relevance:

60.00%

Publisher:

Abstract:

Reducing the uncertainties related to blade dynamics by improving the quality of numerical simulations of the fluid-structure interaction process is key to a breakthrough in wind-turbine technology. A fundamental step in that direction is the implementation of aeroelastic models capable of capturing the complex features of innovative prototype blades, so they can be tested at realistic full-scale conditions with a reasonable computational cost. We make use of a code based on a combination of two advanced numerical models implemented on a parallel HPC supercomputer platform. The first is a model of the structural response of heterogeneous composite blades, based on a variation of the dimensional reduction technique proposed by Hodges and Yu. This technique reduces the geometrical complexity of the blade section into a stiffness matrix for an equivalent beam. The reduced 1-D strain energy is equivalent to the actual 3-D strain energy in an asymptotic sense, allowing accurate modeling of the blade structure as a 1-D finite-element problem and substantially reducing the computational effort required to model the structural dynamics at each time step. The second is a novel aerodynamic model based on an advanced implementation of BEM (Blade Element Momentum) theory, in which all velocities and forces are re-projected through orthogonal matrices into the instantaneous deformed configuration, so as to fully include the effects of large displacements and rotation of the airfoil sections in the computation of aerodynamic forces. This allows the aerodynamic model to account for the complex flexo-torsional deformation captured by the more sophisticated structural model mentioned above. In this thesis we have successfully developed a powerful computational tool for the aeroelastic analysis of wind-turbine blades. Owing to the features mentioned above, namely a full representation of the combined modes of deformation of the blade as a complex structural part and of their effects on the aerodynamic loads, it constitutes a substantial advance over the state-of-the-art aeroelastic models currently available, such as the FAST-Aerodyn suite. We also include the results of several experiments on the NREL-5MW blade, which is widely accepted today as a benchmark blade, together with some modifications intended to explore the capacity of the new code to capture features of blade-dynamic behavior that are normally overlooked by existing aeroelastic models.
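A minimal sketch of the classical blade-element/momentum fixed-point iteration that the aerodynamic model extends (no tip-loss or high-induction corrections; the airfoil polars and operating point below are hypothetical):

```python
import numpy as np

def bem_induction(r, chord, twist, B, U_inf, omega, cl, cd, n_iter=100):
    """Classical BEM iteration for one radial station of a rotor blade.

    Returns the axial (a) and tangential (ap) induction factors; cl and
    cd are airfoil polars as functions of the angle of attack (rad).
    """
    sigma = B * chord / (2 * np.pi * r)      # local solidity
    a, ap = 0.0, 0.0
    for _ in range(n_iter):
        phi = np.arctan2(U_inf * (1 - a), omega * r * (1 + ap))  # inflow angle
        alpha = phi - twist
        cn = cl(alpha) * np.cos(phi) + cd(alpha) * np.sin(phi)
        ct = cl(alpha) * np.sin(phi) - cd(alpha) * np.cos(phi)
        a = 1.0 / (4 * np.sin(phi) ** 2 / (sigma * cn) + 1)      # momentum balance
        ap = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
    return a, ap

# Hypothetical thin-airfoil polars and operating point
a, ap = bem_induction(r=30.0, chord=3.0, twist=np.radians(5), B=3,
                      U_inf=10.0, omega=1.0,
                      cl=lambda al: 2 * np.pi * al, cd=lambda al: 0.01)
```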

Relevance:

60.00%

Publisher:

Abstract:

This article addresses the issue of kriging-based optimization of stochastic simulators. Many of these simulators depend on factors that tune the level of precision of the response, the gain in accuracy coming at a price in computational time. The contribution of this work is two-fold: first, we propose a quantile-based criterion for the sequential design of experiments, in the fashion of the classical expected improvement criterion, which allows an elegant treatment of heterogeneous response precisions. Second, we present a procedure for the allocation of the computational time given to each measurement, allowing a better distribution of the computational effort and increased efficiency. Finally, the optimization method is applied to an original application in nuclear criticality safety. This article has supplementary material available online. The proposed criterion is available in the R package DiceOptim.
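For orientation, a minimal sketch of the classical expected-improvement criterion that the proposed quantile-based criterion parallels (the Gaussian-process predictive mean mu and standard deviation s at a candidate point are assumed to be given):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, s, f_min):
    """Classical EI for minimization at a point with GP prediction (mu, s).

    EI = (f_min - mu) * Phi(z) + s * phi(z),  with  z = (f_min - mu) / s.
    """
    s = np.maximum(s, 1e-12)            # guard against zero predictive variance
    z = (f_min - mu) / s
    return (f_min - mu) * norm.cdf(z) + s * norm.pdf(z)

# Candidate point predicted at 0.8 +/- 0.3 with best observed value 1.0:
print(expected_improvement(mu=0.8, s=0.3, f_min=1.0))
```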

Relevance:

60.00%

Publisher:

Abstract:

Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context, both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, which corresponds to the number of fences involved in the problem. Each fence consists of a set of observations, each belonging to a different object. The S ≥ 3 MTT problem is an NP-hard combinatorial optimization problem. There are two general ways to solve it. One is to seek the optimal solution, which can be achieved by applying a branch-and-bound algorithm; when using such algorithms the problem has to be greatly simplified to keep the computational cost at a reasonable level. The other option is to approximate the solution using meta-heuristic methods, which aim to explore the different possible combinations efficiently, so that a reasonable result can be obtained with a reasonable computational effort. To this end several population-based meta-heuristic methods are implemented and tested on simulated optical measurements. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
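The thesis tests population-based meta-heuristics; as a compact stand-in for that class, here is a simulated-annealing sketch of the S = 3 association problem, where `cost` is a placeholder for an orbit-determination residual over a candidate association (everything below is illustrative):

```python
import math
import random

def anneal_assignment(cost, n, T0=1.0, cooling=0.999, n_steps=20000):
    """Simulated annealing for a 3-fence association problem (sketch).

    cost(p, q) scores associating observation i of fence 0 with
    observation p[i] of fence 1 and q[i] of fence 2; lower is better.
    """
    p, q = list(range(n)), list(range(n))
    best = cur = cost(p, q)
    T = T0
    for _ in range(n_steps):
        r, s = random.sample(range(n), 2)
        perm = random.choice((p, q))
        perm[r], perm[s] = perm[s], perm[r]           # swap two associations
        new = cost(p, q)
        if new < cur or random.random() < math.exp((cur - new) / T):
            cur, best = new, min(best, new)           # accept the move
        else:
            perm[r], perm[s] = perm[s], perm[r]       # undo the swap
        T *= cooling
    return p, q, best

# Toy cost preferring matching indices (illustrative only)
toy = lambda p, q: sum((i - p[i]) ** 2 + (i - q[i]) ** 2 for i in range(len(p)))
print(anneal_assignment(toy, n=8)[2])   # approaches 0
```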

Relevance:

60.00%

Publisher:

Abstract:

Any image-processing object detection algorithm somehow tries to integrate the object light (Recognition Step) and applies statistical criteria to distinguish objects of interest from other objects or from pure background (Decision Step). There are various possibilities for how these two basic steps can be realized, as can be seen in the different detection methods proposed in the literature. An ideal detection algorithm should provide high recognition sensitivity with high decision accuracy and require a reasonable computational effort. In reality, a gain in sensitivity is usually only possible with a loss in decision accuracy and with a higher computational effort. So, automatic detection of faint streaks is still a challenge. This paper presents a detection algorithm using spatial filters simulating the geometrical form of possible streaks on a CCD image, realized by image convolution. The goal of this method is to generate a more or less perfect match between a streak and a filter by varying the length and orientation of the filters. The convolution answers are accepted or rejected according to an overall threshold given by the background statistics. This approach yields as a first result a huge number of accepted answers, due to filters partially covering streaks or remaining stars. To avoid this, a set of additional acceptance criteria has been included in the detection method. All criteria parameters are justified by background and streak statistics, and they affect the detection sensitivity only marginally. Tests on images containing simulated streaks and on real images containing satellite streaks show a very promising sensitivity, reliability and running speed for this detection method. Since all method parameters are based on statistics, the true-alarm as well as the false-alarm probability are well controllable. Moreover, the proposed method does not pose any extraordinary demands on the computer hardware or on the image acquisition process.
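A minimal sketch of the oriented matched-filter idea: build line-shaped kernels over a grid of lengths and orientations, convolve, and threshold against background statistics (the kernel construction and the 5-sigma threshold are illustrative choices, not the paper's exact acceptance criteria):

```python
import numpy as np
from scipy.ndimage import convolve

def streak_kernels(lengths, angles):
    """Line-shaped matched filters over a grid of lengths and orientations."""
    kernels = []
    for L in lengths:                      # odd lengths keep the line centered
        t = np.arange(L) - L // 2
        for theta in angles:
            k = np.zeros((L, L))
            x = np.clip(np.round(L // 2 + t * np.cos(theta)), 0, L - 1)
            y = np.clip(np.round(L // 2 + t * np.sin(theta)), 0, L - 1)
            k[y.astype(int), x.astype(int)] = 1.0
            kernels.append(k / k.sum())    # normalize the filter response
    return kernels

def detect_streaks(image, kernels, n_sigma=5.0):
    """Accept pixels whose best filter response exceeds a background threshold."""
    threshold = np.median(image) + n_sigma * image.std()
    responses = np.stack([convolve(image, k) for k in kernels])
    return responses.max(axis=0) > threshold

image = np.random.normal(100.0, 5.0, (256, 256))   # synthetic background
image[128, 100:140] += 40.0                        # faint horizontal streak
mask = detect_streaks(image, streak_kernels([21], np.linspace(0.0, np.pi, 8,
                                                              endpoint=False)))
```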

Relevance:

60.00%

Publisher:

Abstract:

Femtosecond Raman rotational coherence spectroscopy (RCS) detected by degenerate four-wave mixing is a background-free method that allows accurate gas-phase rotational constants of non-polar molecules to be determined. Raman RCS has so far mostly been applied to the regular coherence patterns of symmetric-top molecules, while its application to nonpolar asymmetric tops has been hampered by the large number of RCS transient types, the resulting variability of the RCS patterns, and the 10³–10⁴ times larger computational effort needed to simulate and fit rotational Raman RCS transients. We present the rotational Raman RCS spectra of the nonpolar asymmetric top 1,4-difluorobenzene (para-difluorobenzene, p-DFB), measured in a pulsed Ar supersonic jet and in a gas cell over delay times up to ~2.5 ns. p-DFB exhibits rotational Raman transitions with ΔJ = 0, 1, 2 and ΔK = 0, 2, leading to the observation of J-, K-, A-, and C-type transients, as well as a novel transient (S-type) that had not been characterized so far. The jet and gas-cell RCS measurements were fully analyzed and yield the ground-state (v = 0) rotational constants A₀ = 5637.68(20) MHz, B₀ = 1428.23(37) MHz, and C₀ = 1138.90(48) MHz (1σ uncertainties). Combining the A₀, B₀, and C₀ constants with coupled-cluster calculations including single, double, and perturbatively corrected triple excitations with large basis sets allows the semi-experimental equilibrium bond lengths rₑ(C₁–C₂) = 1.3849(4) Å, rₑ(C₂–C₃) = 1.3917(4) Å, rₑ(C–F) = 1.3422(3) Å, and rₑ(C₂–H₂) = 1.0791(5) Å to be determined.

Relevance:

60.00%

Publisher:

Abstract:

A new method is presented to generate reduced order models (ROMs) for fluid dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a Proper Orthogonal Decomposition (POD) basis, calculated from a limited number of snapshots obtained via Computational Fluid Dynamics (CFD). The POD-mode amplitudes are then calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes various ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (in cases without shock waves, reconstructed flow fields require less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for problems related to aerodynamics, the POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need for a too large number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors. The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backward-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust, in the sense that the computational effort is quite small compared to CFD and the results are reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable to practical engineering applications.
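A minimal sketch of how a POD basis is extracted from CFD snapshots via the SVD, the starting point of the method described above (the snapshot matrix here is random placeholder data):

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD modes of a snapshot matrix (n_dof x n_snapshots) via thin SVD.

    Columns of U are the POD modes, ordered by decreasing energy
    (singular values); a flow field is approximated as modes @ a, with
    the amplitudes a found, in the method above, by residual minimization.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)   # captured-energy fraction
    return U[:, :n_modes], energy[n_modes - 1]

# Placeholder snapshots: 1000 degrees of freedom, 20 CFD solutions
X = np.random.rand(1000, 20)
modes, captured = pod_basis(X, n_modes=5)
a = modes.T @ X[:, 0]            # amplitudes of snapshot 0 by projection
reconstruction = modes @ a
```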

Relevance:

60.00%

Publisher:

Abstract:

Typical streak computations present in the literature correspond to linear streaks or to small-amplitude nonlinear streaks computed using DNS or nonlinear PSE. We use the Reduced Navier-Stokes (RNS) equations to compute the streamwise evolution of fully nonlinear, high-amplitude streaks in a laminar flat-plate boundary layer. The RNS formulation provides Reynolds-number-independent solutions that are asymptotically exact in the limit $Re \gg 1$; it requires much less computational effort than DNS, and it does not have the consistency and convergence problems of the PSE. We present various streak computations to show that the flow configuration changes substantially when the amplitude of the streaks grows and nonlinear effects come into play. The transversal motion (in the wall-normal/streamwise plane) becomes more important and strongly distorts the streamwise velocity profiles, which end up being quite different from those of the linear case. We analyze in detail the resulting flow patterns for the nonlinearly saturated streaks and compare them with available experimental results.

Relevance:

60.00%

Publisher:

Abstract:

This Ph.D. Thesis deals with the use of nonlinear diffusion filters to obtain piecewise constant images as a preprocessing step for segmentation. An intrinsic formulation for the nonlinear diffusion equation is first proposed, providing the necessary design conditions on the diffusion filters. Within this theoretical framework, a new family of diffusivities is proposed; they are obtained from nonlinear diffusion techniques related to backward diffusion processes. The goal is to split the image into closed regions that are homogeneous in their grey levels and have no blurred contours. It is also proved that the proposed diffusivity function satisfies the conditions for a well-posed semi-discrete formulation. This shows that the commonly used semi-implicit scheme actually performs a forward nonlinear diffusion process, rather than backward diffusion, connecting with an edge-preserving process. Under these conditions, a stopping criterion for the diffusion process is proposed, so that piecewise constant images are obtained with a low computational effort. After the whole process is developed for the one-dimensional case, the theoretical results are extended to 2D and 3D images. For the 3D case, the numerical scheme for the nonlinear evolution problem with homogeneous Neumann boundary conditions is described in detail. Finally, the proposed filter is tested on real 2D and 3D images, and the results of the proposed diffusivity as a method to obtain piecewise constant images are illustrated. For 3D images, the preprocessing for liver segmentation is addressed using real images from Computerized Axial Tomography (CAT) scans; in this case, results on the estimation of the parameters of the proposed diffusivity function are obtained.
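For orientation, a minimal explicit 2D diffusion step with homogeneous Neumann boundaries, using the classical Perona-Malik diffusivity g(s) = 1/(1 + s²/λ²) as a stand-in for the diffusivity family proposed in the thesis (the time step and λ below are illustrative):

```python
import numpy as np

def perona_malik_step(u, dt=0.2, lam=0.1):
    """One explicit Perona-Malik diffusion step with Neumann boundaries.

    g(s) = 1 / (1 + s^2 / lam^2) is the classical edge-stopping
    diffusivity, used here as a stand-in for the thesis's family.
    """
    up = np.pad(u, 1, mode='edge')          # homogeneous Neumann boundaries
    dN = up[:-2, 1:-1] - u                  # one-sided neighbor differences
    dS = up[2:, 1:-1] - u
    dE = up[1:-1, 2:] - u
    dW = up[1:-1, :-2] - u
    g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)
    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)

u = np.random.rand(64, 64)
for _ in range(50):
    u = perona_malik_step(u)                # drives u toward piecewise constant
```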

Relevance:

60.00%

Publisher:

Abstract:

An efficient approach is presented to improve the local and global approximation and modelling capability of the Takagi-Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy. The main problem is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the use of the T-S method, because this type of membership function has been widely used during the last two decades in stability analysis and controller design, and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S method with optimized performance in approximating nonlinear functions. A simple approach with little computational effort, based on the well-known parameter-weighting method, is suggested for tuning the T-S parameters so as to improve the choice of the performance index and minimize it. A global fuzzy controller (FC) based on a Linear Quadratic Regulator (LQR) is proposed in order to show the effectiveness of the estimation method developed here in control applications. Illustrative examples of an inverted pendulum and the Van der Pol system are chosen to evaluate the robustness and remarkable performance of the proposed method, and the high accuracy obtained in approximating nonlinear and unstable systems both locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity and generality of the algorithm.
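A minimal sketch of how a first-order T-S fuzzy model aggregates overlapping local linear models into a global output (all rule parameters below are illustrative):

```python
import numpy as np

def ts_model(x, centers, widths, params):
    """Evaluate a first-order Takagi-Sugeno fuzzy model at scalar input x.

    Rule i: IF x is A_i THEN y_i = a_i * x + b_i, with Gaussian
    membership A_i; the output is the membership-weighted average.
    """
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)   # rule firing strengths
    y = params[:, 0] * x + params[:, 1]                # local linear models
    return np.sum(w * y) / np.sum(w)                   # weighted aggregation

centers = np.array([-1.0, 0.0, 1.0])
widths = np.array([0.5, 0.5, 0.5])                     # overlapping memberships
params = np.array([[1.0, 0.2], [0.0, 0.0], [1.0, -0.2]])  # rows: (a_i, b_i)
print(ts_model(0.3, centers, widths, params))
```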

Relevance:

60.00%

Publisher:

Abstract:

A unified solution framework is presented for one-, two- or three-dimensional complex non-symmetric eigenvalue problems, respectively governing linear modal instability of incompressible fluid flows in rectangular domains having two, one or no homogeneous spatial directions. The solution algorithm is based on subspace iteration, in which the spatial discretization matrix is formed, stored and inverted serially. Results delivered by spectral collocation based on the Chebyshev-Gauss-Lobatto (CGL) points and by a suite of high-order finite-difference methods, comprising the Dispersion-Relation-Preserving (DRP) and Padé finite-difference schemes previously employed for this type of work, as well as the Summation-by-parts (SBP) scheme and the new high-order finite-difference scheme of order q (FD-q), have been compared from the point of view of accuracy and efficiency in standard validation cases of temporal local and BiGlobal linear instability. The FD-q method has been found to significantly outperform all other finite-difference schemes in solving classic linear local, BiGlobal, and TriGlobal eigenvalue problems, as regards both memory and CPU time requirements. Results shown in the present study disprove the paradigm that spectral methods are superior to finite-difference methods in terms of computational cost at equal accuracy, the FD-q spatial discretization delivering a speedup of O(10⁴). Consequently, accurate solutions of the three-dimensional (TriGlobal) eigenvalue problems may be obtained on typical desktop computers with modest computational effort.
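A toy analogue of the matrix-forming workflow described above, assuming nothing from the paper: discretize a 1D model eigenvalue problem with second-order central differences, form the dense matrix, and solve it directly (the actual framework uses high-order FD-q/spectral operators and subspace iteration on far larger non-symmetric matrices):

```python
import numpy as np
from scipy.linalg import eigh

# Discretize u'' = lambda * u on (0, 1) with homogeneous Dirichlet BCs
# by second-order central differences; exact eigenvalues are -(k*pi)^2.
n = 200
h = 1.0 / (n + 1)
main = -2.0 * np.ones(n) / h**2
off = np.ones(n - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals = np.sort(eigh(A, eigvals_only=True))[::-1]   # least-negative first
exact = -(np.pi * np.arange(1, 6)) ** 2
print(vals[:5])    # approaches `exact` as n grows
print(exact)
```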