98 results for collocation
Abstract:
This paper addresses robust model-order reduction of a high-dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Starting from a nonlinear, distributed-parameter model of the process, validated against experimental data from an existing pilot-scale BNR activated sludge plant, we develop a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented and, based on an investigation of five state-of-the-art model-order reduction techniques, the original model is reduced to one with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
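The low-frequency accuracy the abstract reports is a known property of singular perturbation approximation (SPA): unlike plain truncation, residualizing the weak states matches the DC gain exactly. Below is a minimal sketch of SPA balanced truncation on a generic random stable LTI system; the 154-state plant model is not reproduced, and the dimensions and reduced order r are illustrative assumptions.

```python
# Minimal sketch: balanced truncation with singular perturbation
# approximation (residualization) for a stable LTI system
#   x' = A x + B u,  y = C x.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(0)
n, m, p, r = 10, 2, 2, 4                 # full order, inputs, outputs, reduced order
A = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # comfortably stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

# Controllability and observability Gramians: A P + P A^T + B B^T = 0, etc.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: Hankel singular values and balancing transform.
Lc = cholesky(P, lower=True)
Lo = cholesky(Q, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                # s = Hankel singular values
T = Lc @ Vt.T @ np.diag(s ** -0.5)
Ti = np.diag(s ** -0.5) @ U.T @ Lo.T
Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T

# Singular perturbation approximation: set the derivatives of the weakly
# coupled states to zero and eliminate them (residualization).
A11, A12 = Ab[:r, :r], Ab[:r, r:]
A21, A22 = Ab[r:, :r], Ab[r:, r:]
B1, B2 = Bb[:r], Bb[r:]
C1, C2 = Cb[:, :r], Cb[:, r:]
X = np.linalg.solve(A22, np.hstack([A21, B2]))
Ar = A11 - A12 @ X[:, :r]
Br = B1 - A12 @ X[:, r:]
Cr = C1 - C2 @ X[:, :r]
Dr = -C2 @ X[:, r:]

# SPA matches the DC gain of the full model to machine precision,
# which is why it behaves well at low frequencies.
G0_full = -C @ np.linalg.solve(A, B)
G0_red = Dr - Cr @ np.linalg.solve(Ar, Br)
print(np.allclose(G0_full, G0_red))      # True
```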
Abstract:
A new wavelet-based adaptive framework for solving population balance equations (PBEs) is proposed in this work. The technique is general, powerful and efficient and requires no prior assumptions about the characteristics of the process. Because number densities can vary steeply across the size range, a new strategy is developed to select the optimal order of resolution and the collocation points based on an interpolating wavelet transform (IWT). The proposed technique has been tested for size-independent agglomeration, agglomeration with a linear summation kernel and agglomeration with a nonlinear kernel. In all cases, the predicted and analytical particle size distributions (PSDs) are in excellent agreement. Further work in this framework will address the general population balance equations with nucleation, growth and agglomeration, as well as steady-state population balance equations. (C) 2002 Elsevier Science B.V. All rights reserved.
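The size-independent (constant-kernel) case mentioned above has closed-form Smoluchowski solutions, which is what makes comparison with analytical results possible. A minimal sketch of that kind of benchmark, checking only the total particle number rather than the full PSD; the wavelet-collocation solver is not reproduced, and beta0 and N0 are hypothetical values.

```python
# Minimal benchmark sketch for constant-kernel agglomeration: the total
# particle number N(t) obeys
#   dN/dt = -(beta0 / 2) * N^2,  with  N(t) = N0 / (1 + beta0 * N0 * t / 2).
# A PBE solver's zeroth moment can be validated against this target.
import numpy as np
from scipy.integrate import solve_ivp

beta0, N0 = 1.0, 1.0                     # hypothetical kernel constant, initial number

sol = solve_ivp(lambda t, N: -0.5 * beta0 * N**2, (0.0, 10.0), [N0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 10.0, 5)
numeric = sol.sol(t)[0]
analytic = N0 / (1.0 + 0.5 * beta0 * N0 * t)
print(np.max(np.abs(numeric - analytic)))   # ~1e-10: excellent agreement
```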
Abstract:
In the Sparse Point Representation (SPR) method the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of the wavelet reconstruction operators, which hold for SPR grids having appropriate structure. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken with the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
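A minimal sketch of the core ingredient, assuming the Deslauriers-Dubuc cubic interpolating subdivision scheme on a dyadic grid: interpolatory wavelet coefficients are computed as interpolation errors at the odd points, and thresholding them marks where an SPR-style grid must stay fine. Boundary handling, multiple levels and the security zone are omitted.

```python
# Minimal sketch: interpolatory wavelet details as interpolation errors
# of the cubic (Deslauriers-Dubuc) interpolating subdivision scheme.
import numpy as np

def detail_coefficients(f_fine):
    """Detail at each interior odd fine-grid point: the error of the
    cubic prediction from the four nearest even (coarse) points."""
    even = f_fine[::2]
    # Deslauriers-Dubuc cubic prediction weights: (-1, 9, 9, -1) / 16
    pred = (-even[:-3] + 9*even[1:-2] + 9*even[2:-1] - even[3:]) / 16.0
    return f_fine[3:-2:2] - pred

x = np.linspace(0.0, 1.0, 2**10 + 1)
f = np.tanh(50.0 * (x - 0.5))            # smooth except near x = 0.5

d = detail_coefficients(f)
keep = np.abs(d) > 1e-6                  # significance threshold
print(keep.sum(), "of", d.size, "odd points retained")
# The retained points cluster near the steep layer at x = 0.5, so the
# resulting sparse grid is coarse in smooth regions, as described above.
```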
Abstract:
Recently, operational matrices have been adapted for solving several kinds of fractional differential equations (FDEs). The use of numerical techniques in conjunction with operational matrices of some orthogonal polynomials has produced highly accurate solutions of FDEs on finite and infinite intervals. This article discusses spectral techniques based on operational matrices of fractional derivatives and integrals for solving several kinds of linear and nonlinear FDEs. More precisely, we present the operational matrices of fractional derivatives and integrals for several polynomials on bounded domains, such as the Legendre, Chebyshev, Jacobi and Bernstein polynomials, and we use them with different spectral techniques for solving the aforementioned equations on bounded domains. The operational matrices of fractional derivatives and integrals are also presented for orthogonal Laguerre and modified generalized Laguerre polynomials, and their use with numerical techniques for solving FDEs on a semi-infinite interval is discussed. Several examples are presented to illustrate the numerical and theoretical properties of various spectral techniques for solving FDEs on finite and semi-infinite intervals.
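To make the collocation idea concrete, here is a minimal sketch for a single linear Caputo FDE, assuming a plain monomial basis and a manufactured solution rather than the orthogonal-polynomial operational matrices the article surveys.

```python
# Minimal sketch: collocation for the linear Caputo FDE
#   D^nu y(t) + y(t) = f(t),  0 < nu < 1,  y(0) = 0,
# with manufactured solution y(t) = t^2, so that
#   f(t) = Gamma(3)/Gamma(3 - nu) * t^(2 - nu) + t^2.
# Since t^2 lies in the basis span, the coefficients come out exactly.
import numpy as np
from scipy.special import gamma

nu = 0.5
K = 4                                    # basis: 1, t, t^2, t^3

def caputo_monomial(k, t):
    """Caputo derivative of order nu of t^k (zero for the constant)."""
    if k == 0:
        return np.zeros_like(t)
    return gamma(k + 1) / gamma(k + 1 - nu) * t ** (k - nu)

t = np.linspace(0.2, 1.0, K)             # collocation points, away from t = 0
A = np.column_stack([caputo_monomial(k, t) + t**k for k in range(K)])
f = gamma(3) / gamma(3 - nu) * t ** (2 - nu) + t**2

c = np.linalg.solve(A, f)
print(np.round(c, 12))                   # ~ [0, 0, 1, 0]: recovers y = t^2
# Here y(0) = 0 holds automatically because the exact solution is in the
# span; a general solver would enforce the initial condition explicitly.
```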
Abstract:
The aim of this study is to analyze how some of the general monolingual dictionaries of Spanish published in the last ten years have treated the phenomenon of lexical collocations, that is, those combinations of words which, from the point of view of the norm, display certain combinatorial restrictions, essentially of a semantic nature, imposed by usage (Corpas: 1996). The dictionaries analyzed were: the "Diccionario Salamanca de la lengua española", edited by Juan Gutiérrez (1996); the "Diccionario del español actual" by Manuel Seco, Olimpia Andrés and Gabino Ramos (1999); the twenty-second edition of the dictionary of the RAE (2001); and the "Gran diccionario de uso del español actual. Basado en el corpus Cumbre", edited by Aquilino Sánchez (2001). Our study is based on a corpus of 52 lexical collocations compiled from an analysis of the subentries under the letter "b" in each of the selected dictionaries. We then examined the entries corresponding to each of the elements that make up the collocation (base and collocate) in order to observe whether the dictionaries record these same combinations in other parts of the lexicographical article, such as the definitions or the examples. In analyzing the lexicographical information we focused on four aspects: a) the information contained in the preliminary pages of each work; b) the placement of the collocations within the lexicographical article; c) the assignment of the collocation to a particular article; and d) the grammatical marking.
Abstract:
The topic of this study is the language of the educational policies of the British Labour Party in its General Election manifestos between 1983 and 2005. The period studied was one of significant changes in world politics and in British politics, especially for the Labour Party, and the emergence of educational policy as a vote-winner in the manifestos of the nineties is noteworthy. The aim of the thesis is two-fold: to look at the structure of the political manifesto as an example of genre writing and to analyze the content using the approach of critical discourse analysis. Furthermore, the aim of this study is not to pinpoint policy positions but to examine the image that the Labour Party creates of itself through these manifestos. The content was analyzed by close reading, and the methodology for the analysis was developed from those findings. This study uses methodological triangulation, meaning that the material is analyzed from several methodological aspects: lexical features (collocation, coordination, euphemisms, metaphors and naming), grammatical features (thematic roles, tense, aspect, voice and modal auxiliaries) and rhetoric (Burke, Toulmin and Perelman). From the analysis of the content a generic description is built. The lexical, grammatical and rhetorical features reveal a clear change in the language of the Labour Party. This change is foreshadowed already in the 1992 manifesto but culminates in the 1997 manifesto, which would lead Labour to a landslide victory in the General Election. Over this period Labour moved away from its old commitments and into the new sphere of "something for everybody"; the spread of promotional language and market-inspired vocabulary into manifesto writing is clear. Metaphor appears to be the main tool for creating the image of the party represented through the manifestos. A limited generic description can be constructed from the findings on the content and structure of the manifestos: in particular, the use of the exclusive "we", the lack of certain anatomical parts of argument structure, and the use of the future tense and the present progressive aspect can shed light on the genre of manifesto writing. While this study is only a beginning, it shows that combining lexical, grammatical and rhetorical analysis in the study of manifestos is a promising approach.
Abstract:
This article discusses three possible ways to derive time-domain boundary integral representations for elastodynamics. The discussion points out difficulties that may arise when these formulations are used in practical applications, offers recommendations for selecting a convenient integral representation for elastodynamic problems, and opens the possibility of deriving simplified schemes. The proper way to take into account initial conditions applied to the body is another interesting topic shown; it illustrates the main differences between the discussed boundary integral representations, their singularities and possible numerical problems. The correct way to use collocation points outside the analyzed domain is carefully described. Some applications are shown at the end of the paper in order to demonstrate the capabilities of the technique when properly used.
Abstract:
"Thesis presented to the Faculté des études supérieures in fulfilment of the requirements for the degree of Master's in business law (maîtrise en droit des affaires)"
Abstract:
This thesis presents an evaluation of the different methods used in lexicography to identify lexical links in dictionaries that record collocations. We compared the content of entries from the DiCo, a dictionary of semantic derivatives and collocations created according to the principles of explanatory and combinatorial lexicology, with lists of cooccurrents generated automatically from the Le Monde 2002 corpus. Our objective is to propose methodological improvements for the creation of dictionary entries of the DiCo type, that is, dictionaries with a qualitative approach, in which a collocation is defined as a recurrent and arbitrary association between two lexical items, and in which the main methodological tools are the linguistic competence of the lexicographers and manual consultation of text corpora. Consulting lists of cooccurrents is a practice usually associated with a quantitative lexicographic approach, which defines a collocation as an association between two lexical items that is more frequent in a corpus than would be expected if the two items were distributed randomly. We measure to what extent the tools traditionally used in a quantitative approach can be useful for the creation of qualitative lexicographic entries, and how their use can be integrated into the current methodology for creating such entries.
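The quantitative notion of collocation described here (cooccurrence more frequent than chance) is typically scored with association measures such as pointwise mutual information. A minimal sketch on a toy English corpus; the Le Monde 2002 data and the DiCo tooling are not reproduced.

```python
# Minimal sketch: ranking cooccurrent pairs by pointwise mutual
# information (PMI), one standard way to score "more frequent than
# expected under random distribution".
from collections import Counter
from math import log2

corpus = ("the committee made a decision and the board made a choice "
          "the committee reached a decision").split()

N = len(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))      # adjacent cooccurrence

def pmi(w1, w2):
    """PMI = log2( P(w1 w2) / (P(w1) P(w2)) ); positive values
    indicate the pair occurs more often than chance predicts."""
    p12 = bigrams[(w1, w2)] / (N - 1)
    p1, p2 = unigrams[w1] / N, unigrams[w2] / N
    return log2(p12 / (p1 * p2))

for pair in sorted(bigrams, key=lambda b: -pmi(*b)):
    print(pair, round(pmi(*pair), 2))
```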
Abstract:
This research aims to describe 1) the lexical errors made in written production by francophone students in the third year of secondary school (3e secondaire) and 2) French teachers' relationship to lexical error (their conception of lexical error, their practices for assessing vocabulary in written production, and their modes of feedback on lexical errors). The first part of the research consists of an error analysis at three levels: 1) a linguistic description of the errors using a typology, 2) an evaluation of the severity of the errors and 3) an explanation of their possible sources. The corpus analyzed consists of 300 texts written in French class by students in 3e secondaire. The analysis revealed 1144 lexical errors. The most frequent are semantic problems (30%), errors related to the morphosyntactic properties of lexical units (21%) and the use of colloquial terms (17%). This distribution shows that half of the lexical errors are attributable to a lack of knowledge of properties of words other than meaning and form. The evaluation of error severity rests on three criteria: linguistic acceptability according to dictionaries, impact on comprehension, and degree of integration into usage. Problems related to language register are generally considered the least serious, and semantic errors account for nearly all of the serious errors. The third axis of analysis concerns the sources of the errors and brings out three main ones: the influence of spoken language, semantic proximity, and formal resemblance between the word used and the intended word. The second part of the thesis concerns French teachers' relationship to lexical error and rests on the analysis of 224 corrected compositions and a series of eight interviews with teachers of 3e secondaire. When correcting, teachers mostly flag spelling errors and errors involving the morphosyntactic properties of words (gender, invariability, government), which they class among grammar errors. More purely lexical errors, that is, semantic errors, the use of colloquial terms and collocation errors, are rarely flagged, and teachers' annotations concerning these types of errors are vague and unsystematic, giving students few leads for correction. The assessment of vocabulary in written production is always a matter of qualitative appreciation, based on the teachers' general impression rather than on precise criteria, the only clear indicator being repetition. Teachers' explanations of lexical errors rely heavily on intuition, which points to certain gaps in their training with respect to vocabulary. Teachers admit that they teach very little vocabulary at the secondary level, citing a lack of time and of adequate tools. Vocabulary teaching is always subordinated to writing or reading tasks and aims at the acquisition of specific words rather than the development of genuine lexical competence.
Abstract:
This research constitutes a first step in the development of a dictionary of collocations of the transdisciplinary scientific lexicon (lexique scientifique transdisciplinaire, LST), designed to help students and researchers write scientific and academic texts, whatever their field of study. It led to the design of two original models of dictionary articles giving access to the collocations of nominal and verbal terms characteristic of the LST. The article models are then applied to the description of a sample of nominal terms: analyse, caractéristique, figure, hypothèse, rapport and résultat; and of verbal terms: décrire and étudier. The articles designed in this thesis offer convenient access to LST collocations in writing situations. They have the advantage of proposing a coherent syntactic and semantic organization of this lexicon. In addition, they present LST terms in varied contexts, which can contribute to the development of lexical competence.
Abstract:
We consider a first-order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R^3. Using energy estimates we prove optimal convergence properties in the Sobolev spaces H^m(G) (m = 0, 1, 2), uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. The resulting Stokes resolvent boundary value problems are solved using a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter are determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations a boundary element method of collocation type is used. Simulations of a model problem are carried out and illustrate the efficiency of the method.
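The structure of the method, one resolvent solve per implicit Euler step, can be sketched on a generic stiff linear system. This is a stand-in illustration of the first-order convergence in time, under assumed random test data; it is not the hydrodynamical-potential solver of the paper.

```python
# Minimal sketch: first-order implicit (backward) Euler time stepping,
#   (I - dt*A) u^{n+1} = u^n   (homogeneous case),
# on a generic stable linear system u' = A u. Each step solves a
# resolvent-type system, mirroring the Stokes resolvent problems above.
import numpy as np
from scipy.linalg import expm, lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n)) - 6.0 * np.eye(n)   # stable test operator
u0 = rng.standard_normal(n)
T = 1.0
u_exact = expm(T * A) @ u0               # reference solution at t = T

for steps in (50, 100, 200, 400):
    dt = T / steps
    lu = lu_factor(np.eye(n) - dt * A)   # factor the resolvent once per dt
    u = u0.copy()
    for _ in range(steps):
        u = lu_solve(lu, u)
    print(steps, np.linalg.norm(u - u_exact))   # error halves as dt halves: O(dt)
```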
Abstract:
Course notes for the Numerical Methods course (joint MATH3018 and MATH6111). Originally by Giampaolo d'Alessandro, modified by Ian Hawke. These contain only minimal examples and are distributed as is; examples are given in the lectures.
Abstract:
In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem -ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√δt and δt is the time step. The corresponding layer potentials depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
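The peaking of the kernels can be seen directly in the fundamental solution: in 2D, for instance, it is K0(αr)/2π, which decays like e^(-αr). A minimal numerical illustration of that concentration; the proposed collocation method itself is not shown.

```python
# Minimal illustration of the kernel peaking described above: the 2D
# fundamental solution of -Delta U + alpha^2 U = 0 is K0(alpha*r)/(2*pi),
# which becomes sharply concentrated at r = 0 as alpha grows.
import numpy as np
from scipy.special import k0   # modified Bessel function of the second kind

r = np.array([0.01, 0.1, 0.5, 1.0])
for alpha in (1.0, 10.0, 100.0):
    kernel = k0(alpha * r) / (2.0 * np.pi)
    print(f"alpha = {alpha:6.1f}:", np.array2string(kernel, precision=3))
# For alpha = 100 the kernel is already negligible at r = 0.1, so a
# standard quadrature/collocation grid no longer resolves it.
```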
Abstract:
This paper is a tutorial introduction to pseudospectral optimal control. With pseudospectral methods, a function is approximated as a linear combination of smooth basis functions, which are often chosen to be Legendre or Chebyshev polynomials. Collocation of the differential-algebraic equations is performed at orthogonal collocation points, which are selected to yield interpolation of high accuracy. Pseudospectral methods directly discretize the original optimal control problem to recast it into a nonlinear programming format, and a numerical optimizer is then employed to find approximate local optimal solutions. The paper also briefly describes the functionality and implementation of PSOPT, an open-source software package written in C++ that employs pseudospectral discretization methods to solve multi-phase optimal control problems. The software implements the Legendre and Chebyshev pseudospectral methods, and it has useful features such as automatic differentiation, sparsity detection, and automatic scaling. The use of pseudospectral methods is illustrated with two problems taken from the literature on computational optimal control.
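The accuracy referred to above can be illustrated with the standard Chebyshev differentiation matrix on Chebyshev-Gauss-Lobatto points, following Trefethen's classical construction; this is a generic sketch, not PSOPT's implementation.

```python
# Minimal sketch: differentiation matrix on Chebyshev-Gauss-Lobatto
# points. A smooth function sampled at these collocation points is
# differentiated with spectral accuracy, which is what lets
# pseudospectral methods enforce dynamics x' = f(x, u) at few nodes.
import numpy as np

def cheb(N):
    """Chebyshev points x_j = cos(j*pi/N) and differentiation matrix D."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via negative row sums
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # d/dx e^x = e^x
print(err)   # ~1e-10 with only 17 nodes: spectral accuracy
```

In a pseudospectral transcription, the same matrix D enforces the dynamics as algebraic constraints D @ X = f(X, U) at the collocation points, turning the optimal control problem into the nonlinear program described in the abstract.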