926 results for special linear system
Abstract:
A fully numerical two-dimensional solution of the Schrödinger equation is presented for the linear polyatomic molecule H₃²⁺ using the finite element method (FEM). The Coulomb singularities at the nuclei are handled by using both a condensed element distribution around the singularities and special elements. The accuracy of the results for the 1σ and 2σ orbitals is of the order of 10^-7 a.u.
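The paper's 2D FEM calculation is beyond a short sketch, but its two key ingredients — a mesh condensed toward a Coulomb singularity and a generalized eigenproblem assembled from element matrices — can be illustrated on the 1D radial hydrogen problem. A minimal sketch (mesh grading, element counts and box size are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eigh

# Radial hydrogen problem in atomic units: -1/2 u'' - u/r = E u, u(0)=u(R)=0.
# Exact ground-state energy: -0.5 a.u.
n, R = 400, 40.0
r = R * (np.arange(n + 1) / n) ** 2        # mesh condensed toward the r=0 singularity
h = np.diff(r)

K = np.zeros((n - 1, n - 1))               # stiffness + potential matrix
M = np.zeros((n - 1, n - 1))               # mass matrix
for e in range(n):                         # element [r_e, r_{e+1}], linear shape functions
    he = h[e]
    ke = 0.5 / he * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = he / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    ve = np.zeros((2, 2))                  # 2-point Gauss quadrature of -phi_i phi_j / r
    for xi in (-1.0 / 3**0.5, 1.0 / 3**0.5):
        rg = r[e] + he * (xi + 1.0) / 2.0
        phi = np.array([(1.0 - xi) / 2.0, (1.0 + xi) / 2.0])
        ve += (he / 2.0) * np.outer(phi, phi) * (-1.0 / rg)
    for a in range(2):
        for b in range(2):
            i, j = e - 1 + a, e - 1 + b    # interior-node numbering (Dirichlet ends dropped)
            if 0 <= i < n - 1 and 0 <= j < n - 1:
                K[i, j] += ke[a, b] + ve[a, b]
                M[i, j] += me[a, b]

E = eigh(K, M, eigvals_only=True)
print(E[0])                                # lowest eigenvalue, close to -0.5
```

The quadratic mesh grading plays the role of the "condensed element distribution": element sizes shrink toward r = 0, where the potential is singular.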
Abstract:
Context awareness, dynamic reconfiguration at runtime and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging the heterogeneity issues and, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models in order to be able to consider the uncertainty of sensor information when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first case study focuses on heterogeneous autonomous robots that have to spontaneously form a cooperative team in order to achieve a common goal.
The second case study is concerned with an approach to user activity recognition, which serves as a baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
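Dempster's rule of combination, the fusion step the dissertation relies on, can be stated compactly: multiply masses of intersecting focal sets and renormalize by the non-conflicting mass. A minimal sketch with a hypothetical two-state activity frame (the sensors and numbers are illustrative, not taken from the dissertation):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (dict: frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + ma * mb
            else:
                conflict += ma * mb           # mass that lands on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 / (1.0 - conflict)                # renormalize over non-empty sets
    return {A: k * v for A, v in combined.items()}

# Hypothetical frame of discernment; mass on Theta models explicit ignorance,
# which pure probability theory cannot express directly.
Theta = frozenset({"walking", "sitting"})
s1 = {frozenset({"walking"}): 0.6, Theta: 0.4}
s2 = {frozenset({"walking"}): 0.5, frozenset({"sitting"}): 0.2, Theta: 0.3}
fused = dempster_combine(s1, s2)
```

Note how sensor 1 assigns 0.4 to the whole frame rather than splitting it between the singletons: that is the "partial ignorance" the abstract refers to.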
Abstract:
Let $N/K$ be a Galois extension of number fields with Galois group $G$ such that $N$ has a place with full decomposition group. This thesis is concerned with algorithms that (numerically) verify the equivariant Tamagawa number conjecture of Burns and Flach for the pair $(h^0(Spec(N)), \mathbb{Z}[G])$ for the given example $N/K$. Roughly speaking, in this special case the equivariant Tamagawa number conjecture (ETNC for short) relates values of Artin $L$-series at the absolutely irreducible characters of $G$ to an Euler characteristic, which in this case can be constructed with the help of a so-called Tate sequence. Under the assumptions 1. there is a place $v$ of $N$ with full decomposition group, 2. every irreducible character $\chi$ of $G$ satisfies one of the following conditions: 2a) $\chi$ is abelian, 2b) $\chi(G) \subset \mathbb{Q}$ and $\chi$ is an integral linear combination of characters induced from trivial characters; an algorithm is developed that proves ETNC completely for any such example $N/\mathbb{Q}$. Assumption 1 makes it possible to implement an idea of Chinburg ([Chi89]) for the algorithmic computation of Tate sequences. Among other things, this also required computing local fundamental classes. For the at most tamely ramified case we developed an algorithm for this purpose, likewise based on ideas of Chinburg ([Chi85]), which in turn go back to work of Serre [Ser]. For extensions that are not tamely ramified we use the algorithm developed by Debeerst ([Deb11]), which is also based on Serre's work. Assumption 2 is needed to compute quotients of $L$-values and regulators exactly. This is possible because in the case of abelian characters we can draw on the theory of cyclotomic units, and in case (b) on the analytic class number formula for intermediate fields. Without assumption 2, the algorithms still provide, for every example $N/K$, a numerical verification up to the working precision. We implemented the algorithm for numerical verification for $A_4$-extensions over $\mathbb{Q}$ in the computer algebra system MAGMA and numerically verified the equivariant Tamagawa number conjecture for 27 extensions.
Abstract:
Since 1999, when the Chinese government adopted its policy of expansion in higher education, enrollment and graduate numbers have been increasing at an unprecedented speed. Accustomed to a system in which university graduates were placed into jobs, many students are not trained in "selling themselves", which exacerbates the situation and contributes to a skyrocketing unemployment rate among new graduates. The emphasis on career services has come with increasing employment pressure among university graduates in recent years. The 1998 "Higher Education Act" made it a legislative requirement. Thereafter, the Ministry of Education issued a series of documents to promote the development of career services. All higher education institutions are required to set up dedicated career service centers and to maintain a ratio of 1:500 between career staff and the total number of students. Related career management courses, especially career planning classes, are required to be included explicitly as specific modules in the teaching plan, with a requirement of no less than 38 sessions in one semester at all universities. Developing career services in higher education has thus become a hot issue. One of the more notable trends in higher education in recent years has been the transformation of university career service centers from mere coordinators of on-campus placement into full-service centers for international career development. The traditional core of career services in higher education had been built around guidance, information and placements (Watts, 1997). This core is still in place, but the role of higher education career services has changed considerably in recent years and the nature of each part is being transformed (Watts, 1997). Most services are undertaking a range of additional activities, and career guidance is emphasized much more than before.
Career management courses, especially career planning classes, are given special focus in developing career services in the Chinese case. This links career services clearly and directly with the course provision function. In China, most career service centers are in a period of transformation from "management-oriented" to "service-oriented" organizations. Besides guidance services, information services and placement activities, there is a need to blend these together with the new, additional teaching function, following the general trend as regulated by the government. The role of career services has been expanding, and this has brought more challenges to its development in Chinese higher education. Chinese universities are still in a period of exploration and establishment in developing their own career services. In the face of the new situation, it is important and meaningful to explore and establish a comprehensive career services system to address student needs in the universities. A key part of developing this system is the introduction of career courses that deliver related career management skills to students. So there is a need to restructure the career service sectors within Chinese universities in general. The career service centers will operate as the hub of a hub-and-spoke model system, providing support and information to staff located in individual teaching departments who are responsible for the delivery of career education, information, advice and guidance. The career service centers will also provide training and career planning classes. The purpose of establishing a comprehensive career services system is to provide a strong base for student career development. With the assistance of effective career services, students can prepare themselves well in psychology, ideology and ability before employment.
To conclude, according to the different characteristics and needs of students, appropriate services and guidance will be provided at different stages and in different ways. In other words, related career services and career guidance activities would start for newly enrolled freshmen and continue throughout their whole time at university. For the operation of a comprehensive services system, there is a need for strong support from the government in the form of macro-control and policy guarantees, but also a need for close cooperation with the academic administration and for faculties to be actively involved in career planning and employment programs. As an integral function within the universities, career services must develop and maintain productive relationships with relevant campus offices and key stakeholders both within the universities and externally.
Abstract:
A large class of special functions are solutions of systems of linear difference and differential equations with polynomial coefficients. For a given function, these equations considered as operator polynomials generate a left ideal in a noncommutative algebra called Ore algebra. This ideal with finitely many conditions characterizes the function uniquely so that Gröbner basis techniques can be applied. Many problems related to special functions which can be described by such ideals can be solved by performing elimination of appropriate noncommutative variables in these ideals. In this work, we mainly achieve the following: 1. We give an overview of the theoretical algebraic background as well as the algorithmic aspects of different methods using noncommutative Gröbner elimination techniques in Ore algebras in order to solve problems related to special functions. 2. We describe in detail algorithms which are based on Gröbner elimination techniques and perform the creative telescoping method for sums and integrals of special functions. 3. We investigate and compare these algorithms by illustrative examples which are performed by the computer algebra system Maple. This investigation has the objective to test how far noncommutative Gröbner elimination techniques may be efficiently applied to perform creative telescoping.
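The output of creative telescoping for a sum such as S(n) = Σ_k C(n,k) is a recurrence for S together with a rational certificate G(n,k) whose telescoping over k proves the recurrence. A small numerical check of one such certificate (this particular identity is a standard textbook example, not necessarily one computed in this work):

```python
from math import comb

# Zeilberger-style output for F(n, k) = C(n, k): the recurrence
#   F(n+1, k) - 2 F(n, k) = G(n, k+1) - G(n, k),  certificate G(n, k) = -C(n, k-1).
# Summing over k telescopes the right-hand side to 0, proving S(n+1) = 2 S(n).
def F(n, k):
    return comb(n, k)

def G(n, k):
    return -comb(n, k - 1) if k >= 1 else 0

for n in range(12):
    for k in range(n + 2):                 # beyond this range both sides vanish
        assert F(n + 1, k) - 2 * F(n, k) == G(n, k + 1) - G(n, k)
    assert sum(F(n + 1, k) for k in range(n + 2)) == 2 * sum(F(n, k) for k in range(n + 1))
```

Finding such a certificate automatically is exactly the elimination problem in the Ore algebra: one eliminates the shift in k from the annihilating ideal of F.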
Abstract:
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is only used once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model. Nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where the program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of using linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures. Linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
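The use-once discipline of linear names can be made concrete. The following toy (not Bawden's actual graph-reduction machinery, just the naming discipline it rests on) enforces single use, and shows a nonlinear, shareable name built out of linear parts by fanning out one fresh linear reference per client:

```python
class LinearRef:
    """A reference that must be used exactly once: the linear-naming discipline."""
    def __init__(self, value):
        self._value, self._used = value, False

    def take(self):
        if self._used:
            raise RuntimeError("linear name used twice")
        self._used = True
        value, self._value = self._value, None   # drop the pointer: no lingering alias
        return value

def fanout(value, n):
    """A nonlinear (shareable) name built from linear ones: each client gets
    its own fresh linear reference, so every individual name stays use-once."""
    return [LinearRef(value) for _ in range(n)]

r = LinearRef(42)
assert r.take() == 42        # first (and only) use succeeds; a second use raises
```

The cost of `fanout` is proportional to the number of clients, which is the point of the model: nonlinear naming is where the expense lives, and the translation makes that expense visible.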
Abstract:
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. 
This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, based on this technique we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
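The offline/online split for an affinely parametrized problem A(mu) = A0 + mu*A1 can be sketched in a few lines. This toy (hypothetical diagonal operators, no output functional or error estimator) illustrates only components (i) and (iii) of the method:

```python
import numpy as np

# Affine parameter dependence: A(mu) = A0 + mu * A1, solve A(mu) x = f.
rng = np.random.default_rng(0)
n = 200
A0 = np.diag(2.0 + np.arange(n, dtype=float))    # toy SPD operator pieces
A1 = np.diag(1.0 + rng.random(n))
f = rng.random(n)

def solve_full(mu):                              # "truth" solve: cost depends on n
    return np.linalg.solve(A0 + mu * A1, f)

# offline stage: snapshots at selected parameter points span the space W_N
samples = [0.1, 1.0, 5.0, 10.0]
W, _ = np.linalg.qr(np.column_stack([solve_full(mu) for mu in samples]))
A0r, A1r, fr = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f   # precomputed N x N pieces

# online stage: assemble and solve only an N x N system (N = 4 here),
# possible precisely because the mu-dependence is affine
def solve_rb(mu):
    return W @ np.linalg.solve(A0r + mu * A1r, fr)

mu = 3.0
err = np.linalg.norm(solve_rb(mu) - solve_full(mu)) / np.linalg.norm(solve_full(mu))
```

The online solve touches only the 4x4 projected matrices, so its cost is independent of n — the property that makes the many-query inverse-problem loop in the paper affordable.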
Abstract:
We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and thereby the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method.
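One classical instance of such a block preconditioner (the block-diagonal choice of Murphy, Golub and Wathen, not necessarily the one studied in this paper) makes the spectral clustering explicit: with P = diag(Θ, AΘ⁻¹Aᵀ), the preconditioned augmented matrix has only the eigenvalues 1 and (1 ± √5)/2:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 12
A = rng.standard_normal((m, n))                  # full-row-rank constraint matrix
Theta = np.diag(1.0 + rng.random(n))             # SPD (1,1) block from the interior point
Z = np.zeros((m, m))
K = np.block([[Theta, A.T], [A, Z]])             # symmetric indefinite augmented matrix

S = A @ np.linalg.solve(Theta, A.T)              # Schur complement A Theta^{-1} A^T
P = np.block([[Theta, np.zeros((n, m))], [np.zeros((m, n)), S]])

lam = np.linalg.eigvals(np.linalg.solve(P, K))   # spectrum of P^{-1} K
# theory: exactly three distinct eigenvalues, so a Krylov method such as
# MINRES converges in at most three iterations with this (exact) preconditioner
targets = np.array([1.0, (1 + 5**0.5) / 2, (1 - 5**0.5) / 2])
spread = max(min(abs(l - t) for t in targets) for l in lam)
```

In practice the Schur complement is only approximated, which smears the three clusters but keeps the spectrum tight — exactly the "improved spectral properties" the abstract refers to.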
Abstract:
We make a comparative study of payment systems for the EU-15 countries over the 1996-2002 period. Special attention is paid to the introduction of the new European single currency. The overall trend in payments is a move from cash to noncash payment instruments, although electronic instruments are not yet widely used. We find a significant impact of the introduction of the new banknotes and coins on card use.
Abstract:
Lecture slides, handouts for tutorials, exam papers, and numerical examples for a third year course on Control System Design.
Abstract:
The tagging of proteins with ubiquitin, known as ubiquitination, serves different functions that include the regulation of several cellular processes, such as protein degradation by the proteasome, DNA repair, membrane-receptor-mediated signalling, and endocytosis, among others (1). Ubiquitin molecules can be removed from their substrates through the action of a large group of proteases called deubiquitinating enzymes (DUBs) (2). DUBs are essential for maintaining ubiquitin homeostasis and for regulating the ubiquitination state of different substrates. The large number and diversity of the DUBs described reflects both their specificity and their use in regulating a broad spectrum of substrates and cellular pathways. Although many DUBs have been studied in depth, the substrates and biological functions of most of them are currently unknown. In this work the functions of the DUBs USP19, USP4 and UCH-L1 were investigated. Using various molecular and cell biology techniques, it was found that: i) USP19 is regulated by the ubiquitin ligases SIAH1 and SIAH2; ii) USP19 is important for regulating HIF-1α, a key transcription factor in the cellular response to hypoxia; iii) USP4 interacts with the proteasome; iv) the mCherry-UCH-L1 chimera partially reproduces the phenotypes that our group has previously described using other constructs of the same enzyme; and v) UCH-L1 promotes the internalization of the bacterium Yersinia pseudotuberculosis.
Abstract:
One possible option for the management of municipal solid waste is energy recovery, that is, incineration with recovery of the energy released. It is very important, however, to control the incineration process properly in order to avoid, as far as possible, the release of pollutants into the atmosphere that could cause industrial pollution problems. Ensuring that both the incineration process and the gas treatment operate under optimal conditions presupposes a good knowledge of the dependencies among the process variables. Suitable methods are needed for measuring the most important variables, and the measured values must be processed with appropriate models that transform them into control quantities. A classical control model seems unpromising in this case because of the complexity of the processes, the lack of a quantitative description, and the need to perform the computations in real time. This can only be achieved with the help of modern data-processing techniques and computational methods, such as simulation techniques, mathematical models, knowledge-based systems and intelligent interfaces. [Ono, 1989] describes a control system based on fuzzy logic applied to the field of municipal waste incineration. At the FZK research centre in Karlsruhe, applications combining fuzzy logic with neural networks [Jaeschke, Keller, 1994] are being developed for the control of the TAMARA pilot waste-incineration plant. This thesis proposes the application of a knowledge-acquisition method, inspired by human behaviour, for the control of complex systems. When we face an unknown situation, at first we do not know how to act, except by extrapolating from previous experiences that may be useful.

By applying procedures of trial and error, reinforcement of hypotheses, and so on, we acquire and refine knowledge and build a mental model. We can design an analogous method, implementable in a computer system, using Artificial Intelligence techniques. Thus, in a complex process we often have a set of process data that, a priori, is not structured enough to be useful. Knowledge acquisition proceeds through a series of stages: - We make a first selection of the variables we want to know. - System state: we can begin by applying classification techniques (unsupervised learning) to group the data and obtain a representation of the state of the plant. A classification can be established, but normally almost all the data fall into a single class, corresponding to normal operation. That done, and in order to refine the knowledge, we use classical statistical methods to look for correlations between variables (principal component analysis) so as to simplify and shorten the list of variables. - Signal analysis: to analyse and classify the signals (for example the furnace temperature) it is possible to use methods better able to describe the non-linear behaviour of the system, such as neural networks. A further step consists in establishing causal relations between the variables, for which analytical models are helpful. - As the final result of the process, we move on to the design of the knowledge-based system. The main objective is to apply the method to the concrete case of controlling a municipal solid waste treatment plant with energy recovery.

First, chapter 2, Municipal solid waste, addresses the overall problem of waste management, giving an overview of the different existing alternatives and of the current national and international situation. The problems of waste incineration are analysed in more detail, paying particular attention to those characteristics of the waste that matter most for the combustion process. Chapter 3, Process description, gives a general description of the incineration process and of the different elements of an incineration plant: from the reception and storage of the waste, through the different types of furnaces and the requirements of the codes of good combustion practice, to the combustion air system and the flue gas system. The different flue gas cleaning systems are also presented, and finally the ash and slag removal system. Chapter 4, The Girona municipal solid waste treatment plant, describes the main systems of the Girona incineration plant: the waste feed, the type of furnace, the energy recovery system, and the flue gas cleaning system. It also describes the control system, the operation, the plant's operating data, the instrumentation, and the variables of interest for the control of the combustion process. Chapter 5, Techniques used, provides an overview of knowledge-based systems and expert systems. The different techniques used are explained: neural networks, classification systems, qualitative models, and expert systems, illustrated with some application examples. With regard to knowledge-based systems, the conditions for their applicability and the forms of knowledge representation are analysed first.

The different forms of reasoning are then described: neural networks, expert systems and fuzzy logic, and a comparison between them is made. An application of neural networks to the analysis of temperature time series is presented. The analysis of operating data using statistical techniques and the use of classification techniques are also discussed. Another section is devoted to the different types of models, including a discussion of qualitative models. The computer-aided design system for supervision systems, CASSD, used in this thesis is described, together with the analysis tools for obtaining qualitative information about the behaviour of the process: Abstractors and ALCMEN. An example of applying these techniques to find the relations between the temperature and the operator's actions is included. Finally, the main characteristics of expert systems in general, and of the expert system CEES 2.0, which is also part of the CASSD system used here, are analysed. Chapter 6, Results, shows the results obtained by applying the different techniques: neural networks, classification, the development of the combustion process model, and rule generation. Within the data analysis section, a neural network is used to classify a temperature signal. The use of the LINNEO+ method for classifying the operating states of the plant is also described. In the section devoted to modelling, a combustion model is developed that serves as a basis for analysing the behaviour of the furnace under steady-state and dynamic conditions. A parameter, the flame surface, related to the extent of the fire on the grate, is defined. Using a linearized model, the dynamic response of the incineration process is analysed.

We then move on to the definition of qualitative relations between the variables, which are used in building a qualitative model. A new qualitative model is then developed, taking the analytical dynamic model as its basis. Finally, the development of the knowledge base of the expert system is addressed through rule generation. Chapter 7, Control system of an incineration plant, analyses the objectives of a control system for an incineration plant, its design and its implementation. The basic objectives of the combustion control system, its configuration, and its implementation in Matlab/Simulink using the different tools developed in the previous chapter are described. Lastly, to show how the different methods developed in this thesis can be applied, an expert system is built to keep the furnace temperature constant by acting on the waste feed. Finally, the Conclusions chapter presents the conclusions and results of this thesis.
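The final expert system, which holds the furnace temperature by acting on the waste feed, is at its core a set of rules mapping a temperature error to a feed adjustment. A purely illustrative rule sketch (toy setpoint, dead band and step sizes; not the thesis's rule base or plant data):

```python
def feed_adjustment(temp, setpoint=1000.0, band=25.0):
    """Relative change to the waste feed rate from simple symbolic rules."""
    error = setpoint - temp
    if error > band:       # furnace too cold: feed more waste
        return 0.10
    if error < -band:      # furnace too hot: feed less waste
        return -0.10
    return 0.0             # inside the dead band: hold the current feed

assert feed_adjustment(960.0) == 0.10
assert feed_adjustment(1040.0) == -0.10
assert feed_adjustment(1005.0) == 0.0
```

A fuzzy-logic version would replace the hard dead-band thresholds with overlapping membership functions, so the feed adjustment varies smoothly with the error instead of jumping between three discrete actions.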
Abstract:
The linear viscoelastic (LVE) spectrum is one of the primary fingerprints of polymer solutions and melts, carrying information about most relaxation processes in the system. Many single-chain theories and models start with predicting the LVE spectrum to validate their assumptions. However, until now, no reliable linear stress relaxation data were available from simulations of multichain systems. In this work, we propose a new efficient way to calculate a wide variety of correlation functions and mean-square displacements during simulations without significant additional CPU cost. Using this method, we calculate stress-stress autocorrelation functions for a simple bead-spring model of polymer melt for a wide range of chain lengths, densities, temperatures, and chain stiffnesses. The obtained stress-stress autocorrelation functions were compared with the single-chain slip-spring model in order to obtain entanglement-related parameters, such as the plateau modulus or the molecular weight between entanglements. Then, the dependence of the plateau modulus on the packing length is discussed. We have also identified three different contributions to the stress relaxation: bond-length relaxation, colloidal, and polymeric. Their dependence on the density and the temperature is demonstrated for short unentangled systems without inertia.
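An on-the-fly correlation scheme of the kind described typically resembles a multiple-tau correlator: each level keeps a short buffer of recent samples, accumulates products, and passes pair-averaged samples to the next, coarser level, so the cost per simulation step stays small while lags span many decades. A minimal sketch under that assumption (buffer size and level count are illustrative; this is not necessarily the authors' exact scheme):

```python
class MultipleTau:
    """On-the-fly correlator: a short buffer per level plus pair-averaged
    hand-off to the next (coarser) level, so cost per sample is O(p * levels)."""
    def __init__(self, p=16, levels=8):
        self.p, self.levels = p, levels
        self.buf = [[] for _ in range(levels)]
        self.acc = [{} for _ in range(levels)]   # lag -> [sum of products, count]
        self.carry = [None] * levels

    def push(self, x, level=0):
        if level >= self.levels:
            return
        buf = self.buf[level]
        buf.append(x)
        if len(buf) > self.p:
            buf.pop(0)
        for lag, old in enumerate(reversed(buf)):     # correlate x with recent history
            slot = self.acc[level].setdefault(lag, [0.0, 0])
            slot[0] += x * old
            slot[1] += 1
        if self.carry[level] is None:                 # coarse-grain by pair averaging
            self.carry[level] = x
        else:
            self.push(0.5 * (self.carry[level] + x), level + 1)
            self.carry[level] = None

    def result(self):
        out = []
        for lvl, acc in enumerate(self.acc):
            for lag, (s, cnt) in sorted(acc.items()):
                out.append((lag * 2 ** lvl, s / cnt))  # (physical lag, averaged product)
        return out

corr = MultipleTau()
for _ in range(1000):
    corr.push(1.0)          # a constant signal correlates to 1 at every lag
```

For a stress-stress autocorrelation one would push the instantaneous shear-stress value each timestep; the coarse-graining is what keeps the CPU cost negligible compared with the force evaluation.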
Abstract:
The decadal predictability of three-dimensional Atlantic Ocean anomalies is examined in a coupled global climate model (HadCM3) using a Linear Inverse Modelling (LIM) approach. It is found that the evolution of temperature and salinity in the Atlantic, and the strength of the meridional overturning circulation (MOC), can be effectively described by a linear dynamical system forced by white noise. The forecasts produced using this linear model are more skillful than other reference forecasts for several decades. Furthermore, significant non-normal amplification is found under several different norms. The regions from which this growth occurs are found to be fairly shallow and located in the far North Atlantic. Initially, anomalies in the Nordic Seas impact the MOC, and the anomalies then grow to fill the entire Atlantic basin, especially at depth, over one to three decades. It is found that the structure of the optimal initial condition for amplification is sensitive to the norm employed, but the initial growth seems to be dominated by MOC-related basin scale changes, irrespective of the choice of norm. The consistent identification of the far North Atlantic as the most sensitive region for small perturbations suggests that additional observations in this region would be optimal for constraining decadal climate predictions.
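The LIM step itself is compact: estimate the lag-tau propagator G(tau) = C(tau) C(0)⁻¹ from lagged covariances, and take its matrix logarithm to recover the linear dynamics operator. A sketch on synthetic data generated from a known stable operator (all matrices and sizes are illustrative, not HadCM3 fields):

```python
import numpy as np
from scipy.linalg import expm, logm

# Synthetic data: x(t+1) = G1 x(t) + white noise, with G1 = exp(L) for a known L.
rng = np.random.default_rng(2)
L = np.array([[-0.5, 0.3], [0.0, -0.2]])
G1 = expm(L)                       # true one-step propagator
T = 100_000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = G1 @ x[t] + rng.standard_normal(2)

# LIM fit: lag-0 and lag-1 covariances give the propagator and dynamics operator.
C0 = x.T @ x / T
C1 = x[1:].T @ x[:-1] / (T - 1)
G_est = C1 @ np.linalg.inv(C0)     # estimated propagator C(tau) C(0)^{-1}, tau = 1
L_est = logm(G_est)                # estimated dynamics operator
```

A forecast at lead k is then simply `np.linalg.matrix_power(G_est, k) @ x_now`; the non-normal amplification discussed in the abstract comes from the singular vectors of the propagator, which can grow transiently even though all eigenvalues of L_est are damped.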