933 results for Parametric Linear System
Abstract:
The method of approximate approximations, introduced by Maz'ya [1], can also be used for the numerical solution of boundary integral equations. In this case, the matrix of the resulting algebraic system for computing an approximate source density depends only on the positions of a finite number of boundary points and on the directions of the normal vectors at these points (Boundary Point Method). We investigate this approach for the Stokes problem in the whole space and for the Stokes boundary value problem in a bounded convex domain G ⊂ R^2; the latter consists of three steps. In a first step, the unknown potential density is replaced by a linear combination of exponentially decreasing basis functions concentrated near the boundary points. In a second step, integration over the boundary ∂G is replaced by integration over the tangents at the boundary points, so that even analytical expressions for the potential approximations can be obtained. In a third step, finally, the linear algebraic system is solved to determine an approximate density function and the resulting solution of the Stokes boundary value problem. Although not convergent, the method yields an efficient approximation of the form O(h^2) + ε, where ε can be chosen arbitrarily small.
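To make the three steps concrete, here is a minimal Python sketch of a boundary point method. It is not the Stokes solver of the abstract: the 2D Laplace single-layer kernel stands in for the Stokes kernel, the domain is the unit circle, and the diagonal entries use the tangent-segment integration of step two; all names and parameter choices are illustrative.

```python
import numpy as np

# Schematic boundary point method on the unit circle; the 2D Laplace
# single-layer kernel stands in for the Stokes kernel (simplification).
m = 64                                           # number of boundary points
t = 2 * np.pi * np.arange(m) / m
pts = np.stack([np.cos(t), np.sin(t)], axis=1)   # boundary points
h = 2 * np.pi / m                                # boundary mesh width

def kernel(x, y):
    return -np.log(np.linalg.norm(x - y)) / (2 * np.pi)

# Steps 1-2: collocation matrix. Off-diagonal entries use point values times
# the mesh width; the diagonal integrates the kernel over the local tangent
# segment of length h, which is available in closed form.
A = np.empty((m, m))
for i in range(m):
    for j in range(m):
        A[i, j] = kernel(pts[i], pts[j]) * h if i != j \
                  else (h / (2 * np.pi)) * (1 - np.log(h / 2))

g = pts[:, 0]                                    # Dirichlet data, e.g. g(x) = x_1

# Step 3: solve for the approximate density at the boundary points.
density = np.linalg.solve(A, g)

# Evaluate the single-layer potential at an interior point.
x0 = np.array([0.3, 0.1])
u = sum(kernel(x0, pts[j]) * h * density[j] for j in range(m))
print(u)   # close to the harmonic extension of g, here x_1 = 0.3
```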
Abstract:
Linear graph reduction is a simple computational model in which the cost of naming things is explicitly represented. The key idea is the notion of "linearity". A name is linear if it is used only once, so with linear naming you cannot create more than one outstanding reference to an entity. As a result, linear naming is cheap to support and easy to reason about. Programs can be translated into the linear graph reduction model such that linear names in the program are implemented directly as linear names in the model, while nonlinear names are supported by constructing them out of linear names. The translation thus exposes those places where a program uses names in expensive, nonlinear ways. Two applications demonstrate the utility of linear graph reduction. First, in the area of distributed computing, linear naming makes it easy to support cheap cross-network references and highly portable data structures; linear naming also facilitates demand-driven migration of tasks and data around the network without requiring explicit guidance from the programmer. Second, linear graph reduction reveals a new characterization of the phenomenon of state. Systems in which state appears are those which depend on certain global system properties. State is not a localizable phenomenon, which suggests that our usual object-oriented metaphor for state is flawed.
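The use-once discipline is easy to demonstrate outside the graph-reduction setting. Below is a toy Python sketch (not the dissertation's model): a linear name is a reference that is consumed on first use, and a nonlinear, shared name must be built explicitly out of linear ones, which makes the cost of sharing visible. The class and function names are invented for illustration.

```python
class LinearRef:
    """A use-once reference: reading it consumes it (toy linear naming)."""
    def __init__(self, value):
        self._value, self._live = value, True

    def take(self):
        if not self._live:
            raise RuntimeError("linear name used more than once")
        self._live = False
        return self._value

def dup(ref):
    """Nonlinear sharing built from linear parts: consume one linear name,
    hand back two fresh ones. Every extra reference is an explicit step."""
    v = ref.take()
    return LinearRef(v), LinearRef(v)

r = LinearRef([1, 2, 3])
a, b = dup(r)              # the only way to obtain two outstanding references
print(a.take(), b.take())  # each may be read exactly once
# r.take() would now raise: r was already consumed by dup.
```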
Abstract:
We study the preconditioning of the symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a matching block structure, improving the spectral properties of the preconditioned matrix and thereby the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the preconditioning method.
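As a rough illustration of the idea (not the paper's specific preconditioner), the sketch below builds the symmetric indefinite augmented matrix of a random LP Newton system and preconditions MINRES with the block-diagonal matrix diag(D, A D^{-1} A^T); with the exact Schur complement the preconditioned matrix has only three distinct eigenvalues, so the iteration converges in a few steps. Sizes and data are hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
m, n = 30, 80                          # hypothetical constraint/variable counts
A = rng.standard_normal((m, n))        # LP constraint matrix
d = rng.uniform(0.01, 100.0, n)        # barrier scaling; ill-conditioned near optimum

# Symmetric indefinite augmented (KKT) matrix of the Newton system.
K = np.block([[np.diag(d), A.T],
              [A, np.zeros((m, m))]])
rhs = rng.standard_normal(m + n)

# Block-diagonal preconditioner mirroring the 2x2 structure: diag(D, A D^{-1} A^T).
S_inv = np.linalg.inv(A @ (A / d).T)   # inverse of the Schur complement A D^{-1} A^T
P = LinearOperator((m + n, m + n),
                   matvec=lambda v: np.concatenate([v[:n] / d, S_inv @ v[n:]]))

sol, info = minres(K, rhs, M=P)
print('converged flag:', info, 'residual:', np.linalg.norm(K @ sol - rhs))
```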
Abstract:
Lecture slides, handouts for tutorials, exam papers, and numerical examples for a third year course on Control System Design.
Abstract:
We present the sensitivity analysis of a model of brand perception and marketing-investment adjustment developed at the Simulation Laboratory of the Universidad del Rosario. This undergraduate thesis consists of an introduction to sensitivity analysis and its complement, uncertainty analysis. Both analyses are then demonstrated on a simple application example of the model, applying exhaustively and rigorously the steps described in the first part. This is followed by a discussion of the problem of measuring magnitudes, which proves to be the most complex aspect of applying the model in a practical context, and finally conclusions are drawn from the results of the analyses.
Abstract:
We analyze the effect of a parametric reform of the fully funded pension regime in Colombia on the intensive margin of labor supply. We take advantage of a threshold defined by law to identify the causal effect using a regression discontinuity design. We find that a pension system that raises the retirement age and the minimum number of weeks workers must contribute to claim pension benefits causes an increase of around 2 hours in the number of weekly worked hours; this corresponds to 4% of the average number of weekly worked hours, or around 14% of a standard deviation of weekly worked hours. The effect is robust to different specifications, polynomial orders and sample sizes.
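In its simplest local-linear form, a regression discontinuity estimate of this kind reduces to comparing regression lines on either side of the legal threshold. The Python sketch below runs it on simulated data with a built-in 2-hour jump; the variables, bandwidths, and data-generating process are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data (hypothetical): running variable = distance to the legal
# threshold; outcome = weekly worked hours with a built-in 2-hour jump.
n = 5000
x = rng.uniform(-10, 10, n)               # running variable, cutoff at 0
treat = (x >= 0).astype(float)            # reform applies above the threshold
hours = 46 + 0.3 * x + 2.0 * treat + rng.normal(0, 4, n)

def rd_effect(x, y, treat, bandwidth):
    """Local linear RD: separate slopes on each side, jump at the cutoff."""
    keep = np.abs(x) <= bandwidth
    X = np.column_stack([np.ones(keep.sum()), treat[keep],
                         x[keep], treat[keep] * x[keep]])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[1]                        # coefficient on the treatment dummy

for bw in (2, 5, 10):                     # robustness across bandwidths
    print('bandwidth', bw, '-> effect', round(rd_effect(x, hours, treat, bw), 2))
```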
Abstract:
Systems such as buildings and vehicles are subject to vibrations that can cause malfunction, discomfort or collapse. To mitigate these vibrations, dampers are usually installed. These structures become adaptronic systems when the dampers are controllable. This thesis focuses on solving the vibration problem in buildings and vehicles using magnetorheological (MR) dampers. These are controllable dampers characterized by highly nonlinear dynamics. Moreover, the systems in which they are installed are characterized by parametric uncertainty, limited measurements and unknown disturbances, which forces the use of complex control techniques. In this thesis, Backstepping, QFT and mixed H2/H∞ control are used to solve the problem. The control laws are verified through simulation and experimentation.
Abstract:
The linear viscoelastic (LVE) spectrum is one of the primary fingerprints of polymer solutions and melts, carrying information about most relaxation processes in the system. Many single-chain theories and models start by predicting the LVE spectrum to validate their assumptions. However, until now, no reliable linear stress relaxation data were available from simulations of multichain systems. In this work, we propose a new efficient way to calculate a wide variety of correlation functions and mean-square displacements during simulations without significant additional CPU cost. Using this method, we calculate stress–stress autocorrelation functions for a simple bead–spring model of polymer melt for a wide range of chain lengths, densities, temperatures, and chain stiffnesses. The obtained stress–stress autocorrelation functions were compared with the single-chain slip–spring model in order to obtain entanglement-related parameters, such as the plateau modulus or the molecular weight between entanglements. Then, the dependence of the plateau modulus on the packing length is discussed. We have also identified three different contributions to the stress relaxation: bond-length relaxation, colloidal, and polymeric. Their dependence on density and temperature is demonstrated for short unentangled systems without inertia.
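Computing correlation functions on the fly at negligible cost is usually done with a multiple-tau (blocking) correlator: each level stores a short buffer of progressively coarser block averages, giving logarithmically spaced lag times. The Python sketch below is a generic version of that idea under assumptions of mine, not the specific algorithm of the paper; an AR(1) signal stands in for the shear stress.

```python
import numpy as np

class MultiTauCorrelator:
    """On-the-fly autocorrelation with logarithmically spaced lag times."""
    def __init__(self, levels=10, p=16, m=2):
        self.p, self.m = p, m
        self.buf = np.zeros((levels, p))        # ring buffer per level
        self.cnt = np.zeros(levels, dtype=int)  # samples seen per level
        self.acc = np.zeros((levels, p))        # correlation accumulators
        self.nrm = np.zeros((levels, p))        # sample counts per lag
        self.carry = np.zeros(levels)           # partial block sums

    def push(self, value, level=0):
        if level >= len(self.buf):
            return
        self.buf[level] = np.roll(self.buf[level], 1)
        self.buf[level, 0] = value              # newest sample at index 0
        self.cnt[level] += 1
        k = min(self.cnt[level], self.p)        # lags with valid history
        self.acc[level, :k] += value * self.buf[level, :k]
        self.nrm[level, :k] += 1
        self.carry[level] += value
        if self.cnt[level] % self.m == 0:       # coarse-grain upwards
            self.push(self.carry[level] / self.m, level + 1)
            self.carry[level] = 0.0

    def result(self, dt=1.0):
        lags, corr = [], []
        for lvl in range(len(self.buf)):
            start = 0 if lvl == 0 else self.p // self.m   # skip duplicated lags
            for j in range(start, self.p):
                if self.nrm[lvl, j] > 0:
                    lags.append(j * self.m ** lvl * dt)
                    corr.append(self.acc[lvl, j] / self.nrm[lvl, j])
        return np.array(lags), np.array(corr)

# Feed a synthetic "stress" signal with slow exponential memory (AR(1) process).
rng = np.random.default_rng(0)
c, s = MultiTauCorrelator(), 0.0
for _ in range(200_000):
    s = 0.999 * s + rng.standard_normal()
    c.push(s)
lags, acf = c.result()
print(lags[:5], (acf / acf[0])[:5])             # normalized C(t) at short lags
```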
Abstract:
The decadal predictability of three-dimensional Atlantic Ocean anomalies is examined in a coupled global climate model (HadCM3) using a Linear Inverse Modelling (LIM) approach. It is found that the evolution of temperature and salinity in the Atlantic, and the strength of the meridional overturning circulation (MOC), can be effectively described by a linear dynamical system forced by white noise. The forecasts produced using this linear model are more skillful than other reference forecasts for several decades. Furthermore, significant non-normal amplification is found under several different norms. The regions from which this growth occurs are found to be fairly shallow and located in the far North Atlantic. Initially, anomalies in the Nordic Seas impact the MOC, and the anomalies then grow to fill the entire Atlantic basin, especially at depth, over one to three decades. It is found that the structure of the optimal initial condition for amplification is sensitive to the norm employed, but the initial growth seems to be dominated by MOC-related basin scale changes, irrespective of the choice of norm. The consistent identification of the far North Atlantic as the most sensitive region for small perturbations suggests that additional observations in this region would be optimal for constraining decadal climate predictions.
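Linear Inverse Modelling in this sense fits a stochastically forced linear system dx/dt = Lx + ξ to data by estimating the lag-τ propagator G(τ) = C(τ)C(0)^{-1} from covariances, and non-normal amplification is read off from the singular values of G. The sketch below applies this recipe to a synthetic non-normal system in Python; the dimensions, the toy generator and the choice of the L2 norm are mine, not HadCM3 output.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
n, T = 5, 20000
L_true = -0.3 * np.eye(n) + 0.5 * np.diag(np.ones(n - 1), 1)  # stable, non-normal
G1 = expm(L_true)                        # true one-step propagator

x, X = np.zeros(n), np.empty((T, n))
for t in range(T):                       # x(t+1) = G1 x(t) + white noise
    x = G1 @ x + rng.standard_normal(n)
    X[t] = x

# LIM step: estimate the propagator from lag-0 and lag-1 covariances.
C0 = X[:-1].T @ X[:-1] / (T - 1)
C1 = X[1:].T @ X[:-1] / (T - 1)
G = C1 @ np.linalg.inv(C0)               # G(tau) = C(tau) C(0)^{-1}
L_est = logm(G)                          # generator of the fitted linear system
print('generator error:', np.linalg.norm(L_est.real - L_true))

# Non-normal growth: the optimal initial condition under the L2 norm is the
# leading right singular vector of the tau-step propagator.
tau = 10
u, s, vt = np.linalg.svd(np.linalg.matrix_power(G, tau))
print('max amplification over', tau, 'steps:', s[0])
print('optimal initial condition:', vt[0].round(2))
```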
Abstract:
An X-ray micro-tomography system has been designed that is dedicated to the low-dose imaging of radiation-sensitive living organisms, and it has been used to image the first few days of plant development immediately after germination. The system follows a third-generation X-ray micro-tomography geometry and consists of an X-ray tube, a two-dimensional X-ray detector and a mechanical sample manipulation stage. The X-ray source is a 50 kVp X-ray tube with a silver target, with a filter to centre the X-ray spectrum on 22 keV. A 100 mm diameter X-ray image intensifier (XRII) is used to collect the two-dimensional projection images. The rotation tomography table incorporates a linear translation mechanism to eliminate the ring artefacts commonly associated with third-generation tomography systems. Developing wheat seeds (Triticum aestivum) have been imaged using the system with a cubic voxel linear dimension of 100 μm, over a diameter of 25 mm, and the root lengths and volumes measured. The X-ray dose to the plants was also assessed and found to have no effect on plant root development. (C) 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
The length and time scales accessible to optical tweezers make them an ideal tool for the examination of colloidal systems. Embedded high-refractive-index tracer particles in an index-matched hard-sphere suspension provide 'handles' within the system to investigate the mechanical behaviour. Passive observations of the motion of a single probe particle give information about the linear response behaviour of the system, which can be linked to the macroscopic frequency-dependent viscous and elastic moduli of the suspension. Separate 'dragging' experiments allow observation of a sample's nonlinear response to an applied stress on a particle-by-particle basis. Optical force measurements have given new data about the dynamics of phase transitions and particle interactions; an example in this study is the transition from liquid-like to solid-like behaviour, and the emergence of a yield stress and other effects attributable to nearest-neighbour caging. The forces needed to break such cages and the frequency of these cage-breaking events are investigated in detail for systems close to the glass transition.
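The link from a probe's passive motion to the frequency-dependent moduli is usually made through a generalized Stokes-Einstein relation; Mason's local power-law form gives G'(ω) and G''(ω) directly from the mean-square displacement. The Python sketch below applies it to a toy MSD; the probe radius, temperature and MSD model are assumptions for illustration.

```python
import numpy as np
from scipy.special import gamma

kT = 4.11e-21            # thermal energy at ~298 K [J]
a = 0.5e-6               # tracer particle radius [m] (assumed)

# Toy mean-square displacement: short-time diffusion crossing over to a
# subdiffusive, nearly caged regime (m^2 versus lag time in s).
t = np.logspace(-4, 1, 200)
msd = 4e-15 * t / (1 + (t / 0.01) ** 0.7)

# Mason's local power-law approximation to the generalized Stokes-Einstein
# relation: |G*(1/t)| ~ kT / (pi * a * msd(t) * Gamma(1 + alpha(t))).
alpha = np.gradient(np.log(msd), np.log(t))        # local logarithmic slope
Gmag = kT / (np.pi * a * msd * gamma(1 + alpha))   # |G*| at omega = 1/t
Gp = Gmag * np.cos(np.pi * alpha / 2)              # elastic modulus G'
Gpp = Gmag * np.sin(np.pi * alpha / 2)             # viscous modulus G''
print("G', G'' at the longest lag time:", Gp[-1], Gpp[-1])
```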
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is perfectly justifiable. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, owing to dispersion issues, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
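Expressing a medium's optical response as a linear combination of elementary Debye, Lorentzian and Drude terms is straightforward to prototype. The Python sketch below evaluates such a combination over the THz band and derives the complex refractive index that would enter an insertion loss model; all oscillator parameters are invented for illustration.

```python
import numpy as np

w = 2 * np.pi * np.linspace(0.1e12, 5.0e12, 500)   # angular frequency, 0.1-5 THz

def debye(w, d_eps, tau):
    return d_eps / (1 + 1j * w * tau)              # Debye relaxation

def lorentz(w, d_eps, w0, gam):
    return d_eps * w0**2 / (w0**2 - w**2 - 1j * gam * w)   # resonant (phonon) line

def drude(w, wp, gam):
    return -wp**2 / (w**2 + 1j * gam * w)          # free-carrier response

# Hypothetical medium: one Debye pole, one Lorentzian line, free carriers.
eps = (2.5 + debye(w, 1.2, 0.5e-12)
           + lorentz(w, 0.8, 2 * np.pi * 1.5e12, 1e11)
           + drude(w, 2 * np.pi * 0.3e12, 5e11))

n_complex = np.sqrt(eps)    # complex refractive index entering the insertion loss
print(n_complex[0], n_complex[-1])
```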
Abstract:
The current energy requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiencies of utilization of MEI for 1) maintenance, 2) milk production (k(l)) and 3) growth (k(g)), and the efficiency of utilization of body stores for milk production (k(t)). Traditionally, these have been determined using linear regression methods to analyze energy balance data from calorimetry experiments. Many studies have highlighted concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used in the analysis. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy balance data were considered: the straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k(g) and k(t). Values of 0.83 to 0.86 and 0.66 to 0.69 were obtained for k(g) and k(t), respectively, using all the functions (with standard errors of 0.028 and 0.027), which differ considerably from previous reports of 0.60 to 0.75 for k(g) and 0.82 to 0.84 for k(t). Using the estimated values of k(g) and k(t), the data were corrected to allow for body tissue changes. Based on the definition of k(l) as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k(l) were determined. Meta-analysis of the pooled data showed that the average k(l) ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW^0.75 per day. Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW^0.75 per day) are required to determine conclusively whether milk energy is related to MEI linearly or not.
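Comparing a straight line against a diminishing-returns Mitscherlich on energy-balance data is a small nonlinear least-squares exercise. The Python sketch below fits both forms to simulated MEI/milk-energy pairs and compares residual sums of squares; the data, parameter values and the simplified two-parameter line are illustrative, not the paper's meta-analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Hypothetical energy-balance data: milk energy vs. MEI (MJ/kg BW^0.75 per day).
mei = np.linspace(0.4, 2.4, 40)
milk_e = 0.9 * (1 - np.exp(-0.8 * (mei - 0.45))) + rng.normal(0, 0.03, mei.size)

def straight_line(x, kl, mem):
    return kl * (x - mem)                       # constant efficiency k_l above MEm

def mitscherlich(x, a, b, mem):
    return a * (1 - np.exp(-b * (x - mem)))     # diminishing returns above MEm

for model, p0 in ((straight_line, (0.6, 0.4)),
                  (mitscherlich, (1.0, 1.0, 0.4))):
    popt, _ = curve_fit(model, mei, milk_e, p0=p0)
    rss = np.sum((milk_e - model(mei, *popt)) ** 2)
    print(model.__name__, popt.round(3), 'RSS:', round(rss, 4))
```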
Abstract:
Few studies have linked density dependence of parasitism to the tritrophic environment within which a parasitoid forages. In the non-crop plant–aphid system Centaurea nigra–Uroleucon jaceae, mixed patterns of density-dependent parasitism by the parasitoids Aphidius funebris and Trioxys centaureae were observed in a survey of a natural population. Breaking parasitism down by colony size revealed that density dependence was inverse in smaller colonies but direct in large colonies (>20 aphids), suggesting a threshold effect in the parasitoid response to aphid density. The CV² of searching parasitoids was estimated from parasitism data using a hierarchical generalized linear model; CV² > 1 for A. funebris between plant patches, while for T. centaureae CV² > 1 within plant patches. In both cases, density-independent heterogeneity was more important than density-dependent heterogeneity in parasitism. Parasitism by T. centaureae increased with increasing plant patch size. Manipulation of aphid colony size and plant patch size revealed that parasitism by A. funebris was directly density dependent over the range of colony sizes tested (50-200 initial aphids) and had a strong positive relationship with plant patch size. The effects of plant patch size detected for both species indicate that the tritrophic environment provides a source of host-density-independent heterogeneity in parasitism and can modify density-dependent responses. (c) 2007 Gesellschaft für Ökologie. Published by Elsevier GmbH. All rights reserved.
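The threshold pattern (inverse density dependence in small colonies, direct in large ones) can be captured with a piecewise, broken-stick binomial GLM. The Python sketch below simulates such a survey and fits the two slopes plus a patch-size effect with statsmodels; the breakpoint at 20 aphids follows the abstract, but the data and coefficients are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical survey: aphid colonies of varying size in C. nigra patches.
n = 300
colony = rng.integers(2, 200, n)                 # aphids per colony
patch = rng.uniform(0.5, 5.0, n)                 # plant patch size (arbitrary units)

# Simulated truth: inverse density dependence below the ~20-aphid threshold,
# direct above it, plus a positive patch-size effect (as for A. funebris).
small = np.minimum(colony, 20)
large = np.maximum(colony - 20, 0)
p = 1 / (1 + np.exp(1.0 + 0.05 * small - 0.01 * large - 0.3 * patch))
parasitized = rng.binomial(colony, p)

# Broken-stick binomial GLM reproducing the threshold analysis.
X = sm.add_constant(np.column_stack([small, large, patch]))
fit = sm.GLM(np.column_stack([parasitized, colony - parasitized]),
             X, family=sm.families.Binomial()).fit()
print(fit.params.round(3))                       # intercept, two slopes, patch effect
```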
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any subset of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
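In the ordinary least-squares setting the abstract starts from, the influence matrix is the hat matrix S = X(X^T X)^{-1} X^T: its diagonal holds the self-sensitivities, its trace the degrees of freedom for signal, and leave-one-out changes follow without refitting. A minimal numpy sketch with simulated data standing in for an assimilation system:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ordinary least squares y = X beta + e; the influence (hat) matrix is
# S = X (X^T X)^{-1} X^T and its diagonal gives the self-sensitivities.
n, p = 50, 4
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(0, 0.5, n)

S = X @ np.linalg.solve(X.T @ X, X.T)
self_sens = np.diag(S)
resid = y - S @ y

# trace(S) = degrees of freedom for signal (here equal to p).
print('information content of the data:', round(self_sens.sum(), 6))

# Analysis change at point i if observation i were left out, no refit needed:
# delta_i = S_ii * e_i / (1 - S_ii).
delta = self_sens * resid / (1 - self_sens)
print('largest leave-one-out analysis change:', round(np.abs(delta).max(), 3))
print('most influential datum:', int(np.argmax(self_sens)))
```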