934 results for Linear system solve
Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is thus object-dependent, as it relies on the object's appearance. Therefore, the positioning task cannot be performed in the presence of non-textured objects, or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent in current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
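For orientation, the control law that such visual features typically feed is the classical image-based visual servoing law (quoted here in its textbook form; the paper's exact formulation may differ):

\[ \mathbf{v} = -\lambda\,\widehat{\mathbf{L}}^{+}\,(\mathbf{s}-\mathbf{s}^{*}), \]

where \(\mathbf{s}\) is the vector of measured features (here extracted from the projected coded pattern), \(\mathbf{s}^{*}\) the desired feature vector, \(\widehat{\mathbf{L}}^{+}\) the pseudo-inverse of an estimated interaction matrix, \(\lambda\) a positive gain, and \(\mathbf{v}\) the commanded camera velocity.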
Abstract:
The transport problem in Bogotá keeps growing: the current measures and the future plans for developing an integrated transport system appear insufficient for the population size of the Colombian capital. Fares are likewise high and represent a burden for citizens, since the proportion of them who can afford a ticket on the current TransMilenio system keeps falling because of the steep annual increases in its fare. For this reason, this paper sets out the arguments indicating that the plans the district has applied, and plans to apply, are not sufficient to fill the gap that exists in Bogotá with respect to an integrated public transport system.
Abstract:
Lecture slides, handouts for tutorials, exam papers, and numerical examples for a third-year course on Control System Design.
Abstract:
Circuit testing is a phase of the production process that becomes increasingly important when a new product is developed. Test and diagnosis techniques for digital circuits have been successfully developed and automated, whereas this is not yet the case for analog circuits. Among all the methods proposed for diagnosing analog circuits, the most widely used are fault dictionaries. This thesis describes several of them, analyzing their advantages and drawbacks. In recent years, Artificial Intelligence techniques have become one of the most important research fields in fault diagnosis. This thesis develops two such techniques to address some of the shortcomings of fault dictionaries. The first proposal is based on building a fuzzy system as an identification tool. The results obtained are quite good, since the fault is located in a high percentage of cases. On the other hand, the success rate is not good enough when one additionally tries to determine the deviation. Since fault dictionaries can be seen as a simplified approximation to Case-Based Reasoning (CBR), the second proposal extends fault dictionaries towards a CBR system. The aim is not to give a general solution to the problem but to contribute a new methodology, which improves the diagnosis of fault dictionaries by adding and adapting new cases so that the dictionary becomes a Case-Based Reasoning system. The structure of the case base is described, as are the retrieval, reuse, revision, and retention tasks, with emphasis on the learning process. Throughout the text several circuits are used to illustrate the test methods described, but the biquadratic filter in particular is used to validate the proposed methodologies, since it is one of the benchmarks proposed in the analog circuit context. The faults considered are parametric, permanent, independent, and single, although the methodology can easily be extrapolated to the diagnosis of multiple and catastrophic faults. The method focuses on testing passive components, although it could also be extended to faults in active ones.
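As a rough illustration of the fault-dictionary idea discussed above (a hypothetical Python sketch, not the thesis's actual implementation), a dictionary can be viewed as a nearest-neighbour lookup from a measured signature to a pre-simulated fault class:

import numpy as np

# Hypothetical dictionary: each row is a measurement signature simulated
# for one fault case (e.g. node voltages of a biquadratic filter).
dictionary = np.array([
    [1.02, 0.51, 0.33],   # nominal (fault-free) circuit
    [1.30, 0.48, 0.35],   # R1 deviated high
    [0.80, 0.60, 0.31],   # C2 deviated low
])
labels = ["nominal", "R1 high", "C2 low"]

def diagnose(measurement):
    """Return the fault whose stored signature is closest to the measurement."""
    distances = np.linalg.norm(dictionary - measurement, axis=1)
    return labels[int(np.argmin(distances))]

print(diagnose(np.array([1.28, 0.49, 0.34])))  # -> "R1 high"

The CBR extension described in the thesis would, in effect, add each newly diagnosed and verified case back into this table and adapt it, rather than keeping the dictionary fixed.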
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
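A minimal sketch of the truncated variant described above (illustrative only; names and iteration caps are placeholders): the inner normal-equations solve is cut off after a fixed number of conjugate-gradient steps.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def truncated_gauss_newton(r, J, x0, outer_iters=10, inner_iters=5):
    """Minimize 0.5*||r(x)||^2, where r(x) returns residuals and J(x) the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        rx, Jx = r(x), J(x)
        # Inner problem: J^T J dx = -J^T r, solved inexactly ("truncated").
        A = LinearOperator((x.size, x.size), matvec=lambda v: Jx.T @ (Jx @ v))
        dx, _ = cg(A, -Jx.T @ rx, maxiter=inner_iters)
        x = x + dx
    return x

Replacing Jx with a simplified operator (e.g. a lower-resolution linearization) would give the "perturbed" variant in the authors' terminology.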
Abstract:
The linear viscoelastic (LVE) spectrum is one of the primary fingerprints of polymer solutions and melts, carrying information about most relaxation processes in the system. Many single-chain theories and models start by predicting the LVE spectrum to validate their assumptions. However, until now, no reliable linear stress relaxation data were available from simulations of multichain systems. In this work, we propose a new, efficient way to calculate a wide variety of correlation functions and mean-square displacements during simulations without significant additional CPU cost. Using this method, we calculate stress-stress autocorrelation functions for a simple bead-spring model of a polymer melt over a wide range of chain lengths, densities, temperatures, and chain stiffnesses. The obtained stress-stress autocorrelation functions were compared with the single-chain slip-spring model in order to obtain entanglement-related parameters, such as the plateau modulus or the molecular weight between entanglements. The dependence of the plateau modulus on the packing length is then discussed. We have also identified three different contributions to the stress relaxation: bond-length relaxation, colloidal, and polymeric. Their dependence on density and temperature is demonstrated for short unentangled systems without inertia.
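The abstract does not spell out the correlator, but as a hedged sketch, a stress-stress autocorrelation function of this kind can be estimated from a sampled shear-stress time series with an FFT-based estimator:

import numpy as np

def autocorrelation(x):
    """Estimate C(t) = <x(0) x(t)> from a time series via FFT, in O(N log N)."""
    n = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * n)                 # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]    # lags 0 .. n-1
    return acf / np.arange(n, 0, -1)          # divide by the number of pairs per lag

# e.g. with sigma_xy sampled every timestep from an MD run:
# C = autocorrelation(sigma_xy)   # G(t) follows up to the Green-Kubo prefactor V/(kT)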
Abstract:
We present the extension of a methodology for solving moving boundary value problems from the second-order case to the case of the third-order linear evolution PDE q_t + q_xxx = 0. This extension is the crucial step needed to generalize the methodology to PDEs of arbitrary order. The methodology is based on the derivation of inversion formulae for a class of integral transforms that generalize the Fourier transform, and on the analysis of the global relation associated with the PDE. The study of this relation and its inversion using the appropriate generalized transform are the main elements of the proof of our results.
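For reference (standard Fourier analysis on the whole line, as opposed to the paper's moving-boundary setting), substituting q = e^{ikx - i\omega t} into q_t + q_xxx = 0 gives the dispersion relation \omega(k) = -k^3, so the initial-value problem is solved by

\[ q(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx + ik^{3}t}\,\hat{q}_{0}(k)\,dk, \qquad \hat{q}_{0}(k) = \int_{-\infty}^{\infty} e^{-ikx}\,q(x,0)\,dx. \]

The generalized transforms mentioned in the abstract play the role of this Fourier pair on the moving domain.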
Abstract:
The decadal predictability of three-dimensional Atlantic Ocean anomalies is examined in a coupled global climate model (HadCM3) using a Linear Inverse Modelling (LIM) approach. It is found that the evolution of temperature and salinity in the Atlantic, and the strength of the meridional overturning circulation (MOC), can be effectively described by a linear dynamical system forced by white noise. The forecasts produced using this linear model are more skillful than other reference forecasts for several decades. Furthermore, significant non-normal amplification is found under several different norms. The regions from which this growth occurs are found to be fairly shallow and located in the far North Atlantic. Initially, anomalies in the Nordic Seas impact the MOC, and the anomalies then grow to fill the entire Atlantic basin, especially at depth, over one to three decades. It is found that the structure of the optimal initial condition for amplification is sensitive to the norm employed, but the initial growth seems to be dominated by MOC-related basin scale changes, irrespective of the choice of norm. The consistent identification of the far North Atlantic as the most sensitive region for small perturbations suggests that additional observations in this region would be optimal for constraining decadal climate predictions.
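The LIM recipe itself is standard and can be stated compactly (the usual construction; the norm and truncation details are specific to the study). The anomaly state x is modelled as dx/dt = Lx + white noise, and the propagator is estimated from lagged covariance matrices:

\[ \mathbf{C}(\tau) = \langle \mathbf{x}(t+\tau)\,\mathbf{x}(t)^{\mathsf{T}} \rangle, \qquad \mathbf{G}(\tau) = \mathbf{C}(\tau)\,\mathbf{C}(0)^{-1} = e^{\mathbf{L}\tau}, \]

so that forecasts take the form x(t+\tau) = G(\tau) x(t), and the optimal (non-normal) amplification under a given norm follows from the leading singular value of G(\tau).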
Abstract:
An X-ray micro-tomography system has been designed that is dedicated to low-dose imaging of radiation-sensitive living organisms, and it has been used to image early plant development in the first few days immediately after germination. The system is based on third-generation X-ray micro-tomography and consists of an X-ray tube, a two-dimensional X-ray detector, and a mechanical sample-manipulation stage. The X-ray source is a 50 kVp X-ray tube with a silver target, with a filter to centre the X-ray spectrum on 22 keV. A 100 mm diameter X-ray image intensifier (XRII) is used to collect the two-dimensional projection images. The rotation tomography table incorporates a linear translation mechanism to eliminate the ring artefacts commonly associated with third-generation tomography systems. Developing wheat seeds (Triticum aestivum) have been imaged using the system with cubic voxels of linear dimension 100 μm over a diameter of 25 mm, and the root lengths and volumes were measured. The X-ray dose to the plants was also assessed and found to have no effect on root development.
Abstract:
The length and time scales accessible to optical tweezers make them an ideal tool for the examination of colloidal systems. Embedded high-refractive-index tracer particles in an index-matched hard-sphere suspension provide 'handles' within the system with which to investigate its mechanical behaviour. Passive observation of the motion of a single probe particle gives information about the linear response of the system, which can be linked to the macroscopic frequency-dependent viscous and elastic moduli of the suspension. Separate 'dragging' experiments allow observation of a sample's nonlinear response to an applied stress on a particle-by-particle basis. Optical force measurements have given new data about the dynamics of phase transitions and particle interactions; an example in this study is the transition from liquid-like to solid-like behaviour, and the emergence of a yield stress and other effects attributable to nearest-neighbour caging. The forces needed to break such cages, and the frequency of these cage-breaking events, are investigated in detail for systems close to the glass transition.
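The link from single-particle motion to bulk moduli is usually made through a generalized Stokes-Einstein relation; one common form (conventions and prefactors vary between treatments) is

\[ G^{*}(\omega) = \frac{k_{B}T}{\pi a\, i\omega\, \mathcal{F}\{\langle \Delta r^{2}(t)\rangle\}(\omega)}, \]

where a is the probe radius and \mathcal{F} denotes the Fourier transform of the mean-squared displacement; the storage and loss moduli G'(\omega) and G''(\omega) are its real and imaginary parts.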
Abstract:
Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function described by state-space models is a mathematically robust framework that can be used to describe the dispersive phenomena governed by Lorentzian, Debye, and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the System Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond-duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is well justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed; these are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aimed at mapping and controlling molecular relaxation processes, will be mentioned.
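For orientation, the single-resonance forms of the three responses named above are the textbook expressions (sign conventions depend on the assumed time dependence):

\[ \varepsilon_{\mathrm{Lorentz}}(\omega) = \varepsilon_{\infty} + \frac{\omega_{p}^{2}}{\omega_{0}^{2} - \omega^{2} - i\gamma\omega}, \qquad \varepsilon_{\mathrm{Debye}}(\omega) = \varepsilon_{\infty} + \frac{\Delta\varepsilon}{1 + i\omega\tau}, \qquad \varepsilon_{\mathrm{Drude}}(\omega) = \varepsilon_{\infty} - \frac{\omega_{p}^{2}}{\omega^{2} + i\gamma\omega}. \]

A multimode Brownian oscillator response is, in this picture, a weighted sum of such terms.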
Abstract:
The current energy-requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiencies of utilization of MEI for 1) maintenance, 2) milk production (k_l), and 3) growth (k_g), and the efficiency of utilization of body stores for milk production (k_t). Traditionally, these have been determined by linear regression analysis of energy-balance data from calorimetry experiments. Many studies have raised concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used to estimate them. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy-balance data were considered: a straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k_g and k_t. Values of 0.83 to 0.86 and 0.66 to 0.69 were obtained for k_g and k_t with all the functions (standard errors of 0.028 and 0.027, respectively), which differ considerably from previous reports of 0.60 to 0.75 for k_g and 0.82 to 0.84 for k_t. Using the estimated values of k_g and k_t, the data were corrected to allow for body-tissue changes. Based on the definition of k_l as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k_l were determined. Meta-analysis of the pooled data showed that the average k_l ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW^0.75 per day. Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW^0.75 per day) are required to determine conclusively whether milk energy is related to MEI linearly or not.
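As an illustration of fitting one of the diminishing-returns functions above (a generic sketch with made-up numbers, not the study's meta-analysis model), a Mitscherlich curve can be fitted with scipy:

import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(mei, a, b, c):
    """Diminishing-returns response of milk energy to metabolizable energy intake."""
    return a * (1.0 - np.exp(-b * (mei - c)))

# Illustrative data only: MEI and milk energy in MJ/kg of BW^0.75 per day.
mei = np.array([0.8, 1.2, 1.6, 2.0, 2.4])
milk_energy = np.array([0.15, 0.35, 0.52, 0.64, 0.72])

params, _ = curve_fit(mitscherlich, mei, milk_energy, p0=(1.0, 1.0, 0.5))
print(params)   # fitted (a, b, c)

The efficiency k_l then follows from a derivative of the fitted response, which is exactly where the choice between a straight line and a saturating curve matters.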
Abstract:
Few studies have linked density dependence of parasitism to the tritrophic environment within which a parasitoid forages. In the non-crop plant-aphid system Centaurea nigra-Uroleucon jaceae, mixed patterns of density-dependent parasitism by the parasitoids Aphidius funebris and Trioxys centaureae were observed in a survey of a natural population. A breakdown of density-dependent parasitism revealed that density dependence was inverse in smaller colonies but direct in large colonies (>20 aphids), suggesting a threshold effect in the parasitoid response to aphid density. The CV^2 of searching parasitoids was estimated from parasitism data using a hierarchical generalized linear model; CV^2 > 1 for A. funebris between plant patches, while for T. centaureae CV^2 > 1 within plant patches. In both cases, density-independent heterogeneity was more important than density-dependent heterogeneity in parasitism. Parasitism by T. centaureae increased with increasing plant patch size. Manipulation of aphid colony size and plant patch size revealed that parasitism by A. funebris was directly density dependent over the range of colony sizes tested (50-200 initial aphids) and had a strong positive relationship with plant patch size. The effects of plant patch size detected for both species indicate that the tritrophic environment provides a source of host-density-independent heterogeneity in parasitism and can modify density-dependent responses.
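The hierarchical GLM behind the CV^2 estimates is not specified here, but a minimal non-hierarchical analogue (a sketch only, with invented numbers) models parasitized counts as binomial with log colony size as the predictor:

import numpy as np
import statsmodels.api as sm

# Hypothetical survey data: colony sizes and numbers of parasitized aphids.
colony_size = np.array([5, 12, 18, 25, 40, 80, 150])
parasitized = np.array([0, 1, 2, 6, 11, 25, 52])

X = sm.add_constant(np.log(colony_size))
y = np.column_stack([parasitized, colony_size - parasitized])  # successes, failures
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)   # a positive slope indicates directly density-dependent parasitism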
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis: the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any subset of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system.
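For the ordinary least-squares case that motivates the paper (standard regression theory, stated for orientation), the influence matrix is the hat matrix

\[ \mathbf{S} = \mathbf{X}\,(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}, \qquad \hat{\mathbf{y}} = \mathbf{S}\mathbf{y}, \]

whose diagonal entries S_ii = \partial\hat{y}_i/\partial y_i are the self-sensitivities and whose trace gives the degrees of freedom for signal; the paper's contribution is the analogue of these quantities, and their approximate computation, in the variational assimilation setting.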
Abstract:
In this study, the extraction properties of a synergistic system consisting of 2,6-bis(benzoxazolyl)-4-dodecyloxypyridine (BODO) and 2-bromodecanoic acid (HA) in tert-butyl benzene (TBB) have been investigated as a function of ionic strength by varying the nitrate and perchlorate ion concentrations. The influence of the hydrogen ion concentration has also been investigated. Distribution ratios between 0.03-12 and 0.003-0.8 were found for Am(III) and Eu(III), respectively, though no attempt was made to maximize these values. It is shown that the distribution ratios decrease with increasing amounts of ClO4-, NO3-, and H+. The mechanisms by which the decrease occurs, however, are different. With increasing perchlorate ion concentration, the decrease in extraction is linear in a log-log plot of distribution ratio vs. ionic strength, while in the nitrate case the complexation between nitrate and Am or Eu increases at high nitrate ion concentrations and thereby decreases the distribution ratio in a non-linear way. The decrease in extraction could be caused by changes in activity coefficients, which can be explained with specific ion interaction theory (SIT), by shielding of the metal ions, and by nitrate complexation with Am and Eu as a competing mechanism at high ionic strengths. The separation factor between Am and Eu reaches a maximum at approximately 1 M nitrate ion concentration; thereafter the values decrease with increasing nitrate ion concentration.
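For reference, the two quantities reported above have their standard solvent-extraction definitions:

\[ D_{M} = \frac{[M]_{\mathrm{org}}}{[M]_{\mathrm{aq}}}, \qquad SF_{\mathrm{Am/Eu}} = \frac{D_{\mathrm{Am}}}{D_{\mathrm{Eu}}}, \]

so the linear trend seen in the perchlorate case corresponds to log D decreasing with constant slope against the logarithm of the ionic strength.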