979 results for ARRAY INTERPOLATION
Abstract:
Master's dissertation in Techniques of Characterization and Chemical Analysis
Abstract:
OBJECTIVE - To analyze the trends in risk of death due to cardiovascular diseases in the northern, northeastern, southern, southeastern, and central western Brazilian geographic regions from 1979 to 1996. METHODS - Data on mortality due to cardiovascular, cardiac ischemic, and cerebrovascular diseases in 5 Brazilian geographic regions were obtained from the Ministry of Health. Population estimates for the time period from 1978 to 1996 in the 5 Brazilian geographic regions were calculated by interpolation with the Lagrange method, based on the census data from 1970, 1980, 1991, and the population count of 1996, for each age bracket and sex. Trends were analyzed with the multiple linear regression model. RESULTS - Cardiovascular diseases showed a declining trend in the southern, southeastern, and northern Brazilian geographic regions in all age brackets and for both sexes. In the northeastern and central western regions, an increasing trend in the risk of death due to cardiovascular diseases occurred, except for the age bracket from 30 to 39 years, which showed a slight reduction. This resulted from the trends of cardiac ischemic and cerebrovascular diseases. The analysis of the trend in the northeastern and northern regions was impaired by the great proportion of poorly defined causes of death. CONCLUSION - The risk of death due to cardiovascular, cerebrovascular, and cardiac ischemic diseases decreased in the southern and southeastern regions, which are the most developed regions in the country, and increased in the least developed regions, mainly in the central western region.
Abstract:
OBJECTIVE: To assess the trends of the risk of death due to circulatory (CD), cerebrovascular (CVD), and ischemic heart diseases (IHD) in 11 Brazilian capitals from 1980 to 1998. METHODS: Data on mortality due to CD, CVD and IHD were obtained from the Brazilian Health Ministry, and the population estimates were calculated by interpolation with the Lagrange method based on census data from 1980 and 1991 and the population count of 1996. The trends were analyzed with the multiple linear regression method. RESULTS: CD showed a trend towards a decrease in most capitals, except for Brasília, where a mild increase was observed. The cities of Porto Alegre, Curitiba, Rio de Janeiro, Cuiabá, Goiânia, Belém, and Manaus showed a decrease in the risk of death due to CVD and IHD, while the city of Brasília showed an increase in CVD and IHD. The city of São Paulo showed a mild increase in IHD for individuals of both sexes aged 30 to 39 years and for females aged 40 to 59 years. In the cities of Recife and Salvador, a reduction in CD was observed for all ages and both sexes. In the city of Recife, however, an increase in IHD was observed at younger ages (30 to 49 years), and this trend decreased until a mild reduction (-4%) was observed in males ≥ 70 years. CONCLUSION: In general, a reduction in the risk of death due to CD and an increase in IHD were observed, mainly in the cities of Recife and Brasília.
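Both of the mortality studies above estimate intercensal populations by Lagrange interpolation through the census counts. A minimal sketch of the method, where the census figures are hypothetical placeholders rather than the studies' actual data:

```python
def lagrange_interpolate(points, x):
    """Evaluate the Lagrange polynomial through `points` (pairs (x_i, y_i)) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

# Hypothetical census counts (year, population) -- placeholders, not study data.
census = [(1970, 93_000_000), (1980, 119_000_000),
          (1991, 147_000_000), (1996, 157_000_000)]

# Intercensal estimate for a year with no census.
estimate_1985 = lagrange_interpolate(census, 1985)
```

The polynomial reproduces the census values exactly at the census years, which is what makes the method convenient for filling in the years between counts.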
Abstract:
The exponential growth of data traffic is one of the greatest challenges currently facing communication systems, which must be able to support ever higher data-processing speeds. In particular, power consumption has become one of the most critical design parameters, creating the need to investigate new architectures and algorithms for the digital processing of information. Moreover, the analysis and evaluation of new processing techniques is difficult because of the high speeds at which they must operate, and software-based simulation is frequently an inefficient method. In this context, programmable electronics offers a low-cost opportunity not only to evaluate new high-speed design techniques but also to validate their implementation in technological developments. The main objective of this project is the study and development of new architectures and algorithms in programmable electronics for high-speed data processing. The method will be programming FPGA (Field-Programmable Gate Array) devices, which offer a good cost-benefit ratio and great flexibility for integration with other communication devices. CAD (Computer-Aided Design) tools oriented to digital electronic systems will be used for the design, simulation, and programming stages. The project will benefit undergraduate and graduate students in fields related to computer science and telecommunications, contributing to the development of final projects and doctoral theses. The results of the project will be published in national and international conferences and/or journals and disseminated through outreach talks and/or meetings.
The project falls within an area of great importance to the Province of Córdoba, namely computer science and telecommunications, and promises to generate knowledge of high added value that can be transferred to technology companies of the Province of Córdoba through consulting or product development.
Abstract:
BACKGROUND: The ideal programming of the ICD shock energy should be at least 10 J above the defibrillation threshold (DFT), requiring alternative techniques when the DFT is high. OBJECTIVE: To assess the clinical course of ICD recipients with DFT > 25 J and the efficacy of the chosen therapy. METHODS: ICD recipients with an intraoperative DFT > 25 J were selected between January 2000 and August 2004 (prospective database), and the following were analyzed: clinical characteristics, LVEF, arrhythmic events rescued by the ICD, and deaths. RESULTS: Among 476 patients, 16 (3.36%) had DFT > 25 J. Mean age was 56.5 years, and 13 patients (81%) were male. Regarding the underlying heart disease, 9 patients had Chagas disease, 4 had ischemic disease, and 3 had idiopathic etiology. The patients' mean LVEF was 37%, and 94% used amiodarone. Mean follow-up was 25.3 months. In 2 patients with DFT > maximum shock (MS), implantation of an additional shock electrode (array) was necessary; in the remaining patients, programming was kept at MS in the VF zone (>182 bpm). During follow-up, 3 patients received 67 successful appropriate shock therapies (AST). There were 7 deaths, 5 from non-cardiac causes and 2 from advanced heart failure. The patients who died had higher DFT levels (p = 0.0446), but with no relation to the cause of death, given that no unsuccessful ASTs occurred. CONCLUSION: In this cohort of ICD patients, the occurrence of a high DFT was rare and called for alternative therapies. It was associated with severe ventricular dysfunction, but not correlated with the causes of death.
Abstract:
In state-of-the-art electronic devices intended for communication or automatic-control functions, digital signal processing algorithms implemented in hardware have come to play a fundamental role; that is, the state of the art in communications and control can be summarized as algorithms based on digital signal processing. Digital implementations of these algorithms have long been studied in computer science. However, although the increasing complexity of modern algorithms achieves attractive performance in specific applications, it also imposes restrictions on operating speed that have motivated the design of high-performance architectures directly in hardware. In this context, electronic circuits based on programmable logic, mainly those based on FPGAs (Field-Programmable Gate Arrays), yield highly reliable performance measurements that provide the necessary step toward the electronic design of circuits for specific applications, "ASIC-VLSI" (Application-Specific Integrated Circuit - Very Large Scale Integration). This project analyzes the design and implementation of electronic architectures for digital signal processing, in order to obtain real measurements of the behavior of the wireless channel and its influence on trajectory estimation and control in unmanned aerial vehicles (UAVs). To this end, we propose to analyze a hybrid device based on microcontrollers and FPGA circuits and, on this same device, to implement a trajectory-control algorithm that keeps a fixed point at the center of the frame of a video camera on board a UAV, and that is efficient in terms of operating speed, size, and power consumption.
Abstract:
Transmission of Cherenkov light through the atmosphere is strongly influenced by the optical clarity of the atmosphere and the prevailing weather conditions. The performance of telescopes measuring this light is therefore dependent on atmospheric effects. This thesis presents software and hardware developed to implement a prototype sky monitoring system for use on the proposed next-generation gamma-ray telescope array, VERITAS. The system, consisting of a CCD camera and a far-infrared pyrometer, was successfully installed and tested on the ten metre atmospheric Cherenkov imaging telescope operated by the VERITAS Collaboration at the F.L. Whipple Observatory in Arizona. The thesis also presents the results of observations of the BL Lacertae object, 1ES1959+650, made with the Whipple ten metre telescope. The observations provide evidence for TeV gamma-ray emission from the BL Lacertae object, 1ES1959+650, at a level of more than 15 standard deviations above background. This represents the first unequivocal detection of this object at TeV energies, making it only the third extragalactic source seen at such levels of significance in this energy range. The flux variability of the source on a number of timescales is also investigated.
Abstract:
Relying on original and well-known interpolation formulas, the peculiarities of the behavior of the densities of spectral functions in the transition regions between sublayers of phase space are analyzed for the cases of a neutral and a conducting atmosphere. The results of numerical calculations are presented in the form of graphs.
Abstract:
Elliptic differential equations, finite element method, mortar element method, streamline diffusion FEM, upwind method, numerical method, error estimate, interpolation operator, grid generation, adaptive refinement
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency of the desired events p and of the undesired events q, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p + q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision: n! p^n [ (1/n!)(q/p)^n + (1/(1!(n-1)!))(q/p)^(n-1) + (1/(2!(n-2)!))(q/p)^(n-2) + (1/(3!(n-3)!))(q/p)^(n-3) + ... ] ≤ P_lim --------(1b) There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1,56) two relative values, one equal to 1/5n as the lowest value of probability and the other equal to 1/10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since this number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%), and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1, pg. 85), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to find approximations as substitutes.
We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least superior to ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given: A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may be accidentally always the best, or the best and second best, or the first, second, and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n: n = log P_lim / log(m/a) = log P_lim / (log m - log a) --------(5) B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three, or a = m + 1 individuals? 1) For p between 0.1 and 0.9, using the Gaussian approximation, we have: b = δ sqrt((1 - p)/p) ; c = m/p --------(7) sqrt(n) = (b + sqrt(b² + 4c)) / 2 ; n' = 1/p ; n(cor) = n + n' --------(8) We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33, and 3.09 respectively. If we are only interested in having at least one individual, m becomes equal to zero and the formula reduces to: c = 0, so that for a = 1: n = b² = δ² (1 - p)/p ; n' = 1/p ; n(cor) = n + n' --------(9) 2) If p is smaller than 0.1 we may use table 1 in order to find the mean m of a Poisson distribution and determine n = m/p. C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?
1) When p1 and p2 are values between 0.1 and 0.9 we have: n = { δ [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2) }² ; n' = 1/(p1 - p2) ; n(cor) = n + n' --------(13) We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision: b = δ [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2) ; c = m/(p1 - p2) ; sqrt(n) = (b + sqrt(b² + 4c)) / 2 ; n' = 1/(p1 - p2) ; n(cor) = n + n' --------(14) 2) When both p1 and p2 are smaller than 0.1 we determine the quotient p1/p2 and find the corresponding number m2 of a Poisson distribution in table 2. The value n is found by the equation: n = m2/p2 --------(15) D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3? 1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9, we have to solve the individual equations and use the highest value of n thus determined: n(1,2) = { δ [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2) }², and similarly for the other pairs --------(16) Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58, and 3.29. 2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1, and in such cases extremely high numbers would be required. E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(esp); or b) if we study samples of n individuals and expect a certain type with a frequency p(esp), what may be the extreme limits of p(obs) in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use table 3.
To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(esp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(esp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use table 4 and transform the fractions p(esp) and p(obs) into numbers of Poisson series by multiplication with n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(esp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(esp) into a value of m and find, in the horizontal line corresponding to m(esp), the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs). F) Partial and progressive tests may be recommended in all cases where there is a lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently give satisfactory results already. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
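Formulas (5) and (7)-(9) of the abstract above can be sketched in a few lines; the function names and the numeric example are my own illustration, not the paper's:

```python
import math

def min_repetitions(a, m=1, p_lim=0.05):
    """Formula (5): smallest n with (m/a)**n <= P_lim, i.e.
    n = log(P_lim) / (log m - log a)."""
    return math.ceil(math.log(p_lim) / (math.log(m) - math.log(a)))

def min_individuals(p, m, delta=1.64):
    """Formulas (7)-(9): minimum n so that a type of frequency p appears
    in at least m + 1 individuals (Gaussian approximation, 0.1 < p < 0.9).
    delta is the unilateral Gaussian limit (1.64, 2.33 or 3.09)."""
    b = delta * math.sqrt((1 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4 * c)) / 2) ** 2
    if 0.25 <= p <= 0.75:          # correction n' = 1/p, formula (8)
        n += 1 / p
    return math.ceil(n)

# With 10 treatments, the chance that a given one is accidentally ranked
# best in every repetition drops below 5% after n repetitions:
n_rep = min_repetitions(a=10, m=1, p_lim=0.05)
```

For a = 10 and m = 1 the answer is n = 2, since (1/10)² = 0.01 is already below the 5% limit of precision.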
Abstract:
The author proves that the equation
| Σy       n       ΣZ^x     |
| ΣyZ^x    ΣZ^x    ΣZ^(2x)  |  =  0,
| ΣxyZ^x   ΣxZ^x   ΣxZ^(2x) |
where Z = 10^(-cq) and q is a numerical constant, used by Pimentel Gomes and Malavolta in several articles for the interpolation of Mitscherlich's equation y = A [1 - 10^(-c(x + b))] by the least squares method, always has a zero of order three at Z = 1. Therefore, the equation A0 Z^m + A1 Z^(m-1) + ... + Am = 0 obtained from that determinant can be divided by (Z - 1)³. This property provides a good test for the correctness of the computations and facilitates the solution of the equation.
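The divisibility by (Z - 1)³ claimed above is easy to check numerically: dividing by (Z - 1) three times by synthetic division must leave three zero remainders. An illustrative sketch with a made-up polynomial (not taken from the paper):

```python
def synthetic_divide(coeffs, r):
    """Divide a polynomial (coefficients, highest degree first) by (Z - r).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # Horner-style accumulation
    return out[:-1], out[-1]

def has_triple_root_at_one(coeffs, tol=1e-9):
    """True if (Z - 1)**3 divides the polynomial."""
    for _ in range(3):
        coeffs, rem = synthetic_divide(coeffs, 1.0)
        if abs(rem) > tol:
            return False
    return True

# Made-up example: (Z - 1)**3 * (Z + 2) = Z^4 - Z^3 - 3Z^2 + 5Z - 2
coeffs = [1, -1, -3, 5, -2]
```

Running the check three times in succession is exactly the "good test for the correctness of the computations" the abstract describes: a nonzero remainder signals an arithmetic error upstream.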
Abstract:
A comparative analysis of the restoration of continuous signals by different kinds of approximation is performed. A software product is proposed that determines the optimal method of restoring various original signals (by Lagrange polynomial, Kotelnikov interpolation series, linear and cubic splines, Haar wavelet, and Kotelnikov-Shannon wavelet) based on the criterion of the minimum mean-square deviation. Practical recommendations on the selection of the approximation function for different classes of signals are obtained.
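The selection criterion described, minimum mean-square deviation between the restored and the original signal, can be sketched as follows; a sampled sine as test signal and piecewise-linear restoration as one candidate method are my illustrative choices, not the paper's setup:

```python
import math

def mean_square_deviation(f, restored, xs):
    """Mean-square deviation of the restored signal from the original f."""
    return sum((f(x) - restored(x)) ** 2 for x in xs) / len(xs)

def linear_restore(samples):
    """Piecewise-linear restoration from (x, y) samples sorted by x."""
    def restored(x):
        for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("x outside sampled range")
    return restored

# Sample sin on [0, pi] at 9 points, then measure the restoration error
# on a finer grid; competing methods would be ranked by this number.
samples = [(i * math.pi / 8, math.sin(i * math.pi / 8)) for i in range(9)]
restored = linear_restore(samples)
grid = [i * math.pi / 100 for i in range(100)]
msd = mean_square_deviation(math.sin, restored, grid)
```

Each candidate restoration method would be wrapped the same way and the one with the smallest mean-square deviation selected, which is the comparison the abstract automates.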
Abstract:
Stink bugs are seed/fruit-sucking insects feeding on an array of host plants. Among them, an exotic tree called privet, Ligustrum lucidum Ait. (Oleaceae), is very common in the urban areas of the Brazilian subtropics, where it is used year-round as a food source and shelter by over ten species of bugs. The species composition, their performance and abundance on this host, and possible causes for this association are discussed and illustrated.
Abstract:
We quantify the long-time behavior of a system of (partially) inelastic particles in a stochastic thermostat by means of the contractivity of a suitable metric in the set of probability measures. Existence, uniqueness, boundedness of moments and regularity of a steady state are derived from this basic property. The solutions of the kinetic model are proved to converge exponentially as t→ ∞ to this diffusive equilibrium in this distance metrizing the weak convergence of measures. Then, we prove a uniform bound in time on Sobolev norms of the solution, provided the initial data has a finite norm in the corresponding Sobolev space. These results are then combined, using interpolation inequalities, to obtain exponential convergence to the diffusive equilibrium in the strong L¹-norm, as well as various Sobolev norms.
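The final step described above combines exponential decay in a weak metric with a uniform Sobolev bound via interpolation; a standard Sobolev interpolation inequality of the kind invoked (quoted here as general background, not as the paper's precise lemma) reads:

```latex
% For 0 < \theta < 1 and s = (1-\theta)\, s_0 + \theta\, s_1:
\| f \|_{H^{s}} \;\le\; \| f \|_{H^{s_0}}^{\,1-\theta}\, \| f \|_{H^{s_1}}^{\,\theta}
% Exponential decay of a weak norm, together with a uniform-in-time bound
% on a strong norm, therefore yields exponential decay (at a reduced rate)
% of every norm in between, including the strong L^1-norm via embedding.
```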
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH]. This recent domain has seen very successful developments. Nevertheless, several empirical studies seem to show that the performance of such models is not always adequate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, as well as two estimators (method of moments and maximum likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity, and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study of the DM-US exchange rate is undertaken.