994 results for Form Error Compensation
Abstract:
Following treatment with bracken fern (Pteridium aquilinum) extract and bracken spores, a number of DNA adducts were detected by P-32-postlabeling. Three of these adducts have been described previously (Povey et al., Br. J. Cancer (1996) 74, 1342-1348) and in this study, using a slightly different protocol, four new adducts with higher chromatographic mobility were detected, at levels ranging from 50 to 230% of those previously described. When DNA was treated in vitro with activated ptaquiloside (APT) and analysed by butanol extraction or nuclease P1 treatment, only one adduct was detected by P-32-postlabeling. This adduct was not present in the DNA from mice treated with bracken fern or spores, suggesting either that bracken contains genotoxins other than ptaquiloside or that the metabolism of ptaquiloside produces genotoxins not reflected by activated ptaquiloside. However, as the APT-derived adduct has been detected previously in ileal DNA of bracken-fed calves, species-specific differences in the metabolism of bracken genotoxins may exist, thereby leading to differences in their biological outcomes. (C) 2001 Academic Press.
Abstract:
We consider discrete two-point boundary value problems of the form D²y(k+1) = f(kh, y(k), Dy(k)), for k = 1, ..., n − 1, with (0,0) = G((y(0), y(n)); (Dy(1), Dy(n))), where Dy(k) = (y(k) − y(k−1))/h and h = 1/n. This arises as a finite difference approximation to y'' = f(x, y, y'), x ∈ [0,1], (0,0) = G((y(0), y(1)); (y'(0), y'(1))). We assume that f and G = (g(0), g(1)) are continuous and fully nonlinear, that there exist pairs of strict lower and strict upper solutions for the continuous problem, and that f and G satisfy additional assumptions that are known to yield a priori bounds on, and to guarantee the existence of, solutions of the continuous problem. Under these assumptions we show that there are at least three distinct solutions of the discrete approximation which approximate solutions of the continuous problem as the grid size h goes to 0. (C) 2003 Elsevier Science Ltd. All rights reserved.
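For the special case of Dirichlet boundary data (g(0) = y(0) − A, g(1) = y(n) − B) and a right-hand side independent of y and Dy, the discrete scheme reduces to a tridiagonal linear system; the following is a minimal numpy sketch under these simplifying assumptions, not the paper's fully nonlinear setting.

```python
import numpy as np

# Central-difference discretisation of y'' = f(x) on [0,1] with
# Dirichlet data y(0) = a, y(1) = b (a special case of the general
# boundary operator G).  Here f(x) = 6x, so the exact solution is x^3.
def solve_bvp(n, a=0.0, b=1.0):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Assemble tridiagonal system for the interior nodes k = 1..n-1:
    # y(k-1) - 2 y(k) + y(k+1) = h^2 f(x_k)
    A = np.zeros((n - 1, n - 1))
    rhs = 6.0 * x[1:n] * h**2
    for k in range(n - 1):
        A[k, k] = -2.0
        if k > 0:
            A[k, k - 1] = 1.0
        if k < n - 2:
            A[k, k + 1] = 1.0
    rhs[0] -= a      # move the known boundary values to the right side
    rhs[-1] -= b
    y = np.empty(n + 1)
    y[0], y[-1] = a, b
    y[1:n] = np.linalg.solve(A, rhs)
    return x, y

x, y = solve_bvp(50)
print(np.max(np.abs(y - x**3)))  # central differences are exact for cubics
```

The second central difference reproduces cubics exactly, so this test case recovers x³ to machine precision; for general f the error is O(h²).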
Abstract:
Admission controls, such as trunk reservation, are often used in loss networks to optimise their performance. Since the numerical evaluation of performance measures is complex, much attention has been given to finding approximation methods. The Erlang Fixed-Point (EFP) approximation, which is based on an independent blocking assumption, has been used for networks both with and without controls. Several more elaborate approximation methods which account for dependencies in blocking behaviour have been developed for the uncontrolled setting. This paper is an exploratory investigation of extensions and synthesis of these methods to systems with controls, in particular, trunk reservation. In order to isolate the dependency factor, we restrict our attention to a highly linear network. We will compare the performance of the resulting approximations against the benchmark of the EFP approximation extended to the trunk reservation setting. By doing this, we seek to gain insight into the critical factors in constructing an effective approximation. (C) 2003 Elsevier Ltd. All rights reserved.
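The EFP approximation used as the benchmark above can be sketched for a toy two-link linear network without trunk reservation; the capacities, loads and route structure below are illustrative assumptions, not from the paper.

```python
# Erlang Fixed-Point (EFP) sketch: two links, one through route using
# both links and one single-link route per link.  Links are assumed to
# block independently (the EFP independence assumption).
def erlang_b(load, capacity):
    """Erlang B blocking probability via the standard recursion."""
    b = 1.0
    for c in range(1, capacity + 1):
        b = load * b / (c + load * b)
    return b

def efp(cap=(10, 10), local=(4.0, 4.0), through=3.0, iters=200):
    B = [0.0, 0.0]  # link blocking probabilities
    for _ in range(iters):
        # Reduced-load: through traffic offered to a link is thinned by
        # the blocking probability of the other link on its route.
        a0 = local[0] + through * (1.0 - B[1])
        a1 = local[1] + through * (1.0 - B[0])
        B = [erlang_b(a0, cap[0]), erlang_b(a1, cap[1])]
    return B

B = efp()
print(B)  # symmetric network, so both blocking probabilities coincide
```

Extending this to trunk reservation would replace the Erlang B recursion with the stationary distribution of the reservation-controlled link, which is the setting the paper investigates.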
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to improve the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
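The implicit generalized alpha update for a single undamped degree of freedom can be sketched as follows, using the standard parameters derived from the spectral radius at infinity; this illustrates the base scheme only, not the paper's explicit/implicit element partition.

```python
import numpy as np

# Implicit generalized-alpha method for u'' + w^2 u = 0 (single DOF,
# unit mass).  Parameters follow the usual rho_inf parametrisation:
# am = (2 rho - 1)/(rho + 1), af = rho/(rho + 1),
# beta = (1 - am + af)^2 / 4, gamma = 1/2 - am + af.
def gen_alpha_sdof(w=2 * np.pi, h=0.01, steps=100, rho=0.9, u0=1.0, v0=0.0):
    am = (2 * rho - 1) / (rho + 1)
    af = rho / (rho + 1)
    beta = 0.25 * (1 - am + af) ** 2
    gamma = 0.5 - am + af
    u, v = u0, v0
    a = -w**2 * u0  # consistent initial acceleration
    for _ in range(steps):
        u_pred = u + h * v + h**2 * (0.5 - beta) * a
        # Balance at the shifted points:
        # (1-am) a1 + am a + w^2 [(1-af)(u_pred + beta h^2 a1) + af u] = 0
        lhs = (1 - am) + w**2 * (1 - af) * beta * h**2
        rhs = -am * a - w**2 * ((1 - af) * u_pred + af * u)
        a1 = rhs / lhs
        u = u_pred + beta * h**2 * a1
        v = v + h * ((1 - gamma) * a + gamma * a1)
        a = a1
    return u, v

u, v = gen_alpha_sdof()
print(u)  # close to cos(2*pi) = 1 after one full period
```

With rho = 0.9 the scheme is second-order accurate and only mildly dissipative, so one period of the oscillator is reproduced with small amplitude and phase error.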
Abstract:
This study evaluated the degree to which the disturbance to posture from respiration is compensated for in healthy individuals, whether this compensation differs in people with recurrent low back pain (LBP), and how it changes when respiratory demand is increased. Angular displacement of the lumbar spine and hips, and motion of the centre of pressure (COP), were recorded with high resolution, and respiratory phase was recorded from ribcage motion. With subjects standing in a relaxed posture, recordings were made during quiet breathing, while breathing with increased dead-space to induce hypercapnoea, and while subjects voluntarily increased their respiration to match the ribcage expansion induced in the hypercapnoea condition. The relationship between respiration and the movement parameters was measured from the coherence between breathing and COP and angular motion at the frequency of respiration, and from averages triggered from the respiratory data. Small angular changes in the lumbopelvic and hip angles were evident at the frequency of respiration in both groups. However, in quiet standing, the LBP subjects had a greater respiration-related displacement of the COP than the control subjects. The LBP group showed a trend towards less hip motion. There were no changes in the movement parameters when respiratory demand increased involuntarily via hypercapnoea, but when respiration increased voluntarily, the amplitude of motion and the displacement of the COP increased in both groups. The present data suggest that the postural compensation counteracts at least part of the disturbance to posture caused by respiration and that this compensation may be less effective in people with LBP.
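The coherence measure described can be sketched on synthetic signals: two series sharing a component at a typical respiratory frequency. All parameters below (0.25 Hz respiratory rate, amplitudes, noise level) are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic "ribcage" and "COP" signals sharing a 0.25 Hz component;
# the COP channel is buried in unit-variance noise.
fs = 100.0
t = np.arange(0, 120, 1.0 / fs)
rng = np.random.default_rng(2)
breath = np.sin(2 * np.pi * 0.25 * t)
cop = 0.3 * np.sin(2 * np.pi * 0.25 * t + 0.5) + rng.normal(0.0, 1.0, t.size)

# Welch-averaged magnitude-squared coherence; read off the value at the
# respiratory frequency, as the study does.
f, Cxy = coherence(breath, cop, fs=fs, nperseg=4096)
c_resp = Cxy[np.argmin(np.abs(f - 0.25))]
print(c_resp)  # high coherence at the respiratory frequency
```

A coherence near 1 at the respiratory frequency indicates that the COP displacement is linearly related to breathing, even when the shared component is small relative to the noise.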
Abstract:
Whey is a by-product of cheese manufacture, obtained either by acidification or by an enzymatic process. Under suitable conditions the milk casein aggregates into a gel which, once cut, induces the separation and release of the whey. Whey is used in many forms throughout the food industry and is rich in lactose, mineral salts and proteins. Dehydration is one of the main processes used for its upgrading and transformation. The aim of this work was therefore to evaluate the influence of the drying methods freeze-drying, foam-mat drying (at temperatures of 40, 50, 60, 70 and 80 °C) and spray-drying (at temperatures of 55, 60, 65, 70 and 75 °C) on the moisture, protein, colour and solubility of the whey, and to study its drying process. The whey was obtained and dehydrated after concentration by reverse osmosis, testing 11 treatments with 3 replicates in a completely randomised design. The results showed that the mathematical model with the best fit was the Page model, with an adjusted coefficient of determination above 0.98 and a regression standard error below 0.04 at all temperatures for the foam-mat method. For freeze-drying the corresponding values were 0.9975 and 0.01612. From these fits a generalised mathematical model was derived, with a coefficient of determination of 0.9888. For foam-mat drying, increasing the drying-air temperature reduced the drying time and increased the effective diffusion coefficient, although the reduction in drying time between successive temperature intervals became smaller as the temperature rose.
The activation energy for diffusion in the whey drying process was 26.650 kJ/mol, and for all the physico-chemical and technological evaluations the analysis of variance gave a significant F value (p < 0.05), indicating that at least one contrast between treatment means is significant.
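The Arrhenius step that yields the activation energy can be sketched with synthetic effective diffusion coefficients generated at the foam-mat drying temperatures; D0 and the noiseless data are assumptions, and only the 26.65 kJ/mol value comes from the abstract.

```python
import numpy as np

# Arrhenius plot: ln(D_eff) = ln(D0) - Ea / (R T).  The slope of
# ln(D_eff) versus 1/T recovers the activation energy Ea.
R = 8.314           # J/(mol K)
Ea_true = 26650.0   # J/mol, the value reported in the abstract
D0 = 1e-6           # pre-exponential factor, illustrative
T = np.array([40.0, 50.0, 60.0, 70.0, 80.0]) + 273.15  # drying temperatures

D_eff = D0 * np.exp(-Ea_true / (R * T))       # synthetic, noise-free data
slope, intercept = np.polyfit(1.0 / T, np.log(D_eff), 1)
Ea_fit = -slope * R
print(Ea_fit)  # recovers ~26650 J/mol
```

With measured (noisy) diffusion coefficients the same linear regression gives the estimate together with its standard error.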
Abstract:
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large dimensional problems. It consists of low-rank subblocks leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type, it is defined through the first exponential-integral function and, hence, it is weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
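The low-rank structure that hierarchical matrices exploit can be illustrated numerically: for two well-separated point clusters, the discretised kernel block has rapidly decaying singular values. The kernel below is a smooth stand-in chosen for the sketch, not the first exponential-integral kernel of the paper.

```python
import numpy as np

# Discretise a convolution-type kernel on two well-separated clusters.
# Away from the diagonal the kernel is smooth, so the block admits an
# accurate low-rank (degenerate-kernel) approximation.
x = np.linspace(0.0, 1.0, 200)   # source cluster
y = np.linspace(3.0, 4.0, 200)   # target cluster, distance >= 2
r = np.abs(x[:, None] - y[None, :])
K = np.exp(-r) / r               # illustrative weakly-singular-type kernel

s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)  # far smaller than the block dimension 200
```

The numerical rank at a fixed tolerance grows only logarithmically with the accuracy, which is what makes the memory and arithmetic costs of the hierarchical format nearly linear in the matrix dimension.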
Abstract:
Over the past 25 years, expatriate managers have voiced increased disenchantment with their compensation packages while abroad. This paper takes a prescriptive approach, outlining several elements of a successful human resources strategy and stressing key ingredients of effective international compensation programs. Particular attention is given to adherence to cultural values and distributive justice when working across nations and cultures.
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time intensity decay due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce, due to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation due to photobleaching, makes long-term biological observation very difficult. In this paper a denoising algorithm for Poisson data, in which the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by photobleaching. The prior term is built from Gibbs priors with log-Euclidean potential functions, suited to the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
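A much cruder photobleaching compensation than the paper's Bayesian algorithm can illustrate the exponential-decay model in the data term: fit a log-linear trend to the per-frame means of a Poisson image stack and undo the estimated decay. All parameters below are synthetic.

```python
import numpy as np

# Synthetic stack: Poisson counts whose intensity decays exponentially
# over 50 frames, mimicking photobleaching.
rng = np.random.default_rng(0)
frames, lam0, decay = 50, 100.0, 0.04
t = np.arange(frames)
lam = lam0 * np.exp(-decay * t)
stack = rng.poisson(lam[:, None, None], size=(frames, 32, 32)).astype(float)

# Estimate the decay rate from the per-frame means (log-linear fit),
# then rescale each frame to undo the estimated decay.
means = stack.mean(axis=(1, 2))
slope, intercept = np.polyfit(t, np.log(means), 1)
corrected = stack * np.exp(-slope * t)[:, None, None]
print(-slope)  # estimate of the decay rate (true value 0.04)
```

Unlike this per-frame rescaling, the paper's formulation keeps the Poisson likelihood intact and estimates the decay jointly with the denoised image under a positivity-preserving prior.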
Abstract:
The motivation for this work comes from the author's need to record the notes played on the guitar while improvising. When improvising, the musician often does not remember the notes just played. This work concerns the development of an application for guitarists that records the notes played on an electric or classical guitar. The signal is acquired from the guitar and processed with real-time requirements during capture. The notes produced by the electric guitar, connected to the computer, are represented in tablature and/or score format. To this end the application captures the signal from the electric guitar through the computer's sound card and uses frequency-detection algorithms and note-duration estimation algorithms to build the record of the notes played. The application was developed from a multi-platform perspective and can run on different Windows and Linux operating systems, using public-domain tools and libraries. The results show that the guitar can be tuned with errors of around 2 Hz relative to the standard tuning frequencies. The tablature output gives satisfactory results, but can be improved; this will require better signal-processing techniques and better inter-process communication to solve the problems found in the tests performed.
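The frequency-detection step can be sketched with an FFT peak estimate on a synthesised tone; with a 0.5 s analysis window the bin spacing is 2 Hz, matching the tuning accuracy reported above. Real guitar signals would need more robust pitch estimation (e.g. autocorrelation), and all parameters here are illustrative.

```python
import numpy as np

# Estimate the pitch of a synthesised tone from the peak of a windowed
# FFT magnitude spectrum.
fs = 44100                       # sound-card sample rate
t = np.arange(0, 0.5, 1.0 / fs)  # 0.5 s analysis window -> 2 Hz bins
f0 = 110.0                       # open A string
signal = np.sin(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
f_est = freqs[np.argmax(spectrum)]
print(f_est)  # within one 2 Hz bin of 110 Hz
```

Mapping the estimated frequency to the nearest note of the standard tuning, plus an onset/duration estimate, gives the information needed for one tablature entry.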
Abstract:
This paper addresses the voltage droop compensation associated with long pulses generated by solid-state high-voltage Marx topologies. In particular, a novel design scheme for voltage droop compensation in solid-state bipolar Marx generators, using low-cost circuit design and control, is described. The compensation consists of adding one auxiliary PWM stage to the existing Marx stages, without changing the modularity and topology of the circuit; this stage controls the output voltage, and an LC filter smooths the voltage droop in both the positive and negative output pulses. Simulation results are presented for a 5-stage Marx circuit using 1 kV per stage, with a 1 kHz repetition rate and 10% duty cycle.
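The droop being compensated can be quantified with the usual capacitor-discharge estimate; all component values below are illustrative assumptions, not taken from the paper.

```python
import math

# Voltage droop of one Marx stage capacitor discharging into a
# resistive load over a long pulse: V(t) = V0 exp(-t / (R C)).
C = 10e-6        # stage capacitance, farads (illustrative)
V0 = 1000.0      # 1 kV per stage, as in the simulated circuit
R_load = 100.0   # load resistance, ohms (illustrative)
t_pulse = 100e-6 # pulse width, seconds (illustrative)

droop = V0 * (1.0 - math.exp(-t_pulse / (R_load * C)))
print(droop)  # roughly 95 V, i.e. close to 10% droop
```

A droop of this order over the pulse is what the auxiliary PWM stage and LC filter are added to flatten.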
Abstract:
This paper presents a variable speed autonomous squirrel cage generator excited by a current-controlled voltage source inverter to be used in stand-alone micro-hydro power plants. The paper proposes a system control strategy aiming to properly excite the machine as well as to achieve the load voltage control. A feed-forward control sets the appropriate generator flux by taking into account the actual speed and the desired load voltage. A load voltage control loop is used to adjust the generated active power in order to sustain the load voltage at a reference value. The control system is based on a rotor flux oriented vector control technique which takes into account the machine saturation effect. The proposed control strategy and the adopted system models were validated both by numerical simulation and by experimental results obtained from a laboratory prototype. Results covering the prototype start-up, as well as its steady-state and dynamical behavior are presented. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator designed to model market players and simulate their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains out of the market. This method uses an auxiliary forecasting tool, e.g. an Artificial Neural Network, to predict electricity market prices, and analyses its forecasting error patterns. By recognising the occurrence of such patterns, the method predicts the expected error for the next forecast and uses it to adapt the actual forecast. The goal is to approximate the forecast to the real value, reducing the forecasting error.
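The error-compensation idea (though not MASCEM's pattern-recognition method) can be sketched as a rolling-mean correction of a biased forecaster; the price series and bias below are synthetic.

```python
import numpy as np

# A persistently biased predictor is adapted by subtracting the mean of
# its last `window` forecasting errors from the next forecast.
rng = np.random.default_rng(1)
real = 50.0 + rng.normal(0.0, 1.0, 200)             # "true" market prices
forecast = real + 3.0 + rng.normal(0.0, 0.5, 200)   # biased auxiliary tool
errors = forecast - real

window = 10
adapted = np.array([forecast[i] - errors[i - window:i].mean()
                    for i in range(window, len(forecast))])

mae_raw = np.mean(np.abs(forecast[window:] - real[window:]))
mae_adapted = np.mean(np.abs(adapted - real[window:]))
print(mae_raw, mae_adapted)  # the adapted forecast has much smaller error
```

Any systematic pattern in the errors, a constant bias in this sketch, is absorbed by the correction term, which is the effect the paper's method pursues with a richer pattern-recognition model.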