983 results for analogy calculation


Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new method for calculating fractional expressions in the presence of sensor redundancy and noise. An algorithm that takes advantage of the signal characteristics and the sensor redundancy is tuned and optimized through genetic algorithms. The results demonstrate good performance for different types of expressions and distinct levels of noise.
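As a rough illustration of this kind of scheme (a minimal sketch, not the authors' algorithm: the fusion weights, fitness function, and GA settings below are all assumptions), a genetic algorithm can tune how redundant noisy channels are fused before a numerical derivative is taken:

```python
# Minimal sketch, not the authors' algorithm: a toy genetic algorithm tunes
# the fusion weights of redundant noisy sensor channels so that a numerical
# derivative of the fused signal matches a known reference. All names and
# GA settings here are illustrative assumptions.
import math
import random

def fused_derivative(weights, signals, dt):
    # Weighted average of the redundant channels, then a central-difference
    # derivative of the fused signal.
    s = sum(weights)
    fused = [sum(w * ch[i] for w, ch in zip(weights, signals)) / s
             for i in range(len(signals[0]))]
    return [(fused[i + 1] - fused[i - 1]) / (2 * dt)
            for i in range(1, len(fused) - 1)]

def fitness(weights, signals, reference, dt):
    # Negative mean squared error against the reference derivative.
    est = fused_derivative(weights, signals, dt)
    return -sum((e - r) ** 2 for e, r in zip(est, reference)) / len(est)

def evolve(signals, reference, dt, pop_size=30, generations=100):
    population = [[random.uniform(0.1, 1.0) for _ in signals]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, signals, reference, dt),
                        reverse=True)
        survivors = population[: pop_size // 2]
        # Gaussian mutation of randomly chosen survivors fills the next
        # generation; weights are kept strictly positive.
        children = [[max(1e-6, w + random.gauss(0.0, 0.05))
                     for w in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population[0]

# Toy data: three noisy copies of sin(t); the true derivative is cos(t).
dt = 0.01
t = [i * dt for i in range(500)]
signals = [[math.sin(x) + random.gauss(0.0, 0.05) for x in t] for _ in range(3)]
reference = [math.cos(x) for x in t[1:-1]]
print(evolve(signals, reference, dt))
```

In the paper, the tuned object and the fitness are tailored to fractional-order expressions rather than this toy first derivative.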

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the calculation of derivatives of fractional order for non-smooth data. The effect of noise is circumvented by adopting an optimization formulation using genetic algorithms (GA). Given the flexibility of evolutionary schemes, a hierarchical GA composed of a series of two GAs, each with a distinct fitness function, is established.
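Schematically (a sketch of the cascade structure only; the symbols $\theta_1$, $\theta_2$ and the fitness functions $J_1$, $J_2$ are our illustrative notation, not the paper's), a hierarchical GA of this kind solves two nested problems, the second conditioned on the first:

$$
\hat{\theta}_1 = \arg\min_{\theta_1} J_1(\theta_1),
\qquad
\hat{\theta}_2 = \arg\min_{\theta_2} J_2\!\left(\theta_2 \mid \hat{\theta}_1\right),
$$

where, for example, $J_1$ could penalize misfit of the fractional derivative to the data and $J_2$ could penalize the roughness induced by the noise.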

Relevance:

20.00%

Publisher:

Abstract:

Master's final project submitted for obtaining the degree of Master in Mechanical Engineering.

Relevance:

20.00%

Publisher:

Abstract:

Hydraulic systems are dynamically sensitive to entrapped air pockets, which lead to amplified transient reactions. To model the dynamic action of an entrapped air pocket in a confined system, a heuristic mathematical formulation based on a conceptual analogy to a mechanical spring-damper system is proposed. The formulation is based on the polytropic relationship of an ideal gas and includes an additional term that encompasses the combined damping effects associated with the thermodynamic deviations from the theoretical transformation, as well as those arising from the transient vorticity developed in both fluid domains (air and water). These effects are the key factors accounting for flow energy dissipation and pressure damping. The model was validated by numerical simulation of experimental measurements.
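A minimal linearized sketch of such a spring-damper analogy (our notation and linearization, not necessarily the paper's exact formulation): the polytropic law gives the air pocket an equivalent stiffness, and a lumped damper collects the dissipative effects described above,

$$
p\,V^{\,n} = p_0 V_0^{\,n}
\;\Rightarrow\;
k \approx \frac{n\,p_0 A^2}{V_0},
\qquad
m_{\mathrm{eq}}\,\ddot{x} + c\,\dot{x} + k\,x = A\,\Delta p(t),
$$

where $A$ is the air-water interface area, $V_0$ and $p_0$ are the initial volume and pressure of the pocket, $n$ is the polytropic exponent, $x$ is the interface displacement, $m_{\mathrm{eq}}$ is an equivalent water-column mass, and $c$ lumps the thermodynamic and vorticity damping.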

Relevance:

20.00%

Publisher:

Abstract:

The calculation of fractional derivatives is an important topic in scientific research. While the formal definitions are clear from the mathematical point of view, they pose limitations in the applied sciences that have not yet been tackled. This paper addresses the problem of obtaining left and right side derivatives when adopting numerical approximations. The results reveal the relationship between the distinct values obtained for different fractional orders and types of signals.
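For reference (the standard Grünwald-Letnikov definitions that such numerical approximations truncate; the paper's exact discretization may differ), the left and right side derivatives of order $\alpha$ on $[a,b]$ are

$$
{}_{a}D_t^{\alpha} f(t) = \lim_{h \to 0^+} \frac{1}{h^{\alpha}} \sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^k \binom{\alpha}{k} f(t - kh),
\qquad
{}_{t}D_b^{\alpha} f(t) = \lim_{h \to 0^+} \frac{1}{h^{\alpha}} \sum_{k=0}^{\lfloor (b-t)/h \rfloor} (-1)^k \binom{\alpha}{k} f(t + kh),
$$

so that, for finite $h$, the two truncated sums use only past and only future samples, respectively, which is why they generally yield distinct values on real signals.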

Relevance:

20.00%

Publisher:

Abstract:

The Strut-and-Tie Model emerged at the beginning of the 20th century, when Ritter and Mörsch established the analogy between a classical truss and a reinforced concrete beam. Since then, researchers have studied this model as a design method. From the 1990s onward, several codes adopted the strut-and-tie model as a relevant tool for the design of reinforced concrete elements. In this work, the safety criteria of the Strut-and-Tie Model are explained according to Eurocode 2:2010 and compared with the NBR 6118:2014 and ACI 318:2011 codes. It is generally agreed that the method is most advantageous in discontinuity regions. In all reinforced concrete elements, the strut-and-tie model is a representation of the idealized stress fields by means of compressed and tensioned members. To define these members, the "load path" process is proposed: once the elastic stresses and their principal directions are known from the Finite Element Method, the strut-and-tie model is easily conceived. The model can also be defined from standard models already established for certain types of reinforced concrete structural elements. For the deep beam (a discontinuity element) studied here, five calculation models were presented, progressively optimizing the solution and validating the stresses with the Finite Element Method. In all models, the definition of the Strut-and-Tie Model and the resistance of the struts, ties and nodes were analyzed up to the constructive solution of the deep beam. It was concluded that the definition of the model is the key to the design.
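For context, the Eurocode 2 strut verification mentioned above limits the design stress in a concrete strut to (as given in EC2 §6.5.2, quoted from the code rather than from this work)

$$
\sigma_{Rd,\max} = f_{cd} \quad \text{(transverse compression or no transverse tension)},
\qquad
\sigma_{Rd,\max} = 0.6\,\nu' f_{cd}, \quad \nu' = 1 - \frac{f_{ck}}{250},
$$

for struts crossed by transverse tension, with $f_{ck}$ in MPa; ties and nodes have analogous checks.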

Relevance:

20.00%

Publisher:

Abstract:

Public opinion surveys have become progressively incorporated into systems of official statistics. Surveys of the economic climate are usually qualitative because they collect the opinions of businesspeople and/or experts about long-term indicators described by a number of variables. In such cases the responses are expressed on an ordinal scale; that is, the respondents report, for example, whether during a given quarter sales or new orders have increased, decreased or remained the same as in the previous quarter. These data make it possible to calculate the percentage of respondents in the total population (results are extrapolated) who select each of the three options. Data are often presented in the form of an index calculated as the difference between the percentage of those who claim that a given variable has improved in value and the percentage of those who claim that it has deteriorated. As in any survey conducted on a sample, the measurement of the sampling error of the results has to be addressed, since the error influences both the reliability of the results and the calculation of the sample size adequate for a desired confidence interval. The results presented here are based on data from the Survey of the Business Climate (Encuesta de Clima Empresarial) developed through the collaboration of the Statistical Institute of Catalonia (Institut d'Estadística de Catalunya) with the Chambers of Commerce (Cámaras de Comercio) of Sabadell and Terrassa.
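The balance index described above and its sampling error can be made explicit. Writing $p_{+}$ and $p_{-}$ for the population shares answering "increased" and "decreased" (a standard multinomial result under simple random sampling; the survey's actual design may require further adjustment):

$$
B = p_{+} - p_{-},
\qquad
\operatorname{Var}(\hat{B}) = \frac{p_{+} + p_{-} - (p_{+} - p_{-})^2}{n},
$$

so the half-width of a normal-approximation confidence interval, $z_{\alpha/2}\sqrt{\operatorname{Var}(\hat{B})}$, can be inverted to obtain the sample size $n$ needed for a desired precision.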

Relevance:

20.00%

Publisher:

Abstract:

The paper considers the use of artificial regression in calculating different types of score test when the log
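As general background on this technique (the OPG form below is one standard artificial regression for score tests, not necessarily the variant this paper considers): with $G$ the $n \times k$ matrix of per-observation contributions to the score, evaluated at the restricted estimates, one regresses a vector of ones $\iota$ on $G$ and uses

$$
LM = n R_u^2,
$$

where $R_u^2$ is the uncentred $R^2$ of that regression; under the null hypothesis, $LM$ is asymptotically $\chi^2$ with degrees of freedom equal to the number of restrictions tested.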

Relevance:

20.00%

Publisher:

Abstract:

To further validate the doubly labeled water method for measurement of CO₂ production and energy expenditure in humans, we compared it with near-continuous respiratory gas exchange in nine healthy young adult males. Subjects were housed in a respiratory chamber for 4 days. Each received ²H₂¹⁸O at either a low (n = 6) or a moderate (n = 3) isotope dose. Low and moderate doses produced initial ²H enrichments of 5 and 10 × 10⁻³ atom percent excess, respectively, and initial ¹⁸O enrichments of 2 and 2.5 × 10⁻² atom percent excess, respectively. Total body water was calculated from isotope dilution in saliva collected at 4 and 5 h after the dose. CO₂ production was calculated by the two-point method using the isotopic enrichments of urines collected just before each subject entered and left the chamber. Isotope enrichments relative to predose samples were measured by isotope ratio mass spectrometry. At low isotope dose, doubly labeled water overestimated average daily energy expenditure by 8 ± 9% (SD) (range −7 to 22%). At moderate dose the difference was reduced to +4 ± 5% (range 0-9%). The isotope elimination curves for ²H and ¹⁸O from serial urines collected from one of the subjects showed expected diurnal variations but were otherwise quite smooth. The overestimate may be due to approximations in the corrections for isotope fractionation and isotope dilution. An alternative approach to the corrections is presented that reduces the overestimate to 1%.
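In its simplest form (the two-point principle before the fractionation and dilution corrections discussed above; notation ours), each isotope's elimination rate comes from the two urine enrichments, and CO₂ production from their difference:

$$
k = \frac{\ln E_1 - \ln E_2}{t_2 - t_1},
\qquad
r_{\mathrm{CO_2}} \approx \frac{N}{2}\left(k_O - k_H\right),
$$

where $E_1, E_2$ are enrichments above predose at times $t_1, t_2$, $N$ is total body water, and $k_O, k_H$ are the ¹⁸O and ²H elimination rates; ¹⁸O leaves the body as both water and CO₂ while ²H leaves as water only, so the difference isolates CO₂ production (the factor 2 reflecting the two oxygens in CO₂).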

Relevance:

20.00%

Publisher:

Abstract:

Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to efficiently reduce both local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions), which are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.

In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), which are typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator was thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10×10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise to a factor of 10. It was concluded that such a model could be used for conventional treatments using large open fields only.

The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly out of the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs.

Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risk between two treatment techniques, reflecting the large uncertainties involved with current risk models. Despite all these uncertainties, the hybrid IMRT technique investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
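As a hedged illustration of applying a risk model to organ doses (a linear, coefficient-times-dose form is among the simplest found in the literature; the doses and coefficients below are placeholders, not values from this work):

```python
# Hedged sketch, not the thesis' risk models: a linear (LNT-style) estimate
# of excess secondary cancer risk from organ mean doses. Both the doses and
# the per-Gy coefficients below are illustrative placeholders.
organ_dose_gy = {"contralateral breast": 0.8, "ipsilateral lung": 6.0, "thyroid": 0.1}
risk_per_gy = {"contralateral breast": 0.010, "ipsilateral lung": 0.007, "thyroid": 0.002}

excess_risk = sum(d * risk_per_gy[organ] for organ, d in organ_dose_gy.items())
print(f"Estimated lifetime excess risk: {excess_risk:.1%}")
```

Comparing two techniques then amounts to evaluating such a sum on each technique's MC dose distribution; the point made above is that the ratio, not only the absolute value, is sensitive to the choice of risk model.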

Relevance:

20.00%

Publisher:

Abstract:

The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the quantity calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation on the scale of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium $\bar{D}^{\mathrm{low}}_{w,Q,V_1}$, averaged over a scoring volume $V_1$ for a geometry where $V_1$ is filled with the low-density medium, to the absorbed dose to water $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$, averaged over a volume $V_2$ for a geometry where $V_2$ is filled with water. In the Monte Carlo simulations, $\bar{D}^{\mathrm{low}}_{w,Q,V_2}$ is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
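Written compactly (the symbol $k$ is our shorthand for the correction factor defined above):

$$
k = \frac{\bar{D}^{\mathrm{low}}_{w,Q,V_1}}{\bar{D}^{\mathrm{low}}_{w,Q,V_2}},
$$

with $V_1$ scored while filled with the low-density medium and $V_2$ scored after the chamber volume has been replaced by water.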

Relevance:

20.00%

Publisher:

Abstract:

The authors focus on one of the methods for connection admission control (CAC) in ATM networks: the convolution approach. With the aim of reducing the cost in terms of computation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the probabilities associated with the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
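An illustrative sketch of the underlying computation (toy parameters of ours, not the authors' exact scheme): for a class of n homogeneous on/off sources, each active with probability p and demanding b bandwidth units when active, the instantaneous aggregate demand is binomial, and independent classes combine by convolution:

```python
# Illustrative sketch, not the authors' exact scheme: instantaneous
# bandwidth demand of on/off traffic classes, combined by convolution,
# as used to decide whether a link can accept another connection.
from math import comb

def class_distribution(n, p, b):
    # P(demand = k*b) for k active sources out of n (binomial law).
    return {k * b: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

def convolve(d1, d2):
    # Distribution of the sum of two independent demands.
    out = {}
    for x1, p1 in d1.items():
        for x2, p2 in d2.items():
            out[x1 + x2] = out.get(x1 + x2, 0.0) + p1 * p2
    return out

# Two traffic classes sharing a link of capacity C (illustrative values).
agg = convolve(class_distribution(20, 0.3, 2), class_distribution(5, 0.5, 10))
C = 60
p_overflow = sum(p for demand, p in agg.items() if demand > C)
print(f"P(demand > {C}) = {p_overflow:.4f}")  # basis for accept/refuse decision
```

A new connection is accepted only if the overflow probability recomputed with it included stays below the target; the multinomial view yields these occupancy probabilities directly, and removing a departing connection becomes a simple deconvolution.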

Relevance:

20.00%

Publisher:

Abstract:

Throughout the history of electrical engineering education, vector and phasor diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving systems of equations by means of numerical methods. Diagrams have thus been relegated to the academic background: although explained in theory, they are not used in a practical way within specific examples. This may work against students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. This modification also allows forecasting the obsolescence of this behavior and the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process by presenting comparative undergraduate results from three academic years.
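A minimal sketch of the classical construction the authors start from (illustrative line constants, not the paper's modified diagram): the sending-end voltage phasor Vs = A·Vr + B·Ir is traced as the receiving-end load varies at constant power factor:

```python
# Sketch of the classical Perrine-Baum idea, not the authors' modification:
# trace the sending-end voltage phasor of a line, via its ABCD constants,
# as the receiving-end load grows at constant power factor.
# The line constants A and B below are illustrative placeholders.
import cmath
import math

A = cmath.rect(0.98, math.radians(0.3))    # dimensionless
B = cmath.rect(40.0, math.radians(75.0))   # ohms
Vr = 220e3 / math.sqrt(3)                  # phase voltage, V (reference phasor)

for load_mw in (0, 50, 100, 150, 200):
    if load_mw == 0:
        Ir = 0j
    else:
        s = load_mw * 1e6 / 0.9                          # 3-phase apparent power, VA
        Ir = cmath.rect(s / (3 * Vr), -math.acos(0.9))   # 0.9 lagging power factor
    Vs = A * Vr + B * Ir
    print(f"{load_mw:3d} MW: |Vs| = {abs(Vs)/1e3:6.1f} kV, "
          f"angle = {math.degrees(cmath.phase(Vs)):5.2f} deg")
```

Plotting Vs in the complex plane for each load level reproduces the locus that the Perrine-Baum diagram draws graphically.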

Relevance:

20.00%

Publisher:

Abstract:

In the static field limit, the vibrational hyperpolarizability consists of two contributions: (1) the shift in the equilibrium geometry (known as nuclear relaxation) and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions, and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
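The finite field idea can be stated compactly (textbook central-difference forms, not the paper's specific working equations): expanding the field-dependent dipole moment

$$
\mu(F) = \mu_0 + \alpha F + \tfrac{1}{2}\beta F^2 + \tfrac{1}{6}\gamma F^3 + \cdots
$$

and differencing calculations at a few field strengths gives, to lowest order,

$$
\alpha \approx \frac{\mu(F) - \mu(-F)}{2F},
\qquad
\beta \approx \frac{\mu(F) - 2\mu(0) + \mu(-F)}{F^2};
$$

evaluating these with the geometry relaxed in the field versus held fixed separates the nuclear relaxation contribution from the purely electronic one.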

Relevance:

20.00%

Publisher:

Abstract:

The vibrational configuration interaction method used to obtain static vibrational (hyper)polarizabilities is extended to dynamic nonlinear optical properties in the infinite optical frequency approximation. Illustrative calculations are carried out on H₂O and NH₃. The former molecule is weakly anharmonic, while the latter contains a strongly anharmonic umbrella mode. The effect on the vibrational (hyper)polarizabilities of various truncations of the potential energy and property surfaces involved in the calculation is examined.
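One way to see what the VCI states supply (the standard sum-over-states expression for the static vibrational polarizability; our notation): with VCI energies $E_n$ and wavefunctions $|n\rangle$ on a given potential and property surface,

$$
\alpha^{v}(0) = 2 \sum_{n \neq 0} \frac{\left|\langle 0 | \mu | n \rangle\right|^2}{E_n - E_0},
$$

so truncating the surfaces changes both the $E_n$ and the transition moments, which is precisely the sensitivity probed by the calculations on H₂O and NH₃.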