930 results for Heat - Transmission - Computer simulation


Relevance:

100.00%

Publisher:

Abstract:

A railway accident causes unpredictable damage, ranging from a simple train delay while the rail rescue team re-rails the wagon, to multimillion losses with major loss of assets (rolling stock and permanent way) and, in extreme cases, even human lives. National railways therefore constantly seek ways to plan actions that minimize this risk, one of the main ones being the establishment of sound maintenance criteria. However, these criteria usually do not consider vehicle dynamics and track geometry jointly. In this context, this work develops a mathematical model of a high-capacity railway wagon coupled with the flexibility of the track support. The model was validated and considered satisfactory by comparing the natural frequencies obtained from the real wagon, and by comparing its output, produced from an input measured with track-geometry inspection equipment, against dynamic measurements performed by an instrumented wagon. A strategic method for analyzing vehicle safety was proposed and applied, proving capable of determining the track wavelengths that should be prioritized in maintenance, as well as of analyzing wagon safety when speed restrictions are adopted.
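The wavelength-prioritization idea in this abstract can be pictured as a spectral analysis of the measured track irregularity. A minimal sketch, with a hypothetical function name and a synthetic signal (not the thesis's actual model, which feeds this content through the validated wagon model):

```python
import numpy as np

def dominant_wavelengths(irregularity, dx, n_peaks=3):
    """Return the track wavelengths (m) carrying the most spectral energy.

    irregularity: vertical track deviation sampled every dx metres.
    Hypothetical helper, not the thesis's code.
    """
    spectrum = np.abs(np.fft.rfft(irregularity - np.mean(irregularity))) ** 2
    freqs = np.fft.rfftfreq(len(irregularity), d=dx)   # cycles per metre
    order = np.argsort(spectrum[1:])[::-1] + 1         # skip the DC bin
    return [1.0 / freqs[i] for i in order[:n_peaks]]

# synthetic example: a 3 mm irregularity of 12.5 m wavelength, sampled every 0.25 m
x = np.arange(0, 500, 0.25)
signal = 0.003 * np.sin(2 * np.pi * x / 12.5)
```

In the thesis's approach, the energy found at each wavelength would then be weighted by the wagon model's dynamic response before ranking maintenance priorities.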

Relevance:

100.00%

Publisher:

Abstract:

The shoulder is the most mobile and the most unstable joint of the human body, owing to the small number of bony constraints and to the role of the soft tissues, which give it at least ten degrees of freedom. Shoulder mobility is a performance factor in several sports, but its instability leads to musculoskeletal disorders, among which rotator cuff tears are frequent and the most disabling. Range-of-motion assessment is a common index of shoulder function; however, it is often limited to a few planar measurements in which the degrees of freedom vary independently of one another. When used in musculoskeletal simulation models, these values can lead to non-physiological solutions. The objective of this thesis was to develop tools for characterizing the three-dimensional mobility of the shoulder joint, by i) providing a method and its experimental approach to assess the three-dimensional range of motion of the shoulder, including interactions between the degrees of freedom; ii) proposing a representation for interpreting the three-dimensional data obtained; iii) presenting normalized ranges of motion; iv) implementing a three-dimensional range of motion within a numerical simulation model in order to generate more realistic optimal sports movements; v) predicting safe ranges of motion and vi) safe rehabilitation exercises for patients who have undergone rotator cuff repair. i) Sixteen subjects performed series of active maximal-amplitude movements combining the shoulder's different degrees of freedom. A motion-analysis system coupled with a kinematic model of the upper limb was used to estimate the three-dimensional joint kinematics.

ii) The set of orientations, each defined by a sequence of three angles, was enclosed in a non-convex polyhedron representing the joint mobility space and accounting for the interactions between degrees of freedom. Combining elevation and rotation series is recommended to assess the complete range of motion of the shoulder. iii) A normalized mobility space was also defined, encompassing the positions reached by at least 50% of the subjects and of average volume. iv) This average space, defining physiological mobility, was used within a kinematic simulation model to optimize the technique of an acrobatic bar-release element performed by gymnasts. With the planar joint limits commonly used to constrain shoulder mobility, only 17% of the optimal solutions are physiological. Besides ensuring realistic solutions, our three-dimensional joint constraint did not affect the computational cost of the optimization. v) and vi) The sixteen participants also performed series of passive range-of-motion movements and passive rehabilitation exercises. The stress in each of the rotator cuff muscles during these movements was estimated with a musculoskeletal model reproducing different types and sizes of tears. Safe stress thresholds were used to distinguish the ranges of motion that do or do not endanger the integrity of the surgical repair. Larger tears, as well as tears affecting several muscles, reduced the safe joint mobility space.

Mainly, glenohumeral elevations below 38° or above 65°, or performed with the arm held in internal rotation, generate excessive stress for most types and sizes of injury during abduction, scaption or flexion movements. This thesis developed an innovative representation of shoulder mobility that accounts for the interactions between degrees of freedom. With this representation, clinical assessment can become more exhaustive and thus broaden the possibilities of diagnosing shoulder disorders. Movement simulation can now be more realistic. Finally, we showed the importance of personalizing patient rehabilitation in terms of range of motion, since early passive rehabilitation exercises can contribute to a re-tear because of the excessive stress they impose on the tendons.
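Step ii), enclosing measured three-angle orientations in a mobility space and testing whether a candidate orientation is physiological, can be sketched as follows. This uses a convex hull as a deliberate simplification of the thesis's non-convex polyhedron, with hypothetical angle ranges:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical cloud of maximal-amplitude orientations
# (plane of elevation, elevation, axial rotation), in degrees.
rng = np.random.default_rng(0)
reached = rng.uniform([-30, 0, -60], [120, 150, 60], size=(500, 3))

# Convex simplification of the (in reality non-convex) mobility space.
hull = Delaunay(reached)

def is_physiological(orientation):
    """True if a 3-angle orientation lies inside the sampled mobility space."""
    return bool(hull.find_simplex(orientation) >= 0)
```

A simulation model would then reject (or penalize) any candidate joint configuration for which `is_physiological` returns False, instead of clamping each angle independently.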

Relevance:

100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

100.00%

Publisher:

Abstract:

Thesis--University of Illinois.

Relevance:

100.00%

Publisher:

Abstract:

"August 1981."

Relevance:

100.00%

Publisher:

Abstract:

"December 1982."

Relevance:

100.00%

Publisher:

Abstract:

Plant breeders use many different breeding methods to develop superior cultivars. However, it is difficult, cumbersome, and expensive to evaluate the performance of a breeding method or to compare the efficiencies of different breeding methods within an ongoing breeding program. To facilitate comparisons, we developed a QU-GENE module called QuCim that can simulate a large number of breeding strategies for self-pollinated species. The wheat breeding strategy Selected Bulk, used by CIMMYT's wheat breeding program, was defined in QuCim as an example of how this is done. This selection method was simulated in QuCim to investigate the effects of deviations from the additive genetic model, in the form of dominance and epistasis, on selection outcomes. The simulation results indicate that the partial dominance model does not greatly influence genetic advance compared with the pure additive model. Genetic advance in genetic systems with overdominance and epistasis is slower than when gene effects are purely additive or partially dominant. The additive gene effect is an appropriate indicator of the change in gene frequency following selection when epistasis is absent. In the absence of epistasis, the additive variance decreases rapidly with selection; when epistasis is present, however, it remains relatively fixed after several cycles of selection. The variance from partial dominance is relatively small and therefore hard to detect from the covariances among half sibs and among full sibs. The dominance variance from the overdominance model can be identified successfully, but it does not change significantly, which confirms that overdominance cannot be utilized by an inbred breeding program. QuCim is an effective tool for comparing selection strategies and for validating theories in quantitative genetics.
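The kind of comparison QuCim automates can be illustrated with a toy truncation-selection loop. This is a hypothetical sketch, not the QU-GENE/QuCim engine; `d` stands in for a per-locus dominance deviation:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_selection(n_loci=20, pop=200, cycles=5, d=0.0):
    """Toy truncation selection on a biallelic polygenic trait.

    d is a per-locus dominance deviation (0 = purely additive).
    Hypothetical illustration, not the QU-GENE/QuCim engine.
    """
    geno = rng.integers(0, 3, size=(pop, n_loci))     # favourable-allele counts
    for _ in range(cycles):
        value = geno.sum(axis=1)                       # additive value
        value = value + d * (geno == 1).sum(axis=1)    # dominance deviation
        value = value + rng.normal(0.0, 2.0, pop)      # environmental noise
        top = np.argsort(value)[-pop // 5:]            # keep the best 20%
        p = geno[top] / 2.0                            # per-parent allele freqs
        # next generation: two gametes, each from a random selected parent
        gam1 = rng.random((pop, n_loci)) < p[rng.integers(0, len(top), pop)]
        gam2 = rng.random((pop, n_loci)) < p[rng.integers(0, len(top), pop)]
        geno = gam1.astype(int) + gam2.astype(int)
    return geno.mean() / 2.0                           # final favourable-allele frequency
```

Running the same loop with different values of `d` (or an epistatic value function) is the spirit of the paper's comparison of genetic models under a fixed selection strategy.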

Relevance:

100.00%

Publisher:

Abstract:

Patellamide D (patH₄) is a cyclic octapeptide isolated from the ascidian Lissoclinum patella. The peptide possesses a 24-azacrown-8 macrocyclic structure containing two oxazoline and two thiazole rings, each separated by an amino acid. The present spectrophotometric, electron paramagnetic resonance (EPR) and mass spectral studies show that patellamide D reacts with CuCl₂ and triethylamine in acetonitrile to form mononuclear and binuclear copper(II) complexes containing chloride. Molecular modelling and EPR studies suggest that the chloride anion bridges the copper(II) ions in the binuclear complex [Cu₂(patH₂)(μ-Cl)]⁺. These results contrast with a previous study employing both base and methanol, the latter substituting for chloride in the copper(II) complexes en route to the stable μ-carbonato binuclear copper(II) complex [Cu₂(patH₂)(μ-CO₃)]. Solvent clearly plays an important role both in stabilising these metal ion complexes and in influencing their chemical reactivities. © 2004 Elsevier Inc. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has been done in the last two decades to help understand the underlying phenomena. Yet, until recently, the development of granulation systems was based mostly on established practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to the system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process systems approaches to granulation processes. This paper highlights some of these aspects of granulation. © 2005 Elsevier Ltd. All rights reserved.
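The "improved mathematical representations" surveyed here are typically population balances. As an illustrative stand-in (not a model from the paper), here is one explicit step of a discrete coagulation balance with a constant kernel:

```python
import numpy as np

def smoluchowski_step(n, beta, dt):
    """One explicit Euler step of a discrete coagulation population balance.

    n[i]: number density of size class i+1; beta: constant coalescence
    kernel. A minimal stand-in for the granulation population-balance
    models the paper surveys (no breakage, growth, or nucleation terms).
    """
    k = len(n)
    dn = np.zeros(k)
    for i in range(k):
        # birth: coalescence of two classes whose sizes sum to i + 1
        birth = 0.5 * sum(beta * n[j] * n[i - 1 - j] for j in range(i))
        # death: coalescence of class i with any other particle
        death = beta * n[i] * n.sum()
        dn[i] = birth - death
    return n + dt * dn
```

Industrial models add size-dependent kernels, breakage and nucleation terms, and solve the balance with the specialised discretisation techniques the paper alludes to.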

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we investigate the effects of various potential models in the description of vapor–liquid equilibria (VLE) and adsorption of simple gases on highly graphitized thermal carbon black. It is found that some potential models proposed in the literature are not suitable for the description of VLE (saturated gas and liquid densities and the vapor pressure as functions of temperature). Simple gases, such as neon, argon, krypton, xenon, nitrogen, and methane, are studied in this paper. To describe the isotherms on graphitized thermal carbon black correctly, the surface-mediation damping factor introduced in our recent publication should be used to correctly calculate the fluid–fluid interaction energy between particles close to the surface. It is found that the damping constant for the noble gas family is linearly dependent on the polarizability, suggesting that the electric field of the graphite surface has a direct induction effect on the induced dipole of these molecules. As a result of this polarization by the graphite surface, the fluid–fluid interaction energy is reduced whenever two particles are near the surface. In the case of methane, we found that the damping constant is less than that of a noble gas having a similar polarizability, while in the case of nitrogen the damping factor is much greater, and this is most likely due to the quadrupolar nature of nitrogen.
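The surface-mediation damping described above can be sketched as a scaling of the Lennard-Jones well depth for particle pairs near the surface. The functional form, cutoff and parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy (eps in K, lengths in m)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def damped_ff_energy(r, z1, z2, eps, sigma, damping=0.9, z_cut=0.6e-9):
    """Fluid-fluid energy, reduced when both particles sit near the surface.

    Schematic reading of the surface-mediation idea: the well depth is
    scaled by a damping factor for pairs whose heights z1, z2 above the
    graphite plane are both within z_cut. The cutoff form and the values
    of damping and z_cut are illustrative guesses, not the paper's.
    """
    if z1 < z_cut and z2 < z_cut:
        eps = damping * eps
    return lj(r, eps, sigma)
```

In a GCMC run, this weakened first-layer attraction is what pulls the simulated monolayer region back down onto the experimental isotherm.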

Relevance:

100.00%

Publisher:

Abstract:

The advent of molecular markers as a tool to aid selection has provided plant breeders with the opportunity to rapidly deliver superior genetic solutions to problems in agricultural production systems. However, a major constraint to the implementation of marker-assisted selection (MAS) in pragmatic breeding programs in the past has been the perceived high relative cost of MAS compared to conventional phenotypic selection. In this paper, computer simulation was used to design a genetically effective and economically efficient marker-assisted breeding strategy aimed at a specific outcome. Under investigation was a strategy involving the integration of both restricted backcrossing and doubled haploid (DH) technology. The point at which molecular markers are applied in a selection strategy can be critical to the effectiveness and cost efficiency of that strategy. The application of molecular markers was considered at three phases in the strategy: allele enrichment in the BC1F1 population, gene selection at the haploid stage, and selection for recurrent parent background of DHs prior to field testing. Overall, incorporating MAS at all three stages was the most effective, in terms of both delivering a high frequency of desired outcomes and combining the selected favourable rust resistance, end-use quality and grain yield alleles. However, when costs were included in the model, the combination of MAS at the BC1F1 and haploid stages was identified as the optimal strategy. A detailed economic analysis showed that incorporating marker selection at these two stages not only increased genetic gain over the phenotypic alternative but actually reduced the overall cost by 40%.
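The cost side of the argument can be made concrete with a toy calculation of expected cost per on-target line; all numbers here are hypothetical, not CIMMYT's actual costings:

```python
def p_all_target(n_loci, p_per_locus):
    """Probability one candidate carries the target allele at all loci."""
    return p_per_locus ** n_loci

def cost_per_success(n_candidates, unit_cost, p_success):
    """Expected cost per candidate carrying the full target allele set."""
    return (n_candidates * unit_cost) / (n_candidates * p_success)

# Hypothetical numbers: 4 unlinked target genes, each with probability
# 0.5 of being transmitted to a given derived line.
p_pheno = p_all_target(4, 0.5)                  # 1/16 without markers
cost_without = cost_per_success(200, 2.0, p_pheno)
cost_with = cost_per_success(200, 5.0, 1.0)     # pricier per unit, but every
                                                # retained line is on target
```

Even with a higher per-unit cost, screening out off-target lines early (as at the BC1F1 and haploid stages) can lower the cost per delivered genotype, which is the mechanism behind the 40% saving reported above.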

Relevance:

100.00%

Publisher:

Abstract:

In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using the local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid–fluid interaction. The range of well-depths considered in our GCMC simulations is from 10 to 100 K, wide enough to cover all the carbon surfaces we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given experimental data of nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by inversion, using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach against a number of data sets in the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the two surface areas differ by about 10%, underlining the need for a consistent method to determine the surface area reliably. We therefore suggest the approach of this paper as an alternative to the BET method, given the long-recognized unrealistic assumptions of BET theory. Besides the surface area, the method also provides the differential area distribution versus well-depth. This information could be used as a microscopic fingerprint of the carbon surface: samples prepared from different precursors and under different activation conditions are expected to have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples; the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same patterns, suggesting that the distribution is characteristic of this carbon.
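The inversion step described above (local GCMC isotherms plus regularization giving an area distribution) can be sketched with a plain ridge (Tikhonov) solve, used here as a stand-in for the paper's regularization method:

```python
import numpy as np

def area_distribution(kernel, isotherm, alpha=1e-2):
    """Tikhonov-regularized solve of kernel @ a = isotherm for area weights a.

    kernel[i, j]: local surface excess on a patch of well-depth class j at
    pressure point i (e.g. from GCMC); isotherm[i]: measured total excess.
    A plain ridge solve stands in for the paper's regularization method.
    """
    K = np.asarray(kernel, dtype=float)
    A = K.T @ K + alpha * np.eye(K.shape[1])
    a = np.linalg.solve(A, K.T @ np.asarray(isotherm, dtype=float))
    return np.clip(a, 0.0, None)      # patch areas cannot be negative

# synthetic check: recover known weights from a made-up two-patch kernel
K = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
true_a = np.array([2.0, 3.0])
a_est = area_distribution(K, K @ true_a, alpha=1e-6)
```

Summing the recovered weights gives the total surface area; plotting them against well-depth gives the "fingerprint" distribution the abstract refers to.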

Relevance:

100.00%

Publisher:

Abstract:

In this paper we investigate the effects of surface mediation on the adsorption behavior of argon at different temperatures on homogeneous graphitized thermal carbon black and on a heterogeneous nongraphitized carbon black surface. Grand canonical Monte Carlo (GCMC) simulation is used to study the adsorption, and its performance is tested against a number of experimental data on graphitized thermal carbon black (known to be highly homogeneous) available in the literature. The surface-mediation effect is shown to be essential for a correct description of the adsorption isotherm: without it, the GCMC results are always greater than the experimental data in the region where the monolayer is being completed, because the fluid–fluid interaction between particles in the first layer close to the solid surface is overestimated. It is the surface mediation that reduces this fluid–fluid interaction in the adsorbed layers, and the GCMC simulations presented in this paper, which account for it, therefore describe the data better. Once this surface mediation has been determined, the surface excess of argon on heterogeneous carbon surfaces whose solid–fluid interaction energies differ from that of graphite can be readily obtained. Since a real heterogeneous carbon surface is not the same as the homogeneous graphite surface, it can be described by an area distribution in terms of the well depth of the solid–fluid energy. Assuming a patchwise topology of the surface, with patches of uniform solid–fluid well depth, the adsorption on a real carbon surface can be determined as an integral of the local surface excess of each patch with respect to the differential area. When this is matched against the experimental data of a carbon surface, we can derive the area distribution versus energy and hence the geometrical surface area. This new approach is illustrated with the adsorption of argon on a nongraphitized carbon at 87.3 and 77 K; the GCMC surface area differs from the BET surface area by about 7%, and the description of the isotherm in the region of BET validity (relative pressures 0.06 to 0.2) is much better with our method than with the BET equation.
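For comparison, the classical BET surface area mentioned above comes from a linear fit in the 0.06 to 0.2 relative-pressure window. A minimal sketch, assuming the standard N2 cross-section of 0.162 nm² per molecule:

```python
import numpy as np

def bet_surface_area(p_rel, v_ads, cross_section=0.162e-18, n_av=6.022e23):
    """Surface area (m^2/g) from the linearized BET plot.

    p_rel: relative pressures (ideally 0.06-0.2); v_ads: amount adsorbed
    (mol/g); cross_section: area per adsorbed molecule in m^2 (default:
    the standard N2 value of 0.162 nm^2).
    """
    y = p_rel / (v_ads * (1.0 - p_rel))          # BET transform
    slope, intercept = np.polyfit(p_rel, y, 1)   # y = (c-1)/(vm*c)*x + 1/(vm*c)
    v_m = 1.0 / (slope + intercept)              # monolayer capacity, mol/g
    return v_m * n_av * cross_section

# synthetic isotherm generated from the BET equation itself (vm = 1e-4, c = 100)
x = np.linspace(0.06, 0.2, 8)
v = 1e-4 * 100.0 * x / ((1.0 - x) * (1.0 + 99.0 * x))
```

The paper's GCMC approach replaces this two-parameter model with simulated local isotherms, which is where the roughly 7% discrepancy quoted above comes from.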