964 results for phase-field models


Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which then corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and makes the uncertainty propagation more accurate and more robust.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is therefore relevant not only for uncertainty propagation but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation in the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation.

An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as each new flow simulation is performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which we apply the methodology to a saline-intrusion problem in a coastal aquifer.
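The functional error-model workflow described above lends itself to a compact sketch. The following Python fragment is a minimal illustration on synthetic curves: reduce the proxy and exact response curves to a few principal-component scores, learn a regression between the two score sets on the small training subset, and use it to correct every remaining proxy response. The data, subset sizes, and the use of ordinary PCA with linear regression are assumptions for illustration, not the thesis's actual FPCA implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
n_total, n_train = 500, 40

# Hypothetical responses: one breakthrough-like curve per realization.
exact_all = np.array([np.exp(-((t - 0.3 - 0.2 * u) ** 2) / 0.01)
                      for u in rng.random(n_total)])
proxy_all = 0.8 * exact_all + 0.05 * rng.standard_normal(exact_all.shape)  # cheap, biased solver

# Training subset: realizations for which both responses are known.
idx = rng.choice(n_total, n_train, replace=False)

# Dimension reduction of the curves (a stand-in for functional PCA).
pca_p = PCA(n_components=5).fit(proxy_all)        # basis from all cheap runs
pca_e = PCA(n_components=5).fit(exact_all[idx])   # basis from the training subset
S_p = pca_p.transform(proxy_all)

# Error model: regression between proxy and exact scores on the subset.
reg = LinearRegression().fit(S_p[idx], pca_e.transform(exact_all[idx]))

# Correct every proxy curve, predicting the "expected" exact response.
exact_pred = pca_e.inverse_transform(reg.predict(S_p))
print(exact_pred.shape)   # (500, 200): one corrected curve per realization
```

A convenient property of working in the reduced space is that the regression quality can be diagnosed there directly, which is the advantage the FPCA formulation brings.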

Relevance: 30.00%

Abstract:

Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and by using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge-repulsion energies derived from traditional formulas. Reliable a priori molecular orbitals and electron-excitation properties can be obtained after configuration interaction of singly excited determinants, and the Hamiltonian, although simplified, retains its interpretative possibilities. Tests against unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. The calculation of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, shows reliability, giving a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm for the "dark state." In this very important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
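The core computational step named here, configuration interaction of singly excited determinants (CIS) on top of self-consistent molecular orbitals, can be illustrated with a mainstream package. CNDOL itself is not available in PySCF, so the sketch below runs an ab initio analogue (RHF followed by CIS via the Tamm-Dancoff approximation) on a small molecule; the choice of water, the geometry, and the basis set are assumptions for brevity rather than the article's test cases.

```python
from pyscf import gto, scf, tdscf

# Small illustrative molecule (the article's tests use benzene, furfural, etc.).
mol = gto.M(atom="O 0 0 0; H 0 -0.757 0.587; H 0 0.757 0.587", basis="6-31g")

mf = scf.RHF(mol).run()     # molecular orbitals from a self-consistent field

td = tdscf.TDA(mf)          # CIS: interaction of singly excited determinants
td.nstates = 3
td.kernel()

# Excitation energies (Hartree) converted to wavelengths in nm.
for e in td.e:
    print(f"{45.5634 / e:.0f} nm")
```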

Relevance: 30.00%

Abstract:

Streptavidin, a tetrameric protein secreted by Streptomyces avidinii, binds tightly to biotin, a small growth factor. One of the numerous applications of this high-affinity system is the streptavidin-coated surface of bioanalytical assays, which serves as a universal binder for straightforward immobilization of any biotinylated molecule. With streptavidin-biotin technology, proteins can be immobilized with a lower risk of denaturation than with direct passive adsorption. The purpose of this study was to characterize the properties and effects of streptavidin-coated binding surfaces on the performance of solid-phase immunoassays and to investigate the contributions of surface modifications. The characterization tools and methods established in the study enabled convenient monitoring and binding-capacity determination of streptavidin-coated surfaces. Schematic modeling of the monolayer surface and quantification of the adsorbed streptavidin disclosed the possibilities and the limits of passive adsorption. The measured yield of 250 ng/cm² represents approximately 65% coverage relative to a modeled complete monolayer, which is consistent with theoretical surface models. Modifications such as polymerization and chemical activation of streptavidin gave a close to 10-fold increase in the biotin-binding density of the surface compared with a regular streptavidin coating, and chemical modification also improved the stability of the surface against leaching. The increased binding densities and capacities enabled wider high-end dynamic ranges in the solid-phase immunoassays, especially when fragments of the capture antibodies were used instead of intact antibodies for binding the antigen. The binding capacity of the streptavidin surface was not, by itself, predictive of the low-end performance of the immunoassays or of assay sensitivity; other features, such as non-specific binding, variation, and leaching, turned out to be more relevant. Immunoassays that use a direct surface readout of time-resolved fluorescence from a washed surface depend on the density of labeled antibodies within a defined area of the surface. The binding surface was therefore condensed into a spot by coating streptavidin from liquid droplets in special microtiter wells with a small circular indentation at the bottom. The condensed binding area allowed denser packing of the labeled antibodies, which resulted in a 5- to 6-fold increase in signal-to-background ratios and an equivalent improvement in the detection limits of the solid-phase immunoassays. This work showed that the properties of streptavidin-coated surfaces can be modified and that the defined properties of streptavidin-based immunocapture surfaces contribute to the performance of heterogeneous immunoassays.
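The reported monolayer figure can be checked with back-of-envelope arithmetic. The sketch below reproduces the ~65% coverage from the 250 ng/cm² yield, using an assumed tetramer mass of ~60 kDa and an assumed molecular footprint of ~26 nm² (roughly the crystallographic cross-section); both values are illustrative assumptions, not numbers taken from the study.

```python
from scipy.constants import Avogadro  # 6.022e23 / mol

# Assumed molecular properties (not from the study).
mw_g_per_mol = 60_000.0            # streptavidin tetramer, ~60 kDa
footprint_nm2 = 5.0 * 5.2          # ~26 nm^2 per molecule

yield_g_per_cm2 = 250e-9           # reported adsorption yield, 250 ng/cm^2
molecules_per_cm2 = yield_g_per_cm2 / mw_g_per_mol * Avogadro   # ~2.5e12
area_per_molecule_nm2 = 1e14 / molecules_per_cm2                # 1 cm^2 = 1e14 nm^2
coverage = footprint_nm2 / area_per_molecule_nm2
print(f"fractional coverage ~ {coverage:.0%}")                  # ~65%
```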

Relevance: 30.00%

Abstract:

BACKGROUND: Postoperative hemithoracic radiotherapy has been used to treat malignant pleural mesothelioma, but it has not been assessed in a randomised trial. We assessed high-dose hemithoracic radiotherapy after neoadjuvant chemotherapy and extrapleural pneumonectomy in patients with malignant pleural mesothelioma. METHODS: We did this phase 2 trial in two parts at 14 hospitals in Switzerland, Belgium, and Germany. We enrolled patients with pathologically confirmed malignant pleural mesothelioma (resectable TNM stages T1-3 N0-2, M0; WHO performance status 0-1; age 18-70 years). In part 1, patients were given three cycles of neoadjuvant chemotherapy (cisplatin 75 mg/m² and pemetrexed 500 mg/m² on day 1, every 3 weeks) and extrapleural pneumonectomy; the primary endpoint was complete macroscopic resection (R0-1). In part 2, participants with complete macroscopic resection were randomly assigned (1:1) to receive high-dose radiotherapy or not. The target volume for radiotherapy encompassed the entire hemithorax, the thoracotomy channel, and mediastinal nodal stations if affected by the disease or violated surgically. A boost was given to areas at high risk of locoregional relapse. Allocation was stratified by centre, histology (sarcomatoid vs epithelioid or mixed), mediastinal lymph node involvement (N0-1 vs N2), and T stage (T1-2 vs T3). The primary endpoint of part 1 was the proportion of patients achieving complete macroscopic resection (R0 and R1); the primary endpoint of part 2 was locoregional relapse-free survival, analysed by intention to treat. The trial is registered with ClinicalTrials.gov, number NCT00334594. FINDINGS: We enrolled patients between Dec 7, 2005, and Oct 17, 2012. Overall, we analysed 151 patients receiving neoadjuvant chemotherapy, of whom 113 (75%) had extrapleural pneumonectomy. Median follow-up was 54·2 months (IQR 32-66). 52 (34%) of 151 patients achieved an objective response. The most common grade 3 or 4 toxic effects were neutropenia (21 [14%] of 151 patients), anaemia (11 [7%]), and nausea or vomiting (eight [5%]). Complete macroscopic resection was achieved in 96 (64%) of the 151 patients. We enrolled 54 patients in part 2, 27 in each group; the main reasons for exclusion were patient refusal (n=20) and ineligibility (n=10). 25 of 27 patients completed radiotherapy, with a median total dose of 55·9 Gy (IQR 46·8-56·0). Median locoregional relapse-free survival from surgery was 7·6 months (95% CI 4·5-10·7) in the no-radiotherapy group and 9·4 months (6·5-11·9) in the radiotherapy group. The most common grade 3 or higher toxic effects related to radiotherapy were nausea or vomiting (three [11%] of 27 patients), oesophagitis (two [7%]), and pneumonitis (two [7%]); one patient died of pneumonitis. We recorded no toxic-effects data for the control group. INTERPRETATION: Our findings do not support the routine use of hemithoracic radiotherapy for malignant pleural mesothelioma after neoadjuvant chemotherapy and extrapleural pneumonectomy. FUNDING: Swiss Group for Clinical Cancer Research, Swiss State Secretariat for Education, Research and Innovation, Eli Lilly.

Relevance: 30.00%

Abstract:

In vivo ¹H MR spectroscopy allows the non-invasive characterization of brain metabolites, and it has been used to study brain metabolic changes in a wide range of neurodegenerative diseases. The prion diseases form a group of fatal neurodegenerative diseases, also described as transmissible spongiform encephalopathies. The mechanism by which prions elicit brain damage remains unclear, and different transgenic mouse models of prion disease have therefore been created. We performed an in vivo longitudinal ¹H MR spectroscopy study at 14.1 T to measure the neurochemical profile of Prnp-/- and PrPΔ32-121 mice in the hippocampus and cerebellum. Using high-field MR spectroscopy, we were able to analyze the in vivo brain metabolites of Prnp-/- and PrPΔ32-121 mice in detail. An increase in myo-inositol, glutamate, and lactate concentrations together with a decrease in N-acetylaspartate concentration was observed, providing information complementary to previous measurements.

Relevance: 30.00%

Abstract:

Guex, KJ, Lugrin, V, Borloz, S, and Millet, GP. Influence on strength and flexibility of a swing phase-specific hamstring eccentric program in sprinters' general preparation. J Strength Cond Res 30(2): 525-532, 2016. Hamstring injuries are common in sprinters and mainly occur during the terminal swing phase. Eccentric training has been shown to reduce the hamstring injury rate by improving several risk factors. The aim of this study was to test the hypothesis that additional swing-phase-specific hamstring eccentric training, performed by well-trained sprinters at the start of the winter preparation, improves strength, hamstring-to-quadriceps ratio, optimum angle, and flexibility more than a similar program without hamstring eccentric exercises. Twenty sprinters were randomly allocated to an eccentric (n = 10) or a control group (n = 10). Both groups performed their usual track and field training throughout the study period. Sprinters in the eccentric group performed an additional 6-week hamstring eccentric program specific to the swing phase of the running cycle (eccentric high-load open-chain kinetic movements covering the whole hamstring length-tension relationship, performed at slow to moderate velocity). Isokinetic and flexibility measurements were taken before and after the intervention. The eccentric group increased hamstring peak torque in concentric mode at 60°/s by 16% (p < 0.001) and at 240°/s by 10% (p < 0.01), in eccentric mode at 30°/s by 20% (p < 0.001) and at 120°/s by 22% (p < 0.001), conventional and functional ratios by 12% (p < 0.001), and flexibility by 4° (p < 0.01), whereas the control group increased hamstring peak torque only in eccentric mode, at 30°/s by 6% (p ≤ 0.05) and at 120°/s by 6% (p < 0.01). It was concluded that additional swing-phase-specific hamstring eccentric training in sprinters seems crucial for addressing different risk factors for hamstring strain injury, such as eccentric and concentric strength, hamstring-to-quadriceps ratio, and flexibility.

Relevance: 30.00%

Abstract:

Care at the end of life is one of the fundamental pillars of palliative care, and it is from the social work profession that this care can be promoted in terms of quality, tranquility, and preservation of the values of people in a terminal situation. For this reason, this research document has been prepared. It opens with a literature review covering the intervention methodology of social workers in the palliative care systems of Spain and the United Kingdom, the pioneering country in the creation of palliative care units. This review details the existing differences between the two territories in terms of intervention and the development of the field. The detailed description of the two intervention models aims to answer the question of what intervention methodology social workers follow in the palliative care system. To reach this answer, questions are addressed such as the objectives that guide the work of these professionals, the functions they perform in their daily professional practice, and the skills they must have for their intervention to be optimal, both for the professionals themselves and for the people with whom they intervene. Following the literature review, three exploratory interviews are conducted with three social workers from different levels of the health system who promote palliative care in Catalonia. The questions examined in the literature review (objectives, functions, and skills of social workers) are analyzed, allowing the theoretical perspective to be compared with the primary information obtained. Finally, a project proposal is presented that would allow a deeper study of the intervention of social workers in Catalonia and the United Kingdom, comparing the two models through forty-eight interviews distributed evenly across the two territories. In this way, the actions that could be improved and the interventions best suited to the type of care required in today's palliative care systems will be specified.

Relevance: 30.00%

Abstract:

Water withdrawal from Mediterranean reservoirs in summer is usually very high. Because of this, stratification is often continuous and far from the typical two-layered structure, favoring the excitation of higher vertical modes. The analysis of wind, temperature, and current data from Sau reservoir (Spain) shows that the third vertical mode of the internal seiche (baroclinic mode) dominated the internal wave field at the beginning of September 2003. We used a continuous-stratification two-dimensional model to calculate the period and velocity distribution of the various modes of the internal seiche, and we found that the period of the third vertical mode is ~24 h, which coincides with the period of the dominating winds. As a result of the resonance between the third mode and the wind, the other oscillation modes were not excited during this period.
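The modal computation referred to here can be sketched compactly. Under long-wave, continuous-stratification assumptions, the vertical structure w(z) of each baroclinic mode solves w'' + (N²/c²)w = 0 with w = 0 at the surface and bottom, and the fundamental seiche period of vertical mode n in a basin of length L is roughly 2L/c_n. The buoyancy-frequency profile, depth, and basin length below are illustrative assumptions, not Sau reservoir data.

```python
import numpy as np
from scipy.linalg import eigh

# Assumed stratification: 50 m deep, N^2 peaked at a mid-depth thermocline.
H, L = 50.0, 4000.0                          # depth (m), basin length (m)
z = np.linspace(0.0, H, 201)[1:-1]           # interior grid (w=0 at both ends)
dz = z[1] - z[0]
N2 = 1e-4 + 4e-3 * np.exp(-((z - 15.0) / 5.0) ** 2)   # N^2(z) in s^-2

# Discretized eigenproblem: (-D2) w = (1/c^2) diag(N^2) w.
n = z.size
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dz**2
lam, W = eigh(-D2, np.diag(N2))              # ascending lam = 1/c^2, mode 1 first

c = 1.0 / np.sqrt(lam[:3])                   # phase speeds of vertical modes 1-3
T_hours = 2.0 * L / c / 3600.0               # fundamental seiche period per mode
print(dict(zip(["V1", "V2", "V3"], np.round(T_hours, 1))))
```

Higher vertical modes have smaller phase speeds and hence longer periods, which is why a mode 3 period can land near the diurnal wind period while modes 1 and 2 do not.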

Relevance: 30.00%

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the cost involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics makes it difficult or impossible to solve the equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, where the setup and parameters can be altered much more easily than in a real-world experiment.

The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids) and for fluid flow through porous media. The models have merit as scientific tools and also have practical application in industry. Most of the numerical simulations were done with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field.

The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; moreover, the result corresponds to the experimental observation that the RBC deforms during its movement. The concluding remarks provide a methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared with cases where the magnetic field is parallel to the temperature gradient. In addition, statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the largest and the smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied, and the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
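For the porous-media comparison mentioned above, the Macdonald et al. (1979) result is an Ergun-type correlation; the commonly quoted form uses a constant of about 180 for the viscous term and about 1.8 for the inertial term (rising toward 4 for rough particles). The sketch below evaluates it for an assumed water/glass-bead case; the fluid properties, bead size, and porosity are illustrative assumptions.

```python
import numpy as np

def macdonald_pressure_gradient(u, d_p, eps, rho=1000.0, mu=1e-3, A=180.0, B=1.8):
    """Ergun-type pressure gradient (Pa/m) for a packed bed.

    Macdonald et al. (1979) revisited Ergun's constants, recommending
    A ~ 180 and B ~ 1.8 for smooth particles (B up to ~4 for rough ones).
    u: superficial velocity (m/s), d_p: particle diameter (m), eps: porosity.
    """
    viscous = A * mu * (1 - eps) ** 2 / (eps**3 * d_p**2) * u
    inertial = B * rho * (1 - eps) / (eps**3 * d_p) * u**2
    return viscous + inertial

# Example: water through a 1 mm bead pack with porosity 0.4 (assumed values).
for u in (1e-4, 1e-3, 1e-2):
    Re = 1000.0 * u * 1e-3 / (1e-3 * (1 - 0.4))   # particle Reynolds number
    dpdx = macdonald_pressure_gradient(u, 1e-3, 0.4)
    print(f"u={u:.0e} m/s  Re={Re:.2f}  dP/dx={dpdx:.1f} Pa/m")
```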

Relevance: 30.00%

Abstract:

Peering into the field of Alzheimer's disease (AD), the outsider realizes that many of the therapeutic strategies tested (in animal models) have been successful. One may also notice that there is a deficit in translational research, i.e., in taking a drug that is successful in mice and translating it to the patient. Efforts are still focused on novel projects to expand the therapeutic arsenal to 'cure mice.' The scientific reasons behind so many successful strategies are not obvious. This article aims to review the current approaches to combat AD and to open a debate on common mechanisms of cognitive enhancement and neuroprotection. In short, either the rodent models are not good and should be discontinued, or we should extract the most useful information from those models. An example of a question that may be debated for the advancement of AD therapy is: in addition to reducing amyloid and tau pathologies, would it be necessary to boost synaptic strength and cognition? The debate could provide clues for turning around the current negative record in generating effective drugs for patients. Furthermore, the discovery of biomarkers in human body fluids, and a clear distinction between cognitive enhancers and disease-modifying strategies, should be instrumental for advancing anti-AD drug discovery.

Relevance: 30.00%

Abstract:

The nonequilibrium phase transitions occurring in a fast-ionic-conductor model and in a reaction-diffusion Ising model are studied by Monte Carlo finite-size scaling to reveal nonclassical critical behavior; our results are compared with those in related models.
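As a generic illustration of the Monte Carlo finite-size scaling technique invoked here (applied to the equilibrium 2D Ising model rather than the paper's nonequilibrium models), the Binder cumulant U_L = 1 - <m^4>/(3<m^2>^2) can be computed for several lattice sizes; the crossing of the U_L(T) curves locates the critical point. Lattice sizes, sweep counts, and temperatures below are illustrative choices.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def binder_cumulant(L, T, sweeps=2000, burn=500, seed=1):
    """U_L = 1 - <m^4>/(3 <m^2>^2); curves for different L cross near T_c."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    m2 = m4 = n = 0
    for sweep in range(sweeps):
        metropolis_sweep(spins, 1.0 / T, rng)
        if sweep >= burn:
            m = spins.mean()
            m2, m4, n = m2 + m * m, m4 + m**4, n + 1
    m2, m4 = m2 / n, m4 / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# The crossing of U_L(T) for two sizes brackets T_c (~2.269 for 2D Ising).
for L in (8, 16):
    print(L, [round(binder_cumulant(L, T), 3) for T in (2.1, 2.27, 2.5)])
```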

Relevance: 30.00%

Abstract:

The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and on the heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared with the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied; the accuracy used did not seem to have a notable effect. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied as well, since it would be computationally most efficient to reuse the same posterior distribution among geometries when optimising heat exchanger networks. According to the results, this is possible when the frontal surface areas are the same across geometries; in the other cases the same posterior distribution can still be used for optimisation, but it yields a wider predictive distribution. For condensing surface heat exchangers, the numerical stability of the simulation model was studied, and as a result a stable algorithm was developed.
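The parameter-identification setting lends itself to a compact sketch. The fragment below runs a plain random-walk Metropolis sampler against synthetic data from a toy NTU-style outlet-temperature model; the model form, parameter values, noise level, and proposal scales are illustrative assumptions rather than the thesis's heat exchanger model, and the two-step speed-up (screening proposals with a cheap approximation before the full evaluation) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for a heat exchanger forward model (assumed form):
# outlet temperature via an exponential (NTU-style) approach to T_hot.
def forward(theta, m_dot):
    UA, T_hot = theta
    return T_hot - (T_hot - 20.0) * np.exp(-UA / (m_dot * 4180.0))

# Synthetic data: true parameters plus measurement noise.
theta_true, sigma = np.array([2000.0, 80.0]), 0.3
m_dot = np.linspace(0.05, 0.5, 20)              # control variable (kg/s)
data = forward(theta_true, m_dot) + sigma * rng.standard_normal(m_dot.size)

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                           # flat prior on theta > 0
    r = data - forward(theta, m_dot)
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis; the chain's scatter shows how identifiable
# each parameter is from these measurements.
theta, lp, chain = np.array([1000.0, 60.0]), -np.inf, []
for _ in range(20000):
    prop = theta + rng.standard_normal(2) * [50.0, 1.0]
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                   # discard burn-in
print("posterior mean:", chain.mean(0), " std:", chain.std(0))
```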

Relevance: 30.00%

Abstract:

Simulations have been carried out on the bromate-oxalic acid-Ce(IV)-acetone oscillating reaction under flow conditions, using Field and Boyd's model (J. Phys. Chem. 1985, 89, 3707). Many different complex dynamic behaviors were found, including simple periodic oscillations, complex periodic oscillations, quasiperiodicity, and chaos. Some of the complex oscillations can be understood as belonging to a Farey sequence. The different behaviors were systematized in a phase diagram, which shows regions of complex patterns nested one inside the other. The existence of almost all known dynamic behavior in this system suggests that it can be used as a model for some of the very complex phenomena that occur in biological systems.
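The "flow conditions" here mean a CSTR: every species gains a flushing term proportional to the flow rate, and that rate becomes one axis of the phase diagram. The sketch below shows the structure with a standard two-variable Oregonator as an illustrative stand-in; it does not reproduce Field and Boyd's mechanism (whose richer dynamics, Farey-ordered mixed modes and chaos, require the full model), and the parameter values are textbook Oregonator choices, not fitted ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-variable Oregonator with CSTR flow terms (illustrative stand-in
# for Field and Boyd's bromate-oxalic acid-Ce(IV)-acetone mechanism).
eps, f, q = 4e-2, 2.0 / 3.0, 8e-4

def rhs(t, y, kf, x_in=0.0, z_in=0.0):
    x, z = y
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps + kf * (x_in - x)
    dz = (x - z) + kf * (z_in - z)
    return [dx, dz]

# Scan the flow rate: in the full model this axis of the phase diagram
# separates simple oscillations, mixed-mode states, and chaos.
for kf in (0.0, 0.05, 0.2):
    sol = solve_ivp(rhs, (0, 200), [0.1, 0.1], args=(kf,), max_step=0.01)
    x = sol.y[0][sol.t > 100]                    # discard the transient
    n_peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))
    print(f"kf={kf:.2f}: {n_peaks} maxima in the last 100 time units")
```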

Relevance: 30.00%

Abstract:

We show how certain N-dimensional dynamical systems are able to exploit the full instability capabilities of their fixed points to undergo Hopf bifurcations, and how such behavior produces complex time evolutions based on the nonlinear combination of the oscillation modes that emerge from these bifurcations. For widely separated oscillation frequencies, the evolutions describe robust waveform structures, usually periodic, in which self-similarity with respect to both the time scale and the system dimension is clearly apparent. For closer frequencies, the evolution signals usually appear irregular but are still based on the repetition of complex waveform structures. The study considers vector fields with a scalar-valued nonlinear function of a single variable that is a linear combination of the N dynamical variables. In this case, linear stability analysis can be used to design N-dimensional systems in which the fixed points of a saddle-node pair experience up to N-1 Hopf bifurcations with preselected oscillation frequencies. The secondary processes occurring in the phase region where the variety of limit cycles appears may be rather complex and difficult to characterize, but they produce the nonlinear mixing of oscillation modes with relatively generic features.
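A minimal numerical illustration of the mode-mixing phenomenon, simplified relative to the paper's scalar-feedback construction: a 4-dimensional field whose origin is a fixed point with two unstable focus pairs at preselected frequencies, bounded by a cubic saturation so that the trajectory settles onto a waveform combining both modes. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Design the linear part: two 2x2 blocks, each an unstable focus with a
# preselected frequency. Choosing sigma > 0 places both pairs just past
# their Hopf bifurcations (sigma = 0 is the bifurcation point).
sigma, w1, w2 = 0.05, 1.0, 4.7
A = np.zeros((4, 4))
A[:2, :2] = [[sigma, -w1], [w1, sigma]]
A[2:, 2:] = [[sigma, -w2], [w2, sigma]]

def rhs(t, x):
    # Linear instability plus a global cubic saturation that bounds the
    # motion and nonlinearly mixes the two oscillation modes.
    return A @ x - x * np.dot(x, x)

sol = solve_ivp(rhs, (0, 400), [0.01, 0.0, 0.01, 0.0], max_step=0.05)
x1 = sol.y[0][sol.t > 200]                 # waveform after transients
print("waveform span:", round(x1.min(), 3), "to", round(x1.max(), 3))
```

With incommensurate w1 and w2, the settled signal repeats a complex waveform built from both frequencies, which is the qualitative behavior the abstract describes.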

Relevance: 30.00%

Abstract:

Highly specialized robots are needed in ITER (International Thermonuclear Experimental Reactor), both for the manufacturing and for the maintenance of the reactor, because of its demanding environment. The sectors of the ITER vacuum vessel (VV) require tolerances more stringent than normally expected for a structure of this size. The VV has a toroidal chamber structure and consists of nine sectors that are to be welded together. The task of the designed robot is to carry the welding apparatus along a path within a stringent tolerance during the assembly operation. In addition to the initial vacuum vessel assembly, sectors need to be replaced for repair after a limited running period. Mechanisms with closed-loop kinematic chains are used in the design of the robots in this work: one version is a purely parallel manipulator, and another is a hybrid manipulator in which parallel and serial structures are combined. Traditional industrial robots, which generally have their links actuated in series, are inherently not very rigid and have poor dynamic performance at high speed and under high dynamic loading. Compared with open-chain manipulators, parallel manipulators have high stiffness, high accuracy, and a high force/torque capacity within a reduced workspace; their mechanical architecture connects all of the links both to the base and to the end-effector of the robot. The purpose of this thesis is to develop special parallel robots for the assembly, machining, and repair of the VV of ITER, a process that requires a special robot. By studying the structure of the vacuum vessel, two novel parallel robots were designed and built, with six and ten degrees of freedom, driven by hydraulic cylinders and electrical servo motors. Kinematic models for the proposed robots were defined, two prototypes were built, and experiments in machine cutting and laser welding were carried out with the 6-DOF robot. It was demonstrated that the parallel robots are capable of holding all the necessary machining tools and welding end-effectors accurately and stably in all positions inside a vacuum vessel sector. The kinematic models turned out to be complex, especially for the 10-DOF robot because of its redundant structure. Multibody dynamics simulations were carried out to ensure sufficient stiffness during robot motion. The entire design and testing process was demanding, owing to the highly specialized manufacturing technology needed for the ITER reactor, but the results demonstrate the applicability of the proposed solutions well. The results offer not only devices but also a methodology for the assembly and repair of ITER by means of parallel robots.
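The kinematics of a fully parallel manipulator can be illustrated with the classic Stewart-Gough inverse-kinematics computation: given a commanded end-effector pose, each actuator length follows in closed form as the distance between a base joint and the transformed platform joint. The geometry below (evenly spaced joints on two circles) is an illustrative assumption, not the thesis's ITER robot.

```python
import numpy as np

def rot_rpy(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                     [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                     [-sp,   cp*sr,            cp*cr]])

# Illustrative 6-DOF geometry: joints evenly spaced on base and platform
# circles of different radii (assumed dimensions, in metres).
ang_b = np.deg2rad([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.deg2rad(30)
B = np.c_[1.0 * np.cos(ang_b), 1.0 * np.sin(ang_b), np.zeros(6)]  # base joints
P = np.c_[0.6 * np.cos(ang_p), 0.6 * np.sin(ang_p), np.zeros(6)]  # platform joints

def leg_lengths(pos, rpy):
    """Inverse kinematics of a parallel manipulator: actuator lengths
    |p + R a_i - b_i| for a commanded end-effector pose."""
    R = rot_rpy(*rpy)
    return np.linalg.norm(pos + P @ R.T - B, axis=1)

# Commanded pose: 1.2 m above the base, slightly tilted and rotated.
print(leg_lengths(np.array([0.0, 0.0, 1.2]), (0.02, -0.01, 0.1)))
```

This closed-form inverse mapping, pose in, leg lengths out, is one reason parallel architectures suit a welding robot that must track a tightly toleranced path: actuator set-points follow directly from the commanded tool pose.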