8 results for dynamic treatment regime

at Universidad Politécnica de Madrid


Relevance:

80.00%

Publisher:

Abstract:

Cardiovascular diseases are the most prevalent and serious chronic conditions existing nowadays. They are the primary cause of death in the world and generate enormous expenditures for health systems. Tele-monitoring and personal health systems have proven to be good options for tackling this situation; however, they still lack many functionalities. It is necessary to find solutions that allow health professionals to follow up patients more closely and efficiently, while reducing patients' non-adherence to the treatment regime. The HeartCycle research project (partially funded by the European Commission) has developed a personal health system for cardiovascular disease management with the aim of addressing this problem. This paper describes the Patient Loop of this solution, including its different components, the adopted user interaction, and the implemented patient education and coaching strategy.

Relevance:

30.00%

Publisher:

Abstract:

Magnetoencephalography (MEG) allows the real-time recording of neural activity and oscillatory activity in distributed neural networks. We applied a non-linear complexity analysis to resting-state neural activity as measured using whole-head MEG. Recordings were obtained from 20 unmedicated patients with major depressive disorder and 19 matched healthy controls. Subsequently, after 6 months of pharmacological treatment with the antidepressant mirtazapine 30 mg/day, patients received a second MEG scan. A measure of the complexity of neural signals, the Lempel–Ziv Complexity (LZC), was derived from the MEG time series. We found that depressed patients showed higher pre-treatment complexity values compared with controls, and that complexity values decreased after 6 months of effective pharmacological treatment, although this effect was statistically significant only in younger patients. The main treatment effect was to recover the tendency observed in controls of a positive correlation between age and complexity values. Importantly, the reduction of complexity with treatment correlated with the degree of clinical symptom remission. We suggest that LZC, a formal measure of neural activity complexity, is sensitive to the dynamic physiological changes observed in depression and may potentially offer an objective marker of depression and its remission after treatment.
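As a rough illustration of how such a complexity value can be computed (not the authors' actual analysis pipeline), the sketch below binarizes a signal around its median and counts new phrases in the Kaspar–Schuster (LZ76) sense, normalizing by n/log2(n); the synthetic signal and normalization choice are assumptions made only for the example.

```python
import numpy as np

def lempel_ziv_complexity(binary_seq):
    """Kaspar-Schuster counting of new phrases (LZ76) in a 0/1 sequence."""
    s = ''.join('1' if b else '0' for b in binary_seq)
    n = len(s)
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # reached the end while copying: count the last phrase
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # no earlier match found: a genuinely new phrase starts
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

# Illustrative use: binarize a synthetic signal around its median and
# normalize the phrase count by n / log2(n), as is common for LZC.
rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)
binary = signal > np.median(signal)
c = lempel_ziv_complexity(binary)
lzc = c * np.log2(binary.size) / binary.size
print(f"normalized LZC ≈ {lzc:.3f}")
```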

Relevance:

30.00%

Publisher:

Abstract:

Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions. After a brief exposition of the theory, some recent advances are reported. First, results are presented on the implementation of a Jacobian-free Newton–Krylov time-stepping method into a standard finite-volume aerodynamic code to obtain global linear instability results in flows of industrial interest. Second, connections are sought between established and more-modern approaches for structure identification in flows, such as proper orthogonal decomposition and Koopman modes analysis (dynamic mode decomposition), and the possibility to connect solutions of the eigenvalue problem obtained by matrix formation or time-stepping with those delivered by dynamic mode decomposition, residual algorithm, and proper orthogonal decomposition analysis is highlighted in the laminar regime; turbulent and three-dimensional flows are identified as open areas for future research. Finally, a new stable very-high-order finite-difference method is implemented for the spatial discretization of the operators describing the spatial biglobal eigenvalue problem, parabolized stability equation three-dimensional analysis, and the triglobal eigenvalue problem; it is shown that, combined with sparse matrix treatment, all these problems may now be solved on standard desktop computers.
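As a toy illustration of the Jacobian-free Newton–Krylov idea mentioned above (computing a steady base state without ever forming the Jacobian), the sketch below uses SciPy's matrix-free solver on a one-dimensional reaction–diffusion problem; the model problem, grid size and tolerance are assumptions for the example and bear no relation to the industrial finite-volume implementation described in the abstract.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy steady base-state computation: find u with u'' + u(1 - u) = 0 on [0, 1],
# u(0) = 0, u(1) = 1, without explicitly assembling a Jacobian matrix.
N = 200
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

def residual(u):
    """Finite-difference residual; Newton-Krylov only needs this function, since
    Jacobian-vector products are approximated internally by finite differences."""
    r = np.empty_like(u)
    r[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 + u[1:-1] * (1.0 - u[1:-1])
    r[0] = u[0]            # boundary condition u(0) = 0
    r[-1] = u[-1] - 1.0    # boundary condition u(1) = 1
    return r

u_steady = newton_krylov(residual, x.copy(), method='lgmres', f_tol=1e-8)
print("max residual:", np.abs(residual(u_steady)).max())
```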

Relevance:

30.00%

Publisher:

Abstract:

For the past 20 years, dynamic analysis of shells has been one of the most fascinating fields for research. Using the new lightweight materials, the building engineer soon discovered that the consequent reduction of gravity forces produced not only the desired freedom of shape but also the emergence of environmental loads as the primary design factor; loads which present strong random properties and a marked dynamic influence. On the other hand, the technological advances in the aeronautical and astronautical fields confronted engineers with shell structures of unconventional shape that must sustain substantially dynamic loads. The response to the increasingly challenging problems of the last two decades has been very bright; new forms, new materials and new methods of analysis have arisen in the design of offshore platforms, nuclear vessels, spacecraft, etc. Thanks to the intensity of those years, we have at our disposal a coherent and homogeneous body of knowledge which enables us to face problems of a complexity inconceivable when IASS was founded. The open-minded approach to classical problems and the impact of the computer are probably important factors in the renaissance we have enjoyed in these years, and good proof of this are the papers presented at previous IASS meetings, as well as those we are going to consider in this one. Particularly striking is the great number of papers based on mathematical modelling compared with the scarcity of those treating laboratory experiments on physical models. The universal entry of the computer into almost every phase of our lives, and the cost of physical models, are perhaps the reasons for this lack of experimental methods. Nevertheless, they continue to offer useful results, such as those obtained with the shaking table, in which the computer plays an essential role both in the application of loads and in the instantaneous treatment of control data. Plates 1 and 2 record the papers presented under the dynamics heading; 40% of them are from Japan, in good correlation with the relevance that Japanese research has traditionally shown in this area. It is also interesting to find old friends such as Professors Tanaka, Nishimura and Kostem, who presented valuable papers at previous IASS conferences. As we can see, there are papers representative of all tendencies, even purely analytical ones! Rather than discuss them in detail, which can be done after the authors' presentations, I think we can comment on the general pattern of the dynamic approaches summarized in Plate 3.

Relevance:

30.00%

Publisher:

Abstract:

The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent and to limit the consequences of any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, and it incorporates both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, that are considered plausible have been taken into account, and that the monitoring systems and the engineered safety and safeguard systems will be capable of achieving the safety goals. Probabilistic safety analysis, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective in the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extensive use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes, as it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the CSN (Consejo de Seguridad Nuclear) branch of Modelling and Simulation (MOSI). This methodology attempts to extend classical PSA by including accident dynamics analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency. The application of this ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamics analysis support through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, as well as integration and statistical treatment of risk metrics. SCAIS relies on intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As the complexity of the process is concentrated in these accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents the work done on the investigation of more efficient techniques applied to the risk assessment process within the mentioned ISA methodology. Such techniques therefore have the primary goal of decreasing the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task; because of time limitations, the scope of the work had to be reduced.
Therefore, some assumptions were made in order to work in simplified scenarios best suited for an initial approximation to the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section introduces the general concepts and formulae of TSD theory, which are at the core of the risk assessment process. A description of the simulation framework requirements and design is then given, followed by an introduction to the developed techniques, giving full detail of their mathematical background and procedures. Later, the test case used is described and the results of applying the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
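As a minimal illustration of why the number of accident simulations drives the cost of estimating a damage probability (and not of the thesis's actual technique), the sketch below replaces the expensive simulation code with a toy indicator function and shows how the crude Monte Carlo standard error shrinks only as 1/sqrt(n); all names, distributions and thresholds are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def damage_indicator(params):
    """Stand-in for an expensive accident simulation: returns 1 if the transient
    exceeds a damage threshold, 0 otherwise (toy surrogate, not a plant model)."""
    t_actuation, power_peak = params
    return float(power_peak - 0.8 * t_actuation > 1.5)

def crude_monte_carlo(n):
    """Estimate the damage probability and its standard error with n simulations."""
    samples = rng.normal(loc=[1.0, 2.0], scale=[0.3, 0.4], size=(n, 2))
    hits = np.array([damage_indicator(p) for p in samples])
    p_hat = hits.mean()
    std_err = np.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, std_err

for n in (100, 1000, 10000):
    p, se = crude_monte_carlo(n)
    print(f"n={n:6d}  p_damage ≈ {p:.4f}  ± {se:.4f}")
```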

Relevance:

30.00%

Publisher:

Abstract:

Screw dislocations in bcc metals display non-planar cores at zero temperature, which result in high lattice friction and thermally-activated strain rate behavior. In bcc W, electronic structure molecular statics calculations reveal a compact, non-degenerate core with an associated Peierls stress between 1.7 and 2.8 GPa. However, a full picture of the dynamic behavior of dislocations can only be gained by using more efficient atomistic simulations based on semiempirical interatomic potentials. In this paper we assess the suitability of five different potentials in terms of static properties relevant to screw dislocations in pure W. Moreover, we perform molecular dynamics simulations of stress-assisted glide using all five potentials to study the dynamic behavior of screw dislocations under shear stress. Dislocations are seen to display thermally-activated motion in most of the applied stress range, with a gradual transition to a viscous damping regime at high stresses. We find that one potential predicts a core transformation from compact to dissociated at finite temperature that affects the energetics of kink-pair production and impacts the mechanism of motion. We conclude that a modified embedded-atom potential achieves the best compromise in terms of static and dynamic screw dislocation properties, although at a computational cost roughly ten-fold that of central potentials.
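As a small illustration of the thermally-activated regime mentioned above, the sketch below fits an Arrhenius-type velocity law v = v0 exp(-ΔH / kB T) to synthetic data with SciPy; the numbers are invented for the example and are not results from the simulations described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_velocity(T, v0, dH):
    """Thermally-activated dislocation velocity at fixed stress: v = v0 * exp(-dH / (kB T))."""
    return v0 * np.exp(-dH / (kB * T))

# Synthetic velocities (m/s) at several temperatures (K); purely illustrative values.
rng = np.random.default_rng(1)
T_data = np.array([300.0, 450.0, 600.0, 750.0, 900.0])
v_data = arrhenius_velocity(T_data, 50.0, 0.25) * (1.0 + 0.05 * rng.standard_normal(T_data.size))

(v0_fit, dH_fit), _ = curve_fit(arrhenius_velocity, T_data, v_data, p0=[10.0, 0.2])
print(f"fitted prefactor v0 ≈ {v0_fit:.1f} m/s, activation enthalpy ≈ {dH_fit:.3f} eV")
```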

Relevance:

30.00%

Publisher:

Abstract:

This paper is part of a set of publications related to the development of mathematical models aimed at simulating the dynamic input and output of experimental non-destructive tests in order to detect structural imperfections. The structures considered are composed of thin steel plates. The imperfections in these cases are cracks, which may either penetrate a significant part of the plate thickness or be micro-cracks or superficial imperfections. The first class of cracks is related to structural safety, while the second is more connected with the protection of the structure from the environment, particularly where protective paintings can deteriorate. Two groups of mathematical models have been developed. The first group tries to locate the position and extension of imperfections of the first class, i.e. cracks, and is the object of the present paper; bending Kirchhoff thin-plate models belong to this first group and are used for this purpose. The other group of models deals with membrane structures under surface Rayleigh-wave excitation and is intended for the detection of micro-cracks. In the application of the first group of models to crack detection, it has been observed that the differences between the natural frequencies of the uncracked and the cracked structures are very small. However, the crack geometry and position can be identified quite accurately if the comparison is carried out between the first derivatives (mode rotations) of the natural modes instead. Finally, in relation to the analysis of superficial cracks, the use of Rayleigh waves is very promising: the geometry and the penetration of the micro-crack can be detected very accurately. The mathematical and numerical treatment of the generation of these Rayleigh waves is presented and a numerical application is shown.
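As a toy illustration of why mode rotations localize a crack better than the mode shapes themselves, the sketch below compares an intact first bending mode with a "cracked" version whose only difference is a small slope discontinuity; the mode shapes, crack position and magnitude are assumptions made for the example, not results from the models described above.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 501)
x_crack = 0.35   # assumed crack location for this toy example

# First bending mode of the intact strip, and a "cracked" version whose only
# difference is a small extra rotation (slope discontinuity) beyond the crack.
mode_intact = np.sin(np.pi * x)
mode_cracked = mode_intact + 0.002 * np.where(x > x_crack, x - x_crack, 0.0)

# Displacement differences are tiny everywhere and do not localize the crack ...
print("max displacement difference:", np.abs(mode_cracked - mode_intact).max())

# ... but the first derivatives (mode rotations) show a clear step at the crack.
rotation_diff = np.abs(np.gradient(mode_cracked, x) - np.gradient(mode_intact, x))
estimated = x[np.argmax(np.gradient(rotation_diff, x))]
print(f"largest jump in rotation difference near x ≈ {estimated:.2f}")
```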

Relevance:

30.00%

Publisher:

Abstract:

We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts issued from classic, matrix-forming, linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results issued from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations, and showing the potential of applying this type of snapshot-based analysis to general-purpose CFD codes without the need for modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base flow modification information for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
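As a compact illustration of the snapshot-based (exact) DMD procedure referred to above, the sketch below builds the low-rank operator from an SVD of the snapshot matrix and recovers modes, Ritz values and growth rates; the array shapes, truncation rank and random snapshots are assumptions for the example, not details of the solvers used in the paper.

```python
import numpy as np

def exact_dmd(snapshots, r, dt):
    """Snapshot-based exact DMD: snapshots has shape (n_dof, n_snapshots), sampled every dt."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U_r, s_r, V_r = U[:, :r], s[:r], Vh.conj().T[:, :r]
    # Rank-r approximation of the linear operator that advances one snapshot to the next.
    A_tilde = U_r.conj().T @ Y @ V_r / s_r
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ V_r / s_r) @ W                           # exact DMD modes
    growth_rates = np.log(eigvals.astype(complex)) / dt   # sigma + i*omega for each mode
    return modes, eigvals, growth_rates

# Illustrative use with random snapshots standing in for flow-field data.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((500, 60))
modes, ritz, lam = exact_dmd(snaps, r=10, dt=0.01)
print("leading growth rate:", lam[np.argmax(lam.real)])
```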