974 results for Simulated annealing algorithm
Abstract:
"See the abstract at the beginning of the document in the attached file"
Abstract:
The aim of this exploratory study was to assess the impact of clinicians' defense mechanisms (defined as self-protective psychological mechanisms triggered by the affective load of the encounter with the patient) on adherence to a communication skills training (CST). The population consisted of oncology clinicians (N = 31) who participated in a CST. An interview with simulated cancer patients was recorded prior to and 6 months after CST. Defenses were measured before and after CST and correlated with a prototype of an ideally conducted interview based on the criteria of the CST teachers. Clinicians who used more adaptive defense mechanisms showed better adherence to communication skills after CST than clinicians with less adaptive defenses (F(1, 29) = 5.26, p = 0.03, d = 0.42). Improvement in communication skills after CST seems to depend on the clinician's initial level of defenses prior to CST. Implications for practice and training are discussed. Communication has been recognized as a central element of cancer care [1]. Ineffective communication may contribute to patients' confusion, uncertainty, and increased difficulty in asking questions, expressing feelings, and understanding information [2, 3], and may also contribute to clinicians' lack of job satisfaction and emotional burnout [4]. Therefore, communication skills trainings (CSTs) for oncology clinicians have been widely developed over the last decade. These trainings should increase clinicians' skills in responding to the patient's needs and promote an adequate encounter with the patient, with an efficient exchange of information [5]. While CSTs show a great diversity with regard to their pedagogic approaches [6, 7], the main elements of CST consist of (1) role play between participants, (2) analysis of videotaped interviews with simulated patients, and (3) interactive case discussion provided by participants. As recently stated in a consensus paper [8], CSTs need to be taught in small groups (up to 10-12 participants) and last at least 3 days in order to be effective. Several systematic reviews have evaluated the impact of CST on clinicians' communication skills [9-11]. The effectiveness of CST can be assessed by two main approaches: participant-based and patient-based outcomes. Measures can be self-reported, but, according to Gysels et al. [10], behavioral assessment of patient-physician interviews [12] is the most objective and reliable method for measuring change after training. Based on 22 studies on participants' outcomes, Merckaert et al. [9] reported an increase in communication skills, participants' satisfaction with training, and changes in attitudes and beliefs. The evaluation of CST remains a challenging task, and the variables mediating skills improvement remain unidentified. We thus recently conducted a study evaluating the impact of CST on clinicians' defenses by comparing the evolution of defenses of clinicians participating in CST with the defenses of a control group without training [13]. Defenses are unconscious psychological processes which protect against anxiety or distress. Therefore, they contribute to the individual's adaptation to stress [14].
Perry uses the term "defensive functioning" to indicate the degree of adaptation linked to the use of a range of specific defenses by an individual, ranging from low defensive functioning, when he or she tends to use generally less adaptive defenses (such as projection, denial, or acting out), to high defensive functioning, when he or she tends to use generally more adaptive defenses (such as altruism, intellectualization, or introspection) [15, 16]. Although several authors have addressed the emotional difficulties of oncology clinicians when facing patients and their need to preserve themselves [7, 17, 18], no research has yet been conducted on the defenses of clinicians. For example, repeated use of less adaptive defenses, such as denial, may allow the clinician to avoid or reduce distress, but it also diminishes his or her ability to respond to the patient's emotions, to identify and respond adequately to the patient's needs, and to foster the therapeutic alliance. Results of the above-mentioned study [13] showed two groups of clinicians: one with a higher defensive functioning and one with a lower defensive functioning prior to CST. After the training, a difference in defensive functioning between clinicians who participated in CST and clinicians of the control group was shown only for clinicians with a higher defensive functioning. Some clinicians may therefore be more responsive to CST than others. To further address this issue, the present study aimed to evaluate the relationship between the level of adherence to an "ideally conducted interview", as defined by the teachers of the CST, and the level of the clinician's defensive functioning. We hypothesized that, after CST, clinicians with a higher defensive functioning would show a greater adherence to the "ideally conducted interview" than clinicians with a lower defensive functioning.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
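As a rough illustration of this kind of two-step Bayesian sequential simulation, the following minimal sketch (not the thesis's actual algorithm) assumes that standardized log-hydraulic conductivity and an already downscaled, standardized log-electrical conductivity are jointly Gaussian with an assumed correlation RHO and share an exponential spatial covariance; the grid size, correlation length, and data locations are illustrative.

```python
# Minimal 1-D sketch of Bayesian sequential Gaussian simulation (illustrative, not the
# thesis algorithm). Assumes standardized log-K and a downscaled, standardized log-sigma
# are jointly Gaussian with correlation RHO and share an exponential spatial covariance.
import numpy as np

rng = np.random.default_rng(0)
n, RHO, RANGE = 60, 0.7, 10.0            # grid nodes, petrophysical correlation, correlation length

def cov(h):                               # exponential covariance model (unit variance)
    return np.exp(-np.abs(h) / RANGE)

S = rng.standard_normal(n)                # stand-in for the exhaustive, downscaled geophysical field
K = np.full(n, np.nan)
K[[5, 40]] = [1.2, -0.8]                  # sparse "borehole" hydraulic data (standardized)

for i in rng.permutation(np.where(np.isnan(K))[0]):       # random simulation path
    known = np.where(~np.isnan(K))[0]
    C = cov(known[:, None] - known[None, :])              # data-to-data covariance
    c = cov(known - i)                                    # data-to-target covariance
    w = np.linalg.solve(C + 1e-9 * np.eye(len(known)), c)
    mu_k, var_k = w @ K[known], max(1.0 - w @ c, 1e-9)    # simple-kriging mean and variance
    mu_g, var_g = RHO * S[i], 1.0 - RHO**2                # likelihood from the collocated geophysical value
    var = 1.0 / (1.0 / var_k + 1.0 / var_g)               # combine the two Gaussians (precision weighting)
    mu = var * (mu_k / var_k + mu_g / var_g)
    K[i] = mu + np.sqrt(var) * rng.standard_normal()      # draw the simulated value

print("corr(K, S) =", round(np.corrcoef(K, S)[0, 1], 2))  # simulated K should track the geophysical trend
```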
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
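The gradual-deformation proposal can be sketched as follows for a standardized multi-Gaussian prior: the current model and an independent prior realization are blended as m·cos(theta) + z·sin(theta), which preserves the Gaussian prior, so the Metropolis acceptance reduces to the likelihood ratio. The linear forward operator G, noise level SIGMA, and step size THETA below are illustrative stand-ins, not the thesis's actual tomographic setup.

```python
# Minimal sketch of a gradual-deformation (pCN-style) proposal inside a Metropolis sampler,
# assuming a standardized multi-Gaussian prior. The forward operator, data, noise level and
# step size are illustrative stand-ins, not the actual crosshole georadar problem.
import numpy as np

rng = np.random.default_rng(1)
n, THETA, SIGMA = 100, 0.3, 0.1
G = rng.standard_normal((20, n)) / np.sqrt(n)       # stand-in linear forward operator
d_obs = G @ rng.standard_normal(n) + SIGMA * rng.standard_normal(20)

def log_like(m):                                     # Gaussian data misfit
    r = d_obs - G @ m
    return -0.5 * np.sum(r**2) / SIGMA**2

m = rng.standard_normal(n)                           # start from an unconditional prior draw
ll, accepted = log_like(m), 0
for _ in range(5000):
    z = rng.standard_normal(n)                       # independent prior realization
    m_prop = m * np.cos(THETA) + z * np.sin(THETA)   # gradual deformation: preserves the N(0, I) prior
    ll_prop = log_like(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:         # prior-preserving proposal -> likelihood-ratio acceptance
        m, ll, accepted = m_prop, ll_prop, accepted + 1

print("acceptance rate:", accepted / 5000)
```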
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
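A minimal sketch of a per-sample correction in this spirit, assuming a paralyzable dead-time model m = n·exp(-n·tau) (the abstract does not state which model is used) and inverting it with Newton's method at each point of the sweep; the dead-time value and the count-rate profile are illustrative assumptions.

```python
# Minimal sketch of a per-sample dead-time correction, assuming a paralyzable model
# m = n * exp(-n * tau) inverted with Newton's method; TAU and the count-rate profile
# along the sweep are illustrative assumptions, not values from the study.
import numpy as np

TAU = 2e-6  # assumed paralyzable dead time [s]

def true_rate(measured, tau=TAU, tol=1e-9, max_iter=50):
    """Invert m = n * exp(-n * tau) for the true rate n (valid while n * tau < 1)."""
    n = measured                                    # the measured rate is a reasonable starting guess
    for _ in range(max_iter):
        f = n * np.exp(-n * tau) - measured
        fp = np.exp(-n * tau) * (1.0 - n * tau)     # derivative of the paralyzable model
        step = f / fp
        n -= step
        if abs(step) < tol * max(n, 1.0):
            break
    return n

# During a WB sweep the rate seen by the detector varies with head position,
# so the correction is applied sample by sample along the scan.
measured_profile = [20e3, 60e3, 150e3, 90e3, 30e3]  # counts/s at successive positions
corrected = [true_rate(m) for m in measured_profile]
print([round(c) for c in corrected])
```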
Abstract:
INTRODUCTION. The role of turbine-based NIV ventilators (TBV) versus ICU ventilators with NIV mode activated (ICUV) for delivering NIV in cases of severe respiratory failure remains debated. OBJECTIVES. To compare the response time and pressurization capacity of TBV and ICUV during simulated NIV with normal and increased respiratory demand, under conditions of normal and obstructive respiratory mechanics. METHODS. In a two-chamber lung model, a ventilator simulated normal (P0.1 = 2 mbar, respiratory rate RR = 15/min) or increased (P0.1 = 6 mbar, RR = 25/min) respiratory demand. NIV was simulated by connecting the lung model (compliance 100 ml/mbar; resistance 5 or 20 mbar/l/s) to a dummy head equipped with a naso-buccal mask. Connections allowed intentional leaks (29 ± 5 % of insufflated volume). The tested ventilators (Servo-i, Maquet; V60 and Vision, Philips Respironics) were connected via a standard circuit to the mask. Applied pressure support levels (PSL) were 7 mbar for normal and 14 mbar for increased demand. Airway pressure and flow were measured in the ventilator circuit and in the simulated airway. Ventilator performance was assessed by determining trigger delay (Td, ms), pressure-time product at 300 ms (PTP300, mbar s) and inspiratory tidal volume (VT, ml), and compared by three-way ANOVA for the effect of inspiratory effort, resistance and the ventilator. Differences between ventilators for each condition were tested by one-way ANOVA and contrast (JMP 8.0.1, p < 0.05). RESULTS. Inspiratory demand and resistance had a significant effect throughout all comparisons. Ventilator data are shown in Tables 1 (normal demand) and 2 (increased demand): (a) different from Servo-i, (b) different from V60. CONCLUSION. In this NIV bench study, with leaks, trigger delay was shorter for TBV with normal respiratory demand. By contrast, it was shorter for ICUV when respiratory demand was high. ICUV afforded better pressurization (PTP300) with increased demand and PSL, particularly with increased resistance. TBV provided a higher inspiratory VT (i.e., downstream from the leaks) with normal demand, and a significantly (although minimally) lower VT with increased demand and PSL.
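For illustration only, the sketch below shows how a trigger delay and a 300 ms pressure-time product can be computed from a sampled airway-pressure trace; the sampling rate, baseline, and synthetic waveform are assumptions and do not reproduce the study's measurement chain.

```python
# Illustrative computation of trigger delay (Td) and pressure-time product over the first
# 300 ms (PTP300) from a sampled airway-pressure trace; sampling rate, baseline and the
# synthetic waveform are assumptions, not data from the study.
import numpy as np

FS, PEEP = 1000, 0.0                     # sampling frequency [Hz], baseline pressure [mbar]
t = np.arange(0.0, 1.0, 1.0 / FS)

# Synthetic trace: effort starts at t = 0, pressure dips, then the ventilator pressurizes to ~7 mbar.
paw = np.where(t < 0.08,
               -0.8 * np.sin(np.pi * t / 0.08),
               7.0 * (1.0 - np.exp(-(t - 0.08) / 0.05)))

effort_onset = 0.0
trigger_idx = np.argmax(paw > PEEP)                              # first sample back above baseline
td_ms = (t[trigger_idx] - effort_onset) * 1000.0

win = (t >= effort_onset) & (t < effort_onset + 0.300)           # 300 ms window from effort onset
ptp300 = np.trapz(np.clip(paw[win] - PEEP, 0.0, None), t[win])   # area above baseline [mbar s]

print(f"Td = {td_ms:.0f} ms, PTP300 = {ptp300:.2f} mbar s")
```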
Abstract:
BACKGROUND: Using a bench test model, we investigated the hypothesis that neonatal and/or adult ventilators equipped with neonatal/pediatric modes currently do not reliably administer pressure support (PS) in neonatal or pediatric patient groups in either the absence or presence of air leaks. METHODS: PS was evaluated in 4 neonatal and 6 adult ventilators using a bench model to evaluate triggering, pressurization, and cycling in both the absence and presence of leaks. Delivered tidal volumes were also assessed. Three patients were simulated: a preterm infant (resistance 100 cm H2O/L/s, compliance 2 mL/cm H2O, inspiratory time of the patient [TI] 400 ms, inspiratory effort 1 and 2 cm H2O), a full-term infant (resistance 50 cm H2O/L/s, compliance 5 mL/cm H2O, TI 500 ms, inspiratory effort 2 and 4 cm H2O), and a child (resistance 30 cm H2O/L/s, compliance 10 mL/cm H2O, TI 600 ms, inspiratory effort 5 and 10 cm H2O). Two PS levels were tested (10 and 15 cm H2O) with and without leaks and with and without the leak compensation algorithm activated. RESULTS: Without leaks, only 2 neonatal ventilators and one adult ventilator had trigger delays below a predefined acceptable limit (1/8 TI). Pressurization showed high variability between ventilators. Most ventilators showed an excess in TI large enough to seriously impair patient-ventilator synchronization (> 50% of the subject's TI). In some ventilators, leaks led to autotriggering and impairment of ventilation performance, but the influence of leaks was generally lower in neonatal ventilators. When a noninvasive ventilation algorithm was available, this was partially corrected. In general, the ventilators underestimated tidal volume in the presence of leaks; the noninvasive ventilation algorithm was able to correct this difference in only 2 adult ventilators. CONCLUSIONS: No ventilator performed equally well under all tested conditions for all explored parameters. However, neonatal ventilators tended to perform better in the presence of leaks. These findings emphasize the need to improve algorithms for assisted ventilation modes to better deal with situations of high airway resistance, low pulmonary compliance, and the presence of leaks.
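As a rough illustration of how such simulated patients behave, the sketch below integrates the single-compartment equation of motion (Paw + Pmus = R·V̇ + V/C) using the preterm resistance and compliance quoted above; the pressure-support waveform, effort shape, and timing are illustrative assumptions, not the actual bench setup.

```python
# Illustrative single-compartment lung model (equation of motion) using the preterm
# resistance and compliance quoted above; the PS waveform, effort amplitude and timing
# are assumptions, not the actual bench setup.
import numpy as np

R, C_ML = 100.0, 2.0            # resistance [cm H2O/L/s], compliance [mL/cm H2O] (preterm profile)
C = C_ML / 1000.0               # compliance in L/cm H2O
PS, TI_PAT, DT = 10.0, 0.4, 0.001
t = np.arange(0.0, 1.0, DT)

p_mus = np.where(t < TI_PAT, 2.0 * np.sin(np.pi * t / TI_PAT), 0.0)   # 2 cm H2O inspiratory effort
p_aw = np.where((t > 0.05) & (t < 0.45), PS, 0.0)                     # idealized, slightly delayed PS

volume = np.zeros_like(t)
for i in range(1, len(t)):
    # Equation of motion: Paw + Pmus = R * flow + V / C  ->  solve for flow, integrate volume.
    flow = (p_aw[i] + p_mus[i] - volume[i - 1] / C) / R
    volume[i] = volume[i - 1] + flow * DT

print(f"delivered tidal volume ~ {volume.max() * 1000:.1f} mL")
```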
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow an arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at any iteration (thus conservative fluxes can be obtained).
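As a loose illustration of this linear-solver view, the sketch below runs preconditioned GMRES on a small 1-D heterogeneous finite-volume problem; the simple Jacobi preconditioner only stands in for the role the MsFV operator would play and is not the MsFV method itself.

```python
# Illustrative linear-solver view: preconditioned GMRES on a small 1-D heterogeneous
# finite-volume "pressure" problem. The Jacobi preconditioner below merely stands in for
# the role the MsFV operator would play; it is not the MsFV method itself.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
k = np.exp(np.random.default_rng(0).standard_normal(n + 1))   # heterogeneous coefficient field
main = k[:-1] + k[1:]                                          # finite-volume diagonal entries
A = diags([-k[1:-1], main, -k[1:-1]], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

M = LinearOperator((n, n), matvec=lambda r: r / main)          # stand-in (Jacobi) preconditioner

residuals = []
x, info = gmres(A, b, M=M, callback=residuals.append, callback_type="pr_norm")
print("converged" if info == 0 else "not converged", "after", len(residuals), "iterations")
```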
Abstract:
Reinforcement learning (RL) is a very suitable technique for robot learning, as it allows learning in unknown environments with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current but also past learning samples. Nevertheless, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and also real results with an underwater robot in a target-following behavior.
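A minimal sketch of the two ingredients highlighted above, a neural network approximating the Q-function over continuous states and a database of past learning samples reused at every update; the toy 1-D target-following environment, network size, and all hyper-parameters are illustrative assumptions, and this is not the SONQL implementation itself.

```python
# Minimal numpy sketch: a small Q-network over a continuous state plus a database (replay
# buffer) of past samples reused at every step. Environment and hyper-parameters are toy
# assumptions, not the SONQL paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N_ACT, H, GAMMA, LR, EPS = 3, 16, 0.95, 0.01, 0.1
ACTIONS = np.array([-1.0, 0.0, 1.0])               # move left / stay / move right

W1, b1 = rng.standard_normal((1, H)) * 0.3, np.zeros(H)
W2, b2 = rng.standard_normal((H, N_ACT)) * 0.3, np.zeros(N_ACT)

def q_values(s):                                    # s: (B, 1) continuous state(s)
    h = np.tanh(s @ W1 + b1)
    return h, h @ W2 + b2

buffer = []                                         # database of (s, a, r, s') learning samples
state = np.array([rng.uniform(-2, 2)])

for step in range(5000):
    _, q = q_values(state[None, :])                 # epsilon-greedy action on the Q estimate
    a = rng.integers(N_ACT) if rng.uniform() < EPS else int(np.argmax(q))
    next_state = state + 0.1 * ACTIONS[a] + 0.01 * rng.standard_normal(1)
    reward = -abs(next_state[0])                    # stay close to the target at x = 0
    buffer.append((state.copy(), a, reward, next_state.copy()))
    buffer = buffer[-2000:]                         # keep only the most recent samples
    state = next_state if abs(next_state[0]) < 3 else np.array([rng.uniform(-2, 2)])

    # one semi-online update: learn from a batch of current *and* past samples
    batch = [buffer[i] for i in rng.integers(len(buffer), size=min(32, len(buffer)))]
    S = np.array([s for s, _, _, _ in batch]); A = np.array([a for _, a, _, _ in batch])
    R = np.array([r for _, _, r, _ in batch]); S2 = np.array([s2 for _, _, _, s2 in batch])
    _, q_next = q_values(S2)
    target = R + GAMMA * q_next.max(axis=1)         # TD target with the current network
    h, q_pred = q_values(S)
    dQ = np.zeros_like(q_pred)
    dQ[np.arange(len(batch)), A] = q_pred[np.arange(len(batch)), A] - target
    dW2 = h.T @ dQ / len(batch); db2 = dQ.mean(axis=0)
    dh = dQ @ W2.T; dpre = dh * (1.0 - h**2)        # backprop through the tanh hidden layer
    dW1 = S.T @ dpre / len(batch); db1 = dpre.mean(axis=0)
    W1 -= LR * dW1; b1 -= LR * db1; W2 -= LR * dW2; b2 -= LR * db2

_, q_final = q_values(np.array([[1.5]]))
print("Q at x = 1.5:", np.round(q_final, 2))        # moving left (toward 0) should score best
```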
Abstract:
This paper proposes a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot. Although the dominant approach when using RL has been to apply value-function-based algorithms, the system detailed here is characterized by the use of direct policy search methods. Rather than approximating a value function, these methodologies approximate a policy using an independent function approximator with its own parameters, trying to maximize the expected future reward. The policy-based algorithm presented in this paper is used for learning the internal state/action mapping of a behavior. In this preliminary work, we demonstrate its feasibility with simulated experiments using the underwater robot GARBI in a target-reaching task.
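A minimal sketch of direct policy search in the REINFORCE style, as opposed to value-function learning: a parameterized stochastic policy is adjusted along an estimate of the gradient of the expected return. The 1-D target-reaching toy task and every parameter below are illustrative assumptions, not the paper's actual behavior, robot, or algorithm details.

```python
# Illustrative direct policy search (REINFORCE with a baseline): a softmax policy with
# linear features is pushed along the gradient of the expected return. Toy task and
# parameters are assumptions only.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([-1.0, 0.0, 1.0])            # move left / stay / move right
theta = np.zeros((2, 3))                        # policy parameters: features x actions
LR, GAMMA, EPISODES, STEPS = 0.05, 0.98, 300, 40

def features(x):
    return np.array([x, 1.0])                   # position error + bias term

def policy(x):
    logits = features(x) @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

for _ in range(EPISODES):
    x, traj = rng.uniform(-2, 2), []
    for _ in range(STEPS):                      # roll out one episode with the current policy
        p = policy(x)
        a = rng.choice(3, p=p)
        traj.append((x, a, -abs(x)))            # reward: stay close to the target at x = 0
        x = x + 0.2 * ACTIONS[a] + 0.02 * rng.standard_normal()

    returns, G = [], 0.0
    for _, _, r in reversed(traj):              # discounted return-to-go
        G = r + GAMMA * G
        returns.append(G)
    returns.reverse()
    baseline = np.mean(returns)                 # simple baseline for variance reduction

    for (x_t, a_t, _), G_t in zip(traj, returns):
        p = policy(x_t)
        grad_log = -np.outer(features(x_t), p)  # d log pi(a|x) / d theta for a softmax policy
        grad_log[:, a_t] += features(x_t)
        theta += LR * (G_t - baseline) * grad_log

print("policy at x = 1.5:", np.round(policy(1.5), 2))   # "move left" should dominate right of the target
```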
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level processing stage, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reducing the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach using the parallel organisation of every processor in the architecture is proposed.
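A minimal sketch of normalised correlation used to solve a correspondence: a template around a feature in the first image is compared against candidate windows in the second image, and each candidate score is independent, which is what makes the problem attractive for parallel implementation. The random images, window size, and search range are illustrative.

```python
# Illustrative normalised-correlation matching: a template from image 1 is compared against
# candidate windows in image 2. The random "images", window size and search range are toy
# assumptions; a real system would scan many feature points.
import numpy as np

rng = np.random.default_rng(0)

def ncc(a, b):
    """Zero-mean normalised cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

img1 = rng.uniform(size=(64, 64))
shift = (3, 5)                                            # known displacement for this toy example
img2 = np.roll(img1, shift, axis=(0, 1)) * 0.7 + 0.1      # brightness change: NCC is insensitive to it

y0, x0, w = 20, 20, 11                                    # feature location in img1 and window size
template = img1[y0:y0 + w, x0:x0 + w]

best, best_score = None, -1.0
for dy in range(-8, 9):                                   # search window in img2 (each candidate is
    for dx in range(-8, 9):                               # independent: easy to process in parallel)
        cand = img2[y0 + dy:y0 + dy + w, x0 + dx:x0 + dx + w]
        score = ncc(template, cand)
        if score > best_score:
            best, best_score = (dy, dx), score

print("estimated displacement:", best, "score:", round(best_score, 3))
```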
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also light interreflections. This kind of algorithm produces very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms in order to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, in which we have taken advantage of the hierarchical nature of such an algorithm.
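A minimal sketch of the gathering iteration at the heart of a radiosity-style global illumination algorithm, in which each patch repeatedly collects light reflected from all other patches through form factors; the tiny random form-factor matrix below is an illustrative stand-in for a real (hierarchical) scene.

```python
# Illustrative radiosity gathering: each patch's radiosity B collects light reflected from
# every other patch via form factors F, iterating B = E + rho * F @ B. The random form-factor
# matrix is a toy stand-in for a real scene, not a hierarchical or parallel implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                              # number of patches in this toy "scene"
F = rng.uniform(size=(n, n))
np.fill_diagonal(F, 0.0)
F = F / F.sum(axis=1, keepdims=True)               # rows sum to 1 (closed-environment assumption)
rho = np.full(n, 0.6)                              # diffuse reflectivity of each patch
E = np.zeros(n); E[0] = 10.0                       # patch 0 is the only emitter

B = E.copy()
for it in range(100):                              # Jacobi-style iteration of B = E + rho * F @ B
    B_new = E + rho * (F @ B)
    if np.abs(B_new - B).max() < 1e-8:
        break
    B = B_new

print("radiosities:", np.round(B, 3), "after", it + 1, "iterations")
```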
Abstract:
Diffusion tensor magnetic resonance imaging, which measures directional information of water diffusion in the brain, has emerged as a powerful tool for human brain studies. In this paper, we introduce a new Monte Carlo-based fiber tracking approach to estimate brain connectivity. One of the main characteristics of this approach is that all parameters of the algorithm are automatically determined at each point using the entropy of the eigenvalues of the diffusion tensor. Experimental results show the good performance of the proposed approach.
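A minimal sketch of the idea of deriving a per-voxel control parameter from the entropy of the normalized eigenvalues of the diffusion tensor; the example tensors and the mapping from entropy to an angular spread are illustrative assumptions, not the paper's actual parameter choices.

```python
# Illustrative entropy of normalized diffusion-tensor eigenvalues, the quantity used to set
# tracking parameters automatically; example tensors and the entropy-to-spread mapping are
# assumptions only.
import numpy as np

def eigenvalue_entropy(D):
    """Shannon entropy of the normalized eigenvalues of a 3x3 diffusion tensor, scaled to [0, 1]."""
    lam = np.clip(np.linalg.eigvalsh(D), 1e-12, None)
    p = lam / lam.sum()
    return float(-(p * np.log(p)).sum() / np.log(3))

D_fiber = np.diag([1.7e-3, 0.2e-3, 0.1e-3])   # strongly anisotropic voxel (coherent fiber bundle)
D_iso = np.diag([1.0e-3, 0.9e-3, 1.0e-3])     # nearly isotropic voxel (e.g. CSF or crossing region)

for name, D in [("fiber", D_fiber), ("isotropic", D_iso)]:
    H = eigenvalue_entropy(D)
    # Low entropy -> trust the principal direction; high entropy -> spread Monte Carlo samples more.
    print(f"{name}: entropy = {H:.2f}, suggested angular spread ~ {H * 90:.0f} deg")
```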