956 results for Probabilistic robotics


Relevance:

10.00%

Publisher:

Abstract:

ABSTRACT The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach for evaluating the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that excluding the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the viability of this type of business in Chile. The steps for the analysis of economic viability were: First, adjusted probability distributions for stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This gave the information needed to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the complete stochastic model simulation results were used to analyze alternative scenarios and provide the results to decision makers in the form of probabilities, probability distributions, and probabilistic forecasts for the SOV. The main conclusion is that this project is a profitable investment alternative in fruit trees in Chile.
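The simulation steps above can be sketched as follows; all figures (prices, yields, discount rate, investment) are invented for illustration and are not the study's data:

```python
# Minimal sketch of the stochastic-NPV idea: when two stochastic inputs
# (here, price and yield, assumed positively correlated for illustration)
# are simulated as independent, the spread of the NPV distribution - i.e.
# the risk - is understated.
import numpy as np

def simulate_npv(rho, n_iter=10_000, seed=42):
    """Draw correlated (price, yield) shocks and return simulated NPVs."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.standard_normal((n_iter, 2)) @ np.linalg.cholesky(cov).T
    price = 3.0 + 0.6 * z[:, 0]       # USD/kg (invented)
    yld = 8.0 + 1.5 * z[:, 1]         # t/ha (invented)
    cash_flow = price * yld * 1000 - 15_000     # annual net cash flow
    annuity = (1 - 1.08 ** -10) / 0.08          # 10 years at 8% (assumed)
    return cash_flow * annuity - 60_000         # minus initial investment

npv_corr = simulate_npv(rho=0.5)    # intratemporal correlation included
npv_indep = simulate_npv(rho=0.0)   # correlation ignored
print(f"NPV std, correlated inputs:    {npv_corr.std():,.0f}")
print(f"NPV std, independence assumed: {npv_indep.std():,.0f}")
```

With positively correlated inputs, the dispersion of the simulated NPVs is larger, which is the sense in which ignoring the correlation underestimates risk.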

Relevance:

10.00%

Publisher:

Abstract:

Gamma Knife surgery (GKS) is widely used as an alternative to open microsurgical procedures for the noninvasive treatment of many intracranial conditions. It consists of delivering a single high-energy dose under stereotactic conditions, with the help of multimodal imaging (e.g., magnetic resonance imaging [MRI], computed tomography, and, where needed, angiography). The Gamma Knife (GK) was invented by the Swedish neurosurgeon Lars Leksell, who was the first to target the trigeminal nerve, in 1951, using an orthogonal X-ray tube. Since then, progress in both informatics and robotics has improved the radiosurgical technique, which is currently performed either with a linear particle accelerator mounted on a robotized arm (Novalis®, Cyberknife®) or by collimation of 192 fixed Co-60 sources (GK). The main indication of GKS in the treatment of pain is trigeminal neuralgia. Other, less frequent, indications are glossopharyngeal neuralgia, cluster headache, and hypophysiolysis for cancer pain.

Relevance:

10.00%

Publisher:

Abstract:

Abstract: The impact of Alzheimer's disease (AD) is devastating for the daily life of affected patients, with progressive loss of memory and other cognitive skills leading to dementia. We still lack a disease-modifying treatment, and there is also great uncertainty regarding the accuracy of diagnostic classification in the early stages of AD. The anatomical signature of AD, in particular medial temporal lobe (MTL) atrophy measured with neuroimaging, can be used as an early in vivo biomarker of the early stages of AD. However, despite the evident role of the MTL in memory, we know that predictive anatomical models based only on measures of brain atrophy in the MTL do not explain all clinical cases. Throughout my thesis, I have conducted three projects to understand the anatomy and functioning of the MTL in (1) disease progression, (2) memory processes and (3) learning processes. I was interested in a population with mild cognitive impairment (MCI), at risk for AD. The objective of the first project was to test the hypothesis that factors other than cognitive ones, such as personality traits, can explain inter-individual differences in the MTL. Moreover, the phenotypic diversity in the manifestations of preclinical AD also arises from our limited knowledge of memory and learning processes in the healthy brain.
The objective of the second project concerns the investigation of the sub-regions of the MTL, and more particularly their contributions to the different components of recognition memory in healthy subjects. To study this, I used a new multivariate method as well as high-resolution MRI to test the contribution of those sub-regions to the processes of familiarity and recollection. Finally, the objective of the third project was to test the contribution of the MTL as a memory system in learning, and the dynamic interaction between memory systems during learning. The results of the first project show that, beyond the cognitive impairment observed in the MCI population, personality traits can explain inter-individual differences in the MTL, notably with a higher contribution of neuroticism linked to proneness to stress and depression. My study identified a pattern of anatomical abnormality in the MTL related to personality, using measures of volume and mean diffusivity of the tissue. That pattern is characterized by a right-left asymmetry in the MTL and an anterior-to-posterior gradient within the MTL. I interpreted this result in terms of tissue and neurochemical properties that are differently sensitive to stress. The results of my second project contributed to the current debate on the contribution of MTL sub-regions to the processes of familiarity and recollection. Using a new multivariate method, the results first support a dissociation of the sub-regions associated with different memory components. The hippocampus was mostly associated with recollection, and the surrounding parahippocampal cortex with familiarity. Secondly, the activation corresponding to the memory trace for each type of memory is characterized by a distinct spatial distribution.
The specific "sparse-distributed" neuronal representation associated with recollection in the hippocampus would be the best way to rapidly encode detailed memories without overwriting previously stored memories. In the third project, I designed a learning task with functional MRI to study the learning of probabilistic associations based on feedback/reward. That study allowed me to highlight the role of the MTL in learning and the interaction between different memory systems, such as procedural memory, perceptual memory or priming, and working memory. We found activations in the MTL corresponding to a process of episodic memory; in the basal ganglia (BG), to procedural memory and reward; in the occipito-temporal (OT) cortex, to perceptual memory or priming; and in the prefrontal cortex, to working memory. We also observed that those regions can interact; the relation between the MTL and the BG was interpreted as a competition, as already reported in recent studies. In addition, with a dynamic causal model, I demonstrated a "top-down" influence from cortical regions associated with higher-level processes, such as the prefrontal cortex, on lower-level cortical regions such as the OT cortex. That influence decreases during learning, which could correspond to a mechanism linked to a reduction of prediction error. My interpretation is that this is at the origin of semantic knowledge. I have also shown that the subject's choices and the associated brain activation are influenced by personality traits and negative affect. The overall results of this thesis led me to propose (1) a model explaining the possible mechanisms behind the influence of personality on the MTL in a population with MCI, and (2) a dissociation of MTL sub-regions in different memory types, with a neuronal representation specific to each region. This could be a clue toward resolving the current debates on recognition memory.
Finally, (3) the MTL is also a memory system involved in learning that can interact with the BG through competition. We have also shown a dynamic interaction of "top-down" and "bottom-up" types between the prefrontal cortex and the OT cortex. In conclusion, these results could provide clues to better understand some memory dysfunctions in aging and Alzheimer's disease, and to improve the development of treatments.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work is to virtually emulate the working environment of the Stäubli Tx60 robot in the robotics laboratory of the UdG (within the possibilities offered by the acquired software). This laboratory aims to reproduce an industrial working environment in which the assembly of a set of parts is carried out in a fully automated way. In a first phase, the entire working environment available in the laboratory was designed in three dimensions using the SolidWorks CAD software. Each of the assemblies that make up the workstation was designed independently. Subsequently, all the designed elements were imported into the Stäubli Robotics Suite 2013 software. In summary, the main objective of the work consists of two stages. First, the 3D model of the working environment is designed with SolidWorks and imported into Stäubli Robotics Suite 2013. In a second stage, a user manual for the new robotics software is produced.

Relevance:

10.00%

Publisher:

Abstract:

This paper is concerned with the contribution of forensic science to the legal process by helping reduce uncertainty. Although it is now widely accepted that uncertainty should be handled by probability because it is a safeguard against incoherent proceedings, there remain diverging and conflicting views on how probability ought to be interpreted. This is exemplified by the proposals in scientific literature that call for procedures of probability computation that are referred to as "objective," suggesting that scientists ought to use them in their reporting to recipients of expert information. I find such proposals objectionable. They need to be viewed cautiously, essentially because ensuing probabilistic statements can be perceived as making forensic science prescriptive. A motivating example from the context of forensic DNA analysis will be chosen to illustrate this. As a main point, it shall be argued that such constraining suggestions can be avoided by interpreting probability as a measure of personal belief, that is, subjective probability. Invoking references to foundational literature from mathematical statistics and philosophy of science, the discussion will explore the consequences of this interdisciplinary viewpoint for the practice of forensic expert reporting. It will be emphasized that, as an operational interpretation of probability, the subjectivist perspective enables forensic science to add value to the legal process, in particular by avoiding inferential impasses to which other interpretations of probability may lead. Moreover, understanding probability from a subjective perspective can encourage participants in the legal process to take on more responsibility in matters regarding the coherent handling of uncertainty. This would assure more balanced interactions at the interface between science and the law. This, in turn, provides support for ongoing developments that can be called the "probabilization" of forensic science.
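The division of labour the subjectivist view supports can be made concrete with a toy sketch (all numbers invented, not taken from the paper): the scientist reports a likelihood ratio, while the prior odds, and hence the posterior, remain the recipient's responsibility.

```python
# Toy illustration: Bayes' rule in odds form separates the scientist's
# contribution (the likelihood ratio) from the recipient's prior odds,
# which embody a personal degree of belief.
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    return odds / (1.0 + odds)

lr = 1 / 1e-6   # e.g. a DNA profile with an assumed match probability of 1e-6
for prior in (1 / 1_000_000, 1 / 1_000):    # two different personal priors
    p = odds_to_prob(posterior_odds(prior, lr))
    print(f"prior odds {prior:g} -> posterior probability {p:.4f}")
```

The same reported likelihood ratio yields very different posteriors under different priors, which is why the prior cannot coherently be the scientist's to fix.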

Relevance:

10.00%

Publisher:

Abstract:

Ordered weighted averaging (OWA) operators and their extensions are powerful tools used in numerous decision-making problems. This class of operator belongs to a more general family of aggregation operators, understood as discrete Choquet integrals. Aggregation operators are usually characterized by indicators. In this article four indicators usually associated with the OWA operator are extended to discrete Choquet integrals: namely, the degree of balance, the divergence, the variance indicator and Rényi entropies. All of these indicators are considered from a local and a global perspective. Linearity of indicators for linear combinations of capacities is investigated and, to illustrate the application of the results, indicators of the probabilistic ordered weighted averaging (POWA) operator are derived. Finally, an example is provided to show the application to a specific context.
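A minimal sketch of the OWA operator and two of the indicators mentioned (the degree of balance, and Shannon entropy as a limiting case of the Rényi family); the weighting vector below is invented for illustration:

```python
# OWA aggregation: the weights attach to the *sorted* arguments, not to
# particular inputs, which is what distinguishes OWA from a weighted mean.
import math

def owa(weights, values):
    """Ordered weighted average: weights apply to values sorted descending."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def balance(weights):
    """Degree of balance in [-1, 1]: +1 is max-like, -1 is min-like."""
    n = len(weights)
    return sum((n + 1 - 2 * j) / (n - 1) * w
               for j, w in enumerate(weights, start=1))

def shannon_entropy(weights):
    """Dispersion of the weighting vector (Rényi entropy as alpha -> 1)."""
    return -sum(w * math.log(w) for w in weights if w > 0)

w = [0.4, 0.3, 0.2, 0.1]
print(owa(w, [3, 1, 4, 2]))                 # aggregates the sorted list 4,3,2,1
print(balance(w), shannon_entropy(w))
```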

Relevance:

10.00%

Publisher:

Abstract:

Sobriety checkpoints are not usually randomly located by traffic authorities. As such, information provided by non-random alcohol tests cannot be used to infer the characteristics of the general driving population. In this paper a case study is presented in which the prevalence of alcohol-impaired driving is estimated for the general population of drivers. A stratified probabilistic sample was designed to represent vehicles circulating in non-urban areas of Catalonia (Spain), a region characterized by its complex transportation network and dense traffic around the metropolis of Barcelona. Random breath alcohol concentration tests were performed during spring 2012 on 7,596 drivers. The estimated prevalence of alcohol-impaired drivers was 1.29%, which is roughly a third of the rate obtained in non-random tests. Higher rates were found on weekends (1.90% on Saturdays, 4.29% on Sundays) and especially at night. The rate is higher for men (1.45%) than for women (0.64%) and the percentage of positive outcomes shows an increasing pattern with age. In vehicles with two occupants, the proportion of alcohol-impaired drivers is estimated at 2.62%, but when the driver was alone the rate drops to 0.84%, which might reflect the socialization of drinking habits. The results are compared with outcomes in previous surveys, showing a decreasing trend in the prevalence of alcohol-impaired drivers over time.
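The stratified estimate behind figures of this kind can be sketched as follows; the strata shares and counts below are invented for illustration and are not the survey's data:

```python
# Stratified prevalence: each stratum's positive rate is weighted by the
# stratum's share of circulating traffic, not by how many tests happened
# to be carried out there - that is what corrects the non-random-checkpoint
# bias described in the abstract.
def stratified_prevalence(strata):
    """strata: list of (traffic_share, positives, tests); shares sum to 1."""
    assert abs(sum(s[0] for s in strata) - 1.0) < 1e-9
    return sum(share * pos / tests for share, pos, tests in strata)

# weekday daytime, weekend daytime, weekend night (all figures invented)
strata = [(0.70, 20, 4000), (0.20, 30, 2000), (0.10, 60, 1596)]
print(f"weighted prevalence: {100 * stratified_prevalence(strata):.2f}%")
```

Note how the high-prevalence night stratum contributes little to the weighted total because its traffic share is small, even though checkpoints oversample it.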

Relevance:

10.00%

Publisher:

Abstract:

Safety is paramount in nuclear energy production. Probabilistic risk analysis can be used to assess whether safety requirements are met in different situations. This thesis examines the use of probabilistic risk analysis in assessing the effects of cable fires at a nuclear power plant. The purpose of the work is to contribute to improving cable fire safety at nuclear power plants. The thesis presents the principles of probabilistic risk analysis and probabilistic fire analysis, as well as current cable fire analysis methods. Based on existing methods, a method was developed for assessing the cable fire safety of the Olkiluoto 1 and 2 plant units. The thesis also reviews cable fires that have occurred around the world, as well as software developed for fire simulation at nuclear power plants. The cable fire analysis developed in this work is divided into two main phases: circuit failure analysis and circuit failure probability analysis. The circuit failure analysis comprises determining cable failure modes, circuit failure classes, and the effects of the failures. The circuit failure probability analysis, in turn, determines failure probabilities based on the results of cable fire tests. As an example, the developed analysis method was applied to two rooms of the Olkiluoto 1 and 2 plant units. The result was a set of failure models for the circuits of safety-significant systems, together with their probabilities. Based on the results, the cable fire analysis method developed in this work performed well. In the future, the method is intended to be used in assessing the cable fire safety of the Olkiluoto 1 and 2 plant units.
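The probability phase can be illustrated with a minimal sketch; the test counts are invented, and the Beta-Binomial model is chosen here for illustration rather than taken from the thesis:

```python
# Estimating a circuit failure-mode probability (e.g. a hot short) from
# cable fire test outcomes, using a Beta(1, 1) prior so that sparse test
# data does not produce a hard zero or one.
def beta_posterior_mean(successes: int, trials: int,
                        a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of a Beta-Binomial model with a Beta(a, b) prior."""
    return (a + successes) / (a + b + trials)

hot_shorts, tests = 13, 60          # invented cable fire test counts
p = beta_posterior_mean(hot_shorts, tests)
print(f"estimated hot-short probability: {p:.3f}")
```

The estimate would then feed the circuit failure models described above, one probability per failure mode.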

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate to assess the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
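The idea of simulating dependent positions can be sketched in a much simplified form (this is not the Coev model itself): evolve the pair of nucleotides as a single state of a joint Markov chain that favours an assumed set of co-adapted pairs.

```python
# Simplified sketch: substitutions proposed at either position of the pair
# are accepted at a higher rate when they produce a pair from an assumed
# co-adapted profile, so the two positions evolve dependently.
import random

NUC = "ACGT"
FAVOURED = {"AT", "GC"}   # assumed co-adapted pair profile (invented)

def step(pair, rate_to_favoured=0.9, rate_other=0.1):
    """One proposed substitution event at a random position of the pair."""
    pos = random.randrange(2)
    new = random.choice(NUC.replace(pair[pos], ""))
    cand = pair[:pos] + new + pair[pos + 1:]
    rate = rate_to_favoured if cand in FAVOURED else rate_other
    return cand if random.random() < rate else pair

random.seed(1)
pair = "AA"
for _ in range(2000):
    pair = step(pair)
print(pair)   # after many events the pair tends to sit in FAVOURED
```

Simulating each position independently could never reproduce this excess of co-adapted pairs, which is why independent-site simulators are unsuitable benchmarks for coevolution detectors.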

Relevance:

10.00%

Publisher:

Abstract:

Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process for age estimation includes the observation of the degree of maturity reached by some physical attributes, such as dentition or several ossification centers. The estimated chronological age, or the probability that an individual belongs to a meaningful class of ages, is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing, in a transparent and logical way, the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a statistical parametric model which links chronological age and degree of maturity through specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset related to the ossification status of the medial clavicular epiphysis, and the results indicate that the classification of individuals does not depend on the choice of the regression model.
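The transition-analysis idea can be sketched as follows; the probit parameters, prior range, and age threshold are invented for illustration and are not the paper's estimates:

```python
# A cumulative probit links chronological age to the probability that a
# maturity stage has been reached (here, fusion of the medial clavicular
# epiphysis); Bayes' rule then yields P(age >= threshold | stage observed).
import math

def probit_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_fused(age: float, mean: float = 21.0, sd: float = 3.0) -> float:
    """P(stage 'fused' reached | age); mean and sd are invented."""
    return probit_cdf((age - mean) / sd)

ages = list(range(15, 31))                   # flat prior over 15-30 years
post = {a: p_fused(a) / len(ages) for a in ages}
z = sum(post.values())
posterior = {a: p / z for a, p in post.items()}
p_over_18 = sum(p for a, p in posterior.items() if a >= 18)
print(f"P(age >= 18 | fused) = {p_over_18:.3f}")
```

Swapping the probit for a logit changes `p_fused` but, as the paper reports for its data, leaves the resulting classification essentially unchanged.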

Relevance:

10.00%

Publisher:

Abstract:

Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to the necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.
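The TFS membership test can be illustrated for a single joint; the moment arms and tension limits below are invented, and the fibres-only-pull constraint is what makes the feasible set an interval rather than all of the torque axis:

```python
# Toy torque-feasible space (TFS) test for one joint: with fibre tensions
# t_i in [0, t_max_i] and joint torque sum(r_i * t_i), a target torque is
# feasible iff it lies between the extreme torques the fibre set can produce.
def torque_feasible(tau, moment_arms, t_max):
    """Fibres can only pull, so each fibre contributes 0 up to r_i*t_max_i."""
    lo = sum(min(r * tm, 0.0) for r, tm in zip(moment_arms, t_max))
    hi = sum(max(r * tm, 0.0) for r, tm in zip(moment_arms, t_max))
    return lo <= tau <= hi

# two abductor fibres (positive arms) and one adductor (negative), invented
arms = [0.03, 0.025, -0.02]       # moment arms, m
t_max = [800.0, 600.0, 700.0]     # maximal tensions, N
print(torque_feasible(25.0, arms, t_max))   # within the TFS
print(torque_feasible(60.0, arms, t_max))   # outside the TFS
```

A fibre topology that leaves a required torque outside the TFS forces the optimiser to silence or saturate muscles, which is the kind of effect the paper's DLTa analysis traces back to fibre geometry.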

Relevance:

10.00%

Publisher:

Abstract:

Throughout the evolution of technology, devices interconnected by cables have been used. Cables limit the user's freedom of movement and can pick up interference from one another when the amount of wiring is large. As wireless technology advanced, it was adapted to electronic equipment, which at the same time kept getting smaller. Hence the need to use such devices as remote controls without cables, given the drawbacks these entail. The present work aims to unify three technologies that may have a strong affinity in the future. · Devices based on the Android system. Since their beginnings, they have evolved at a meteoric pace, becoming ever faster and better. · Wireless systems. Wi-Fi and Bluetooth systems have become increasingly integrated into our lives and are present in practically any device. · Robotics. Every production process incorporates a robot. Robots are needed for many jobs that, although a human could perform them, a robot completes in less time and with less danger. Although the first two technologies go hand in hand (who does not own a phone with Wi-Fi and Bluetooth?), few designs combine these fields with robotics. The final objective of this work is to develop an Android application for the remote control of a robot using wireless communication. The developed application allows the user to control the robot at will in a touch/remote-controlled environment. Thanks to the use of simulators for both languages (RAPID and Android), it was possible to carry out the programming without being physically present at the robot that is the subject of this work. As the project progressed, the amount of data sent to the robot and the complexity of its processing increased, while the aesthetics of the application also improved. Finally, the developed application was used with the robot, successfully making it perform the movements sent from the programmed tablet.

Relevance:

10.00%

Publisher:

Abstract:

Chronic graft-versus-host disease (cGvHD) is the leading cause of late nonrelapse mortality (transplant-related mortality) after hematopoietic stem cell transplant. Given that there is a wide range of treatment options for cGvHD, assessment of the associated costs and efficacy can help clinicians and health care providers allocate health care resources more efficiently. OBJECTIVE: The purpose of this study was to assess the cost-effectiveness of extracorporeal photopheresis (ECP) compared with rituximab (Rmb) and with imatinib (Imt) in patients with cGvHD at 5 years, from the perspective of the Spanish National Health System. METHODS: The model assessed the incremental cost-effectiveness/utility ratio of ECP versus Rmb or Imt for 1000 hypothetical patients by using microsimulation cost-effectiveness techniques. Model probabilities were obtained from the literature. Treatment pathways and adverse events were evaluated taking clinical opinion and published reports into consideration. Local data on costs (2010 euros) and health care resource utilization were validated by the clinical authors. Probabilistic sensitivity analyses were used to assess the robustness of the model. RESULTS: The greater efficacy of ECP resulted in a gain of 0.011 to 0.024 quality-adjusted life-years in the first year and 0.062 to 0.094 at year 5 compared with Rmb or Imt. The results showed that the higher acquisition cost of ECP versus Imt was compensated for at 9 months by greater efficacy; this higher cost was partially compensated for (€517) by year 5 versus Rmb. After 9 months, ECP was dominant (cheaper and more effective) compared with Imt. The incremental cost-effectiveness ratio of ECP versus Rmb was €29,646 per life-year gained and €24,442 per quality-adjusted life-year gained at year 2.5. Probabilistic sensitivity analysis confirmed the results. The main study limitation was that, to assess relative treatment effects, only small studies were available for indirect comparison.
CONCLUSION: ECP as a third-line therapy for cGvHD is a more cost-effective strategy than Rmb or Imt.
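The microsimulation approach can be sketched as follows; every transition probability, cost, and utility below is invented for illustration and does not come from the study:

```python
# Stylised patient-level microsimulation: patients move monthly between
# 'response', 'progression' and 'death'; averaging simulated costs and
# QALYs per arm yields an incremental cost-effectiveness ratio (ICER).
import random

def simulate_patient(p_progress, p_death, cost_month, utility, months=60):
    state, cost, qaly = "response", 0.0, 0.0
    for _ in range(months):
        if state == "death":
            break
        cost += cost_month[state]
        qaly += utility[state] / 12          # monthly QALY accrual
        r = random.random()
        if r < p_death[state]:
            state = "death"
        elif state == "response" and r < p_death[state] + p_progress:
            state = "progression"
    return cost, qaly

def arm(p_progress, cost_month, n=1000):
    random.seed(7)                           # common random numbers per arm
    p_death = {"response": 0.004, "progression": 0.02}
    utility = {"response": 0.8, "progression": 0.5}
    runs = [simulate_patient(p_progress, p_death, cost_month, utility)
            for _ in range(n)]
    return (sum(c for c, _ in runs) / n, sum(q for _, q in runs) / n)

old = arm(0.05, {"response": 800.0, "progression": 1500.0})
new = arm(0.03, {"response": 1200.0, "progression": 1500.0})   # slower progression
icer = (new[0] - old[0]) / (new[1] - old[1])
print(f"ICER: {icer:,.0f} per QALY")
```

A probabilistic sensitivity analysis would wrap this in an outer loop that redraws the transition probabilities and costs from their uncertainty distributions.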

Relevance:

10.00%

Publisher:

Abstract:

Objectives: The aim of the study was to combine clinical results from the European Cohort of the REVERSE study with the costs associated with adding cardiac resynchronization therapy (CRT) to optimal medical therapy (OMT) in patients with mildly symptomatic (NYHA I-II) or asymptomatic left ventricular dysfunction and markers of cardiac dyssynchrony in Spain. Methods: A Markov model was developed comparing CRT + OMT (CRT-ON) versus OMT only (CRT-OFF), based on a retrospective cost-effectiveness analysis. Raw data were derived from the literature and expert opinion, reflecting the clinical and economic consequences of patient management in Spain. The time horizon was 10 years. Both costs (2010 euros) and effects were discounted at 3 percent per annum. Results: CRT-ON showed higher total costs than CRT-OFF; however, CRT reduced the length of hospitalization in the ICU by 94 percent (0.006 versus 0.091 days) and in the general ward by 34 percent (0.705 versus 1.076 days). Surviving CRT-ON patients (88.2 percent versus 77.5 percent) remained in a better functional class for longer, and they achieved an improvement of 0.9 life-years gained (LYGs) and 0.77 quality-adjusted life-years (QALYs). CRT-ON proved to be cost-effective after 6 years, except for the 7th year due to battery depletion. At 10 years, the results were €18,431 per LYG and €21,500 per QALY gained. Probabilistic sensitivity analysis showed that CRT-ON was cost-effective in 75.4 percent of the cases at 10 years. Conclusions: The use of CRT added to OMT represents an efficient use of resources in patients suffering from heart failure in NYHA functional classes I and II.
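The Markov cohort logic with 3 percent annual discounting can be sketched as follows; the states, transition probabilities, costs, and utilities are invented placeholders, not the study's inputs:

```python
# Deterministic cohort model for a CRT-ON vs CRT-OFF style comparison:
# an annual Markov cycle over NYHA-style states, with costs and QALYs
# discounted at 3% per year over a 10-year horizon.
def cohort_model(trans, cost, utility, years=10, disc=0.03):
    """trans[s][s2] = annual transition probability; returns (cost, QALY)."""
    dist = {"NYHA_I_II": 1.0, "NYHA_III_IV": 0.0, "dead": 0.0}
    tot_cost = tot_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t
        tot_cost += d * sum(dist[s] * cost[s] for s in dist)
        tot_qaly += d * sum(dist[s] * utility[s] for s in dist)
        dist = {s2: sum(dist[s] * trans[s].get(s2, 0.0) for s in dist)
                for s2 in dist}
    return tot_cost, tot_qaly

utility = {"NYHA_I_II": 0.85, "NYHA_III_IV": 0.60, "dead": 0.0}
on = cohort_model(   # CRT-ON: slower progression, lower mortality (invented)
    {"NYHA_I_II": {"NYHA_I_II": 0.90, "NYHA_III_IV": 0.07, "dead": 0.03},
     "NYHA_III_IV": {"NYHA_I_II": 0.05, "NYHA_III_IV": 0.85, "dead": 0.10},
     "dead": {"dead": 1.0}},
    {"NYHA_I_II": 3500.0, "NYHA_III_IV": 6000.0, "dead": 0.0}, utility)
off = cohort_model(  # CRT-OFF: OMT only (invented)
    {"NYHA_I_II": {"NYHA_I_II": 0.82, "NYHA_III_IV": 0.13, "dead": 0.05},
     "NYHA_III_IV": {"NYHA_I_II": 0.03, "NYHA_III_IV": 0.82, "dead": 0.15},
     "dead": {"dead": 1.0}},
    {"NYHA_I_II": 2000.0, "NYHA_III_IV": 6000.0, "dead": 0.0}, utility)
icer = (on[0] - off[0]) / (on[1] - off[1])
print(f"incremental cost per QALY: {icer:,.0f}")
```

A device-replacement cost added in the battery-depletion year would reproduce the study's temporary loss of cost-effectiveness at year 7.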