960 results for cost estimation


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Partner notification is essential to the comprehensive case management of sexually transmitted infections. Systematic reviews and mathematical modelling can be used to synthesise information about the effects of new interventions to enhance the outcomes of partner notification. OBJECTIVE To study the effectiveness and cost-effectiveness of traditional and new partner notification technologies for curable sexually transmitted infections (STIs). DESIGN Secondary data analysis of clinical audit data; systematic reviews of randomised controlled trials (MEDLINE, EMBASE and Cochrane Central Register of Controlled Trials) published from 1 January 1966 to 31 August 2012 and of studies of health-related quality of life (HRQL) [MEDLINE, EMBASE, ISI Web of Knowledge, NHS Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA)] published from 1 January 1980 to 31 December 2011; static models of clinical effectiveness and cost-effectiveness; and dynamic modelling studies to improve parameter estimation and examine effectiveness. SETTING General population and genitourinary medicine clinic attenders. PARTICIPANTS Heterosexual women and men. INTERVENTIONS Traditional partner notification by patient or provider referral, and new partner notification by expedited partner therapy (EPT) or its UK equivalent, accelerated partner therapy (APT). MAIN OUTCOME MEASURES Population prevalence; index case reinfection; and partners treated per index case. RESULTS Expedited partner therapy reduced reinfection in index cases with curable STIs more than simple patient referral [risk ratio (RR) 0.71; 95% confidence interval (CI) 0.56 to 0.89]. There are no randomised trials of APT. The median number of partners treated for chlamydia per index case in UK clinics was 0.60. The number of partners needed to treat to interrupt transmission of chlamydia was lower for casual than for regular partners. In dynamic model simulations, > 10% of partners are chlamydia positive with look-back periods of up to 18 months. In the presence of a chlamydia screening programme that reduces population prevalence, treatment of current partners achieves most of the additional reduction in prevalence attributable to partner notification. Dynamic model simulations show that cotesting and treatment for chlamydia and gonorrhoea reduce the prevalence of both STIs. APT has a limited additional effect on prevalence but reduces the rate of index case reinfection. Published quality-adjusted life-year (QALY) weights were of insufficient quality to be used in a cost-effectiveness study of partner notification in this project. Using an intermediate outcome of cost per infection diagnosed, doubling the efficacy of partner notification from 0.4 to 0.8 partners treated per index case was more cost-effective than increasing chlamydia screening coverage. CONCLUSIONS There is evidence to support the improved clinical effectiveness of EPT in reducing index case reinfection. In a general heterosexual population, partner notification identifies new infected cases but its impact on chlamydia prevalence is limited. Notification of casual partners might have a greater impact than notification of regular partners in genitourinary clinic populations.
Recommendations for future research are: (1) randomised controlled trials with biological outcomes should assess the effectiveness of APT and of methods to increase testing for human immunodeficiency virus (HIV) and STIs after APT; (2) collection of HRQL data should be a priority, to determine QALYs associated with the sequelae of curable STIs; and (3) standardised parameter sets for curable STIs should be developed for mathematical models of STI transmission that are used for policy-making. FUNDING The National Institute for Health Research Health Technology Assessment programme.
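
As a hedged illustration of the headline statistic, the sketch below computes a risk ratio and its 95% confidence interval on the log scale from a 2x2 table; the event counts are invented for illustration, not the review's trial data.

    import math

    def risk_ratio_ci(a: int, n1: int, c: int, n2: int) -> tuple[float, float, float]:
        """RR of reinfection (intervention vs control) with a 95% log-normal CI."""
        rr = (a / n1) / (c / n2)
        se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
        lo = math.exp(math.log(rr) - 1.96 * se_log)
        hi = math.exp(math.log(rr) + 1.96 * se_log)
        return rr, lo, hi

    # Hypothetical counts: 85/1000 reinfections with EPT vs 120/1000 with referral.
    rr, lo, hi = risk_ratio_ci(a=85, n1=1000, c=120, n2=1000)
    print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")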

Relevance:

30.00%

Publisher:

Abstract:

The dissertation reviews the recommendations of the Panel on Cost Effectiveness in Health and Medicine (Panel), convened by the US Public Health Service in 1993, in four areas: aggregation of costs and benefits, methods of estimating resources used, definition of the population impacted, and the perspective used in cost-benefit analysis. Financial data from a clinical trial were used to test whether different approaches in each of these four areas would change the net benefit resulting from a cost-benefit analysis. Differences in the aggregation of costs and benefits resulted in the same net benefit, but not the same cost/benefit ratios. Differences in resource use estimation methods, population subgroup definitions and perspectives all produced different net benefits. Differences in perspective resulted in different, and often opposing, decisions as to whether the proposed intervention from the clinical trial should be implemented.
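
A tiny worked example of the aggregation finding, with illustrative figures rather than the trial's financial data: reclassifying an amount x between "added benefit" and "avoided cost" leaves the net benefit B - C unchanged but alters the benefit/cost ratio.

    # x counted as benefit vs x counted as a cost offset (assumed numbers).
    B, C, x = 1_000_000.0, 600_000.0, 200_000.0

    net_a, ratio_a = B - C, B / C                          # x as benefit
    net_b, ratio_b = (B - x) - (C - x), (B - x) / (C - x)  # x as cost offset

    print(net_a == net_b)                        # True: net benefit invariant
    print(round(ratio_a, 2), round(ratio_b, 2))  # 1.67 vs 2.0: ratios differ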

Relevance:

30.00%

Publisher:

Abstract:

This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around a cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation, conducted to identify characteristics of hypothetical cost-effectiveness data sets which might lead one CI estimation technique to outperform another; and (2) an extant data set derived from the National AIDS Demonstration Research (NADR) project, whose characteristics were matched to the simulation results. The four methods were used to calculate CIs for this data set, and the results were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the “true” CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method for higher values of the CV of effectiveness, given the correlation between average costs and effects and the CV of effectiveness. The results for the data set indicated that the bias-corrected CIs were wider than the percentile method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favourable for the parameter specifications investigated in this study. However, the Taylor method is preferred for a low CV of effect, and the percentile method is more favourable for a higher CV of effect.
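
A minimal sketch of one of the four methods, the percentile bootstrap, assuming simulated cost and effect samples rather than the NADR data: patients are resampled with replacement, the CE ratio is recomputed on each resample, and the CI is read off the percentiles of the bootstrap distribution.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200
    costs = rng.gamma(shape=4.0, scale=250.0, size=n)    # per-patient costs (assumed)
    effects = rng.normal(loc=0.4, scale=0.15, size=n)    # per-patient effects (assumed)

    B = 5000
    ratios = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)                 # resample patients
        ratios[b] = costs[idx].mean() / effects[idx].mean()

    lo, hi = np.percentile(ratios, [2.5, 97.5])          # percentile 95% CI
    print(f"CE ratio: {costs.mean() / effects.mean():.0f}, 95% CI [{lo:.0f}, {hi:.0f}]")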

Relevance:

30.00%

Publisher:

Abstract:

The acquisition of accurate information on the size of traits in animals is fundamental for the study of animal ecology and evolution and for their management. We demonstrate how morphological traits of free-ranging animals can be reliably estimated at very large observation distances of several hundred metres using ordinary digital photographic equipment and simple photogrammetric software. In our study, we estimated the length of horn annuli in free-ranging male Alpine ibex (Capra ibex) by using already-measured horn annuli of conspecifics on the same photographs as scaling units. Comparisons with hand-measured horn annuli lengths and repeatability analyses revealed a high accuracy of the photogrammetric estimates. If length estimations of specific horn annuli are based on multiple photographs, measurement errors of <5.5 mm can be expected. In the current study, the application of the described photogrammetric procedure increased the sample size of animals with known horn annuli lengths by an additional 104%. The presented photogrammetric procedure is broadly applicable and represents an easy, robust and cost-efficient method for measuring individuals in populations where animals are hard to capture or to approach.
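
A minimal sketch of the scaling step described above: a previously hand-measured horn annulus on the same photograph serves as the scale that converts pixel lengths into real lengths, and averaging over several photographs reduces the measurement error. The numbers are illustrative, not data from the study.

    def estimate_length(target_px: float, ref_px: float, ref_mm: float) -> float:
        """Real length of the target from pixel lengths and a known reference."""
        return target_px * (ref_mm / ref_px)

    # (target_px, ref_px) pairs from three hypothetical photographs.
    photos = [(412.0, 530.0), (398.0, 512.0), (405.0, 521.0)]
    ref_mm = 86.0                                  # hand-measured reference annulus
    estimates = [estimate_length(t, r, ref_mm) for t, r in photos]
    print("mean estimate [mm]:", round(sum(estimates) / len(estimates), 1))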

Relevance:

30.00%

Publisher:

Abstract:

This study aims to address two research questions. First, ‘Can we identify factors that are determinants both of improved health outcomes and of reduced costs for hospitalized patients with one of six common diagnoses?’ Second, ‘Can we identify other factors that are determinants of improved health outcomes for such hospitalized patients but which are not associated with costs?’ The Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) database from 2003 to 2006 was employed in this study. The total study sample consisted of hospitals which had at least 30 patients each year for the given diagnosis: 954 hospitals for acute myocardial infarction (AMI), 1552 hospitals for congestive heart failure (CHF), 1120 hospitals for stroke (STR), 1283 hospitals for gastrointestinal hemorrhage (GIH), 979 hospitals for hip fracture (HIP), and 1716 hospitals for pneumonia (PNE). This study used simultaneous equations models to investigate the determinants of improvement in health outcomes and of cost reduction in hospital inpatient care for these six common diagnoses. In addition, the study used instrumental variables and a two-stage least squares random-effects model for unbalanced panel data estimation. The study concluded that a few factors were determinants of both high quality and low cost. Specifically, high specialty was a determinant of high quality and low costs for CHF patients; small hospital size was a determinant of high quality and low costs for AMI patients. Furthermore, CHF patients who were treated in Midwest, South, and West region hospitals had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals. Gastrointestinal hemorrhage and pneumonia patients who were treated in South region hospitals also had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals. This study found that six non-cost factors were related to health outcomes for a few diagnoses: hospital volume, percentage emergency room admissions for a given diagnosis, hospital competition, specialty, bed size, and hospital region.
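
A hedged, cross-sectional simplification of the two-stage least squares idea used above (the random-effects panel structure and the actual HCUP variables and instruments are omitted): instruments first predict the endogenous regressor, and the outcome is then regressed on the fitted values.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=(n, 2))                    # instruments (assumed exogenous)
    u = rng.normal(size=n)                         # confounder driving endogeneity
    cost = z @ np.array([1.0, -0.5]) + u + rng.normal(size=n)   # endogenous regressor
    outcome = 2.0 * cost + 3.0 * u + rng.normal(size=n)         # structural equation

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    X1 = np.column_stack([np.ones(n), z])          # first stage: cost on instruments
    cost_hat = X1 @ ols(X1, cost)

    X2 = np.column_stack([np.ones(n), cost_hat])   # second stage: outcome on fitted cost
    beta = ols(X2, outcome)
    print("2SLS estimate of the cost effect (true 2.0):", round(beta[1], 2))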

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is the design and implementation of the digital control stage of a 280 W AC/DC industrial power supply in a single low-cost microcontroller, replacing the analog control stage. The switch-mode power supply (SMPS) consists of a PFC boost converter with fixed-frequency operation and a variable-frequency LLC series resonant DC/DC converter. The input voltage range is 85 VRMS to 550 VRMS and the output voltage range is 24 V to 28 V. A digital controller is especially suitable for this kind of SMPS in order to implement its multiple functionalities and to keep the efficiency and performance high over the wide range of input voltages. Additional advantages of the digital control are reliability and size. The optimized design and implementation of the digital control stage are presented. Experimental results show the stable operation of the controlled system and provide an estimation of the cost reduction achieved with the digital control stage.
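
A minimal sketch of the kind of discrete-time loop a single microcontroller runs in such a design: one PI update per control period with anti-windup clamping. Gains, setpoint, period and limits are illustrative assumptions, not the design values of the 280 W supply.

    KP, KI = 0.02, 5.0          # assumed PI gains
    TS = 1.0 / 100e3            # assumed 100 kHz control period [s]
    V_REF = 24.0                # output-voltage setpoint [V]
    DUTY_MIN, DUTY_MAX = 0.05, 0.95

    integral = 0.0

    def pi_update(v_out: float) -> float:
        """One PI step with anti-windup clamping; returns the PWM duty cycle."""
        global integral
        error = V_REF - v_out
        integral += KI * error * TS
        integral = min(max(integral, DUTY_MIN), DUTY_MAX)   # anti-windup
        duty = KP * error + integral
        return min(max(duty, DUTY_MIN), DUTY_MAX)

    # Example: one control step after reading the output-voltage ADC.
    print("duty:", round(pi_update(23.2), 4))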

Relevance:

30.00%

Publisher:

Abstract:

Statically predicting the running time of programs has many applications, ranging from task scheduling in parallel execution to proving that a program meets strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one reason why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds on actual costs) has received considerable attention.

Relevance:

30.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
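
A hypothetical sketch of the calibration idea: a one-time profiling run measures the platform-dependent time of each low-level operation, and those times then turn platform-independent cost bounds (operation counts) into execution-time bounds. Operation names and stand-in workloads are illustrative, not CiaoPP's actual parameter set.

    import time

    def profile_op(op, reps: int = 100_000) -> float:
        """One-time profiling: average seconds per operation on this platform."""
        start = time.perf_counter()
        for _ in range(reps):
            op()
        return (time.perf_counter() - start) / reps

    # Calibration: seconds per resolution step / unification (assumed op set).
    platform = {
        "resolution": profile_op(lambda: None),      # stand-in for a resolution step
        "unification": profile_op(lambda: (1, 2)),   # stand-in for a unification
    }

    def time_bound(cost_bounds: dict[str, float]) -> float:
        """Combine static op-count bounds with calibrated per-op times [s]."""
        return sum(count * platform[op] for op, count in cost_bounds.items())

    # e.g. an upper bound of n resolutions and 2n unifications for n = 1000.
    n = 1000
    print("upper time bound [s]:", time_bound({"resolution": n, "unification": 2 * n}))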

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications. A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible simply to guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than its structured counterpart. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method. The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling. This work is organised as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
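
A minimal sketch of the core τ-estimation idea on a 1D model problem (assumed here purely for illustration; the thesis treats full finite-volume CFD): the fine-grid solution is restricted to the coarse grid and inserted into the coarse-grid operator, and the resulting non-zero residual estimates the local truncation error.

    import numpy as np

    # Model problem: -u'' = f on [0, 1], u(0) = u(1) = 0, f = pi^2 sin(pi x).

    def residual(u, f, h):
        """Coarse-grid operator applied to u minus the source: A_H(u) - f."""
        r = np.zeros_like(u)
        r[1:-1] = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2 - f[1:-1]
        return r

    def solve(n):
        """Second-order finite-difference solve on n intervals."""
        x = np.linspace(0.0, 1.0, n + 1)
        h = x[1] - x[0]
        f = np.pi**2 * np.sin(np.pi * x)
        A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
             - np.diag(np.ones(n - 2), -1)) / h**2
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, f[1:-1])
        return x, u, f, h

    xH, uH, fH, H = solve(16)   # coarse grid, step H
    xh, uh, fh, h = solve(32)   # fine grid, step H/2

    # Restrict the fine solution to coarse nodes (injection) and evaluate the
    # coarse residual: tau_H ~ A_H(I_h^H u_h) - f_H. Large |tau| flags cells
    # where the coarse discretisation is poor, driving adaptation.
    tau = residual(uh[::2], fH, H)
    print("max |tau| estimate:", np.abs(tau).max())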

Relevance:

30.00%

Publisher:

Abstract:

A total power radiometer operating at 99 GHz was implemented for a propagation experiment aimed at estimating attenuation along a slant path in Madrid. Valuable data were collected during a measurement campaign in mid-April 2012. The retrieved time series of radiometric attenuation allow this technique to be validated at this frequency, under clear-sky and cloudy conditions, using a low-cost instrument calibrated with simple procedures. In spite of some hardware limitations, this experiment shows an interesting application of the radiometric technique to the study of atmospheric propagation at 99 GHz.
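
A hedged sketch of the standard radiometric attenuation retrieval that such an experiment relies on: under the usual single-layer atmosphere assumption, A[dB] = 10·log10((Tm − Tc)/(Tm − Tsky)). The mean radiating temperature Tm and the example sky readings below are assumptions, not the campaign's values.

    import math

    def attenuation_db(t_sky: float, t_m: float = 275.0, t_c: float = 2.7) -> float:
        """Slant-path attenuation [dB] from measured sky brightness temperature [K]."""
        return 10.0 * math.log10((t_m - t_c) / (t_m - t_sky))

    print(f"clear sky (Tsky = 60 K):  {attenuation_db(60.0):.2f} dB")
    print(f"cloudy    (Tsky = 180 K): {attenuation_db(180.0):.2f} dB")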

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, such as the aerodynamic drag or lift. The adjoint methodology is based on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method. In this method, the error in a functional output can be directly related to local residual errors of the primal solution through the adjoint variables. These variables are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves the use of an auxiliary embedded grid obtained by uniform refinement of the initial mesh. The storage of this mesh demands high computational resources, i.e. over an order-of-magnitude increase in memory relative to the initial problem for 3D cases. In this thesis, an alternative methodology for adapting the grid is proposed. Specifically, the DWR approach for error estimation is re-formulated on a coarser mesh level, using the τ-estimation method to approximate the truncation error. Then, an output-based adaptive algorithm is designed in such a way that the basic ingredients of the standard adjoint method are retained but the computational cost is significantly reduced. The standard and the newly proposed adjoint-based adaptive methodologies have been incorporated into a flow solver commonly used in the EU aeronautical industry. The influence of different numerical settings has been investigated. The proposed method has been compared against different grid adaptation approaches, and the computational efficiency of the new method has been demonstrated on representative aeronautical test cases.
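
A hedged sketch of the adaptation indicator implied above: the adjoint solution weights the τ-estimated residual to give both a functional error estimate and a per-cell refinement indicator. All arrays are illustrative placeholders, not output of a real flow solver.

    import numpy as np

    n = 100                               # coarse-mesh cells (assumed)
    rng = np.random.default_rng(0)

    tau = rng.normal(0.0, 1e-4, n)        # tau-estimated local truncation error
    psi = rng.normal(0.0, 1.0, n)         # adjoint solution for the functional J

    # Functional error estimate: delta_J ~ -psi^T tau.
    delta_J = -psi @ tau

    # Local adaptation indicator: contribution of each cell to |delta_J|.
    indicator = np.abs(psi * tau)
    cells_to_refine = np.argsort(indicator)[-10:]   # refine the 10 worst cells

    print("estimated functional error:", delta_J)
    print("cells flagged for refinement:", np.sort(cells_to_refine))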

Relevance:

30.00%

Publisher:

Abstract:

As the energy of particle or heavy-ion accelerators such as CERN or GSI, of fusion reactors such as JET or ITER, and of other scientific experiments increases, it is becoming increasingly necessary to use remote handling techniques to interact with the remote, radioactive environment. So far, the dose rate at CERN could reach values of several mSv for cooling times on the range of hours, which allowed human intervention for maintenance tasks. At JET, values close to 200 μSv were measured after a cooling time of 4 months, and since then remote handling techniques have become usual. There is a clear tendency towards increased radiation levels in the future. A clear example is ITER, where values of 450 Sv/h are expected in the centre of the torus after 11 days of cooling. The new energy levels of CERN are likewise expected to demand more advanced remote handling means. This thesis is framed in these circumstances, studying a bilateral control system based on force-position that avoids the use of force/torque sensors, whose electronic content makes them very sensitive in these environments. The work focuses on teleoperating industrial robots, which, due to their well-known reliability, ease of adaptation to these environments, cost-effectiveness and high availability, are an interesting alternative to expensive custom-made solutions for remote handling tasks. Firstly, the kinematic problem of teleoperating a master and slave with dissimilar kinematics is analysed and a new general approach for solving it is presented. The solution includes using assistive forces to guide the human operator. Next, the experiments performed with an ABB robot are explained in detail, showing the difficulties encountered and the proposed solutions. This section is concluded with a method to match the master's and slave's workspaces when they present dissimilar kinematics. Later on, the research turns to dynamics, with special focus on robot modelling, with the purpose of obtaining a method for estimating the external forces acting on the robots. During the characterisation of the model's parameters, a set of tests is performed in order to reach a compromise between computational complexity and estimation error. Key points for modelling and characterising robots with a parallelogram structure are also given, and the desired control architecture is presented. Once a complete model of the slave is obtained, different alternatives for external force estimation are reviewed with the aim of predicting forces in real time, minimising position differentiation to reduce estimation noise. The research starts by implementing classic state observers and then evolves towards Luenberger-sliding observers, whose implementation is relatively simple and whose results are convincing. The proposed observer is also analysed during a simulated bilateral control, in which the force feedback obtained with the classic techniques based on position error is compared against a control architecture based on force-position, where the force is estimated rather than measured. It is shown that the proposed solution gives results comparable with the classical architectures, while introducing an alternative for teleoperating industrial robots in radioactive environments where teleoperation would otherwise be impossible. Finally, the problems arising from the practical application of teleoperation in the scenarios mentioned above are analysed. Due to conditions that are prohibitive for all electronic equipment, the control systems must be placed far from the manipulators, so the power cables feeding the slave devices can reach lengths of hundreds of metres. In these circumstances, overvoltage waves develop when using PWM-based drives. These overvoltages are very dangerous for the system composed of drive, wiring and actuator, and must be eliminated. In this work, a solution based on a commercial LC filter is proposed, and it is extensively proved that its inclusion does not introduce adverse effects into the actuator's control.
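
A hypothetical sketch of the simpler of the two observer families mentioned above, a Luenberger observer that estimates the external force as an augmented (slowly varying) disturbance state. The 1-DOF mass-damper model and all gains are illustrative assumptions, not the thesis parameters.

    import numpy as np

    m, b = 2.0, 0.5              # assumed mass [kg] and viscous friction [N s/m]
    dt = 1e-3                    # sample time [s]

    # Augmented state x = [position, velocity, external force]; the force is
    # modelled as constant between samples (df/dt = 0).
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, -b / m, 1.0 / m],
                  [0.0, 0.0, 0.0]])
    B = np.array([0.0, 1.0 / m, 0.0])
    C = np.array([1.0, 0.0, 0.0])          # only position is measured
    L = np.array([300.0, 3.0e4, 1.0e6])    # gains (assumed, tuned by pole placement)

    x_hat = np.zeros(3)

    def observer_step(x_hat, y_meas, u):
        """One Euler step: x_hat += dt * (A x + B u + L (y - C x))."""
        innovation = y_meas - C @ x_hat
        dx = A @ x_hat + B * u + L * innovation
        return x_hat + dt * dx

    # Simulated plant with a 5 N external force applied, to exercise the observer.
    x = np.zeros(2)
    f_ext, u = 5.0, 1.0
    for _ in range(5000):
        acc = (u + f_ext - b * x[1]) / m
        x += dt * np.array([x[1], acc])
        x_hat = observer_step(x_hat, x[0], u)

    print("estimated external force [N]:", round(x_hat[2], 2))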

Relevance:

30.00%

Publisher:

Abstract:

In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed, termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
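
A minimal sketch of the adaptation decision such a τ-based sensor drives: per-element polynomial orders are raised where the estimated error exceeds a tolerance and lowered where it is far below. Orders, error values and thresholds are illustrative assumptions, not output of a real DG solver.

    import numpy as np

    rng = np.random.default_rng(1)
    tau = rng.lognormal(mean=-8.0, sigma=2.0, size=50)   # per-element |tau| estimate
    p = np.full(50, 3)                                   # current polynomial orders
    tol = 1e-4
    p_min, p_max = 1, 8

    enrich = tau > tol                  # element too inaccurate: increase order
    coarsen = tau < 0.01 * tol          # element over-resolved: decrease order
    p = np.clip(p + enrich.astype(int) - coarsen.astype(int), p_min, p_max)

    print("elements enriched:", int(enrich.sum()), "| coarsened:", int(coarsen.sum()))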

Relevance:

30.00%

Publisher:

Abstract:

Linear Fresnel collectors are identified as a technology that should play a major role in reducing the cost of Concentrating Solar Power. An optical and thermal analysis of the different blocks of the solar power plant is carried out, in which Fresnel arrays are compared with the most widespread linear technology: parabolic trough collectors (PTCs). It is demonstrated that the optical performance of the Fresnel array is very close to that of the PTC, with similar maximum flux intensities. In addition, if the heat carrier fluid flows in series through the tubes of the receiver, relatively high thermal efficiencies are achieved. Thus, an annual solar-to-electricity efficiency of 19% is expected, which is similar to the state of the art in PTCs; this is achieved with a reduction of costs, thanks to lighter structures, which leads to an estimated LCOE of around 6.5 c€/kWh.
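
A hedged sketch of a standard LCOE calculation of the kind behind such an estimate, using the capital-recovery-factor formulation with purely illustrative plant figures (not the study's data):

    def crf(rate: float, years: int) -> float:
        """Capital recovery factor: annualises the upfront investment."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    capex = 150e6          # upfront investment [EUR] (assumed)
    opex = 3e6             # annual operation and maintenance [EUR/yr] (assumed)
    energy = 250e6         # annual net electricity production [kWh/yr] (assumed)
    lcoe = (capex * crf(0.07, 25) + opex) / energy   # [EUR/kWh]
    print(f"LCOE ~ {100 * lcoe:.1f} cEUR/kWh")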

Relevance:

30.00%

Publisher:

Abstract:

This work addresses the estimation of velocity and acceleration from digital position data. Several classic methods are reviewed and implemented with real position data from a low-cost digital sensor of a hydraulic linear actuator, and the results are analysed and compared. It is shown that static methods have a limited application bandwidth, and that the performance of some methods may be enhanced by adapting their parameters according to the current state.
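
A hypothetical sketch contrasting two of the classic estimators discussed above on quantised position samples: a raw backward difference and a first-order low-pass filtered difference. The signal, quantisation step and filter pole are illustrative assumptions, not the actuator data.

    import numpy as np

    dt = 1e-3                                            # sample period [s] (assumed)
    t = np.arange(0.0, 2.0, dt)
    pos_true = 0.05 * np.sin(2 * np.pi * 1.0 * t)        # true position [m]
    pos = np.round(pos_true / 1e-4) * 1e-4               # 0.1 mm quantisation

    # 1) Backward difference: unbiased but amplifies quantisation noise.
    vel_diff = np.diff(pos, prepend=pos[0]) / dt

    # 2) Low-pass filtered difference: v[k] = a*v[k-1] + (1-a)*diff[k].
    a = 0.98                         # pole; lower a = wider bandwidth, more noise
    vel_filt = np.zeros_like(vel_diff)
    for k in range(1, len(vel_diff)):
        vel_filt[k] = a * vel_filt[k - 1] + (1 - a) * vel_diff[k]

    vel_true = 0.05 * 2 * np.pi * np.cos(2 * np.pi * t)
    print("RMS error raw diff :", np.sqrt(np.mean((vel_diff - vel_true) ** 2)))
    print("RMS error filtered :", np.sqrt(np.mean((vel_filt - vel_true) ** 2)))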