123 results for DBS
Abstract:
Background: Deep brain stimulation (DBS) is highly successful in treating Parkinson's disease (PD), dystonia, and essential tremor (ET). Until recently, implantable neurostimulators were nonrechargeable, battery-driven devices with a lifetime of about 3-5 years. This relatively short duration causes problems for patients (e.g. programming and device-use limitations, unpredictable expiration, surgeries to replace depleted batteries). Additionally, these batteries (relatively large and of considerable weight) may cause discomfort. To overcome these issues, the first rechargeable DBS device was introduced: smaller, lighter and intended to function for 9 years. Methods: Of 35 patients implanted with the rechargeable device, 21 (including 8 PD, 10 dystonia, 2 ET) were followed before and 3 months after surgery and completed a systematic survey of satisfaction with the rechargeable device. Results: Overall patient satisfaction was high (83.3 ± 18.3). Dystonia patients tended to have lower satisfaction values for fit and comfort of the system than PD patients. Age was significantly negatively correlated with satisfaction regarding the process of battery recharging. Conclusions: Dystonia patients (who generally have high energy consumption and severe problems at DBS device end-of-life) are good, reliable candidates for a rechargeable DBS system. In PD, younger patients without signs of dementia and with good technical understanding might benefit the most.
Abstract:
Although subthalamic-deep brain stimulation (STN-DBS) is an efficient treatment for Parkinson's disease (PD), its effects on fine motor functions are not clear. We present the case of a professional violinist with PD treated with STN-DBS. DBS improved musical articulation, intonation and emotional expression and worsened timing relative to a timekeeper (metronome). The same effects were found for dopaminergic treatment. These results suggest that STN-DBS, mimicking the effects of dopaminergic stimulation, improves fine-tuned motor behaviour whilst impairing timing precision.
Abstract:
BACKGROUND AND OBJECTIVE Phenotyping cocktails use a combination of cytochrome P450 (CYP)-specific probe drugs to simultaneously assess the activity of different CYP isoforms. To improve the clinical applicability of CYP phenotyping, the main objectives of this study were to develop a new cocktail based on probe drugs that are widely used in clinical practice and to test whether alternative sampling methods such as collection of dried blood spots (DBS) or saliva could be used to simplify the sampling process. METHODS In a randomized crossover study, a new combination of commercially available probe drugs (the Basel cocktail) was tested for simultaneous phenotyping of CYP1A2, CYP2B6, CYP2C9, CYP2C19, CYP2D6 and CYP3A4. Sixteen subjects received low doses of caffeine, efavirenz, losartan, omeprazole, metoprolol and midazolam in different combinations. All subjects were genotyped, and full pharmacokinetic profiles of the probe drugs and their main metabolites were determined in plasma, dried blood spots and saliva samples. RESULTS The Basel cocktail was well tolerated, and bioequivalence tests showed no evidence of mutual interactions between the probe drugs. In plasma, single timepoint metabolic ratios at 2 h (for CYP2C19 and CYP3A4) or at 8 h (for the other isoforms) after dosing showed high correlations with corresponding area under the concentration-time curve (AUC) ratios (AUC0-24h parent/AUC0-24h metabolite) and are proposed as simple phenotyping metrics. Metabolic ratios in dried blood spots (for CYP1A2 and CYP2C19) or in saliva samples (for CYP1A2) were comparable to plasma ratios and offer the option of minimally invasive or non-invasive phenotyping of these isoforms. CONCLUSIONS This new combination of phenotyping probe drugs can be used without mutual interactions. The proposed sampling timepoints have the potential to facilitate clinical application of phenotyping but require further validation in conditions of altered CYP activity. The use of DBS or saliva samples seems feasible for phenotyping of the selected CYP isoforms.
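As a rough illustration of the two phenotyping metrics compared here, the sketch below contrasts the reference AUC0-24h parent/metabolite ratio with a single-time-point metabolic ratio at 8 h, using purely hypothetical concentration-time values (not study data):

```python
import numpy as np

def auc_trapezoid(t, c):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    return float(np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0))

# Hypothetical plasma concentrations (ng/mL) of a probe drug and its main
# metabolite over 24 h; values are illustrative only.
t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24], dtype=float)
parent = np.array([0, 120, 150, 130, 90, 45, 22, 5], dtype=float)
metabolite = np.array([0, 10, 25, 40, 55, 60, 48, 20], dtype=float)

auc_ratio = auc_trapezoid(t, parent) / auc_trapezoid(t, metabolite)   # reference metric
ratio_8h = parent[t == 8][0] / metabolite[t == 8][0]                  # simplified metric
print(f"AUC ratio = {auc_ratio:.2f}, 8 h metabolic ratio = {ratio_8h:.2f}")
```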
Abstract:
Dance studies are constantly confronted with methodological challenges and with the difficulty of bringing dance events into a discursive form. The task is to find singular and adequate, often interdisciplinary, methods for the respective object of study. In my contribution I ask to what extent a method can emerge from the concrete object of study itself. As an example, I examine the 'improvised choreography' Accords by Thomas Hauert and the company ZOO. The choreographic principle in Accords is the improvised unison. The dancers orient themselves towards one another, adopt movements from one another, and act in a manner comparable to a flock of birds or a school of fish. Using the conceptual figure of the swarm, I describe the 'swarming constellations' that arise in this way and analyse the rules of function and operation of this formation. Furthermore, this epistemological figure is used to discuss the kinaesthetic processes of transfer that occur between the dancers. The figure of the swarm seems almost predestined for the study of dance improvisation. Both phenomena, the swarm and improvisation alike, are characterized by transitoriness, performativity, contingency and emergence. Nevertheless, it is necessary not only to ask about the potential of such an approach and the productivity of this epistemological figure, but also to problematize its possible difficulties.
Abstract:
INTRODUCTION Neurogenic bladder dysfunction is well described in Parkinson's disease and has a major impact on quality of life. In contrast, little is known about the extent of urinary symptoms in other movement disorders such as dystonia, and about the role of the basal ganglia in bladder control. PATIENTS AND METHODS A consecutive series of 11 patients with severe dystonia undergoing deep brain stimulation (DBS) of the globus pallidus internus was prospectively enrolled. Bladder function was assessed by the International Prostate Symptom Score and urodynamic investigation (UDI) before DBS surgery and afterwards in the conditions with and without DBS. RESULTS In UDI before DBS surgery, detrusor overactivity was found in 36% (4/11) of dystonia patients. With pallidal DBS ON, maximum flow rate significantly decreased, post-void residual significantly increased, and detrusor overactivity disappeared. CONCLUSIONS Pathological urodynamic changes can be found in a relevant percentage of dystonia patients. Pallidal DBS has a relaxing effect on detrusor function, indicating a role of the basal ganglia in lower urinary tract control. Thus, a better understanding of how subcortical networks influence lower urinary tract function might open new therapeutic perspectives.
Abstract:
Background: Access to hepatitis B viral load (VL) testing is poor in sub-Saharan Africa (SSA) due to economic and logistical reasons. Objectives: To demonstrate the feasibility of testing dried blood spots (DBS) for hepatitis B virus (HBV) VL in a laboratory in Lusaka, Zambia, and to compare HBV VLs between DBS and plasma samples. Study design: Paired plasma and DBS samples from HIV-HBV co-infected Zambian adults were analyzed for HBV VL using the COBAS AmpliPrep/COBAS TaqMan HBV test (Version 2.0) and for HBV genotype by direct sequencing. We used Bland-Altman analysis to compare VLs between sample types and by genotype. Logistic regression analysis was conducted to assess the probability of an undetectable DBS result by plasma VL. Results: Among 68 participants, median age was 34 years, 61.8% were men, and median plasma HBV VL was 3.98 log IU/ml (interquartile range, 2.04–5.95). Among sequenced viruses, 28 were genotype A1 and 27 were genotype E. Bland–Altman plots suggested strong agreement between DBS and plasma VLs. DBS VLs were on average 1.59 log IU/ml lower than plasma, with 95% limits of agreement of −2.40 to −0.83 log IU/ml. At a plasma VL ≥2,000 IU/ml, the probability of an undetectable DBS result was 1.8% (95% CI: 0.5–6.6). At plasma VL ≥20,000 IU/ml this probability was reduced to 0.2% (95% CI: 0.03–1.7).
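The Bland-Altman comparison reported above can be sketched as follows; the paired log10 viral loads here are hypothetical, not the study data:

```python
import numpy as np

def bland_altman(plasma_log_vl, dbs_log_vl):
    """Mean difference (DBS minus plasma) and 95% limits of agreement
    between paired log10 HBV viral loads."""
    diff = np.asarray(dbs_log_vl, dtype=float) - np.asarray(plasma_log_vl, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired measurements (log10 IU/ml), illustrative only.
plasma = np.array([2.1, 3.9, 5.2, 6.0, 4.4])
dbs = np.array([0.6, 2.4, 3.5, 4.5, 2.8])
bias, limits = bland_altman(plasma, dbs)
print(f"bias = {bias:.2f} log IU/ml, 95% limits of agreement = {limits}")
```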
Abstract:
The forensic utility of fatty acid ethyl esters (FAEEs) in dried blood spots (DBS) as short-term confirmatory markers for ethanol intake was examined. An LC-MS/MS method for the determination of FAEEs in DBS was developed and validated to investigate FAEE formation and elimination in a drinking study, in which eight subjects ingested 0.66-0.84 g/kg alcohol to reach blood alcohol concentrations (BAC) of 0.8 g/kg. Blood was taken every 1.5-2 h, BAC was determined, and dried blood spots were prepared with 50 μL of blood for the determination of FAEEs. Lower limits of quantitation (LLOQ) were between 15 and 37 ng/mL for the four major FAEEs. Validation data are presented in detail. In the drinking study, ethyl palmitate and ethyl oleate proved to be the two most suitable markers for FAEE determination. Maximum FAEE concentrations were reached in samples taken 2 or 4 h after the start of drinking. The following mean peak concentrations (mean cmax) were reached: ethyl myristate 14 ± 4 ng/mL, ethyl palmitate 144 ± 35 ng/mL, ethyl oleate 125 ± 55 ng/mL, ethyl stearate 71 ± 21 ng/mL, total FAEEs 344 ± 91 ng/mL. Detectability of FAEEs was found to be on the same time scale as BAC. In liquid blood samples containing ethanol, FAEE concentrations increase post-sampling. This study shows that fixation in DBS prevents additional FAEE formation in blood samples containing ethanol. Positive FAEE results obtained by DBS analysis can therefore be used as evidence for the presence of ethanol in the original blood sample. Graphical abstract: Time courses of fatty acid ethyl ester (FAEE) concentrations in DBS and ethanol concentrations for subject 1 over a period of 7 h. Ethanol ingestion occurred during the first hour of the time course.
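As a minimal sketch of how one FAEE time course from DBS might be summarized (LLOQ censoring, then peak concentration and time of peak), with hypothetical values and a single illustrative LLOQ; the validated LC-MS/MS data processing is naturally more involved:

```python
import numpy as np

def peak_concentration(times_h, conc_ng_ml, lloq_ng_ml=15.0):
    """Censor values below the LLOQ and return (cmax, tmax) for one FAEE
    time course measured in dried blood spots."""
    conc = np.asarray(conc_ng_ml, dtype=float)
    conc = np.where(conc < lloq_ng_ml, np.nan, conc)
    i = int(np.nanargmax(conc))
    return conc[i], float(times_h[i])

# Hypothetical ethyl palmitate concentrations (ng/mL) over 7 h of a drinking
# experiment; values are illustrative only.
t = np.array([0.0, 2.0, 4.0, 6.0, 7.0])
ethyl_palmitate = np.array([5.0, 150.0, 120.0, 60.0, 30.0])
print(peak_concentration(t, ethyl_palmitate))   # (150.0, 2.0)
```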
Abstract:
BACKGROUND Deep brain stimulation (DBS) is recognized as an effective treatment for movement disorders. We recently changed our technique, limiting the number of brain penetrations to three per side. OBJECTIVES The first aim was to evaluate electrode precision on both sides of surgery since we implemented this surgical technique. The second aim was to analyse whether or not electrode placement was improved by microrecording and macrostimulation. METHODS We retrospectively reviewed operation protocols and MRIs of 30 patients who underwent bilateral DBS. For microrecording and macrostimulation, we used three parallel channels of the 'Ben Gun' centred on the MRI-planned target. Pre- and post-operative MRIs were merged. The distance between the planned target and the centre of the implanted electrode artefact was measured. RESULTS There was no significant difference in targeting precision between the two sides of surgery. There was more intra-operative adjustment of the second electrode positioning based on microrecording and macrostimulation, which brought the electrode significantly closer to the MRI-planned target on the medial-lateral axis. CONCLUSION More electrode adjustment was needed on the second side, possibly related to brain shift. We thus suggest performing a single central track with electrophysiological and clinical assessment, with multidirectional exploration on demand for suboptimal clinical responses.
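A minimal sketch of the targeting-error measurement described here (per-axis offsets and Euclidean distance between the MRI-planned target and the centre of the electrode artefact on the merged images), with hypothetical coordinates:

```python
import numpy as np

def electrode_deviation(planned_xyz_mm, artefact_xyz_mm):
    """Per-axis offsets and Euclidean distance (mm) of the implanted electrode
    artefact centre from the MRI-planned target."""
    d = np.asarray(artefact_xyz_mm, dtype=float) - np.asarray(planned_xyz_mm, dtype=float)
    return {"medial-lateral": d[0], "anterior-posterior": d[1],
            "dorsal-ventral": d[2], "euclidean": float(np.linalg.norm(d))}

# Hypothetical stereotactic coordinates (mm), illustrative only.
print(electrode_deviation((12.0, -3.0, -4.0), (12.8, -2.6, -4.5)))
```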
Abstract:
Objective: To assess the neuropsychological outcome as a safety measure and quality control in patients with subthalamic nucleus (STN) stimulation for PD. Background: Deep brain stimulation (DBS) is considered a relatively safe treatment used in patients with movement disorders. However, neuropsychological alterations have been reported in patients with STN DBS for PD. Cognition and mood are important determinants of quality of life in PD patients and must be assessed for safety control. Methods: Seventeen consecutive patients (8 women) who underwent STN DBS for PD were assessed before and 4 months after surgery. Besides motor symptoms (UPDRS-III), mood (Beck Depression Inventory, Hamilton Depression Rating Scale) and neuropsychological aspects, mainly executive functions, were assessed (Mini-Mental State Examination, semantic and phonemic verbal fluency, go/no-go test, Stroop test, Trail Making Test, tests of alertness and attention, digit span, word-list learning, praxis, Boston Naming Test, figure drawing, visual perception). Paired t-tests were used for comparisons before and after surgery. Results: Patients were 61.6±7.8 years old at baseline assessment. All surgeries were performed without major adverse events. Motor symptoms 'on' medication remained stable, whereas they improved in the 'off' condition (p<0.001). Mood was not depressed before surgery and remained unchanged at follow-up. All neuropsychological outcome measures remained stable at follow-up with the exception of semantic verbal fluency and word-list learning. Semantic verbal fluency decreased by 21±16% (p<0.001), and there was a trend towards worse phonemic verbal fluency after surgery (p=0.06). Recall of a list of 10 words was worse after surgery only for the third recall attempt (by 13%, p<0.005). Conclusions: Verbal fluency decreased in our patients after STN DBS, as previously reported. The procedure was otherwise safe and did not lead to deterioration of mood.
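The paired pre/post comparison used in the study can be sketched as below, with hypothetical semantic verbal fluency scores (not the patients' data):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical semantic verbal fluency scores (words per minute) for the same
# patients before and 4 months after STN DBS; values are illustrative only.
pre = np.array([24, 19, 27, 22, 30, 18, 25, 21], dtype=float)
post = np.array([18, 16, 22, 17, 24, 15, 20, 17], dtype=float)

percent_change = 100.0 * (post - pre) / pre      # per-patient relative change
t_stat, p_value = ttest_rel(pre, post)           # paired t-test, as in the study
print(f"mean change = {percent_change.mean():.1f}%, t = {t_stat:.2f}, p = {p_value:.4f}")
```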
Abstract:
A unique macroseismic data set for the strongest earthquakes that have occurred since 1940 in the Vrancea region is constructed by a thorough review of all available sources. Inconsistencies and errors in the reported data and in their use are analyzed as well. The final data set, free from inconsistencies, including those at the political borders, contains 9822 observations for the strong intermediate-depth earthquakes: 1940, Mw=7.7; 1977, Mw=7.4; 1986, Mw=7.1; 1990, May 30, Mw=6.9 and 1990, May 31, Mw=6.4; 2004, Mw=6.0. This data set is available electronically as supplementary data for the present paper. From the discrete macroseismic data the continuous macroseismic field is generated using the methodology developed by Molchan et al. (2002), which, along with the unconventional smoothing method Modified Polynomial Filtering (MPF), uses the Diffused Boundary (DB) method, which visualizes the uncertainty in the isoseismals' boundaries. The comparison of DBs with previous isoseismal maps represents a good evaluation criterion of the reliability of earlier published maps. The produced isoseismals can be used not only for the formal comparison between observed and theoretical isoseismals, but also for the retrieval of source properties and the assessment of local responses (Molchan et al., 2011).
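To give a feel for how discrete intensity observations can be turned into a continuous field, the sketch below applies a generic Gaussian-kernel smoother to hypothetical data points; this is only an illustration of the idea and is not the MPF/Diffused Boundary methodology of Molchan et al. (2002):

```python
import numpy as np

def smoothed_intensity(obs_lon, obs_lat, obs_int, grid_lon, grid_lat, sigma_deg=0.3):
    """Gaussian-kernel weighted average of point intensities on a lon/lat grid."""
    field = np.zeros((len(grid_lat), len(grid_lon)))
    for i, lat in enumerate(grid_lat):
        for j, lon in enumerate(grid_lon):
            w = np.exp(-((obs_lon - lon) ** 2 + (obs_lat - lat) ** 2) / (2 * sigma_deg ** 2))
            field[i, j] = np.sum(w * obs_int) / np.sum(w)
    return field

# Hypothetical macroseismic observations (MSK intensities), illustrative only.
lon = np.array([26.0, 26.5, 27.1, 25.7])
lat = np.array([45.5, 45.9, 45.2, 46.1])
intensity = np.array([8.0, 7.5, 7.0, 6.5])
grid = smoothed_intensity(lon, lat, intensity,
                          np.linspace(25.5, 27.5, 5), np.linspace(45.0, 46.5, 4))
print(np.round(grid, 2))
```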
Abstract:
Comprehensive biogeochemical studies, including determination of the isotopic composition of organic carbon in both suspended matter and the surface layer (0-1 cm) of bottom sediments (more than 260 determinations of δ13C-Corg), were carried out for five Arctic shelf seas: the White, Barents, Kara, East Siberian, and Chukchi Seas. The aim of this study is to elucidate the causes of changes in the isotopic composition of particulate organic carbon at the water-sediment boundary. It is shown that the isotopic composition of organic carbon in sediments from seas with high river run-off (White, Kara, and East Siberian Seas) does not inherit the isotopic composition of organic carbon in particles precipitating from the water column, but is enriched in 13C. Seas with low river run-off (Barents and Chukchi Seas) show an insignificant difference between δ13C-Corg values in suspended load and in sediments because of the low content of isotopically light allochthonous organic matter in the suspended matter. Biogeochemical studies with radioisotope tracers (14CO2, 35S, and 14CH4) revealed the existence of a specific microbial filter, formed by heterotrophic and autotrophic organisms, at the water-sediment boundary. This filter prevents mass influx of the products of organic matter decomposition into the water column, and also reduces the influx of organic matter contained in suspended matter from the water into the sediments.
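For reference, the δ13C notation used above, together with a simple two-end-member mixing estimate of the allochthonous (terrestrial) fraction of organic carbon, can be sketched as follows; the reference ratio and end-member values are common textbook figures, not those of this study:

```python
R_VPDB = 0.0112372   # commonly quoted 13C/12C ratio of the PDB/VPDB standard

def delta13c(r_sample, r_standard=R_VPDB):
    """delta 13C in per mil relative to the VPDB standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

def terrestrial_fraction(d13c_sample, d13c_terrestrial=-27.0, d13c_marine=-21.0):
    """Two-end-member isotope mixing model; end-member values are illustrative."""
    return (d13c_sample - d13c_marine) / (d13c_terrestrial - d13c_marine)

print(round(delta13c(0.011000), 1))           # an isotopically light sample, about -21 per mil
print(round(terrestrial_fraction(-24.5), 2))  # about 0.58 of the organic carbon is terrestrial
```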
Abstract:
This thesis contributes to the analysis and design of printed reflectarray antennas. The main part of the work is focused on the analysis of dual offset antennas comprising two reflectarray surfaces, one of which acts as the sub-reflector and the other as the main reflector. These configurations introduce additional complexity in several respects compared with conventional dual offset reflectors; however, they provide many degrees of freedom that can be used to improve the electrical performance of the antenna. The thesis is organized in four parts: the development of an analysis technique for dual-reflectarray antennas; a preliminary validation of that methodology using equivalent reflector systems as reference antennas; a more rigorous validation of the software tool by manufacturing and testing a dual-reflectarray antenna demonstrator; and the practical design of dual-reflectarray systems for some applications that show the potential of this kind of configuration to scan the beam and to generate contoured beams. In the first part, a general tool has been implemented to analyze high-gain antennas constructed of two flat reflectarray structures. The classic reflectarray analysis based on MoM under the local periodicity assumption is used for both the sub- and main reflectarrays, taking into account the incident angle on each reflectarray element. The incident field on the main reflectarray is computed taking into account the field radiated by all the elements of the sub-reflectarray. Two approaches have been developed: one employs a simple approximation to reduce the computer run time, and the other does not, but offers improved accuracy in many cases. The approximation is based on computing the reflected field on each element of the main reflectarray only once for all the fields radiated by the sub-reflectarray elements, assuming that the response will be the same because the only difference is a small variation in the angle of incidence. This approximation is very accurate when the reflectarray elements on the main reflectarray show a relatively small sensitivity to the angle of incidence. An extension of the analysis technique has been implemented to study dual-reflectarray antennas comprising a main reflectarray printed on a parabolic, or in general curved, surface. In many applications of dual-reflectarray configurations, the reflectarray elements are in the near field of the feed-horn. To consider the near field radiated by the horn, the incident field on each reflectarray element is computed using a spherical mode expansion. In this region, the angles of incidence are moderately wide, and they are considered in the analysis of the reflectarray to better calculate the actual incident field on the sub-reflectarray elements. This technique increases the accuracy of the prediction of co- and cross-polar patterns and antenna gain with respect to the case of using ideal feed models. In the second part, as a preliminary validation, the proposed analysis method has been used to design a dual-reflectarray antenna that emulates previous dual-reflector antennas in Ku- and W-bands including a reflectarray as subreflector. The results for the dual-reflectarray antenna compare very well with those of the parabolic reflector and reflectarray subreflector; radiation patterns, antenna gain and efficiency are practically the same when the main parabolic reflector is substituted by a flat reflectarray.
The results show that the gain is only reduced by a few tenths of a dB as a result of the ohmic losses in the reflectarray. The phase adjustment on two surfaces provided by the dual-reflectarray configuration can be used to improve the antenna performance in applications requiring multiple beams, beam scanning or shaped beams. Third, a very challenging dual-reflectarray antenna demonstrator has been designed, manufactured and tested for a more rigorous validation of the analysis technique presented. In the proposed antenna configuration the feed, the sub-reflectarray and the main reflectarray are in the near field of one another, so that conventional far-field approximations are not suitable for the analysis of such an antenna. This geometry is used as a benchmark for the proposed analysis tool under very stringent conditions. Some aspects of the proposed analysis technique that improve the accuracy of the analysis are also discussed. These improvements include a novel method to reduce the inherent cross-polarization introduced mainly by grounded patch arrays. It has been verified that cross-polarization in offset reflectarrays can be significantly reduced by properly adjusting the patch dimensions in the reflectarray in order to produce an overall cancellation of the cross-polarization. The dimensions of the patches are adjusted not only to provide the required phase distribution to shape the beam, but also to exploit the zero crossings of the cross-polarization components. The last part of the thesis deals with direct applications of the technique described. The technique presented is directly applicable to the design of contoured-beam antennas for DBS applications, where the cross-polarization requirements are very stringent. The beam shaping is achieved by synthesizing the phase distribution on the main reflectarray while the sub-reflectarray emulates an equivalent hyperbolic subreflector. Dual-reflectarray antennas also offer the ability to scan the beam over small angles about boresight. Two possible architectures for a Ku-band antenna are also described, based on a dual planar reflectarray configuration that provides electronic beam scanning in a limited angular range. In the first architecture, the beam scanning is achieved by introducing a phase control in the elements of the sub-reflectarray while the main reflectarray is passive. A second alternative is also studied, in which the beam scanning is produced using 1-bit control on the main reflectarray, while a passive sub-reflectarray is designed to provide a large focal distance within a compact configuration. The system aims to provide a solution for bi-directional satellite links for emergency communications. In both proposed architectures, the objective is to provide compact optics and simplicity for folding and deployment.
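To illustrate how a printed phase distribution shapes or scans a beam, the sketch below evaluates the standard single-reflectarray phase relation for a hypothetical Ku-band geometry; the dual-reflectarray synthesis developed in the thesis is considerably more elaborate:

```python
import numpy as np

def required_phase(x, y, feed_xyz, k0, theta_b, phi_b):
    """Phase (rad) that the element at (x, y, 0) must add to the reflected field
    so that the array radiates a pencil beam in the direction (theta_b, phi_b);
    feed_xyz is the position of the feed phase centre."""
    d = np.sqrt((x - feed_xyz[0]) ** 2 + (y - feed_xyz[1]) ** 2 + feed_xyz[2] ** 2)
    phase = k0 * (d - (x * np.cos(phi_b) + y * np.sin(phi_b)) * np.sin(theta_b))
    return np.mod(phase, 2 * np.pi)

# Hypothetical 10 x 10 element grid with half-wavelength spacing at 12 GHz,
# feed at (0, -150, 300) mm, beam steered 20 degrees off boresight.
lam = 3e8 / 12e9
xs, ys = np.meshgrid(np.arange(10) * lam / 2, np.arange(10) * lam / 2)
phases = required_phase(xs, ys, (0.0, -0.15, 0.3), 2 * np.pi / lam,
                        np.deg2rad(20.0), np.deg2rad(0.0))
print(np.round(np.degrees(phases[:3, :3]), 1))
```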
Abstract:
The design of a reflectarray antenna under the local periodicity assumption requires the determination of the scattering matrix of a multilayered structure with periodic metallizations for quite a large number of different geometries. Therefore, in order to design reflectarray antennas within reasonable CPU times, fast and accurate numerical tools for the analysis of the periodic multilayered structures are required. In this thesis the Galerkin version of the Method of Moments (MoM) in the spectral domain is applied to the analysis of the periodic multilayered structures involved in the design of reflectarray antennas made of either stacked patches or coplanar parallel dipoles. Unfortunately, this numerical approach involves the computation of double infinite summations, and whereas some of these summations converge very fast, others converge very slowly. In order to alleviate this problem, a novel hybrid spectral-spatial domain MoM approach is proposed in the thesis for the analysis of the periodic multilayered structures. In the novel approach, the fast-converging summations are computed in the spectral domain, whereas the slowly converging summations are computed by means of an enhanced Mixed Potential Integral Equation (MPIE) formulation of the MoM in the spatial domain. This enhanced formulation is based on the efficient interpolation of the multilayered periodic Green's functions, and on the efficient computation of the singular integrals leading to the MoM matrix entries. The novel hybrid spectral-spatial MoM code and the standard spectral-domain MoM code have been compared in the case of reflectarray elements based on multilayered stacked patches. Numerical simulations have shown that the CPU time required by the hybrid MoM is around 60 times smaller than that required by the standard spectral MoM for an accuracy of two significant figures.
The combined use of reflectarray elements based on stacked patches and wideband optimization techniques has made it possible to design dual-polarization transmit-receive (Tx-Rx) reflectarrays for space applications with stringent requirements. Unfortunately, the required level of isolation between orthogonal polarizations in DBS antennas (typically 30 dB) is hard to achieve with the configuration of stacked patches. Moreover, the use of reflectarrays based on stacked patches leads to a complex and expensive manufacturing process. In this thesis, we investigate several configurations of reflectarray elements based on sets of parallel dipoles that try to overcome the drawbacks introduced by the element based on stacked patches. First, an element based on two stacked orthogonal sets of three coplanar parallel dipoles is proposed for dual-polarization applications. An antenna made of this element has been designed, manufactured and measured, and the results obtained show that the antenna presents high performance in terms of bandwidth, losses, efficiency and cross-polarization discrimination, while the manufacturing process is cheaper and simpler than that of the antennas made of stacked patches. Unfortunately, the element based on two sets of three coplanar parallel dipoles does not provide enough degrees of freedom to design dual-polarization transmit-receive (Tx-Rx) reflectarray antennas for space applications by means of wideband optimization techniques. For this reason, a new reflectarray element is proposed in the thesis which does provide enough degrees of freedom for each polarization. This new element consists of two orthogonal sets of four parallel dipoles, each set containing three coplanar dipoles and one stacked dipole. In order to accommodate the two sets of dipoles in each reflectarray cell, the set of dipoles for one polarization is shifted half a period with respect to the set of dipoles for the other polarization. This also makes it possible to use only two levels of metallization for the reflectarray element, which simplifies the manufacturing process as in the case of the reflectarray element based on two sets of three parallel dipoles. A dual-polarization, dual-band (Tx-Rx) reflectarray antenna based on the new element has been designed, manufactured and measured. The antenna shows very good performance in both the Tx and Rx frequency bands with very low levels of cross-polarization. Numerical simulations carried out in the thesis have shown that these low levels of cross-polarization can be made even smaller by means of small rotations of the two sets of dipoles associated with each polarization.
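The "double infinite summations" mentioned in this abstract are Floquet-mode expansions of the periodic Green's functions. The sketch below evaluates a truncated spectral sum for the free-space case with hypothetical cell dimensions; it only illustrates why convergence becomes slow near the metallization plane and is not the thesis code:

```python
import numpy as np

def periodic_greens_function(x, y, z, a, b, k0, kx0=0.0, ky0=0.0, M=50):
    """Truncated Floquet (spectral-domain) sum for the free-space periodic
    Green's function of a 2-D array with periods a, b and phasing (kx0, ky0)."""
    m = np.arange(-M, M + 1)
    kxm = kx0 + 2.0 * np.pi * m / a                 # Floquet wavenumbers in x
    kyn = ky0 + 2.0 * np.pi * m / b                 # Floquet wavenumbers in y
    KX, KY = np.meshgrid(kxm, kyn, indexing="ij")
    kz = np.sqrt(k0 ** 2 - KX ** 2 - KY ** 2 + 0j)  # modal propagation constants
    kz = np.where(kz.imag > 0, -kz, kz)             # keep the decaying branch
    terms = np.exp(-1j * (KX * x + KY * y)) * np.exp(-1j * kz * abs(z)) / kz
    return terms.sum() / (2j * a * b)

# Convergence slows as z -> 0, which is what motivates the hybrid
# spectral-spatial evaluation proposed in the thesis.
for M in (10, 20, 40, 80):
    g = periodic_greens_function(1e-3, 0.0, 5e-4, a=0.01, b=0.01,
                                 k0=2 * np.pi / 0.025, M=M)
    print(M, g)
```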
Abstract:
Deterministic Safety Analysis (DSA) is the procedure used in the design of safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents of the plant, named Design Basis Scenarios (DBS). Nuclear regulatory authorities require the calculation of a set of safety magnitudes, and define the regulatory acceptance criteria (RAC) that must be fulfilled by them. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies make use of pessimistic models and assumptions, and are relatively simple. They do not need an uncertainty analysis of their results. Realistic methodologies are based on realistic (usually mechanistic) predictive models and assumptions, and need to be supplemented with uncertainty analyses of their results. They are also termed BEPU ("Best Estimate Plus Uncertainty") methodologies, and are typically based on a probabilistic representation of the uncertainty. For conservative methodologies, the RAC are simply the restriction of calculated values of safety magnitudes to "acceptance regions" defined on their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain. In the present thesis, the inclusion of uncertainty in the RAC is studied. Basically, the restriction to the acceptance region must be fulfilled with a high certainty level; specifically, a high probability of fulfillment is required. The calculation uncertainty of the magnitudes is considered as propagated from the inputs through the predictive model. Uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which carry the uncertainty due to model imperfection. Fulfillment of the RAC is therefore required with a probability not less than a value P0 close to 1 and defined by the regulator (probability or coverage level). Calculation uncertainty is not the only uncertainty involved. Even if a model (i.e. its basic equations) is perfectly known, the input-output mapping produced by the model is imperfectly known (unless the model is very simple). This ignorance is called epistemic uncertainty, and it is associated with the propagation process; in fact, it is propagated to the probability of fulfilling the RAC. Another term used in the thesis for this epistemic uncertainty is metauncertainty.
The RAC must include the two types of uncertainty: one for the calculation of the magnitude (aleatory uncertainty) and one for the calculation of the probability (epistemic uncertainty). The two uncertainties can be taken into account separately or can be combined. In either case the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. In the first case, the regulator must define a level of fulfillment for the epistemic uncertainty, termed the regulatory confidence level, as a value close to 1. The pair of regulatory levels (probability and confidence) is termed the regulatory tolerance level. The thesis concludes that the adequate way of setting the BEPU RAC is by separating the uncertainties, for two reasons: experts recommend the separation of aleatory and epistemic uncertainty, and the separated RAC is in general more conservative than the combined RAC. The BEPU RAC is a hypothesis on a probability distribution, and must be statistically tested. The thesis classifies the statistical methods to verify RAC fulfillment into 3 categories: methods based on tolerance regions, on quantile estimators, and on estimators of probabilities (of fulfillment, or of exceedance of regulatory limits). The former two have been termed Q-methods, whereas those in the third category are termed P-methods. The purpose of this categorization is not to make an exhaustive survey of the very numerous existing methods, but rather to relate the three categories and examine the most used and best regarded methods from a regulatory standpoint. Special mention is made of the most widely used method, due to Wilks, and its extension to multidimensional variables, due to Wald. The P-method counterpart of Wilks' method, the Clopper-Pearson interval, typically ignored in the BEPU realm, is also described. The problem of the computational cost of an uncertainty analysis is tackled. Wilks', Wald's and Clopper-Pearson's methods require a minimum sample size, which is a growing function of the tolerance level. The sample size is an indicator of the computational cost, because each element of the sample must be calculated with the predictive models (codes). When the RAC is a multiple criterion, the safety magnitude becomes multidimensional. When all its components are outputs of the same calculation, the multidimensional character does not introduce additional computational cost. In this way, a widespread belief in the BEPU realm, namely that the multidimensional problem can only be tackled with the Wald extension (whose computational cost grows with the dimension of the problem), is proven to be false. When the components of the magnitude are calculated independently of each other, the influence of the problem dimension on the cost cannot be avoided. Early BEPU methodologies performed the uncertainty propagation through a surrogate model of the code, also termed an emulator or metamodel. The goal of a metamodel is not predictive capability, which is clearly inferior to that of the original code, but the capacity to propagate uncertainties at a lower computational cost. The metamodel must contain the input parameters contributing most to the output uncertainty, and this requires a prior importance (sensitivity) analysis. The surrogate model is practically inexpensive to run, so that it can be exhaustively analyzed, for example through Monte Carlo sampling. Therefore, the epistemic uncertainty due to sampling is reduced to almost zero, and the BEPU RAC for metamodels becomes a simple probability.
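As an illustration of the Q-method/P-method correspondence discussed above, the sketch below computes the first-order one-sided Wilks sample size and the matching one-sided Clopper-Pearson lower bound (a minimal sketch, not the thesis tooling):

```python
import math
from scipy.stats import beta

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Minimum number of runs N such that the largest of N outputs is a
    one-sided (coverage, confidence) tolerance limit (first-order Wilks formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def clopper_pearson_lower(k, n, confidence=0.95):
    """One-sided lower Clopper-Pearson bound on the fulfilment probability
    after observing k fulfilments in n runs."""
    if k == 0:
        return 0.0
    return float(beta.ppf(1.0 - confidence, k, n - k + 1))

n = wilks_sample_size()                    # 59 runs for a 95%/95% tolerance limit
print(n, clopper_pearson_lower(n, n))      # if all 59 runs fulfil the RAC, bound ~ 0.95
```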
The regulatory authority will tend to accept the use of statistical methods which need a minimum of assumptions: exact rather than approximate methods, nonparametric rather than parametric methods, and frequentist rather than Bayesian methods. The BEPU RAC is based on a second-order probability. The probability of the safety magnitudes being inside the acceptance region is a success probability and can be interpreted as a degree of fulfillment of the RAC. Furthermore, it has a metric interpretation, as a distance (in the range of the magnitudes) from the calculated values of the magnitudes to the regulatory acceptance limits. A probabilistic definition of safety margin (SM) is proposed in the thesis. The SM from a value A to another value B of a safety magnitude is defined as the probability that A is less severe than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is nondimensional, it ranges in the interval (0,1), and it can be easily generalized to multiple dimensions. Furthermore, probabilistic SMs combine according to the laws of probability, and, as a basic property, they are not symmetric. There are several types of SM: the distance from a calculated value to a regulatory limit (licensing margin); from the real value of a magnitude to its calculated value (analytical margin); or from the regulatory limit to the damage threshold of a barrier (barrier margin). These representations of distances (in the range of the magnitudes) as probabilities can be applied to the quantification of conservativeness. The analytical margin can be interpreted as the degree of conservativeness (DG) of the computational methodology. Conservativeness indicators are established in the thesis, which are useful in the comparison of different methods for constructing tolerance limits and regions. One topic that has not been rigorously tackled to date is the validation of BEPU methodologies. Before being applied in licensing, a methodology must be validated on the basis of comparisons between its predictions and real values of the safety magnitudes. Real data are obtained, basically, in experimental facilities. The ultimate goal of establishing the RAC is to verify that real values, and not only calculated values, fulfill them. In the thesis it is proved that a sufficient condition for this goal is the conjunction of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation. This validation criterion must be demonstrated in experimental scenarios and extrapolated to NPPs. The licensing RAC requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum values are basically complementary: the higher one of them, the lower the other. Current regulatory practice sets a high value on the licensing margin, so that the required DG is low. Adopting lower values for P0 would imply weaker requirements on RAC fulfillment and, on the other hand, stronger requirements on the conservativeness of the methodology. It is important to highlight that a higher minimum value of the licensing or analytical margin entails a higher computational cost to demonstrate it. The computational efforts are therefore also complementary: if one of the levels is high (which increases the stringency of the corresponding criterion), the computational cost increases. If a medium value of P0 is adopted, the required DG is also medium, so the methodology does not need to be very conservative, and the total computational effort (licensing plus validation) can be optimized.
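The probabilistic safety margin defined above, the probability that A is less severe than B, can be estimated by simple Monte Carlo sampling; the numbers below (an uncertain calculated peak temperature against a fixed acceptance limit) are purely hypothetical:

```python
import numpy as np

def probabilistic_margin(a_samples, b_samples):
    """Probabilistic safety margin from A to B: P(A < B), estimated from
    Monte Carlo samples of the two (possibly uncertain) quantities."""
    a = np.asarray(a_samples, dtype=float)[:, None]
    b = np.asarray(b_samples, dtype=float)[None, :]
    return float(np.mean(a < b))

rng = np.random.default_rng(0)
calculated = rng.normal(1350.0, 60.0, 10_000)   # uncertain calculated magnitude (K)
limit = np.array([1477.0])                      # deterministic acceptance limit (K)
print(probabilistic_margin(calculated, limit))  # licensing margin, roughly 0.98
```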
Abstract:
The tmRNA database (tmRDB) is maintained at the University of Texas Health Science Center at Tyler, Texas, and accessible on the World Wide Web at the URL http://psyche.uthct.edu/dbs/tmRDB/tmRDB.html. Mirror sites are located at Auburn University, Auburn, Alabama (http://www.ag.auburn.edu/mirror/tmRDB/) and the Institute of Biological Sciences, Aarhus, Denmark (http://www.bioinf.au.dk/tmRDB/). The tmRDB provides information and citation links about tmRNA, a molecule that combines functions of tRNA and mRNA in trans-translation. tmRNA is likely to be present in all bacteria and has been found in algal chloroplasts, the cyanelle of Cyanophora paradoxa and the mitochondrion of the flagellate Reclinomonas americana. This release adds 26 new sequences and the corresponding predicted tmRNA-encoded tag peptides, for a total of 86 tmRNAs, ordered alphabetically and phylogenetically. Secondary structures and three-dimensional models in PDB format for representative molecules are being made available. The tmRNA alignments provide evidence for individual base pairs and are generated manually, assisted by computational tools. The alignments with their corresponding structural annotation can be obtained in various formats, including a new column format designed to improve and simplify computational usability of the data.