935 results for Step and flash imprint lithography
Resumo:
Population aging has become a pressing issue for modern societies around the world, and two important problems remain to be solved. The first is how to continuously monitor the movements of people who have suffered a stroke in natural living environments, to provide more valuable feedback to guide clinical interventions. The second is how to guide elderly people effectively when they are at home or inside other buildings, making their lives easier and more convenient. Human motion tracking and navigation have therefore become active research fields as the number of elderly people grows. However, it has been extremely challenging for motion capture to move beyond laboratory settings and obtain accurate measurements of human physical activity in free-living environments, and navigation in such environments also poses problems such as GPS signal denial and the moving objects commonly present there. This thesis seeks to develop new technologies to enable accurate motion tracking and positioning in free-living environments. It pursues three specific goals using our developed IMU board and a camera from the Imaging Source company: (1) to develop a robust, real-time orientation algorithm using only IMU measurements; (2) to develop robust distance estimation in static free-living environments to estimate people's positions and navigate them, while simultaneously solving the scale ambiguity problem that usually arises in monocular camera tracking by integrating data from the visual and inertial sensors; and (3) for the case of moving objects viewed by the camera in free-living environments, to first design a robust scene segmentation algorithm and then separately estimate the motion of the vIMU system and of the moving objects.
To achieve real-time orientation tracking, an Adaptive-Gain Orientation Filter (AGOF) is proposed in this thesis, based on a deterministic approach and a frequency-based approach, using only measurements from the newly developed MARG (Magnetic, Angular Rate, and Gravity) sensors. To further obtain robust positioning, an adaptive frame-rate vision-aided IMU (vIMU) system is proposed, implementing fast vIMU ego-motion estimation algorithms: the orientation is first estimated in real time from the MARG sensors and then used, together with the visual and inertial data, to estimate the position. For the case of moving objects viewed by the camera in free-living environments, a robust scene segmentation algorithm is first proposed to obtain the position estimate and, simultaneously, the 3D motion of the moving objects. Finally, corresponding simulations and experiments were carried out.
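The AGOF itself is not reproduced in the abstract. As a minimal sketch of the underlying sensor-fusion idea, a fixed-gain complementary filter that blends integrated gyroscope rates with accelerometer-derived tilt might look like the following (the function name, gain value, and 1-D simplification are illustrative assumptions, not the thesis's algorithm; an adaptive filter such as the AGOF would vary the gain with sensor conditions):

```python
def complementary_filter(gyro_rates, accel_angles, dt, gain=0.02):
    """Fuse gyroscope integration (accurate short-term, but drifting)
    with accelerometer tilt (noisy, but drift-free) into one angle
    estimate per sample. `gain` weights the accelerometer correction."""
    angle = accel_angles[0]  # initialize from the drift-free sensor
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Predict by integrating the gyro, then correct toward the
        # accelerometer reading.
        angle = (1.0 - gain) * (angle + rate * dt) + gain * accel_angle
        estimates.append(angle)
    return estimates
```

With a constant accelerometer reference and a zero gyro rate, the estimate stays locked to the reference; with a biased gyro, the accelerometer term bounds the drift instead of letting it grow without limit.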
Resumo:
Traditional transportation fuel, petroleum, is limited and nonrenewable, and it also causes pollution. Hydrogen is considered one of the best alternative transportation fuels, and the key issue in using it is hydrogen storage. Lithium nitride (Li3N) is an important material for hydrogen storage, and the decompositions of lithium amide (LiNH2) and lithium imide (Li2NH) are important steps for hydrogen storage in Li3N. The effect of anions (e.g., Cl-) on the decomposition of LiNH2 had never been studied. Li3N can react with LiBr to form lithium nitride bromide (Li13N4Br), which has been proposed as a solid electrolyte for batteries. The decompositions of LiNH2 and Li2NH with and without a promoter were investigated using temperature-programmed decomposition (TPD) and X-ray diffraction (XRD) techniques. It was found that the decomposition of LiNH2 produced Li2NH and NH3 in two steps: LiNH2 to a stable intermediate species (Li1.5NH1.5) and then to Li2NH. The decomposition of Li2NH produced Li, N2, and H2 in two steps: Li2NH to an intermediate species, Li4NH, and then to Li. Kinetic analysis of the Li2NH decomposition gave activation energies of 533.6 kJ/mol for the first step and 754.2 kJ/mol for the second. Furthermore, XRD demonstrated that the Li4NH generated during the decomposition of Li2NH formed a solid solution with Li2NH, in which Li4NH possesses a cubic structure similar to that of Li2NH with a lattice parameter of 0.5033 nm. The decompositions of LiNH2 and Li2NH can be promoted by chloride ions (Cl-): introducing Cl- into LiNH2 produced a new NH3 peak in the TPD profiles at a lower temperature (250 °C) in addition to the original NH3 peak at 330 °C, and Cl- lowered the decomposition temperature of Li2NH by about 110 °C.
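The kinetic analysis behind these activation energies is not detailed in the abstract. As a generic illustration of how an activation energy can be extracted from rate constants measured at several temperatures, an Arrhenius fit (ln k = ln A − Ea/RT) can be sketched as follows (the function name and the synthetic data in the example are assumptions, not the thesis's actual TPD analysis):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(temps_K, rate_constants):
    """Least-squares fit of ln k = ln A - Ea/(R T).
    Returns (Ea in J/mol, pre-exponential factor A)."""
    xs = [1.0 / T for T in temps_K]          # 1/T
    ys = [math.log(k) for k in rate_constants]  # ln k
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, math.exp(intercept)   # Ea = -slope * R
```

Fitting noiseless synthetic data generated with a known Ea recovers that Ea, which is a quick sanity check before applying such a fit to real TPD-derived rate constants.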
The degradation of Li3N was systematically investigated with XRD, Fourier transform infrared (FT-IR) spectroscopy, and UV-visible spectroscopy. It was found that O2 does not affect Li3N at room temperature; however, H2O in air causes degradation through the reaction of H2O with Li3N to form LiOH, and the LiOH produced can further react with CO2 in air to form Li2CO3 at room temperature. Furthermore, it was revealed that α-Li3N is more stable in air than β-Li3N. The chemical stability of Li13N4Br in air was investigated by XRD, TPD-MS, and UV-vis absorption as a function of time. Aging finally degrades Li13N4Br into Li2CO3 and lithium bromite (LiBrO2), with release of gaseous NH3. A reaction order of n = 2.43 gave the best fit for the degradation of Li13N4Br in air, and the energy gap of Li13N4Br was calculated to be 2.61 eV.
Resumo:
Gene-directed enzyme prodrug therapy is a form of cancer therapy in which delivery of a gene encoding an enzyme enables conversion of a prodrug, a pharmacologically inactive molecule, into a potent cytotoxin. Currently, delivery of gene and prodrug is a two-step process. Here, we propose a one-step method using polymer nanocarriers to deliver prodrug, gene, and cytotoxic drug simultaneously to malignant cells. The prodrugs acyclovir, ganciclovir, and 5-doxifluridine were used directly to initiate ring-opening polymerization of epsilon-caprolactone, forming a hydrophobic prodrug-tagged poly(epsilon-caprolactone) that was further grafted with hydrophilic polymers (methoxy poly(ethylene glycol), chitosan, or polyethylenimine) to form amphiphilic copolymers for micelle formation. Successful copolymer synthesis and micelle formation were confirmed by standard analytical means. Conversion of the prodrugs to their cytotoxic forms was analyzed both in two steps, by first delivering the gene plasmid into the HT29 cell line and then challenging the cells with the prodrug-tagged micelle carriers, and in one step, by complexing the gene plasmid onto the micelle nanocarriers and delivering gene and prodrug simultaneously to parental HT29 cells. The anticancer effectiveness of the prodrug-tagged micelles was further enhanced by encapsulating the chemotherapy drugs doxorubicin or SN-38, and the viability of the HT29 colon cancer cell line was significantly reduced. Furthermore, in an effort to develop a stealth and targeted carrier, a CD47-streptavidin fusion protein was attached to the micelle surface via biotin-streptavidin affinity. CD47, a marker of self on the red blood cell surface, was used for its antiphagocytic efficacy: micelles bearing CD47 showed antiphagocytic behavior when exposed to J774A.1 macrophages.
Since CD47 is not only an antiphagocytic ligand but also an integrin-associated protein, it was used to target integrin alpha(v)beta(3), which is overexpressed on tumor-activated neovascular endothelial cells. CD47-tagged micelles showed enhanced uptake in PC3 cells, which highly express alpha(v)beta(3). The multifunctional polymeric micelle carriers developed here could offer a new platform for an innovative cancer therapy regimen.
Resumo:
River bedload surveyed at 50 sites in Westland is dominated by Alpine Schist or Torlesse Greywacke from the Alpine Fault hanging wall, with subordinate Pounamu Ultramafics or footwall-derived Western Province rocks. Tumbling experiments found the ultramafics to have the lowest attrition rates, compared with greywacke sandstone and granite (which abrade to produce silt to medium sand) or incompetent schist (which fragments). The Arahura catchment has greater total concentrations (10³–10⁵ t/km²) and proportions (5–40%) of ultramafic bedload than the Hokitika and Taramakau catchments (10¹–10⁴ t/km², mostly <10%), matching the relative areas of mapped Pounamu Ultramafic bedrock but enriched relative to their absolute areal proportions. Western Province rocks downthrown by the Alpine Fault are under-represented in the bedload. Enriched concentrations of ultramafic bedload decrease rapidly with distance downstream from source-rock outcrops, changing near prominent ice-limit moraines. Bedload evolution with transport involves both downstream fining and dilution from tributaries, in a sediment supply regime more strongly influenced by tectonics and the imprint of past glaciation. Treasured New Zealand pounamu (jade) is associated with the ultramafic rocks. The chances of discovery vary between catchments, increase near glacial moraines, and are highest near source-rock outcrops in remote mountain headwaters.
Resumo:
Immunoassays are essential in the workup of patients with suspected heparin-induced thrombocytopenia. However, the diagnostic accuracy is uncertain with regard to different classes of assays, antibody specificities, thresholds, test variations, and manufacturers. We aimed to assess diagnostic accuracy measures of available immunoassays and to explore sources of heterogeneity. We performed comprehensive literature searches and applied strict inclusion criteria. Finally, 49 publications comprising 128 test evaluations in 15 199 patients were included in the analysis. Methodological quality according to the revised tool for quality assessment of diagnostic accuracy studies was moderate. Diagnostic accuracy measures were calculated with the unified model (comprising a bivariate random-effects model and a hierarchical summary receiver operating characteristics model). Important differences were observed between classes of immunoassays, type of antibody specificity, thresholds, application of confirmation step, and manufacturers. Combination of high sensitivity (>95%) and high specificity (>90%) was found in 5 tests only: polyspecific enzyme-linked immunosorbent assay (ELISA) with intermediate threshold (Genetic Testing Institute, Asserachrom), particle gel immunoassay, lateral flow immunoassay, polyspecific chemiluminescent immunoassay (CLIA) with a high threshold, and immunoglobulin G (IgG)-specific CLIA with low threshold. Borderline results (sensitivity, 99.6%; specificity, 89.9%) were observed for IgG-specific Genetic Testing Institute-ELISA with low threshold. Diagnostic accuracy appears to be inadequate in tests with high thresholds (ELISA; IgG-specific CLIA), combination of IgG specificity and intermediate thresholds (ELISA, CLIA), high-dose heparin confirmation step (ELISA), and particle immunofiltration assay. 
When making treatment decisions, clinicians should be aware of the diagnostic characteristics of the tests used; it is recommended that they estimate posttest probabilities by combining pretest probabilities, obtained with clinical scoring tools, with the likelihood ratios of those tests.
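As a generic illustration of the recommended workflow (standard Bayesian updating, not code from the study), a pretest probability and a test's likelihood ratio combine through odds:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy:
    LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest_prob, likelihood_ratio):
    """Posttest probability via odds: odds_post = odds_pre * LR."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)
```

For example, with the thresholds quoted above (sensitivity 95%, specificity 90%), a positive result raises a 10% pretest probability to roughly 51%; the exact numbers here are illustrative, not values from the meta-analysis.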
Resumo:
Introduction. Investigations into the shortcomings of current intracavitary brachytherapy (ICBT) technology have led us to design an Anatomically Adaptive Applicator (A3). The goal of this work was to design and characterize the imaging and dosimetric capabilities of this device. The A3 design incorporates a single shield that can both rotate and translate within the colpostat. We hypothesized that this feature, coupled with specific A3 component construction materials and imaging techniques, would facilitate artifact-free CT and MR image acquisition. In addition, by shaping the delivered dose distribution via the movable A3 shield, the dose delivered to the rectum would be lower than in equivalent treatments using current state-of-the-art ICBT applicators. Method and materials. A method was developed to facilitate an artifact-free CT imaging protocol using a "step-and-shoot" technique: pausing the scanner midway through the scan and moving the A3 shield out of the path of the beam. The A3 CT imaging capabilities were demonstrated by acquiring images of a phantom that positioned the A3 and FW applicators in a clinically applicable geometry. Artifact-free MR imaging was achieved by using MRI-compatible ovoid components and pulse sequences that minimize susceptibility artifacts. Artifacts were qualitatively compared in a clinical setup. For the dosimetric study, Monte Carlo (MC) models of the A3 and FW (shielded and unshielded) applicators were validated. These models were incorporated into an MC model of one cervical cancer patient ICBT insertion, using 192Ir (mHDR v2 source). The rotation and translation of the A3 shield were adjusted for each dwell position to minimize dose to the rectum. Superposition of the rectal dose over all A3 dwell sources (4 per ovoid) was used to obtain a comparison with equivalent FW treatments.
Rectal dose-volume histograms (absolute and HDR/PDR biologically effective dose (BED)) and the BED to 2 cc (BED2cc) were determined for all applicators and compared. Results. Using the "step-and-shoot" CT scanning method, MR-compatible materials, and optimized pulse sequences, images of the A3 were nearly artifact-free for both modalities. The A3 reduced BED2cc by 18.5% and 7.2% for a PDR treatment, and by 22.4% and 8.7% for an HDR treatment, compared with treatments delivered using the uFW and sFW applicators, respectively. Conclusions. The novel design of the A3 yielded nearly artifact-free image quality for both CT and MR clinical imaging protocols. The design also reduced the BED to the rectum compared with equivalent ICBT treatments delivered using current state-of-the-art applicators.
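The BED comparisons above rest on the linear-quadratic model. A minimal sketch of the standard fractionated-HDR form follows (a textbook formula, not the thesis's full PDR/HDR dose pipeline; the α/β = 3 Gy value often used for late-responding tissue such as rectum is an assumption in the example):

```python
def bed_hdr(dose_per_fraction, n_fractions, alpha_beta):
    """Linear-quadratic biologically effective dose for a fractionated
    HDR treatment: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (
        1.0 + dose_per_fraction / alpha_beta)

# Illustrative numbers only: 4 fractions of 7 Gy, alpha/beta = 3 Gy.
example_bed = bed_hdr(7.0, 4, 3.0)
```

A percentage reduction in BED2cc, as reported above, would then be computed by evaluating this expression for the dose each applicator delivers to the hottest 2 cc of rectum.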
Resumo:
Child obesity in the U.S. is a significant public health issue, particularly among children from disadvantaged backgrounds. Thus, the roles of parents' human and financial capital and of racial and ethnic background have become important topics of social science and public health research on child obesity. Less often discussed, however, is the role of family structure, an important predictor of child well-being and indicator of family socioeconomic status. The goal of this study, therefore, is to investigate how preschool-aged children's risk of obesity varies across a diverse set of family structures and whether these differences are moderated by family poverty status and mothers' education. Using a large, nationally representative sample of children from the Early Childhood Longitudinal Study – Birth Cohort, we find that preschoolers raised by two cohabiting biological parents or by a relative caregiver (generally a grandparent) have greater odds of being obese than children raised by married biological parents. Also, poor children in married biological-parent households and non-poor children in married stepparent households have greater obesity risks, while poor children in father-only, unmarried stepparent, and married stepparent families actually have lower odds of obesity than children in non-poor intact households. The implications of these findings for policy and for future research linking family structure to children's weight status are discussed.
Resumo:
We obtained sediment physical properties and geochemical data from 47 piston and gravity cores in the Bay of Bengal to study the complex history of Late Pleistocene run-off from the Ganges and Brahmaputra rivers and its imprint on the Bengal Fan. Grain-size parameters were predicted from core logs of density and velocity to infer sediment transport energy and to distinguish different environments along the 3000-km-long transport path from the delta platform to the lower fan. On the shelf, 27 cores indicate rapidly prograding delta foresets today that contain primarily mud, whereas outer-shelf sediment has 25% higher silt contents, indicative of a stronger and more stable transport regime that prevents deposition and exposes a Late Pleistocene relict surface. Deposition is currently directed towards the shelf canyon 'Swatch of No Ground', where turbidites are released to the only channel-levee system on the fan that has been active during the Holocene. Active growth of the channel-levee system occurred throughout sea-level rise and highstand, with a distinct growth phase at the end of the Younger Dryas. Coarse-grained material bypasses the upper fan and the upper parts of the middle fan, where particle flow is enhanced by flow restriction in well-defined channels. Sandier material is deposited mainly as sheet-flow deposits on turbidite-dominated plains of the lower fan. The currently most active part of the fan, with 10-40 cm thick turbidites, is documented for the central channel including its inner levees (e.g., site 40). Site 47, on the lower fan far to the east of the active channel-levee system, indicates that turbidite sedimentation ended there at 300 ka. That time corresponds to the sea-level lowering during late isotopic stage 9, when sediment supply to the fan increased and led to channel avulsion farther upstream, probably indicating a close relation between climate variability and fan activity.
Pelagic deep-sea sites 22 and 28 contain a 630-kyr record of climate response to orbital forcing, with dominant 21- and 41-kyr cycles for carbonate and magnetic susceptibility, respectively, pointing to teleconnections between low-latitude monsoonal forcing in the precession band and high-latitude obliquity forcing. Upper-slope sites 115, 124, and 126 record the response to high-frequency climate change in the Dansgaard-Oeschger bands during the last glacial cycle, with shared frequencies between 0.75 and 2.5 kyr. Correlation of highs in Bengal Fan physical properties with lows in the δ18O record of the GISP2 ice core suggests that times of greater sediment transport energy in the Bay of Bengal are associated with cooler air temperatures over Greenland. Teleconnections were probably established through moisture and other greenhouse-gas forcing that could have been initiated by instabilities in the oceanic methane hydrate reservoir.
Resumo:
This talk illustrates how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and tabulated in the second. Such an approach disentangles the task of computing results (which may take a long time) from that of preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples are presented using results from model estimation commands and various other Stata commands such as tabulate, summarize, and correlate. Users are also shown how to dynamically link results into word processors or LaTeX documents.
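The talk's examples are in Stata; the gather-then-tabulate pattern itself is language-agnostic and can be sketched in Python as follows (the file name, helper names, and summary statistics are illustrative, not part of the talk):

```python
import json
import statistics

def gather(results_path, datasets):
    """Step 1: run the (possibly slow) computations once and archive
    the results to disk."""
    results = {name: {"mean": statistics.mean(vals),
                      "sd": statistics.stdev(vals)}
               for name, vals in datasets.items()}
    with open(results_path, "w") as f:
        json.dump(results, f)

def tabulate(results_path):
    """Step 2: format the archived results into a table; can be rerun
    over and over without recomputing anything."""
    with open(results_path) as f:
        results = json.load(f)
    return "\n".join(f"{name}\t{r['mean']:.2f}\t{r['sd']:.2f}"
                     for name, r in sorted(results.items()))
```

Because step 2 only reads the archive, the table layout can be revised repeatedly, which is precisely the separation of concerns the two-step procedure is designed to provide.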
Resumo:
Measuring and testing photovoltaic cells in the laboratory or in industry requires reproducing illumination conditions similar to real ones. Lighting systems based on xenon flash lamps are therefore used to reproduce real conditions in terms of irradiance level and the spectrum of the incident light. The objective of this project is to build the electronic circuits needed to fire such lamps. The supply and trigger circuit of a flash lamp consists of a variable power supply, a trigger circuit that ionizes the xenon gas, and the control electronics. Our trigger circuit aims to produce pulses suitable for photovoltaic devices in irradiance, spectrum, and duration, so that a single flash provides the time, irradiance, and spectrum needed to test a photovoltaic cell. Most of these circuits, except those specific to the lamp, will be designed, simulated, mounted on a PCB, and subsequently tested in the laboratory.
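As a back-of-the-envelope companion to such a supply-and-trigger design, first-order sizing of the discharge can be sketched as follows (illustrative physics only, not this project's circuit values; a real xenon lamp has a nonlinear arc resistance, so the RC constant is only a rough duration estimate):

```python
def flash_pulse(capacitance_F, voltage_V, lamp_resistance_ohm):
    """First-order estimates for a capacitor-driven flash lamp:
    stored energy E = C*V^2 / 2 and 1/e discharge time tau = R*C."""
    energy_J = 0.5 * capacitance_F * voltage_V ** 2
    tau_s = lamp_resistance_ohm * capacitance_F
    return energy_J, tau_s
```

For instance, a 1 mF bank charged to 1 kV stores 500 J, and with a ~1 Ω arc the pulse decays on a millisecond scale, which is the order of magnitude needed for single-flash cell testing.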
Resumo:
Un escenario habitualmente considerado para el uso sostenible y prolongado de la energía nuclear contempla un parque de reactores rápidos refrigerados por metales líquidos (LMFR) dedicados al reciclado de Pu y la transmutación de actínidos minoritarios (MA). Otra opción es combinar dichos reactores con algunos sistemas subcríticos asistidos por acelerador (ADS), exclusivamente destinados a la eliminación de MA. El diseño y licenciamiento de estos reactores innovadores requiere herramientas computacionales prácticas y precisas, que incorporen el conocimiento obtenido en la investigación experimental de nuevas configuraciones de reactores, materiales y sistemas. A pesar de que se han construido y operado un cierto número de reactores rápidos a nivel mundial, la experiencia operacional es todavía reducida y no todos los transitorios se han podido entender completamente. Por tanto, los análisis de seguridad de nuevos LMFR están basados fundamentalmente en métodos deterministas, al contrario que las aproximaciones modernas para reactores de agua ligera (LWR), que se benefician también de los métodos probabilistas. La aproximación más usada en los estudios de seguridad de LMFR es utilizar una variedad de códigos, desarrollados a base de distintas teorías, en busca de soluciones integrales para los transitorios e incluyendo incertidumbres. En este marco, los nuevos códigos para cálculos de mejor estimación ("best estimate") que no incluyen aproximaciones conservadoras, son de una importancia primordial para analizar estacionarios y transitorios en reactores rápidos. Esta tesis se centra en el desarrollo de un código acoplado para realizar análisis realistas en reactores rápidos críticos aplicando el método de Monte Carlo. Hoy en día, dado el mayor potencial de recursos computacionales, los códigos de transporte neutrónico por Monte Carlo se pueden usar de manera práctica para realizar cálculos detallados de núcleos completos, incluso de elevada heterogeneidad material. 
Además, los códigos de Monte Carlo se toman normalmente como referencia para los códigos deterministas de difusión en multigrupos en aplicaciones con reactores rápidos, porque usan secciones eficaces punto a punto, un modelo geométrico exacto y tienen en cuenta intrínsecamente la dependencia angular de flujo. En esta tesis se presenta una metodología de acoplamiento entre el conocido código MCNP, que calcula la generación de potencia en el reactor, y el código de termohidráulica de subcanal COBRA-IV, que obtiene las distribuciones de temperatura y densidad en el sistema. COBRA-IV es un código apropiado para aplicaciones en reactores rápidos ya que ha sido validado con resultados experimentales en haces de barras con sodio, incluyendo las correlaciones más apropiadas para metales líquidos. En una primera fase de la tesis, ambos códigos se han acoplado en estado estacionario utilizando un método iterativo con intercambio de archivos externos. El principal problema en el acoplamiento neutrónico y termohidráulico en estacionario con códigos de Monte Carlo es la manipulación de las secciones eficaces para tener en cuenta el ensanchamiento Doppler cuando la temperatura del combustible aumenta. Entre todas las opciones disponibles, en esta tesis se ha escogido la aproximación de pseudo materiales, y se ha comprobado que proporciona resultados aceptables en su aplicación con reactores rápidos. Por otro lado, los cambios geométricos originados por grandes gradientes de temperatura en el núcleo de reactores rápidos resultan importantes para la neutrónica como consecuencia del elevado recorrido libre medio del neutrón en estos sistemas. Por tanto, se ha desarrollado un módulo adicional que simula la geometría del reactor en caliente y permite estimar la reactividad debido a la expansión del núcleo en un transitorio. 
éste módulo calcula automáticamente la longitud del combustible, el radio de la vaina, la separación de los elementos de combustible y el radio de la placa soporte en función de la temperatura. éste efecto es muy relevante en transitorios sin inserción de bancos de parada. También relacionado con los cambios geométricos, se ha implementado una herramienta que, automatiza el movimiento de las barras de control en busca d la criticidad del reactor, o bien calcula el valor de inserción axial las barras de control. Una segunda fase en la plataforma de cálculo que se ha desarrollado es la simulació dinámica. Puesto que MCNP sólo realiza cálculos estacionarios para sistemas críticos o supercríticos, la solución más directa que se propone sin modificar el código fuente de MCNP es usar la aproximación de factorización de flujo, que resuelve por separado la forma del flujo y la amplitud. En este caso se han estudiado en profundidad dos aproximaciones: adiabática y quasiestática. El método adiabático usa un esquema de acoplamiento que alterna en el tiempo los cálculos neutrónicos y termohidráulicos. MCNP calcula el modo fundamental de la distribución de neutrones y la reactividad al final de cada paso de tiempo, y COBRA-IV calcula las propiedades térmicas en el punto intermedio de los pasos de tiempo. La evolución de la amplitud de flujo se calcula resolviendo las ecuaciones de cinética puntual. Este método calcula la reactividad estática en cada paso de tiempo que, en general, difiere de la reactividad dinámica que se obtendría con la distribución de flujo exacta y dependiente de tiempo. No obstante, para entornos no excesivamente alejados de la criticidad ambas reactividades son similares y el método conduce a resultados prácticos aceptables. Siguiendo esta línea, se ha desarrollado después un método mejorado para intentar tener en cuenta el efecto de la fuente de neutrones retardados en la evolución de la forma del flujo durante el transitorio. 
El esquema consiste en realizar un cálculo cuasiestacionario por cada paso de tiempo con MCNP. La simulación cuasiestacionaria se basa EN la aproximación de fuente constante de neutrones retardados, y consiste en dar un determinado peso o importancia a cada ciclo computacial del cálculo de criticidad con MCNP para la estimación del flujo final. Ambos métodos se han verificado tomando como referencia los resultados del código de difusión COBAYA3 frente a un ejercicio común y suficientemente significativo. Finalmente, con objeto de demostrar la posibilidad de uso práctico del código, se ha simulado un transitorio en el concepto de reactor crítico en fase de diseño MYRRHA/FASTEF, de 100 MW de potencia térmica y refrigerado por plomo-bismuto. ABSTRACT Long term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for the Pu recycling and minor actinides (MAs) transmutation or combined with some accelerator driven systems (ADS) just for MAs elimination. Design and licensing of these innovative reactor concepts require accurate computational tools, implementing the knowledge obtained in experimental research for new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still reduced, especially for lead reactors, and not all the transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, different from modern approach for Light Water Reactors (LWR) which also benefit from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different for the each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best estimate simulation codes are of prime importance in order to analyze fast reactors steady state and transients. 
This thesis is focused on the development of a coupled code system for best estimate analysis in fast critical reactor. Currently due to the increase in the computational resources, Monte Carlo methods for neutrons transport can be used for detailed full core calculations. Furthermore, Monte Carlo codes are usually taken as reference for deterministic diffusion multigroups codes in fast reactors applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for directional dependence of the ux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor. The subchannel code COBRA-IV calculates the temperature and density distribution within the reactor. COBRA-IV is suitable for fast reactors applications because it has been validated against experimental results in sodium rod bundles. The proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external files exchange. The main issue in the Monte Carlo/thermal-hydraulics steady state coupling is the cross section handling to take into account Doppler broadening when temperature rises. Among every available options, the pseudo materials approach has been chosen in this thesis. This approach obtains reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core, are of major importance in fast reactor due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius with the temperature. This effect will be crucial in some unprotected transients. 
Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady state calculations for critical systems, the more straight forward option without modifying MCNP source code, is to use the flux factorization approach solving separately the flux shape and amplitude. In this thesis two options have been studied to tackle time dependent neutronic simulations using a Monte Carlo code: adiabatic and quasistatic methods. The adiabatic methods uses a staggered time coupling scheme for the time advance of neutronics and the thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step and COBRA-IV the thermal properties at half of the the time steps. To calculate the flux amplitude evolution a solver of the point kinetics equations is used. This method calculates the static reactivity in each time step that in general is different from the dynamic reactivity calculated with the exact flux distribution. Nevertheless, for close to critical situations, both reactivities are similar and the method leads to acceptable practical results. In this line, an improved method as an attempt to take into account the effect of delayed neutron source in the transient flux shape evolutions is developed. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulations is based con the constant delayed source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetic exercise. 
Finally, a transient in a critical 100 MWth lead-bismuth-eutectic reactor concept is analyzed using the adiabatic method as an application example in a real system.
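The flux-amplitude step of the adiabatic scheme relies on the point kinetics equations; a minimal one-delayed-group sketch (with illustrative kinetics parameters, not the thesis values) looks like:

```python
def point_kinetics_step(n, c, rho, beta, lam, Lambda, dt):
    """One explicit Euler step of one-group point kinetics:
       dn/dt = (rho - beta)/Lambda * n + lam * c
       dc/dt =  beta/Lambda * n - lam * c
    n: flux amplitude, c: delayed-neutron precursor concentration."""
    dn = ((rho - beta) / Lambda * n + lam * c) * dt
    dc = (beta / Lambda * n - lam * c) * dt
    return n + dn, c + dc

# Illustrative kinetics parameters
beta, lam, Lambda = 0.0065, 0.08, 4.0e-7

# At criticality (rho = 0) with equilibrium precursors, the amplitude stays constant
n, c = 1.0, beta / (lam * Lambda)
for _ in range(10000):
    n, c = point_kinetics_step(n, c, 0.0, beta, lam, Lambda, 1.0e-6)

# A small positive reactivity step makes the amplitude rise (prompt jump)
n2, c2 = 1.0, beta / (lam * Lambda)
for _ in range(10000):
    n2, c2 = point_kinetics_step(n2, c2, 0.1 * beta, beta, lam, Lambda, 1.0e-6)
```

In the coupled platform described above, the reactivity fed to this amplitude solver would come from the MCNP static calculation at each time step rather than being prescribed.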
Resumo:
Computed tomography (CT) is the reference imaging modality for the study of lung diseases and the pulmonary vasculature. Lung vessel segmentation has been widely explored by the biomedical image processing community; however, differentiating arterial from venous irrigations is still an open problem. Indeed, the automatic separation of the arterial and venous trees has been considered in recent years one of the main future challenges in the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and in multiple pulmonary diseases or pathological states, allowing the arterial and venous irrigations to be studied separately. Features such as the density, geometry, topology, and size of the vessels could be analyzed in diseases that involve vasculature remodeling, possibly even enabling the discovery of new specific biomarkers that remain hidden today. Differentiation between arteries and veins could also improve methods that process other pulmonary structures. Nevertheless, despite its undoubted usefulness, AV segmentation has so far been unfeasible in clinical routine: the huge complexity of the pulmonary vascular trees makes a manual separation of both structures impossible in realistic time, encouraging the design of automatic or semiautomatic tools for the task. This lack of properly segmented and labeled cases, in turn, seriously limits the development of AV separation systems, where reference standards are necessary for both training and validating the algorithms.
For that reason, the design of synthetic CT images of the lung could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image (including arteries and veins) is unequivocally differentiated. In this Ph.D. thesis we address both of these interrelated problems. First, the design of a complete framework to automatically generate computational CT phantoms of the human lung is described. Starting from biological and image-based prior knowledge about the topology of and relationships between the pulmonary structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods, which are then merged into a final simulated lung with realistic features. These synthetic cases, together with labeled real non-contrast CT datasets, have been used as the reference for the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a vessel extraction stage using scale-space particles, followed by artery-vein classification of those particles using Graph-Cuts (GC) based on arterial/venous similarity scores obtained with a machine-learning pre-classification step and on particle connectivity information. Validation of the pulmonary phantoms, based on visual examination and quantitative measurements of intensity distributions, dispersion of structures, and relationships between the pulmonary air and blood flow systems, shows good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies for assessing the accuracy of vessel-particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities in the clinical study of cardiopulmonary diseases and in the development of methodological approaches for the analysis of pulmonary images.
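The Graph-Cuts classification step can be illustrated with a toy sketch (not the thesis implementation): hypothetical arterial/venous likelihood costs become terminal capacities, particle connectivity weights become pairwise capacities, and a minimum s-t cut yields the joint labelling. A compact Edmonds-Karp max-flow suffices for the example:

```python
from collections import deque

def min_cut_labels(unary, pairwise):
    """Binary artery/vein labelling by s-t minimum cut.
    unary[i] = (cost_if_artery, cost_if_vein) for particle i;
    pairwise[(i, j)] = connectivity weight penalizing different labels."""
    n = len(unary)
    S, T = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, (cost_a, cost_v) in enumerate(unary):
        cap[S][i] = cost_v          # paid when i ends on the vein (sink) side
        cap[i][T] = cost_a          # paid when i ends on the artery (source) side
    for (i, j), w in pairwise.items():
        cap[i][j] += w
        cap[j][i] += w

    # Edmonds-Karp: repeatedly augment along shortest residual paths
    while True:
        parent = [-1] * (n + 2)
        parent[S] = S
        q = deque([S])
        while q and parent[T] == -1:
            u = q.popleft()
            for v in range(n + 2):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[T] == -1:
            break
        bottleneck, v = float("inf"), T
        while v != S:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = T
        while v != S:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]

    # Particles still reachable from S in the residual graph are arteries
    reachable, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in reachable and cap[u][v] > 1e-12:
                reachable.add(v)
                q.append(v)
    return ["artery" if i in reachable else "vein" for i in range(n)]

# Toy example: particles 0-1 look arterial, 2-3 venous; 0-1 and 2-3 strongly connected
labels = min_cut_labels(
    [(0.1, 5.0), (0.5, 4.0), (4.0, 0.5), (5.0, 0.1)],
    {(0, 1): 2.0, (2, 3): 2.0, (1, 2): 0.2},
)
```

The pairwise capacities encode the connectivity information: cutting between two strongly connected particles is expensive, so they tend to receive the same label even when one of their unary scores is ambiguous.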
Resumo:
The PsaF-deficient mutant 3bF of Chlamydomonas reinhardtii was used to modify PsaF by nuclear transformation and site-directed mutagenesis. Four lysine residues in the N-terminal domain of PsaF, which have been postulated to form the positively charged face of a putative amphipathic α-helical structure, were altered: K12P, K16Q, K23Q, and K30Q. The interactions between plastocyanin (pc) or cytochrome c6 (cyt c6) and photosystem I (PSI) isolated from wild type and the different mutants were analyzed using crosslinking techniques and flash absorption spectroscopy. The K23Q change drastically affected crosslinking of pc to PSI and electron transfer from pc and cyt c6 to PSI; the corresponding second-order rate constants for binding of pc and cyt c6 were reduced by factors of 13 and 7, respectively. Smaller effects were observed for mutations K16Q and K30Q, whereas in K12P the binding was not changed relative to wild type. None of the mutations affected the half-life of the microsecond electron transfer performed within the intermolecular complex between the donors and PSI. The fact that these single amino acid changes within the N-terminal domain of PsaF have different effects on the electron transfer rate constants and dissociation constants for both electron donors suggests the existence of a rather precise recognition site for pc and cyt c6 that leads to the stabilization of the final electron transfer complex through electrostatic interactions.
Resumo:
Bacterial chemotaxis is widely studied because of its accessibility and because it incorporates processes that are important in a number of sensory systems: signal transduction, excitation, adaptation, and a change in behavior, all in response to stimuli. Quantitative data on the change in behavior are available for this system, and the major biochemical steps in the signal transduction/processing pathway have been identified. We have incorporated recent biochemical data into a mathematical model that can reproduce many of the major features of the intracellular response, including the change in the level of chemotactic proteins in response to step and ramp stimuli such as those used in experimental protocols. The interaction of the chemotactic proteins with the motor is not modeled, but we can estimate the degree of cooperativity needed to produce the observed gain under the assumption that the chemotactic proteins interact directly with the motor proteins.
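The qualitative behaviour summarized above, excitation followed by adaptation so that a step stimulus produces a transient response that returns to baseline, can be illustrated with a minimal two-variable sketch (not the model of the paper; the time constants are purely illustrative):

```python
def simulate(stimulus, tau_e=0.1, tau_a=5.0, dt=0.01, t_end=60.0):
    """Minimal excitation/adaptation dynamics: the response y rapidly
    tracks the stimulus minus a slow adaptation variable a, so y adapts
    back to baseline after a step stimulus."""
    y, a, trace, t = 0.0, 0.0, [], 0.0
    while t < t_end:
        s = stimulus(t)
        y += dt * ((s - a) - y) / tau_e   # fast excitation
        a += dt * (s - a) / tau_a         # slow adaptation
        trace.append((t, y))
        t += dt
    return trace

# Step stimulus at t = 1 s: transient response, then return to baseline
trace = simulate(lambda t: 1.0 if t >= 1.0 else 0.0)
peak = max(y for _, y in trace)
final = trace[-1][1]
```

Because the adaptation variable eventually equals any constant stimulus, the steady-state response is independent of the stimulus level, which is the hallmark of perfect adaptation to step inputs.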
Resumo:
Etheno adducts in DNA arise from multiple endogenous and exogenous sources. Of these adducts, we have reported that 1,N6-ethenoadenine (ɛA) and 3,N4-ethenocytosine (ɛC) are removed from DNA by two separate DNA glycosylases. We later confirmed these results by using a gene knockout mouse lacking alkylpurine-DNA-N-glycosylase, which excises ɛA. The present work is directed toward identifying and purifying the human glycosylase activity releasing ɛC. HeLa cells were subjected to multiple steps of column chromatography, including two ɛC-DNA affinity columns, which resulted in >1,000-fold purification. Isolation and renaturation of the protein from an SDS/polyacrylamide gel showed that the ɛC activity resides in a 55-kDa polypeptide. This apparent molecular mass is approximately the same as that reported for the human G/T mismatch thymine-DNA glycosylase. This latter activity copurified through the final column step and was present in the isolated protein band having ɛC-DNA glycosylase activity. In addition, oligonucleotides containing ɛC⋅G or G/T(U) could compete for ɛC protein binding, further indicating that the ɛC-DNA glycosylase specifically recognizes both types of substrates. The same substrate specificity for ɛC also was observed in a recombinant G/T mismatch DNA glycosylase from the thermophilic bacterium Methanobacterium thermoautotrophicum THF.