900 results for Field-based model
Abstract:
The urate transporter, GLUT9, is responsible for the basolateral transport of urate in the proximal tubule of human kidneys and in the placenta, playing a central role in uric acid homeostasis. GLUT9 shares the least homology with other members of the glucose transporter family, especially with the glucose-transporting members GLUT1-4, and is the only member of the GLUT family to transport urate. The recently published high-resolution structure of XylE, a bacterial D-xylose transporting homologue, yields new insights into the structural foundation of this GLUT family of proteins. While this represents a huge milestone, it is unclear whether human GLUT9 can benefit from this advancement through subsequent structure-based targeting and mutagenesis. Little progress has been made toward understanding the mechanism of GLUT9 since its discovery in 2000. Before work can begin on resolving the mechanisms of urate transport, we must determine methods to express, purify and analyze hGLUT9 using a model system adept at expressing human membrane proteins. Here, we describe the surface expression, purification and isolation of monomeric protein, and functional analysis of recombinant hGLUT9 using the Xenopus laevis oocyte system. In addition, we generated a new homology-based high-resolution model of hGLUT9 from the XylE crystal structure and utilized our purified protein to generate a low-resolution single-particle reconstruction. Interestingly, we demonstrate that the functional protein extracted from the Xenopus system fits well with the homology-based model, allowing us to generate the predicted urate-binding pocket and pave a path for subsequent mutagenesis and structure-function studies.
Abstract:
While most previous research has considered public service motivation (PSM) as the only motivational factor predicting (public) job choice, the authors present a novel, rational choice-based model which includes three motivational dimensions: extrinsic, enjoyment-based intrinsic and prosocial intrinsic. Besides providing more accurate person-job fit predictions, this new approach fills a significant research gap and facilitates future theory building.
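As a hedged illustration of how such a three-dimensional, rational-choice model could predict job choice, the sketch below scores jobs by a weighted sum over the three motivational dimensions named in the abstract (extrinsic, enjoyment-based intrinsic, prosocial intrinsic). The additive utility form, the dimension keys and all weights are illustrative assumptions, not the authors' specification.

```python
def job_utility(job, weights):
    """Score one job as a weighted sum over the three motivational
    dimensions; `job` and `weights` map dimension name -> value."""
    return sum(weights[d] * job[d] for d in
               ("extrinsic", "enjoyment", "prosocial"))

def choose_job(jobs, weights):
    """Predict job choice as the utility-maximizing option.

    `jobs` maps a job name to its dimension scores; returns the name
    of the job with the highest utility for this motivational profile.
    """
    return max(jobs, key=lambda name: job_utility(jobs[name], weights))
```

Under such a model, a prosocially motivated individual would be predicted to choose a public-sector job scoring high on the prosocial dimension even when a private-sector alternative offers higher extrinsic rewards.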
Abstract:
Convergent plate margins typically experience a transition from subduction to collision dynamics as massive continental blocks enter the subduction channel. Studies of high-pressure rocks indicate that tectonic fragments are rapidly exhumed from eclogite facies to midcrustal levels, but the details of such dynamics are controversial. To understand the dynamics of a subduction channel we report the results of a petrochronological study from the central Sesia Zone, a key element of the internal Western Alps. This comprises two polymetamorphic basement complexes (Eclogitic Micaschist Complex and Gneiss Minuti Complex) and a thin, dismembered cover sequence (Scalaro Unit) associated with pre-Alpine metagabbros and metasediments (Bonze Unit). Structurally controlled samples from three of these units (Eclogitic Micaschist Complex and Scalaro-Bonze Units) yield unequivocal petrological and geochronological evidence of two distinct high-pressure stages. Ages (U-Th-Pb) of growth zones in accessory allanite and zircon, combined with inclusion and textural relationships, can be tied to the multi-stage evolution of single samples. Two independent tectono-metamorphic ‘slices’ showing a coherent metamorphic evolution during a given time interval have been recognized: the Fondo slice (which includes Scalaro and Bonze rocks) and the Druer slice (belonging to the Eclogitic Micaschist Complex). The new data indicate separate stages of deformation at eclogite-facies conditions for each recognized independent kilometer-sized tectono-metamorphic slice, between ~85 and 60 Ma, with evidence of intermittent decompression (ΔP ~0.5 GPa) within only the Fondo slice. The evolution path of the Druer slice indicates a different P-T-time evolution with prolonged eclogite-facies metamorphism between ~85 and 75 Ma. Our approach, combining structural, petrological and geochronological techniques, yields field-based constraints on the duration and rates of dynamics within a subduction channel.
Abstract:
BACKGROUND AND AIMS Hepatitis C (HCV) is a leading cause of morbidity and mortality in people who live with HIV. In many countries, access to direct-acting antiviral agents to treat HCV is restricted to individuals with advanced liver disease (METAVIR stage F3 or F4). Our goal was to estimate the long-term impact of deferring HCV treatment for men who have sex with men (MSM) who are coinfected with HIV and often have multiple risk factors for liver disease progression. METHODS We developed an individual-based model of liver disease progression in HIV/HCV-coinfected MSM. We estimated liver-related morbidity and mortality as well as the median time spent with replicating HCV infection when individuals were treated in liver fibrosis stages F0, F1, F2, F3 or F4 on the METAVIR scale. RESULTS The percentage of individuals who died of liver-related complications was 2% if treatment was initiated in F0 or F1. It increased to 3% if treatment was deferred until F2, 7% if deferred until F3 and 22% if deferred until F4. The median time individuals spent with replicating HCV increased from 5 years if treatment was initiated in F2 to almost 15 years if it was deferred until F4. CONCLUSIONS Deferring HCV therapy until advanced liver fibrosis is established could increase liver-related morbidity and mortality in HIV/HCV-coinfected individuals, and substantially prolong the time individuals spend with replicating HCV infection.
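A minimal sketch of the individual-based modelling idea: simulated individuals advance through the METAVIR fibrosis stages F0-F4 and stop progressing once they reach the stage at which treatment is initiated, so deferring treatment lengthens the time spent with replicating HCV. The annual progression probability, time horizon and curative-treatment assumption are illustrative placeholders, not the paper's calibrated parameters.

```python
import random

STAGES = ["F0", "F1", "F2", "F3", "F4"]

def mean_years_infected(n, treat_at, p_progress=0.1, horizon=40, seed=1):
    """Mean years with replicating HCV in a simulated cohort of n
    individuals, when treatment (assumed curative) is initiated on
    reaching fibrosis stage `treat_at`.

    Each year an untreated individual advances one METAVIR stage with
    probability p_progress; progression stops at F4 or at treatment.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        stage, years = 0, 0
        while years < horizon and STAGES[stage] != treat_at:
            if stage < 4 and rng.random() < p_progress:
                stage += 1
            years += 1
        total += years
    return total / n
```

Comparing `mean_years_infected(10000, "F2")` with `mean_years_infected(10000, "F4")` reproduces the qualitative pattern in the abstract: deferring treatment to a later stage prolongs the infected period.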
Abstract:
Microarray technology is a high-throughput method for genotyping and gene expression profiling. Limited sensitivity and specificity are among the essential problems of this technology. Most existing methods of microarray data analysis have an apparent limitation in that they deal only with the numerical part of microarray data and make little use of gene sequence information. Because it is the gene sequences that precisely define the physical objects being measured by a microarray, it is natural to make the gene sequences an essential part of the data analysis. This dissertation focused on the development of free energy models to integrate sequence information into microarray data analysis. The models were used to characterize the mechanism of hybridization on microarrays and to enhance the sensitivity and specificity of microarray measurements. Cross-hybridization is a major obstacle to the sensitivity and specificity of microarray measurements. In this dissertation, we evaluated the scope of the cross-hybridization problem on short-oligo microarrays. The results showed that cross-hybridization on arrays is mostly caused by oligo fragments with a run of 10 to 16 nucleotides complementary to the probes. Furthermore, a free-energy-based model was proposed to quantify the amount of cross-hybridization signal on each probe. This model treats cross-hybridization as an integral effect of the interactions between a probe and various off-target oligo fragments. Using public spike-in datasets, the model showed high accuracy in predicting the cross-hybridization signals on those probes whose intended targets are absent in the sample. Several prospective models were proposed to improve the Positional Dependent Nearest-Neighbor (PDNN) model for better quantification of gene expression and cross-hybridization. The problem addressed in this dissertation is fundamental to microarray technology.
We expect that this study will help us to understand the detailed mechanism that determines sensitivity and specificity on microarrays. Consequently, this research will have a wide impact on how microarrays are designed and how the data are interpreted.
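To make the free-energy idea concrete, the sketch below scores cross-hybridization as an integral effect over off-target fragments: only fragments with a complementary run of at least ~10 nucleotides contribute, each with a Boltzmann weight on an assumed duplex free energy. The per-base stability, the run-length threshold and the toy run-finding scan are illustrative assumptions, not the dissertation's fitted PDNN parameters.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol K)

def longest_complementary_run(probe, fragment):
    """Longest contiguous stretch of `fragment` complementary to the
    probe (toy O(n*m*k) scan; real pipelines use indexed search)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    rc = "".join(comp[b] for b in reversed(fragment))  # reverse complement
    best = 0
    for i in range(len(probe)):
        for j in range(len(rc)):
            k = 0
            while (i + k < len(probe) and j + k < len(rc)
                   and probe[i + k] == rc[j + k]):
                k += 1
            best = max(best, k)
    return best

def cross_hyb_signal(probe, fragments, dg_per_base=-1.4,
                     temp_k=318.15, min_run=10):
    """Cross-hybridization signal on a probe as a sum over off-target
    fragments: each fragment with a complementary run >= min_run bases
    contributes a Boltzmann-weighted term exp(-dG / RT), with an
    illustrative linear free-energy model dG = dg_per_base * run."""
    total = 0.0
    for frag in fragments:
        run = longest_complementary_run(probe, frag)
        if run >= min_run:
            total += math.exp(-(dg_per_base * run) / (R * temp_k))
    return total
```

The key qualitative behaviour matches the abstract: fragments below the ~10-base complementarity threshold contribute nothing, while longer runs contribute exponentially more signal.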
Abstract:
Measurement of the absorbed dose from ionizing radiation in medical applications is an essential component of providing safe and reproducible patient care. There is a wide variety of tools available for measuring radiation dose; this work focuses on the characterization of two common solid-state dosimeters in medical applications: thermoluminescent dosimeters (TLD) and optically stimulated luminescent dosimeters (OSLD). There were two main objectives to this work. The first objective was to evaluate the energy dependence of TLD and OSLD for non-reference measurement conditions in a radiotherapy environment. The second objective was to fully characterize the OSLD nanoDot in a CT environment, and to provide validated calibration procedures for CT dose measurement using OSLD. Current protocols for dose measurement using TLD and OSLD generally assume a constant photon energy spectrum within a nominal beam energy, regardless of measurement location, tissue composition, or changes in beam parameters. Variations in the energy spectrum of therapeutic photon beams may affect the response of TLD and OSLD and could thereby result in an incorrect measure of dose unless these differences are accounted for. In this work, we used a Monte Carlo-based model to simulate variations in the photon energy spectra of a Varian 6 MV beam; we then evaluated the impact of the perturbations in energy spectra on the response of both TLD and OSLD using Burlin cavity theory. Energy response correction factors were determined for a range of conditions and compared to measured correction factors with good agreement. When using OSLD for dose measurement in a diagnostic imaging environment, photon energy spectra are often referenced to a therapy-energy or orthovoltage photon beam (commonly 250 kVp, Co-60, or even 6 MV), where the spectra are substantially different.
Appropriate calibration techniques specifically for the OSLD nanoDot in a CT environment have not been presented in the literature; furthermore, the dependence of the energy response on the calibration energy has not been emphasized. The results of this work include detailed calibration procedures for CT dosimetry using OSLD, and a full characterization of this dosimetry system in a low-dose, low-energy setting.
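Burlin cavity theory, used above to relate detector response to dose in the medium, interpolates between the Bragg-Gray (small-cavity) limit, governed by the mass collision stopping-power ratio, and the large-cavity limit, governed by the mass energy-absorption coefficient ratio. A minimal sketch of that weighting follows; in practice the size parameter d and the two ratios are derived from the simulated spectra and tabulated data, not supplied as bare numbers.

```python
def burlin_dose_ratio(d, stopping_power_ratio, mu_en_ratio):
    """Burlin general cavity theory: cavity-to-medium dose ratio as a
    size-weighted blend of the two classical limits.

    d -> 1 recovers the Bragg-Gray (small cavity) limit, where the
    stopping-power ratio dominates; d -> 0 recovers the large-cavity
    (photon) limit, where the energy-absorption ratio dominates.
    """
    return d * stopping_power_ratio + (1.0 - d) * mu_en_ratio
```

An energy response correction factor then follows from evaluating this ratio under the reference spectrum and under the perturbed measurement spectrum.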
Abstract:
This study evaluated a modified home-based model of family preservation services, the long-term community case management model, as operationalized by a private child welfare agency that serves as the last resort for hard-to-serve families with children at severe risk of out-of-home placement. The evaluation used a One-Group Pretest-Posttest design with a modified time-series design to determine if the intervention would produce a change over time in the composite score of each family's Child Well-Being Scales (CWBS). A comparison of the mean CWBS scores of the 208 families and subsets of these families at the pretest and various posttests showed a statistically significant decrease in the CWBS scores, indicating decreased risk factors. The longer the duration of services, the greater the statistically significant risk reduction. The results support the conclusion that the families who participate in empowerment-oriented community case management, with the option to extend service duration to resolve or ameliorate chronic family problems, have experienced effective strengthening in family functioning.
Abstract:
Carbon isotopically based estimates of CO2 levels have been generated from a record of the photosynthetic fractionation of 13C (epsilon_p) in a central equatorial Pacific sediment core that spans the last ~255 ka. Contents of 13C in phytoplanktonic biomass were determined by analysis of C37 alkadienones. These compounds are exclusive products of Prymnesiophyte algae, which at present grow most abundantly at depths of 70-90 m in the central equatorial Pacific. A record of the isotopic composition of dissolved CO2 was constructed from isotopic analyses of the planktonic foraminifera Neogloboquadrina dutertrei, which calcifies at 70-90 m in the same region. Values of epsilon_p, derived by comparison of the organic and inorganic delta values, were transformed to yield concentrations of dissolved CO2 (c_e) based on a new, site-specific calibration of the relationship between epsilon_p and c_e. The calibration was based on reassessment of existing epsilon_p versus c_e data, which support a physiologically based model in which epsilon_p is inversely related to c_e. Values of PCO2, the partial pressure of CO2 that would be in equilibrium with the estimated concentrations of dissolved CO2, were calculated using Henry's law and the temperature determined from the alkenone unsaturation index U^K_37. Uncertainties in these values arise mainly from uncertainties about the appropriateness (particularly over time) of the site-specific relationship between epsilon_p and 1/c_e. These are discussed in detail and it is concluded that the observed record of epsilon_p most probably reflects significant variations in Delta pCO2, the ocean-atmosphere disequilibrium, which appears to have ranged from ~110 µatm during glacial intervals (ocean > atmosphere) to ~60 µatm during interglacials. Fluxes of CO2 to the atmosphere would thus have been significantly larger during glacial intervals.
If this were characteristic of large areas of the equatorial Pacific, then greater glacial sinks for the equatorially evaded CO2 must have existed elsewhere. Statistical analysis of air-sea pCO2 differences and other parameters revealed significant (p < 0.01) inverse correlations of Delta pCO2 with sea surface temperature and with the mass accumulation rate of opal. The former suggests response to the strength of upwelling, the latter may indicate either drawdown of CO2 by siliceous phytoplankton or variation of [CO2]/[Si(OH)4] ratios in upwelling waters.
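The two quantitative steps described above, inverting an epsilon_p calibration to recover dissolved CO2 and then applying Henry's law to obtain the equilibrium partial pressure, can be sketched as below. The functional form eps_p = eps_f - b/c_e is the common expression of an inverse epsilon_p-c_e relationship; the constants eps_f, b and the solubility k_h are placeholder values for illustration, not the site-specific fit used in the study.

```python
def co2_concentration(eps_p, eps_f=25.0, b=120.0):
    """Invert a calibration of the form eps_p = eps_f - b / c_e
    (epsilon_p inversely related to c_e) to recover the dissolved CO2
    concentration c_e in umol/kg. eps_f and b are placeholder
    calibration constants, not fitted values."""
    return b / (eps_f - eps_p)

def pco2_in_equilibrium(c_e, k_h=0.030):
    """Henry's law: partial pressure of CO2 (uatm) in equilibrium with
    dissolved CO2 c_e (umol/kg), for an illustrative, temperature-fixed
    solubility k_h in umol/(kg * uatm)."""
    return c_e / k_h
```

Larger epsilon_p (closer to the maximum fractionation eps_f) thus maps to higher dissolved CO2 and higher equilibrium PCO2, reproducing the sense of the proxy.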
Abstract:
During Ocean Drilling Program Leg 188 to Prydz Bay, East Antarctica, several of the shipboard scientists formed the High-Resolution Integrated Stratigraphy Committee (HiRISC). The committee was established in order to furnish an integrated data set from the Pliocene portion of Site 1165 as a contribution to the ongoing debate about Pliocene climate and climate evolution in Antarctica. The proxies determined in our various laboratories were the following: magnetostratigraphy and magnetic properties, grain-size distributions (granulometry), near-ultraviolet, visible, and near-infrared spectrophotometry, calcium carbonate content, characteristics of foraminifer, diatom, and radiolarian content, clay mineral composition, and stable isotopes. In addition to the HiRISC samples, other data sets contained in this report are subsets of much larger data sets. We included these subsets in order to provide the reader with a convenient integrated data set of Pliocene-Pleistocene strata from the East Antarctic continental margin. The data are presented in the form of 14 graphs (in addition to the site map). Text and figure captions guide the reader to the original data sets. Some preliminary interpretations are given at the end of the manuscript.
Abstract:
Predicted future CO2 levels can affect reproduction, growth, and behaviour of many marine organisms. However, the capacity of species to adapt to predicted changes in ocean chemistry is largely unknown. We used a unique field-based experiment to test for differential survival associated with variation in CO2 tolerance in a wild population of coral-reef fishes. Juvenile damselfish exhibited variation in their response to elevated (700 µatm) CO2 when tested in the laboratory, and this influenced their behaviour and risk of mortality in the wild. Individuals that were sensitive to elevated CO2 were more active and moved farther from shelter in natural coral reef habitat and, as a result, mortality from predation was significantly higher compared with individuals from the same treatment that were tolerant of elevated CO2. If individual variation in CO2 tolerance is heritable, this selection of phenotypes tolerant to elevated CO2 could potentially help mitigate the effects of ocean acidification.
Abstract:
Triple-Play (3P) and Quadruple-Play (4P) services are being widely offered by telecommunication service providers. Such services must be able to offer equal or higher quality levels than those obtained with traditional systems, especially for the most demanding services such as broadcast IPTV. This paper presents a matrix-based model, defined in terms of service components, user perceptions, agent capabilities, performance indicators and evaluation functions, which makes it possible to estimate the overall quality of a set of convergent services, as perceived by the users, from a set of performance and/or Quality of Service (QoS) parameters of the convergent IP transport network.
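A hedged sketch of the matrix-style aggregation just described: evaluation functions map raw performance/QoS indicators to per-component perception scores, which are then combined with weights reflecting the importance of each service component. The particular evaluation functions, indicator choices and weights below are illustrative assumptions; the paper defines its own.

```python
def overall_quality(indicators, eval_funcs, weights):
    """Estimate overall perceived quality of a convergent service.

    Each evaluation function maps one raw indicator (e.g. packet loss
    ratio, one-way delay in ms) to a perception score in [0, 1]; the
    scores are aggregated as a normalized weighted sum.
    """
    scores = [f(x) for f, x in zip(eval_funcs, indicators)]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

For example, an IPTV component might weight a packet-loss score twice as heavily as a delay score, reflecting loss being the dominant impairment for broadcast video.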
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention from local [1, 2] and global [3-5] analysis methods over several decades, owing to the significance of this model to transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with BiGlobal stability analysis and DNS. Beyond the computational advantages presented by the ODE-based model, it allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connections with the disturbances of the orthogonal stability problem.
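As a generic illustration of what a "temporal one-dimensional linear eigenvalue code" computes (not the Görtler-Hämmerlin model itself), the sketch below discretizes a model 1D diffusion operator with second-order finite differences and returns its temporal eigenvalues; negative real parts correspond to damped, linearly stable disturbances. The operator, grid and homogeneous Dirichlet boundary conditions are toy assumptions standing in for the actual stability operator.

```python
import numpy as np

def temporal_eigenvalues(n=50, L=1.0, nu=1.0):
    """Toy temporal linear stability solver.

    Discretizes the operator nu * d^2/dy^2 on (0, L) with homogeneous
    Dirichlet boundary conditions on an n-point interior grid and
    returns the sorted real parts of its eigenvalues (temporal growth
    rates). All negative => every mode is damped (stable).
    """
    h = L / (n + 1)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) * nu / h**2
    return np.sort(np.linalg.eigvals(A).real)
```

The least-damped eigenvalue approaches the analytic value -nu * (pi/L)^2 as the grid is refined, a standard sanity check for such codes before they are applied to real stability operators.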
Abstract:
Various ideas and methods involving local proper orthogonal decomposition (POD) and Galerkin projection are presented, aiming at accelerating the numerical integration of nonlinear, time-dependent parabolic problems.
The proposed methods come from a new approach to POD-based model reduction procedures, which combines short runs with a given numerical solver and a reduced-order model constructed by expanding the solution of the problem into appropriate POD modes, which span a POD manifold, and Galerkin-projecting some evolution equations onto that linear manifold. The POD manifold is completely constructed at the outset, but only updated as time proceeds according to the dynamics, which yields an adaptive and flexible procedure. In addition, some properties concerning the weak dependence of the POD modes on time and on possible parameters of the problem are exploited in order to increase the flexibility and efficiency of the low-dimensional model computation. Application of the developed techniques to the approximation of transients in laminar fluid flows and the simulation of attractors in bifurcation problems shows very promising results. The test problems considered to illustrate the various ideas and check the performance of the algorithms are the one-dimensional complex Ginzburg-Landau equation and the two-dimensional unsteady lid-driven cavity problem.
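A minimal sketch of the two building blocks named above, extracting POD modes from solver snapshots and Galerkin-projecting an evolution operator onto the resulting linear manifold, is given below. For simplicity the sketch assumes a linear evolution operator; the thesis treats nonlinear problems, and the energy threshold is an illustrative choice.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix (columns = solver states sampled
    in time), computed via the thin SVD; keeps the smallest number of
    modes capturing at least `energy` of the snapshot variance."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r]

def galerkin_project(A, modes):
    """Galerkin projection of a linear evolution operator onto the POD
    manifold spanned by `modes` (orthonormal columns Phi):
    reduced operator A_r = Phi^T A Phi."""
    return modes.T @ A @ modes
```

In the adaptive scheme described in the abstract, `pod_basis` would be recomputed (or updated) from the snapshots of each new short solver run, and the cheap reduced system integrated in between.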
Abstract:
This paper presents some ideas about a new neural network architecture that can be compared to a Taylor analysis when dealing with patterns. The architecture is based on linear activation functions with an axo-axonic structure. A biological axo-axonic connection is one in which the weight of a connection between two neurons is given by the output of a third neuron. This idea can be implemented in the so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first one outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There is a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal, passing that information to other individuals in order to reach an optimal solution. The latter are based on genetic models, in which individuals can die and new individuals are created by combining the information of living ones, or on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm algorithm.
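A hedged sketch of the axo-axonic idea with purely linear activations: a first (weight-generating) net maps the input to the weight vector that a second net then applies to that same input. Because the effective weights depend linearly on x, the output contains quadratic terms in x, which is what invites the comparison with a Taylor expansion. Layer sizes, the single-output shape and the random initialization are illustrative; the paper's networks are full MLPs trained by Particle Swarm.

```python
import random

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

class EnhancedLinearNet:
    """Axo-axonic sketch: net 1 (W1, b1) outputs the weight vector
    w(x) = W1 x + b1 that net 2 applies to the same input, giving
    y = w(x) . x, i.e. a linear-plus-quadratic function of x."""

    def __init__(self, n, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-1, 1) for _ in range(n)]
                   for _ in range(n)]
        self.b1 = [rng.uniform(-1, 1) for _ in range(n)]

    def forward(self, x):
        # first net: compute input-dependent weights
        w = [dot(row, x) + b for row, b in zip(self.W1, self.b1)]
        # second net: apply those weights to the same input
        return dot(w, x)
```

A single linear net can only realize w . x for a fixed w; the input-dependent weights are what lift the representable functions beyond first order despite every activation being linear.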
Abstract:
There is an increasing awareness among all kinds of organisations (in business, government and civil society) of the benefits of working jointly with stakeholders to satisfy both their goals and the social demands placed upon them. This is particularly the case within corporate social responsibility (CSR) frameworks. In this regard, multi-criteria decision-making tools like the analytic hierarchy process (AHP) described in this paper can be useful for building relationships with stakeholders. Since these tools can reveal decision-makers' preferences, the integration of opinions from various stakeholders in the decision-making process may result in better and more innovative solutions with significant shared value. This paper is based on ongoing research to assess the feasibility of an AHP-based model to support CSR decisions in large infrastructure projects carried out by Red Electrica de España, the sole transmission agent and operator of the Spanish electricity system.
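A common way to operationalize the AHP step mentioned above is sketched below, assuming the standard geometric-mean (row) approximation of the principal eigenvector: each stakeholder supplies a pairwise comparison matrix over the decision criteria and obtains normalized priority weights. The matrix values and the two-criterion example are illustrative; the paper's actual hierarchy and judgments are not reproduced here.

```python
def ahp_priorities(pairwise):
    """Priority weights from an AHP pairwise comparison matrix.

    `pairwise[i][j]` holds the judged importance of criterion i over
    criterion j (reciprocal matrix, diagonal of ones). Uses the
    geometric mean of each row, normalized to sum to 1, as the usual
    approximation of the principal eigenvector.
    """
    n = len(pairwise)
    gmeans = []
    for row in pairwise:
        p = 1.0
        for a in row:
            p *= a
        gmeans.append(p ** (1.0 / n))
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

For a perfectly consistent matrix the geometric-mean weights coincide with the eigenvector solution; for inconsistent stakeholder judgments a consistency check would be applied before using the weights.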