954 results for Two-component Regulatory System
Abstract:
Relativistic density functional theory is widely applied in molecular calculations with heavy atoms, where relativistic and correlation effects must be treated on the same footing. Variational stability of the Dirac Hamiltonian has been an important research topic since the beginning of relativistic molecular calculations, alongside efforts towards accuracy, efficiency, and density functional formulation. One- and two-component approximate methods and the search for suitable basis sets are the two major means of securing good projection against the negative continuum. In the present work, the minimax two-component spinor linear combination of atomic orbitals (LCAO) is applied to both light and super-heavy one-electron systems. It provides good approximations over the whole energy spectrum, lies close to the benchmark minimax finite element method (FEM) values, and is free of the spurious and contaminated states that appear in the traditional four-component spinor LCAO. Variational stability ensures that the minimax LCAO is bounded from below. New balanced basis sets following the minimax idea, kinetic and potential defect balanced (TVDB), are applied with the Dirac Hamiltonian. Their performance on the same super-heavy one-electron quasi-molecules also shows very good projection against variational collapse, with the minimax LCAO taken as the reference projection. The TVDB method has twice as many basis coefficients as the four-component spinor LCAO, but the problem becomes linear, overcoming the great time consumption of the minimax method. Calculations with both the TVDB method and the traditional LCAO method for dimers of the group-11 elements probe their differences. Basis sets larger than those of previous research are constructed, achieving high accuracy within the functionals involved. Their difference in total energy is much smaller than the basis-incompleteness error, showing that the traditional four-spinor LCAO retains enough projection power from the numerical atomic orbitals and is suitable for research in relativistic quantum chemistry. In scattering investigations carried out for the same comparison, the traditional LCAO method fails to provide a stable spectrum as the basis-set size increases, whereas the TVDB method yields no spurious states even without pre-orthogonalization of the basis sets. Keeping all other conditions identical, including the accuracy of the matrix elements, shows that variational instability prevails over linear dependence of the basis sets. The success of the TVDB method demonstrates its capability not only in relativistic quantum chemistry but also in scattering problems and under strong external electric and magnetic fields. The good accuracy in total energy with large basis sets and the good projection property encourage wider research on different molecules, with better functionals, and on small effects.
Abstract:
At present, a fraction of 0.1 - 0.2% of patients undergoing surgery become aware during the procedure. This situation is referred to as anesthesia awareness and is obviously very traumatic for the person experiencing it. It is mostly caused by an insufficient dose of the narcotic Propofol, combined with the inability of the technology monitoring the depth of the patient's anesthetic state to notice the patient becoming aware. A possible solution is a highly sensitive and selective real-time monitoring device for Propofol based on optical absorption spectroscopy. Its working principle was postulated by Prof. Dr. habil. H. Hillmer and formulated in DE10 2004 037 519 B4, filed on Aug 30th, 2004. It exploits intra-cavity absorption effects in a two-mode laser system. In this dissertation, a two-mode external-cavity semiconductor laser developed prior to this work is enhanced and optimized into a functional sensor. Enhancements include the implementation of variable couplers into the system and of a collimator arrangement into which samples can be introduced. A sample holder and cells are developed and characterized with a focus on compatibility with the measurement approach. Further optimization concerns the overall performance of the system: scattering sources are reduced by re-splicing all fiber-to-fiber connections, parasitic cavities are eliminated by suppressing the Fresnel reflections of all open fiber ends by means of optical isolators, and wavelength stability is improved by thermally insulating the fiber Bragg gratings (FBGs). The final laser sensor is characterized in detail thermally and optically. Two separate modes are obtained at 1542.0 and 1542.5 nm, each tunable over a range of 1 nm. The mode full width at half maximum (FWHM) is 0.06 nm and the signal-to-noise ratio (SNR) is as high as 55 dB.
Independent of tuning, the two modes of the system can always be equalized in intensity, which is important because the delicacy of the intensity equilibrium is one of the main sensitivity-enhancing effects formulated in DE10 2004 037 519 B4. For the proof-of-concept (POC) measurements, the target substance Propofol is diluted in the solvents acetone and dichloromethane (DCM), which were investigated beforehand for compatibility with Propofol. Eight measurement series (two solvents, two cell lengths and two different mode spacings) are taken, and they draw a uniform picture: the mode intensity ratio responds linearly to an increase of Propofol concentration in all cases. The slope of the linear response indicates the sensitivity of the system. The eight series split into two groups: measurements taken in long cells and measurements taken in short cells.
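The sensitivity quoted for each measurement series is the slope of the linear response of the mode-intensity ratio to Propofol concentration. A minimal sketch of how such a slope can be extracted with a least-squares fit; the calibration numbers below are hypothetical, not the dissertation's data:

```python
import numpy as np

# Hypothetical calibration series (illustrative values only):
# Propofol concentration in the solvent vs. measured mode-intensity ratio.
concentration = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # arbitrary units
intensity_ratio = np.array([1.00, 1.05, 1.10, 1.15, 1.20])  # dimensionless

# The sensitivity of the sensor is the slope of the linear response.
slope, intercept = np.polyfit(concentration, intensity_ratio, 1)
print(f"sensitivity = {slope:.3f} per concentration unit")
```

A steeper slope corresponds to a more sensitive measurement series.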
Abstract:
SUMMARY: Protein kinases perform central tasks in the signal transduction of higher cells. Among them, the cAMP-dependent protein kinase (PKA) is one of the best characterized protein kinases with respect to structure and function. Nevertheless, little is known about direct interaction partners of its catalytic subunits (PKA-C). In a split-ubiquitin-based yeast two-hybrid (Y2H) system, potential interaction partners of PKA-C were identified. Both the human main isoform Cα (hCα) and protein kinase X (PrKX) were used as baits. After confirmation of the functionality of the PKA-C bait proteins, verification of their expression, and demonstration of the interaction with the known interaction partner PKI, a Y2H screen was carried out against a mouse embryo cDNA expression library. From 2*10^6 clones, 76 colonies were isolated that expressed a prey protein interacting with PrKX. Sequencing of the prey vectors they contained identified 25 distinct potential interaction partners. For hCα, more than 2*10^6 S. cerevisiae colonies were screened, of which 1,959 were positive (1,663 under increased stringency). By sequencing about 10% of these clones (168), sequences for 67 different potential interaction partners of hCα were identified. Fifteen of the prey proteins were found in both screens. The PKA-C-specific interaction of the 77 prey proteins in total was examined in a bait-dependency test against largeT, a protein unrelated to the PKA system. From the PKA-C-specific binders, the soluble prey proteins AMY-1, Bax72-192, Fabp3, Gng11, MiF, Nm23-M1, Nm23-M2, Sssca1 and VASP256-375 were selected for further in vitro validation. The interaction of FLAG-Strep-Strep-hCα (FSS-hCα) with the One-STrEP-HA proteins (SSHA proteins), purified via Strep-Tactin after recombinant expression in E. coli, was confirmed by co-immunoprecipitation for SSHA-Fabp3, -Nm23-M1, -Nm23-M2, -Sssca1 and -VASP256-375. In SPR studies, for which hCα was covalently coupled to the surface of a CM5 sensor chip, the ATP/Mg2+ dependence of the bindings and the differential effects of the ATP-competitive inhibitors H89 and HA-1077 were examined. Free hCα added to the SSHA proteins before injection competed with binding to the hCα surface, in contrast to FSS-PrKX. Initial kinetic analyses yielded equilibrium dissociation constants in the µM (SSHA-Fabp3, -Sssca1), nM (SSHA-Nm23-M1, -M2) and pM (SSHA-VASP256-375) ranges, respectively. In functional analyses, phosphorylation of SSHA-Sssca1 and VASP256-375 by hCα and FSS-PrKX was demonstrated by autoradiography. SSHA-VASP256-375 also showed strong inhibition of hCα in a mobility-shift assay. However, this inhibitory effect, as well as the high affinity, could be traced back to a combination of the vector's linker sequence and the N-terminus of VASP256-375. Follow-up studies of the interactions of the partners identified here, Fabp3, Nm23-M1 and Nm23-M2, with hCα may uncover new PKA functions, particularly in the heart and during cell migration. Sssca1, by contrast, represents a new PKA substrate that remains to be characterized in more detail.
Abstract:
Two simple and frequently used capture–recapture estimators of population size are compared: Chao's lower-bound estimator and Zelterman's estimator, which allows for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, Zelterman's estimator is always bounded above by Chao's. If counts larger than two exist, Zelterman's estimator exceeds Chao's only if the ratio of the frequency of twos to the frequency of ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can suffer from large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
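Both estimators have simple closed forms built from n, the number of distinct individuals observed, and f1, f2, the numbers of individuals seen exactly once and exactly twice. A minimal sketch (function names are ours):

```python
import math

def chao_estimate(f1, f2, n):
    """Chao's lower-bound estimator: N = n + f1^2 / (2*f2)."""
    return n + f1 ** 2 / (2 * f2)

def zelterman_estimate(f1, f2, n):
    """Zelterman's estimator: estimate the Poisson rate robustly from the
    ones and twos as lambda = 2*f2/f1, then correct for the unseen zero
    class via n / (1 - exp(-lambda))."""
    lam = 2 * f2 / f1
    return n / (1 - math.exp(-lam))

# Only counts of ones and twos, so n = f1 + f2:
f1, f2 = 60, 20
n = f1 + f2
print(chao_estimate(f1, f2, n))       # 170.0
print(zelterman_estimate(f1, f2, n))  # about 164.4, below Chao's value
```

Consistent with the result stated above, with only counts of ones and twos Zelterman's estimate stays below Chao's.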
Abstract:
Understanding how multiple signals are integrated in living cells to produce a balanced response is a major challenge in biology. Two-component signal transduction pathways, such as bacterial chemotaxis, comprise histidine protein kinases (HPKs) and response regulators (RRs), which are used to sense and respond to changes in the environment. Rhodobacter sphaeroides has a complex chemosensory network with two signaling clusters, each containing an HPK, CheA. Here we demonstrate, using a mathematical model, how the outputs of the two signaling clusters may be integrated. We use our mathematical model, supported by experimental data, to predict that: (1) the main RR controlling flagellar rotation, CheY6, aided by its specific phosphatase, the bifunctional kinase CheA3, acts as a phosphate sink for the other RRs; and (2) a phosphorelay pathway involving CheB2 connects the cytoplasmic cluster kinase CheA3 with the polar-localised kinase CheA2, allowing CheA3-P to phosphorylate non-cognate chemotaxis RRs. These two mechanisms enable the bifunctional kinase/phosphatase activity of CheA3 to integrate and tune the sensory output of each signaling cluster to produce a balanced response. The signal integration mechanisms identified here may be widely used by other bacteria since, like R. sphaeroides, over 50% of chemotactic bacteria have multiple cheA homologues and need to integrate signals from different sources.
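The phosphate-sink mechanism can be illustrated with a toy ODE model. This is our construction, not the paper's model: the species and rate constants are arbitrary assumptions. A kinase A autophosphorylates and transfers phosphoryl groups to two response regulators; when one of them is dephosphorylated quickly, as CheY6 is by its phosphatase CheA3, it drains the shared phosphate pool and lowers the steady-state phosphorylation of the other.

```python
def steady_state_r1p(h2, k_auto=1.0, k1=5.0, k2=5.0, h1=1.0,
                     dt=0.001, steps=50_000):
    """Euler-integrate a kinase A feeding two response regulators R1, R2.
    h2 is the dephosphorylation rate of R2; a large h2 makes R2 a sink.
    Returns the (near) steady-state phosphorylated fraction of R1."""
    ap, r1p, r2p = 0.0, 0.0, 0.0  # phosphorylated fractions of A, R1, R2
    for _ in range(steps):
        # Autophosphorylation of A minus phosphotransfer to R1 and R2:
        dap = k_auto * (1 - ap) - (k1 * (1 - r1p) + k2 * (1 - r2p)) * ap
        dr1 = k1 * (1 - r1p) * ap - h1 * r1p
        dr2 = k2 * (1 - r2p) * ap - h2 * r2p
        ap, r1p, r2p = ap + dap * dt, r1p + dr1 * dt, r2p + dr2 * dt
    return r1p

slow_sink = steady_state_r1p(h2=0.1)   # R2 retains its phosphate
fast_sink = steady_state_r1p(h2=50.0)  # R2 acts as a phosphate sink
print(slow_sink > fast_sink)  # True: the sink lowers R1~P
```

With a fast phosphatase on R2, the shared phosphate pool is drained and R1's phosphorylation level drops, which is the qualitative behaviour the phosphate-sink prediction rests on.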
Abstract:
The surface of a nanofiber formed from a self-assembling pseudopeptide has been decorated with gold and silver nanoparticles that are stabilized by a dipeptide. Transmission electron microscopy images make the decoration visible. This paper describes a new strategy for mineralizing a pseudopeptide-based nanofiber with gold and silver nanoparticles using a two-component nanografting method.
Abstract:
The Genetic Analysis Workshop 15 (GAW15) Problem 1 contained baseline expression levels of 8793 genes in immortalised B cells from 194 individuals in 14 Centre d’Etude du Polymorphisme Humain (CEPH) Utah pedigrees. Previous analyses of the data showed linkage and association, and evidence of substantial inter-individual variation. In particular, correlation was examined among the expression levels of 31 genes and 25 target genes corresponding to two master regulatory regions. In this analysis, we apply Bayesian network analysis to gain further insight into these findings. We identify strong dependences and thereby provide additional insight into the underlying relationships between the genes involved. More generally, the approach is expected to be applicable to the integrated analysis of genes on biological pathways.
Abstract:
The Stokes drift induced by surface waves distorts turbulence in the wind-driven mixed layer of the ocean, leading to the development of streamwise vortices, or Langmuir circulations, on a wide range of scales. We investigate the structure of the resulting Langmuir turbulence, and contrast it with the structure of shear turbulence, using rapid distortion theory (RDT) and kinematic simulation of turbulence. Firstly, these linear models show clearly why elongated streamwise vortices are produced in Langmuir turbulence, where Stokes drift tilts and stretches vertical vorticity into horizontal vorticity, whereas elongated streaky structures in the streamwise velocity fluctuations (u) are produced in shear turbulence, because there is a cancellation in the streamwise vorticity equation and instead it is vertical vorticity that is amplified. Secondly, we develop scaling arguments, illustrated by analysing data from LES, indicating that Langmuir turbulence is generated when the deformation of the turbulence by mean shear is much weaker than the deformation by the Stokes drift. These scalings motivate a quantitative RDT model of Langmuir turbulence that accounts for deformation of turbulence by Stokes drift and blocking by the air–sea interface, and that is shown to yield profiles of the velocity variances in good agreement with LES. The physical picture that emerges, at least in the LES, is as follows. Early in the life cycle of a Langmuir eddy, initial turbulent disturbances of vertical vorticity are amplified algebraically by the Stokes drift into elongated streamwise vortices, the Langmuir eddies. The turbulence is thus in a near two-component state, with the streamwise velocity fluctuations (u) suppressed. Near the surface, over a depth of order the integral length scale of the turbulence, the vertical velocity (w) is brought to zero by blocking by the air–sea interface. Since the turbulence is nearly two-component, this vertical energy is transferred into the spanwise fluctuations, considerably enhancing them at the interface. After a time of order half the eddy decorrelation time, nonlinear processes, such as distortion by the strain field of the surrounding eddies, arrest the deformation and the Langmuir eddy decays. Presumably, Langmuir turbulence then consists of a statistically steady state of such Langmuir eddies. The analysis thus provides a dynamical connection between the flow structures in LES of Langmuir turbulence and the dominant balance between Stokes production and dissipation in the turbulent kinetic energy budget found by previous authors.
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset-selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises the model prediction error as well as penalising the model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
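The A-optimality criterion referred to above is standard: the trace of the inverse information matrix, tr((XᵀX)⁻¹), which is proportional to the average parameter-estimate variance. A minimal sketch; the greedy search below is our illustration only, since the paper applies a derived lower bound of the criterion for subset selection rather than this search:

```python
import numpy as np

def a_optimality(X):
    """A-optimality criterion: trace of the inverse information matrix.
    Smaller values mean lower average parameter variance."""
    return np.trace(np.linalg.inv(X.T @ X))

def greedy_select(X_full, n_rows):
    """Greedily pick rows (candidate design points) that keep the
    A-optimality criterion small. Illustrative only."""
    chosen, remaining = [], list(range(X_full.shape[0]))
    for _ in range(n_rows):
        best, best_score = None, np.inf
        for i in remaining:
            trial = X_full[chosen + [i], :]
            if trial.shape[0] < trial.shape[1]:
                # X^T X not yet invertible: fall back to maximising
                # the collected information instead.
                score = -np.trace(trial.T @ trial)
            else:
                score = a_optimality(trial)
            if score < best_score:
                best, best_score = i, score
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))   # 10 candidate points, 3 parameters
subset = greedy_select(X, 5)
print(subset, a_optimality(X[subset, :]))
```

Minimising tr((XᵀX)⁻¹) directly penalises parameter variance, which is why the paper folds the criterion into its composite cost function.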
Abstract:
The main objective is to develop methods that automatically generate kinematic models for the movements of biological and robotic systems. Two methods for the identification of the kinematics are presented. The first method requires the elimination of the displacement variables that cannot be measured while the second method attempts to estimate the changes in these variables. The methods were tested using a planar two-revolute-joint linkage. Results show that the model parameters obtained agree with the actual parameters to within 5%. Moreover, the methods were applied to model head and neck movements in the sagittal plane. The results indicate that these movements are well modeled by a two-revolute-joint system. A spatial three-revolute-joint model was also discussed and tested.
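For reference, the forward kinematics of a planar two-revolute-joint linkage like the test rig has a simple closed form (link lengths and angle convention below are the usual textbook ones, not values from the paper):

```python
import math

def planar_2r_forward(l1, l2, theta1, theta2):
    """End-point position of a planar 2R linkage; theta2 is measured
    relative to the first link."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(planar_2r_forward(1.0, 0.5, 0.0, 0.0))  # fully extended: (1.5, 0.0)
```

Identifying the kinematics then amounts to recovering parameters such as l1 and l2 (plus joint offsets) from measured end-point trajectories, which is where the two methods of the paper differ.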
Abstract:
Fine roots constitute an interface between plants and soils and thus play a crucial part in forest carbon, nutrient and water cycles. Their continuous growth and dieback, often termed fine root turnover, may constitute a major carbon input to soils and contribute significantly to the belowground carbon cycle. For this reason, it is important to estimate accurately not only the standing biomass of fine roots, but also its rate of turnover. To date, no direct and reliable method of measuring fine root turnover exists. The main reason is that the two component processes of root turnover, growth and dieback of fine roots, nearly always happen in the same place and at the same time. Furthermore, the estimation of fine root turnover is complicated by the inaccessibility of tree root systems and the labour intensiveness of the work, and is often compounded by artefacts created by soil disturbance. Although elucidating the patterns and controls of forest fine root turnover is of utmost importance for developing realistic carbon cycle models, our knowledge of the contribution of fine root turnover to carbon and nutrient cycles in forests remains uncertain. This chapter details all major methods currently used for estimating fine root turnover and highlights their advantages as well as their drawbacks.
Abstract:
Equilibrium phase diagrams are calculated for a selection of two-component block copolymer architectures using self-consistent field theory (SCFT). The topology of the phase diagrams is relatively unaffected by differences in architecture, but the phase boundaries shift significantly in composition. The shifts are consistent with the decomposition of architectures into constituent units as proposed by Gido and coworkers, but there are significant quantitative deviations from this principle in the intermediate-segregation regime. Although the complex phase windows continue to be dominated by the gyroid (G) phase, the regions of the newly discovered Fddd (O^70) phase become appreciable for certain architectures and the perforated-lamellar (PL) phase becomes stable when the complex phase windows shift towards high compositional asymmetry.
Abstract:
Sea ice is a two-phase, two-component, reactive porous medium: an example of what is known in other contexts as a mushy layer. The fundamental conservation laws underlying the mathematical description of mushy layers provide a robust foundation for the prediction of sea-ice evolution. Here we show that the general equations describing mushy layers reduce to the model of Maykut and Untersteiner (1971) under the same approximations employed therein.
Abstract:
In the UK, architectural design is regulated through a system of design control for the public interest, which aims to secure and promote ‘quality’ in the built environment. Design control is primarily implemented by locally employed planning professionals with political oversight, and by independent design review panels staffed predominantly by design professionals. Design control has a lengthy and complex history, and the concept of ‘design’ poses a range of challenges for a regulatory system of governance. As a simultaneously creative and emotive discipline, architectural design is difficult to regulate objectively or consistently, often leading to policy that is regarded as highly discretionary and flexible. This makes regulatory outcomes difficult to predict, as the approaches taken by the ‘agents of control’ can vary with the individual. The role of the design controller is therefore central: controllers are tasked with interpreting design policy and guidance, appraising design quality and passing professional judgment. However, little is really known about what influences the way design controllers approach their task, casting a ‘veil’ over design control and shrouding the basis of their decisions. This research engaged directly with the attitudes and perceptions of design controllers in the UK, lifting this ‘veil’. Using in-depth interviews and Q-Methodology, the thesis explores this hidden element of control, revealing a number of key differences in how controllers approach and implement policy and guidance, conceptualise design quality, and rationalise their evaluations and judgments.
The research develops a conceptual framework for agency in design control – this consists of six variables (Regulation; Discretion; Skills; Design Quality; Aesthetics; and Evaluation) and it is suggested that this could act as a ‘heuristic’ instrument for UK controllers, prompting more reflexivity in relation to evaluating their own position, approaches, and attitudes, leading to better practice and increased transparency of control decisions.