993 results for parameter-space graph


Relevance: 80.00%

Abstract:

We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type series and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. © 2012 Elsevier B.V. All rights reserved.
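Isoperiodic and Lyapunov diagrams of this kind are typically built by estimating the largest Lyapunov exponent at every point of the two-parameter grid. The abstract does not give the gene map itself, so the sketch below uses the Hénon map as a hypothetical stand-in; the Benettin-style tangent-vector iteration is the same for any smooth two-dimensional map.

```python
import math

def largest_lyapunov(a, b, n_iter=4000, n_skip=1000):
    """Benettin-style estimate of the largest Lyapunov exponent of the
    Henon map x' = 1 - a*x^2 + y, y' = b*x (here a stand-in for a
    generic smooth two-dimensional map)."""
    x, y = 0.1, 0.1
    vx, vy = 1.0, 0.0                # tangent vector
    total = 0.0
    for i in range(n_iter):
        # push the tangent vector through the Jacobian [[-2*a*x, 1], [b, 0]]
        nvx = -2.0 * a * x * vx + vy
        nvy = b * vx
        # advance the orbit
        x, y = 1.0 - a * x * x + y, b * x
        norm = math.hypot(nvx, nvy)
        vx, vy = nvx / norm, nvy / norm
        if i >= n_skip:              # discard the transient
            total += math.log(norm)
    return total / (n_iter - n_skip)
```

Scanning `largest_lyapunov` over an (a, b) grid and colouring each point by the sign of the exponent reproduces the familiar structure of periodic windows (negative exponent) embedded in chaotic regions (positive exponent).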

Relevance: 80.00%

Abstract:

We consider a two-parameter family of Z2 gauge theories on a lattice discretization T(M) of a three-manifold M and its relation to topological field theories. Familiar models such as the spin-gauge model are curves on a parameter space Γ. We show that there is a region Γ0 ⊂ Γ where the partition function and the expectation value ⟨W_R(γ)⟩ of the Wilson loop can be computed exactly. Depending on the point of Γ0, the model behaves as topological or quasi-topological. The partition function is, up to a scaling factor, a topological number of M. The Wilson loop, on the other hand, does not depend on the topology of γ. However, for a subset of Γ0, ⟨W_R(γ)⟩ depends on the size of γ and follows a discrete version of an area law. In the zero-temperature limit, the spin-gauge model approaches the topological or the quasi-topological region, depending on the sign of the coupling constant.
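The discrete area law mentioned above can be made concrete in a toy setting. The sketch below is an illustrative assumption, not the paper's three-manifold construction: it enumerates all configurations of a Z2 gauge theory on a small two-dimensional open lattice and checks that the Wilson loop around the boundary satisfies the exact area law ⟨W⟩ = (tanh β)^Area.

```python
import itertools
import math

L = 2  # a 2x2 block of plaquettes with open boundary (12 links, 2^12 states)
H_LINKS = [('h', i, j) for i in range(L + 1) for j in range(L)]
V_LINKS = [('v', i, j) for i in range(L) for j in range(L + 1)]
LINKS = H_LINKS + V_LINKS

def wilson_loop_expectation(beta):
    """Exact enumeration of 2D Z2 lattice gauge theory with open boundary:
    returns <W> for the Wilson loop around the full boundary (area L*L)."""
    Z, Wsum = 0.0, 0.0
    for bits in itertools.product((1, -1), repeat=len(LINKS)):
        u = dict(zip(LINKS, bits))
        # sum of plaquette products U_p
        s = sum(u[('h', i, j)] * u[('v', i, j + 1)]
                * u[('h', i + 1, j)] * u[('v', i, j)]
                for i in range(L) for j in range(L))
        w = math.exp(beta * s)
        # Wilson loop = product of all boundary links
        W = 1
        for j in range(L):
            W *= u[('h', 0, j)] * u[('h', L, j)]
        for i in range(L):
            W *= u[('v', i, 0)] * u[('v', i, L)]
        Z += w
        Wsum += W * w
    return Wsum / Z
```

In the character expansion only the term covering all L*L plaquettes pairs every link an even number of times with the loop, which is where the (tanh β)^Area factor comes from.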

Relevance: 80.00%

Abstract:

The relationship between the genetic polymorphism of populations and environmental variability: an application of fitness-set theory. The quantitative fitness-set model (QFM) is an extension of fitness-set theory. The QFM can represent gradations between coarse-grained and fine-grained regular fluctuations of two environments. Environment-specific and species-specific parameters, as well as the resulting graininess, are quantifiable. Experimental data can be analysed with the model, and the QFM proves very accurate for large populations, which is supported by the discrete parameter space. Small populations and/or high genetic diversity lead to estimation inaccuracies, which are also to be expected in natural populations. A population-size-dependent uncertainty value extends the point estimate of a parameter set to an interval estimate. In finite populations these intervals act as fitness bands. This leads to the hypothesis that generalists evolve in species living in dense, continuous fitness bands, whereas specialists evolve in discrete fitness bands. Asynchronous reproduction strategies lead to the preservation of genetic diversity. A change from coarse-grained to fine-grained environmental variation favours the specialized genotypes. From this starting point for disruptive selection, the hypothesis of speciation in transition scenarios from coarse-grained to fine-grained environmental variation can be formulated. In the reverse case, a loss of diversity and stabilizing selection are to be expected. This thus offers a process-oriented explanation for the species richness of the (fine-grained) tropics compared with the species-poorer (coarse-grained) temperate zones, which are subject to seasonal fluctuations.

Relevance: 80.00%

Abstract:

The subject of this thesis is the interaction between nucleosome core particles (NCPs). NCPs are the primary storage units of DNA in eukaryotic cells. Each NCP consists of a core of eight histone proteins and a strand of DNA, which is wrapped around it about two times. Each histone protein has a terminal tail passing over and between the superhelix of the wrapped DNA. Special emphasis was placed on the role of the histone tails, since experimental findings suggest that the tails have a great influence on the mutual attraction of the NCPs. In those experiments, Mangenot et al. observe a dramatic change in the configuration of the tails, accompanied by evidence of mutual attraction between NCPs, when a certain salt concentration is reached. Existing models used in theoretical approaches and in simulations focus on the description of the histone core and the wrapped DNA, but neglect the histone tails. We introduce the multi-chain complex as a new simulation model. Here the histone core and the wrapped DNA are modelled as a charged sphere, while the histone tails are represented by oppositely charged chains grafted onto the sphere surface. We start by investigating the parameter space describing a single NCP. The Debye-Hückel potential is used to model the electrostatic interactions and to determine the effective charge of the NCP core. This value is subsequently used for a study of the pair interaction of two NCPs via an extensive molecular dynamics study. The monomer distribution of the full chain model is investigated, and the existence of tail bridges between the cores is demonstrated. Finally, by discriminating between bridging and non-bridging configurations, we can show that the effect of tail bridging between the spheres does indeed account for the observed attraction. The full chain model can serve as a model to study the acetylation of the histone tails of the nucleosome. The reduction of the charge fraction of the tails, which corresponds to the process of acetylation, leads to a reduction or even the disappearance of the attraction. A recent MC study links this effect to the unfolding of the chromatin fiber in the case of acetylated histone tails. Conversely, the absence of acetylation would favour the formation of heterochromatin, suggesting how larger regions of the genetic information could be inactivated through this mechanism.
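The Debye-Hückel interaction used for the electrostatics has a simple closed form. The snippet below is a minimal sketch with illustrative parameter values (not those of the thesis) of the screened pair energy between two charges, showing how a larger salt-dependent screening constant κ weakens the attraction between oppositely charged core and tail monomers.

```python
import math

def debye_huckel(q1, q2, r, kappa, bjerrum=0.7):
    """Screened Coulomb pair energy in units of kT:
    U = l_B * q1 * q2 * exp(-kappa * r) / r.
    bjerrum: Bjerrum length (about 0.7 nm in water at room temperature),
    kappa: inverse Debye screening length in 1/nm, r: separation in nm."""
    return bjerrum * q1 * q2 * math.exp(-kappa * r) / r
```

Increasing the salt concentration increases κ, which exponentially suppresses the bare Coulomb interaction; this is the mechanism by which salt tunes the balance between core-core repulsion and tail bridging.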

Relevance: 80.00%

Abstract:

The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent, peculiar structures of the individual phenotype. Being able to reproduce the system's dynamics at different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large, dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. According to these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space in which the objective function to be minimised is the distance between the output of the simulator and a target output. The problem is tackled with a metaheuristic algorithm. As an example of application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from free on-line sources.
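Gillespie's direct method, on which the MS-BioNET engine is based, can be sketched in a few lines. The implementation below is a generic single-compartment version; the mRNA birth-death reaction set used to exercise it is an illustrative assumption, not a model from the thesis.

```python
import random

def gillespie(x0, reactions, t_max, seed=1):
    """Gillespie's direct SSA.  x0: initial counts per species;
    reactions: list of (propensity_fn, state_change) pairs."""
    rng = random.Random(seed)
    t, x = 0.0, dict(x0)
    trace = [(t, dict(x))]
    while t < t_max:
        props = [rate(x) for rate, _ in reactions]
        a0 = sum(props)
        if a0 <= 0.0:
            break                       # no reaction can fire
        t += rng.expovariate(a0)        # exponential waiting time
        # pick a reaction with probability proportional to its propensity
        r, acc = rng.random() * a0, 0.0
        for p, (_, change) in zip(props, reactions):
            acc += p
            if r <= acc:
                for species, delta in change.items():
                    x[species] = x.get(species, 0) + delta
                break
        trace.append((t, dict(x)))
    return trace

# illustrative birth-death model: production at rate 5, degradation 0.1*m
reactions = [(lambda x: 5.0, {'m': +1}),
             (lambda x: 0.1 * x['m'], {'m': -1})]
trace = gillespie({'m': 0}, reactions, t_max=200.0)
```

For this birth-death network the stationary mean copy number is the production rate over the degradation constant (here 50), which a long trajectory fluctuates around.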

Relevance: 80.00%

Abstract:

This thesis is concerned with the estimation of parameters in discrete-time ergodic Markov processes in general and in the CIR model in particular. The CIR model is a stochastic differential equation proposed by Cox, Ingersoll and Ross (1985) to describe the dynamics of interest rates. The problem is to estimate the parameters of the drift and diffusion coefficients from equidistant discrete observations of the CIR process. After a short introduction to the CIR model, we use the method of martingale estimating functions and estimating equations, investigated in particular by Bibby and Sørensen, to study the problem of parameter estimation in ergodic Markov processes in full generality. Following investigations by Sørensen (1999), sufficient conditions (in the sense of regularity assumptions on the estimating function) are given for the existence, strong consistency and asymptotic normality of solutions of a martingale estimating equation. Applied to the special case of likelihood estimation, these conditions also ensure local asymptotic normality of the model. Furthermore, a simple criterion for Godambe-Heyde optimality of estimating functions is given, and it is sketched how this can be used to construct optimal estimating functions explicitly in important special cases. The general results are then applied to the discretized CIR model. We analyse some estimators of the drift and diffusion coefficients proposed by Overbeck and Rydén (1997), which are defined as solutions of quadratic martingale estimating functions, and compute the optimal element of this class. Finally, we generalize results of Overbeck and Rydén (1997) by proving the existence of a strongly consistent and asymptotically normal solution of the likelihood equation and by establishing local asymptotic normality for the CIR model without restrictions on the parameter space.
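The drift estimation problem can be illustrated with a simple linear estimating equation. The sketch below is an assumption for illustration (an Euler discretization with made-up parameter values, not the estimators of Overbeck and Rydén): it simulates a CIR path and recovers the drift parameters by conditional least squares, i.e. by regressing each observation on its predecessor.

```python
import math
import random

def simulate_cir(kappa, theta, sigma, dt, n, seed=7):
    """Full-truncation Euler discretization of the CIR process
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW."""
    rng = random.Random(seed)
    x, path = theta, [theta]
    for _ in range(n):
        xp = max(x, 0.0)
        x = x + kappa * (theta - xp) * dt \
              + sigma * math.sqrt(xp * dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_drift(path, dt):
    """Conditional least squares: regress X_{t+1} on X_t.
    For the Euler scheme, slope ~ 1 - kappa*dt, intercept ~ kappa*theta*dt."""
    x, y = path[:-1], path[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    kappa = (1.0 - slope) / dt
    theta = intercept / (kappa * dt)
    return kappa, theta
```

This is the simplest member of the linear/quadratic estimating-function family: it uses only the conditional first moment, whereas diffusion-coefficient estimation additionally needs the conditional second moment.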

Relevance: 80.00%

Abstract:

The ability of block copolymers to spontaneously self-assemble into a variety of ordered nano-structures not only makes them a scientifically interesting system for the investigation of order-disorder phase transitions, but also offers a wide range of nano-technological applications. The architecture of a diblock is the simplest among block copolymer systems, hence it is often used as a model system in both experiment and theory. We introduce a new soft-tetramer model for efficient computer simulations of diblock copolymer melts. The instantaneous non-spherical shape of polymer chains in the molten state is incorporated by modeling each of the two blocks as two soft spheres. The interactions between the spheres are modeled such that the diblock melt tends to microphase separate with decreasing temperature. Using Monte Carlo simulations, we determine the equilibrium structures at variable values of the two relevant control parameters, the diblock composition and the incompatibility of unlike components. The simplicity of the model allows us to scan the control parameter space with a completeness that has not been reached in previous molecular simulations. The resulting phase diagram shows clear similarities with the phase diagram found in experiments. Moreover, we show that structural details of block copolymer chains can be reproduced by our simple model. We develop a novel method for the identification of the observed diblock copolymer mesophases that formalizes the usual approach of direct visual observation, using the characteristic geometry of the structures. A cluster analysis algorithm is used to determine clusters of each component of the diblock, and the number and shape of the clusters can be used to determine the mesophase. We also employ methods from integral geometry for the identification of mesophases and compare their usefulness to the cluster analysis approach. To probe the properties of our model in confinement, we perform molecular dynamics simulations of atomistic polyethylene melts confined between graphite surfaces. The results from these simulations are used as input for an iterative coarse-graining procedure that yields a surface interaction potential for the soft-tetramer model. Using the interaction potential derived in this way, we perform an initial study of the behavior of the soft-tetramer model in confinement. Comparing with experimental studies, we find that our model can reflect basic features of confined diblock copolymer melts.
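The cluster-analysis step can be sketched with a distance-cutoff criterion and union-find. This is a minimal illustration of the idea only; the thesis's actual algorithm and cutoff choice are not specified here.

```python
def count_clusters(points, cutoff):
    """Count connected clusters of particles: two particles belong to the
    same cluster if their distance is below `cutoff` (union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    c2 = cutoff * cutoff
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) < c2:
                parent[find(i)] = find(j)   # merge the two clusters
    return len({find(i) for i in range(len(points))})
```

Counting the clusters of each component (one big percolating cluster for the matrix, many small ones for spheres, a few elongated ones for cylinders, and so on) is what turns visual mesophase identification into a quantitative criterion.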

Relevance: 80.00%

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts only for around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to resolve open questions of the SM, such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions, a new light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM through kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for instance in electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. To this end, the first part of this work demonstrates how the hidden photon can be motivated by existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation for calculating the signal cross section of the process, which is widely used to design such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. The derived methods are then used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.

Relevance: 80.00%

Abstract:

This thesis is on loop-induced processes in theories with warped extra dimensions, where the fermions and gauge bosons are allowed to propagate in the bulk, while the Higgs sector is localized on or near the infra-red brane. These so-called Randall-Sundrum (RS) models have the potential to simultaneously explain the hierarchy problem and address the question of what causes the large hierarchies in the fermion sector of the Standard Model (SM). The Kaluza-Klein (KK) excitations of the bulk fields can significantly affect the loop-level processes considered in this thesis and, hence, could indirectly indicate the existence of warped extra dimensions. The analytical part of this thesis deals with the detailed calculation of three loop-induced processes in the RS models in question: the Higgs production process via gluon fusion, the Higgs decay into two photons, and the flavor-changing neutral current b → sγ. A comprehensive five-dimensional (5D) analysis shows that the amplitudes of the Higgs processes can be expressed in terms of integrals over 5D propagators with the Higgs-boson profile along the extra dimension, which can be used for arbitrary models with a compact extra dimension. To this end, both the boson and fermion propagators in a warped 5D background are derived. It is shown that the seemingly contradictory results for the gluon fusion amplitude in the literature can be traced back to two distinguishable, not smoothly connected incarnations of the RS model. The investigation of the b → sγ transition is performed in the KK-decomposed theory. It is argued that summing up the entire KK tower leads to a finite result, which can be well approximated by a closed analytical expression. In the phenomenological part of this thesis, the analytic results for all relevant Higgs couplings in the RS models in question are compared with current and, in particular, future sensitivities of the Large Hadron Collider (LHC) and the planned International Linear Collider. The latest LHC Higgs data is then used to exclude significant portions of the parameter space of each RS scenario. The analysis demonstrates that especially the loop-induced Higgs couplings are sensitive to KK particles of the custodial RS model with masses in the multi-TeV range. Finally, the effects of the RS model on three flavor observables associated with the b → sγ transition are examined. In particular, we study the branching ratio of the inclusive decay B → X_s γ.

Relevance: 80.00%

Abstract:

In recent years, a major step towards strongly increased efficiency has been taken for spin-filter detectors. This is an important prerequisite for spin-resolved measurements with modern electron spectrometers and momentum microscopes. In this doctoral thesis, previous work on the parallel imaging technique was developed further. The technique relies on the fact that an electron-optical image is preserved even after reflection from a crystalline surface, exploiting the conservation of the parallel momentum component in low-energy electron diffraction. Earlier measurements based on specular reflection from a W(001) surface [Kolbe et al., 2011; Tusche et al., 2011] were extended to a much larger parameter range, and with Ir(001) a new system was studied that exhibits a much longer lifetime of the cleaned crystal surface in UHV. The scattering-energy and incidence-angle "landscape" of the spin sensitivity S and the reflectivity I/I0 of scattered electrons was measured in the range of 13.7-36.7 eV scattering energy and 30°-60° scattering angle. The newly built measurement setup comprises a spin-polarized GaAs electron source and a rotatable electron detector (delay-line detector) for position-resolved detection of the scattered electrons. The results show several regions with high asymmetry and a large figure of merit (FoM), defined as S²·I/I0. These regions open a path to a significant improvement of multichannel spin-filter techniques for electron spectroscopy and momentum microscopy. In practical use, the Ir(001) single-crystal surface proved very promising with regard to its longer lifetime in UHV (about one measurement day) combined with a high FoM. The Ir(001) detector was used together with a hemispherical analyser in a femtosecond time-resolved experiment at the free-electron laser FLASH at DESY. Good working points were found at 45° scattering angle and 39 eV scattering energy, with a usable energy width of 5 eV, and at 10 eV scattering energy with a narrower profile of < 1 eV but an approximately 10× larger figure of merit. The spin asymmetry reaches values of up to 70%, which clearly reduces the influence of instrumental asymmetries. The resulting measurements and the energy-angle landscape show quite good agreement with theory (relativistic layer-KKR SPLEED code [Braun et al., 2013; Feder et al., 2012]).
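Choosing a working point from such a landscape amounts to maximizing the figure of merit FoM = S²·I/I0 over the measured grid. A minimal sketch with made-up toy values (not the measured Ir(001) data):

```python
def best_working_point(energies, angles, S, R):
    """Scan a spin-sensitivity (S) / reflectivity (R = I/I0) landscape and
    return (FoM, energy, angle) at the maximal figure of merit S^2 * R."""
    best = None
    for i, e in enumerate(energies):
        for j, a in enumerate(angles):
            fom = S[i][j] ** 2 * R[i][j]
            if best is None or fom > best[0]:
                best = (fom, e, a)
    return best
```

The quadratic weighting of S reflects that statistical error in a spin-polarization measurement scales as 1/(S·sqrt(N)), so sensitivity counts twice as much as counting rate.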

Relevance: 80.00%

Abstract:

Loading is important to maintain the balance of matrix turnover in the intervertebral disc (IVD). Daily diurnal cyclic loading assists in the transport of large soluble factors between the IVD and its surrounding circulation and applies direct and indirect stimuli to disc cells. Acute mechanical injury and accumulated overloading, however, can induce disc degeneration. Recently, more information has become available on how cyclic loading, especially axial compression and hydrostatic pressure, affects IVD cell biology. This review summarises recent studies on the response of the IVD and stem cells to applied cyclic compression and hydrostatic pressure. These studies investigate the possible role of loading in the initiation and progression of disc degeneration, as well as quantifying a physiological loading condition for the study of biological therapies for disc degeneration. Subsequently, a possible physiological/beneficial loading range is proposed. This physiological/beneficial loading could provide insight into how to design loading regimes in a specific system for the testing of various biological therapies, such as cell therapy, chemical therapy or tissue engineering constructs, to achieve a better final outcome. In addition, the parameter space of 'physiological' loading may also be an important factor for the differentiation of stem cells towards ideally 'discogenic' cells for tissue engineering purposes.

Relevance: 80.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while cylinder-to-cylinder EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
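Transport delays of the kind compensated above are commonly estimated by cross-correlating a reference signal with the lagged sensor signal and shifting the latter back by the best lag. The following is a hedged sketch of that generic idea, not the thesis's actual processing code.

```python
def estimate_delay(reference, delayed, max_lag):
    """Estimate a transport delay (in samples) between two signals by
    maximizing the cross-correlation over candidate lags 0..max_lag."""
    n = len(reference)
    best_lag, best_corr = 0, float('-inf')
    for lag in range(max_lag + 1):
        c = sum(reference[i] * delayed[i + lag] for i in range(n - max_lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag

def shift_back(signal, lag):
    """Align a lagged sensor signal with the reference by removing the lag."""
    return signal[lag:]
```

In practice the lag varies with exhaust flow rate, so it would be estimated per operating condition; first-order sensor dynamics would need an additional deconvolution step on top of the pure time shift.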

Relevance: 80.00%

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. It has been proposed, on the basis of dimensional modeling, that the exhaust gas recirculation flow rate was significantly underestimated and the volumetric efficiency overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy-duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the fuel-oxygen ratio estimated by the electronic control module was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen-ratio-limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and the exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine operating parameter model input space into a more fundamental, lower-dimensional space so that a nearest neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs was observed in the transformed space compared with the engine operating parameter space. This more uniform, compact model input space may explain how the nonparametric reduced dimensionality model could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers relative to the steady-state training data.
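The nearest neighbor prediction step can be sketched as follows. This is a generic illustration of the approach in a transformed input space, with toy data rather than engine data.

```python
def knn_predict(train_X, train_y, query, k=3):
    """Nearest-neighbour prediction in a (reduced) input space:
    average the targets of the k closest training points."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    return sum(y for _, y in ranked[:k]) / k
```

The quality of such a predictor depends almost entirely on the transformed space: if the reduced coordinates are physically fundamental (e.g. an oxygen ratio rather than raw actuator positions), steady-state neighbours remain meaningful for transient queries.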

Relevance: 80.00%

Abstract:

Using simulated climate data from the comprehensive coupled climate model IPSL CM4, we simulate the Greenland ice sheet (GrIS) during the Eemian interglacial with the three-dimensional ice sheet model SICOPOLIS. The Eemian is a period 126,000 yr before present (126 ka) with Arctic temperatures comparable to projections for the end of this century. In our simulation, the northeastern part of the GrIS is unstable and retreats significantly, despite moderate melt rates. This result is found to be robust to perturbations within a wide space of key ice sheet model parameters and to the choice of initial ice temperature, and it has been reproduced with climate forcing from a second coupled climate model, the CCSM3. It is shown that the northeast GrIS is the most vulnerable region. Even a small increase in melt removes many years of ice accumulation, producing a large mass imbalance and triggering the strong ice-elevation feedback. Unlike in the south and west, melting in the northeast is not compensated by high accumulation. The analogy with modern warming suggests that in coming decades, positive feedbacks could increase the rate of mass loss of the northeastern GrIS, exceeding the recently observed thinning rates in the south.
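The ice-elevation feedback invoked above can be caricatured in a few lines: as the surface lowers, it sits in warmer air, so melt increases, which lowers the surface further. The numbers below are illustrative toy values only, not SICOPOLIS parameters, and the toy has no stabilizing feedbacks, so growth is unbounded in the cold case.

```python
def ice_thickness(accum, melt0, lapse, h0=3000.0, dt=1.0, years=2000):
    """Toy ice-elevation feedback (explicit Euler, one column, m/yr units):
    melt grows as the surface lowers, melt = melt0 + lapse * (h0 - h).
    Runaway thinning occurs once melt exceeds accumulation."""
    h = h0
    for _ in range(int(years / dt)):
        melt = melt0 + lapse * (h0 - h)
        h = max(0.0, h + (accum - melt) * dt)
        if h == 0.0:          # ice sheet has collapsed
            break
    return h
```

The asymmetry in the abstract maps directly onto this toy: in the high-accumulation south and west the `accum` term wins, while in the dry northeast a small increase in `melt0` is enough to tip the column into the runaway branch.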

Relevance: 80.00%

Abstract:

A method is given for proving efficiency of the NPMLE directly linked to empirical process theory. The conditions in general are appropriate consistency of the NPMLE, differentiability of the model, differentiability of the parameter of interest, local convexity of the parameter space, and a Donsker class condition for the class of efficient influence functions obtained by varying the parameters. For the case that the model is linear in the parameter and the parameter space is convex, as with most nonparametric missing data models, we show that the method leads to an identity for the NPMLE which almost says that the NPMLE is efficient and provides us straightforwardly with a consistency and efficiency proof. This identity is extended to an almost linear class of models which contains biased sampling models. To illustrate, the method is applied to the univariate censoring model, random truncation models, the interval censoring case I model, the class of parametric models, and a class of semiparametric models.
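For the univariate censoring model mentioned above, the NPMLE of the survival function is the classical Kaplan-Meier product-limit estimator, which can be sketched directly:

```python
def kaplan_meier(times, events):
    """NPMLE of the survival function under right censoring: the
    Kaplan-Meier product-limit estimator.  events[i] is 1 if a failure
    was observed at times[i] and 0 if the observation was censored.
    Returns a list of (t, S(t)) at the observed failure times.
    (Ties between failures and censorings are handled failure-first.)"""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]   # all observations at time t
        d = sum(tied)                             # failures at time t
        if d:
            s *= 1.0 - d / at_risk
            out.append((t, s))
        at_risk -= len(tied)
        i += len(tied)
    return out
```

Each factor 1 - d/n multiplies in the conditional survival probability past one failure time, which is exactly the maximizer of the nonparametric likelihood; censored observations contribute only by shrinking the risk set.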