223 results for Prospecting -- Geophysical methods
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected over time during a saline tracer experiment. We use a Bayesian Markov chain Monte Carlo (MCMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian MCMC methodology allows us to explore the potential benefits of including the geophysical data in the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
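The core of an MCMC inversion like the one described above is a Metropolis-type accept/reject loop. The sketch below is a generic random-walk Metropolis sampler over a single parameter with a Gaussian data misfit; the toy linear "forward model" is a stand-in for illustration only, not the coupled hydrogeophysical forward models of the study.

```python
import math
import random

def log_likelihood(k, data, forward, sigma=0.1):
    """Gaussian data misfit between observations and forward-model predictions."""
    return -sum((d - forward(k, i)) ** 2 for i, d in enumerate(data)) / (2 * sigma ** 2)

def metropolis(data, forward, k0=1.0, steps=5000, step_size=0.1, seed=42):
    """Random-walk Metropolis sampler over a single uncertain parameter k."""
    rng = random.Random(seed)
    k = k0
    ll = log_likelihood(k, data, forward)
    samples = []
    for _ in range(steps):
        k_prop = k + rng.gauss(0.0, step_size)
        ll_prop = log_likelihood(k_prop, data, forward)
        # accept with probability min(1, exp(ll_prop - ll))
        if ll_prop >= ll or math.log(rng.random()) < ll_prop - ll:
            k, ll = k_prop, ll_prop
        samples.append(k)
    return samples

# toy linear "forward model": observation i responds as k * i
true_k = 2.0
data = [true_k * i for i in range(5)]
samples = metropolis(data, lambda k, i: k * i)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

After discarding a burn-in, the retained samples approximate the posterior of k; the same loop generalizes to the multi-parameter, facies-based case by replacing the scalar proposal with a vector one.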
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single alignment. This allows users to simultaneously run all their methods of choice without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302:205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34:1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a free, open-source package distributed under the GPL, and it is available either as a standalone package or as a web service from www.tcoffee.org.
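As a loose illustration of combining several alignments into a consensus: M-Coffee's actual consensus is computed with T-Coffee's consistency-based library, not a column vote, and alternative MSAs of the same sequences generally differ in length. Under the simplifying (and unrealistic) assumption that all input alignments share the same shape, a column-wise majority vote can be sketched:

```python
from collections import Counter

def majority_consensus(alignments):
    """Column-wise majority vote over several equal-shape alignments of the
    same sequences (a crude stand-in for consistency-based combination)."""
    n_rows = len(alignments[0])
    consensus = []
    for row in range(n_rows):
        length = len(alignments[0][row])
        chars = []
        for col in range(length):
            # count each method's vote for this row/column
            votes = Counter(a[row][col] for a in alignments)
            chars.append(votes.most_common(1)[0][0])
        consensus.append("".join(chars))
    return consensus

# three hypothetical alignments of the same two sequences
msa1 = ["AC-GT", "A--GT"]
msa2 = ["AC-GT", "AC-GT"]
msa3 = ["ACGT-", "A--GT"]
print(majority_consensus([msa1, msa2, msa3]))
```

The per-column vote counts also give a natural local agreement score, analogous in spirit to the per-position consistency estimate M-Coffee reports.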
Abstract:
Synthesis report. 1. Laboratory part. This first study describes the development and validation, according to international standards, of two techniques for measuring blood concentrations of voriconazole, a new broad-spectrum antifungal agent: 1) high-performance liquid chromatography and 2) a bioassay using a mutant Candida strain hypersensitive to voriconazole. This work also revealed substantial and unpredictable inter- and intra-individual variability of voriconazole blood concentrations despite the use of the doses recommended by the manufacturer. It was published in a peer-reviewed journal: "Variability of voriconazole plasma levels measured by new high-performance liquid chromatography and bioassay methods" by A. Pascual, V. Nieth, T. Calandra, J. Bille, S. Bolay, L.A. Decosterd, T. Buclin, P.A. Majcherczyk, D. Sanglard, O. Marchetti. Antimicrobial Agents and Chemotherapy, 2007; 51:137-43. 2. Clinical part. This second study prospectively evaluated the clinical impact of voriconazole blood concentrations on therapeutic efficacy and safety in patients with fungal infections. High blood concentrations were significantly associated with the occurrence of neurological toxicity (encephalopathy with confusion, hallucinations and myoclonus), and low blood concentrations with an insufficient response to antifungal treatment (persistence or progression of the clinical and radiological signs of infection). In the majority of cases, adjusting the voriconazole dose on the basis of the measured concentrations led to complete neurological recovery or to resolution of the infection, respectively. This work was published in a peer-reviewed journal: "Voriconazole Therapeutic Drug Monitoring in Patients with Invasive Mycoses Improves Efficacy and Safety Outcomes" by A. Pascual, T. Calandra, S. Bolay, T. Buclin, J. Bille, and O. Marchetti. Clinical Infectious Diseases, 2008 January 15; 46(2): 201-11. Both studies, jointly funded by an international grant from the Swiss Society for Infectious Diseases and the International Society for Infectious Diseases and by the Foundation for the Advancement of Medical Microbiology and Infectious Diseases (FAMMID, Lausanne), were carried out in the Infectious Diseases Service, Department of Medicine, CHUV, in close collaboration with the Division of Clinical Pharmacology, Department of Medicine, CHUV, and the Institute of Microbiology of the CHUV and the University of Lausanne.
Abstract:
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities; therefore, atlas-based segmentation is commonly used to locate it in the images. In this paper, we propose a scheme that allows both performing a comparison between different registration algorithms and evaluating their ability to locate the STN automatically. Using this scheme we can evaluate the error of the algorithms against the inter-expert variability, and we demonstrate that automatic STN localization is possible and as accurate as the methods currently used.
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface-wave-type phenomena. The differential equations of motion are based on Biot's theory of poroelasticity and solved with a pseudospectral approach, using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface-wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of the variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
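The horizontal Fourier differentiation used in pseudospectral schemes like the one above can be sketched in a few lines; this is a generic spectral derivative on a periodic grid, not the authors' poroelastic solver.

```python
import numpy as np

def fourier_derivative(f, dx):
    """Spectral first derivative of a periodic signal sampled at spacing dx:
    multiply by i*k in wavenumber space, then transform back."""
    n = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# the derivative of sin(x) on a periodic grid should match cos(x) to machine precision
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
df = fourier_derivative(np.sin(x), x[1] - x[0])
max_err = float(np.max(np.abs(df - np.cos(x))))
```

For smooth periodic fields the error is at round-off level, which is why spectral derivatives are attractive for wave propagation; the vertical Chebyshev operator follows the same multiply-in-transform-space idea on a non-uniform grid.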
Abstract:
The water content dynamics in the upper soil surface during evaporation is a key element in land-atmosphere exchanges. Previous experimental studies have suggested that the soil water content increases at depths of 5 to 15 cm below the soil surface during evaporation, while the layer in the immediate vicinity of the soil surface is drying. In this study, the dynamics of water content profiles exposed to solar radiative forcing was monitored at high temporal resolution using dielectric methods, both in the presence and absence of evaporation. A 4-d comparison of reported moisture content in coarse sand in covered and uncovered buckets was carried out using a commercial dielectric-based probe (70-MHz ECH2O-5TE, Decagon Devices, Pullman, WA) and the standard 1-GHz time domain reflectometry method. Both sensors reported a positive correlation between temperature and water content at the 5- to 10-cm depth, most pronounced in the morning during heating and in the afternoon during cooling. Such a positive correlation might have a physical origin induced by evaporation at the surface and redistribution due to liquid water fluxes resulting from the temperature-gradient dynamics within the sand profile at those depths. Our experimental data suggest that the combined effect of surface evaporation and temperature-gradient dynamics should be considered when analyzing experimental soil water profiles. Additional effects related to the frequency of operation and to protocols for temperature compensation of the dielectric sensors may also affect the probes' response during large temperature changes.
Abstract:
This paper presents a validation study of statistical unsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise levels and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process; this way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data, where a quantitative validation compares the methods' results with a ground truth estimated from manual segmentations by experts. The validity of the various classification methods, both in the labeling of the image and in the estimated tissue volumes, is assessed with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that the results on simulated data can also be extended to real data.
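The simplest model family assessed in such studies, intensity-only Gaussian classification with no spatial prior, can be sketched as follows; the tissue class names and intensity parameters below are invented for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Gaussian likelihood of intensity x under class model (mu, sigma)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify(intensities, classes):
    """Label each intensity with its maximum-likelihood Gaussian class.
    classes: dict name -> (mean, std). Purely intensity-based, hence the
    noise sensitivity compared with methods that add a spatial model."""
    labels = []
    for x in intensities:
        labels.append(max(classes, key=lambda c: gaussian_pdf(x, *classes[c])))
    return labels

# hypothetical intensity models for three tissue classes
classes = {"csf": (30.0, 8.0), "gm": (70.0, 8.0), "wm": (110.0, 8.0)}
print(classify([25.0, 72.0, 105.0], classes))  # ['csf', 'gm', 'wm']
```

Spatially regularized methods replace the independent per-voxel argmax with, e.g., a Markov random field prior over neighboring labels, which is what buys the robustness to noise and field inhomogeneities reported above.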
Abstract:
Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences of inappropriate actions and decisions, and of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazard, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems; population growth increases exposure, while changes in the level of development affect vulnerability. Given that each of these components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and give no information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness amongst policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 Tb, covering both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive global risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the pattern and trends of risk. It appears that vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on hazard. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat and provides an assessment of the remaining ice volume as well as scenarios of its possible evolution.

These results were presented to different audiences, including 160 governments. The results and the data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to other scales and issues, with good prospects for adaptation to other research areas. Risk characterization at the global level and the identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain limitations. However, it is clear that the level of development, and moreover unsustainable development, configures a large part of disaster risk, and that the dynamics of risk are primarily governed by global change.
Abstract:
Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. METHODS: This study was conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with (131)I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with (177)Lu-peptides; and case 3, hepatocellular carcinoma treated with (90)Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD) and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, D(VD), calculated assuming uniform density, was corrected for density, giving D(VDd). The average 3D-RD absorbed dose values, D(3DRD), were compared with D(VD) and D(VDd) using the relative difference Δ(VD/3DRD). At the voxel level, density-binned Δ(VD/3DRD) and Δ(VDd/3DRD) were plotted against density (ρ) and fitted with a linear regression. RESULTS: The D(VD) calculations showed good agreement with D(3DRD). Δ(VD/3DRD) was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the Δ(VD/3DRD) range was 0%-14% for cases 1 and 2, and -3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged Δ(VD/3DRD) and density ρ: case 1 (Δ = -0.56ρ + 0.62, R(2) = 0.93), case 2 (Δ = -0.91ρ + 0.96, R(2) = 0.99), and case 3 (Δ = -0.69ρ + 0.72, R(2) = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (Δ(VDd/3DRD) < 1.1%), although to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the Δ(VDd/3DRD) range decreased for all 3 clinical cases (case 1, -1% to 4%; case 2, -0.5% to 1.5%; and case 3, -1.5% to 2%). After correction, no linear relationship with density remained for cases 2 and 3; one persisted for case 1 (Δ = 0.41ρ - 0.38, R(2) = 0.88), although with a less pronounced slope. CONCLUSION: This study shows a small influence of TDH in the abdominal region for 3 representative clinical cases. A simple density-correction method was proposed and improved the agreement of the absorbed dose calculations when using our voxel S value implementation.
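The density-binned linear fits Δ = aρ + b with their R² values, as reported above, are a plain least-squares regression; a generic sketch follows, with synthetic bin values shaped like the case 2 fit (the data points themselves are made up for illustration).

```python
import numpy as np

def fit_delta_vs_density(rho, delta):
    """Least-squares fit delta = slope * rho + intercept, returning R^2 as well."""
    slope, intercept = np.polyfit(rho, delta, 1)
    pred = slope * rho + intercept
    ss_res = np.sum((delta - pred) ** 2)  # residual sum of squares
    ss_tot = np.sum((delta - np.mean(delta)) ** 2)  # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot

# synthetic bin-averaged relative differences following delta = -0.91*rho + 0.96
rho = np.array([0.3, 0.9, 1.0, 1.05, 1.4])
delta = -0.91 * rho + 0.96
slope, intercept, r2 = fit_delta_vs_density(rho, delta)
```

A near-unity R² with a clearly nonzero slope, as in the uncorrected cases, indicates a systematic density-dependent bias that a voxelwise density correction should largely remove.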
Abstract:
The spatial resolution visualized with hydrological models and the conceptualized images of subsurface hydrological processes often exceed the resolution of the data collected with classical instrumentation at the field scale. In recent years, it has become increasingly possible to narrow this inherent gap to point-like field data through the application of hydrogeophysical methods at the field scale. Among all common geophysical exploration techniques, electric and electromagnetic methods arguably have the greatest sensitivity to hydrologically relevant parameters. Of particular interest in this context are induced polarization (IP) measurements, which essentially constrain the capacity of a probed subsurface region to store an electrical charge. In the absence of metallic conductors the IP response is largely driven by current conduction along the grain surfaces. This offers the prospect of linking such measurements to the characteristics of the solid-fluid interface and thus, at least in unconsolidated sediments, should allow for first-order estimates of the permeability structure.

While the IP effect is well explored through laboratory experiments and in part verified through field data for clay-rich environments, the applicability of IP-based characterizations to clay-poor aquifers is not clear. For example, polarization mechanisms like membrane polarization are not applicable in the rather wide pore systems of clay-free sands, and the direct transposition of Schwarz' theory, which relates the polarization of spheres to the relaxation mechanism of polarized cells, to complex natural sediments yields ambiguous results.

In order to improve our understanding of the structural origins of IP signals in such environments, as well as their correlation with pertinent hydrological parameters, various laboratory measurements were conducted. We consider saturated quartz samples with a grain size spectrum varying from fine sand to fine gravel, that is, grain diameters between 0.09 and 5.6 mm, as well as corresponding mixtures which can be regarded as proxies for widespread alluvial deposits. The pore space characteristics are altered by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples, while keeping any electrochemical variability during the measurements as small as possible. The results do not follow simple assumptions about relationships to single parameters such as grain size. It was found that the complexity of naturally occurring media is not yet sufficiently represented when modelling IP. At the same time, a simple correlation to permeability was found to be strong and consistent. Hence, adaptations aimed at better representing the geo-structure of natural porous media were applied to the simplified model space used in Schwarz' theory of the IP effect. The resulting semi-empirical relationship was found to more accurately predict the IP effect and its relation to grain size and permeability. If combined with recent findings about the effect of pore-fluid electrochemistry, together with advanced complex resistivity tomography, these results will allow us to picture diverse aspects of the subsurface with relative certainty. Within the framework of single measurement campaigns, hydrologists can then collect data with information about both the geo-structure and the geo-chemistry of the subsurface.
However, additional research efforts will be necessary to further improve the understanding of the physical origins of the IP effect and to minimize the potential for misinterpretation.
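In Schwarz-type models such as the one referenced in the abstract above, the characteristic relaxation time scales with the square of the grain radius, τ = a²/(2D), where D is the diffusion coefficient of counterions along the grain surface. A back-of-the-envelope sketch for the experimental grain-size range follows; the D value is an assumed order of magnitude, and the semi-empirical adaptation developed in the work is not reproduced here.

```python
def schwarz_relaxation_time(diameter_m, d_ion=1.3e-9):
    """Schwarz-model relaxation time tau = a^2 / (2 D) for grain radius a.
    d_ion is an assumed ionic surface diffusion coefficient in m^2/s."""
    a = diameter_m / 2.0
    return a ** 2 / (2.0 * d_ion)

# grain-diameter range used in the experiments: 0.09 mm to 5.6 mm
for d_mm in (0.09, 5.6):
    tau = schwarz_relaxation_time(d_mm * 1e-3)
    print(f"d = {d_mm} mm -> tau ~ {tau:.3g} s")
```

The quadratic scaling is why a broad grain-size spectrum spreads the IP response over several decades of relaxation time, and why a single-grain-size model maps poorly onto natural mixed sediments.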
Abstract:
Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-forward prediction of periodically correlated time series are examined: an autoregression model, and an artificial neural network with one hidden neuron layer and a mechanism for adapting the network parameters in a moving time window. The two were compared in terms of efficiency. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
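A one-step-forward autoregressive forecast of the kind compared above can be sketched generically: fit AR(p) coefficients by least squares on lagged values, then predict the next value from the last p observations. The toy series below is invented for illustration (a real application would fit separate models per calendar month to capture the periodic correlation).

```python
import numpy as np

def ar_fit(series, p):
    """Least-squares AR(p) coefficients: x_t ~ sum_j c_j * x_{t-j}."""
    X = np.array([series[i - p:i][::-1] for i in range(p, len(series))])
    y = np.array(series[p:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_predict(series, coeffs):
    """One-step-forward forecast from the last p observed values."""
    p = len(coeffs)
    return float(np.dot(coeffs, np.array(series[-p:][::-1])))

# toy series following x_t = 0.6 x_{t-1} + 0.3 x_{t-2}
series = [1.0, 2.0]
for _ in range(50):
    series.append(0.6 * series[-1] + 0.3 * series[-2])
coeffs = ar_fit(series, 2)
next_val = ar_predict(series, coeffs)
```

With only p coefficients to estimate, such a model needs far less data and tuning than a neural network with an adaptive hidden layer, which is consistent with the comparison's outcome for monthly discharge series.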
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that are reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework to develop computational techniques for modeling different aspects of the regulatory networks such as steady-state behavior, stochasticity, and gene perturbation experiments.
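A minimal synchronous Boolean-network sketch in the spirit of the formalism described above, including enumeration of steady states (fixed points); the three-gene network and its rules are invented for illustration.

```python
from itertools import product

def step(state, rules):
    """Synchronously update a Boolean network by one time step.
    state: dict gene -> bool; rules: dict gene -> function(state) -> bool."""
    return {gene: rule(state) for gene, rule in rules.items()}

def steady_states(rules, genes):
    """Enumerate fixed points by exhaustive search over all 2^n states
    (feasible only for small n; large networks need symbolic techniques)."""
    fixed = []
    for bits in product([False, True], repeat=len(genes)):
        state = dict(zip(genes, bits))
        if step(state, rules) == state:
            fixed.append(state)
    return fixed

# hypothetical 3-gene positive feedback loop: A activates B, B activates C, C activates A
rules = {
    "A": lambda s: s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
print(steady_states(rules, ["A", "B", "C"]))  # all-off and all-on fixed points
```

Gene perturbation experiments can be emulated by pinning one gene's rule to a constant (knockout: always False; overexpression: always True) and re-running the enumeration, mirroring the use of stuck-at faults in digital circuit testing.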