966 results for Limited Sampling Strategies
Abstract:
This research was carried out in the clinical pharmacology laboratory of the Centre Hospitalier Universitaire Sainte-Justine in Montreal. It is a retrospective study based on therapeutic drug monitoring of tacrolimus prescribed to children after liver transplantation. Monitoring is necessary because tacrolimus shows substantial inter- and intra-individual pharmacokinetic variability and a very narrow therapeutic index. Dose individualization is currently based on measuring the trough blood concentration of the drug (C0), but recent studies show that this measurement does not accurately reflect tacrolimus exposure in children. Exposure is best reflected by the area under the concentration-time curve (AUC). Measuring the AUC, however, requires multiple concentration measurements throughout the interval between two doses (12 hours for tacrolimus), which is time-consuming, expensive and impractical in outpatients. New methods using a limited number of samples, known as limited sampling strategies (LSS), have therefore been developed to best predict the AUC. Most tacrolimus LSS have been developed and validated in adult transplant recipients, and they cannot be applied directly to pediatric recipients because the drug's pharmacokinetic parameters differ substantially between the two populations. The aim of this work was therefore to develop and validate, for the first time, LSS in pediatric liver transplant recipients. To this end, 36 pharmacokinetic profiles from 28 liver transplant patients aged 0.4 to 18.5 years were analyzed. All profiles were obtained at the Centre Hospitalier Universitaire Sainte-Justine between January 2007 and January 2009.
LSS involving at most four concentration measurements were developed using multiple regression analysis. Among all the models obtained, five were selected on the basis of predefined criteria and then validated using the method described by Sheiner and Beal. The results show that these five models can predict the tacrolimus AUC with a clinically acceptable precision of ±15%, whereas C0 showed the weakest correlation with the AUC. In conclusion, this study confirms that C0 does not reliably predict tacrolimus exposure in our pediatric population, unlike the LSS analyzed, which offer a practical and reliable method. Moreover, by providing an accurate and simplified estimate of the full tacrolimus AUC, these LSS open the door to future prospective studies aimed at better defining the drug's target AUC and at determining whether AUC-based monitoring is more effective and safer than C0-based monitoring.
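The core of a regression-based LSS is simple: fit the full AUC against a few concentration time points by multiple linear regression, then judge each model by its relative prediction error. A minimal sketch with synthetic data (the time points, coefficients and noise level are assumptions, not the study's actual model):

```python
import numpy as np

# Hypothetical LSS: predict the full 12-h AUC from two sampled
# concentrations (say C1 and C3, i.e. 1 h and 3 h post-dose).
rng = np.random.default_rng(0)

n = 36                                  # pharmacokinetic profiles
c1 = rng.uniform(5.0, 25.0, n)          # concentration at 1 h (ng/mL)
c3 = rng.uniform(4.0, 20.0, n)          # concentration at 3 h (ng/mL)
# Synthetic "true" AUC loosely driven by the two samples plus noise.
auc = 10.0 + 3.2 * c1 + 5.1 * c3 + rng.normal(0.0, 5.0, n)

# Fit AUC = b0 + b1*C1 + b2*C3 by ordinary least squares.
X = np.column_stack([np.ones(n), c1, c3])
coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ coef

# Relative prediction error per profile, the quantity checked against a
# clinically acceptable bound such as +/- 15%.
pct_err = 100.0 * (pred - auc) / auc
print(f"mean |error| = {np.abs(pct_err).mean():.1f}%")
```

In practice candidate models with different time-point combinations are compared on such error metrics before validation on independent profiles.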
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. A growing number of studies support the use of the area under the concentration-time curve (AUC) as a biomarker for therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation. However, for reasons inherent to how the AUC is calculated, its use in the clinic is impractical. Limited sampling strategies based on regression approaches (R-LSS) or Bayesian approaches (B-LSS) are practical alternatives for satisfactory AUC estimation. For these methodologies to be applied effectively, however, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling duration. Particular attention should also be paid to their proper development and validation. It is also important to note that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS, yet to date this impact has not been studied. This doctoral thesis addresses these issues in order to allow accurate and practical estimation of the AUC. The studies were conducted in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression and population pharmacokinetic (Pop-PK) approaches were used constructively to develop and properly validate LSS. Next, several Pop-PK models were evaluated with their intended use for AUC estimation in mind.
The performance of B-LSS targeting different versions of the AUC was also studied. Finally, the impact of deviations between actual blood sampling times and the planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach covering diverse, realistic scenarios of potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, involving four sampling points or fewer obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time. However, the final model, notably the one including covariates, did not improve B-LSS performance compared with the structural models (without covariates). In addition, we showed that B-LSS perform better against the AUC derived from simulated concentrations that exclude residual errors, which we termed the "underlying AUC", than against the observed AUC calculated directly from measured concentrations. Finally, our results demonstrated that irregularity in blood sampling times has a substantial impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved. We also showed that sampling-time errors committed when the concentration is changing rapidly are those that most degrade the predictive power of R-LSS. More interestingly, we highlighted that even though different R-LSS may perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely.
Indeed, properly accounting for the impact of these errors can lead to more reliable selection and use of R-LSS. Through an in-depth investigation of the various aspects underlying limited sampling strategies, this thesis provides notable methodological improvements and proposes new ways to ensure their reliable and informed use, while promoting their fit with clinical practice.
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. 
Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
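The subsampling experiment behind these conclusions can be sketched with a synthetic "hourly" series (the seasonal and diel structure below is an assumption standing in for the paper's river records):

```python
import numpy as np

# Subsample an hourly year-long series to weekly and monthly schedules
# and compare the precision of the estimated annual mean.
rng = np.random.default_rng(2)
hours = 365 * 24
t = np.arange(hours)
# Seasonal cycle + diel cycle + noise, loosely mimicking dissolved oxygen.
series = (10.0
          + 2.0 * np.sin(2 * np.pi * t / (365 * 24))
          + 1.5 * np.sin(2 * np.pi * t / 24)
          + rng.normal(0.0, 1.0, hours))

def ci_width(step, reps=500):
    """Average 95% CI width of the annual mean for a fixed sampling
    interval (in hours), over random starting offsets."""
    widths = []
    for _ in range(reps):
        start = int(rng.integers(0, step))
        sample = series[start::step]
        sem = sample.std(ddof=1) / np.sqrt(sample.size)
        widths.append(2.0 * 1.96 * sem)
    return float(np.mean(widths))

weekly = ci_width(24 * 7)     # ~52 samples per year
monthly = ci_width(24 * 30)   # ~12 samples per year
print(weekly < monthly)       # weekly sampling gives narrower intervals
```

The same resampling machinery, applied to a high percentile rather than the mean, reproduces the downward bias the paper reports for percentile-based standards under sparse sampling.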
Abstract:
We discuss the development and performance of a low-power sensor node (hardware, software and algorithms) that autonomously controls the sampling interval of a suite of sensors based on local state estimates and future predictions of water flow. The problem is motivated by the need to accurately reconstruct abrupt state changes in urban watersheds and stormwater systems. Presently, the detection of these events is limited by the temporal resolution of sensor data. It is often infeasible, however, to increase measurement frequency due to energy and sampling constraints. This is particularly true for real-time water quality measurements, where sampling frequency is limited by reagent availability, sensor power consumption, and, in the case of automated samplers, the number of available sample containers. These constraints pose a significant barrier to the ubiquitous and cost effective instrumentation of large hydraulic and hydrologic systems. Each of our sensor nodes is equipped with a low-power microcontroller and a wireless module to take advantage of urban cellular coverage. The node persistently updates a local, embedded model of flow conditions while IP-connectivity permits each node to continually query public weather servers for hourly precipitation forecasts. The sampling frequency is then adjusted to increase the likelihood of capturing abrupt changes in a sensor signal, such as the rise in the hydrograph – an event that is often difficult to capture through traditional sampling techniques. Our architecture forms an embedded processing chain, leveraging local computational resources to assess uncertainty by analyzing data as it is collected. A network is presently being deployed in an urban watershed in Michigan and initial results indicate that the system accurately reconstructs signals of interest while significantly reducing energy consumption and the use of sampling resources. 
We also expand our analysis by discussing the role of this approach for the efficient real-time measurement of stormwater systems.
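The control logic described above reduces, at its core, to choosing the next sampling interval from local state and a forecast. A minimal sketch with hypothetical thresholds and intervals (the node's actual controller, embedded flow model, and weather API are not specified in the abstract):

```python
def next_interval_s(rain_prob: float, level_trend: float) -> int:
    """Pick the next sampling interval (seconds).

    rain_prob   -- forecast probability of precipitation (0..1), e.g. from
                   an hourly public weather feed (hypothetical input)
    level_trend -- recent rate of change of water level (m/h) from the
                   node's embedded flow estimate (hypothetical input)
    """
    if rain_prob > 0.7 or abs(level_trend) > 0.5:
        # Storm likely or hydrograph already rising: sample every minute
        # to capture the abrupt change.
        return 60
    if rain_prob > 0.3:
        # Elevated risk: moderate rate.
        return 300
    # Dry weather: sample sparsely to conserve power, reagents and bottles.
    return 1800

print(next_interval_s(0.85, 0.1))  # → 60 (storm likely)
```

Tightening the interval only when an abrupt change is likely is what lets the node cut energy and reagent use without missing the rise of the hydrograph.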
Abstract:
Strategies for sampling sediment bacteria were examined in intensive shrimp, Penaeus monodon (Fabricius), ponds in tropical Australia. Stratified sampling of bacteria at the end of the production season showed that the pond centre, containing flocculated sludge, had significantly higher bacterial counts (15.5 × 10⁹ g⁻¹ dry weight) than the pond periphery (8.1 × 10⁹ g⁻¹ dry weight), where the action of aerators had swept the pond floor. The variation in bacterial counts between these two zones within a pond was higher than that between sites within each zone or between ponds. Therefore, sampling effort should be focused within these zones: for example, sampling two ponds at six locations within each of the two zones resulted in a coefficient of variation of approximately 5%. Bacterial numbers in the sediment were highly correlated with sediment grain size, probably because eroded soil particles and organic waste both accumulated in the centre of the pond. Despite high inputs of organic matter added to the ponds, principally as pelleted feeds, the mean bacterial numbers and nutrient concentrations (i.e. organic carbon, nitrogen and phosphorus) in the sediment were similar to those found in mangrove sediments. This suggests that bacteria rapidly remineralize particulates into soluble compounds. Bacterial numbers were highly correlated with organic carbon and total Kjeldahl nitrogen in the sediment, suggesting that these were limiting factors for bacterial growth.
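The precision of such a stratified design can be checked by simulation. The sketch below draws synthetic counts for the two zones (means scaled to the reported values, in units of 10⁹ cells per g dry weight; the spreads are assumptions) and estimates the coefficient of variation of the survey mean:

```python
import numpy as np

# Repeat the stratified survey many times and measure how much the
# resulting mean count varies between surveys.
rng = np.random.default_rng(3)

def survey_mean(n_ponds=2, n_per_zone=6):
    """Mean count from sampling each pond at n_per_zone sites per zone."""
    centre = rng.normal(15.5, 2.0, (n_ponds, n_per_zone))     # sludge zone
    periphery = rng.normal(8.1, 1.5, (n_ponds, n_per_zone))   # swept zone
    return float(np.concatenate([centre.ravel(), periphery.ravel()]).mean())

estimates = np.array([survey_mean() for _ in range(1000)])
cv = 100.0 * estimates.std(ddof=1) / estimates.mean()
print(f"CV of survey mean ~ {cv:.1f}%")  # a few percent under these assumptions
```

Allocating samples within the two zones, rather than at random over the pond, is what keeps the between-zone difference from inflating the variance of the estimate.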
Abstract:
The role of land cover change as a significant component of global change has become increasingly recognized in recent decades. Large databases measuring land cover change, and the data which can potentially be used to explain the observed changes, are also becoming more commonly available. When developing statistical models to investigate observed changes, it is important to be aware that the chosen sampling strategy and modelling techniques can influence results. We present a comparison of three sampling strategies and two forms of grouped logistic regression model (multinomial and ordinal) in the investigation of patterns of successional change after agricultural land abandonment in Switzerland. Results indicated that both ordinal and nominal transitional change occurs in the landscape, and that different sampling regimes and modelling techniques, used as investigative tools, yield different results. Synthesis and applications. Our multimodel inference successfully identified a set of consistently selected indicators of land cover change that can be used to predict further change, including annual average temperature, the number of already overgrown neighbouring areas of land and distance to historically destructive avalanche sites. This allows for more reliable decision making and planning with respect to landscape management. Although both model approaches gave similar results, ordinal regression yielded more parsimonious models that identified the important predictors of land cover change more efficiently. This approach is therefore favourable where the land cover change pattern can be interpreted as an ordinal process; otherwise, multinomial logistic regression is a viable alternative.
Abstract:
Precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by high species diversity and complex spatial variability, is a difficult task, and few basic studies exist to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area and samples were collected in July 1989. The total amount of litter and its total nutrient amounts showed high spatially independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for estimating litter amounts and the amounts of nutrients in litter should therefore differ from that for nutrient composition. For estimating litter amounts and the amounts of nutrients in litter (related to quantity), a large number of randomly distributed determinations is needed. In contrast, for estimating litter nutrient composition (related to quality), a smaller number of spatially located samples should be analyzed. The appropriate sampling design for soil attributes differed with depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but along transects or grids with longer distances between sampling points over the entire area.
Composite soil samples would not provide a complete understanding of the relation between soil properties and surface dynamic processes or landscape aspects. The precise distribution of P was difficult to estimate.
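Spatial dependence of the kind described here is usually quantified with an empirical semivariogram: the half mean squared difference between samples as a function of their separation, which rises with lag until the spatial correlation dies out. A sketch on a synthetic transect (the field below is simulated; the study's actual data are not reproduced):

```python
import numpy as np

# Build a spatially autocorrelated "soil attribute" along a transect by
# smoothing white noise, then compute its empirical semivariogram.
rng = np.random.default_rng(4)

x = np.arange(0, 200, 5.0)                  # sampling points every 5 m
noise = rng.normal(0.0, 1.0, x.size + 10)
z = np.convolve(noise, np.ones(10) / 10, mode="valid")[: x.size]

def semivariance(z, lag_steps):
    """gamma(h) = 0.5 * mean((z(x+h) - z(x))^2) on a regular transect."""
    d = z[lag_steps:] - z[:-lag_steps]
    return float(0.5 * np.mean(d ** 2))

lags = [1, 2, 4, 8, 16]                     # i.e. 5, 10, 20, 40, 80 m
gamma = [semivariance(z, h) for h in lags]
print(gamma)  # rises with lag at short distances, then levels off (sill)
```

The lag at which the curve levels off (the range) is what determines whether a 5-10 m surface transect or a coarser subsurface grid captures the variability.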
Abstract:
Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R² > 0.95, bias < 1.5%, precision between 3.1 and 8.3%) by LSS models based on two sampling times. Validation tests indicate that the most informative 2-point LSS models developed for one formulation provide good estimates (R² > 0.85) of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R² > 0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R² > 0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
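A "jack-knife" validation of a 2-point LSS is a leave-one-out loop: each subject is held out in turn, the model is refit on the rest, and the held-out AUC is predicted. A sketch on synthetic stand-in data (sampling times, coefficients and noise are assumptions):

```python
import numpy as np

# Leave-one-out validation of a two-point regression LSS for the AUC.
rng = np.random.default_rng(5)

n = 12                                   # volunteers
c_a = rng.uniform(8.0, 20.0, n)          # concentration at sampling time A
c_b = rng.uniform(3.0, 10.0, n)          # concentration at sampling time B
auc = 5.0 + 4.0 * c_a + 9.0 * c_b + rng.normal(0.0, 3.0, n)

pred = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i             # drop subject i, refit on the rest
    X = np.column_stack([np.ones(int(keep.sum())), c_a[keep], c_b[keep]])
    coef, *_ = np.linalg.lstsq(X, auc[keep], rcond=None)
    pred[i] = coef @ np.array([1.0, c_a[i], c_b[i]])

# Summary metrics of the kind reported above.
bias = 100.0 * np.mean((pred - auc) / auc)           # mean percentage error
precision = 100.0 * np.mean(np.abs(pred - auc) / auc)
r2 = np.corrcoef(pred, auc)[0, 1] ** 2
print(f"R2 = {r2:.3f}, bias = {bias:.2f}%, precision = {precision:.2f}%")
```

Because every prediction is made on a subject the model never saw, the resulting R², bias and precision are honest estimates of out-of-sample performance despite the small N.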
Abstract:
Phylogenetic methods hold great promise for the reconstruction of the transition from precursor to modern flora and the identification of underlying factors which drive the process. The phylogenetic methods presently used to address the question of the origin of the Cape flora of South Africa are considered here. The sampling requirements of each of these methods, which include dating of diversifications using calibrated molecular trees, sister pair comparisons, lineage-through-time plots and biogeographical optimizations, are reviewed. Sampling of genes, genomes and species is considered. Although more higher-level studies and increased sampling are required for robust interpretation, it is clear that much progress has already been made. It is argued that, despite the remarkable richness of the flora, the Cape flora is a valuable model system for demonstrating the utility of phylogenetic methods in determining the history of a modern flora.
Abstract:
The impact of initial sample distribution on separation and focusing of analytes in a pH 3–11 gradient formed by 101 biprotic carrier ampholytes under concomitant electroosmotic displacement was studied by dynamic high-resolution computer simulation. Data obtained with the analytes applied mixed with the carrier ampholytes (as is customarily done), as a short zone within the initial carrier ampholyte zone, sandwiched between zones of carrier ampholytes, or introduced before or after the initial carrier ampholyte zone were compared. With sampling as a short zone within or adjacent to the carrier ampholytes, separation and focusing of analytes is shown to proceed as a cationic, anionic, or mixed process, and separation of the analytes is predicted to be much faster than the separation of the carrier components. Thus, after the initial separation, analytes continue to separate and eventually reach their focusing locations. This differs from the double-peak approach to equilibrium that takes place when analytes and carrier ampholytes are applied as a homogeneous mixture. Simulation data reveal that sample application between two zones of carrier ampholytes results in the formation of a pH gradient disturbance, as the concentration of the carrier ampholytes within the fluid element initially occupied by the sample will be lower than in the other parts of the gradient. As a consequence, the properties of this region are sample matrix dependent, the pH gradient is flatter, and the region is likely to represent a conductance gap (hot spot). Simulation data suggest that a sample placed at the anodic side or at the anodic end of the initial carrier ampholyte zone are the favorable configurations for capillary isoelectric focusing with electroosmotic zone mobilization.
Abstract:
Fundación Ciudad de la Energía (CIUDEN) is carrying out a project of geological storage of CO2, in which CO2 injection tests are planned in saline aquifers at a depth of 1500 m for scientific objectives and project demonstration. Before any CO2 is stored, it is necessary to determine the baseline flux of CO2 in order to detect potential leakage during injection and post-injection monitoring. In November 2009, diffuse flux measurements of CO2 using an accumulation chamber were made in the area selected by CIUDEN for geological storage, located in Hontomin, province of Burgos (Spain). This paper presents the tests carried out to establish the optimum sampling methodology and the geostatistical analyses performed to determine the range, with which future field campaigns will be planned.