992 results for VARIABLE SAMPLING INTERVAL


Relevance: 30.00%

Abstract:

OBJECTIVES: Etravirine (ETV) is metabolized by cytochrome P450 (CYP) 3A, 2C9, and 2C19. Metabolites are glucuronidated by uridine diphosphate glucuronosyltransferases (UGT). To identify the potential impact of genetic and non-genetic factors involved in ETV metabolism, we carried out a two-step pharmacogenetics-based population pharmacokinetic study in HIV-1-infected individuals. MATERIALS AND METHODS: The study population included 144 individuals contributing 289 ETV plasma concentrations and four individuals contributing 23 ETV plasma concentrations collected in a rich sampling design. Genetic variants [n=125 single-nucleotide polymorphisms (SNPs)] in 34 genes with a predicted role in ETV metabolism were selected. A first-step population pharmacokinetic model included non-genetic and known genetic factors (seven SNPs in CYP2C, one SNP in CYP3A5) as covariates. Post-hoc individual ETV clearance (CL) was used in a second (discovery) step, in which the effect of the remaining 98 SNPs in CYP3A, P450 cytochrome oxidoreductase (POR), nuclear receptor genes, and UGTs was investigated. RESULTS: A one-compartment model with zero-order absorption best characterized ETV pharmacokinetics. The average ETV CL was 41 l/h (CV 51.1%), the volume of distribution was 1325 l, and the mean absorption time was 1.2 h. The administration of darunavir/ritonavir or tenofovir was the only non-genetic covariate influencing ETV CL significantly, resulting in a 40% [95% confidence interval (CI): 13-69%] and a 42% (95% CI: 17-68%) increase in ETV CL, respectively. Carriers of rs4244285 (CYP2C19*2) had 23% (95% CI: 8-38%) lower ETV CL. Co-administered antiretroviral agents and genetic factors explained 16% of the variance in ETV concentrations. None of the SNPs in the discovery step influenced ETV CL. CONCLUSION: ETV concentrations are highly variable, and co-administered antiretroviral agents and genetic factors explained only a modest part of the interindividual variability in ETV elimination. Opposing effects of interacting drugs effectively abrogate genetic influences on ETV CL, and vice versa.
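The reported structural model can be sketched numerically. Below is a minimal one-compartment, zero-order-absorption simulation using the typical values given in the abstract (CL = 41 l/h, V = 1325 l); the 2.4 h input duration is inferred from the 1.2 h mean absorption time (MAT = Tk0/2 for a zero-order input), and the 200 mg dose is an illustrative assumption, not a value from the study.

```python
import math

def etv_concentration(t, dose_mg=200.0, cl=41.0, v=1325.0, tk0=2.4):
    """Plasma concentration (mg/L) at time t (h) after a single dose,
    for a one-compartment model with zero-order absorption.

    cl: clearance (L/h); v: volume of distribution (L);
    tk0: duration of the zero-order input (h)."""
    k = cl / v                 # first-order elimination rate constant (1/h)
    r0 = dose_mg / tk0         # zero-order input rate (mg/h)
    if t <= 0:
        return 0.0
    if t <= tk0:
        # build-up while the zero-order input is running
        return (r0 / cl) * (1.0 - math.exp(-k * t))
    # mono-exponential decline after the input ends
    c_end = (r0 / cl) * (1.0 - math.exp(-k * tk0))
    return c_end * math.exp(-k * (t - tk0))
```

With these parameters the concentration rises until the end of the input at 2.4 h and then declines mono-exponentially.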

Relevance: 30.00%

Abstract:

Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they assess the probability of getting it (De Paulo, 1982, cited in Flynn & Lake, 2008) and tend to estimate one that is actually lower than the real chance, so they may not even consider it worth asking. Existing explanations attribute this phenomenon to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying "yes" while ignoring that the potential helper must also take into account the social cost of saying "no". Indeed, especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that it might be more costly to refuse a help request than to accept it. A similar effect has been observed in estimates of other people's trustworthiness: Fetchenhauer and Dunning (2010) showed that people tend to underestimate it as well. This bias was reduced when, instead of asymmetric feedback (received only when deciding to trust the other person), symmetric feedback (always received) was provided. The same account could apply to help seeking, as people only receive feedback when they actually make a request, but not otherwise. Fazio, Shook, and Eiser (2004) studied a mechanism that could reinforce these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from "information gain being contingent on approach behavior" (p. 293), which can be identified with what Fetchenhauer and Dunning call asymmetric feedback, and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones, which they attributed to a negativity bias that "weights resemblance to a known negative more heavily than resemblance to a positive" (p. 300). Applied to help-seeking scenarios, this would mean that when facing an unknown situation, people tend to generalize and infer that a negative outcome is more likely than a positive one; together with the mechanisms above, this inclines people to expect a "no" when requesting help. Denrell and Le Mens (2011) offer a different perspective on judgment biases in general. They depart from the classical inappropriate-information-processing account (described, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of adaptive sampling. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves from experiences that once yielded negative outcomes. However, it also prevents them from giving those experiences a second chance and obtaining an updated outcome, which might turn out positive, more positive than before, or simply regress to the mean. As Denrell and Le Mens (2011) explain, that makes sense: if you go to a restaurant and do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a "no", we stop asking.
Here, we provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others will comply with our direct help requests. First, we develop and explain a model that represents the theory; we then test it empirically by means of experiments and elaborate on the analysis of the results.
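The kind of model we have in mind can be previewed with a minimal simulation (all parameter values below are illustrative, not fitted to data): agents only observe an outcome when they ask, and they stop asking once their running estimate of compliance falls below a threshold, so early refusals freeze pessimistic estimates.

```python
import random

def simulate_adaptive_sampling(true_p=0.7, n_agents=2000, n_rounds=50,
                               threshold=0.5, seed=42):
    """Each agent repeatedly decides whether to ask for help.
    An outcome (yes/no) is observed only when the agent asks, and the
    agent keeps asking only while its running estimate of the compliance
    rate stays at or above `threshold`.  Returns the mean final estimate
    across agents."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_agents):
        yes, asks = 0, 0
        estimate = 1.0   # start optimistic, so everyone asks at least once
        for _ in range(n_rounds):
            if estimate < threshold:
                break            # an early "no" stops further sampling
            asks += 1
            if rng.random() < true_p:
                yes += 1
            estimate = yes / asks   # updated only on rounds where we asked
        total += estimate
    return total / n_agents
```

The mean final estimate lands below the true compliance rate: unlucky early refusals lock in low estimates, while lucky streaks keep being sampled and corrected downward toward the truth — underestimation without any faulty cost computation.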

Relevance: 30.00%

Abstract:

Background: As imatinib pharmacokinetics are highly variable, plasma levels differ largely between patients under the same dosage. Retrospective studies in chronic myeloid leukemia (CML) patients showed significant correlations between low levels and suboptimal response, and between high levels and poor tolerability. Monitoring of plasma levels is thus increasingly advised, targeting trough concentrations of 1000 μg/L and above. Objectives: Our study was launched to assess the clinical usefulness of systematic imatinib TDM in CML patients. The present preliminary evaluation questions the appropriateness of dosage adjustment following plasma level measurement to reach the recommended trough level, while allowing an interval of 4-24 h after the last drug intake for blood sampling. Methods: Initial blood samples from the first 9 patients in the intervention arm were obtained 4-25 h after the last dose. Trough levels in 7 patients were predicted to deviate significantly from the target (6 <750 μg/L, and 1 >1500 μg/L with poor tolerance), based on a Bayesian approach using a population pharmacokinetic model. Individual dosage adjustments were implemented in 5 patients, who had a control measurement 1-4 weeks after the dosage change. Predicted trough levels were compared with the previous model-based extrapolations. Results: Before dosage adjustment, observed concentrations extrapolated to trough ranged from 359 to 1832 μg/L (median 710; mean 804, CV 53%) in the 9 patients. After dosage adjustment they were predicted to lie between 720 and 1090 μg/L (median 878; mean 872, CV 13%). Observed levels of the 5 recheck measurements extrapolated to trough actually ranged from 710 to 1069 μg/L (median 1015; mean 950, CV 16%) and differed by 21 to 241 μg/L from the model-based predictions (median 175; mean 157, CV 52%). Differences between observed and predicted trough levels were larger when intervals between the last drug intake and sampling were very short (~4 h).
Conclusion: These preliminary results suggest that TDM of imatinib using a Bayesian interpretation is able to bring trough levels closer to 1000 μg/L (with CV decreasing from 53% to 16%). While this may simplify blood collection in daily practice, as samples do not have to be drawn exactly at trough, the largest possible interval after the last drug intake nevertheless remains preferable. These results encourage the evaluation of the clinical benefit of a routine TDM intervention in CML patients, which is the aim of the randomized Swiss I-COME study.
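The Bayesian machinery is not reproduced here, but the extrapolation-to-trough step can be sketched with a naive mono-exponential decline. The ~18 h half-life is a literature-typical figure assumed for illustration, and the proportional dose rescaling assumes linear kinetics — both simplifications relative to the population model used in the study.

```python
import math

def extrapolate_to_trough(c_obs, t_obs, tau=24.0, half_life=18.0):
    """Extrapolate an observed level c_obs (ug/L), drawn t_obs hours after
    the last dose, to the trough at tau hours after dosing, assuming
    simple mono-exponential decline (a stand-in for the Bayesian step)."""
    k = math.log(2.0) / half_life      # elimination rate constant (1/h)
    return c_obs * math.exp(-k * (tau - t_obs))

def proportional_dose_adjustment(current_dose, predicted_trough, target=1000.0):
    """Linear-kinetics dose rescaling toward the 1000 ug/L target trough."""
    return current_dose * target / predicted_trough
```

This also illustrates why very short post-dose intervals (~4 h) degrade the prediction: the extrapolation then spans most of the dosing interval, amplifying any model misfit.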

Relevance: 30.00%

Abstract:

The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation that predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work was to determine which method of correcting the QT interval is the most appropriate for dogs across different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) underwent ECG examination, and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the different formulae were significantly different (p<0.05), while those derived from the equation QTcV = QT + 0.087(1 - RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate correction of the QT interval in dogs.
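The favored linear correction is easy to apply directly. A minimal helper, assuming QT is measured in seconds and the RR interval is derived from heart rate as RR = 60/HR (so at HR = 60 beats/min no correction is applied):

```python
def qtcv(qt_s, hr_bpm):
    """Linear QT correction favored in the study:
    QTcV = QT + 0.087 * (1 - RR), with QT and RR in seconds."""
    rr_s = 60.0 / hr_bpm          # RR interval (s) from heart rate (beats/min)
    return qt_s + 0.087 * (1.0 - rr_s)
```

At faster heart rates (RR < 1 s) the measured QT is corrected upward, and at slower rates downward, normalizing all values to an RR of 1 s.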

Relevance: 30.00%

Abstract:

Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias <1.5%, precision between 3.1 and 8.3%) by LSS models based on two sampling times. Validation tests indicate that the most informative 2-point LSS models developed for one formulation provide good estimates (R²>0.85) of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
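The form of a 2-point LSS model — an ordinary linear regression of the reference AUC on the concentrations at the chosen sampling times — can be sketched as below. The training data are simulated and purely illustrative; the study's actual coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: "true" AUC for each subject, plus
# concentrations at two sampling times that track it with noise.
n = 24
auc_true = rng.uniform(20.0, 60.0, n)            # ug*h/mL (illustrative)
c15 = 0.08 * auc_true + rng.normal(0, 0.2, n)    # ug/mL at 1.5 h
c24 = 0.01 * auc_true + rng.normal(0, 0.05, n)   # ug/mL at 24 h

# Fit the 2-point LSS model: AUC ~ b0 + b1*C(1.5 h) + b2*C(24 h)
X = np.column_stack([np.ones(n), c15, c24])
coef, *_ = np.linalg.lstsq(X, auc_true, rcond=None)

# In-sample goodness of fit (a jack-knife would refit leaving one
# subject out at a time, as in the study)
auc_pred = X @ coef
r2 = 1 - np.sum((auc_true - auc_pred) ** 2) / np.sum(
    (auc_true - auc_true.mean()) ** 2)
```

Once the coefficients are fixed, estimating a new subject's AUC requires only the two plasma samples — the practical appeal of an LSS.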

Relevance: 30.00%

Abstract:

We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions — a condition not satisfied by standard Wald-type methods based on standard errors — and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
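A standard example of a proper pivotal function in the weak-instruments setting is the Anderson-Rubin statistic: under H0: beta = beta0, the residual y - x*beta0 should be unrelated to the instruments, however weak the first stage is. The sketch below (not code from the paper) implements it for a single endogenous regressor via an F-test of the instruments' joint significance.

```python
import numpy as np

def anderson_rubin_stat(y, x, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = x*beta + u,
    with instrument matrix Z (n x k).  Unlike a Wald t-test, its null
    distribution does not depend on first-stage strength, so it remains
    valid with weak instruments."""
    n, k = Z.shape
    v = y - x * beta0                      # residual under the null
    Zc = np.column_stack([np.ones(n), Z])  # include a constant
    b, *_ = np.linalg.lstsq(Zc, v, rcond=None)
    rss1 = np.sum((v - Zc @ b) ** 2)       # unrestricted residual SS
    rss0 = np.sum((v - v.mean()) ** 2)     # constant-only residual SS
    # F statistic for the joint significance of the k instruments
    return ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
```

Inverting the test — collecting all beta0 not rejected — yields a valid confidence set, which may be unbounded when identification is weak, exactly as the finite-sample theory predicts.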

Relevance: 30.00%

Abstract:

Quantitative mathematical models are simplifications of reality, and the behaviour obtained by simulating these models therefore differs from the real behaviour. Using complex quantitative models is not a solution because, in most cases, there is some uncertainty in the real system that cannot be represented with such models. One way to represent this uncertainty is by means of qualitative or semi-qualitative models. A model of this kind in fact represents a set of models. Simulating the behaviour of a quantitative model generates a trajectory over time for each output variable. This cannot be the result of simulating a set of models. One way to represent the behaviour in this case is by means of envelopes. The exact envelope is complete, that is, it includes all possible behaviours of the model, and correct, that is, every point inside the envelope belongs to the output of at least one instance of the model. Generating such an envelope is normally a very hard task that can be tackled, for example, by means of global optimization algorithms or consistency checking. For this reason, in many cases approximations to the exact envelope are obtained. A complete but not correct approximation of the exact envelope is an over-bounded envelope, whereas a correct but not complete one is under-bounded. These properties have been studied for different simulators for uncertain systems.
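The completeness/correctness distinction can be illustrated with a toy uncertain model (my example, not from the abstract): dx/dt = -k*x with k known only to lie in an interval. Sampling a finite set of k values and taking pointwise min/max yields a correct but, in general, not complete (under-bounded) envelope.

```python
import math

def sampled_envelope(k_lo=0.5, k_hi=1.5, x0=1.0, t_end=3.0, n_t=31, n_k=25):
    """Under-bounded (correct but possibly incomplete) envelope for the
    uncertain model dx/dt = -k*x, k in [k_lo, k_hi]: simulate a finite
    grid of parameter values and take the pointwise min/max."""
    times = [t_end * i / (n_t - 1) for i in range(n_t)]
    ks = [k_lo + (k_hi - k_lo) * j / (n_k - 1) for j in range(n_k)]
    lower, upper = [], []
    for t in times:
        vals = [x0 * math.exp(-k * t) for k in ks]   # analytic solution
        lower.append(min(vals))
        upper.append(max(vals))
    return times, lower, upper
```

For this monotone model the extremes happen to be attained at the interval endpoints, so the sampled envelope coincides with the exact one; for non-monotone dependence on the parameters, a finite sample can miss behaviours, which is precisely why global optimization or consistency checking is needed for exact envelopes.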

Relevance: 30.00%

Abstract:

The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme, with separating distances increasing in geometric progression from stage to stage, will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence for every lag. By accumulating the components, starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
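The accumulation step described above can be sketched as a small helper (illustrative only; it assumes the stage variance components have already been estimated by the hierarchical ANOVA or REML):

```python
def rough_variogram(stage_lags, components):
    """Accumulate nested-ANOVA variance components into a rough variogram.
    stage_lags: the separating distance associated with each stage;
    components: the matching estimated variance components.
    The semivariance at a given lag is the running sum of the components
    for all stages up to and including that lag."""
    gamma, total = [], 0.0
    for lag, comp in sorted(zip(stage_lags, components)):
        total += comp
        gamma.append((lag, total))
    return gamma
```

The result is a monotone non-decreasing set of semivariances at the few nested lags — coarse, but obtained from far fewer sampling points than a full method-of-moments variogram would need.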

Relevance: 30.00%

Abstract:

It has been generally accepted that the method-of-moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. Several issues remain to be examined: how many fewer data can be used, how the sampling sites should be distributed over the site of interest, and how different degrees of spatial variation affect the data requirements. The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
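For reference, the MoM estimator being compared here is the classical Matheron semivariance, gamma(h) = (1/2N(h)) * sum of squared differences over the N(h) pairs separated by approximately h. A minimal binned implementation (illustrative; O(n²) pair enumeration, fine for the sample sizes discussed):

```python
import numpy as np

def mom_variogram(coords, values, bin_edges):
    """Method-of-moments (Matheron) variogram estimator.
    coords: (n, d) site coordinates; values: n observations;
    bin_edges: increasing lag-bin boundaries.
    Returns per-bin semivariances (NaN for empty bins) and pair counts."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    n = len(values)
    semivar = np.zeros(len(bin_edges) - 1)
    counts = np.zeros(len(bin_edges) - 1, int)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            b = np.searchsorted(bin_edges, h) - 1
            if 0 <= b < len(counts):
                semivar[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    return np.where(counts > 0, semivar / np.maximum(counts, 1), np.nan), counts
```

With fewer than ~100 sites the pair counts per bin become small and the estimates erratic, which is the weakness that motivates the REML alternative in the study.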

Relevance: 30.00%

Abstract:

Long-term monitoring of forest soils as part of a pan-European network to detect environmental change depends on an accurate determination of the mean of the soil properties at each monitoring event. Forest soil, however, is known to be very variable spatially. A study was undertaken to explore and quantify this variability at three forest monitoring plots in Britain. Detailed soil sampling was carried out, and the data from the chemical analyses were analysed by classical statistics and geostatistics. An analysis of variance showed that there were no consistent effects from the sample sites in relation to the position of the trees. The variogram analysis showed that there was spatial dependence at each site for several variables, and some varied in an apparently periodic way. An optimal sampling analysis based on the multivariate variogram for each site suggested that a bulked sample from 36 cores would reduce error to an acceptable level. Future sampling should be designed so that it neither targets nor avoids trees and disturbed ground. This is best achieved by using a stratified random sampling design.
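The recommended stratified random design can be sketched as follows: split the plot into a regular grid of strata and draw one uniform random point per stratum, so cores neither target nor avoid fixed features such as trees. The plot size and 6 x 6 grid below are hypothetical, chosen only so that the design yields the 36 cores suggested by the optimal sampling analysis.

```python
import random

def stratified_random_points(width, height, nx, ny, seed=None):
    """Stratified random sampling design: one uniform random point in
    each cell of an nx-by-ny grid covering a width-by-height plot."""
    rng = random.Random(seed)
    dx, dy = width / nx, height / ny
    pts = []
    for i in range(nx):
        for j in range(ny):
            # uniform draw within stratum (i, j)
            pts.append((i * dx + rng.random() * dx,
                        j * dy + rng.random() * dy))
    return pts
```

Compared with purely random sampling, stratification guarantees even coverage of the plot; compared with a rigid grid, the random offset within each stratum avoids systematically aligning cores with rows of trees or other regular features.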

Relevance: 30.00%

Abstract:

The decline of bees has raised concerns regarding their conservation and the maintenance of the ecosystem services they provide to bee-pollinated wild flowers and crops. Although the Mediterranean region is a hotspot for bee species richness, their status remains poorly studied. There is an urgent need for cost-effective, reliable, and unbiased sampling methods that give good bee species richness estimates. This study aims (a) to assess bee species richness in two common Mediterranean habitat types: semi-natural scrub (phrygana) and managed olive groves; (b) to compare species richness in those systems to that of other biogeographic regions; and (c) to assess whether six different sampling methods (pan traps, variable and standardized transect walks, observation plots and trap nests), previously tested in other European biogeographic regions, are suitable in Mediterranean communities. Eight study sites, four per habitat type, were selected on the island of Lesvos, Greece. The species richness observed was high compared to other habitat types worldwide for which comparable data exist. Pan traps collected the highest proportion of the total bee species richness across all methods at the scale of a study site. Variable and standardized transect walks detected the highest total richness over all eight study sites. Trap nests and observation plots detected only a limited fraction of the bee species richness. To assess the total bee species richness in bee diversity hotspots, such as the studied habitats, we suggest a combination of transect walks conducted by trained bee collectors and pan trap sampling.

Relevance: 30.00%

Abstract:

Despite the fact that mites were used at the dawn of forensic entomology to elucidate the postmortem interval, their use in current cases remains quite low for procedural reasons such as inadequate taxonomic knowledge. Special interest is focused on the phoretic stages of some mite species, because phoront-host specificity often allows us to deduce the presence of the carrier (usually Diptera or Coleoptera) even when it has not been seen in the sampling performed in situ or in the autopsy room. In this article, we describe two cases in which Poecilochirus austroasiaticus Vitzthum (Acari: Parasitidae) was sampled in the autopsy room. In the first case, we could sample the host, Thanatophilus ruficornis (Küster) (Coleoptera: Silphidae), which was still carrying phoretic stages of the mite on its body. By observing starvation/feeding periods as a function of digestive-tract filling, that association allowed us to establish chronological cycles of phoretic behavior, with maximum peaks of phoronts during arrival at and departure from the corpse and the lowest values during the host's feeding phase. From the sarcosaprophagous fauna, we were able to determine in this case a minimum postmortem interval of 10 days. In the second case, we found no Silphidae at the place where the corpse was found or at the autopsy, but a postmortem interval of 13 days could be established from the high specificity of this interspecific relationship and the known timing of this coleopteran family's departure from the corpse.

Relevance: 30.00%

Abstract:

A Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance of each error with a hierarchical Gamma-distributed prior. The computation is carried out by a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation, when it exists, may lead to biased estimates.
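The augmented draw can be sketched for the per-observation scale factors. This follows the Geweke (1993) scale-mixture form — placing nu/lambda_i ~ chi-square(nu) a priori, which is equivalent to Student-t errors — as an illustrative fragment of one Gibbs step, not the authors' actual sampler:

```python
import numpy as np

def draw_lambdas(residuals, sigma2, nu, rng):
    """Augmented Gibbs step for per-observation variance scale factors
    lambda_i, where error e_i ~ N(0, sigma2 * lambda_i) and the prior is
    nu/lambda_i ~ chi-square(nu).  The full conditional is then
    (e_i^2/sigma2 + nu) / lambda_i ~ chi-square(nu + 1), so we draw a
    chi-square(nu + 1) variate per observation and invert."""
    e2 = np.asarray(residuals, float) ** 2
    return (e2 / sigma2 + nu) / rng.chisquare(nu + 1, size=e2.shape)
```

Each sweep of the full sampler would interleave this draw with the usual conjugate updates for the regression coefficients and sigma2, the residuals being rescaled by the current lambda_i; observations with large residuals receive large lambda_i and are automatically down-weighted.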

Relevance: 30.00%

Abstract:

Activities involving fauna monitoring are usually limited by a lack of resources; therefore, choosing a proper and efficient methodology is fundamental to maximizing the cost-benefit ratio. Both direct and indirect methods can be used to survey mammals, but the latter are preferred due to the difficulty of sighting and/or capturing the individuals, besides being cheaper. We compared the performance of two methods to survey medium and large-sized mammals, track plot recording and camera trapping, and assessed their costs. At Jataí Ecological Station (21°31'15"S, 47°34'42"W, Brazil) we installed ten camera traps along a dirt road directly in front of ten track plots and monitored them for 10 days. We cleaned the plots, adjusted the cameras, and noted the recorded species daily. Records taken by both methods showed that they sample the local richness in different ways (Wilcoxon, T=231; p<0.01). The track plot method performed better at registering individuals, whereas camera trapping provided records that permitted more accurate species identification. The type of infra-red sensor camera used showed a strong bias towards individual body mass (R²=0.70; p=0.017), and the variable expenses of this method in a 10-day survey were estimated to be about 2.04 times higher than those of the track plot method; in the long run, however, camera trapping becomes cheaper than track plot recording. In conclusion, track plot recording is good enough for quick surveys under a limited budget, while camera trapping is best for precise species identification and the investigation of species details, performing better for large animals. When used together, these methods can be complementary.
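The cost trade-off in the conclusion can be made concrete with a small break-even calculation. All cost figures in the test below are hypothetical placeholders (the abstract reports only the 2.04 ratio of 10-day expenses); the point is structural — a method with high one-off equipment costs but low daily costs eventually undercuts a labor-intensive method with higher daily costs.

```python
import math

def breakeven_day(fixed_cam, daily_cam, fixed_track, daily_track):
    """First survey day on which total camera-trapping cost drops to or
    below total track-plot cost, or None if it never does.
    Total cost after d days is modeled as fixed + daily * d."""
    if daily_cam >= daily_track:
        return None      # camera trapping never catches up
    # Solve fixed_cam + daily_cam*d <= fixed_track + daily_track*d for d
    d = (fixed_cam - fixed_track) / (daily_track - daily_cam)
    return max(0, math.ceil(d))
```

With placeholder figures of 2248 fixed plus 20/day for cameras against 120/day for track plots, the 10-day cost ratio is (2248 + 200) / 1200 = 2.04 — matching the reported ratio — yet camera trapping becomes the cheaper method from day 23 onward.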