966 results for Limited Sampling Strategies
Abstract:
The prevalence of obesity in the western world is rising dramatically, and many of these individuals require therapeutic intervention for a variety of disease states. Despite the growing prevalence of obesity, there is a paucity of information describing how doses should be adjusted, or indeed whether they need to be adjusted, in the clinical setting. This review is aimed at identifying which descriptors of body size provide the most information about the relationship between dose and concentration in the obese. Weight, lean body weight, ideal body weight, body surface area, body mass index, fat-free mass, percent ideal body weight, adjusted body weight and predicted normal body weight were considered as potential size descriptors. We conducted an extensive review of the literature to identify studies that have assessed the quantitative relationship between these descriptors of body size and the parameters clearance (CL) and volume of distribution (V). Surprisingly few studies have addressed the relationship between obesity and CL or V in a quantitative manner. Despite the lack of studies, there were consistent findings: (i) most studies found total body weight to be the best descriptor of V. A further analysis of the studies that addressed V found that total body weight, or another descriptor that incorporated fat mass, was the preferred descriptor for drugs of high lipophilicity; (ii) in contrast, CL was best described by lean body mass, and no apparent relationship was found between lipophilicity or clearance mechanism and the preferred body size descriptor. In conclusion, no single descriptor described the influence of body size on both CL and V equally well. For drugs that are dosed chronically, where CL is therefore of primary concern, dosing for obese patients should not be based on their total weight.
If a weight-based dose individualization is required then we would suggest that chronic drug dosing in the obese subject should be based on lean body weight, at least until a more robust size descriptor becomes available.
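Several of the size descriptors listed above have standard closed-form definitions. As an illustrative sketch, assuming the widely used Devine (IBW), Du Bois (BSA) and James (LBW) equations, which are standard formulas and not taken from the review itself:

```python
# Illustrative calculators for common body size descriptors.
# Assumed formulas: Devine (IBW), Du Bois (BSA), James (LBW);
# standard equations, not taken from the review itself.

def ideal_body_weight(height_cm: float, male: bool) -> float:
    """Devine formula: IBW in kg from height."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def body_surface_area(height_cm: float, weight_kg: float) -> float:
    """Du Bois formula: BSA in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def lean_body_weight(height_cm: float, weight_kg: float, male: bool) -> float:
    """James formula: LBW in kg."""
    if male:
        return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2
    return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2

def body_mass_index(height_cm: float, weight_kg: float) -> float:
    """BMI in kg/m^2."""
    return weight_kg / (height_cm / 100.0) ** 2

# e.g., an 80 kg, 180 cm male: IBW ~75 kg, LBW ~63 kg, BMI ~24.7
```

For this example patient, a drug dosed on lean body weight rather than total body weight would use roughly 63 kg rather than 80 kg as the size scalar, illustrating how the choice of descriptor changes the dose.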
Abstract:
Registration of births, recording deaths by age, sex and cause, and calculating mortality levels and differentials are fundamental to evidence-based health policy, monitoring and evaluation. Yet few of the countries with the greatest need for these data have functioning systems to produce them, despite legislation providing for the establishment and maintenance of vital registration. Sample vital registration (SVR), when applied in conjunction with validated verbal autopsy procedures and implemented in a nationally representative sample of population clusters, represents an affordable, cost-effective, and sustainable short- and medium-term solution to this problem. SVR complements other information sources by producing age-, sex-, and cause-specific mortality data that are more complete and continuous than those currently available. The tools and methods employed in an SVR system, however, are imperfect and require rigorous validation and continuous quality assurance; sampling strategies for SVR are also still evolving. Nonetheless, interest in establishing SVR is rapidly growing in Africa and Asia. Better systems for reporting and recording data on vital events will be sustainable only if developed hand-in-hand with existing health information strategies at the national and district levels; governance structures; and agendas for social research and development monitoring. If the global community wishes to have mortality measurements 5 or 10 years hence, the foundation stones of SVR must be laid today.
Abstract:
Achieving adequate therapeutic levels of immunosuppressive medications is important in rejection prevention. This study examined exposure to mycophenolic acid (MPA) in kidney transplant patients within the first 5 days posttransplantation. Methods. This single-center, nonrandomized study of first solitary kidney allograft recipients receiving cyclosporine (n = 116) or tacrolimus (n = 50) included patients who received either 1 g or 1.5 g of mycophenolate mofetil twice daily starting postoperatively. Exposure to MPA was measured at days 3 and 5 posttransplant using published limited sampling time equations. Results. There were no significant differences in exposure in the cyclosporine-treated patients receiving 3-g (n = 22) compared to 2-g (n = 94) daily doses (AUC([0-12]) 33.8 +/- 10.0 mg*h/L versus 30.1 +/- 9.7 mg*h/L, P = .20, respectively). About half the patients in both groups had AUC([0-12]) < 30 mg*h/L on days 3 and 5 posttransplant. On the other hand, there was significantly greater exposure on day 3 in the tacrolimus-treated patients receiving 3 g (n = 21) compared to 2 g (n = 29) daily (AUC([0-12]) 43.1 +/- 9.0 mg*h/L versus 36.8 +/- 11.1 mg*h/L, P = .016, respectively). On day 3, one (4.8%) patient receiving 3 g had an AUC([0-12]) of < 30 mg*h/L, whereas eight (27.5%) receiving 2 g were below this level (P = .068). The AUC([0-12]) levels were not different on day 5. Conclusions. Loading with higher doses of mycophenolate mofetil results in greater exposure and a trend toward more patients in the therapeutic window within the first week for tacrolimus- but not for cyclosporine-treated patients.
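The "limited sampling time equations" referenced above typically estimate AUC(0-12) as a linear combination of a few timed concentrations. A minimal sketch of that form, with hypothetical coefficients and sampling times standing in for the published equations the study actually used:

```python
# Sketch of a limited-sampling-strategy (LSS) estimate of MPA AUC(0-12).
# The coefficients, intercept and sampling times are HYPOTHETICAL
# placeholders; the published equations must be substituted in practice.

def lss_auc(concentrations, coefficients, intercept):
    """AUC estimate = intercept + sum(coef_i * C_i) over a few timed samples."""
    return intercept + sum(b * c for b, c in zip(coefficients, concentrations))

# e.g., three samples (mg/L) drawn at assumed times post-dose
auc = lss_auc([2.0, 10.0, 6.0], coefficients=[1.0, 1.5, 2.9], intercept=8.2)
below_target = auc < 30.0   # threshold used in the study (30 mg*h/L)
```

This is why only a handful of blood draws are needed: the regression coefficients, fitted once against full pharmacokinetic profiles, carry the rest of the information.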
Abstract:
There is an alternative model of the 1-way ANOVA, called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
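For a balanced one-way random effects design, the variance components described above can be estimated by the classical ANOVA method of moments. A minimal sketch of that calculation:

```python
# Minimal sketch: variance components from a balanced one-way random
# effects ANOVA (method of moments). Groups might be, e.g., sites each
# sampled the same number of times.
from statistics import mean

def variance_components(groups):
    """Return (between-group variance, within-group variance) estimates
    for a balanced design: sigma2_between = (MS_between - MS_within) / n."""
    k = len(groups)               # number of groups
    n = len(groups[0])            # replicates per group (balanced)
    grand = mean(x for g in groups for x in g)
    group_means = [mean(g) for g in groups]
    ms_between = n * sum((m - grand) ** 2 for m in group_means) / (k - 1)
    ms_within = sum((x - m) ** 2 for g, m in zip(groups, group_means)
                    for x in g) / (k * (n - 1))
    sigma2_between = max((ms_between - ms_within) / n, 0.0)  # truncate at 0
    return sigma2_between, ms_within
```

Comparing the two returned components shows which level of the design contributes most variation, which is exactly the information needed to plan an efficient sampling strategy.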
Abstract:
We undertook a longitudinal qualitative study involving 20 patients from Scotland who had type 2 diabetes. We looked at their perceptions and understandings of why they had developed diabetes and how, and why, their causation accounts had changed or remained stable over time. Respondents, all of whom were white, were interviewed four times over a 4-year period (at baseline, 6, 12 and 48 months). Their causation accounts often shifted, sometimes subtly, sometimes radically, over the 4 years. The experiential dimensions of living with, observing, and managing their disease over time were central to understanding the continuities and changes we observed. We also highlight how, through a process of removing, adding and/or de-emphasising explanatory factors, causation accounts could be used as "resources" to justify or enable present treatment choices. We use our work to support critiques of social cognition theories, with their emphasis upon beliefs being antecedent to behaviours. We also provide reflections upon the implications of our findings for qualitative research designs and sampling strategies.
Abstract:
The use of quantitative methods has become increasingly important in the study of neuropathology and especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, data analysis methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
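Two of the sampling schemes mentioned above can be sketched as simple density estimators, assuming the standard quadrat-count and Cottam & Curtis point-quarter forms rather than any formula from the chapter itself:

```python
# Illustrative density estimators (assumed standard forms, not the
# chapter's own formulas): quadrat counts and the Cottam & Curtis
# point-quarter estimator.
from statistics import mean

def quadrat_density(counts, quadrat_area):
    """Mean lesion count per unit area from per-quadrat counts."""
    return mean(counts) / quadrat_area

def point_quarter_density(distances):
    """Point-quarter estimate: density = 1 / (mean distance)^2, where
    distances are point-to-nearest-lesion in each quarter around a point."""
    d = mean(distances)
    return 1.0 / (d * d)
```

Quadrat sampling counts lesions in fixed plots, while the point-quarter method is plotless, inferring density from nearest-lesion distances; the choice between them is part of the sampling strategy the chapter discusses.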
Abstract:
A global corporation values both profitability and social acceptance; its units mutually negotiate governance and represent a highly interdependent network where centers of excellence and high-potential employees are identified regardless of geographic locations. These companies try to build geocentric, or "world oriented" (Marquardt, 1999, p. 20), organizational cultures. Such culture "transcends cultural differences and establishes 'beacons' – values and attitudes – that are comprehensive and compelling" (Kets de Vries & Florent-Treacy, 2002, p. 299) for all employees, regardless of their national origins. Creating a geocentric organizational culture involves transforming each employee's mindset, beliefs, and behaviors so that he/she can become "a world citizen in spite of having a national identity" (Marquardt, 1999, p. 47). The purpose of this phenomenological study was to explore how employees with different national identities experience a geocentric organizational culture of a global corporation. Phenomenological research aims to understand "how people experience some phenomenon—how they perceive it, describe it, feel about it, judge it, remember it, make sense of it, and talk about it with others" (Patton, 2002, p. 104). Twelve participants were selected using criterion, convenience, and snowball sampling strategies. A semi-structured interview guide was used to collect data. Data were analyzed inductively, using Moustakas's (1994) Modification of the Stevick-Colaizzi-Keen Method of Analysis of Phenomenological Data. The participants in this study experienced a geocentric organizational culture of a global corporation as one in which they felt connected, valued, and growing personally and professionally. The participants felt connected to the companies via business goals and social responsibility.
The participants felt valued by the company because their creativity was welcomed and they could contribute unique knowledge of the culture and language of their native countries. The participants felt they were growing personally and professionally thanks to professional development opportunities, cross-cultural awareness, and perspective consciousness. Based on the findings from this study, a model of a geocentric organizational culture of a global corporation from an employee perspective is proposed. Implications for research and practice conclude this study.
Abstract:
The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. Design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and deformation (demethylation). The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomenon of interest in unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information about unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found utilizing a strategy which minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations.
Spatial analysis is employed using geostatistical analysis, and optimization occurs with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, where known obstacles are taken into consideration. Optimal areas of camera placement are determined based on the areas generating the largest OPMs. Statistical analysis is performed using Monte Carlo analysis with a varying number of obstacles and cameras in a defined space.
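The OPM idea described above can be sketched as a direction-counting line-of-sight test. The grid, circular-obstacle model and parameters below are illustrative assumptions, not the dissertation's implementation:

```python
# Sketch of an optimal-placement-metric (OPM) style score: for a grid
# point, count how many of N directions have unobstructed line-of-sight
# within a given range. Obstacles are modeled as circles (x, y, radius);
# all parameters here are illustrative assumptions.
import math

def opm(point, obstacles, n_directions=8, rng=5.0, step=0.25):
    """Count directions with clear line-of-sight from `point`."""
    clear = 0
    for i in range(n_directions):
        theta = 2 * math.pi * i / n_directions
        blocked = False
        r = step
        while r <= rng:                       # march along the ray
            x = point[0] + r * math.cos(theta)
            y = point[1] + r * math.sin(theta)
            if any(math.hypot(x - ox, y - oy) < orad
                   for ox, oy, orad in obstacles):
                blocked = True
                break
            r += step
        if not blocked:
            clear += 1
    return clear
```

Grid points with the highest OPM values would then be the candidate camera locations; a Monte Carlo loop over randomized obstacle and camera configurations, as in the dissertation, would assess how robust those candidates are.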
Abstract:
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.
This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications.
Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
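The idea of embedding the temporal dimension into a single coded measurement can be sketched with a toy forward model: each of T frames is modulated by a vertically shifted binary mask and the results are summed into one snapshot. This is an illustration only, not the actual CACTI optics or its reconstruction algorithm:

```python
# Toy forward model of coded-aperture temporal compression: T frames,
# each modulated by a cyclically shifted binary mask, summed into a
# single coded snapshot. Illustrative sketch, not the CACTI hardware.

def coded_snapshot(frames, mask):
    """frames: list of T 2-D lists; mask: 2-D list of the same size.
    Frame t sees the mask cyclically shifted down by t rows."""
    rows, cols = len(mask), len(mask[0])
    snap = [[0.0] * cols for _ in range(rows)]
    for t, frame in enumerate(frames):
        for i in range(rows):
            shifted_row = mask[(i - t) % rows]   # vertical mask translation
            for j in range(cols):
                snap[i][j] += frame[i][j] * shifted_row[j]
    return snap
```

Because each frame is tagged by a distinct mask position, a compressive reconstruction algorithm can later disentangle the T frames from the single measurement, which is the source of the temporal resolution gain described above.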
Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions.
Geometrical optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.
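The k-fold cross-validation used above for regularization parameter selection can be sketched generically. The `fit` and `error` callables below are placeholders for the actual tensor completion solver and its held-out reconstruction error:

```python
# Skeleton of k-fold cross-validation for regularization parameter
# selection (simplified sketch; `fit` and `error` stand in for the
# actual tensor completion solver and its held-out error measure).

def k_fold_select(data, lambdas, fit, error, k=5):
    """Return the lambda with the lowest mean held-out error."""
    folds = [data[i::k] for i in range(k)]
    best_lam, best_err = None, float("inf")
    for lam in lambdas:
        errs = []
        for i in range(k):
            held_out = folds[i]
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            model = fit(train, lam)
            errs.append(error(model, held_out))
        avg = sum(errs) / k
        if avg < best_err:
            best_lam, best_err = lam, avg
    return best_lam
```

The "accelerated" variant developed in the thesis presumably reduces the cost of the inner solver calls; this skeleton only shows the selection logic itself.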
Abstract:
Background: Alcohol is a leading cause of global suffering. Europe reports the highest volume of alcohol consumption in the world, with Ireland and the United Kingdom reporting the highest levels of binge drinking and drunkenness. Levels of consumption are elevated among university students. Thus, this literature review aims to summarise the current research on alcohol consumption among university students in the Republic of Ireland and the United Kingdom. Methods: MEDLINE, CINAHL, EMBASE and PsycINFO were systematically searched for literature from January 2002 until December 2014. Each database was searched using the following search pillars: alcohol, university student, Ireland or the United Kingdom, and prevalence studies. Results: 2128 articles were retrieved from electronic database searching. These were screened by title for relevance. 113 full texts were retrieved and assessed for eligibility. Of these, 29 articles were deemed to meet the inclusion criteria for the review. Almost two thirds of students reported a hazardous alcohol consumption score on the AUDIT scale. Over 20% reported alcohol problems over their lifetime using CAGE, while over 20% exceeded sensible limits each week. Noteworthy is the narrowing of the gender gap throughout the past decade. Conclusion: This is the first review to investigate consumption patterns of university students in Ireland and the United Kingdom. A range of sampling strategies and screening tools is employed in alcohol research, which precludes comparability. The current review provides an overview of consumption patterns to guide policy development.
Abstract:
Mucosal melanoma of the head and neck region (MM-H&N) is a rare disease, characterized by a poor prognosis and limited therapeutic strategies, especially regarding targeted therapy (lower rate of targetable mutations compared to cutaneous melanoma) and immunotherapy (lack of diagnostic tools able to predict the response). Meanwhile, bright-field multiplex immunohistochemistry (BF-mIHC) is emerging as a promising tool for characterizing tumor microenvironment (TME) and predicting response to immunotherapy in several tumors, including melanoma. This PhD project aims to develop a BF-mIHC protocol to evaluate the TME in MM-H&N, analyze the correlation between immune markers/immune profiles and MM-H&N features (clinicopathologic and molecular), and find new biomarkers useful for prognostic-therapeutic stratification of these patients. Specific aims are: (I) describe the clinicopathological features of MM-H&N; (II) analyze the molecular status of MM-H&N and correlate it with the clinicopathological features; (III) analyze the molecular status of multiple specimens from the same patient to verify whether molecular heterogeneity of MM-H&N could affect the results with relevant prognostic-therapeutic implications; (IV) develop a BF-mIHC protocol to study TME in MM-H&N; (V) analyze the correlation between immune markers/immune profiles and MM-H&N features (clinicopathologic and molecular) to test whether BF-mIHC could be a promising tool for prognostic-therapeutic characterization of these patients.
Abstract:
Captan and folpet are two fungicides largely used in agriculture, but biomonitoring data are mostly limited to measurements of captan metabolite concentrations in spot urine samples of workers, which complicates interpretation of results in terms of internal dose estimation, daily variations according to tasks performed, and most plausible routes of exposure. This study aimed to perform repeated biological measurements of exposure to captan and folpet in field workers (i) to better assess internal dose along with the main routes-of-entry according to tasks and (ii) to establish the most appropriate sampling and analysis strategies. The detailed urinary excretion time courses of specific and non-specific biomarkers of exposure to captan and folpet were established in tree farmers (n = 2) and grape growers (n = 3) over a typical workweek (seven consecutive days), including spraying and harvest activities. The impact of the expression of urinary measurements [excretion rate values adjusted or not for creatinine, or cumulative amounts over given time periods (8, 12, and 24 h)] was evaluated. Absorbed doses and main routes-of-entry were then estimated from the 24-h cumulative urinary amounts through the use of a kinetic model. The time courses showed that exposure levels were higher during spraying than harvest activities. Model simulations also suggest a limited absorption in the studied workers and an exposure mostly through the dermal route. They further pointed out the advantage of expressing biomarker values in terms of body weight-adjusted amounts in repeated 24-h urine collections, as compared to concentrations or excretion rates in spot samples, without the necessity for creatinine corrections.
Abstract:
Most of the novel targeted anticancer agents share classical characteristics that define drugs as candidates for blood concentration monitoring: long-term therapy; high interindividual but restricted intraindividual variability; significant drug-drug and drug-food interactions; correlations between concentration and efficacy/toxicity with rather narrow therapeutic index; reversibility of effects; and absence of early markers of response. Surprisingly though, therapeutic concentration monitoring has received little attention for these drugs despite reiterated suggestions from clinical pharmacologists. Several issues explain the lack of clinical research and development in this field: global tradition of empiricism regarding treatment monitoring, lack of formal conceptual framework, ethical difficulties in the elaboration of controlled clinical trials, disregard from both drug manufacturers and public funders, limited encouragement from regulatory authorities, and practical hurdles making dosage adjustment based on concentration monitoring a difficult task for prescribers. However, new technologies are soon to help us overcome these obstacles, with the advent of miniaturized measurement devices able to quantify circulating drug concentrations at the point-of-care, to evaluate their plausibility given actual dosage and sampling time, to determine their appropriateness with reference to therapeutic targets, and to advise on suitable dosage adjustment. Such evolutions could bring conceptual changes into the clinical development of drugs such as anticancer agents, while increasing the therapeutic impact of population PK-PD studies and systematic reviews. Research efforts in that direction from the clinical pharmacology community will be essential for patients to receive the greatest benefits and the least harm from new anticancer treatments.
The example of imatinib, the first commercialized tyrosine kinase inhibitor, will be outlined to illustrate a potential research agenda for the rational development of therapeutic concentration monitoring.
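The dosage adjustment step in such a monitoring loop can be sketched under an assumed linear (dose-proportional) pharmacokinetic model. This is an illustration only, not a clinical algorithm, and the dose limits are hypothetical:

```python
# Sketch of concentration-guided dose adjustment assuming linear
# (dose-proportional) pharmacokinetics. Illustrative only; the dose
# bounds are hypothetical, not clinical recommendations.

def adjust_dose(current_dose_mg, measured_conc, target_conc,
                min_dose_mg=100.0, max_dose_mg=800.0):
    """Scale the dose proportionally toward the target concentration,
    clamped to hypothetical minimum and maximum daily doses."""
    proposed = current_dose_mg * target_conc / measured_conc
    return min(max(proposed, min_dose_mg), max_dose_mg)
```

For example, a patient on 400 mg/day with a measured trough of half the target would be proposed 800 mg/day under this model; in practice any such step would be tempered by the plausibility and appropriateness checks the devices described above are meant to perform.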