974 results for Target Held Method
Abstract:
We describe a simple method using a Percoll gradient for the isolation of highly enriched human monocytes. High numbers of fully functional cells are obtained from whole blood or buffy coat cells. The use of simple laboratory equipment and a relatively cheap reagent makes the described method a convenient approach to obtaining human monocytes.
Abstract:
The susceptibility patterns of 108 Campylobacter jejuni subsp. jejuni clinical strains to six antimicrobial agents were determined using the E-test and the double dilution agar methods. Using both methods, no strain was found to be resistant to ciprofloxacin, erythromycin, or gentamicin, but two (1.8%) were resistant to tetracycline and all to aztreonam. Seven (6.5%) strains were resistant to ampicillin by the E-test and five (4.6%) by the double dilution agar method and by both methods. No great discrepancies were observed between the two methods.
Abstract:
Due to the overlapping distribution of Trypanosoma rangeli and T. cruzi in Central and South America, where they share several reservoirs and triatomine vectors, we herein describe a simple method for collecting triatomine feces and hemolymph on filter paper for subsequent detection and specific characterization of these two trypanosomes. Feces and hemolymph from experimentally infected triatomines were collected on filter paper, and specific detection of T. rangeli or T. cruzi DNA by polymerase chain reaction was achieved. This simple DNA collection method allows samples to be collected in the field and the trypanosomes to be specifically detected and characterized later in the laboratory.
Abstract:
Research project based on a stay at the University of Calgary, Canada, between December 2007 and February 2008. The project consisted of analyzing the data from a study in the psychology of music, specifically on how music influences attention via the person's emotional and energetic states. Video was used in the research sessions, providing visual and auditory data to complement the quantitative data from the attention tests administered. The analysis was carried out using qualitative methods and techniques learned during the stay. The work also deepened the understanding of the qualitative paradigm as a valid paradigm that genuinely complements the quantitative one. The focus was placed on conversation analysis from an interpretive standpoint, as well as on the analysis of body and facial language based on video observation, formulating descriptors and subdescriptors of the behavior related to the hypothesis. Some descriptors had been formulated before the analysis, based on other studies and on the researcher's background; others were discovered during the analysis. The behavioral descriptors and subdescriptors relate to the intensity of the mood and energy states of the different participants. The analysis was conducted as a case study, examining each person exhaustively with the aim of finding intrapersonal and interpersonal reaction patterns. The observed patterns will be contrasted with the quantitative information, triangulating the data to find possible mutual support or contradictions. Preliminary results indicate a relationship between the type of music and behavior: music with negative emotionality is associated with the person closing off, whereas energetic music activates the participants (as observed behaviorally), who smile when it is positive.
Abstract:
Rheumatoid arthritis (RA) is characterized by chronic inflammation of the synovial joints resulting from hyperplasia of synovial fibroblasts and infiltration of lymphocytes, macrophages and plasma cells, all of which manifest signs of activation. All these cells proliferate abnormally, invade bone and cartilage, produce elevated amounts of pro-inflammatory cytokines and metalloproteinases, and trigger osteoclast formation and activation. Some of the pathophysiological consequences of the disease may be explained by inadequate apoptosis, which may promote the survival of autoreactive T cells, macrophages or synovial fibroblasts. Although RA does not result from single genetic mutations, elucidation of the molecular mechanisms implicated in joint destruction has revealed novel targets for gene therapy. Gene transfer strategies include inhibition of pro-inflammatory cytokines, blockade of cartilage-degrading metalloproteinases, inhibition of synovial cell activation and manipulation of the Th1-Th2 cytokine balance. Recent findings have illuminated the idea that induction of apoptosis in the rheumatoid joint can also be used to gain therapeutic advantage in the disease. In the present review we will discuss different strategies used for gene transfer in RA and chronic inflammation. In particular, we will highlight the importance of programmed cell death as a novel target for gene therapy using endogenous biological mediators, such as galectin-1, a beta-galactoside-binding protein that induces apoptosis of activated T cells and immature thymocytes.
Abstract:
Report for the scientific sojourn at Imperial College London, United Kingdom, from 2007 to 2009. PTEN is a tumour suppressor enzyme that plays important roles in the PI3K pathway, which regulates growth, proliferation and survival and is thus related to many human disorders such as diabetes, neurodegenerative diseases, cardiovascular complications and cancer. It is hence of great interest to understand its molecular behaviour in detail and to find small molecules that can switch its activity on and off. For this purpose, metal complexes have been synthesized, and preliminary in vivo studies show that all of them are capable of inhibiting PTEN.
Abstract:
We present a novel hybrid (or multiphysics) algorithm, which couples pore-scale and Darcy descriptions of two-phase flow in porous media. The flow at the pore scale is described by the Navier–Stokes equations, and the Volume of Fluid (VOF) method is used to model the evolution of the fluid–fluid interface. An extension of the Multiscale Finite Volume (MsFV) method is employed to construct the Darcy-scale problem. First, a set of local interpolators for pressure and velocity is constructed by solving the Navier–Stokes equations; then, a coarse mass-conservation problem is constructed by averaging the pore-scale velocity over the cells of a coarse grid, which act as control volumes; finally, a conservative pore-scale velocity field is reconstructed and used to advect the fluid–fluid interface. The method relies on the localization assumptions used to compute the interpolators (which are quite straightforward extensions of the standard MsFV) and on the postulate that the coarse-scale fluxes are proportional to the coarse-pressure differences. By numerical simulations of two-phase problems, we demonstrate that these assumptions provide hybrid solutions that are in good agreement with reference pore-scale solutions and are able to model the transition from stable to unstable flow regimes. Our hybrid method can naturally take advantage of several adaptive strategies, allowing pore-scale fluxes to be considered only in some regions while Darcy fluxes are used in the rest of the domain. Moreover, since the method relies on the assumption that the relationship between coarse-scale fluxes and pressure differences is local, it can be used as a numerical tool to investigate the limits of validity of Darcy's law and to understand the link between pore-scale quantities and their corresponding Darcy-scale variables.
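The coarse mass-conservation step lends itself to a small illustration. The Python sketch below is not the authors' code: the uniform staggered grid, the function names, and the 2-D setting are illustrative assumptions. It only shows the averaging idea, i.e. summing fine-scale face velocities over the faces of coarse control volumes and checking that the resulting coarse fluxes are conservative.

```python
# Illustrative sketch (not the paper's implementation) of averaging pore-scale
# face velocities onto coarse control volumes, as in the MsFV coarse problem.
import numpy as np

def coarse_face_fluxes(u, v, block):
    """Sum fine-scale face velocities (u: x-normal faces, v: y-normal faces)
    over the faces of coarse control volumes of block x block fine cells."""
    nx, ny = v.shape[0], u.shape[1]          # number of fine cells in x and y
    Nx, Ny = nx // block, ny // block        # number of coarse cells
    U = np.zeros((Nx + 1, Ny))               # coarse fluxes, x-normal faces
    V = np.zeros((Nx, Ny + 1))               # coarse fluxes, y-normal faces
    for I in range(Nx + 1):
        for J in range(Ny):
            U[I, J] = u[I * block, J * block:(J + 1) * block].sum()
    for I in range(Nx):
        for J in range(Ny + 1):
            V[I, J] = v[I * block:(I + 1) * block, J * block].sum()
    return U, V

def coarse_divergence(U, V):
    """Net outflow of each coarse control volume; ~0 for a conservative field."""
    return (U[1:, :] - U[:-1, :]) + (V[:, 1:] - V[:, :-1])

# Toy usage: a uniform flow in x is conservative at the coarse scale too.
u = np.ones((17, 16))                        # x-face velocities, 16x16 fine grid
v = np.zeros((16, 17))                       # y-face velocities
U, V = coarse_face_fluxes(u, v, block=4)
print(np.abs(coarse_divergence(U, V)).max())  # -> 0.0
```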
Abstract:
Objectives: To investigate the associations between falls before hospital admission, falls during hospitalization, and length of stay in elderly people admitted to post-acute geriatric rehabilitation. Method: History of falling in the 12 months before admission was recorded among 249 older persons (mean age 82.3±7.4 years, 69.1% women) consecutively admitted to post-acute rehabilitation. Data on medical, functional and cognitive status were collected upon admission. Falls during hospitalization and length of stay were recorded at discharge. Results: Overall, 92 (40.4%) patients reported no fall in the 12 months before admission; 63 (27.6%) reported 1 fall, and 73 (32.0%) reported multiple falls. Previous falls (one or more) were significantly associated with in-stay falls (19.9% of previous fallers fell during the stay vs 7.6% of patients without a history of falling, P=.01) and with a longer length of stay (22.4 ± 10.1 days vs 27.1 ± 14.3 days, P=.01). In multivariate robust regression controlling for gender, age, and functional and cognitive status, history of falling remained significantly associated with a longer rehabilitation stay (2.8 days more in single fallers, P=.05, and 3.3 days more in multiple fallers, P=.01, compared to non-fallers). Conclusion: History of falling in the 12 months prior to post-acute geriatric rehabilitation is independently associated with a longer rehabilitation length of stay. Previous fallers also have an increased risk of falling during the rehabilitation stay. This suggests that hospital fall prevention measures should particularly target these high-risk patients.
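As a hedged illustration of the kind of model described, the Python sketch below fits a robust linear regression of length of stay on fall history with covariate adjustment. The data are synthetic stand-ins built from the reported proportions, and the variable names and the choice of a Huber M-estimator (via statsmodels) are assumptions; the abstract does not state the exact estimator or software used.

```python
# Illustrative robust regression of rehabilitation length of stay on fall
# history, controlling for covariates (synthetic data, assumed estimator).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 249
# 0 = no fall, 1 = single faller, 2 = multiple faller (reported proportions).
status = rng.choice([0, 1, 2], size=n, p=[0.404, 0.276, 0.320])
df = pd.DataFrame({
    "age": rng.normal(82.3, 7.4, n),
    "female": rng.binomial(1, 0.691, n),
    "functional_score": rng.normal(0.0, 1.0, n),   # placeholder covariate
    "cognitive_score": rng.normal(0.0, 1.0, n),    # placeholder covariate
    "single_faller": (status == 1).astype(int),
    "multi_faller": (status == 2).astype(int),
})
# Synthetic outcome roughly mimicking the reported effect sizes.
df["los_days"] = (22.4 + 2.8 * df["single_faller"] + 3.3 * df["multi_faller"]
                  + rng.normal(0.0, 10.0, n))

X = sm.add_constant(df.drop(columns="los_days"))
fit = sm.RLM(df["los_days"], X, M=sm.robust.norms.HuberT()).fit()
print(fit.params[["single_faller", "multi_faller"]])
```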
Abstract:
We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods, based on a splitting of the vector-valued piecewise linear discontinuous finite element space, that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, the mixed and the traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess the performance of the preconditioners.
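For orientation, the display below gives the generic textbook form of an additive subspace correction (Schwarz-type) preconditioner; the particular splitting of the discontinuous finite element space used in the paper is described there, and only this generic form is reproduced here.

```latex
% Generic additive subspace correction preconditioner (textbook form):
% given a splitting V = \sum_i V_i with restriction operators R_i : V \to V_i,
\[
  B^{-1} \;=\; \sum_{i} R_i^{\top} A_i^{-1} R_i ,
  \qquad A_i \;=\; R_i \, A \, R_i^{\top} .
\]
% "Optimal" means the condition number \kappa(B^{-1}A) is bounded
% independently of the mesh size h and, here, of the Lam\'e parameters.
```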
Abstract:
BACKGROUND: The provision of sufficient basal insulin to normalize fasting plasma glucose levels may reduce cardiovascular events, but such a possibility has not been formally tested. METHODS: We randomly assigned 12,537 people (mean age, 63.5 years) with cardiovascular risk factors plus impaired fasting glucose, impaired glucose tolerance, or type 2 diabetes to receive insulin glargine (with a target fasting blood glucose level of ≤95 mg per deciliter [5.3 mmol per liter]) or standard care and to receive n-3 fatty acids or placebo with the use of a 2-by-2 factorial design. The results of the comparison between insulin glargine and standard care are reported here. The coprimary outcomes were nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes and these events plus revascularization or hospitalization for heart failure. Microvascular outcomes, incident diabetes, hypoglycemia, weight, and cancers were also compared between groups. RESULTS: The median follow-up was 6.2 years (interquartile range, 5.8 to 6.7). Rates of incident cardiovascular outcomes were similar in the insulin-glargine and standard-care groups: 2.94 and 2.85 per 100 person-years, respectively, for the first coprimary outcome (hazard ratio, 1.02; 95% confidence interval [CI], 0.94 to 1.11; P=0.63) and 5.52 and 5.28 per 100 person-years, respectively, for the second coprimary outcome (hazard ratio, 1.04; 95% CI, 0.97 to 1.11; P=0.27). New diabetes was diagnosed approximately 3 months after therapy was stopped among 30% versus 35% of 1456 participants without baseline diabetes (odds ratio, 0.80; 95% CI, 0.64 to 1.00; P=0.05). Rates of severe hypoglycemia were 1.00 versus 0.31 per 100 person-years. Median weight increased by 1.6 kg in the insulin-glargine group and fell by 0.5 kg in the standard-care group. There was no significant difference in cancers (hazard ratio, 1.00; 95% CI, 0.88 to 1.13; P=0.97). CONCLUSIONS: When used to target normal fasting plasma glucose levels for more than 6 years, insulin glargine had a neutral effect on cardiovascular outcomes and cancers. Although it reduced new-onset diabetes, insulin glargine also increased hypoglycemia and modestly increased weight. (Funded by Sanofi; ORIGIN ClinicalTrials.gov number, NCT00069784.).
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
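The gradual-deformation proposal mentioned above admits a compact illustration. In the Python sketch below (illustrative, not the thesis code), the current Gaussian field is combined with an independent realization as z·cos θ + ε·sin θ, which preserves a standard Gaussian prior; for such a prior-preserving proposal the Metropolis acceptance ratio reduces to a likelihood ratio, and θ controls the perturbation strength. The toy likelihood is hypothetical.

```python
# Illustrative gradual-deformation proposal inside a Metropolis sampler.
import numpy as np

rng = np.random.default_rng(42)

def gradual_deformation_step(z_current, theta, rng):
    """Propose a new standard-Gaussian field by rotating the current one
    toward an independent realization; small theta => small perturbation."""
    z_indep = rng.standard_normal(z_current.shape)
    return np.cos(theta) * z_current + np.sin(theta) * z_indep

def metropolis(log_likelihood, z0, theta=0.1, n_iter=5000):
    """Metropolis sampler: because the proposal preserves the Gaussian prior,
    the acceptance probability depends only on the likelihood ratio."""
    z, ll = z0, log_likelihood(z0)
    chain = []
    for _ in range(n_iter):
        z_new = gradual_deformation_step(z, theta, rng)
        ll_new = log_likelihood(z_new)
        if np.log(rng.uniform()) < ll_new - ll:
            z, ll = z_new, ll_new
        chain.append(z.copy())
    return chain

# Toy usage: 'observe' the field mean with Gaussian noise (hypothetical data).
obs, sigma = 0.3, 0.05
loglike = lambda z: -0.5 * ((z.mean() - obs) / sigma) ** 2
samples = metropolis(loglike, rng.standard_normal(100), theta=0.2, n_iter=2000)
```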
Abstract:
Catheter-related bloodstream infection (CR-BSI) diagnosis usually involves catheter withdrawal. An alternative method for CR-BSI diagnosis is the differential time to positivity (DTP) between peripheral and catheter hub blood cultures. This study aims to validate the DTP method in short-term catheters. The results show a low prevalence of CR-BSI in the sample (8.4%). The DTP method is a valid alternative for CR-BSI diagnosis in those cases with monomicrobial cultures (80% sensitivity, 99% specificity, 92% positive predictive value, and 98% negative predictive value), and a cut-off point of 17.7 hours for positivity of the hub blood culture may assist in CR-BSI diagnosis.
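For concreteness, the DTP criterion can be written as a one-line check. In the sketch below, the conventional ≥2 h DTP threshold from the wider literature is used as a default assumption; note that the 17.7 h figure reported above is a cut-off on the hub culture's own time to positivity, not a DTP threshold.

```python
# Illustrative differential-time-to-positivity (DTP) check for CR-BSI.
def cr_bsi_suggested(hub_positivity_h, peripheral_positivity_h,
                     dtp_threshold_h=2.0):
    """CR-BSI is suggested when the catheter-hub culture turns positive at
    least dtp_threshold_h hours earlier than the peripheral culture."""
    dtp = peripheral_positivity_h - hub_positivity_h
    return dtp >= dtp_threshold_h

# Hub positive at 11.0 h, peripheral at 14.5 h -> DTP = 3.5 h -> True.
print(cr_bsi_suggested(hub_positivity_h=11.0, peripheral_positivity_h=14.5))
```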
Abstract:
The diagnosis of Strongyloides stercoralis infection is routinely made by microscopic observation of larvae in stool samples, a low-sensitivity method, or by other, more effective methods such as the Baermann or agar culture plate methods. In this paper we propose a practical modification of the Baermann method. One hundred and six stool samples from alcoholic patients were analyzed using the direct smear test, the agar culture plate method, the standard Baermann method, and its proposed modification. In this modification, the funnel used in the original version of the method is replaced by a test tube with a rubber stopper, perforated to allow insertion of a pipette tip. The tube with a fecal suspension is inverted over another tube containing 6 ml of saline solution and incubated at 37°C for at least 2 h. The saline solution from the second tube is centrifuged and the pellet is examined microscopically. Larvae of S. stercoralis were detected in six samples (5.7%) by both versions of the Baermann method. Five samples were positive using the agar culture plate method, and larvae were observed by direct microscopic observation of fecal smears in only two samples. Cysts of Endolimax nana and Entamoeba histolytica/dispar were also detected with the modified Baermann method. The data obtained suggest that the modified Baermann method concentrates larvae of S. stercoralis as efficiently as the original method.
Abstract:
A modified adsorption-elution method for the concentration of seeded rotavirus from water samples was used to determine the various factors affecting virus recovery. An enzyme-linked immunosorbent assay was used to detect the rotavirus antigen after concentration. Of the various eluents compared, 0.05 M glycine, pH 11.5, gave the highest rotavirus antigen recovery with negatively charged membrane filtration, whereas 2.9% tryptose phosphate broth containing 6% glycine, pH 9.0, gave the greatest elution efficiency when a positively charged membrane was used. Reconcentration of water samples with a SpeedVac concentrator showed significantly higher rotavirus recovery than polyethylene glycol precipitation through both negatively and positively charged filters (p-value <0.001). In addition, SpeedVac concentration using negatively charged filtration resulted in greater rotavirus recovery than positively charged filtration (p-value = 0.004). Thirty-eight environmental water samples were collected from rivers, domestic sewage, canals receiving raw sewage drains, and tap water stored in containers for domestic use, all from congested areas of Bangkok. In addition, several samples of commercial drinking water were analyzed. All samples were concentrated and examined for rotavirus antigen. Coliforms and fecal coliforms (0 to >1,800 MPN/100 ml) were observed, but rotavirus was not detected in any sample. This study suggests that the SpeedVac reconcentration method gives the most efficient rotavirus recovery from water samples.
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are high enough to cause camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed that takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity samarium-153 in an ellipsoid phantom. A complete set of parameters necessary for count-rate-to-activity conversion, including build-up and attenuation coefficients, was also obtained from unsaturated phantom data in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life from the ellipsoid phantom data was calculated to be 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that accounts for the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
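The static building block behind such corrections can be sketched compactly. Assuming the standard paralyzable dead-time model m = n·exp(−nτ), which the abstract does not specify, the true rate n is recovered from the observed rate m by Newton's method; the paper's contribution is extending this idea to count rates that vary in time during the sweep. The dead-time and rate values below are illustrative, not the paper's data.

```python
# Illustrative Newton inversion of the paralyzable dead-time model
# m = n * exp(-n * tau) for the true count rate n given the observed rate m.
import math

def true_rate(m, tau, n0=None, tol=1e-10, max_iter=100):
    """Solve n * exp(-n*tau) = m for n on the lower branch (n*tau < 1)."""
    n = m if n0 is None else n0          # observed rate is a good initial guess
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - m   # residual of the dead-time model
        fp = math.exp(-n * tau) * (1.0 - n * tau)  # derivative df/dn
        step = f / fp
        n -= step                        # Newton update
        if abs(step) < tol * max(n, 1.0):
            return n
    raise RuntimeError("Newton iteration did not converge")

# Example: tau = 5 microseconds, true rate 50,000 cps (~22% counts lost).
tau = 5e-6
n_true = 50_000.0
m_obs = n_true * math.exp(-n_true * tau)  # ~38,940 cps actually observed
print(true_rate(m_obs, tau))              # recovers ~50,000 cps
```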