27 results for One-point Quadrature

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 80.00%

Abstract:

Resistance of trypanosomes to melarsoprol is ascribed to reduced uptake of the drug via the P2 nucleoside transporter. The aim of this study was to look for evidence of drug resistance in Trypanosoma brucei gambiense isolates from sleeping sickness patients in Ibba, South Sudan, an area with a high melarsoprol failure rate. Eighteen T. b. gambiense stocks were characterized phenotypically, and 10 of these strains were also characterized genotypically. In vitro, all isolates were sensitive to melarsoprol, melarsen oxide, and diminazene. Infected mice were cured with a 4-day treatment of 2.5 mg/kg body weight melarsoprol, confirming that the isolates were sensitive. The gene that codes for the P2 transporter, TbAT1, was amplified by PCR and sequenced. The sequences were almost identical to the TbAT1(sensitive) reference, except for one point mutation, C1384T, resulting in the amino acid change proline-462 to serine. None of the described TbAT1(resistant)-type mutations were detected. In a T. b. gambiense sleeping sickness focus where melarsoprol had to be abandoned due to the high incidence of treatment failures, no evidence for drug-resistant trypanosomes or for TbAT1(resistant)-type alleles of the P2 transporter could be found. These findings indicate that factors other than drug resistance contribute to melarsoprol treatment failures.

Relevance: 80.00%

Abstract:

BACKGROUND: The outcome of Kaposi sarcoma varies. While many patients do well on highly active antiretroviral therapy, others have progressive disease and need chemotherapy. In order to predict which patients are at risk of unfavorable evolution, we established a prognostic score. METHOD: We conducted a survival analysis (Kaplan-Meier method; Cox proportional hazards models) of 144 patients with Kaposi sarcoma prospectively included in the Swiss HIV Cohort Study from January 1996 to December 2004. OUTCOME ANALYZED: use of chemotherapy or death. VARIABLES ANALYZED: demographics, tumor staging [T0 or T1 (16)], CD4 cell counts and HIV-1 RNA concentration, human herpesvirus 8 (HHV8) DNA in plasma, and serological titers to latent and lytic antigens. RESULTS: Of 144 patients, 54 needed chemotherapy or died. In the univariate analysis, tumor stage T1, CD4 cell count below 200 cells/µl, positive HHV8 DNA, and absence of antibodies against the HHV8 lytic antigen at the time of diagnosis were significantly associated with a bad outcome. Using multivariate analysis, the following variables were associated with an increased risk of unfavorable outcome: T1 [hazard ratio (HR) 5.22; 95% confidence interval (CI) 2.97-9.18], CD4 cell count below 200 cells/µl (HR 2.33; 95% CI 1.22-4.45), and positive HHV8 DNA (HR 2.14; 95% CI 1.79-2.85). We created a score with these variables ranging from 0 to 4: T1 stage counted for two points, CD4 cell count below 200 cells/µl for one point, and positive HHV8 viral load for one point. Each point increase was associated with an HR of 2.26 (95% CI 1.79-2.85). CONCLUSION: In the multivariate analysis, staging (T1), CD4 cell count (<200 cells/µl), and positive HHV8 DNA in plasma at the time of diagnosis predict evolution toward death or the need for chemotherapy.
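
For concreteness, the published point assignment can be written out in a few lines of code. This is a sketch of the scoring rule described above only; the function name and interface are illustrative, not from the study:

```python
def ks_prognostic_score(stage_t1: bool, cd4_below_200: bool, hhv8_dna_positive: bool) -> int:
    """Kaposi sarcoma prognostic score (0-4) as described in the abstract:
    T1 stage = 2 points; CD4 count below 200 cells/microliter = 1 point;
    detectable HHV8 DNA in plasma = 1 point."""
    return (2 if stage_t1 else 0) + int(cd4_below_200) + int(hhv8_dna_positive)

# Example: a T1 patient with CD4 of 150 cells/microliter and positive HHV8 DNA
print(ks_prognostic_score(True, True, True))  # -> 4, the highest-risk stratum
```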

Relevance: 80.00%

Abstract:

PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability, there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when certain boundary conditions are adhered to, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating ⁶⁸Ge/⁶⁸Ga-filled spheres was developed. To iteratively determine and represent these point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties onto which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET system. The Hann window's apodization properties suppress spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would have been imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of normalized activity concentrations. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
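
As a rough illustration of the core idea, a Transconvolution kernel can be formed in the frequency domain as the ratio of the two systems' transfer functions. The sketch below is not the authors' implementation: it assumes shift-invariant PSFs sampled on a common grid, substitutes a Wiener-style regularization for the exact inverse, and the Hann-windowed virtual system would supply `psf_target`:

```python
import numpy as np

def transconvolution_kernel(psf_source, psf_target, eps=1e-3):
    """Frequency-domain kernel mapping images from the source system to the
    target (virtual) system: OTF_target / OTF_source, regularized so that
    frequencies where the source OTF vanishes are suppressed."""
    otf_source = np.fft.fftn(np.fft.ifftshift(psf_source))
    otf_target = np.fft.fftn(np.fft.ifftshift(psf_target))
    return otf_target * np.conj(otf_source) / (np.abs(otf_source) ** 2 + eps)

def transconvolve(image, psf_source, psf_target, eps=1e-3):
    """Transform an image acquired on the source system into the image the
    target system would have produced of the same object."""
    kernel = transconvolution_kernel(psf_source, psf_target, eps)
    return np.real(np.fft.ifftn(np.fft.fftn(image) * kernel))
```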

Relevance: 80.00%

Abstract:

In studies assessing outdoor range use of laying hens, the number of hens seen on outdoor ranges is inversely correlated with flock size. The aim of this study was to assess individual ranging behavior on a covered outdoor run (veranda) and an uncovered outdoor run (free-range) in laying hen flocks varying in size. Five to ten percent of hens (aged 9–15 months) within 4 small (2000–2500 hens), 4 medium (5000–6000), and 4 large (≥9000) commercial flocks were fitted with radio frequency identification (RFID) tags. Antennas were placed on both sides of all popholes, both between the house and the veranda and between the veranda and the free-range. Ranging behavior was directly monitored for approximately three weeks, in combination with hourly photographs of the free-range to record the distribution of hens and 6-h video recordings of two parts of the free-range on two days. Between 79 and 99% of the tagged hens were registered on the veranda at least once, and between 47 and 90% were registered on the free-range at least once. There was no association between the percentage of hens registered outside the house (veranda or free-range) and flock size. However, individual hens in small and medium-sized flocks visited the areas outside the house more frequently and spent more time there than hens from large flocks. Foraging behavior on the free-range was shown more frequently and for a longer duration by hens from small and medium-sized flocks than by hens from large flocks. This difference in ranging behavior could account for the negative relationship between flock size and the number of hens seen outside at any one point in time. In conclusion, our work describes individual birds' use of areas outside the house in large-scale commercial egg production.

Relevance: 80.00%

Abstract:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided, which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and complemented by numerical experiments.
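
A minimal sketch of the residual-based update idea in a simple-kriging, gridded setting (variable names and this particular formulation are assumptions; the paper's explicit formulae cover the general batch-sequential case):

```python
import numpy as np

def update_conditional_paths(paths, cond_cov, new_idx, z_new, jitter=1e-10):
    """Update an ensemble of GRF sample paths conditioned on n observations
    to an ensemble conditioned on n + q observations.
    paths:    (n_paths, n_grid) sample paths given the first n observations
    cond_cov: (n_grid, n_grid) covariance conditional on those observations
    new_idx:  indices of the q new observation points on the grid
    z_new:    (q,) newly observed values"""
    z_new = np.asarray(z_new, dtype=float)
    C = cond_cov[np.ix_(new_idx, new_idx)] + jitter * np.eye(len(new_idx))
    k = cond_cov[:, new_idx]                        # (n_grid, q) cross-covariances
    weights = np.linalg.solve(C, k.T)               # (q, n_grid) kriging weights
    residuals = z_new[None, :] - paths[:, new_idx]  # (n_paths, q)
    return paths + residuals @ weights              # each path now interpolates z_new
```

At the new observation points the kriging weights reduce to the identity, so every updated path passes through z_new, which is the defining property of a conditional simulation.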

Relevance: 80.00%

Abstract:

Many things have been said about literature after postmodernism, but one point there seems to be some agreement on is that it does not turn its back radically on its postmodernist forerunner, but rather generally continues to heed and value its insights. There seems to be something strikingly non-oedipal about the recent aesthetic shift. It is a project of reconstruction that remains deeply rooted in postmodernist tenets. Such an essentially non-oedipal attitude, I would argue, is central to the nature of the reconstructive shift. This, however, also raises questions about the wider cultural context from which such an aesthetic stance arises. If postmodernism was nurtured by the revolutionary spirits of the late 1960s, reconstruction faces a different world with different strategies. Instead of the postmodernist urge to subvert, expose, and undermine, reconstruction yearns toward tentative and fragile intersubjective understanding, toward responsibility and community. Instead of revolt and rebellion it explores reconciliation and compromise. One instance in which this becomes visible in reconstructive narratives is the recurring figure of the lost father. Missing father figures abound in recent novels by authors like Mark Z. Danielewski, Dave Eggers, Yann Martel, and David Mitchell. It almost seems as if a younger generation is yearning for the fathers that postmodernism struggled so hard to do away with. My paper will focus on one particularly striking example to explore the implications of this development: Daniel Wallace's novel Big Fish and Tim Burton's well-known film adaptation of it. In their negotiation of fact and fiction, of doubt and belief, of freedom and responsibility, all of which converge in a father-son relationship, they serve well to illustrate central characteristics and concerns of recent attempts to leave postmodernism behind.

Relevance: 40.00%

Abstract:

We consider the Schrödinger equation for a relativistic point particle in an external one-dimensional δ-function potential. Using dimensional regularization, we investigate both bound and scattering states, and we obtain results that are consistent with the abstract mathematical theory of self-adjoint extensions of the pseudodifferential operator H = √(p² + m²). Interestingly, this relatively simple system is asymptotically free. In the massless limit, it undergoes dimensional transmutation and it possesses an infrared conformal fixed point. Thus it can be used to illustrate nontrivial concepts of quantum field theory in the simpler framework of relativistic quantum mechanics.
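
In standard notation, the eigenvalue problem studied here is of the following form (a sketch; the coupling λ and the sign convention are illustrative, with ħ = c = 1):

```latex
\left(\sqrt{p^2 + m^2} - \lambda\,\delta(x)\right)\psi(x) = E\,\psi(x),
\qquad p = -i\,\frac{d}{dx}.
```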

Relevance: 30.00%

Abstract:

HIV virulence, i.e., the time of progression to AIDS, varies greatly among patients. As with other rapidly evolving human pathogens, it is difficult to know whether this variance is controlled by the genotype of the host or by that of the virus, because the transmission chain is usually unknown. We apply the phylogenetic comparative approach (PCA) to estimate the heritability of a trait from one infection to the next, which indicates the control of the virus genotype over this trait. The idea is to use viral RNA sequences obtained from patients infected by HIV-1 subtype B to build a phylogeny, which approximately reflects the transmission chain. Heritability is measured statistically as the propensity of patients close in the phylogeny to exhibit similar infection trait values. The approach reveals that up to half of the variance in set-point viral load, a trait associated with virulence, can be heritable. Our estimate is significant and robust to noise in the phylogeny. We also check the consistency of our approach by showing that a trait related to drug resistance is almost entirely heritable. Finally, we show the importance of taking the transmission chain into account when estimating correlations between infection traits. The fact that HIV virulence is, at least partially, heritable from one infection to the next has clinical and epidemiological implications. The difference between earlier studies and ours comes from the quality of our dataset and from the power of the PCA, which can be applied to large datasets and accounts for within-host evolution. The PCA opens new perspectives for approaches linking clinical data and evolutionary biology, because it can be extended to study other traits or other infectious diseases.
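
To make the "patients close in the phylogeny exhibit similar trait values" idea concrete, here is a deliberately crude proxy, not the paper's PCA: pair each tip with its phylogenetically closest tip and correlate trait values across those pairs, in the spirit of parent-offspring regression. All names are illustrative:

```python
import numpy as np

def nearest_tip_similarity(patristic, trait):
    """Crude heritability proxy on a transmission phylogeny.
    patristic: (n, n) matrix of pairwise patristic distances between tips
    trait:     (n,) infection trait values (e.g., set-point viral load)
    Returns the correlation between each tip's trait value and that of its
    phylogenetically closest tip."""
    D = np.asarray(patristic, dtype=float).copy()
    np.fill_diagonal(D, np.inf)          # exclude self-pairs
    nearest = D.argmin(axis=1)           # closest tip for each tip
    trait = np.asarray(trait, dtype=float)
    return np.corrcoef(trait, trait[nearest])[0, 1]
```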

Relevance: 30.00%

Abstract:

Background and Aims: Data on the influence of calibration on the accuracy of continuous glucose monitoring (CGM) are scarce. The aim of the present study was to investigate whether the time point of calibration has an influence on sensor accuracy and whether this effect differs according to glycemic level. Subjects and Methods: Two CGM sensors were inserted simultaneously, one on each side of the abdomen, in 20 individuals with type 1 diabetes. One sensor was calibrated predominantly using preprandial glucose (calibration(PRE)). The other sensor was calibrated predominantly using postprandial glucose (calibration(POST)). A minimum of three additional glucose values per day was obtained for the analysis of accuracy. Sensor readings were divided into four categories according to the glycemic range of the reference values (low, ≤4 mmol/L; euglycemic, 4.1-7 mmol/L; hyperglycemic I, 7.1-14 mmol/L; and hyperglycemic II, >14 mmol/L). Results: The overall mean ± SEM absolute relative difference (MARD) between capillary reference values and sensor readings was 18.3 ± 0.8% for calibration(PRE) and 21.9 ± 1.2% for calibration(POST) (P<0.001). MARD according to glycemic range was 47.4 ± 6.5% (low), 17.4 ± 1.3% (euglycemic), 15.0 ± 0.8% (hyperglycemic I), and 17.7 ± 1.9% (hyperglycemic II) for calibration(PRE) and 67.5 ± 9.5% (low), 24.2 ± 1.8% (euglycemic), 15.5 ± 0.9% (hyperglycemic I), and 15.3 ± 1.9% (hyperglycemic II) for calibration(POST). In the low and euglycemic ranges, MARD was significantly lower with calibration(PRE) than with calibration(POST) (P=0.007 and P<0.001, respectively). Conclusions: Sensor calibration based predominantly on preprandial glucose resulted in significantly higher overall sensor accuracy than predominantly postprandial calibration. The difference was most pronounced in the hypo- and euglycemic reference ranges, whereas both calibration patterns were comparable in the hyperglycemic range.
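
The accuracy metric used here, the mean absolute relative difference (MARD), is simple to state in code. A minimal sketch, assuming paired sensor and reference readings:

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative difference, in percent, between paired
    sensor readings and reference glucose values."""
    sensor = np.asarray(sensor, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

# Example: readings of 5.5 and 9.0 mmol/L against references of 5.0 and 10.0
print(mard([5.5, 9.0], [5.0, 10.0]))  # -> 10.0 (%)
```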

Relevance: 30.00%

Abstract:

Reconstruction of a patient-specific 3D bone surface from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence establishing algorithm, and show its application to reconstructing the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate-spline-based deformation to find a fraction of best-matched 2D point pairs between features detected in the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs, turning the 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. An average mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone; it decreased to 1.0 mm when three fluoroscopic images were used.
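
One common way to realize a symmetric injective nearest-neighbor operator is to keep only mutual nearest neighbors; mutuality guarantees injectivity. The sketch below illustrates that idea alone and omits the paper's specifics (e.g., retaining only the best-matched fraction of pairs and the thin-plate-spline deformation step):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mutual_nearest_pairs(points_a, points_b):
    """Return index pairs (i, j) such that points_a[i] and points_b[j]
    are each other's nearest neighbors; the resulting mapping is injective."""
    dist = cdist(points_a, points_b)   # (len(a), len(b)) pairwise distances
    nn_ab = dist.argmin(axis=1)        # nearest b-point for each a-point
    nn_ba = dist.argmin(axis=0)        # nearest a-point for each b-point
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```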

Relevance: 30.00%

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present here our validation using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data, both without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
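
The least trimmed squares idea used throughout the three stages can be illustrated on a toy 1D line fit: repeatedly refit on the h samples with the smallest squared residuals, where h is set by the assumed outlier rate. This sketch conveys only the trimming loop, not the paper's registration or instantiation machinery:

```python
import numpy as np

def lts_line_fit(x, y, outlier_rate=0.2, max_iter=20):
    """Least-trimmed-squares fit of y ~ a*x + b: keep the
    h = ceil((1 - outlier_rate) * n) points with the smallest squared
    residuals and refit until the kept subset stabilizes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = int(np.ceil((1.0 - outlier_rate) * n))
    keep = np.arange(n)
    for _ in range(max_iter):
        a, b = np.polyfit(x[keep], y[keep], 1)   # refit on kept subset
        residuals_sq = (y - (a * x + b)) ** 2
        new_keep = np.argsort(residuals_sq)[:h]  # h best-fitting points
        if np.array_equal(np.sort(new_keep), np.sort(keep)):
            break
        keep = new_keep
    return a, b, keep
```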

Relevance: 30.00%

Abstract:

The differential safety and efficacy profiles of sirolimus-eluting stents when implanted in patients with multivessel coronary artery disease who have increased body mass indexes (BMIs) compared with those with normal BMIs are largely unknown. This study evaluated the impact of BMI on 1-year outcomes in patients with multivessel coronary artery disease treated with sirolimus-eluting stents as part of the Arterial Revascularization Therapies Study Part II (ARTS II). From February to November 2003, 607 patients were included at 45 centers; 176 patients had normal BMIs (<25 kg/m²), 289 were overweight (≥25 and ≤30 kg/m²), and 142 were obese (>30 kg/m²). At 30 days, the cumulative incidence of the primary combined end point of death, myocardial infarction, cerebrovascular accident, and repeat revascularization (major adverse cardiac and cerebrovascular events) was 3.4% in the group with normal BMIs, 3.1% in overweight patients, and 2.8% in obese patients (p = 0.76). At 1 year, the cumulative incidence of major adverse cardiac and cerebrovascular events was 10.8%, 11.8%, and 7.0% in the normal BMI, overweight, and obese groups, respectively (p = 0.31). In conclusion, BMI had no impact on 1-year clinical outcomes in patients with multivessel coronary artery disease treated with sirolimus-eluting stents in ARTS II.
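
The BMI cut-offs for the three groups translate directly into code; a trivial sketch (function name illustrative, not from the study):

```python
def arts_ii_bmi_group(weight_kg: float, height_m: float) -> str:
    """Classify BMI (kg/m^2) into the three groups described above."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "normal"
    if bmi <= 30:
        return "overweight"
    return "obese"

print(arts_ii_bmi_group(95.0, 1.75))  # BMI ~ 31.0 -> "obese"
```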

Relevance: 30.00%

Abstract:

BACKGROUND: Bleeding is a frequent complication during surgery. The intraoperative administration of blood products, including packed red blood cells, platelets, and fresh frozen plasma (FFP), is often life-saving. Complications of blood transfusions contribute considerably to perioperative costs, and blood product resources are limited. Consequently, strategies to optimize the decision to transfuse are needed. Bleeding during surgery is a dynamic process and may result in major blood loss and coagulopathy due to dilution and consumption. The indication for transfusion should be based on reliable coagulation studies. While hemoglobin levels and platelet counts are available within 15 minutes, standard coagulation studies require one hour. Therefore, the decision to administer FFP often has to be made in the absence of current coagulation data. Point-of-care testing of prothrombin time ensures that one major parameter of coagulation is available in the operating theatre within minutes. It is fast, easy to perform, inexpensive, and may enable physicians to rationally determine the need for FFP. METHODS/DESIGN: The objective of the POC-OP trial is to determine the effectiveness of point-of-care prothrombin time testing in reducing the administration of FFP. It is a patient- and assessor-blind, single-center, randomized controlled parallel-group trial in 220 patients aged between 18 and 90 years undergoing major surgery (any type, except cardiac surgery and liver transplantation) with an estimated blood loss during surgery exceeding 20% of the calculated total blood volume or a requirement for FFP according to the judgment of the physicians in charge. Patients are randomized to usual care plus point-of-care prothrombin time testing or to usual care alone. The primary outcome is the relative risk of receiving any FFP perioperatively. The inclusion of 110 patients per group will yield more than 80% power to detect a clinically relevant relative risk of 0.60 for receiving FFP in the experimental group as compared with the control group. DISCUSSION: Point-of-care prothrombin time testing in the operating theatre may reduce the administration of FFP considerably, which in turn may decrease the costs and complications usually associated with the administration of blood products. TRIAL REGISTRATION: NCT00656396.
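
The stated power claim can be checked with a standard two-proportion power calculation, but only under an assumed control-group FFP rate, which the abstract does not give. The sketch below assumes, purely for illustration, a 50% control rate, so that a relative risk of 0.60 corresponds to 30% in the experimental group:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.50                   # assumed control-group FFP rate (not from the abstract)
p_experimental = 0.60 * p_control  # relative risk of 0.60 -> 0.30

effect = proportion_effectsize(p_control, p_experimental)  # Cohen's h
power = NormalIndPower().power(effect_size=effect, nobs1=110, alpha=0.05, ratio=1.0)
print(round(power, 2))             # ~0.86 under this assumption, i.e. more than 80%
```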

Relevance: 30.00%

Abstract:

This article presents a new response time measure of evaluations, the Evaluative Movement Assessment (EMA). Two properties are verified for the first time in a response time measure: (a) mapping of multiple attitude objects onto a single scale, and (b) centering of that scale around a neutral point. Property (a) has implications for cases in which self-report and response time measures of attitudes correlate weakly. A study using EMA as an indirect measure revealed a low correlation with self-reported attitudes when the correlation reflected between-subjects differences in preference for one attitude object over a second. Previously, this result might have been interpreted as dissociation between the two measures. However, when correlations from the same data reflected within-subject preference rank orders across multiple attitude objects, they were substantial (average r = .64). This result suggests that the low correlations between self-report and response time measures in previous studies may reflect methodological aspects of the response time measurement techniques. Property (b) has implications for exploring theoretical questions that require assessing whether an evaluation is positive or negative (e.g., prejudice), because it allows such classifications to be made in response time measurement for the first time.
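
The between-subjects versus within-subject distinction drawn above can be made concrete with a small sketch (schematic only, not the EMA analysis; all names are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def between_and_within(self_report, indirect):
    """self_report, indirect: (n_subjects, n_objects) attitude matrices.
    Between-subjects: correlate, across subjects, each measure's preference
    for object 0 over object 1. Within-subject: average each subject's
    rank-order correlation across all attitude objects."""
    between = np.corrcoef(self_report[:, 0] - self_report[:, 1],
                          indirect[:, 0] - indirect[:, 1])[0, 1]
    within = np.mean([spearmanr(s, i)[0]
                      for s, i in zip(self_report, indirect)])
    return between, within
```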