817 results for "rejection algorithm"


Relevance: 20.00%

Abstract:

The endothelium, as an organ at the interface between the intra- and extravascular space, actively participates in maintaining an anti-inflammatory and anti-coagulant environment under physiological conditions. Severe humoral and cellular rejection responses, which accompany cross-species transplantation of vascularized organs as well as ischemia/reperfusion injury, primarily target the endothelium and disrupt this delicate balance. Activation of pro-inflammatory and pro-coagulant pathways often leads to irreversible injury not only of the endothelial layer but of the entire graft, with ensuing rejection. This review focuses on strategies for protecting the endothelium from such damage, ranging from genetic manipulation of the donor organ to soluble, as well as membrane-targeted, protective approaches.

Relevance: 20.00%

Abstract:

The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) in which large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution-superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static field irradiations at 6 MV and 15 MV, using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, CCC performed better overall than AAA when compared against VMC++, but AAA remains an attractive option for routine clinical use because of its short computation times. Dose differences between the algorithms and VMC++ for the median value of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung cases than in the breast cases with respect to the accuracy of the dose calculation; this was observed in the dose-volume histograms obtained from the Monte Carlo simulations.
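
As a concrete illustration of the comparison metric quoted above, the sketch below scores the relative difference of the median PTV dose between a test algorithm and a Monte Carlo reference. This is not from the paper; the array names, grid size, and the synthetic 3% offset are assumptions.

```python
import numpy as np

def median_ptv_dose_diff(dose_test, dose_ref, ptv_mask):
    """Percent difference of the median dose inside the PTV,
    relative to the reference (e.g. Monte Carlo) calculation."""
    d_test = np.median(dose_test[ptv_mask])
    d_ref = np.median(dose_ref[ptv_mask])
    return 100.0 * (d_test - d_ref) / d_ref

# Synthetic example: a hypothetical test algorithm that runs 3% hot.
rng = np.random.default_rng(0)
dose_ref = rng.uniform(1.9, 2.1, size=(64, 64, 64))  # reference dose grid (Gy)
dose_test = dose_ref * 1.03                          # assumed test result
ptv_mask = np.zeros(dose_ref.shape, dtype=bool)
ptv_mask[24:40, 24:40, 24:40] = True                 # assumed PTV region
print(f"{median_ptv_dose_diff(dose_test, dose_ref, ptv_mask):+.1f}%")  # ~ +3.0%
```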

Relevance: 20.00%

Abstract:

The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions, and creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).
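
A much-simplified sketch of the registration idea follows (an illustration, not the authors' implementation). It maximizes a contrast-tolerant similarity measure over motion parameters with a conjugate-gradient optimizer; for brevity it optimizes a rigid 3-D translation rather than an elastic deformation, and the asymmetric penalty (tolerating signal increase as plausible enhancement while penalizing other changes as motion) is an assumption.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def motion_cost(t, pre, post, enhancement_weight=0.1):
    """Lower is better: weakly penalize signal increase (enhancement),
    strongly penalize any other difference (attributed to motion)."""
    moved = nd_shift(post, t, order=1, mode="nearest")
    diff = moved - pre
    up = np.clip(diff, 0, None)      # signal increase: plausible enhancement
    down = np.clip(-diff, 0, None)   # signal decrease: attributed to motion
    return enhancement_weight * np.sum(up**2) + np.sum(down**2)

def register(pre, post):
    """Estimate a 3-D translation (in voxels) via conjugate gradients."""
    res = minimize(motion_cost, x0=np.zeros(3), args=(pre, post), method="CG")
    return res.x
```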

Relevance: 20.00%

Abstract:

Endothelin regulates cytokine expression in vitro and in vivo. This study investigated the effects of chronic allograft rejection on hepatic endothelin-converting enzyme-1 (ECE-1) gene expression and endothelin-1 (ET-1) plasma clearance. In the Lewis-F344 minor histocompatibility mismatch model of heterotopic cardiac transplantation, hepatic ECE-1 gene expression was measured by real-time polymerase chain reaction, and host plasma clearance of ET-1 was measured 8 weeks after transplantation in the absence of immunosuppression. In animals undergoing allograft rejection, hepatic ECE-1 gene expression increased 2-fold (P < 0.05), whereas no effect of rejection on ET-1 clearance from plasma was observed. In summary, upregulation of ECE-1 gene expression occurs in the liver of the host during chronic allograft rejection. Because the liver is a key organ for both cytokine production and endothelin metabolism, increased hepatic ECE-1-mediated ET-1 synthesis may contribute to host responses and cytokine production during allograft rejection.

Relevance: 20.00%

Abstract:

In this dissertation, the problem of creating effective large-scale Adaptive Optics (AO) systems control algorithms for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows:

1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.

2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic MV and dynamic KF-based phase estimation algorithms for MCAO has been evaluated through Monte Carlo simulations. The validity of a simple near-Markov autoregressive model of phase dynamics has been tested, and its adequate ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO (see the sketch below). It has also been shown that, in the case of MCAO, the more complicated KF approach yields no performance improvement over the much simpler MV algorithm.

3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while both keeping the system stable and effectively compensating for the measurement and control latencies.
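
A minimal illustration of the near-Markov autoregressive phase model referred to in item 2 (a toy under assumptions, not the dissertation's code): each turbulence mode is modelled as phi[t+1] = a*phi[t] + w[t], and under this model the one-step prediction a*phi[t] is the minimum-variance predictor, costing only one multiply per mode. The coefficient value below is an assumption.

```python
import numpy as np

a, n_steps = 0.995, 10000                         # assumed AR(1) coefficient
rng = np.random.default_rng(1)
w = rng.normal(0.0, np.sqrt(1 - a**2), n_steps)   # drives unit-variance phase
phi = np.zeros(n_steps)
for t in range(n_steps - 1):
    phi[t + 1] = a * phi[t] + w[t]                # near-Markov phase dynamics

err_pred = phi[1:] - a * phi[:-1]                 # one-step prediction error
print(f"phase variance: {phi.var():.3f}")         # ~ 1.0
print(f"prediction error variance: {err_pred.var():.4f} "
      f"(theory: {1 - a**2:.4f})")                # far below the phase variance
```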

Relevance: 20.00%

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids at finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in a way that conserves the overall integral; moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set onto a user-requested grid according to a distribution function, which can be determined from the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data subject to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data, so the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter lets the user control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and the quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
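
The following one-dimensional sketch illustrates the approach described above (an illustration under assumptions, not the published code): the cumulative integral of the binned data is interpolated with a cubic Hermite curve whose slopes are scaled by a single tension parameter s. With s = 0 each segment is monotone, so the re-binned values cannot overshoot and stay non-negative for non-negative input; s = 1 uses full central-difference slopes, which is smoother but may overshoot. The parameter name and the slope estimate are assumptions.

```python
import numpy as np

def rebin(edges_src, counts, edges_dst, s=0.5):
    """Integral-conserving re-binning via tension-controlled Hermite
    interpolation of the cumulative integral."""
    F = np.concatenate([[0.0], np.cumsum(counts)])   # integral at source edges
    m = np.gradient(F, edges_src) * s                # tension-scaled slopes
    idx = np.clip(np.searchsorted(edges_src, edges_dst, side="right") - 1,
                  0, len(edges_src) - 2)
    x0, x1 = edges_src[idx], edges_src[idx + 1]
    h = x1 - x0
    t = np.clip((edges_dst - x0) / h, 0.0, 1.0)
    # Cubic Hermite basis functions on each segment.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    F_new = h00*F[idx] + h10*h*m[idx] + h01*F[idx+1] + h11*h*m[idx+1]
    return np.diff(F_new)                            # counts on the new grid

# Example: a 4-bin histogram re-binned to 8 bins; the total is conserved
# because the curve reproduces the end values of the cumulative integral.
counts = np.array([1.0, 4.0, 2.0, 3.0])
new_counts = rebin(np.linspace(0, 4, 5), counts, np.linspace(0, 4, 9), s=0.0)
print(new_counts.sum())  # 10.0, same as counts.sum()
```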

Relevance: 20.00%

Abstract:

The GLAaS algorithm for pretreatment intensity-modulated radiation therapy absolute dose verification based on the use of amorphous silicon detectors, as described in Nicolini et al. [G. Nicolini, A. Fogliata, E. Vanetti, A. Clivio, and L. Cozzi, Med. Phys. 33, 2839-2851 (2006)], was tested under a variety of experimental conditions to investigate its robustness, its performance, and the possibility of using it in different clinics. GLAaS was therefore tested on a low-energy Varian Clinac (6 MV) equipped with an amorphous silicon Portal Vision PV-aS500 with IAS2 electronic readout and on a high-energy Clinac (6 and 15 MV) equipped with a PV-aS1000 and IAS3 electronics. Tests were performed for three calibration conditions: (A) adding buildup on top of the cassette such that SDD - SSD = d_max and comparing measurements with the corresponding doses computed at d_max; (B) adding no buildup on top of the cassette and considering only the intrinsic water-equivalent thickness of the electronic portal imaging device (0.8 cm); and (C) adding no buildup on top of the cassette but comparing measurements against doses computed at d_max. The last procedure is similar to that usually applied when in vivo dosimetry is performed with solid-state diodes without sufficient buildup material. Quantitatively, the gamma index (γ), as described by Low et al. [D. A. Low, W. B. Harms, S. Mutic, and J. A. Purdy, Med. Phys. 25, 656-660 (1998)], was assessed. The gamma index was computed for a distance to agreement (DTA) of 3 mm, with dose differences ΔD of 2%, 3%, and 4%. As a measure of the quality of the results, the fraction of the field area with γ larger than 1 (%FA) was scored. Results over a set of 50 test samples (including fields from head and neck, breast, prostate, anal canal, and brain cases) and from long-term routine usage demonstrated the robustness and stability of GLAaS. In general, the mean values of %FA remain below 3% for ΔD equal to or larger than 3%, and are slightly larger for ΔD = 2%, with %FA in the range of 3% to 8%. Since its introduction into routine practice, 1453 fields have been verified with GLAaS at the authors' institute (6 MV beam). Using a DTA of 3 mm and a ΔD of 4%, the authors obtained %FA = 0.9 ± 1.1 for the entire data set while, stratifying by dose calculation algorithm, they observed %FA = 0.7 ± 0.9 for fields computed with the analytical anisotropic algorithm and %FA = 2.4 ± 1.3 for pencil-beam-based fields, a statistically significant difference between the two groups. Stratifying by field splitting, they observed %FA = 0.8 ± 1.0 for split fields and 1.0 ± 1.2 for non-split fields, without any significant difference.
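
For orientation, here is a plain brute-force 2-D gamma evaluation in the spirit of Low et al. (a sketch, not the GLAaS code). It assumes both dose maps share a common grid with the given pixel pitch, uses a global dose tolerance, and lets np.roll wrap at the borders, which a production version would mask out.

```python
import numpy as np

def gamma_map(dose_ref, dose_eval, pitch_mm, dta_mm=3.0, delta_pct=3.0):
    """gamma(r) = min over r' of sqrt(|r'-r|^2/DTA^2 + dD(r')^2/deltaD^2)."""
    tol = delta_pct / 100.0 * dose_ref.max()      # global dose tolerance
    win = int(np.ceil(dta_mm / pitch_mm)) + 1     # search radius in pixels
    gamma = np.full(dose_ref.shape, np.inf)
    for dy in range(-win, win + 1):               # brute-force neighborhood
        for dx in range(-win, win + 1):
            dist2 = (dy**2 + dx**2) * pitch_mm**2 / dta_mm**2
            shifted = np.roll(np.roll(dose_eval, dy, axis=0), dx, axis=1)
            dd2 = (shifted - dose_ref) ** 2 / tol**2
            gamma = np.minimum(gamma, np.sqrt(dist2 + dd2))
    return gamma

def percent_fail_area(gamma):
    """%FA: fraction of the field area with gamma > 1."""
    return 100.0 * np.mean(gamma > 1.0)
```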

Relevance: 20.00%

Abstract:

An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two of the pruning rules proposed with the original algorithm. Because of these errors, the performance measurements originally reported cannot be validated. The work presented here shows that speedup can nevertheless be reliably achieved by an implementation in Unified Parallel C running on an InfiniBand cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, the implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
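
For orientation, the standard serial dynamic program for the LCS is sketched below (the paper's parallel successor-table algorithm is not reproduced here).

```python
def lcs(a: str, b: str) -> str:
    """Return one longest common subsequence of a and b."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]  # L[i][j] = LCS len of a[:i], b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    out, i, j = [], m, n                       # backtrack to reconstruct
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "ATTGCA"))  # one LCS of length 5, e.g. "ATTCA"
```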

Relevance: 20.00%

Abstract:

Linear programs, or LPs, are often used in optimization problems such as improving manufacturing efficiency or maximizing the yield from limited resources. The most common method for solving LPs is the Simplex Method, which yields a solution, if one exists, over the real numbers. From a purely numerical standpoint this solution is optimal, but quite often we desire an optimal integer solution. A linear program in which the variables are also constrained to be integers is called an integer linear program, or ILP. The focus of this report is a parallel algorithm for solving ILPs. We discuss a serial algorithm that uses a breadth-first branch-and-bound search to check the feasible solution space, and then extend it into a parallel algorithm using a client-server model. In the parallel mode, the search may not be truly breadth-first, depending on the solution time for each node in the solution tree. Our search takes advantage of pruning, often resulting in super-linear improvements in solution time. Finally, we present results from sample ILPs, describe a few modifications to enhance the algorithm and improve solution time, and offer suggestions for future work.
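
A minimal serial sketch of the breadth-first branch-and-bound idea described above (an illustration, not the report's code): each node is a box of variable bounds, the LP relaxation is solved with scipy's linprog, nodes whose relaxation cannot beat the incumbent are pruned, and the first fractional variable is branched on.

```python
from collections import deque
import math
import numpy as np
from scipy.optimize import linprog

def solve_ilp_max(c, A_ub, b_ub, bounds):
    """Maximize c @ x s.t. A_ub @ x <= b_ub, x integer within bounds."""
    best_val, best_x = -math.inf, None
    queue = deque([list(bounds)])        # node = per-variable (lo, hi) bounds
    while queue:
        node = queue.popleft()           # FIFO queue -> breadth-first search
        res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub,
                      bounds=node, method="highs")
        if not res.success:
            continue                     # infeasible subproblem
        relax_val = -res.fun             # upper bound for this subtree
        if relax_val <= best_val:
            continue                     # prune: cannot beat the incumbent
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                     # integral solution: new incumbent
            best_val, best_x = relax_val, np.round(res.x)
            continue
        i, v = frac[0], res.x[frac[0]]   # branch on first fractional variable
        lo, hi = node[i]
        left, right = list(node), list(node)
        left[i], right[i] = (lo, math.floor(v)), (math.ceil(v), hi)
        queue.extend([left, right])
    return best_val, best_x

# Example: maximize 3x + 2y s.t. x + y <= 4, x + 3y <= 6, 0 <= x, y <= 4.
val, x = solve_ilp_max([3, 2], [[1, 1], [1, 3]], [4, 6], [(0, 4), (0, 4)])
print(val, x)  # 12.0, x = [4, 0]
```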

Relevance: 20.00%

Abstract:

Extracellular nucleotides (e.g. ATP, UTP, ADP) are released by activated endothelium, leukocytes and platelets within the injured vasculature and bind specific cell-surface type-2 purinergic (P2) receptors. This process drives vascular inflammation and thrombosis within grafted organs. Importantly, there are also vascular ectonucleotidases, i.e. ectoenzymes that hydrolyze extracellular nucleotides in the blood to generate nucleosides (viz. adenosine). Endothelial cell NTPDase1/CD39 has been shown to critically modulate levels of circulating nucleotides. This process tends to limit the activation of platelet- and leukocyte-expressed P2 receptors and also generates adenosine to reverse inflammatory events. This vascular protective CD39 activity is rapidly inhibited by oxidative reactions, such as those observed with liver ischemia-reperfusion injury. In this review, we chiefly address the impact of these signaling cascades following liver transplantation. Interestingly, the hepatic vasculature, hepatocytes and all non-parenchymal cell types express several components co-ordinating the purinergic signaling response. With hepatic and vascular dysfunction, we note heightened P2 receptor expression and alterations in ectonucleotidase expression and function that may predispose to progression of disease. In addition to their documented impacts upon the vasculature during engraftment, extracellular nucleotides also have direct influences upon liver function and bile flow, under both physiological and pathological states. We have recently shown that alterations in purinergic signaling mediated by altered CD39 expression have major impacts upon hepatic metabolism, repair mechanisms, regeneration and associated immune responses. Future clinical applications in transplantation might involve new therapeutic modalities using soluble recombinant forms of CD39, altering expression of this ectonucleotidase by drugs, and/or using small molecules to inhibit deleterious P2-mediated signaling while augmenting beneficial adenosine-mediated effects within the transplanted liver.

Relevance: 20.00%

Abstract:

This dissertation examines the global technological and environmental history of copper smelting and the conflict that developed between historic preservation and environmental remediation at major copper smelting sites in the United States after their productive periods ended. Part I of the dissertation is a synthetic overview of the history of copper smelting and its environmental impact. After reviewing the basic metallurgy of copper ores, it contains successive chapters on the history of copper smelting to 1640, culminating in the so-called German, or Continental, processing system; on the emergence of the rival Welsh system during the British industrial revolution; and on the growth of American dominance in copper production in the late 19th and early 20th centuries. The latter chapter focuses, in particular, on three of the most important early American copper districts: Michigan’s Keweenaw Peninsula, Tennessee’s Copper Basin, and Butte-Anaconda, Montana. As these three districts went into decline and ultimately out of production, they left behind a rich industrial heritage alongside significant waste and pollution problems, generated by increasingly sophisticated technologies capable of commercially processing ever larger volumes of ever leaner ores.

Part II of the dissertation looks at the conflict between historic preservation and environmental remediation that emerged locally and nationally in copper districts as they went into decline and eventually ceased production. Locally, former copper mining communities often split between those who wished to commemorate a region’s past importance and develop heritage tourism, and local developers who wished to clear up and clean out old industrial sites for other purposes. Nationally, Congress passed laws in the 1960s and 1970s mandating the preservation of historical resources (the National Historic Preservation Act) and laws mandating the cleanup of contaminated landscapes (CERCLA, or Superfund), objectives that sometimes conflicted, especially at copper smelting sites. The dissertation devotes individual chapters to the conflicts that developed between environmental remediation, particularly involving the Environmental Protection Agency, and the heritage movement in the Tennessee, Montana, and Michigan copper districts. A concluding chapter provides a broad model of the relationship between industrial decline, federal environmental remediation activities, and the growth of heritage consciousness in former copper mining and smelting areas; analyzes why the outcome varied across the three areas; and suggests methods for handling heritage-remediation issues so as to minimize conflict and maximize heritage preservation.

Relevance: 20.00%

Abstract:

Users of cochlear implant systems, that is, auditory aids that electrically stimulate the auditory nerve at the cochlea, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multi-microphone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (inter-microphone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. Supporting algorithms, which allow adjustment of the opening angle and the maximum noise suppression, are proposed and evaluated. A portable real-time device for tests in real acoustic environments is presented.
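
One common way to realize such a closely spaced two-microphone adaptive system is an adaptive first-order differential array; the sketch below illustrates that general technique under stated assumptions and is not the system of this paper. Front- and back-facing cardioids are formed from the omni pair, the back cardioid is scaled by an adapted coefficient beta and subtracted, and clamping beta bounds the maximum suppression and the steering range of the rear null. The sample rate is chosen so the 7 mm spacing corresponds to one sample of acoustic delay, and the high-pass character of differential arrays is left unequalized for brevity.

```python
import numpy as np

C, D = 343.0, 0.007                  # speed of sound (m/s), mic spacing (m)
FS = int(round(C / D))               # ~49 kHz: spacing = one sample of delay

def adaptive_diff_array(x_front, x_back, mu=0.05, beta_max=1.0):
    """NLMS-adapted first-order differential beamformer (assumed design)."""
    cf = x_front[1:] - x_back[:-1]   # front-facing cardioid (null to the rear)
    cb = x_back[1:] - x_front[:-1]   # back-facing cardioid (null to the front)
    y = np.empty_like(cf)
    beta = 0.0
    for n in range(len(cf)):
        y[n] = cf[n] - beta * cb[n]
        # NLMS step minimizing output power; clamping beta keeps the
        # adapted null in the rear hemisphere and caps the suppression.
        beta += mu * y[n] * cb[n] / (cb[n] ** 2 + 1e-8)
        beta = min(max(beta, 0.0), beta_max)
    return y
```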