871 results for Rejection-sampling Algorithm
Abstract:
An effective preservation method and decreased rejection are essential for tracheal transplantation in the reconstruction of large airway defects. Our objective in the present study was to evaluate the antigenic properties of glycerin-preserved tracheal segments. Sixty-one tracheal segments (2.4 to 3.1 cm) were divided into three groups: autograft (N = 21), fresh allograft (N = 18) and glycerin-preserved allograft (N = 22). Two segments from different groups were implanted into the greater omentum of dogs (N = 31). After 28 days, the segments were harvested and analyzed for mononuclear infiltration score and for the presence of respiratory epithelium. The fresh allograft group presented the highest score for mononuclear infiltration (1.78 ± 0.43, P ≤ 0.001) when compared to the autograft and glycerin-preserved allograft groups. In contrast to the regenerated epithelium observed in autograft segments, all fresh allografts and glycerin-preserved allografts had desquamation of the respiratory mucosa. The low antigenicity observed in glycerin-preserved segments was probably the result of denudation of the respiratory epithelium and perhaps due to a decrease in major histocompatibility complex class II antigens.
Abstract:
Acute rejection of a transplanted organ is characterized by intense inflammation within the graft. Yet, for many years transplant researchers have overlooked the role of classic mediators of inflammation such as prostaglandins and thromboxane (prostanoids) in alloimmune responses. It has been demonstrated that local production of prostanoids within the allograft is increased during an episode of acute rejection and that these molecules are able to interfere with graft function by modulating vascular tone, capillary permeability, and platelet aggregation. Experimental data also suggest that prostanoids may participate in alloimmune responses by directly modulating T lymphocyte and antigen-presenting cell function. In the present paper, we provide a brief overview of the alloimmune response, of prostanoid biology, and discuss the available evidence for the role of prostaglandin E2 and thromboxane A2 in graft rejection.
Abstract:
Even though frequency analysis of body sway is widely applied in clinical studies, the lack of standardized procedures for power spectrum estimation may yield unreliable descriptors. Stabilometric tests were applied to 35 subjects (20-51 years, 54-95 kg, 1.6-1.9 m) and the power spectral density function was estimated for the anterior-posterior center of pressure time series. The median frequency was compared between power spectra estimated according to signal partitioning, sampling rate, test duration, and detrending methods. The reliability of the median frequency for different test durations was assessed using the intraclass correlation coefficient. When increasing the number of segments, shortening the test duration or applying linear detrending, the median frequency values increased significantly, by up to 137%. Even the shortest test duration provided reliable estimates, as observed with the intraclass coefficient (0.74-0.89 confidence interval for a single 20-s test). Clinical assessment of balance may benefit from a standardized protocol for center of pressure spectral analysis that provides an adequate trade-off between resolution and variance. An algorithm to estimate the center of pressure power density spectrum is also proposed.
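The median-frequency descriptor discussed above can be sketched in a few lines. This is a hedged illustration, not the authors' proposed algorithm: it uses a plain periodogram with mean removal (constant detrending), defines the median frequency as the frequency below which half of the spectral power lies, and the sampling rate and test signal are made up for the example.

```python
import numpy as np

def median_frequency(signal, fs):
    """Median frequency: the frequency below which half the spectral
    power lies. Uses a plain periodogram; segment averaging (Welch)
    would trade resolution for variance, as the abstract discusses."""
    sig = signal - np.mean(signal)                # constant detrend
    spectrum = np.abs(np.fft.rfft(sig)) ** 2      # periodogram
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    cum = np.cumsum(spectrum)
    # first frequency bin where cumulative power reaches half the total
    idx = np.searchsorted(cum, cum[-1] / 2.0)
    return freqs[idx]

# Synthetic sway-like signal: dominant 0.3 Hz component plus a weaker 1 Hz one
fs = 100.0
t = np.arange(0, 20.0, 1.0 / fs)
sway = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
mf = median_frequency(sway, fs)   # dominated by the 0.3 Hz component
```

Because most of the power sits in the 0.3 Hz component, the median frequency lands at that bin; with real sway data the choice of test duration and detrending would shift it, which is exactly the standardization problem the abstract raises.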
Abstract:
Prompt and accurate detection of rejection, before pathological changes occur, is vital for monitoring organ transplant recipients. Although biopsy remains the current gold standard for rejection diagnosis, it is an invasive method and cannot be repeated daily. Thus, noninvasive monitoring methods are needed. In this study, by introducing an IL-2 neutralizing monoclonal antibody (IL-2 N-mAb) and immunosuppressants into culture in the presence of specific stimulators and activated lymphocytes, an activated lymphocyte-specific assay (ALSA) system was established to detect specific activated lymphocytes. This assay demonstrated that suppression in the ALSA test was closely related to the existence of specific activated lymphocytes. The ALSA test was applied to 47 heart graft recipients, and the proliferation of activated lymphocytes from all rejecting recipients proven by endomyocardial biopsies was found to be inhibited by spleen cells from the corresponding donors, suggesting that this suppression could reflect the existence of activated lymphocytes against donor antigens, and thus the rejection of a heart graft. The sensitivity of the ALSA test in these 47 heart graft recipients was 100%; however, the specificity was only 37.5%. It was also demonstrated that the IL-2 N-mAb was indispensable, and that proper culture time courses and concentrations of stimulators were essential for the ALSA test. This preliminary study with 47 grafts revealed that the ALSA test is a promising noninvasive tool that could be used in vitro to assist with the diagnosis of rejection after heart transplantation.
Abstract:
"La Niora" is a red pepper variety cultivated in Tadla Region (Morocco) which is used for manufacturing paprika after sun drying. The paprika quality (nutritional, chemical and microbiological) was evaluated immediately after milling, from September to December. Sampling time mainly affected paprika color and the total capsaicinoid and vitamin C contents. The commercial quality was acceptable and no aflatoxins were found, but the microbial load sometimes exceeded permitted levels.
Abstract:
This work presents a synopsis of efficient power management strategies for achieving the most economical power and energy consumption in multicore systems, FPGA and NoC platforms. A practical approach was taken in order to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. This system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The project had three aims: first, to develop the system used for this power management work; second, to perform area and power synopsis of the system on several scalable technology platforms (UMC 90 nm nanotechnology at 1.2 V, UMC 90 nm nanotechnology at 1.32 V and UMC 0.18 μm nanotechnology at 1.80 V), in order to examine the differences in area and power consumption of the system across the platforms; and third, to explore strategies for reducing the system's power consumption and to propose an adaptive power management algorithm. The strategies introduced in this work comprise Dynamic Voltage Frequency Scaling (DVFS) and task parallelism. After development, the system was run on an FPGA board (essentially a NoC platform) and on the technology platforms listed above; system synthesis was successfully accomplished, the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilization were recorded and analyzed in chapter 7 of this work.
This work also extensively reviews power management strategies drawn from quantitative research by many researchers and companies; it mixes literature analysis with experimental lab work, condensing the basic concepts of power management strategy from quality technical papers.
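Since DVFS is one of the strategies named, a toy sketch of the idea may help: dynamic CMOS power scales roughly as P = C·V²·f, so lowering voltage and frequency together at low utilization saves power. The operating points, effective capacitance and utilization model below are illustrative assumptions, not the thesis's APMA.

```python
# Dynamic CMOS power: P = C_eff * V^2 * f. DVFS exploits the V^2 term by
# dropping voltage along with frequency when demand is low.
# All numeric values are illustrative assumptions, not taken from the thesis.

OPERATING_POINTS = [  # (frequency in MHz, supply voltage in V), ascending
    (100, 0.9),
    (200, 1.0),
    (400, 1.2),
]

def dynamic_power_mw(c_eff_nf, voltage_v, freq_mhz):
    """P = C * V^2 * f; with C in nF and f in MHz the result is in mW."""
    return c_eff_nf * voltage_v ** 2 * freq_mhz

def choose_operating_point(utilization):
    """Pick the slowest operating point that still covers current demand,
    modeling demand as utilization measured at the maximum frequency."""
    f_max = OPERATING_POINTS[-1][0]
    needed = utilization * f_max
    for f, v in OPERATING_POINTS:
        if f >= needed:
            return f, v
    return OPERATING_POINTS[-1]

# At 20% utilization the governor drops to 100 MHz / 0.9 V, saving dynamic
# power relative to always running at 400 MHz / 1.2 V.
f, v = choose_operating_point(0.2)
saved = dynamic_power_mw(1.0, 1.2, 400) - dynamic_power_mw(1.0, v, f)
```

The quadratic voltage term is why DVFS beats frequency scaling alone: halving f at constant V halves dynamic power, but lowering V as well compounds the saving.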
Abstract:
Epilepsy is a chronic brain disorder characterized by recurring seizures. An automatic seizure detector, incorporated into a mobile closed-loop system, can improve the quality of life of people with epilepsy. Commercial EEG headbands, such as the Emotiv Epoc, have the potential to be used as data acquisition devices for such a system. In order to estimate that potential, epileptic EEG signals from commercial devices were emulated in this work based on EEG data from a clinical dataset. The emulated characteristics include the referencing scheme, the set of electrodes used, the sampling rate, the sample resolution and the noise level. The performance of an existing algorithm for detection of epileptic seizures, developed in the context of clinical data, was evaluated on the emulated commercial data. The results show that, after the transformation of the data towards the characteristics of the Emotiv Epoc, the detection capabilities of the algorithm are mostly preserved. The ranges of acceptable changes in the signal parameters are also estimated.
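Three of the emulated characteristics listed above (sampling rate, sample resolution, noise level) can be sketched as a simple signal transformation. This is a hedged illustration, not the thesis's pipeline: the 128 Hz target rate, 14-bit resolution, ~0.51 µV LSB and Gaussian noise level are assumed headset-like values, and the decimation here is naive (no anti-alias filter).

```python
import numpy as np

def emulate_commercial_eeg(clinical, fs_in=256, fs_out=128,
                           n_bits=14, lsb_uv=0.51, noise_uv=1.0, rng=None):
    """Degrade a clinical EEG channel toward headset-like characteristics:
    lower sampling rate, extra sensor noise, coarser sample resolution.
    All parameter values here are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    step = fs_in // fs_out
    x = clinical[::step]                          # naive decimation to fs_out
    x = x + rng.normal(0.0, noise_uv, x.shape)    # additive sensor noise (uV)
    x = np.round(x / lsb_uv) * lsb_uv             # quantize to the ADC step
    lim = lsb_uv * 2 ** (n_bits - 1)
    return np.clip(x, -lim, lim)                  # saturate at the ADC range

fs_in = 256
t = np.arange(0, 2.0, 1.0 / fs_in)
clinical = 50.0 * np.sin(2 * np.pi * 10.0 * t)    # 10 Hz, 50 uV test tone
emulated = emulate_commercial_eeg(clinical)
```

The referencing scheme and electrode subset, which the abstract also emulates, are montage operations on multichannel data and are omitted from this single-channel sketch.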
Abstract:
A new method for sampling the exact (within the nodal error) ground state distribution and nondifferential properties of multielectron systems is developed and applied to first-row atoms. Calculated properties are the distribution moments and the electronic density at the nucleus (the δ operator). For this purpose, new simple trial functions are developed and optimized. First, using hydrogen as a test case, we demonstrate the accuracy of our algorithm and its sensitivity to error in the trial function. Applications to first-row atoms are then described. We obtain results which are more satisfactory than the ones obtained previously using Monte Carlo methods, despite the relative crudeness of our trial functions. Also, a comparison is made with results of highly accurate post-Hartree-Fock calculations, thereby illuminating the nodal error in our estimates. Taking into account the CPU time spent, our results, particularly for the δ operator, have a relatively large variance. Several ways of improving the efficiency, together with some extensions of the algorithm, are suggested.
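The sampling step underlying such calculations can be illustrated with a minimal Metropolis accept/reject walk. This sketch covers only the hydrogen test case mentioned above, with the exact trial function ψ(r) = exp(-r) in atomic units; the paper's algorithm for nondifferential properties of first-row atoms is considerably more elaborate.

```python
import numpy as np

def metropolis_sample_psi2(n_steps, step_size=0.5, rng=None):
    """Metropolis walk whose stationary distribution is |psi|^2 for the
    hydrogen trial function psi(r) = exp(-r) (atomic units)."""
    rng = rng or np.random.default_rng(1)
    pos = np.array([1.0, 0.0, 0.0])
    samples = np.empty((n_steps, 3))
    for i in range(n_steps):
        trial = pos + rng.normal(0.0, step_size, 3)
        # accept with probability min(1, |psi(trial)|^2 / |psi(pos)|^2)
        ratio = np.exp(-2.0 * (np.linalg.norm(trial) - np.linalg.norm(pos)))
        if rng.random() < ratio:
            pos = trial
        samples[i] = pos
    return samples

samples = metropolis_sample_psi2(60_000)
r = np.linalg.norm(samples[10_000:], axis=1)   # discard burn-in
mean_r = r.mean()   # analytic value for this distribution is 1.5 bohr
```

Distribution moments such as ⟨r⟩ follow directly from such a walk; estimating the density at the nucleus is harder, which is consistent with the large variance the abstract reports for the δ operator.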
Abstract:
A new approach to treating large-Z systems by quantum Monte Carlo has been developed. It naturally leads to the notion of the 'valence energy'. The possibilities of the new approach have been explored by optimizing the wave functions for CuH and Cu and computing the dissociation energy and dipole moment of CuH using variational Monte Carlo. The dissociation energy obtained is about 40% smaller than the experimental value; the method is comparable with SCF and simple pseudopotential calculations. The dipole moment differs from the best theoretical estimate by about 50%, which is again comparable with other methods (Complete Active Space SCF and pseudopotential methods).
Abstract:
Introduction "To the people of the United States" signed Wm. Coleman.
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic which can be used for a variety of combinatorial optimization problems. This algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. There are a number of tunable parameters in the algorithm, so experiments were conducted to find the optimum parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six test cases using edit codes. It matched the best known results on four of the seven Hamming codes, as well as all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic models for the background sequence models and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
Abstract:
DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems, including DNA data error detection and correction, contig creation, scaffolding and contig orientation, each of which can be seen as a distinct research area. This thesis specifically focuses on creating contigs from the short reads and combining them with the outputs of other tools in order to obtain better results. Three assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], were selected for comparative purposes in this thesis. The results obtained show that this thesis' work produces results comparable to those of other assemblers, and that combining our contigs with the outputs of other tools produces the best results, outperforming all other investigated assemblers.
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions, and many other problem types are commonly reduced to ordered gene problems so that these well-studied heuristics and metaheuristics can be applied. Multiple ordered gene problems are studied, namely the travelling salesman problem, the bin packing problem and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are studied with multiple variations and combinations of heuristics and metaheuristics with two distinct types of representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
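As a concrete illustration of the ordered-gene (permutation) representation these algorithms operate on, here is a minimal genetic algorithm for the travelling salesman problem with order crossover and swap mutation. It is a generic sketch under assumed parameters, not the Recentering-Restarting Genetic Algorithm itself.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """OX: copy a random slice from p1, fill remaining slots in p2's order.
    Crossover must preserve the permutation property of the encoding."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    kept = p1[a:b + 1]
    child = [None] * n
    child[a:b + 1] = kept
    fill = iter(g for g in p2 if g not in kept)
    return [c if c is not None else next(fill) for c in child]

def ga_tsp(dist, pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[:pop_size // 2]              # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            child = order_crossover(p1, p2, rng)
            i, j = rng.sample(range(n), 2)           # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))

# Eight cities on a unit circle: the optimal tour visits them in circular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = ga_tsp(dist)
```

Bin packing and graph colouring use the same encoding with a different decoder: the permutation gives the order in which items are packed or vertices are greedily coloured, which is what makes so many problems reducible to ordered-gene form.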
Abstract:
Understanding the relationship between genetic diseases and the genes associated with them is an important problem regarding human health. The vast amount of data created from a large number of high-throughput experiments performed in the last few years has resulted in an unprecedented growth in computational methods to tackle the disease gene association problem. Nowadays, it is clear that a genetic disease is not a consequence of a defect in a single gene. Instead, the disease phenotype is a reflection of various genetic components interacting in a complex network. In fact, genetic diseases, like any other phenotype, occur as a result of various genes working in sync with each other in a single or several biological module(s). Using a genetic algorithm, our method tries to evolve communities containing the set of potential disease genes likely to be involved in a given genetic disease. Having a set of known disease genes, we first obtain a protein-protein interaction (PPI) network containing all the known disease genes. All the other genes inside the procured PPI network are then considered as candidate disease genes as they lie in the vicinity of the known disease genes in the network. Our method attempts to find communities of potential disease genes strongly working with one another and with the set of known disease genes. As a proof of concept, we tested our approach on 16 breast cancer genes and 15 Parkinson's Disease genes. We obtained comparable or better results than CIPHER, ENDEAVOUR and GPEC, three of the most reliable and frequently used disease-gene ranking frameworks.