991 results for Inf-convolution
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Chronic hepatitis B presents a broad spectrum of clinical manifestations, resulting from several factors, such as the secretion pattern of, and polymorphisms in, cytokine genes. This study aims to correlate the TNF-α -308G/A, IFN-γ +874A/T, TGF-β1 -509C/T and IL-10 -1082A/G polymorphisms, and the serum levels of these cytokines, with the clinical presentation of hepatitis B. Fifty-three consecutive hepatitis B cases were selected and divided into group A (inactive carriers = 30) and group B (chronic hepatitis/cirrhosis = 23). As a control group, 100 individuals positive for anti-HBc and anti-HBs were selected. Serum cytokine levels were determined by enzyme-linked immunosorbent assays (ELISA; eBioscience, Inc., San Diego, California, USA). Cytokine gene amplification was performed by PCR, and the histopathological analysis followed the METAVIR classification. A higher prevalence of the TNF-α -308AG genotype was identified in group B than in controls (43.3% vs. 14.4%), and the presence of the A allele correlated with the risk of chronic HBV infection (OR = 2.6). Serum levels of IFN-γ and IL-10 were higher (p < 0.001) in controls than in the other groups and, conversely, plasma TGF-β1 concentrations were lower in the control group (p < 0.01). On liver histopathology, inflammatory activity > 2 correlated with higher levels of TNF-α and IFN-γ (p < 0.05), as did fibrosis > 2 with higher levels of IFN-γ (p < 0.01). In the studied population, lower serum levels of IFN-γ and IL-10 and higher levels of TGF-β1 were associated with chronic hepatitis B, and the presence of the A allele at TNF-α -308 increased the risk of chronification 2.6-fold.
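The odds ratio quoted above (OR = 2.6) comes from a standard 2×2 case-control table. A minimal sketch of the calculation, with hypothetical counts for illustration only, since the study's raw allele table is not reproduced here:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table: (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical counts (not the study's data): 20 of 53 cases and
# 15 of 100 controls carry the risk allele.
or_est = odds_ratio(20, 33, 15, 85)
print(round(or_est, 2))
```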
Abstract:
The immune response in malaria is complex, and the mechanisms of activation and regulation of effector and memory T lymphocytes are still poorly understood. In the present study, we determined the serum concentrations of the cytokines interferon-γ (IFN-γ), interleukin-10 (IL-10), interleukin-4 (IL-4) and interleukin-12 (IL-12) in individuals infected with Plasmodium vivax, investigated polymorphisms in the IFN-γ (IFNG+874) and IL-10 (IL10-1082) genes, and analysed the association of these polymorphisms with cytokine concentrations and with parasite density. Cytokine concentrations were determined by ELISA, and genotyping of the IFNG+874 and IL10-1082 polymorphisms was performed by ASO-PCR and PCR-RFLP, respectively. Infected individuals showed increased serum levels of IFN-γ and IL-10. IFN-γ production was higher in primo-infected individuals but was not associated with a reduction in parasitaemia. IL-10 production was high and associated with high parasitaemias. IL-4 and IL-12 were not detected. The frequencies of the mutant homozygous AA, heterozygous AT and wild-type TT genotypes of the IFN-γ gene were 0.51, 0.39 and 0.10, respectively. The frequencies of the mutant homozygous AA, heterozygous AG and wild-type GG genotypes for IL10 were 0.49, 0.43 and 0.08, respectively. Only the IFN-γ polymorphism was associated with reduced levels of this cytokine. In malaria caused by P. vivax, there was production of a cytokine characterizing the Th1 profile (IFN-γ), with possible participation of IL-10 in immunoregulation.
Abstract:
Financial Support FAPESP, CNPq, CTC/FUNDHERP and INCTC.
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and as being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions to resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, directed acyclic graphs (DAGs) can be used to compactly represent substructures shared by different trees, thus reducing the computational burden and storage requirements.
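As a minimal illustration of the convolution-kernel idea the abstract describes (similarity as a count of shared substructures), here is a sketch of a subtree kernel on trees encoded as nested tuples; the encoding and function names are our own, not the thesis implementation:

```python
from collections import Counter

def subtrees(tree, acc=None):
    # tree is a nested tuple (label, child, child, ...); collect every
    # rooted subtree (a node together with all of its descendants).
    if acc is None:
        acc = Counter()
    acc[tree] += 1
    for child in tree[1:]:
        subtrees(child, acc)
    return acc

def subtree_kernel(t1, t2):
    # Convolution kernel: number of pairs of identical subtrees,
    # one drawn from t1 and one from t2.
    c1, c2 = subtrees(t1), subtrees(t2)
    return sum(c1[s] * c2[s] for s in c1)

t1 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",)))
t2 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",), ("NP", ("D",), ("N",))))
print(subtree_kernel(t1, t2))
```

Note how sparsity arises: if no whole subtree of one tree matches a subtree of the other (e.g. because node labels come from a large domain), the kernel value is zero.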
Abstract:
The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) in which large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution/superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static-field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm performed better overall than AAA when compared to VMC++, but AAA remains an attractive option for routine clinical use due to its short computation times. Dose differences between the different algorithms and VMC++ for the median value of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung than in the breast cases with respect to the accuracy of the dose calculation. This was observed in the dose-volume histograms obtained from the Monte Carlo simulations.
Abstract:
Efficient image blurring techniques based on the pyramid algorithm can be implemented on modern graphics hardware; thus, image blurring with arbitrary blur width is possible in real time even for large images. However, pyramidal blurring methods do not achieve the image quality provided by convolution filters; in particular, the shape of the corresponding filter kernel varies locally, which potentially results in objectionable rendering artifacts. In this work, a new analysis filter is designed that significantly reduces this variation for a particular pyramidal blurring technique. Moreover, the pyramidal blur algorithm is generalized to allow for a continuous variation of the blur width. Furthermore, an efficient implementation for programmable graphics hardware is presented. The proposed method is named “quasi-convolution pyramidal blurring” since the resulting effect is very close to image blurring based on a convolution filter for many applications.
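A minimal 1D sketch of the pyramidal idea described above (repeated filtered downsampling followed by upsampling, so that blur width grows with the number of levels). The [1, 2, 1]/4 analysis filter and the linear-interpolation synthesis used here are common defaults assumed for illustration, not the filter designed in the work:

```python
import numpy as np

def downsample(x):
    # Analysis step: [1, 2, 1]/4 smoothing, then decimation by 2.
    padded = np.pad(x, 1, mode="edge")
    smoothed = (padded[:-2] + 2 * padded[1:-1] + padded[2:]) / 4.0
    return smoothed[::2]

def upsample(x, n):
    # Synthesis step: linear interpolation back to n samples.
    return np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)

def pyramid_blur(signal, levels):
    # More pyramid levels -> wider effective blur kernel.
    x = signal.astype(float)
    for _ in range(levels):
        x = downsample(x)
    return upsample(x, len(signal))

impulse = np.zeros(33)
impulse[16] = 1.0
blurred = pyramid_blur(impulse, 2)
print(blurred.max())
```

Applying the pyramid to an impulse, as here, exposes the effective filter kernel; the shape-variation artifact the abstract mentions shows up when the impulse is placed at different positions.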
Abstract:
A three-dimensional model has been proposed that uses Monte Carlo and fast Fourier transform (FFT) convolution techniques to calculate the dose distribution from a fast neutron beam. This method transports scattered neutrons and photons in the forward, lateral, and backward directions, and protons, electrons, and positrons in the forward and lateral directions, by convolving energy-spread kernels with initial-interaction available-energy distributions. The primary neutron and photon spectra were derived from narrow-beam attenuation measurements. The positions and strengths of the effective primary neutron, scattered neutron, and photon sources were derived from dual ion chamber measurements. The size of the effective primary neutron source was measured using a copper activation technique. Heterogeneous tissue calculations require a weighted sum of two convolutions for each component, since the kernels must be invariant for FFT convolution. Comparisons between calculations and measurements were performed for several water and heterogeneous phantom geometries.
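A toy 1D analogue of the FFT-convolution step described above: a spatially invariant energy-spread kernel convolved with a primary interaction density, with zero-padding so the circular FFT convolution equals the linear one. All names, shapes, and parameters here are illustrative, not taken from the model:

```python
import numpy as np

depth = np.arange(64, dtype=float)
interactions = np.exp(-depth / 20.0)            # exponentially attenuated primaries
kernel = np.exp(-np.abs(depth - 32.0) / 3.0)    # invariant spread kernel

# Zero-pad to the full linear-convolution length to avoid wrap-around.
n = len(interactions) + len(kernel) - 1
conv_fft = np.fft.irfft(np.fft.rfft(interactions, n) * np.fft.rfft(kernel, n), n)
conv_direct = np.convolve(interactions, kernel)
print(np.allclose(conv_fft, conv_direct))
```

The invariance requirement the abstract mentions is visible here: the FFT route assumes one kernel for the whole grid, which is why heterogeneous media need a weighted sum of separate invariant-kernel convolutions.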
Abstract:
We prove that any isotropic positive definite function on the sphere can be written as the spherical self-convolution of an isotropic real-valued function. It is known that isotropic positive definite functions on d-dimensional Euclidean space admit a continuous derivative of order [(d − 1)/2]. We show that the same holds true for isotropic positive definite functions on spheres and prove that this result is optimal for all odd dimensions.
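In symbols (with the brackets in the abstract read as the floor function), the smoothness statement and its spherical counterpart are:

```latex
f \ \text{isotropic, positive definite on } \mathbb{R}^d
  \;\Longrightarrow\; f \in C^{\lfloor (d-1)/2 \rfloor},
\qquad
f \ \text{isotropic, positive definite on } \mathbb{S}^d
  \;\Longrightarrow\; f \in C^{\lfloor (d-1)/2 \rfloor}.
```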
Abstract:
Let {μ_t^(i)}_{t≥0} (i = 1, 2) be continuous convolution semigroups (c.c.s.) of probability measures on Aff(1) (the affine group of the real line). Suppose that μ_1^(1) = μ_1^(2). Assume furthermore that {μ_t^(1)}_{t≥0} is a Gaussian c.c.s. (in the sense that its generating distribution is the sum of a primitive distribution and a second-order differential operator). Then μ_t^(1) = μ_t^(2) for all t ≥ 0. We end with a possible application in mathematical finance.
Abstract:
Boberach: The essay "How do Prussia's officers think about a German army?" has offended the officer corps, which opposes any mediatization of Prussia, because Prussia's army alone can save Germany from ruin.
Abstract:
Mode of access: Internet.
Abstract:
Errata slip inserted.